Manual testing in DevOps & QA. With the sheer volume of tools available, it’s safe to say we are living in a golden age of software. However, the human element will still be critical in a world of automation and DevOps.
With the sheer volume of apps, tools, SaaS, cloud technology and the IoT, it’s safe to say we are living in a golden age of software. Businesses and consumers alike have access to a wider range of products, services and support than ever before.
But we are also in the age of automation, where businesses look to cut costs and streamline departments, getting production, development and testing to work together seamlessly.
It’s the unenviable job of DevOps to integrate several teams within an organisation and get them working together as quickly and efficiently as possible, bringing their plethora of tools and software under one banner.
One way to do this is to automate as much as possible, particularly repeatable tasks that would otherwise drain time and resources. Some research even suggests an actor-based framework to solve these communication problems, passing data to each relevant entity autonomously (Cois, Yankel & Connell 2014).
However, in a world increasingly dominated by automation, it is quality assurance that can lose out, as teams focus too much on efficiency and not enough on quality, leading to poorer products.
That’s why it’s important to remember that for all the benefits of automation, the human is still a crucial part of the process, and that’s doubly so when it comes to software testing.
Why we automate
The desire to automate is understandable; after all, it’s good practice. Machine-managed monotonous tasks free up developers and testers for other fun activities such as, well, setting up more automated test scripts. Automation can also make releases quicker and smoother, meaning changes get into production faster.
In some instances, testing can account for 50-75% of the cost of a product’s release, and so cutting that through automating can have an impact across the project (Blackburn, Busser & Norman 2004).
Manual testing takes time, skill and, in many cases, patience. It’s far easier to set a test suite to run through a library of test cases and report the findings. But whilst this might give you a better yield, it can also lead to an over-reliance on automation, leaving you vulnerable to its shortcomings.
So, what are the weaknesses of automation, and where does it fall short compared to its human counterpart – manual testing?
Time & resources
It may seem counter-intuitive, given that automation is designed to free up time, but creating appropriate automated tests for a specific case can itself take a lot of time and resources.
Obviously, it depends on the complexity of your software, but building or finding an automated test that matches your requirements can take a lot of work. This is fine if you expect to be testing similar items for a long time, but for individual projects, or those with tight deadlines, setting up an automated function might see you falling behind schedule as you look to future-proof your foundation of testing.
Another factor is that whenever the software changes, the automated tests need updating too, which again can be a big time drain. Sometimes even the smallest change to the interface can lead to a large amount of test re-writing.
This can push releases back, which has a knock-on effect across other departments and, further down the line, can cause concern among investors or stakeholders.
For the most part, when humans make a mistake they know it and can set about correcting the task. With an automated test, QA engineers can often spend as much time trying to figure out the error in the test as they would have spent testing manually. There have even been cases of automated tests introducing new faults not covered by existing test cases, causing further delays (Nakajima et al. 2016).
There is a great temptation to replace manual testing with automated testing, when in fact the latter should be used to support the former.
When automated testing is the sole means of bug hunting, there will always be a gap in testing. For example, if the developer is writing the tests, and there’s an error in the code that they have not thought of, then they likely haven’t written a test to find it, either.
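To make that blind spot concrete, here is a contrived Python sketch (the pricing function and its tests are hypothetical, not from any real project): the developer’s own tests cover only the cases they imagined, so the suite stays green while an input nobody considered misbehaves.

```python
def apply_discount(price, quantity):
    """Hypothetical pricing helper: 10% off orders of 10 or more."""
    total = price * quantity
    if quantity >= 10:
        total *= 0.9
    return total

# The developer automated the cases they thought of:
def test_no_discount():
    assert apply_discount(5.0, 2) == 10.0

def test_bulk_discount():
    assert apply_discount(10.0, 10) == 90.0

test_no_discount()
test_bulk_discount()
# Both pass, yet nobody thought to test a negative quantity:
# apply_discount(5.0, -10) returns -50.0, quietly "refunding" money,
# and no test exists to flag it.
```

The gap isn’t in the test runner; it’s in the imagination of whoever wrote the tests, which is exactly where an exploratory manual tester adds value.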
Or there may be unexpected usability issues with the UX: if the on-screen text doesn’t have enough contrast with the background, the user won’t be able to see it clearly. A human would spot this straight away, but an automated script most likely won’t have been programmed to check the contrast of text. The train-track execution of set commands is limited to its own design, and it fails when it meets unexpected issues.
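A contrast check is perfectly automatable, but only once someone thinks to write it. As a sketch, the WCAG 2.x contrast-ratio formula fits in a few lines of Python (the colour values here are illustrative):

```python
def relative_luminance(rgb):
    """WCAG 2.x relative luminance from sRGB components in 0-255."""
    def channel(c):
        c /= 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two colours, from 1:1 up to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# Light grey text on a white background: roughly 1.67, well below
# the 4.5 that WCAG AA requires for body text. A human sees the
# problem instantly; a script flags it only if told how.
print(contrast_ratio((200, 200, 200), (255, 255, 255)))
```

The point is not that such checks are impossible to automate, but that every one of them has to occur to someone first.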
Lost in translation (false positives and false negatives)
At the risk of preaching to the choir: automation, by its very nature, is simply a machine running through a list of commands. There’s no creativity to wonder ‘have I checked this?’ or ‘perhaps I should run something different’ – it does what it was told to do and nothing more. That lack of creativity is cited as one of the biggest drawbacks of automation (Pan 1999).
This is fine for a simple test, but for more complex ones there is a greater risk of something getting lost in translation. An undiscovered false positive or false negative can set back the whole QA process, and it’s here that thorough manual testing really holds sway over automation.
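For illustration, a false positive can be as simple as an assertion that is too weak for the output it checks. This Python sketch is entirely hypothetical:

```python
def render_status_page(service_up):
    """Hypothetical renderer for a status dashboard page."""
    if service_up:
        return "<h1>Status</h1><p>All systems operational</p>"
    return "<h1>Status</h1><p>Error: systems operational check failed</p>"

def test_status_page():
    # The assertion looks for a phrase that also appears in the
    # error message, so the test passes even when the service is down.
    assert "systems operational" in render_status_page(False)

test_status_page()  # green build, broken page: a false positive
```

A manual tester looking at the rendered page would see the word ‘Error’ immediately; the script reports success because its check happens to match both outcomes.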
Less creativity in tests
As previously mentioned, automation takes creativity out of the equation. This may be fine for basic tasks, but for more in-depth and exploratory testing, you need to take time and get into the core of your software, testing the expected and unexpected performance of critical functions.
A skilled and experienced tester will know what to look out for, what functions can cause problems, and perhaps most importantly, what defects warrant new test cases of their own. A manual tester who discovers a bug can also feed back to the automation test writers, so the issue is covered in future. Indeed, the most often cited reasons for exploratory testing are the difficulty of designing test cases for complicated functionality and the need to test from the end-users’ viewpoint (Itkonen & Rautiainen 2005).
Keeping skills in practice (for when you need them!)
Even the most finely tuned machine will eventually break down. A testing team could have a library of automated tests running like a Swiss watch, but eventually there will come a time when the testers themselves are tested: when something different and unexpected goes wrong, and you need someone on your team to get creative and find a new solution to an unprecedented problem.
That takes skill and experience, and both are only gained when put into practice, which for the most part happens through the development and execution of manual tests.
Manual testing will always have a place
With hundreds of thousands of new applications created every year, there’s no doubt that as the range and diversity of software continues to grow, automated testing will be needed to take over repetitive tasks and free teams up for more complex issues.
However, no matter how advanced automated testing becomes, a human will always be needed to identify the more advanced and unexpected issues.
This is somewhat confirmed by a survey of 115 software professionals, which found that 80% of practitioners disagreed with the vision that automated testing would fully replace manual testing (Rafi, Moses et al. 2012).
I believe the ability to think creatively about the user experience means manual testing will always have a place in the QA process, even if it comes down to something as simple as ‘that doesn’t look right’.
Scott Sherwood, Founder, TestLodge