The manual testing approach


In the modern testing landscape, we appear to be setting ourselves unreasonable targets, which have increased the speed at which testing has to be carried out and the number of automation and emulation packages needed to ensure testing meets the ‘go live, pass’ criteria set. And this is true whether it is end-to-end, integration, or unit testing.

Our ability to automate testing has resulted in job losses within the field of testing, comparable to the job losses caused by production line automation, faster robotic technology and improved Programmable Logic Controllers. In reality, this is not always a bad thing – but it is only a good thing when quality can be assured.

With the advent of interlocking applications, which allow a failure to be isolated within an end-to-end process, both how testing is carried out and what is considered a ‘pass’ or a ‘fail’ have changed. Here are some thoughts on why we can never really move away from testing using humans, despite all the modern testing aids that are available. I believe testing depends on listening to the judgements humans are able to make based on something other than logic: emotional reactions, how something makes us feel, what we like or dislike. This is something software will never be able to do – at least, one hopes.

Why human interaction matters

As hard as I try, I cannot think of a truly automated environment in which I have worked – an environment where the entire end-to-end process had no human involvement. I have witnessed increasingly clever code procedures which mimic keystrokes and functions carried out by humans, but they only ever mimic; they are not yet capable of functioning in a ‘what if?’ type environment, and can only conduct a repetitive process. My belief is that, as long as the solution, software or product is going to be used by humans, there will always be a requirement for humans to test it: for usability, for navigation and, essentially, to ensure it does what it says on the tin.
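To make the point about mimicry concrete, here is a minimal sketch in Python of what record-and-replay automation amounts to. The application stub, element names and recorded steps are all invented for illustration; the point is that the script can only repeat what was recorded and fail on anything unexpected – it has no judgement of its own.

# Minimal sketch of record-and-replay test automation.
# The application stub, element names and steps are invented for illustration.

RECORDED_STEPS = [
    ("type", "username", "jdoe"),
    ("type", "password", "s3cret"),
    ("click", "login_button", None),
]

class FakeApp:
    """A trivial stand-in for the application under test."""
    def visible_elements(self):
        return {"username", "password", "login_button"}
    def type_into(self, element, value):
        print(f"typing '{value}' into {element}")
    def click(self, element):
        print(f"clicking {element}")

def replay(app, steps):
    """Replay recorded keystrokes and clicks verbatim.

    If the application presents anything unexpected (a new dialog,
    a moved button), the script has no 'what if?' reasoning to fall
    back on; it can only fail.
    """
    for action, element, value in steps:
        if element not in app.visible_elements():
            raise AssertionError(f"FAIL: expected element '{element}' not found")
        if action == "type":
            app.type_into(element, value)
        else:
            app.click(element)
    return "PASS"

print(replay(FakeApp(), RECORDED_STEPS))

Rename the login button in FakeApp and the script does not adapt or query why – it simply raises a failure, which is precisely the limitation described above.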

Another important question, which cannot be answered with technology, is that of a product’s aesthetics – ‘does this solution or product appeal to the end user, and is it easy to use?’ is a question that can only ever be answered by humans. Ease of functionality is another key element. As an example, let’s look at Microsoft’s Excel program; for office workers, this is a ubiquitous product. It can be used as an everyday spreadsheet application to carry out simple calculations, or as a modelling tool with complex formulas, ratios and the resultant outputs. Part of the attraction of Excel is the way it can take a novice and, within a month, turn them into an advanced user. This is possible because Excel has numerous template wizards, formula wizards, recognisable icons and help texts. Beyond that, on the internet you can find an incredible number of books, forums and websites which allow a user to get to grips with the product and its capabilities. The only drawback of having something as powerful as Excel as part of your end-to-end process is that it is not always tested as thoroughly as something like a software upgrade.

When a core spreadsheet fails, the most significant problem faced is that it is not always easy to recover, and in some cases it can be unrecoverable. The reason is that the knowledge needed to fix an Excel problem is often the intellectual property of just one person – the spreadsheet developer. This is a single point of failure. I was once involved in creating a very complex workaround after an Excel spreadsheet left by my predecessor stopped working following a Windows upgrade. A very painful lesson! However, the proliferation of Excel solutions in the workplace is a very different topic and deserves an article of its own.

Manual testing essentials

For the purposes of this article, the case being made about Excel is that it feels as if it has been developed and tested by humans, for humans, since it is such an intuitive tool. Its functionality is enhanced by icons which make the product usable across the globe, despite differing languages. I feel Excel would not be the product it is without human intervention.

This is a good example of why we can never exclude humans from testing when the product or software has to be used by humans – to do so leaves huge gaps in the solution, which have to be back-filled at a later date. We can, of course, emulate network traffic or data packets, the stuff humans don’t see, but data also creates many of the delays in the completion of a process. The frustration felt while an hourglass spins is just as significant a test as the successful completion of the process.

Rewriting test scripts in agile

Methodologies can actually create additional work when it comes to testing. For example, one of the adverse side effects of using agile methodology comes from one of its fundamental benefits: the ability to make fluid changes and reflect them in the product development flow, the UI, or features and functionality. This fluidity has a major impact on testing and its prerequisites, test scenarios and scripts. Any change which affects the fundamental design, function or end product requires a rewrite of the automated and manual test scripts to reflect it; otherwise testing, in part, becomes invalid.

Test automation isn’t always good value

Let’s stack up a few of the most common costs incurred when looking at testing automation:

• software + license + maintenance = general costs

• configuration of software = consultancy costs

• writing test scripts usually by a software specialist = SME costs

• limited reusability of scripts = wastage costs.

For large-scale projects, or global change projects involving multiple sites and a single product, solution, telephony, or repetitive testing scenarios, it is worth paying for the elements listed. However, for small or one-off projects these costs do not make sense. To fully appreciate this, you would need to carry out a return on investment (ROI) study to gauge whether automated testing or manual man-hours makes more sense, as sketched below.
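As a back-of-the-envelope illustration of such an ROI study, here is a minimal sketch in Python. All the figures are invented placeholders – substitute your own quotes – but it shows how the fixed costs listed above are weighed against the per-run cost of manual testing to find a break-even point.

# Back-of-the-envelope ROI comparison: automated vs manual testing.
# All figures are invented placeholders; substitute your own quotes.

licence_and_maintenance = 20_000   # software + licence + maintenance (general costs)
configuration = 5_000              # configuration of software (consultancy costs)
script_writing = 8_000             # writing test scripts (SME costs)
automated_run_cost = 50            # marginal cost per automated test run

manual_run_cost = 800              # tester man-hours, costed per full manual run

fixed_costs = licence_and_maintenance + configuration + script_writing

# Break-even: fixed_costs + runs * automated_run_cost == runs * manual_run_cost
break_even_runs = fixed_costs / (manual_run_cost - automated_run_cost)
print(f"Automation pays for itself after ~{break_even_runs:.0f} runs")

for runs in (5, 25, 50):
    automated = fixed_costs + runs * automated_run_cost
    manual = runs * manual_run_cost
    print(f"{runs:>3} runs: automated £{automated:,.0f} vs manual £{manual:,.0f}")

On these invented numbers, a one-off project of five runs is far cheaper done manually, while a repetitive, multi-site programme of fifty runs quickly justifies the outlay.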

Another downside of automated testing is that you need to stay on top of test script versions and ensure each script matches the version of the application it will test. Testing needs to reveal enough to allow the organisation to make meaningful changes to the overall solution and realign plans and/or delivery dates. Remember, automated testing takes a very black-and-white approach: it is either a pass or a fail. Human intervention allows a test script to be stopped; a tester who knows that a bug fix or reconfiguration will arrive first thing in the morning can decide it is better to postpone the run than to fail the test.
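One pragmatic way to stay on top of script versions is to pin each suite to the application version it was written for, and refuse to run against anything else. A minimal sketch in Python, with invented version strings:

# Minimal sketch of a version guard for a test suite.
# The version strings are invented for illustration.

SCRIPTS_TARGET_VERSION = "2.4"   # application version these scripts were written for

def check_version(app_version: str) -> None:
    """Abort the run rather than report misleading pass/fail results."""
    if not app_version.startswith(SCRIPTS_TARGET_VERSION):
        raise RuntimeError(
            f"Test scripts target v{SCRIPTS_TARGET_VERSION} but the application "
            f"under test is v{app_version}; update the scripts before trusting "
            "any results."
        )

check_version("2.4.1")   # OK – versions match
# check_version("3.0")   # would raise: the scripts are stale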

The following should be understood: technology cannot do it all. There is no such thing as a truly self-cleaning oven – after a while, it needs us to get on our hands and knees and scrub!

From manual tester to super user

When going through a test phase, I prefer to work with in-house testers or employees. It always amazes me how much enthusiasm an in-house tester can bring, how much they notice, and how often they have helped in shaping a product to be user-friendly, easy to navigate, and right for their company’s needs. Humans will often think outside the box and beyond the logic of a simple ‘yes’ or ‘no’.

As an example, when someone who is familiar with the way their organisation will use the solution or software is involved in testing, they may check the help texts against the functionality, to ensure the texts actually support the function. While this may not be a reason for failing the test, it may be reason enough for the solution to be enhanced.

The other benefit of using an in-house tester is that, by the time testing is complete and the solution is rolled out, the organisation has in its midst a Super User, and this will help to enhance how proficiently the organisation uses the solution.

The limitations of automated testing

Before leaping into the world of automated testing, here are some scenarios where automated testing will need the human touch:

• device compatibility and interaction cannot be tested using automated scripts

• switching WiFi off and then turning it on cannot be automated

• switching permission settings, and retesting access cannot be automated

• calls and texts can stop a test script in its entirety

• running multiple applications.

Some of these bullet points apply specifically to mobile apps; however, mobile apps are becoming increasingly important and more frequently used in the corporate world. A couple of examples:

• purchase-authorisation applications, allowing managers to authorise spend up to their authority level

• PDAs and docking information for maintenance, forklift trucks, etc.

This section highlights only some of the limitations of automated testing that exist today; it is nowhere near an exhaustive list.

Faster, faster, I need it yesterday!

The current testing landscape is shaped by the desire to shorten the release cycle, which explains the growth in demand for automated testing: it cuts out the delays caused by human hesitation when making decisions. Automated testing gives us a very definite ‘yes’ or ‘no’, and it does this within a split second of the results being available. Also, depending on the overall pass/fail percentage, it allows a systematic decision to be made purely on outcome. Try to remember that passing a testing stage gate with a 70% pass rate means you still have 30% of the functionality to fix. This reveals itself in the number of patch releases and bug fixes needed after a product has gone to market, or has been released in-house, whether it is on your mobile phone, PC or server.

Some industries may consider pushing a product to market after a 70% pass rate, and the automotive industry is often no different. My opinion on this practice of lowering testing pass rates and criteria, and removing human intervention, is that it damages the reputation of the industry and, after a while, has a negative impact on the users’ view of the product. Without confidence in a product, sales will suffer for external products, and in-house solutions will simply go unused. This can lead to a dip in market share and profits, ultimately resulting in a profit warning for external products or complete rejection of an in-house solution. This lowering of testing standards can be seen in internal projects too.

To give an example of this, ask yourself the following question: “how many organisations are you aware of where workarounds exist, despite state-of-the-art systems?” For me the answer is simple: every company I have ever worked for has workarounds or supplementary processes, usually via Excel, which allow the full end-to-end process to be completed. My feeling is that part of this is down to the way we test; an automated test does not ask such questions as:

• why doesn’t the software do that?

• I’d expect it to complete the process – why doesn’t it?

• why do I have to go into the system and press a button?

• why can’t I easily get a report from this system?

Automated testing often tests keystrokes, or the partial gateways which make up the end-to-end process, and based on these results it declares an overall pass or fail. The major problem is that these series of tests never ask questions about usability or convenience.

My final succinct thoughts on testing are these. Testers should:

• clearly define the outcome of each test

• clearly define what is expected if a test fails (further development)

• clearly define the roles played by the business and the tester (particularly for internal projects where external testers are used)

• clearly define automated and manual testing roles

• clearly define any go/no-go criteria.

Testing, and the outcomes of testing, should allow decisions to be made and allow an organisation to go forward. Do not underestimate the value of a well thought-out and thorough testing strategy; it will avoid a lot of pain later.

Written by Azik Chowdhury, Business Analyst & Project Manager, Jaguar Land Rover
