The 4 requirements to achieve parallelisation
With increased automation and scalability come increasing challenges, but those challenges don’t need to become roadblocks.
A strange thing is happening in the world of testing. On one hand, the organisations I talk to today understand better than ever just how important automation is to the success of their business. They understand that in the digital age, building an engaged and loyal customer base goes hand-in-hand with rapidly delivering high-quality, easy-to-use web and mobile applications, and that running automated tests continuously, throughout the software delivery cycle, is a necessity.
As a result, they’re devoting more time, energy and resources to automated testing than they ever have before.
On the other hand, when I initially engage with them, many of those same organisations are struggling and in search of answers. The more they attempt to automate and scale their testing efforts, the more they seem to run into roadblocks, and consequently, the less frequently they’re able to deliver new and updated apps to their customers. They’re not alone in their frustration, either.
According to a recent Forrester report, the percentage of organisations releasing software on an ‘at least monthly’ basis declined from 36% in 2017 to 27% in 2018.
Parallelisation: the missing ingredient
The shared frustration so many organisations experience on their automated testing journeys usually boils down to this: as the number of tests in a suite starts to grow, the suite starts taking too long to run. And when tests start taking too long to run, the entire agile development process breaks down.
If you’re looking for the reason why release velocity is stalling despite our collective investment in agile development, look no further than this.
If we’re to close the gap between expectation and reality, there’s only one solution: parallelisation. To better understand the power of parallelisation, let’s consider a hypothetical scenario in which we have a suite of 100 tests, each of which takes two minutes to run. If you have to run those 100 tests sequentially, it’ll take more than three hours (100 × two minutes is 200 minutes) for the entire suite to complete – time your developers spend unproductively waiting for feedback.
If, however, you can run those 100 tests in parallel, then it’ll take just two minutes to complete the entire suite, and you can get feedback to your developers just that quickly. It’s not a stretch to say this is the difference between a successful automated testing initiative and a failed one.
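The arithmetic here can be sketched in a few lines of Java: sequential execution time is the sum of every test’s duration, while fully parallel execution is bounded only by the slowest single test.

```java
// Sketch: wall-clock time for the hypothetical suite of 100 tests,
// each taking two minutes, run sequentially versus fully in parallel.
public class SuiteTiming {
    // Sequential: total time is the sum of all test durations.
    static int sequentialMinutes(int testCount, int minutesPerTest) {
        return testCount * minutesPerTest;
    }

    // Fully parallel: total time is the duration of the slowest test.
    static int parallelMinutes(int[] testDurations) {
        int slowest = 0;
        for (int d : testDurations) {
            slowest = Math.max(slowest, d);
        }
        return slowest;
    }

    public static void main(String[] args) {
        int[] suite = new int[100];
        java.util.Arrays.fill(suite, 2);
        System.out.println(sequentialMinutes(100, 2)); // 200 minutes
        System.out.println(parallelMinutes(suite));    // 2 minutes
    }
}
```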
In other words, and in no uncertain terms, you cannot have effective automated testing – and thus, by extension, effective agile development – if you cannot effectively run tests in parallel. Of course, effectively running tests in parallel is more easily said than achieved, and many organisations immediately encounter a number of challenges as they look to scale their automated testing initiatives through parallelisation. Fortunately, none of those challenges are insurmountable.
With this in mind, let’s examine the four requirements necessary to achieve effective parallelisation in automated testing.
1. Keep your tests atomic
The more I talk to customers, the more I study automated testing, and the more I myself execute automated test suites, the more I am convinced that the single most important thing any organisation can do to achieve parallelisation and deliver automation at scale is to run atomic tests. An atomic test is one that assesses just a single piece of application functionality, and nothing more.
Let’s say you want to validate that your login page loads, your username field displays, your password field displays, items can be added to a cart, and a check-out can be processed. By default, many developers would script a single test that aims to validate all five of those functions. If you’re leveraging atomic tests, however, you would script five completely separate tests that validate each of those functions individually.
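As a minimal sketch of that split, the example below uses a hypothetical `AppClient` stub in place of real page objects – the point is the shape (five independent test methods, one check each), not the stubbed checks themselves:

```java
// Sketch of the atomic approach: each test method validates exactly one
// piece of functionality and can run, pass, or fail entirely on its own.
public class AtomicTests {
    // Stand-in for the application under test; these method names are
    // illustrative, not a real API.
    static class AppClient {
        boolean loginPageLoads()      { return true; }
        boolean usernameFieldShown()  { return true; }
        boolean passwordFieldShown()  { return true; }
        boolean canAddItemToCart()    { return true; }
        boolean canProcessCheckout()  { return true; }
    }

    // Five atomic tests instead of one long-flow test covering all five.
    static boolean testLoginPageLoads()     { return new AppClient().loginPageLoads(); }
    static boolean testUsernameFieldShown() { return new AppClient().usernameFieldShown(); }
    static boolean testPasswordFieldShown() { return new AppClient().passwordFieldShown(); }
    static boolean testAddToCart()          { return new AppClient().canAddItemToCart(); }
    static boolean testCheckout()           { return new AppClient().canProcessCheckout(); }
}
```

Because each method stands alone, all five can be dispatched to separate parallel workers, and a failure in one pinpoints exactly one piece of functionality.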
The benefits of doing so are many, but let’s start with speed, which is what parallelisation is really all about. When I conduct automated testing workshops, I’ll often show two test suites, each covering the same exact application features. One suite features 18 long-flow, end-to-end tests, and the second features 180 atomic tests. I’ll then ask attendees to predict which of the two test suites will execute faster.
To the surprise of most, the answer is the suite featuring 180 atomic tests. And to their even greater surprise, it’s usually not even close. In live demos, the suite featuring 180 atomic tests will typically execute eight times faster than the suite with 18 long-flow tests.
Think about that for a second: 10 times more tests, eight times faster execution. That’s the power of atomicity.
If you’ve never been exposed to the concept of an atomic test, you’d understandably (but mistakenly) assume that running longer tests in smaller quantities is a faster approach than running atomic tests in larger quantities. If that’s your mindset, then when you inevitably encounter longer-than-desired test run times, your default reaction will be to add more validations to a smaller number of tests. As you’ll soon find out, this will only make your wait longer. It’s a painful cycle that only atomic tests can break.
It’s not enough to have some atomic tests in your suite, either. They should all be atomic. Remember, when you apply parallelisation, your suite will only execute as quickly as the slowest test in your suite. Just one long test is enough to undermine the execution time of an otherwise beautifully scripted suite of atomic tests. If you have a suite of 30 tests, 29 of which are atomic tests that take just a minute to execute, and one of which is an end-to-end test that takes 30 minutes to execute, then you’ll have to wait the full 30 minutes to get the results of every test in that suite.
Leveraging atomicity not only reduces test execution time but also greatly enhances test quality, and as any experienced tester will tell you, test quality is everything. Everything you do with automated testing is only relevant to the extent that you’re running high-quality tests from which you can glean valuable and reliable insight into the state of your application.
Suites that leverage atomic tests are far more stable and reliable than suites that don’t. After all, every single validation request you add to a test is another opportunity for something to go wrong and for the test to fail. Atomic tests are also far easier to debug when a test does fail. For starters, as previously explained, atomic tests execute much faster than non-atomic tests, so developers are usually getting feedback on code they just wrote. This makes it considerably easier to go back and fix that code.
Moreover, since an atomic test focuses on just one piece of application functionality, there’s no ambiguity whatsoever about what’s broken in the event of a failed test. It can only be that one thing. That means developers don’t have to spend their valuable time searching out the source of the failure.
By contrast, when a non-atomic test fails, the culprit could be any number of validation requests. Making matters worse, developers get no feedback on features beyond the point of the failure. So, if you’re testing 30 different elements of functionality within a single test, and the failure occurs at element 10, the remaining 20 elements go untested.
2. Keep your tests autonomous
If keeping your tests atomic is the first step on the road to effective parallel testing, keeping them autonomous follows closely behind. As its name suggests, an autonomous test is one that is scripted to run completely independently of all the other tests in your suite. This means that if you have 30 tests in a suite, all 30 should be able to execute to completion regardless of the outcome of the other 29 tests in that same suite.
Unfortunately, one of the more common mistakes I see development teams make is the scripting of an end-to-end test scenario in which one test cannot successfully execute until its predecessor has done the same.
In such a scenario, if you want to do something as simple as validating the efficacy of your checkout function, you must first successfully execute tests for each of the functions that precede it in the application workflow. The moment one test in the workflow fails, all of the other dependent tests fail as well.
This type of inter-dependency will ultimately undermine your ability to achieve parallelisation at scale, so make sure you design your tests such that they can all run entirely on their own and in any order necessary.
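One common way to achieve this is for each test to build its own preconditions through setup helpers (for example, API-level calls) rather than relying on earlier UI tests having run. The `TestEnv` helpers below are hypothetical stand-ins for that kind of setup:

```java
// Sketch: an autonomous checkout test creates every precondition it needs
// itself, so it can run first, last, or alone, in any order.
public class AutonomousCheckoutTest {
    // Hypothetical setup helper; in practice this would seed state
    // directly (e.g. via a REST API), not by re-running other UI tests.
    static class TestEnv {
        String userId;
        java.util.List<String> cart = new java.util.ArrayList<>();

        TestEnv withLoggedInUser() {
            this.userId = "test-user";
            return this;
        }

        TestEnv withItemInCart(String item) {
            cart.add(item);
            return this;
        }
    }

    static boolean testCheckout() {
        // All preconditions are created inside the test itself...
        TestEnv env = new TestEnv().withLoggedInUser().withItemInCart("sku-123");
        // ...so the only behaviour exercised is the one under test: checkout.
        return env.userId != null && !env.cart.isEmpty();
    }
}
```

Because the checkout test never depends on a login test or an add-to-cart test having passed first, a failure elsewhere in the suite can’t cascade into it.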
3. Properly manage test data
Proper control and management of your test data is an essential component of effective parallel testing. If you’re relying on traditional hard-coded test data, the static nature of which is a poor fit for the dynamic nature of automation, then you’re in for an uphill battle.
A better strategy is to leverage what’s known as ‘just-in-time data’, where you create test data, utilise it for a given automated test suite, and then destroy it upon completion of the test. This real-time approach ensures that data from a previously executed test doesn’t muddy the results of your current test, a common problem for many organisations as they attempt to scale their automation and run tests in parallel.
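The create–use–destroy cycle can be sketched as follows; the in-memory `database` set and the helper names are illustrative stand-ins, not a real framework API:

```java
// Sketch of 'just-in-time data': each test creates its own uniquely
// named record, uses it, and destroys it on completion, so parallel
// tests never trip over each other's leftover data.
public class JustInTimeData {
    // Stand-in for the system's real data store.
    static java.util.Set<String> database = new java.util.HashSet<>();

    static String createTestUser() {
        // Unique per run, so concurrent tests can't collide on a record.
        String id = "user-" + java.util.UUID.randomUUID();
        database.add(id);
        return id;
    }

    static boolean runTestWith(String userId) {
        // The test only ever sees the data it created itself.
        return database.contains(userId);
    }

    static void destroyTestUser(String userId) {
        // Tear down on completion so no stale data muddies later runs.
        database.remove(userId);
    }
}
```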
4. Avoid the static keyword
Deeply technical, but deeply important, the final requirement for effective parallel testing has to do with applying the ‘static’ keyword or, perhaps better said, with not applying it.
Whether it’s to the WebDriver instance you’re managing in your test scripts, or to Page Objects, or to a number of other ancillary classes for test automation, applying the ‘static’ keyword is a swift way to undermine your best efforts at parallelisation.
When you declare your WebDriver instance as ‘static’, it effectively means that there can only be one instance of WebDriver attached to and shared between all of the tests in your suite. It’s akin to asking all the restaurants in the world to share the same chef!
To say this is an inefficient approach would be an understatement. And, while there is a workaround in which testers ‘fork’ the JVM process for each test in their suite, thus creating a new instance of the entire test suite for every test, it’s only marginally more efficient – in other words, still highly inefficient.
Of course, there are exceptions to every rule, and there are in this case as well (such as strings and loggers), but, in general, the ‘static’ keyword should be avoided if at all possible.
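A common alternative is to hold one driver per worker thread in a `ThreadLocal` rather than in a single static field. The sketch below uses a `FakeDriver` stand-in for the real WebDriver class so it’s self-contained:

```java
// Sketch: one driver per worker thread via ThreadLocal, instead of a
// single static WebDriver shared by every test in the suite.
public class DriverFactory {
    // Stand-in for org.openqa.selenium.WebDriver.
    static class FakeDriver { }

    // Each thread that calls driver() lazily receives its own instance.
    private static final ThreadLocal<FakeDriver> DRIVER =
        ThreadLocal.withInitial(FakeDriver::new);

    static FakeDriver driver() {
        return DRIVER.get();
    }

    static void quit() {
        // Per-thread clean-up at the end of a test.
        DRIVER.remove();
    }

    // Demonstration: the same thread always sees the same instance,
    // while a second thread gets a different one.
    static boolean isolatedPerThread() {
        FakeDriver mine = driver();
        final FakeDriver[] other = new FakeDriver[1];
        Thread t = new Thread(() -> other[0] = driver());
        t.start();
        try {
            t.join();
        } catch (InterruptedException e) {
            return false;
        }
        return mine == driver() && mine != other[0];
    }
}
```

Each parallel worker then calls `DriverFactory.driver()` and gets its own isolated browser session, with `DriverFactory.quit()` handling per-thread clean-up – no forked JVMs required.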
The new competitive advantage
I opened by talking about the dichotomy that today exists between expectation and reality in the world of automated testing and agile development. That so many organisations are still struggling to deliver high-quality web and mobile applications at speed means that an enormous competitive advantage awaits those who can. Adhering to the four mandatory requirements of parallelisation and delivering true automation at scale is a great place to start.