
Testing in the Third Industrial Revolution: Is AI all that it’s cracked up to be for today’s testing?

This article seeks to make sense of the promise of AI in testing. It first examines the pressure placed on testing by iterative software delivery, identifying some core requirements for testing in the “Third Industrial Revolution”. It then considers why current testing methodologies fail to fulfil these objectives. Only then are emerging technologies from the world of AI considered. The goal is to identify how current AI technologies might remedy the challenges created by the rise in automated test execution, building on the tools and techniques in place today.

Objectives for QA in the Third Industrial Revolution

New methodologies and practices have revolutionised software development in recent decades. The software industry has moved from sequential waterfall approaches – in which software requirements were pre-defined, coded, tested, and eventually released often years later – to rapid evolutionary development of a minimum viable product within days, weeks or months. These new methodologies allow us to unlock value earlier, while rapidly obtaining feedback to test hypotheses and drive future releases.

Not only have the methodologies for building software changed, the underlying technology we use to assemble software has changed with them. Notice the word ‘assemble’: today, we write code on top of several layers of abstraction, binding together third-party libraries and APIs with a few pieces of custom business logic. This bespoke code then becomes the application’s unique selling point and our own intellectual property.

These revolutions have enabled software to be developed at incredible speed, while virtually every company today has become a software organisation in order to remain competitive. The biggest companies in the world leverage software to drive efficiencies, widen profit margins, and create new markets. Meanwhile, today’s consumers make buying decisions based on the usability and features of a platform. Software has therefore been disruptive to the point of transforming how businesses operate, and some rightly refer to the current era as the ‘third industrial revolution’.

However, testing practices have not always adapted at the speed of development methodologies, and QA is battling to keep up with the ever-increasing rate of change of software. That’s a problem for organisations competing for software-driven consumers, as slow response times, defects in production, and a poor user experience today equate to customer churn, reduced subscriptions, and lower revenue. Testing therefore has a greater role than ever in protecting an organisation’s bottom line.

New Development Methodologies: New Pressure on QA

Businesses therefore have testing on their minds, but often view it negatively as a bottleneck and resource drain in otherwise well-oiled software factories. The pressure placed on testing today can be understood by considering current development methodologies in more detail, before considering why current testing methods cannot keep up with the speed of iterative development.

Most modern development practices are labelled “Agile”, a now ubiquitous term in software development. In fact, as few as 2% of organisations describe themselves as “pure Waterfall”.[1] Broadly speaking, agile has become an umbrella term for iterative and incremental software development, typically a variation on Scrum delivery.

While there are many variations, the core value revolves around providing a lightweight framework that promotes iterative software development throughout a product’s life-span. Requirements and solutions evolve through collaboration between delivery teams, the business, and customers. This collaboration feeds feedback-driven development, as defined in the Agile Manifesto.

The driver for agile adoption usually comes from the business, which wants to release earlier and with greater frequency. The goal is to unlock value faster and obtain valuable feedback earlier in the cycle, thereby mitigating the risk of building software that does not meet user needs. Feedback is used to drive software advancement, ensuring the solution meets the latest market demand.

Within an agile development team, the journey starts with a backlog of tasks, known as stories. Developers pick off stories one-by-one and implement them. After implementation, the stories are integrated into the shared source control system. Once the new code is integrated, testers assure that the newly submitted feature meets the initial software requirements.

There are typically also further requirements for what constitutes a “done” feature, for example performance constraints and peer reviews. This loop continues until the objectives for the sprint are complete, at which time a new release candidate is built. The objective is for all the cross-functional teams (development, product owner, testers, etc.) to work in close alignment, continuously ensuring software is developed to meet the latest business needs.

Testing Practices have not Kept Pace

New development practices therefore release more frequently, merging discrete features into the master codebase. Each merge requires testing, and QA must therefore test systems that are more complex than ever, all while testing faster than ever before.

However, testing has historically not kept pace with evolving development practices. Many organisations are therefore now attempting to align the two, creating the cross-functional teams just mentioned. The latest trend has seen dedicated testing centres of excellence diffuse into agile teams who deliver the full SDLC. Each team takes ownership of their piece of software and its associated testing. The role of the test manager has accordingly transformed into that of a quality-driven scrum master, who manages a team’s development and quality simultaneously.

The problem is that new team structures do not necessarily equate to new QA practices. The same manual testing processes often persist, and testing quickly lags behind the pace of iterative development. With each piece of code checked in, testers today frequently still:

  • Design new test cases which assure the functionality of a new feature, while striving to maximise the overall test coverage for the application.
  • Update the existing regression tests if their respective journeys are affected by new code changes. This involves searching through all tests to find impacted test cases and then repairing them. Regression tests are vital in assuring a quality deliverable each time a build is made.
  • Execute tests to assure new features meet their requirements and existing functionality is still operational.

Much of this effort remains manual, even if automated test execution is applied to the third of these activities. The processes are also frequently siloed and repeated from scratch each time, so that each code commit requires more manual design and maintenance than there is time in a sprint. Meanwhile, there is rarely a formalised notion of coverage in manual test design and maintenance. Under-testing is then the norm, while some features are heavily and wastefully over-tested.

Automated testing has frequently been presented as the answer to the downfalls of manual testing. In practice, automating execution has amplified many of these manual challenges. While automation exponentially increases the speed of test execution, scripting the automated tests remains a manual process. This introduces another slow and manual process that does nothing to improve the coverage of manually designed test cases. Test maintenance can furthermore kill any automation ROI, or worse yet cost more time and money than manual testing in the first place. In fact, it’s not unusual for organisations to down tools for up to 5 days to update existing regression tests.


Figure 1 – QA bottlenecks mean that testing cannot keep up with the pace of code commits.

Even though most organisations are adopting agile and succeeding with the development and delivery of software in short release cycles, quality assurance is missing from each delivery. Sometimes, testing lags behind releases by up to 12 months. New testing techniques are required to produce and maintain rigorous automated tests at the speed of agile.

The Rise of Artificial Intelligence: A New Dawn for Testing?

Automating test execution has therefore not delivered on the promise of true quality at the speed demanded by the Third Industrial Revolution. As organisations realise the bottlenecks that persist even with automated testing, “AI” and “intelligent testing” are being touted as the new “fix” for testing.

The challenge is that commercial AI technologies are still emergent if not embryonic in the testing industry. They are yet to be tried-and-tested in the field and we still don’t know what “works” for testing. The language of AI has furthermore flooded the market and is often merely stamped onto existing technologies as a marketing activity. It is therefore unclear what “AI” is in testing and how exactly it relates to previous testing technologies and techniques.

Intelligent testing is instead often discussed in terms of promise and not practice, and it is unclear how organisations can move from current tools and technologies to introducing valuable AI. We can’t just “buy a box of AI”, so how do we begin to incorporate it into testing?

The multiplicity of testing practices makes this a particularly difficult question to answer. When people envision AI in software testing, they often perceive armies of bots testing our applications for us. A comparison is often drawn to the autonomous vehicles that are now close to reality. However, the world of autonomous vehicles is arguably less challenging than that of software testing. Firstly, we have been driving cars relatively successfully for several decades.

There are clearly defined rules in each country dictating how we should drive our cars, for example the Highway Code in the UK, and there are various driving tests which provide exactly defined requirements for driving. Cars themselves furthermore follow a strict set of standards to be road legal. The space is therefore relatively constrained, with a well-known, strict set of rules and procedures that can be reflected in autonomous driving systems.

No such standards exist for testing and developing software. The requirements are highly variable and informal, residing in the brains of subject matter experts. The data consumed by testing is also frequently highly unstructured, typically textual and stored across many formats and systems. There are also no set standards for testing, and an array of techniques are often at play: from Behaviour-Driven Development and Model-Based Testing to Chaos Engineering and Acceptance Test-Driven Development.

These methodologies moreover apply to several levels of software (API, UI, database). This vast variability makes software testing a complex and ad hoc process, and testers today often struggle with the testing procedures due to the plethora of non-documented facets. How then can intelligent robots be taught to test?

Introducing AI to QA: First Ask, “Do I Have the Information Required?”

Let’s start answering this question with a well-known phrase in Artificial Intelligence and Machine Learning: “Garbage in, garbage out”. Before we can start applying intelligent techniques, we first need highly structured, high-quality data about the systems under test.

Models provide one way to record the structured data that feeds into and out of the software development lifecycle, in turn avoiding “garbage in, garbage out” scenarios. Effective modelling entails maintaining accurate models of how a piece of software should operate, providing a single source of truth to drive intelligent testing.

The models need be nothing more than a simple flowchart of events. Technologies today mean that the models can be created from the left of the SDLC, as a requirement model, or from the right, once the software has been developed. Accelerators such as scanners and central data dictionaries that re-use existing automation code to generate new tests make the construction of such models very quick, as do integrations with monitoring tools that can build models automatically from system activity:


Figure 2 – Intelligent modelling iteratively maintains a Single Source of Truth.

The models are therefore living documentation that captures the latest understanding of the ideal system. The same models can drive test generation and maintenance. This works with the test automation already in place, removing many of the bottlenecks which force testing to lag behind development.
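
To make this concrete, here is a minimal sketch of what such a model might look like in code – a hypothetical login journey, with illustrative node names rather than any prescribed format:

    from dataclasses import dataclass, field

    @dataclass
    class Model:
        """A flowchart-style model: nodes are system states or events,
        edges are the possible transitions between them."""
        edges: dict = field(default_factory=dict)

        def add_flow(self, source, target):
            """Record that 'target' can follow 'source' in a user journey."""
            self.edges.setdefault(source, []).append(target)

    # A hypothetical login journey, sketched as a simple flowchart
    login_model = Model()
    login_model.add_flow("Open Login Page", "Enter Credentials")
    login_model.add_flow("Enter Credentials", "Valid Login")
    login_model.add_flow("Enter Credentials", "Invalid Login")
    login_model.add_flow("Invalid Login", "Enter Credentials")  # retry loop
    login_model.add_flow("Valid Login", "Dashboard")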

Test data definitions can be overlaid to make or find the right data within backend sources for each test scenario. This ensures that QA has up-to-date data, in the right place, at the right time. Test automation actions, meanwhile, are overlaid to specify exactly what each step should perform on the underlying system as a test passes through that action.
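
As an illustration, overlays might attach data criteria and automation actions to the nodes of the login model above. The shape of this sketch, the locators and the data criteria are all assumptions, not a prescribed schema:

    def enter_credentials(driver, data):
        """Automation action overlaid on the 'Enter Credentials' node."""
        driver.find_element("id", "username").send_keys(data["username"])
        driver.find_element("id", "password").send_keys(data["password"])

    node_overlays = {
        "Enter Credentials": {
            # Data definition: criteria for finding or making matching test data
            "data": {"user_status": "active", "password_expired": False},
            # Automation action: what a generated test performs at this step
            "action": enter_credentials,
        },
    }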

Once a model is created with the relevant overlaid attributes, it can be used to automatically create coverage-focused tests. That is, the minimum number of automated tests needed to maximise coverage across the entire application stack:


Figure 3 – Formal modelling enables automated test generation, removing the bottleneck of manual test design.
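
A naive sketch of such generation walks the model above, enumerating journeys from start to end so that every edge is exercised at least once, while bounding loops such as the login retry. Commercial tools optimise for the minimum covering set; this illustration simply enumerates paths:

    def generate_paths(model, start, end, max_edge_visits=1):
        """Depth-first enumeration of journeys through the model,
        visiting each edge at most 'max_edge_visits' times per path."""
        paths = []
        stack = [(start, [start], {})]
        while stack:
            node, path, visits = stack.pop()
            if node == end:
                paths.append(path)
                continue
            for nxt in model.edges.get(node, []):
                edge = (node, nxt)
                if visits.get(edge, 0) < max_edge_visits:  # bound the retry loop
                    new_visits = dict(visits)
                    new_visits[edge] = new_visits.get(edge, 0) + 1
                    stack.append((nxt, path + [nxt], new_visits))
        return paths

    for path in generate_paths(login_model, "Open Login Page", "Dashboard"):
        print(" -> ".join(path))
    # Prints two journeys: the happy path, and the retry-then-login path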

Auto-generating the test artefacts from the model ensures that tests keep pace with fast-changing user needs, keeping even the fastest agile developers on their toes. Meanwhile, the visual nature of modelling makes it easy for stakeholders to understand how the system operates, and how it is being tested. Such a visualisation is a great communication asset for cross-functional team members, and quickly identifies gaps or misinterpretations in a solution.

Test maintenance furthermore becomes a much simpler effort. Visually, it is easy to identify which components (and therefore also the tests linked to them) have been impacted by a change. When the system flows into the next release, QA teams only need to update the models, which then automatically update the upstream and downstream generated artefacts.

Introducing AI to Model-Based Testing

AI-based testing technologies today further help remove the maintenance burden of test automation, and can be introduced readily to model-based test design. Self-healing identifiers, for instance, remove the need for human test maintenance after every system change. Identifiers instead react to changes and automatically update on every execution.
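
The principle can be sketched as follows, assuming Selenium-style element lookups; the candidate locators are hypothetical, and commercial tools apply far richer matching heuristics than this simple ranked fallback:

    from selenium.common.exceptions import NoSuchElementException

    class SelfHealingLocator:
        """Try a ranked list of locator strategies; promote whichever
        succeeds so the test adapts when the primary identifier changes."""

        def __init__(self, candidates):
            self.candidates = candidates

        def find(self, driver):
            for i, (by, value) in enumerate(self.candidates):
                try:
                    element = driver.find_element(by, value)
                    if i > 0:
                        # "Heal": promote the working locator for future runs
                        self.candidates.insert(0, self.candidates.pop(i))
                    return element
                except NoSuchElementException:
                    continue
            raise NoSuchElementException("No candidate locator matched")

    # Hypothetical candidates for a login button
    login_button = SelfHealingLocator([
        ("id", "login-btn"),
        ("css selector", "button[type='submit']"),
        ("xpath", "//button[contains(text(), 'Log in')]"),
    ])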

Run results are also overlaid instantly on top of the model in this approach, enabling stakeholders to understand which scenarios and user journeys have been impacted by any failures. Such data can also be used to generate a new and pinpointed set of tests. These targeted tests revolve around the failure, iteratively performing root cause analysis. This enables stakeholders to make more informed decisions over the impact of a defect, and whether to continue with a release.
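
Re-using the path generator sketched above, failure-focused generation can be as simple as selecting the journeys that pass through a failing node (the node name here is illustrative):

    def tests_through(model, start, end, failing_node):
        """Select only the generated journeys that exercise a failing step."""
        return [path for path in generate_paths(model, start, end)
                if failing_node in path]

    # e.g. if the "Invalid Login" step failed in the last run
    pinpointed = tests_through(login_model, "Open Login Page",
                               "Dashboard", "Invalid Login")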

Further intelligence can be overlaid to dynamically create tests based on risk and time. The central models provide a single source of truth to which user stories and code can be linked. As new user stories or code commits come in, automated impact analysis is applied and new tests can be generated to pinpoint the updated functionality:


Figure 4 – Intelligent modelling enables targeted regression.
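
One possible sketch of this impact analysis assumes that each model node is traced to the source files that implement it; the mappings and file names below are purely illustrative:

    # Hypothetical traceability between model nodes and source files
    node_to_files = {
        "Enter Credentials": {"auth/forms.py", "auth/validators.py"},
        "Valid Login": {"auth/session.py"},
        "Dashboard": {"dashboard/views.py"},
    }

    def impacted_tests(changed_files, journeys):
        """Return only the generated journeys that touch a changed node."""
        impacted = {node for node, files in node_to_files.items()
                    if files & set(changed_files)}
        return [path for path in journeys if impacted & set(path)]

    # e.g. a commit touching auth/session.py re-runs only the login journeys
    targeted = impacted_tests(
        ["auth/session.py"],
        generate_paths(login_model, "Open Login Page", "Dashboard"),
    )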

The Promise of AI in Testing

AI can therefore help remove many of the bottlenecks that leave testing lagging behind the speed of modern software delivery. However, before an organisation invests heavily in the latest AI technologies, it must make sure that its current testing practices are structured enough to facilitate intelligent testing. It must make sure that there is sufficient traceability between technologies and artefacts, such that accurate information can flow into a single source of truth.

This single source of truth can then enable automated and intelligent test maintenance. The accuracy of the central asset meanwhile avoids a “garbage in, garbage out” situation in which AI-driven testing creates inaccurate tests.

Modelling provides one approach today to creating such a single source of truth. It can be combined with AI-based technologies like self-healing identifiers and code impact analysis. This generates more resilient automated tests at the speed with which systems today change, an approach to QA fit for the Third Industrial Revolution.

[1] TechBeacon, “Survey: Is agile the new norm?”, retrieved from https://techbeacon.com/app-dev-testing/survey-agile-new-norm on 23-10-2019.

 

James Walker (PhD)
Director – Curiosity Software Ireland Ltd.