How to win at continuous testing

The prize of test automation and continuous testing has always been a much sought-after one for organisations, never more so than with the exponential growth of those wanting to adopt Agile and DevOps practices. But what exactly is the prize they seek?

The more you delve into this, the more it seems the overriding factor is not quality, but going faster and bringing the cost of testing down.

But wouldn’t that be their agenda? Do they expect that quality isn’t something that can be achieved today? Or is there a problem that originates a little closer to home? Ask yourselves: how do my test teams effectively demonstrate that product owners should have confidence in the product being delivered, on a sustainable and continual basis?

Why would the stakeholder question that view each time you tell them you’ve achieved 100% test coverage? And what did you really mean by that anyway?

Foundations

I have spent the majority of my career trying to create technical test capabilities from within a traditional centre of excellence, and we have had many successes – be that creating a centralised automation team and standardising on frameworks, or creating an environment virtualisation capability.

However, all of it seemed somehow too short-term and too bespoke. It dawned on me, whilst facing the challenges of creating a centralised test data capability, that what we were doing – and always had been doing – was building on shaky foundations. We didn’t know upfront what the product was supposed to do, so how could we prove it worked? Let alone automate those unknowns, create test data for those unknowns and create a representative interface for those unknowns.

We were being reactive, redesigning in test, but the rate of change and the traditional ways of creating mounds of test cases in the usual test management tools weren’t helping. In fact, they were exacerbating the problem, creating ever more tests of questionable value and giving stakeholders the false confidence that running thousands of tests equals ‘quality’.

We were shifting right when we should have been shifting left.

Shifting left

But shifting left shouldn’t be a test-only activity – as a collective, we’d let ourselves move IT change to the right, knowing the foundations weren’t correct. We needed to shift quality back to the left, and so we went back to basics: validation (the process of checking whether the specification captures the customer’s needs) and verification (the process of checking that the software meets the specification).

As testers, we’d lost the craft of testing, and the same can be said of the amigos we work with in design and development. Now that we’re recognising this, however, we are accelerating our efforts to address the debt of unknowns we’ve built up over time. We are doing this through modelling, or model-based testing (or model-driven development, if you are culturally further along the journey of ‘quality is everybody’s responsibility’).
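
To make that concrete, here is a minimal, purely illustrative sketch (the states, actions and Python below are my own invention, not any particular tool’s notation) of what a model can be: a directed graph of the system’s rules, from which test paths are derived rather than hand-written.

# Hypothetical model: the system's rules as a directed graph of
# states and the actions that move between them.
MODEL = {
    "logged_out": {"log_in": "logged_in"},
    "logged_in": {"open_account": "account_open", "log_out": "logged_out"},
    "account_open": {"log_out": "logged_out"},
}

def derive_paths(start="logged_out", max_depth=3):
    """Enumerate every action sequence up to max_depth - each is a test."""
    stack = [(start, [])]
    while stack:
        state, path = stack.pop()
        if path:
            yield path
        if len(path) < max_depth:
            for action, next_state in MODEL[state].items():
                stack.append((next_state, path + [(state, action, next_state)]))

for path in derive_paths():
    print(" -> ".join(f"{state} [{action}]" for state, action, _ in path))

Because the tests are derived from the model, changing a rule changes the tests with it – the conversation happens on the model, not across thousands of hand-maintained cases.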

I’m keen to stress here that although tooling is an enabler, it certainly doesn’t end with just buying a tool and doing the same thing we always have – that would just be committing the same mistakes under a new buzzword!

Target your continuous testing efforts

What we are really talking about, at its core, is critical questioning and collaboration, and understanding where to target appropriate effort to achieve a quality outcome.

We build models, but they are an aid to the conversations, queries and rules that we need to elicit from various stakeholders. Models provide a structured representation of ‘validation’, and we realise we actually have many customers whose needs must be met – compliance, security and operations, as well as the end consumer of the software actually being created.

Models allow us, as testers, to have a rich conversation, but they also allow us to tick the ‘just enough documentation’ box for the wider team – with our models becoming the living specifications.
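
As a sketch of what a ‘living specification’ can mean in practice (again hypothetical – the model and rendering below are invented for illustration), the readable documentation can be generated from the very model the tests are derived from, so it never drifts out of date.

# Hypothetical sketch: render human-readable documentation from the
# same model that drives the tests, so the spec is always current.
MODEL = {
    "logged_out": {"log_in": "logged_in"},
    "logged_in": {"open_account": "account_open", "log_out": "logged_out"},
    "account_open": {"log_out": "logged_out"},
}

def render_spec(model):
    lines = ["Specification (generated from the model)"]
    for state, transitions in model.items():
        for action, next_state in transitions.items():
            lines.append(f"- In '{state}', '{action}' leads to '{next_state}'.")
    return "\n".join(lines)

print(render_spec(MODEL))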

Tool limitations

We must recognise the limitations of static modelling capabilities – be that in design or test, using products like Office (Word, Excel, Visio), or indeed some of the newer tool offerings that seem to come hand-in-hand with the agile sales pitch. They serve a purpose in the short term but, when the solution they represent becomes more complex – which inevitably most do as more features are added – they can only build up technical debt for the future.

Living specifications (models), which can guide you through the impact of change, are the key factor in keeping the continuous delivery train rolling and in preventing understanding from building up in undocumented layers.

Model building

So, in our teams we have models, and those models are built from the rules of the system and methodical questioning. We have a tick in the box for validation and, at the same time, we’ve created the perfect environment of predictable and repeatable criteria with absolute expected results. Now we can give the stakeholders what they want – automation! But for all the right reasons.
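
A hedged sketch of what that automation can look like (the transfer rules and figures below are invented purely for illustration): each case is derived from the model’s rules and carries an absolute expected result, so every generated test is predictable and repeatable.

# Illustrative sketch: model-derived cases, each with an absolute
# expected result, feeding straight into test automation.
import pytest

def transfer(balance, amount):
    """Toy system under test: reject invalid or unaffordable transfers."""
    if amount <= 0 or amount > balance:
        raise ValueError("transfer rejected")
    return balance - amount

# Cases derived from the model's rules, with expected outcomes.
VALID_CASES = [
    (100, 40, 60),    # happy path
    (100, 100, 0),    # boundary: exact balance
]
REJECTED_CASES = [(100, 0), (100, -5), (100, 101)]

@pytest.mark.parametrize("balance, amount, expected", VALID_CASES)
def test_valid_transfer(balance, amount, expected):
    assert transfer(balance, amount) == expected

@pytest.mark.parametrize("balance, amount", REJECTED_CASES)
def test_rejected_transfer(balance, amount):
    with pytest.raises(ValueError):
        transfer(balance, amount)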

We know each test we run, derived from the model, is a valuable one, so we can confidently articulate that our testing activities achieve the verification element of the solution. We should always augment our testing with exploration, but we need to ensure what we learn is fed back into our models – always stretching the bounds of understanding, and appreciating that we can never know it all (Fig 1).

Fig 1 – testing volcanoes and the importance of continual exploration and learning

Next steps in validation

So, if all is good, we start modelling (structured flows), ask questions, and all of our problems go away?

They don’t – not if we haven’t changed where we target our validation. If we concentrate on business flows alone, we create business-flow models, which in turn force us into a test approach that is too UI-based. We know testing at the UI is slow, and automation of the UI is the hardest to achieve.

Rather, we need to look at the right-shaped approach (Fig 2) to ensure we validate how things are imagined to work under the waterline of the ‘iceberg’ of a solution we are being asked to test.

We must validate (model) at the service/API layer, and those models have to connect to the business flows (Fig 3) – otherwise, how will verification ever have a chance of proving something that’s never been defined, other than in the head of a developer or in an out-of-date, happy-path, static specification?

Fig 3 – modelling the whole iceberg, not just what the consumer sees
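
As an illustrative sketch of verifying beneath the waterline (the endpoint, payload and fields here are assumptions invented for the example), the same check that would be slow and brittle through the UI becomes a fast, precise assertion at the service/API layer.

# Hypothetical sketch: assert the service contract a business flow
# relies on, directly at the API layer rather than through the UI.
import requests

BASE_URL = "https://api.example.test"  # placeholder test environment

def test_open_account_contract():
    response = requests.post(
        f"{BASE_URL}/accounts",
        json={"customerId": "c-123", "product": "easy-saver"},
        timeout=10,
    )
    assert response.status_code == 201
    body = response.json()
    # Fields the connected business-flow model depends on.
    assert body["accountId"]
    assert body["status"] == "open"

Because the API-level model connects to the business flow, a failure here points directly at the broken rule, rather than at a flaky end-to-end UI journey.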

Conclusion

Breaking down traditional silos, and understanding that quality is something that needs consensus before it can be proven, seems to be the fundamental ‘elephant in the room’ that people may be missing in the pursuit of continuous testing.

Where this is understood, automation thrives. But the interesting cultural shift is that those stakeholders who sponsor these high-performing teams also understand that quality isn’t born from test automation; rather, automation is an outcome of taking the time to define upfront what ‘quality’ really means.

Richard Jordan
Test engineering manager
Nationwide Building Society