TEST Magazine, in partnership with CA Technologies, hosted an industry roundtable in March 2016, where senior testing and QA professionals from end user organisations got together to debate synthetic data generation and reactive automation.
Attendees included heads of testing and QA directors from the financial, ecommerce and education sectors, as well as industry consultant Paul Gerrard; Huw Price, VP at CA Technologies; Tom Pryce, Product Marketing Manager at CA Technologies; and Cecilia Rehn, Editor and General Manager at TEST Magazine.
New EU General Data Protection Regulation
Attendees at the TEST Magazine roundtable highlighted the potential impacts of the new EU General Data Protection Regulation and its likely effect on test data management.
The EU General Data Protection Regulation (GDPR) was initially proposed in 2012 to unify and strengthen existing legislation, and was approved by MEPs in April. There is therefore a sense of urgency in the IT community, as the two‑year implementation window means it is time to act now.
The regulation will apply to any organisation worldwide processing data in the EU, meaning that even if the UK voted to leave the EU, UK firms who handle EU data would still be held accountable.
Headlines concerning the GDPR have traditionally focused on the numbers: firms found breaking the law face fines of €20 million or 4% of annual turnover (whichever is higher). They have also focused on the issues arising from the ‘Right to Erasure’: the right to withdraw consent (Article 7) and to have data forgotten unless there is a legitimate reason to keep it (Article 17).
But what about testing?
Masking OK until it goes wrong
It has been the norm to use masked production data in test and development environments.
A head of testing from a smaller organisation at the roundtable lamented the fact that “there just isn’t enough time or money to invest in anything other than masked data.”
However, organisations will now have to make the privacy of their customers’ data a top priority, which is a worrying prospect for many. The GDPR requires data to be stored for a limited period of time, and for no longer than it is useful to the organisation. Data will also have to be stored in such a way that individuals cannot be identified by reverse engineering the data masking (data obfuscation), or pseudonymisation, as the GDPR calls it.
This raises a whole host of new questions. For example: if some information is left intact as a compromise, such as inter‑column relationships, is the data still GDPR compliant?
As the definition of ‘personal information’ continues to grow to include anything related to genetic, mental, economic, cultural or social identity, how easy is it to mask all of this content, while retaining the referential integrity needed for testing?
Crucially, as the GDPR demands, is the masked data safe? Is it possible to reverse engineer data from complex relationships using a piece of external information?
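To see why the question matters, consider a minimal sketch of such a linkage attack. The scenario, names and column values below are entirely hypothetical: it assumes a masked test table in which a personal identifier has been deterministically pseudonymised (hashed) so that referential integrity is preserved, and shows how an attacker holding an external list of plausible names can re‑identify every row.

```python
import hashlib

def pseudonymise(value: str) -> str:
    """Deterministic pseudonymisation: replace a value with its SHA-256 hash.
    (Deterministic so the same value masks identically across tables,
    preserving referential integrity for testing.)"""
    return hashlib.sha256(value.encode()).hexdigest()

# A "masked" test table: names replaced by hashes, other columns intact.
masked_rows = [
    {"customer": pseudonymise("Alice Smith"), "postcode": "SW1A 1AA"},
    {"customer": pseudonymise("Bob Jones"), "postcode": "EC2N 4AY"},
]

# An attacker with an external list of candidate names simply hashes each
# candidate and matches it against the masked column - a linkage attack.
external_names = ["Alice Smith", "Bob Jones", "Carol White"]
lookup = {pseudonymise(n): n for n in external_names}

re_identified = [lookup.get(row["customer"]) for row in masked_rows]
print(re_identified)  # both customers recovered
```

The determinism that makes masked data useful for testing is exactly what makes it vulnerable: the same property that keeps foreign keys consistent lets an outsider confirm guesses.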
“The GDPR defines when consent is and is not needed, but organisations will have to ask themselves, ‘What in our test environments do we need consent for?’” says Tom Pryce, Product Marketing Manager, CA Technologies. “Additionally, the GDPR includes language emphasising that should individuals request to see how their personal data is used/or have it deleted, organisations must comply ‘without delay.’”
Roundtable delegates were in agreement that little or no specific preparation has been carried out in advance of the GDPR in their organisations.
One senior testing manager from a high street bank noted the reluctance from their end to implement change in time for the deadline: “We’re aware of the GDPR, it’s being talked about. But I fear that with all the other regulation and scrutiny the banking sector faces, this will be a case of waiting to see if someone gets fined before action is taken.”
The ‘without delay’ clause is particularly troubling, the roundtable participants agreed: if it becomes well known amongst European citizens, it could cripple organisations struggling to sort through data stored in different locations.
Role‑based approaches to limiting data access were also brought up during the roundtable. Are these measures enough? Arguably not, given that data access must be restricted based on the task being performed (and if there is consent for it), and to a limited number of individuals, for a limited amount of time.
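As an illustration of what task‑ and consent‑scoped access could look like, here is a hypothetical sketch (the `ConsentRecord` structure, purpose names and dates are all invented for this example, not drawn from any product). The decision is driven by the purpose of the task and the consent window, not by role membership alone.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ConsentRecord:
    subject_id: str
    purposes: set        # purposes the individual has consented to
    expires: datetime    # consent is time-limited

def may_access(consent: ConsentRecord, task_purpose: str, now: datetime) -> bool:
    """Grant access only for a consented purpose, within the consent window.
    Role membership alone is deliberately not enough under this model."""
    return task_purpose in consent.purposes and now < consent.expires

consent = ConsentRecord(
    subject_id="cust-42",
    purposes={"regression-testing"},
    expires=datetime(2016, 12, 31),
)

print(may_access(consent, "regression-testing", datetime(2016, 6, 1)))    # True
print(may_access(consent, "marketing-analytics", datetime(2016, 6, 1)))   # False
```

Even this toy version makes the contrast clear: a pure role check would answer the same way for both tasks, whereas a purpose check distinguishes them.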
The changing nature of consent
The GDPR highlights why organisations might need to re‑consider their test data management strategy, especially when it comes to consent. According to the new rules, individuals must be informed of their right to withdraw consent.
In addition, the ‘opt out’ rules of the past are now gone; consent must be clearly distinguishable from other matters in a written document and must be provided “in an intelligible and easily accessible form, using clear and plain language.”
“The practice of testers copying and sharing masked data ad hoc, and keeping it on their machines indefinitely, is just not viable and increases significantly the risk of running afoul of the GDPR,” said Huw Price, VP at CA Technologies.
It is clear that organisations will need to know exactly where personal data is, when it was collected, who is using it, and for what purpose – a question that many organisations are struggling to find the time to answer.
Compliance and the fines landscape
Once in force, the GDPR promises to levy heavy fines on non‑compliant firms, so for those looking to avoid punitive damages, investment in solutions such as synthetic data generation is appealing.
The CA Test Data Manager solution creates synthetic data with all the characteristics of production, but none of the sensitive content. The synthetic data can be fed into multiple test environments at once, and from a compliance perspective, this data can then be stored alongside existing, masked production data in a central Test Data Warehouse. From here re‑usable data sets can be provisioned to authorised individuals on demand. Access can therefore be granted only if data will be used for a reason for which consent has been given, rather than granting it solely on a role basis.
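The core idea of synthetic test data can be sketched in a few lines. This is a simplified illustration of the general technique, not CA Test Data Manager's implementation; the field names and formats are invented for the example. Records keep production‑like shape (a GB‑prefixed account string, linked customer and order tables) while containing no real personal data.

```python
import random
import string

random.seed(7)  # seeded so the generated test data set is reproducible

def synthetic_customer(cust_id: int) -> dict:
    """Generate a customer record with production-like shape but no real data."""
    name = random.choice(string.ascii_uppercase) + \
           "".join(random.choices(string.ascii_lowercase, k=6))
    return {
        "customer_id": cust_id,
        "name": name,
        "iban": "GB" + "".join(random.choices(string.digits, k=20)),
    }

customers = [synthetic_customer(i) for i in range(1, 4)]

# Orders reference generated customer IDs, preserving referential integrity
# across tables - the property testers actually need from production copies.
orders = [
    {"order_id": n, "customer_id": random.choice(customers)["customer_id"]}
    for n in range(100, 105)
]

print(customers[0]["iban"][:2])  # synthetic values keep the production format
```

Because nothing here originates from a living individual, such data sits outside the consent questions raised earlier, which is precisely its compliance appeal.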
The second roundtable session saw attendees discussing ‘reactive automation’ – automation that can keep up with changing user or business requirements.
Attendees agreed that across testing departments there is a large amount of manual testing, although there is a desire to switch to automation, in order to eliminate testing bottlenecks and meet the increased pressure for short delivery time to market.
As buzzwords such as continuous delivery and DevOps take the stage, it is sometimes unclear how automation is going to help. Traditionally, test automation has been focused on replacing manual test execution, but if this remains the sole focus many other tasks will remain too slow and manual, while test automation itself can introduce additional labour.
Problems with traditional test automation
As organisations pursue the high levels of test automation they seek, issues can, and most likely will, arise. As pointed out during the roundtable, manual scripting can hinder automation, and maintenance is a major pain point for many.
Although organisations can take their pick of good automation frameworks, these tend to rely on scripting, which results in time wasted on manual test case design.
“One team we worked with took 6 hours to write just 11 tests,” said Pryce.
In addition, the issue of test automation maintenance needs to be addressed in many organisations. It is often the case that test scripts cannot be traced back to the original requirements from which they derived, so testers have to manually check and update the pile of existing tests to reflect a change made to the system. All of this eats up precious time and resources, leaving many to wonder where the time‑saving capabilities of automation went.
“We need to shift testing away from being a tools problem into thinking about it as a modelling problem,” said Paul Gerrard, industry consultant. “Through improvements in modelling and in the requirements phase, we can eliminate mistakes in code and in automation.”
Successful test automation should free up testers to focus on exploratory testing, build test models and inform future testing.
If automation is the future, how does reactive automation fit in?
“Reactive automation is the reason I came to this roundtable,” said one QA Director in the ecommerce space. “It’s an unknown for us right now, but we need to look into it.”
Reactive automation sees test automation encompassing test creation and maintenance to help speed up delivery and increase test coverage.
This can only be achieved if ‘active’ requirements are used for the automated testing. CA Agile Requirements Designer, an end‑to‑end requirements gathering and test case design tool, allows organisations to run automated tests that can be derived automatically from requirements modelled as unambiguous flowcharts. These tests are updated automatically when the requirements change, solving the issues of time wasted on maintenance and manual test generation.
In addition, optimisation techniques can be implemented to generate the smallest number of tests needed for maximum coverage, shortening test cycles.
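The optimisation step can be illustrated with a toy model. The sketch below is a generic example of the approach, not CA Agile Requirements Designer's algorithm: a requirement is modelled as a small flowchart (a directed graph, with invented node names), every path through it is enumerated, and a greedy pass then selects the fewest paths that still cover every edge.

```python
def all_paths(graph, start, end, path=None):
    """Enumerate every simple path through the flowchart model."""
    path = (path or []) + [start]
    if start == end:
        return [path]
    paths = []
    for nxt in graph.get(start, []):
        if nxt not in path:  # ignore cycles in this sketch
            paths.extend(all_paths(graph, nxt, end, path))
    return paths

def minimal_covering_set(paths):
    """Greedily pick the fewest paths covering every edge at least once."""
    all_edges = {e for p in paths for e in zip(p, p[1:])}
    chosen, covered = [], set()
    while covered != all_edges:
        # take the path contributing the most not-yet-covered edges
        best = max(paths, key=lambda p: len(set(zip(p, p[1:])) - covered))
        chosen.append(best)
        covered |= set(zip(best, best[1:]))
    return chosen

# A hypothetical requirement: login, with an optional 2FA step.
flow = {
    "start": ["login"],
    "login": ["2fa", "dashboard"],
    "2fa": ["dashboard"],
    "dashboard": ["end"],
}

paths = all_paths(flow, "start", "end")
tests = minimal_covering_set(paths)
print(len(paths), len(tests))
```

On larger models the gap between "all paths" and "paths needed for coverage" grows quickly, which is where the shortened test cycles come from.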