The Financial Conduct Authority regulates 56,000 financial services firms and financial markets in the UK, in order to help protect the UK’s economy, citizens and businesses. TEST Magazine spent some time with the independent regulator to learn about the importance of stakeholder communications and what the legacy of testing is.
Testing and QA at the Financial Conduct Authority is undergoing a transition. Cecilia Rehn, Editor of TEST Magazine, talked to Mike Jarred, Head of Software Testing at the FCA, about the new changes and why he sees the organisation as a big data company.
Tell us about yourself and your background?
I have not always worked in testing; when I left school, I had aspirations to become a tree surgeon, and I secured a place at the local agricultural college. In the months before my course commenced, I took a job at a motor factors, selling car spares to garages and body shops, answering phones, taking orders and doing stock control. As much as I wanted a job that would allow me to work outside, I was quite happy with what I was doing and decided to stay. My career then progressed in the motor industry, and my last position before moving into IT was as an assistant parts manager for a Jaguar and Rolls-Royce dealership.
I moved into IT in 1989 when a provider of automotive data required industry experience in data analyst roles as they brought their first digital products to market. I fell into testing as they then needed people with the industry expertise and data knowledge to make sure the software applications worked correctly. I developed this role (testing was a less understood discipline in 1989 than it is today!), became increasingly interested in testing and saw that it was a natural fit for me. I have subsequently worked in the financial, retail, insurance and pharmaceutical & life sciences domains in both senior testing and development roles, and I joined the FCA in 2015 as Head of Software Testing.
You started at the FCA in 2015, how has the first year been?
It’s been really good. I’ve been delighted with the capability of my team; they’re a really experienced group of people who provide me with incredible support in my testing role. There are a lot of things that have changed, which I will explain in more detail.
The way we’re structured in the FCA is that I have a permanent team of 11 senior experienced test managers. My team own the strategic approach to testing in the FCA as well as the standards and the approaches that we ask our suppliers to follow, and we assure the quality of the work conducted for us by our suppliers. The strategy for testing continuously evolves and is informed by taking a long term view of our portfolio of work, and the changing nature of the technology and delivery models that we need to introduce to effectively support our business. The strategy is also informed by listening to our stakeholders, and a key principle that underscores our strategy is to provide a value for money service. We have a dynamic and varied portfolio of work which we provide testing services for. To illustrate this, we had around 200 people conducting testing at our peak last year; currently it’s close to 130. We need a flexible provision of testers as our releases vary in size and complexity month by month.
The test group in the FCA is part of the Business and Technology Solutions division. Despite the FCA being an organisation that regulates financial services organisations, I believe that BTS has more in common with a big data company. We have financial services domain experts in our business, and it is through the accumulation and analysis of data – some captured by us, some sent in by the firms we regulate – that the FCA is able to conduct effective regulation. Therefore, on the tech side, we closely resemble a big data organisation.
Tell us about what changes you have implemented and why? What was your first priority?
When I started, I felt we were too entrenched in a centre of excellence model, which had standardised testing approaches that did not always provide value for money or adequately address risks to the operational service of our applications when changes were made. It felt too prescriptive, with a one‑size‑fits‑all approach. One of my first aims was to evaluate how we engaged with our stakeholders and make sure we were delivering the right service, at the right time, against an agreed set of risks we were looking to mitigate.
We now provide testing services to projects through a new mechanism called ‘consultative engagement’ which ensures conversations around risk with project stakeholders take place. The output of this engagement is a risk profile which informs our testing approaches, as agreed mitigation for the key areas of concern for the project stakeholders.
We have introduced visualisation tools to show the level of risk attached to projects and the commensurate testing effort as mitigation. It is a simple bubble chart which makes it easy to compare risk and testing approaches across similar pieces of work. One of the great benefits of the consultative engagement approach is that it helps with perceptions, as we can demonstrate where testing adds value.
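The interview doesn’t specify the scoring model behind the bubble chart, but a risk profile of this kind is often a simple likelihood-times-impact score plotted against planned testing effort. A minimal hypothetical sketch in Python, with invented project names, scales and figures:

```python
# Hypothetical sketch of the data behind a risk/effort bubble chart.
# The FCA's actual scoring model isn't described in the interview;
# the 1-5 scales and project figures below are invented for illustration.

def risk_score(likelihood: int, impact: int) -> int:
    """Combine 1-5 likelihood and impact ratings into a single score."""
    return likelihood * impact

# Each project: (name, likelihood 1-5, impact 1-5, planned test days)
projects = [
    ("Data capture portal", 4, 5, 60),
    ("Internal reporting tweak", 2, 2, 5),
    ("Firm submission gateway", 5, 5, 90),
]

# Plotting risk score on one axis with effort as bubble size makes it
# easy to spot projects whose testing effort is out of line with risk.
for name, likelihood, impact, effort in projects:
    print(f"{name}: risk={risk_score(likelihood, impact)}, effort={effort} days")
```

A chart built from this data lets stakeholders see at a glance whether similar risk profiles are receiving similar levels of testing effort.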
How did the various stakeholders perceive testing?
One of my first priorities was to identify our stakeholders, and learn what they wanted from us. By mapping out our stakeholders it became apparent that there were numerous groups that the test team didn’t fully appreciate had a stake in the efforts of the test group.
It was important for us to understand our stakeholders and what they felt testing needed to do. Overall, our stakeholders were happy with the services we provided; however, there was an opportunity to address concerns over the value of the test group, and scope of our services.
Stakeholders all have different needs and, generally speaking, testers need to provide information of value to those specific stakeholder groups so they can form an informed opinion on whether the testing conducted on their behalf represents good value for money. We have addressed this by using, amongst other things, control charts to demonstrate the level of protection against operational failure that the test group provides.
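The interview doesn’t detail the control charts used, but a common form tracks defects escaping to production per release against three-sigma limits, flagging releases that fall outside the expected range. A minimal sketch, with invented figures:

```python
# Minimal control-chart calculation for defect escapes per release.
# The FCA's actual charts aren't described; the data is invented.
from statistics import mean, pstdev

def control_limits(samples):
    """Return (lower, centre, upper) three-sigma control limits."""
    centre = mean(samples)
    sigma = pstdev(samples)
    return max(0.0, centre - 3 * sigma), centre, centre + 3 * sigma

# Defects that escaped to production in the last eight releases.
escapes = [2, 3, 1, 2, 4, 2, 3, 2]

low, centre, high = control_limits(escapes)
for release, count in enumerate(escapes, start=1):
    flag = " <- investigate" if count > high or count < low else ""
    print(f"release {release}: {count} escapes{flag}")
```

A stable chart like this gives stakeholders objective evidence that the testing service is keeping operational failures within predictable bounds.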
If you don’t engage with stakeholders with information, including in most instances some data, which supports your narrative of how you help them achieve their goals, negative perception is inevitable as their views become subjective. As I believe Deming said, “Without data, you are just another person with an opinion”.
What other changes have you seen at the FCA?
We have transformed the way we gain confidence that our testing services are helping project teams to deliver great working software. We had a governance model which was checklist based, which largely focussed on ensuring that test documentation was created and signed off at requisite times for stage gate signoff. This was important, but it did not ensure that the right testing was happening at the right time, and it did not provide confidence in the quality of testing. Now that we carry out consultative engagement, our approach has shifted to one of test assurance, where we can explore what testing is taking place and why.
We changed the terms of reference of the test assurance model so that it creates a supportive environment where test managers can come for advice and direction, and the members of the test assurance board can be engaged on any items that require escalation so we can aid delivery.
We have also included some of our stakeholders in the test assurance board in order to improve visibility of our test approaches; in particular including our operations team in the model has improved the flow of software changes into the production environment.
We are also now focussing on defect prevention, and we are calling out the cost of rework to help build a culture of quality. We are mining the data coming out of testing to identify areas for improving the way we build software, and running these improvement projects using Lean and Six Sigma tools to demonstrate the value of these improvements.
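A common Lean/Six Sigma starting point for mining defect data is a Pareto count of root causes: fixing the one or two most frequent causes prevents the most rework. A hypothetical sketch, with invented cause labels and counts:

```python
# Hypothetical Pareto analysis of defect root causes, a typical
# Lean/Six Sigma starting point for defect prevention. The cause
# labels and counts are invented for illustration.
from collections import Counter

defect_root_causes = [
    "ambiguous requirement", "environment config", "ambiguous requirement",
    "coding error", "ambiguous requirement", "environment config",
    "coding error", "ambiguous requirement",
]

# Rank causes by frequency; the top entries are where prevention
# effort eliminates the most rework.
for cause, count in Counter(defect_root_causes).most_common():
    print(f"{cause}: {count}")
```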
Finally, as we deliver more software using continuous delivery models and agile engineering practices, we are also implementing a strong strategy to improve test automation and non‑functional testing, which we see as key to unlocking problems in different areas, such as CI and integration testing. I have a great relationship with our application development manager, and we are continuously looking at how we can distribute testing across the development and testing organisation, and employ effective methods to build high quality working software.
The FCA is a regulator and held accountable to the Treasury and to Parliament. How does this affect testing strategy?
We are funded entirely by the firms that we regulate. The FCA is an independent body and we do not receive any funding from the government. As a regulator I am very aware that we need to be providing value for money. A core element of my role is to ensure we’re carrying this ethos of value for money and adopting working practices that enable us to save money, or proactively avoid unnecessary cost.
The test group keep a record of where we save money, including tracking the costs of suppliers, direct hires, tooling arrangements and other operational costs. In 2015, the testing department won an internal award for our cost savings efforts, which was great recognition for my team.
What does the future of testing look like at FCA?
There are a number of initiatives that excite me. We have recently selected Cognizant as our principal supplier for application development and testing services, and this creates an opportunity to ensure effective collaboration in how we build great software. This move supports our ability to deliver software using the continuous delivery model, and it makes sense to review how to distribute testing across the delivery teams and refresh the tools we use to support delivery.
We’ve built a testing community, which allows for the exchange of ideas and information internally. We also bring in external speakers for mini conferences, to help develop and stimulate our team. I am going to continue to build links between our test group and the broader testing community, something we started last year, through attending conferences, meet‑ups and forums.
One last thing, how do you see the test management role evolving?
My message to testers everywhere is to evolve your thinking, and to consider the legacy of testing. The legacy of testing is information. There is a vast amount of information stored inside testing management tools and inside support teams, which can be used to improve the way software is built. If you can provide data, analytics and insight to assist improving software engineering, this can help to reduce rework. Ultimately, rework is waste.
Minimising rework to reduce the operational cost of building software is a conversation which testers can own and can start with their peers in an organisation, and importantly with their managers. Detecting defects is valuable, but it comes with the cost associated with poor quality. If testers can communicate and demonstrate knowledge of how to create a better product with less rework, then perceptions of our role will rapidly change.