The future of software testing
Based in California, Curtail is a software company that safeguards customers’ finances and user experience. Chief technical officer Robert Ross tells us what software testing and DevOps mean to him and why continuous change is vital to the industry.
Tell us about your company and what you do.
RR: Curtail is changing how IT is implemented for government agencies, financial institutions, service providers and enterprise organisations that are developing and launching new software and services, particularly in DevOps environments. By accelerating development while stopping vulnerabilities before they occur, Curtail keeps systems protected and running in the face of unplanned downtime, attacks or software failure.
What does working for a company like this mean for you?
RR: Creating new technologies that can improve software quality and the development experience is an exciting challenge that can ultimately improve many lives in a variety of ways. And we’re addressing security concerns that can cause significant problems for organisations – that’s no small thing.
Why is software testing important to you?
RR: Increasingly our world is controlled by software. From social interactions to financial management and self-driving cars, networked software continues to become a more significant factor in all of our lives. Making the development process more efficient while improving the ultimate quality and security of software is beneficial for everyone.
Can you tell us more about your new, alternative approach to software testing and reducing exposure to bugs?
RR: Curtail’s method simultaneously runs live user traffic against a current software version and the proposed upgrade, enabling developer teams to quickly uncover differences, bugs and other defects in quality. These bugs and defects can cause outages and security issues that can significantly impact an organisation. Our comparison-based network traffic analysis tool, ReGrade, offers unique insights into how real network behaviours differ when systems process identical inputs sampled from production traffic. Catching software flaws earlier with ReGrade prevents costly rollbacks that occur when a new software package fails in production.
What is it that makes this approach stand out?
RR: Software teams need a way to identify potential bugs and security concerns prior to release, with speed and precision and without the need to roll back or stage. Until now, however, DevOps teams haven’t been able to do side-by-side testing. By simultaneously running live user traffic against the current software version and the proposed upgrade, users see only the results generated by the current production software, unaffected by any flaws in the proposed upgrade, while administrators can see how the old and new configurations respond to actual usage. This allows teams to keep costs down while ensuring both quality and security and the ability to meet delivery deadlines, which ultimately helps boost return on investment.
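ReGrade is a commercial product and its internals aren’t published, but the core comparison idea described above can be sketched in a few lines: feed identical sampled inputs to both the production version and the proposed upgrade, and diff the outputs. Everything here (the `handle_v1`/`handle_v2` functions and the request shape) is hypothetical, purely for illustration.

```python
# Illustrative sketch of comparison-based testing -- NOT ReGrade's actual
# implementation. Two versions of a hypothetical request handler receive
# identical inputs sampled from production traffic; any divergence in
# output is flagged for the developer team.

def handle_v1(request: dict) -> dict:
    """Current production version of the handler."""
    return {"status": 200, "total": sum(request["items"])}

def handle_v2(request: dict) -> dict:
    """Proposed upgrade, which introduces a subtle truncation bug."""
    return {"status": 200, "total": int(sum(request["items"]))}

def compare_versions(requests):
    """Replay each sampled request through both versions; collect differences."""
    diffs = []
    for i, req in enumerate(requests):
        old, new = handle_v1(req), handle_v2(req)
        if old != new:
            diffs.append({"request": i, "production": old, "upgrade": new})
    return diffs

# Requests sampled from live traffic (hypothetical data).
sampled = [{"items": [1, 2, 3]}, {"items": [0.5, 0.25]}]

for diff in compare_versions(sampled):
    print(diff)  # flags the truncation difference on the fractional request
```

In a real deployment the two versions would be live systems receiving mirrored network traffic, and the comparison would cover content, metadata, behaviour and timing rather than a simple dictionary equality check, but the principle is the same: identical inputs, diffed outputs.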
The new technique that you use aims to overcome the problems that testers currently experience. How does this work?
RR: The DevOps model has dramatically changed how testing and development cycles work. To remain competitive, software developers must continually release new application features. In an environment of continuous integration and continuous delivery, developers make changes to the code that trigger a pipeline of automated tests. This is dangerous, since automated tests are looking for specific issues, but they can’t know everything that could possibly go wrong. This shortened timeframe means that bugs and problems are sometimes pushed through without the testing required, potentially leading to network downtime and frustrated end users.
By testing in production to evaluate release versions side by side, software can still be rapidly iterated, but the risk to the software development lifecycle will be reduced. Teams are able to release high-quality, secure products that don’t require expensive rollbacks or staging.
How did you discover and build on this method?
RR: The core ideas underlying Curtail’s technology came from considering ways to gain security benefits from distributed systems and consensus-related techniques.
Why is changing software practices so vital today?
RR: The standard model of QA testing is not effective in light of today’s increasingly frequent and fast development cycles. As the information economy rapidly matures within a software and services delivery model, the cost of unplanned downtime can take a significant toll on profits, revenue and end-user satisfaction.
According to Gartner, the average cost of network downtime is around $5,600 per minute, which translates to more than $300,000 per hour. In addition, the cost to fix an error after a software release is four to five times higher than if it were uncovered during the design phase, and it can lead to even costlier development delays. Beyond these costs, service downtime can have tremendous impacts on end users. For example, in the case of the recent Cloudflare outage, one bad software update brought down thousands of sites worldwide.
What other benefits does your system offer?
RR: Curtail’s network traffic analysis solution, ReGrade, helps customers release software updates faster and with higher quality, enabling software and DevOps teams to:
- Verify quality of software upgrades and patches using real production traffic, thereby bridging the gap between continuous integration and continuous delivery
- Prevent costly rollbacks and cumbersome staging
- Regression test in development, quality assurance and production using breakthrough network traffic comparison analysis
- Compare open source alternatives to achieve the best application performance and security
- Quickly report on differences in software and web services, including content, metadata, and application behaviour and performance
What does this approach mean for the future of software testing?
RR: The current way of doing things isn’t sufficient, and this will only become more of a problem as DevOps cycles continue to be expedited. Business culture needs to change in tandem with the way software testing is done; that’s where we really see Curtail’s solutions as an enabler.
As organisations move to the cloud and look to microservices environments, they also need new tools to monitor them.
What does DevOps mean to you and your firm?
RR: Historically, development and operations have been in separate silos organisationally and have had very different (if not directly conflicting) goals. DevOps is bringing these very culturally different parts of organisations together and working to adjust processes to better achieve the goals of both development and operations teams.
Earlier this year, your company raised $3.25 million in Series Seed funding. How has that money helped so far, and what does it mean for the firm?
RR: The funding is being used to build the company, particularly the engineering, sales and marketing teams and initiatives.
Securing funding like this is a huge step for any business. How have you managed to become so successful in what you do?
RR: This is not our first startup company. Combining the experience of past companies with the passion for developing a previously unavailable networking capability has led to the success we’ve seen so far. But we always remain humble, always.
What do you think the future of software testing and cloud computing holds in general?
RR: Cloud computing is headed towards solving increasingly subtle problems, such as the weakness of software monocultures. The problems of hardware redundancy have been solved, but service failures still occur. Now, these failures are increasingly caused by flaws and misconfigurations in software. Rapid and simple deployment of alternate software stacks is increasingly within reach and can significantly improve the ability to respond to threats.
Software testing is increasingly dependent on tools and automation. Manually running tests continues to be replaced with automated tests that can be integrated with approaches such as CI/CD. As software testing continues to evolve, this automation will become increasingly powerful and new tools to grant visibility into application behaviour (and misbehaviour) will emerge.
Grace Palin Barnott