Technical glitches in the IowaRecorder App saw the start of the American election cycle descend into scenes of counting chaos.
Iowa was the first contest in a string of nationwide state-by-state votes, known as primaries and caucuses, that culminates in the selection of a Democratic nominee in July. The mobile app’s failure to correctly tally the votes at the 2020 Iowa Caucus caused an unprecedented time delay to the political process.
But what actually happened?
We spoke to expert Bill Curtis, SVP and Chief Scientist at CAST, about the Iowa Caucus app glitch, to find out what caused it to fail, the reactions from the American public and software community, and how it all could have been avoided.
What was the main issue with the Iowa Caucus app glitch?
The IowaRecorder App fell victim to bad software development practices.
Software developers were rushed to deliver critical software without time for proper testing and with little intelligence on its structural integrity.
The result was a glitch in the app’s data transmission function that left the caucus results in purgatory. Election officials reported that the app did not always transmit complete results, and some officials lost connectivity during transmission.
Ultimately, only a quarter of tallied votes were transmitted electronically.
What has been the response to the issue in the American software industry?
There were early concerns about the app’s performance before the event. The software was reportedly developed in two months. This is a notoriously short time for developing critical communication apps with strict cybersecurity requirements. Two months left insufficient time to test the code functionality, let alone thoroughly evaluate its structural integrity before it went live.
How has the American public at large reacted to the situation?
Unsurprisingly, this glitch reduced the American public’s confidence in digitising the 2020 Presidential Elections relative to using traditional pencil-and-paper ballots.
As more voting processes are digitised, election officials need to reassure the voting public their electronic voting capabilities are secure and efficient by ensuring the software is trustworthy.
How do you think this situation could have been avoided?
The Iowa Caucus glitch could have been avoided by allowing software developers more time to build the app and test its functionality, as well as by conducting a deep analysis of the structural quality of the software system to ensure it was resilient and secure. It is this type of software intelligence that can raise confidence in the system.
Think of it as running it through an ‘MRI for software’ to get an accurate view of its condition before the critical go-live date. The Cybersecurity and Infrastructure Security Agency of the Department of Homeland Security uses software intelligence and other capabilities and offers free vetting of election technology.
With more intelligence on the correctness and engineering of the software, developers would have obtained an objective and accurate view of the application’s security, reliability and efficiency before it went live.
What is your personal opinion about the app’s failure to deliver results?
The entire affair is a case study in how not to deliver software. The app’s failure to deliver complete election results is no surprise given the rushed development process, which lacked the discipline required to deliver trustworthy and secure software.
These lax practices cannot be tolerated, whether in election technology, financial systems or any other business or life-critical software-intensive systems. Flying blind without sufficient intelligence on the integrity of the software is no longer an option.
Do you think digital voting apps or online voting will ever become the standard?
Software systems have become the brains of modern society and businesses, so digital voting apps and online voting will eventually become the standard.
However, election officials need to approach the adoption of these systems with caution. They need to ensure election system vendors have a disciplined development process that guarantees the resulting product will be trustworthy and secure.
They should review the vendor’s test and analysis results, as well as conduct their own acceptance tests under simulated live conditions to ensure the system can be operated on election day with confidence.
What’s your day-to-day role at CAST as SVP and Chief Scientist like, and what do you do for the company?
As Chief Scientist I head CAST Research Labs and conduct empirical research on software measurement, software intelligence, and issues affecting software productivity and quality.
I conduct executive briefings and guide customers in building software measurement programs for productivity and quality.
I’m also heavily involved in developing international standards for measuring the security, reliability, and other qualities of software systems through the Consortium for Information and Software Quality (CISQ) and head the American delegation participating in developing the ISO/IEC 25000 series of software quality standards.
How did you first enter the software industry?
I started by writing statistical analysis programs in graduate school. I’ve worked in the software industry for 40+ years, initially leading research on the effectiveness of programming practices and software metrics at General Electric Space Division.
Over the years I’ve implemented numerous productivity, quality, measurement, and process improvement programs. I led development of the Capability Maturity Model (CMM) and People CMM at the Software Engineering Institute at Carnegie Mellon University.
I then co-founded a software process improvement consulting company called TeraQuest, which was later acquired by Borland.
What are the most valuable lessons you’ve learnt from your time working with different developers and programs in the software industry?
Software defects are inevitable, and we are now in the era of nine-digit glitches — defects whose damages run over $100M.
The only defense against untrustworthy systems is to have a rigorously disciplined development process with an in-depth, fact-based analysis of software code to develop confidence the code can be trusted. If you have not developed enough intelligence on the system, you put the public at risk, and they can have no more trust in you than you have in the software.
Do you have any advice for engineers who are currently working on election voting apps or software?
Software engineers who are currently developing software-intensive election systems need to demand they be allowed to implement a disciplined development process to ensure high-quality, secure, and efficient code.
Software engineers can take advantage of automatically gathered software intelligence about the integrity of their software system to detect structural weaknesses that would otherwise go unseen. This is required to prevent debacles such as the one at the Iowa Caucus, where the infamous hanging chads were replaced by hanging apps.