During the 2020 Iowa Democratic caucuses, a voting app built by D.C.-based tech company Shadow Inc. was used to help transmit results. Unfortunately, the app failed at exactly that job — its inability to report data consistently and securely set off a major debacle, including a partial recanvass, a flurry of op-eds, and a national debate over the role technology should play in the voting process.
In the shadow of this disaster (no pun intended), our society finds itself asking the same questions it always asks when disruptive technology goes horribly, publicly wrong: How could this happen? Didn’t they test it? We heard a similar refrain last year when Samsung’s Galaxy Fold…well, folded. It reinforces the age-old anti-technologist chorus that emerging tech is unreliable and unstable.
What went wrong?
Why did the app fail right out of the gate? All kinds of experts have weighed in, but the general consensus is that it simply wasn’t tested enough.
“This app has never been used in any real election or tested at a statewide scale and it’s only been contemplated for use for two months now,” said David Jefferson, a computer scientist and board member of Verified Voting, in an interview with the New York Times.
According to the Wall Street Journal, many election technology providers adopt a strategy of developing and releasing multiple projects or upgrades, then fixing their issues on the fly. This accelerated pace doesn’t leave much room for adequate testing.
If that sounds familiar, it’s because it happens all the time. The expected speed of development today makes production schedules very tight, and any delay in the product lifecycle eats away at the time reserved for testing. Many companies are pushing automation as the antidote to the manual-testing time crunch, but automated testing still has gaps — it currently covers only about 25% of the test cases needed to ensure a product’s stability.
With development teams battling time and resource constraints, as many as 3 out of 4 professionals feel that the quality of their releases could be better. They’re crossing their fingers as large areas of their products ship without being thoroughly tested. The Iowa caucus voting app is the worst-case scenario of an all-too-common conundrum facing today’s software development teams.
What could they have done?
Naysayers will say what they like; despite this app’s lousy debut, plans to use election app technology are already in motion. But how do we prevent something like this from happening again, especially when so many development teams face the same constraints that exacerbated this fiasco?
Dogfooding
Testing early and often throughout the development process is essential to producing a secure and well-functioning app, especially within today’s hefty time constraints. Dogfooding — meaning testing your app within your company — is a great way to start assessing software performance. It’s the first line of defense against quality and usability issues that pop up outside of the lab.
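To make the idea concrete, dogfood builds are often gated so that employees exercise pre-release features before anyone outside the company does. The sketch below is purely illustrative — the company domain, the helper names, and the build labels are assumptions, not details of any real election app.

```python
# Hypothetical sketch of routing internal users to a dogfood build.
# The domain check and build labels are illustrative assumptions only.

INTERNAL_DOMAIN = "example-elections.com"  # assumed company domain


def is_dogfood_user(email: str) -> bool:
    """Treat anyone on the company's own domain as a dogfood tester."""
    return email.strip().lower().endswith("@" + INTERNAL_DOMAIN)


def select_build(email: str) -> str:
    """Serve employees the pre-release build; everyone else gets stable."""
    return "pre-release" if is_dogfood_user(email) else "stable"


if __name__ == "__main__":
    print(select_build("qa.lead@example-elections.com"))  # -> pre-release
    print(select_build("precinct.captain@gmail.com"))     # -> stable
```

The point isn’t the routing logic itself — it’s that the people closest to the product hit its rough edges first, long before a precinct chair does.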
Alpha Testing
Usability only matters if your product is reliable. The need for reliability grows tenfold when it comes to election technology. Alpha testing looks specifically at quality concerns; it relies on a group of technical users to work through your product and root out showstopping issues that could, for example, prevent precinct captains from signing in or cause your app to crash once it hits a certain threshold of users.
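One way alpha testers probe that kind of threshold is with a concurrency check against the reporting backend. The sketch below uses a toy in-memory stand-in (the `ResultsService` class and precinct count are assumptions for illustration, not the caucus app’s actual API) to show the shape of such a test: flood the service with simultaneous submissions and verify nothing is dropped.

```python
# Hypothetical alpha-test sketch: hammer an in-memory results service with
# concurrent submissions and verify no reports are lost under load.
# ResultsService is a stand-in, not a real election backend.

from concurrent.futures import ThreadPoolExecutor
import threading


class ResultsService:
    """Toy stand-in for a results-reporting backend."""

    def __init__(self):
        self._lock = threading.Lock()
        self._reports = {}

    def submit(self, precinct_id: int, tally: int) -> None:
        with self._lock:
            self._reports[precinct_id] = tally

    def count(self) -> int:
        return len(self._reports)


def test_handles_many_simultaneous_precincts():
    service = ResultsService()
    precincts = 1700  # illustrative threshold, roughly the scale of Iowa's precincts

    with ThreadPoolExecutor(max_workers=64) as pool:
        for pid in range(precincts):
            pool.submit(service.submit, pid, tally=42)

    assert service.count() == precincts, "reports were lost under load"


if __name__ == "__main__":
    test_handles_many_simultaneous_precincts()
    print("alpha load check passed")
```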
Beta and Delta Testing
One of the biggest challenges of building technology for elections is accounting for the wide variety of people that technology has to serve. The precinct leaders using the app, for example, had vastly different levels of technical ability; some simply didn’t have the know-how to install a tech product, much less troubleshoot issues. This happens all the time with new technology.
Beta and delta testing allow development teams to secure rapid, ongoing customer feedback through short, focused tests that are more compatible with constant iteration and continuous delivery. By testing small sections of the app across large market segments, it’s easier to identify rough spots and quickly prioritize fixes that will make the most overall impact.
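One lightweight way to run those short, focused tests is to split the beta population into deterministic cohorts, each concentrating on one area of the app per cycle. The feature areas and hashing scheme below are illustrative assumptions, just to show how such an assignment might stay stable across builds.

```python
# Hypothetical sketch of splitting a beta population into focused cohorts,
# so each short test cycle covers one area of the app. The feature areas
# and hashing scheme are illustrative assumptions.

import hashlib

FEATURE_AREAS = ["sign-in", "results-entry", "photo-upload", "submission"]


def assign_cohort(tester_id: str) -> str:
    """Deterministically map a tester to one feature area to focus on."""
    digest = hashlib.sha256(tester_id.encode("utf-8")).hexdigest()
    return FEATURE_AREAS[int(digest, 16) % len(FEATURE_AREAS)]


if __name__ == "__main__":
    for tester in ["precinct-042", "precinct-113", "precinct-800"]:
        print(tester, "->", assign_cohort(tester))
```

Because the assignment is deterministic, each tester keeps seeing the same slice of the product across iterations, which makes their feedback easier to compare from one release to the next.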
While stories like the caucus app debacle always fuel debate over the power and usefulness of emerging technology, they also serve to push best practices into the limelight. Hopefully, this example will encourage more thorough and rigorous testing for both the government agencies commissioning and the tech companies building apps for use in future elections.
For more best practices on efficient testing, check out the troubleshooting tips in the Customer Validation Visual Workshop.