By Mike Weber, VP of Labs, Coalfire

Many days have passed since American citizens came out in droves to cast their votes. While most races are decided, others are still counting ballots as analysts, politicians, and the public reflect on, and in some cases question, the validity of the results.

This election cycle, hacking has been less in the spotlight than machine malfunction and purported fraud. What is abundantly clear is that people have little confidence in our voting process. There is a legitimate basis for this concern: we have no centralized governance across states, no rigorous oversight, and a standard that only addresses machine security (sort of), not the end-to-end process.

When people are concerned about voting security, they tend to focus on the machines and whether they are hackable. That is a legitimate concern, given that recent penetration tests found that the U.S. Election Assistance Commission’s Voluntary Voting System Guidelines 1.1 (VVSG 1.1) standard for machines leaves gaps that enable hacks. However, machines are only one part of a much larger voting insecurity problem, and the only formal mechanism we have to address and standardize security, VVSG, doesn’t even touch the rest of the process.

To illustrate the problem, consider ‘found’ votes: ballots that fall outside the official chain of custody. In previous elections, we have seen stories of ballots found in warehouses or transported to a polling center in private vehicles. At issue is the lack of a solid process to ensure ballots remain under official oversight and chain of custody. VVSG does not address this, nor does any other centralized, standardized mandate. Many states have adopted their own approaches to closing this gap, but they vary widely.

Several districts are still counting votes. In the case of Broward County, there was little transparency about the number of ballots left to be counted, leading to a lack of trust; transparency would be reassuring to voters. There are many other issues in play in the voting process, such as the security of voter registration systems and networks, the physical security of machines in storage, and the training of staff against social engineering techniques, none of which have standardized, mandatory testing requirements.

Again, this is an area where states often cook up their own recipes to fill the gap.

With respect to machine security, VVSG 1.1 is a good overall security foundation, but it doesn’t close all the gaps that would allow a hacker to compromise a voting system. It falls short in its lack of specificity on how requirements should be met, in the parameters that would assure they are effectively implemented, and in the absence of comprehensive, end-to-end testing requirements to assure the controls actually provide security and work as intended.

For example, VVSG mandates that a strong password standard must be applied, but does not specify password storage or transmission protocols: “If the voting system uses a username and password authentication method, the voting system shall allow the administrator to enforce password strength, histories, and expiration.”
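To make the gap concrete, here is a minimal, purely illustrative sketch (not drawn from VVSG or any vendor’s implementation) of the kind of password storage discipline the standard never mandates. A system could satisfy the quoted strength requirement while storing passwords in plaintext; a safer design stores only a salted, iterated hash, for example via PBKDF2:

```python
import hashlib
import hmac
import os

# Illustrative only: VVSG 1.1 governs password *strength*, not *storage*.
# Storing a salted PBKDF2 hash instead of the password itself is one way
# a manufacturer could close that unspecified gap.

def store_password(password: str, iterations: int = 200_000):
    """Derive a (salt, key, iterations) record to store in place of the password."""
    salt = os.urandom(16)  # unique random salt per credential
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, key, iterations

def verify_password(password: str, salt: bytes, key: bytes, iterations: int) -> bool:
    """Re-derive the key and compare in constant time to resist timing attacks."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, key)
```

A standard that specified storage and transmission at this level of detail would leave far less room for a technically “compliant” but insecure deployment.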

It further states: “Manufacturers shall use multiple forms of protective software as needed to provide capabilities for the full range of products used by the voting system.” In spirit, the guidelines are on point, but they are open to interpretation such that a manufacturer’s deployment could meet the standard while leaving gaps for an attacker. This lack of specificity runs throughout the standard, and while each gap may seem minor, attackers are highly skilled at finding minor gaps, then other minor gaps, and chaining them together like stepping stones until they can wage a successful attack. The next version of the guidelines, VVSG 2.0, is in progress and already shows significant improvements over 1.1; however, a review of a recent draft shows that a number of attack vectors remain unaddressed.

For all the reasons outlined above, mandating end-to-end penetration testing is an essential, missing element of the VVSG standard. The standard does require auditing of implementations and testing by an accredited Voting System Testing Laboratory (of which there are only two nationwide) against isolated elements of security, including access control, data interception, and disruption. But it does not require independent, third-party penetration testing, which simulates a malicious attacker approaching the challenge exactly as they would in a real attack: by looking for vulnerabilities across the entire environment. VVSG 2.0 has yet to determine its testing criteria, and it will be difficult to establish criteria rigorous enough to test the controls it lays out.

Even if a manufacturer were to check every box in VVSG 1.1, an attacker could still wage an attack, as Coalfire demonstrated in under two minutes against a voting machine. This is detailed in “Securing the Vote,” a pen testing report that shows vulnerabilities in the machines, the end-to-end process, and the standard itself, and provides recommendations for improving the process.

FedRAMP is a good example of a standard that has been time-tested and improved over many years. It requires technical testing, and the results of that testing become valuable input to future updates of the standard. If voting, a vital critical-infrastructure function for our nation, is to rely on a standard, then either the standard should incorporate a required, open-ended penetration testing component, or more stringent federal requirements for end-to-end technical security testing, independent of the standard, should be considered. Better oversight, governance, and testing are also required across the end-to-end process to properly secure our vote.

Mike Weber is Vice President of Labs at Coalfire, a cybersecurity advisor that provides independent and tailored advice, assessments, technical testing and cyber engineering services to private and public-sector organizations.