Introduction to New and Expanded Material

The VVSG contains considerable new material and material expanded from previous versions of the voting standards.  This section provides an introduction to and overview of the major features of the VVSG:

  • organization of the VVSG, requirements structure, and classes;
  • usability performance benchmarks;
  • expanded human factors coverage;
  • Software Independence, Independent Voter-Verifiable Records (IVVR) voting systems, and the Innovation Class;
  • open-ended vulnerability testing and expanded security coverage;
  • treatment of COTS in voting system testing;
  • end-to-end testing for accuracy and reliability;
  • new metric for voting system reliability;
  • expanded core requirements coverage.

 

The VVSG Structure

The VVSG structure is markedly different from the structure of previous versions.  First and foremost, the VVSG should be considered a foundation for voting system requirements: a foundation that provides precision, that reduces ambiguity and duplication of requirements, and that provides for change, i.e., the addition of new requirements for new types of voting devices or voting variations.

It was necessary to focus on providing this robust foundation for several reasons.  First, previous versions suffered from ambiguity, which resulted in a less-robust testing effort.  In essence, it has been more difficult to test voting systems when the requirements themselves are subject to multiple interpretations.  This new version should go a long way towards reducing that ambiguity.

Secondly, there are simply more types of voting devices than were anticipated by previous versions, and new devices will continue to be marketed as time goes by.  This proliferation of new devices requires a strong organizational foundation so that existing devices can be unambiguously described and the development of new devices can proceed in an orderly, structured fashion.

 

Organization of the VVSG

The VVSG has been reorganized to bring it in line with applicable standards practices of ISO, W3C, and other standards-creating organizations.  It contains three volumes, or "Parts," for different types of requirements:

  • Part 1, Equipment Requirements, provides guidelines for manufacturers to produce voting systems that are secure, accurate, reliable, usable, accessible, and fit for their intended use.  In Part 1, requirements from VVSG 2005 that were ambiguous have been clarified; in those cases where no precise replacement could be determined and no testing value could be ascribed, requirements have been deleted.
  • Part 2, Documentation Requirements, is a new section containing documentation requirements separate from functional and performance requirements applying to the voting equipment itself.  It contains requirements applying to the Technical Data Package, the Voting Equipment User Documentation, the Test Plan, the Test Report, the Public Information Package, and the data for voting software repositories.
  • Part 3, Testing, contains requirements that apply to the national certification testing to be conducted by non-governmental certified testing laboratories. It has been reorganized to focus on test methods and avoid repetition of requirements from the product standard. Although different testing specialties are likely to be subcontracted to different laboratories, the prime contractor must report to the certifying authority on the conformity of the system as a whole.

The requirements in these Parts rely on the definition and strict usage of certain terms, included in Appendix A, Definition of Words with Special Meaning in the VVSG.  This covers terminology for standardization purposes that must be sufficiently precise and formal to avoid ambiguity in the interpretation and testing of the standard.  Terms are defined to mean exactly what is intended in the requirements of the standard, no more and no less.  Note: Readers may already be familiar with definitions for many of the words in this section, but the definitions here may differ in small or significant ways from local usage because the words are used in special ways in the VVSG.

The VVSG also contains a table of requirement summaries, to be used as a quick reference for locating specific requirements within sections/subsections.  Appendix B contains references and end notes.

 

Voting System and Device Classes

Voting system and device classes are new to the VVSG.  Classes in essence form profiles of voting systems and devices.  They are used as fields in requirements and indicate what the requirement applies to.  For example, Figure 1 shows the high-level device class called vote-capture device.  Various requirements apply to the vote-capture device class; this means that all vote-capture devices must satisfy these requirements (e.g., for security, usability, etc.).

There are also requirements that apply more specifically to, say, the IVVR vote-capture device class and the explicit devices beneath it, such as VVPAT.  These devices inherit the requirements that apply to vote-capture device, that is, they must satisfy all the general vote-capture device requirements as well as the more specific requirements that apply.  In this way, new types of specific vote-capture devices can be added in the future; they must satisfy the general requirements that all vote-capture devices are expected to satisfy, but at the same time they can satisfy specific requirements that apply only to the new device.  This structure makes it unambiguously clear to vendors and test labs which requirements apply to ALL vote-capture devices, for example, as opposed to which requirements apply specifically to just VVPAT systems.  It also allows for the addition or modification of device requirements without impacting the rest of the standard.

 

Figure 1: Voting device class hierarchy
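
The inheritance relationship among device classes can be thought of much like class inheritance in a programming language: a more specific class must satisfy everything its parent class requires, plus its own additions.  The minimal sketch below models this idea in Python; the class names mirror the hierarchy of Figure 1, but the requirement strings are hypothetical and the VVSG itself defines no such code.

    # Hypothetical model of requirement inheritance across device classes.
    # Requirement names are illustrative, not taken from the VVSG.
    class VoteCaptureDevice:
        """Requirements that apply to ALL vote-capture devices."""
        requirements = ["general security", "general usability"]

    class IVVRVoteCaptureDevice(VoteCaptureDevice):
        """Adds requirements specific to IVVR vote-capture devices."""
        requirements = VoteCaptureDevice.requirements + ["independent voter-verifiable record"]

    class VVPAT(IVVRVoteCaptureDevice):
        """Adds requirements specific to VVPAT devices."""
        requirements = IVVRVoteCaptureDevice.requirements + ["paper audit trail"]

    # A VVPAT must satisfy its own requirements plus everything inherited:
    print(VVPAT.requirements)
    # ['general security', 'general usability',
    #  'independent voter-verifiable record', 'paper audit trail']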

 

Requirements Structure

Requirements are now very specific to either a type of voting variation or a type of voting device (as stated in the previous section, the voting device can be a general profile of voting devices or a more specific voting device).  They contain expanded text and more precise language to make explicit what exactly is required and what type of testing the test lab is to use to determine whether the requirement is satisfied.  Where possible, the requirement also contains a reference to versions of the requirement in previous standards (e.g., VVSG 2005 or the 2002 VSS) so as to show its genesis and to better convey its purpose.

 

Strict Terminology

The terminology used in the VVSG has been considered carefully and is used strictly and consistently.  In this way, requirements language can be made even more clear and unambiguous.  Hypertext links are used throughout the VVSG for definitions of terminology so as to reinforce the importance of understanding and using the terminology in the same way.  However, it is important to understand that the terminology used in the VVSG is specific to the VVSG.  An effort has been made to make sure that the terms used in the VVSG mean essentially the same thing as in other contexts; however, at times the definitions in the VVSG may differ in small or significant ways.

Figure 2 illustrates the relationships and interaction between requirements, device classes, and types of testing from Part 3, all in the framework of strictly used terminology.

 

Figure 2: Interaction between requirements, definitions, and parts of the VVSG

 

Usability Performance Requirements

Usability is conventionally defined as "the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use" ([ISO98a] ISO 9241-11:1998, Ergonomic requirements for office work with visual display terminals (VDTs) -- Part 11: Guidance on usability).  In VVSG 2005, the usability guidelines relied on three assessment methods:

  1. checking for the presence of certain design features which are believed to support usability, and for the absence of harmful design features,
  2. checking for the presence of certain functional capabilities which are believed to support usability, and
  3. requiring vendors to perform summative usability testing with certain classes of subjects and to report the results; however, these reporting requirements did not specify the details of how the test is designed and conducted.

While all these help to promote usability, they are all somewhat indirect methods.  The actual "effectiveness, efficiency and satisfaction" of voting systems are never evaluated directly.

This version of the VVSG uses a new method based on summative usability testing that directly addresses usability itself: how well do users achieve their goals? The features of this new method include:

  • the definition of a standard testing protocol, including a test ballot, set of tasks to be performed, and demographic characteristics of the test participants.  The protocol supports the test procedure as a repeatable controlled experiment,
  • the use of a substantial number of human subjects attempting to perform those typical voting tasks on the systems being tested, in order to achieve statistically significant results,
  • the gathering of detailed data on the subjects' task performance, including data on accuracy, speed, and confidence,
  • the precise definition of the usability metrics to be derived from the experimental data,
  • the definition of effectiveness benchmarks against which systems will be evaluated.

Obviously, the implementation of such complex tests is more difficult than simply checking design features.  However, performance-based testing using human subjects yields the most meaningful measurement of usability because it is based on their interaction with the system's voter interface, whereas design guidelines, while useful, cannot be relied upon to discover all the potential problems that may arise.  The inclusion of requirements for performance testing in these guidelines advances the goal of providing the voter with a voting system that is accurate, efficient, and easy to use.

Table 1 shows all five benchmarks; the actual values are included in Part 1 Section 3.2.1.  Please see that section for full details.  A sketch of how these benchmarks might be computed from raw test data follows the table.

 

Table 1: Usability performance benchmarks

Benchmark | Used to Pass/Fail | Description
Total Completion Score | Yes | The proportion of users who successfully cast a ballot.
Voter Inclusion Index | Yes | A measure of voting accuracy and variance, based on the mean accuracy per voter and the associated standard deviation.
Perfect Ballot Index | Yes | The ratio of the number of cast ballots containing no erroneous votes to the number containing one or more errors.
Average Voting Session Time | No | The mean time taken per voter to complete the process of activating, filling out, and casting the ballot.
Average Voter Confidence | No | The mean confidence level expressed by the voters that the system successfully recorded their votes.
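
As an informal illustration of how these benchmarks relate to raw test data, the sketch below computes the Total Completion Score and Perfect Ballot Index from hypothetical per-voter records, along with the mean and standard deviation of per-voter accuracy on which the Voter Inclusion Index is based.  The record fields and data are invented for illustration; the exact VII formula and the benchmark values are those given in Part 1 Section 3.2.1.

    from statistics import mean, stdev

    # Hypothetical per-voter test records: whether the voter managed to
    # cast a ballot, and the fraction of ballot choices recorded correctly.
    voters = [
        {"cast": True,  "accuracy": 1.00},
        {"cast": True,  "accuracy": 0.96},
        {"cast": False, "accuracy": 0.00},
        {"cast": True,  "accuracy": 1.00},
    ]

    # Total Completion Score: proportion of users who successfully cast a ballot.
    tcs = sum(v["cast"] for v in voters) / len(voters)

    # Perfect Ballot Index: ratio of cast ballots with no erroneous votes
    # to cast ballots containing one or more errors.
    cast = [v for v in voters if v["cast"]]
    perfect = sum(v["accuracy"] == 1.0 for v in cast)
    flawed = len(cast) - perfect
    pbi = perfect / flawed if flawed else float("inf")

    # Inputs to the Voter Inclusion Index: mean per-voter accuracy and its
    # standard deviation (the VII formula itself is defined in Part 1).
    acc = [v["accuracy"] for v in cast]
    acc_mean, acc_sd = mean(acc), stdev(acc)

    print(f"TCS={tcs:.2f}  PBI={pbi:.2f}  accuracy mean={acc_mean:.3f}, sd={acc_sd:.3f}")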

 

Expanded Usability and Accessibility Coverage

In addition to the usability performance benchmarks, the treatment of human factors, i.e., usability, accessibility, and privacy, has been expanded considerably.  Table 2 summarizes the new and expanded material.

 

Table 2: Expanded human factors coverage

Human Factors Topic | Description
Voter-Editable Ballot Device | The VVSG defines a new class of voting station: the Voter-Editable Ballot Device (VEBD).  These are voting systems such as DREs and EBMs that present voters with an editable ballot (as opposed to manually-marked paper ballots), allowing them to easily change their choices prior to final casting of the ballot.
Ballot Checking and Correction | Requirements for both interactive and optical-scan-based ballot checking and correction (so-called "voter's choice" issues).  There is a new requirement for detection and reporting of marginal marks.
Notification of Ballot Casting | Requirements to notify the voter whether the ballot has been cast successfully.
Plain Language | Requirements for the use of plain language when the voting system communicates with the voter.  The goal is to make the instructions for use of the system easier to understand and thus improve usability.
Icons and Language | New requirement that instructions cannot rely on icons alone; they must also include linguistic labels.
Choice of Font and Contrast | Requirements for the availability of a choice of font size and contrast on VEBDs.
Legibility | Legibility for voters with poor reading vision has been strengthened from a recommendation to a requirement.
Timing | Requirements on timing for interactive systems.  Addresses the response time of the system to the user (no undue delay) and mandates that systems issue a warning if there is lengthy user inactivity.
Alternative Languages | This entire section has been expanded and clarified.
Poll Workers | Addresses usability for poll workers as well as for voters.  Vendors are required to perform usability testing of system setup, operation, and shutdown.  System safety is also addressed.
End-to-End Accessibility | New requirement to ensure accessibility throughout the entire voting session.
Accessibility of Paper Records | Requirements address the need for accessibility when the system uses paper records as the ballot or for verification.  In particular, an audio readback mechanism is required to ensure accessibility for those with vision problems.
Color Adjustment | Consolidated and clarified material on color adjustment of the voting station.
Synchronized Audio and Video | Mandates the availability of synchronized audio and video for the accessible voting station.  The voter can choose any of three modes: audio-only, visual-only, or synchronized audio/video.
Adjustability | Clarified that when the voter can control or adjust some aspect of the voting station, the adjustment can be made throughout the voting session.

Software Independence

Software independence [Rivest06] means that an undetected error or fault in the voting system's software is not capable of causing an undetectable change in election results.  All voting systems must be software independent in order to conform to the VVSG.

There are essentially two issues behind the concept of software independence: first, it must be possible to audit voting systems to verify that ballots are being recorded correctly; second, testing software is so difficult that audits of voting system correctness cannot rely on the software itself being correct.  Therefore, voting systems must be "software independent" so that the audits do not have to trust that the software is correct; the voting system must provide other proof that the ballots have been recorded correctly, e.g., voting records produced in ways in which their accuracy does not rely on the correctness of the voting system's software.

This is a major change from previous versions of the VVSG, because previous versions permitted voting systems that are software dependent, that is, voting systems whose audits must rely on the correctness of the software, to be conformant.  One example of a software-dependent voting system is the DRE, which is non-conformant to the VVSG.

Independent Voter-Verifiable Records

There are several general types of voting systems that can satisfy the definition of software independence, but the VVSG currently contains requirements for only one: those that use voter-verifiable paper records (VVPR), such as

  • optical scanners used in conjunction with manually-marked paper ballots or with an EBP or EBM; and
  • VVPAT.

The relevant requirements, though, have been abstracted to apply to a higher-level type of voter-verifiable records called independent voter-verifiable records (IVVR), which can be audited independently of the voting system software much the same as with VVPR but do not necessarily have to be paper-based.  IVVR relies on voter-verification, that is, the voter must verify that the electronic record is being captured correctly by examining a copy that is maintained independently of the voting system's software, i.e., the independent voter-verifiable record.

NOTE: If different types of IVVR are developed that do not use paper, systems that use them can also be conformant to the VVSG "as is."  In other words, new types of IVVR that do not use paper are already "covered" by the IVVR requirements in the VVSG; new requirements do not necessarily need to be added.

Figure 3 illustrates this in a tree-like structure.  At the top of the tree is software independence; as stated previously, all voting systems that are conformant to the VVSG must be software independent.  One route to achieving software independence is to use IVVR.  The VVSG contains requirements for IVVR, of which VVPR is one (currently the only) type.  New types of IVVR voting systems, as long as they meet the current requirements in the VVSG, will also be conformant to the VVSG without needing additional requirements.

 

Figure 3: Voting systems that can conform to current requirements in the VVSG

 

The Innovation Class

Use of IVVR is currently the only method specified by requirements in the VVSG for achieving software independence.  Vendors that produce systems that do NOT use IVVR must use the innovation class as the way of proving and testing conformance to the VVSG.  The innovation class exists to ensure a path to conformance for new and innovative voting systems that meet the requirement of software independence but for which there may not be requirements in the VVSG.  Technologies in the innovation class must be different enough from other technologies permitted by the VVSG to justify their submission.  They must meet the relevant requirements of the VVSG as well as further the general goals of holding fair, accurate, transparent, secure, accessible, timely, and verifiable elections.

A review panel process, separate from the VVSG conformance process, will review innovation class submissions and make recommendations as to their eventual conformance to the VVSG.

Open-Ended Vulnerability Testing

The goal of open-ended vulnerability testing (OEVT) is to discover architecture, design, and implementation flaws in the system that may not be detected using systematic functional, reliability, and security testing and that may be exploited to change the outcome of an election, interfere with voters' ability to cast ballots or have their votes counted during an election, or compromise the secrecy of the vote.  The goal of OEVT also includes attempts to discover logic bombs, time bombs, or other Trojan horses that may have been introduced into the system hardware, firmware, or software for such purposes.  OEVT relies heavily on the experience and expertise of the OEVT team members, their knowledge of the system, its component devices, and associated vulnerabilities, and their ability to exploit those vulnerabilities.

 

Expanded Security Coverage

In addition to software independence and OEVT, the treatment of security in voting systems has been expanded considerably. There are now detailed sets of requirements for eight aspects of voting system functionality and features, as shown in Table 3.

 

Table 3: Expanded security coverage

Security Topic | Description
Cryptography | Requirements relating to the use of cryptography in voting systems, e.g., use of U.S. Government FIPS standards.
Setup Inspection | Requirements that support the inspection of a voting device to determine that: (a) software installed on the voting device can be identified and verified; (b) the contents of the voting device's registers and variables can be determined; and (c) components of the voting device (such as touch screens, batteries, power supplies, etc.) are within proper tolerances, functioning properly, and ready for use.
Software Installation | Requirements that support the authentication and integrity of voting system software using digital signatures provided by test labs, the National Software Reference Library (NSRL), and notary repositories (a sketch of one such integrity check follows this table).
Access Control | Requirements that address voting system capabilities to limit and detect access to critical voting system components in order to guard against loss of system and data integrity, availability, confidentiality, and accountability in voting systems.
System Integrity Management | Requirements that address operating system security, secure boot loading, system hardening, etc.
Communications Security | Requirements that address the integrity of transmitted information and protect the voting system from communications-based threats.
System Event Logging | Requirements that address system event logging to assist in voting device troubleshooting, recording a history of voting device activity, and detecting unauthorized or malicious activity.
Physical Security | Requirements that address the physical aspects of voting system security: locks, tamper-evident seals, etc.
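
As a simplified illustration of the software installation checks summarized above, the sketch below verifies an installed file against a known-good SHA-256 digest.  This is a minimal sketch only: in practice the reference values would come from digitally signed records distributed by a test lab or the NSRL, and the file name and digest here are placeholders.

    import hashlib

    # Hypothetical reference digests for approved voting software builds.
    REFERENCE_DIGESTS = {
        "ballot-logic.bin": "9f2c...",  # placeholder, not a real digest
    }

    def sha256_of(path: str) -> str:
        """Compute the SHA-256 digest of a file, reading in chunks."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def verify(path: str, name: str) -> bool:
        """Return True only if the file's digest matches the reference value."""
        expected = REFERENCE_DIGESTS.get(name)
        return expected is not None and sha256_of(path) == expected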

Treatment of COTS in Voting System Testing

To clarify the treatment of components that are neither manufacturer-developed nor unmodified COTS (commercial off-the-shelf software/hardware) and to allow different levels of scrutiny to be applied depending on the sensitivity of the components being reviewed, new terminology has been introduced:  application logic, border logic, configuration data, core logic, COTS, hardwired logic, and third-party logic.  Using this terminology, requirements have been scoped more precisely than they were in previous iterations of the VVSG.

The way in which COTS is tested has also changed; the manufacturer must deliver the system to test without the COTS installed, and the test lab must procure the COTS separately and integrate it.

 

End-to-End Testing for Accuracy and Reliability

The testing specified in previous versions of the VVSG for accuracy and reliability was not required to be end-to-end and could bypass significant portions of the system that would be exercised during an actual election, such as the touch-screen or keyboard interface.  A volume test is now included that is analogous to the California Volume Reliability Testing Protocol.

 

Metric for Reliability

The metric for reliability has been changed from Mean Time Between Failure (MTBF) to a failure rate based on volume that varies by device class and severity of failure. 

Reliability, accuracy, and probability of misfeed are now assessed using data collected through the course of the entire test campaign, including the volume testing.  This increases the amount of data available for assessment of conformity to these performance requirements without necessarily increasing the duration of testing.
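
For illustration, a volume-based failure rate is straightforward to compute from test-campaign tallies, as in the sketch below.  The ballot volume, severity categories, and counts are hypothetical; the actual benchmark rates, device classes, and severity definitions are specified in the VVSG itself.

    # Hypothetical volume-test tally. Benchmark failure rates vary by
    # device class and failure severity; these numbers are illustrative.
    ballots_processed = 10_000
    failures_by_severity = {"critical": 0, "noncritical": 3}

    for severity, count in failures_by_severity.items():
        rate = count / ballots_processed  # failures per ballot processed
        print(f"{severity}: {rate:.6f} failures/ballot "
              f"({rate * 1000:.2f} per 1,000 ballots)")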

 

Expanded Core Requirements Coverage

The general core requirements for voting systems have been expanded greatly.  In addition to the already noted improvements in COTS coverage, end-to-end testing for accuracy and reliability, and the new reliability metric, the topics in Table 4 have been added or expanded.

 

Table 4: Expanded core coverage in the VVSG

Core Topic | Description
EBMs | Updates to handle EBMs and early voting.
Early voting | Updates to handle EBMs and early voting.
Coding conventions | Major revisions to coding conventions.
EMC | Major revisions to EMC requirements.
QA and CM | Major revisions to quality assurance and configuration management requirements.
Humidity | New operating tests for humidity.
Logic verification | Added logic verification for core logic.
