CRT
October 26, 2006 at 11:00 a.m. EDT
Agenda:
1) Administrative updates (Allan E.)
2) Conclusion of discussion on "Voting Machines: Reliability Requirements, Metrics, and Certification" (Max E.). NOTE: A slightly revised version of Max's paper has been posted to the web page http://vote.nist.gov/TGDC/crt/index.html.
3) Discussion: What should CRT present at the December TGDC plenary? (David F., Alan G.)
4) Any other items.
Future CRT phone meetings are scheduled for November 16 and 30.
Participants: Alan Goldfine, Allan Eustis, Dan Schutzer, David Flater, Max Etschmaier, Nelson Hastings, Paul Miller, Philip Pearce, Sharon Laskowski, Steve Berger
Administrative Updates:
- Allan E.: Welcomed Philip Pearce and Paul Miller as new members of the TGDC. New members will receive an introductory package shortly. Short bios of the new members are on the web. Working on an introductory meeting for the end of November.
- Allan E.: An EAC meeting regarding the certification procedure is being held today, October 26. It is being recorded and will be available as a webcast next week. Mary Saunders of NIST's Technology Services is presenting on one of the panels. A place for comments on the certification process is available. The final version is scheduled for release around December 7.
Voting Machines: Reliability Requirements, Metrics, and Certification -- Max E.
- Last meeting Max presented his paper, "Voting Machines: Reliability…". Today's meeting will continue that discussion.
- At the next meeting Max will have a new paper entitled "Rethinking the Quality Assurance and Configuration Management Aspect of the VVSG."
- Max gave a brief overview. The work consists of two parts. The first task dealt with defining reliability requirements for voting machines: the analysis looked at the environment, the laws, the design of current machines, and what reliability means in this context, and it examined the system, its functions, and critical failures. The second task was to define a structure for the analysis; Max looked at a generic model of a voting machine and followed where the analysis led.
- Conclusions of the analysis: It is possible to build a voting machine that will experience no critical failures, and only infrequent non-critical failures, during an election cycle. A prototype should be constructed for testing. Second conclusion: verification of the machine's behaviors through statistical analysis of end-to-end testing would be inconclusive, expensive, or both (see the sketch after this item). Max defined a metric based on functional failure analysis; under the metric, the first machine examined meets the requirements, while the second does not unless certain features are changed. He then looked at how we can assure that a machine does meet the requirements. The current process (the current VVSG and certification process) shares the blame for the current difficulties; before fixing it, we need to know what an ideal process would look like.
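[An illustrative sketch, not from Max's paper, of why end-to-end statistical demonstration of reliability is expensive. Under the common exponential failure model, a failure-free demonstration test must run about 2.3 times the target MTBF to establish that target at 90% confidence; the 163-hour value is the MTBF benchmark in the 2002 VSS, included here only for scale, and any observed failure lengthens the required test further.]

    import math

    def zero_failure_test_hours(mtbf_target, confidence):
        # Exponential failure model: T failure-free machine-hours demonstrate
        # MTBF >= mtbf_target at the given confidence when
        # exp(-T / mtbf_target) <= 1 - confidence,
        # i.e. T >= -mtbf_target * ln(1 - confidence).
        return -mtbf_target * math.log(1.0 - confidence)

    for mtbf in (163, 1000, 5000):
        hours = zero_failure_test_hours(mtbf, 0.90)
        print(f"MTBF >= {mtbf:>4} h at 90% confidence needs {hours:7.0f} failure-free machine-hours")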
- First, we delineate the responsibilities and authority associated with the voting machine and its certification. Second, we should not freeze the technology. The vendor knows more about the product than the regulator, so the vendor should do all the analysis and certify that the machine meets all requirements, and the regulator should verify that this certification is accurate. Volume testing should be done as suitability testing for every use. Testing will also be done through an ongoing monitoring system.
- The definition of the process is different from today's. It is a generic process that does not identify any particular institutions.
- We need to figure out how this plays into the other aspects of the NIST voting work and discuss the implications with the entire team.
- DISCUSSION:
- David F: Are there actual conflicts, or are the papers merely oblivious to each other? Are you concerned about the accuracy paper? Max feels his paper may be in conflict with it. David sees Max's idea as a validation of testing and sees no conflict.
- Max sees the accuracy testing being applied to the components of the machine instead of the overall machine.
- End-to-end testing, for reliability as well as for accuracy, will not give you the results you're looking for.
- There are testability issues when breaking the system down into individual components. Mechanical reliability could look at the mean time between failures of individual parts under stress (a sketch of how such figures combine follows this item), but for accuracy, some components that affect the end-to-end accuracy of the voting system may be untestable. Which components? When looking at a complete system, unless you're willing to tear it apart, most of the behaviors (including that of the optical sensor) are unobservable in the complete system. [Max: this is why you need this certification process.]
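[An illustrative sketch, not from the meeting, of how component-level MTBF figures combine. If component failures are independent and exponentially distributed, a series system fails when any part fails, so component failure rates (1/MTBF) add. The component names and MTBF values below are hypothetical.]

    def system_mtbf(component_mtbfs):
        # Series system with independent exponential failures:
        # failure rates add, so the system MTBF is the harmonic combination.
        return 1.0 / sum(1.0 / m for m in component_mtbfs)

    # Hypothetical component MTBFs in hours, for illustration only.
    parts = {"printer": 2000, "optical sensor": 8000, "touch screen": 5000, "main board": 20000}
    print(f"system MTBF ~ {system_mtbf(parts.values()):.0f} h")  # ~1143 h, dominated by the weakest parts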
- The purpose of the test is not to give you complete confidence; it is to give you confidence in the analysis that was done.
- Allan asks that TGDC members read Max's conclusions and provide feedback.
- Steve Berger: A lot of us are expecting feedback after next week's election, and then we'll want to think through whether our standards protect where we need them to. If we see a pattern in the problems, how do you see that folding into the proposed test regimen? [Max: There are basic flaws in voting systems. The voting machine is an integral part of the system; the system has no boundaries, and there is no control over election management people. It is a very complex system that requires system analysis. Max looked at the whole thing and distilled a voting machine that resembles the current machine, but one that is totally isolated and clearly delineated, with a fixed boundary that cannot be violated.] If there are problems that after analysis appear to be weaknesses in the equipment, we should ask whether either the current standard or the changes we're working on would have prevented those flaws from being fielded in future systems.
- Max feels vendors are unfairly blamed for machine failures when it is the systems that are failing.
- Paul Miller: Agrees with the comment about vendor blame. A lot of the problems are procedural. He is concerned that the proposed testing would work like a recall effort for machines after they are in the field.
- There is a requirement in the EAC certification paper that says there has to be a process to de-certify systems as well as to certify them. Everyone should read it to see if this is an equitable process.
- What deliverables are we expecting on Max's proposal for revising the whole process? For reliability, there are two more deliverables: a more detailed, concrete examination of the metrics in Max's formulation, including a detailed examination of the components of failures; and draft requirements for the upcoming draft of the VVSG that would implement this strategy in terms of specific requirements on vendors and testing authorities. We need revisions throughout the standard to accommodate this proposal. [Max will examine his ideal system against today's real election system, try to design how it would fit into the current system or what kinds of changes would be necessary to accommodate his design, and figure out how to map the certifying agency onto current agencies.]
- John W sent Max's paper to STS for comments. It is not certain that STS and the TGDC will want to go Max's route, so we need an alternative plan. [Max: The reliability analysis is not affected by future activities. Without clearly delineated relationships the certification process will not be as good, but the metric can still be applied.]
- Alan G: If the process is shot down by the TGDC, the fallback position would have to be something along the lines of what is currently there, but with a significantly larger mean time between failures (see the sketch after this item).
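[For scale, an illustrative calculation of what "significantly larger" implies; the 15-hour election day and the 99.9% target are assumptions, not figures from the meeting. With an exponential failure model, the probability that a machine gets through an election day without failure is exp(-day_hours/MTBF); the 163-hour MTBF benchmark of the 2002 VSS yields about 91%, i.e. roughly one machine in eleven failing during the day.]

    import math

    def survival_probability(mtbf_hours, day_hours=15.0):
        # Exponential model: probability of zero failures over day_hours.
        return math.exp(-day_hours / mtbf_hours)

    print(f"MTBF 163 h: {survival_probability(163):.1%} of machines finish the day")  # ~91.2%
    # MTBF needed so that 99.9% of machines survive a 15-hour day:
    print(f"MTBF for 99.9% survival: {15.0 / -math.log(0.999):,.0f} h")  # ~15,000 h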
- Steve B: Would it prevent the problems we are currently seeing, and how implementable is it within the distributed certification system we have?
- We need more data flow analysis.
- Software is an integral part of the functionality in the process that Max has described.
- CRT will be presenting this work at the December 4 & 5 TGDC meeting. It remains to be seen whether it will be accompanied by any resolution. We will present this paper and the next one, on Quality Assurance and Configuration Management.
- This working document needs to be on the web by mid-November for TGDC review. Max should develop a three-page summary for the upcoming discussions. If we want the go-ahead from the TGDC, we need something focused that can be easily read and discussed.
Discussion: What should CRT present at the December TGDC plenary?
- Voting Machines: Reliability Requirements, Metrics, and Certification - Max E. [We do not want to present anything that conflicts with this.]
- Accuracy Benchmark, Metrics and Test Methods - David F. [It might be too technical and might be in conflict with Max's paper; we need a shorter summary to present. Steve B feels that even if there is a conflict, we should discuss it. Max, David, and Alan G will get together to determine what the conflicts are, if any.]
- Discussion paper on COTS
- Discussion paper on testing for VVSG voting system requirements (the responsibility of the test labs and how it is scoped in the VVSG)
- Volume Reliability Testing Protocol as part of the federal certification process
- Discussion paper on Coding Conventions and Logic Verification
- Discussion paper on Marginal Marks and Optical Scan Systems
The next meetings are November 16 and November 30.