The Face Recognition Vendor Test (FRVT) 2006 was the latest in a series of large-scale independent evaluations of face recognition systems. Previous evaluations in the series were FERET, FRVT 2000, and FRVT 2002. The primary goal of FRVT 2006 was to measure the progress of prototype systems/algorithms and commercial face recognition systems since FRVT 2002. FRVT 2006 evaluated performance on:
To guarantee an accurate assessment, FRVT 2006 measured performance with sequestered data (data not previously seen by the researchers or developers). A standard dataset and test methodology were employed so that all participants were evaluated on an equal footing. The government provided both the test data and the test environment to participants. The test environment was called the Biometric Experimentation Environment (BEE). The BEE was the FRVT 2006 infrastructure.
It allowed experimenters to focus on their experiments by simplifying test data management, experiment configuration, and the processing of results. FRVT 2006 was sponsored by multiple U.S. Government agencies and was conducted and managed by the National Institute of Standards and Technology (NIST).
One of the goals of FRVT 2006 was to independently determine whether the objectives of the Face Recognition Grand Challenge (FRGC) were achieved. The FRGC was a separate algorithm development project designed to promote and advance face recognition technology in support of existing face recognition efforts in the U.S. Government. One of the objectives of the FRGC was to develop face recognition algorithms capable of performance an order of magnitude better than that measured in FRVT 2002. The FRGC was conducted from May 2004 through March 2006. FRGC data is still available to face recognition researchers. To obtain FRGC data, potential participants must sign the required licenses and follow the FRGC data release rules. To request an FRGC data set, please follow the directions found on the "FRGC Webpage."
The FRVT 2006 large-scale results are available in the combined FRVT 2006 and ICE 2006 Large-Scale Results evaluation report. Algorithms were received from 22 organizations in 10 different countries, with many organizations submitting multiple algorithms. However, only those that successfully completed the large-scale tests are documented in the report. The following organizations submitted algorithms to be evaluated: