The preferred method of communication with ICE management is this FAQ page.
Answers to questions regarding ICE will be posted on this website. This policy ensures that all potential participants have equal access to information concerning ICE.
If you have a question that has not already been answered in this section, please send your complete question to the ICE Liaison at ice [at] nist.gov.
Who can participate in ICE?
A: ICE will evaluate iris recognition technology from industry, academia and research institutions. ICE is open to both commercial products and research prototypes.
Are institutes outside the U.S. allowed to participate in the ICE?
A: Yes. Iris recognition technology researchers and developers from companies, research institutions, and academia, both inside and outside the U.S., are eligible to participate.
Can you please provide more information on the type of evaluation tests that will be conducted during ICE 2006 evaluation process?
A: ICE 2006 is a technology evaluation of iris matchers and image quality metrics. A database of iris images has been collected and will be partitioned for use in ICE 2005 and ICE 2006. Challenge problems were created for ICE 2005 activities. Data for ICE 2006 has been sequestered. To receive ICE announcements, including information on the ICE 2005 challenge problems, please follow the instructions under the "ICE Eligibility and Participation" section on the ICE home page.
Will the ICE accept Linux executables?
A: Yes, Linux executables will be evaluated. See ICE 2006 Protocol and Executable Calling Signatures documents on the ICE 2006 webpage.
If I registered to receive ICE 2006 announcements, am I automatically registered to participate in the ICE 2006 evaluation?
A: No. There will be a separate registration process for actual participation in the ICE 2006 evaluation. Potential participants will need to sign an application and submit it with their executable(s) in order to be an official ICE 2006 participant.
Do the organizations that have already registered to receive ICE 2005 announcements have to re-register to receive ICE 2006 announcements?
A: No. If you registered to receive ICE 2005 announcements, we automatically transferred your registration for announcements to ICE 2006. You will, however, still need to follow the instructions for participating in the ICE 2006 evaluation.
Who is authorized to sign the data and software (BEE) licenses?
A: The appropriate person to execute the license is generally a lawyer in the organization's legal office. The key attribute of the person is that they be authorized to bind the organization legally. For example, faculty members at universities are generally not authorized to legally bind the university to a contract. Students and post-docs are definitely not able to do that. Likewise, most companies have legal offices/representatives who would perform the function.
When will the ICE 2006 take place?
A: The ICE 2006 evaluation began on 15 June 2006. The evaluation report is in the review process and is due to be released in March 2007. When it is released, it will be posted on this website.
Why is the ICE 2006 being conducted?
A: The US Government needs unbiased and impartial research and development progress assessments and test and evaluation of iris recognition algorithms. These assessments and evaluations help the Government determine if these algorithms/systems can meet current and future requirements of Government agencies. Since this is the first evaluation of this type on iris recognition algorithms, the ICE 2006 will also establish a baseline on which to measure future progress assessments.
What is the goal of the ICE 2006?
A: The goal of the ICE 2006 is to measure state-of-the-art iris recognition algorithms and establish a baseline on which to measure future evaluations.
When the ICE 2006 gets underway on 15 June 2006, where will the participant's test and evaluation efforts be located - at NIST in Washington, DC or at the participant's facilities?
A: ICE 2006 will be an executable test. Participants will deliver their executables to NIST for testing. Since ICE 2006 is an executable test, participants will not be present when the test is administered.
Can a participant's results remain anonymous?
A: No. All similarity matrices and performance scores submitted to NIST become the property of NIST. NIST does not conduct anonymous challenge problems and evaluations. NIST will, at its discretion, report attributed performance. If initial reports do not contain labeled or completely labeled results, this does not imply a participant's performance will not be labeled in future reports.
Can participants supply their own computer to take ICE 2006?
A: No. ICE 2006 participants have to provide executables that run on the ICE 2006 computers at NIST.
What are the specifications for the computers that will run the executables?
A: The computers are Dell PowerEdge 850 servers, each with a single Intel Pentium 4 processor 660 at 3.6GHz, with a 2MB cache and an 800MHz front-side bus. All systems have 4GB of DDR2 RAM at 533MHz.
What operating systems are supported by ICE 2006?
A: Windows Server 2003 Service Pack 1 (SP1) and Fedora 3.0 will be supported. Windows Server 2003 was chosen because Windows XP is not supported by Dell on PowerEdge 850 servers. In the Face Recognition Vendor Test (FRVT) 2006, which uses the same test infrastructure as ICE 2006, executables have been successfully run on both Windows Server 2003 and Fedora 3.0.
Does "one executable" mean a single file (i.e. no supporting data files or .dll files)?
A: Windows Server 2003 and Fedora 3.0 will be supported. Windows Server 2003 was chosen because Windows XP is not supported by Dell on PowerEdge 850 servers. An executable includes all supporting libraries and data files. Please see the ICE 2006 Executable Calling Signatures document for instructions on correctly placing the libraries and data files in the supporting directory structure.
Are Java class files acceptable?
A: Windows Server 2003 SP1 and Fedora 3.0 will be supported. All executables must be able to be called from the command line. Java is not installed on machines in the Face and Iris Test Lab.
By Matlab executable, does that mean a set of one or more M-files, or does that mean an executable obtained from the Matlab compiler? If M-files are acceptable, can mex-file dlls be included also?
A: Windows Server 2003 SP1 and Fedora 3.0 will be supported. All executables must be able to be called from the command line. All supporting Matlab libraries must be included with your submission. Matlab is not installed on machines in the Face and Iris Test Lab.
What's the expected installation procedure?
A: Windows Server 2003 SP1 and Fedora 3.0 will be supported. Executables must be submitted on CD or DVD. Two types of submittals are acceptable: either your submittal must include an installation program that performs all file extraction and configuration, or your submittal must be installable by directly copying the contents of your CD (or DVD) to the corresponding directory on the target platform. In the latter case, installation should not require any additional configuration steps.

Installation method: You must provide an installation program (or script) if setup and configuration of your submittal requires steps other than simple copying of files. In this case, each executable must have its own installation program (or script). The top-level directories on the CD (or DVD) must correspond to the names of the executables. Inside each top-level directory, there should be a clearly named install file (e.g., install.bat, install.exe, install.sh). Upon execution, this install file should copy all necessary files (executables, libraries, parameter files) to the appropriate directory on the target platform and perform any necessary configuration. Other than execution of the install file, no additional setup should be necessary to run the executables on the target system. Your install program must either be executable directly from your CD (or DVD) (i.e., the install program copies all necessary files to a user-supplied directory) or be executed after the entire contents of the CD (or DVD) have been copied to the top-level directory (i.e., home). There should be a readme.txt file in each directory that specifies whether the install program should be run directly from the CD (or DVD) or whether the files should be copied to the home directory prior to running the install program.

Copy-directory method: In this method, participants must place all files necessary to execute their submittal in directory structures that mimic the directory structure in the ICE 2006 Executable Calling Signature document. Specifically, the top-level directories on the CD (or DVD) must correspond to the names of the executables. There should be a /bin directory and a /lib directory directly beneath each top-level directory. The contents of the /bin and /lib directories will be copied to the corresponding directories on the target platform. No additional setup should be necessary to run the executables on the target system. There should also be a readme.txt in each directory that describes the contents of the directory. An example layout is sketched below.
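As an illustration of the copy-directory method, a submittal for a single (hypothetically named) executable might be laid out on the CD (or DVD) as follows; the file names are examples only:

    /MyIrisMatcher              <- top-level directory named after the executable
        readme.txt              <- states that the directory contents are copied directly
        /bin
            MyIrisMatcher       <- the executable and any configuration files it needs
        /lib
            libmymatcher.so     <- supporting libraries and data files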
How does a participant sign up for multiple algorithms per task?
A: Send an e-mail to the ICE 2006 liaison at ice2006 [at] nist.gov requesting to submit multiple algorithms per task.
Will there be a test-run for submitted executables before the start of the actual test?
A: There will be a conformance test for submitted executables before the start of the actual test. The items the conformance test will check include (but are not limited to): 1) outputs are in the correct format, 2) the correct output was produced, and 3) the independence rules are followed. For each executable submitted, participants will be asked to submit the executable's output on a specified set of biometric signatures. The ICE 2006 test team will then verify that the same answer is produced by the executable installed in the ICE 2006 test facility.
Will multiple executables run at a time on a single computer?
A: Only a single executable performing a single experimental trial will run at a time on a computer.
What edition of the Windows 2003 system will be used?
A: Windows Server 2003 Service Pack 1 will be used.
Your Windows server was specified with 4GBytes of memory. How much of this will be available to the test application? The default setting for 32-bit Windows is to assign a maximum of 2GBytes of memory to a single application at a given time. This is extendable to 3GBytes but requires a configuration change.
A: The ICE 2006 test system will be configured to allow 3GBytes of memory to be assigned to a single process. Participants who plan to use the 3GBytes of memory will need to compile their code to take advantage of this extra memory.
What libraries will be provided on the ICE 2006 test system? Will standard libraries such as xalanc and xercesc be provided?
A: No libraries will be provided. All libraries that participants require will either need to be provided separately in the participant directory structure or incorporated into the executable.
Key information about experiments in ICE 2006 is contained in xml description files. Do participants need to write routines to parse the description files?
A: Part of the Biometric Experimentation Environment (BEE) is a C++ class called XPathXMLParser.cpp. This class should make it easy for a participant to write a class or function to extract the necessary information from the XML description files. I have posted an example of using XPathXMLParser.cpp to read an XML parameter file on the FRGC/ICE bbs. The BEE distribution also includes a Java version of XPathXMLParser.cpp.
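For orientation only, below is a minimal sketch of extracting one value from an XML description file using the Xerces-C DOM API directly, rather than the BEE XPathXMLParser class; the element name "logfile" is hypothetical, and participants would still need to ship Xerces with their submission (see FAQ #50). Consult the posted XPathXMLParser.cpp example for the authoritative approach.

    #include <xercesc/util/PlatformUtils.hpp>
    #include <xercesc/parsers/XercesDOMParser.hpp>
    #include <xercesc/dom/DOM.hpp>
    #include <xercesc/util/XMLString.hpp>
    #include <iostream>
    using namespace xercesc;

    int main(int argc, char** argv) {
        XMLPlatformUtils::Initialize();
        {
            XercesDOMParser parser;
            parser.parse(argv[1]);                         // path to the XML description file
            DOMDocument* doc = parser.getDocument();
            if (doc) {
                XMLCh* tag = XMLString::transcode("logfile");  // hypothetical element name
                DOMNodeList* nodes = doc->getElementsByTagName(tag);
                if (nodes->getLength() > 0) {
                    char* value = XMLString::transcode(nodes->item(0)->getTextContent());
                    std::cout << "log file: " << value << std::endl;
                    XMLString::release(&value);
                }
                XMLString::release(&tag);
            }
        }
        XMLPlatformUtils::Terminate();
        return 0;
    }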
Are there any fees associated with the ICE program?
A: NIST does not charge a fee to participate in the ICE 2006.
How will image quality results be reported?
A: Results will be reported using the Image Quality Receiver Operator Characteristic (IQROC) and Linear, Generalized Linear, and Generalized Linear Mixed Model frameworks. Information on the IQROC can be found in the ICE announcement section of the bbs.bee-biometrics.org website, in the program manager's presentation at the first Iris Challenge Evaluation (ICE) workshop. For information on applying Linear, Generalized Linear, and Generalized Linear Mixed Models to face recognition, see the following two papers: "How features of the human face affect recognition: a statistical comparison of three face recognition algorithms," G. Givens, J. R. Beveridge, B. A. Draper, P. Grother, and P. J. Phillips, in Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), Volume 2, 2004, pages II-381 to II-388; and "Repeated Measures GLMM Estimation of Subject-Related and False Positive Threshold Effects on Human Face Verification Performance," G. H. Givens, J. R. Beveridge, B. A. Draper, and P. J. Phillips, Workshop on Empirical Evaluation Methods in Computer Vision, in conjunction with CVPR 2005. Copies of these two papers are posted on the FRGC/ICE bbs under FRGC v2 Announcements.
How much disk space may we use in the "temp" directory?
A: Participants will be given access to a 75Gbyte partition for the directory structure described in the ICE 2006 protocol. The directory structure includes a "temp" directory.
In reference to the similarity file header, you indicate that certain strings are to be terminated with 'eol'. However, the encoding convention for 'eol' is platform dependent: on Unix systems, it is a single character, '\n' (0x0A); in Windows text files or text streams, it is represented by two sequential characters, '\r' '\n' (0x0D 0x0A). Can you specify the exact character(s) to be used in similarity files as eol-separators?
A: The header in the similarity file is read as text and the similarity scores are read as binary. Since the header information is read as text, the ICE scoring code can handle both Unix and Windows text files. I have posted the C++ classes for reading and writing similarity files on bbs.bee-biometrics.org, along with an example of using the classes to write a similarity file [posted Monday 9 January 2006]. The code is in the post "Btoolsmatrix code examples for FRVT 2006" under FRGC v2 Announcements [11 January 2006].
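For illustration only, a minimal sketch of the text-header-plus-binary-payload layout is shown below. It opens the file in binary mode so that '\n' is written as a single byte on both platforms, then writes the scores as raw 4-byte floats. The header lines here are placeholders; the authoritative format is given by the BTools classes posted on the bbs.

    #include <cstdio>
    #include <vector>

    // Sketch: text header (each line ending in '\n'), followed by the
    // similarity scores as raw 4-byte floats in row-major order.
    void writeSimilarityFile(const char* path, const std::vector<float>& scores,
                             int rows, int cols) {
        FILE* f = std::fopen(path, "wb");   // "wb": no CR/LF translation on Windows
        std::fprintf(f, "placeholder-header-line-1\n");
        std::fprintf(f, "placeholder-header-line-2\n");
        std::fprintf(f, "%d %d\n", rows, cols);
        if (!scores.empty())
            std::fwrite(&scores[0], sizeof(float), scores.size(), f);
        std::fclose(f);
    }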
Is the evaluation solely qualitative, i.e., independent of the scale and possible non-linearity of a participant's distance function or similarity score? Is any part of the evaluation dependent on a threshold value specified by a participant?
A: Performance in ICE 2006 will be measured independently of the scale and possible non-linearity of a participant's distance function or similarity score. Where appropriate, performance will be reported on a Receiver Operating Characteristic (ROC) and Cumulative Match Characteristic (CMC). For further details, participants can use the FRVT 2002 reports as an example. No part of the evaluation is dependent on a threshold value specified by a participant.
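As background on how such threshold-independent reporting works, the sketch below computes one ROC operating point from genuine (same-iris) and impostor similarity scores; the evaluation sweeps the threshold internally, so participants never supply one. This is a general illustration, not the ICE scoring code.

    #include <vector>

    // One ROC operating point: verification rate over genuine scores and
    // false accept rate over impostor scores at a given threshold.
    void rocPoint(const std::vector<float>& genuine, const std::vector<float>& impostor,
                  float threshold, double& verificationRate, double& falseAcceptRate) {
        std::size_t g = 0, i = 0;
        for (std::size_t k = 0; k < genuine.size(); ++k)  if (genuine[k]  >= threshold) ++g;
        for (std::size_t k = 0; k < impostor.size(); ++k) if (impostor[k] >= threshold) ++i;
        verificationRate = genuine.empty()  ? 0.0 : static_cast<double>(g) / genuine.size();
        falseAcceptRate  = impostor.empty() ? 0.0 : static_cast<double>(i) / impostor.size();
    }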
What is the difference between a biometric sample and a biometric signature?
A: The terms biometric sample and biometric signature are synonymous. In an ICE 2006 experiment, a biometric sample in a gallery can contain multiple images.
Are the terms "Parameter File" and "Experiment Description File" synonymous?
A: The terms "Parameter File" and "Experiment Description File" are synonymous.
Is it possible to put the dll files in the bin directory?
A: Yes.
What is the format for the log file?
A: There is not a prescribed format for the log file. The log file is to aid in diagnosing problems in an executable. In deciding what information to write to the log file, participants should avoid making the log file too large. If too much information is reported, it may not be possible for FRVT 2006 personnel to search the log file to provide you with useful information; in addition, writing the file takes time and the file could take up too much disk space. Logical points to write execution information are: 1) after generating a template, and 2) after completing all the matches between a query biometric sample and all the target biometric samples. Participants can choose to write the information to either standard out or the specified log file.
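A minimal sketch of this kind of sparse logging is given below; the function name and messages are hypothetical, and the log-file name is the one supplied via the experiment parameter file.

    #include <fstream>
    #include <string>

    // Append one short progress line to the log file named in the parameter file.
    void logProgress(const std::string& logFileName, const std::string& message) {
        std::ofstream log(logFileName.c_str(), std::ios::app);
        log << message << '\n';
    }

    // Typical call sites, matching the two logical points above:
    //   logProgress(logFile, "generated template for presentation 242311");
    //   logProgress(logFile, "finished matching query 242311 against all targets");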
Can templates generated from a previous experiment be placed in the tmp directory and used in subsequent experiments? Will previously generated templates be erased?
A: Templates can be stored in the tmp directory (or a tmp directory structure that an executable creates). Depending on the set of experiments being conducted, previously generated templates may be deleted.
Is the Linux OS Fedora 3 the 64-bit or 32-bit version?
A: The 32-bit version will be used.
If we decide to store templates on disk, we'd associate each template with the image file name from which it was generated. Can we assume that image filenames (excluding sub-directory names) are unique, in the sense that recurring filenames always refer to the same image?
A: Yes.
Will all the similarity matrices be symmetric, in that the target and query sets are the same?
A: No, the target and query sets will not be the same and therefore the similarity matrices will not be symmetric.
For sig-sets, is it allowed to return all XML structures as complex, despite the fact that the structure may be de facto simple?
A: Yes.
Are we allowed to put some configuration files in the bin directory (e.g., *.ini)?
A: Yes.
As mentioned in the documentation, all required outputs (similarity matrix, log file) generated should be written to the output directory. Is it necessary for me to add the relative path (..\output\) before the output file names myself? Or are the output file names given in a format of "relative path + filename"?
A: Use the file name provided in the calling signature. Do not modify the filename or path.
How will the conformance testing procedure work?
A: When we receive your executables, we subject them to a conformance test. Part of the conformance test will be to ensure that the independence rules are followed. If a group's executables fail an independence test, the group will be informed. Groups will be sent the resultant similarity matrices from the conformance tests along with the sig-sets. The images in the conformance test will be from the ICE 2005 data set.
In the "Executable Calling Signature for ICE 2006" document, Figure D gives an example of a Signature Set with two iris images per complex-biometric-signature. It associates at least four different iris images with one tiff file (one left and possibly two right iris images for subject 557, and the left iris images for subjects 558 and 559). I would guess that this is the result of a text-editing error, but I am not sure how extensive that error is. Perhaps subject 557 has two iris images in 242311.tiff, but the rest of the appearances of 242311.tiff (and presentation name 242311) are mistakes? Or, perhaps, there is actually a unique mapping between a particular tiff file and a particular iris image, so that the file 242311.tiff should have appeared only once in Figure D?
A: There were a number of typos in Figure D. They have been corrected, and a new version of the "Executable Calling Signature for ICE 2006" has been posted. There is a unique mapping between a presentation name and a particular tiff image. The file 242311.tiff and presentation name 242311 should have appeared only once.
Should we expect that multiple iris images should be read from a single tiff file? Or, will all input images for ICE 2006 be single image tiff files as they were for ICE 2005?
A: There will be only one iris image per tiff file.
When we are performing a comparison between biometric samples with two iris images per sample, should we treat the problem as if each two-iris sample were equivalent to two consecutive one-iris samples? Or, as may be more consistent with the "Important Note" on page 11 of the "Calling Signature" document ("The dimensions of the similarity matrix corresponds to the number of signature elements in the target and query sigsets not the number of presentations."), will each element of the similarity matrix be a single value distilled from the 4 comparisons of query_id_left and query_id_right against target_id_left and target_id_right? If so, what guidance are we given concerning the task of making this 4-way comparison?
A: Each element will be a single value distilled from the 4 comparisons of query_id_left and query_id_right and target_id_left and target_id_right. If the target set consists of n pairs of irises and the query set consists of m pairs of irises, then the output of the experiment will be an m by n similarity matrix. We do not provide any guidance on making the four-way comparison between the two sets of iris images.
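Since the protocol leaves the fusion rule to the participant, the sketch below shows one hypothetical illustration only: taking the maximum of the four left/right comparison scores. Any other fusion rule is equally admissible.

    #include <algorithm>

    // One possible (not prescribed) way to distill the four left/right
    // comparisons into a single similarity value: take the maximum.
    float fuseTwoIrisScores(float ql_tl, float ql_tr, float qr_tl, float qr_tr) {
        return std::max(std::max(ql_tl, ql_tr), std::max(qr_tl, qr_tr));
    }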
Is it true that experiments will never specify a target signature set that consists of SIMPLE biometric signatures and a query signature set that consists of COMPLEX biometric signatures (or vice-versa)?
A: Yes.
The discussion of the Parameter Files in the "Executable Calling Signature for ICE 2006" notes that the precise content of the Parameter Files has not been determined. However, an example of a Parameter File is given in Figure B. Is the format shown in Figure B the final format?
A: The Parameter File format, as defined in Figure B of the "Executable Calling Signature for ICE 2006," is the final format. No additional parameters will be added.
I realize that the document "Iris Challenge Evaluation (ICE) 2006 Protocol" also states that "The one task being evaluated in ICE 2006 is 1-1 Matching," but can you confirm that the type attribute will have a value of "1-1" for *all* of the experiments that will be run for ICE 2006?
A: The type attribute for all experiments in ICE 2006 will be "1-1".
Can you let us know what libraries will be installed on the linux boxes that will run ICE 2006 experiments? Will the 3rd party and other libraries that formed part of the BEE distribution be installed on the linux boxes and be available for participants' executables?
A: No libraries, including 3rd party and other libraries that form part of the BEE distribution, will be provided. All libraries that participants require will either need to be provided separately in the participant directory structure or incorporated into the executable.
As I understand it, Windows Server 2003 has the .NET Framework 1.1 installed by default. Can you confirm that it has not been removed on the test server?
A: The .NET Framework has not been removed. However, we cannot guarantee the version that is on any machine in the Face and Iris Test Laboratory. If the version is critical, it is recommended that the appropriate libraries be included with your submission.
Can you post an example of a simple (single iris) sig-set?
A: Below are the links to download the ICE 2005 sig-sets reformatted for ICE 2006. The difference between the ICE 2005 and ICE 2006 sig-sets is the modality value: in ICE 2005 the modality attribute was "IRIS"; in ICE 2006, the modality attribute is either "IRIS-RIGHT" or "IRIS-LEFT".
Can you post an example of a complex (two iris) sig-set?
A: Below is a link to a sample complex sig-set. The irises in the complex signatures are from ICE 2005. The left and right irises are from the same person.
Will the execution time of the submitted executables be measured in ICE 2006?
A: Yes. If potential participants are concerned about the time-versus-accuracy trade-off, multiple executables with varying execution times may be submitted.
Can you please provide an example for parsing two iris sigsets?
A: The latest sigset parser can be downloaded from the BEE electronic bulletin board at bbs.bee-biometrics.org/phpBB2/viewtopic.php?t=146. This is post "Latest SigSet parser for FRVT 2006" posted under FRGC v2 Announcements. An example of using the BTools sigset parser for multi-still 2D biometric samples is post "Quality score format for FRVT 2006" posted under FRGC v2 Announcements at bbs.bee-biometrics.org/phpBB2/viewtopic.php?t=145.
Does the Btools package contain the tools for parsing the experiment description and parameter files?
A: A parser for the experiment description file can be built using the class in XPathXMLParser.cpp. The source code for XPathXMLParser.cpp and an example is given in the post "Parsing parameter files with XPathXMLParser.cpp" posted in ICE Announcements at bbs.bee-biometrics.org/phpBB2/viewtopic.php?t=132.
When was the latest version of BEE released?
A: The latest version of BEE was distributed with ICE 2005 (ICE v1).
Have you posted additional relevant code on bbs.bee-biometrics.org?
A: Please check the posts under FRGC v2 Announcements and ICE Announcements. The post "Btoolsmatrix code examples for FRVT 2006" under FRGC v2 Announcements is an example of using Btools to read and write similarity and mask matrices.
Referencing the document "ICE_2006_Executable_Calling_Signature_for_ICE_v1.pdf," page 4, attribute "capture_device," with values "LG2200" and "TBD" (To Be Defined): could we have the full list of devices? In other words, could we have more details about the devices used for ICE 2006, such as vendor/model, or image size, resolution, contrast, noise, and lighting spectrum? Could we have a few samples?
A: All images in ICE 2006 will be captured with an LG2200.
As mentioned in the Executable Calling Signature for ICE 2006, required outputs are "similarity matrices," "signature sets," "DUPLICATE LISTS" and "logfile." What are the "DUPLICATE LISTS"?
A: The task requiring a duplicate list is not part of ICE 2006. The reference to duplicate list can be ignored. The "Executable Calling Signature for ICE 2006" has been updated and the reference to duplicate list has been removed.
Can you tell us what the minimum and maximum number of biometric samples will be in the Target and Query Sigsets?
A: The minimum number of biometric samples in a target or query sigset will be 1. The maximum number of biometric samples in a target or query sigset will be 50,000.
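For rough planning purposes only (an estimate, not a statement of the actual experiment sizes): at the 50,000-sample maximum, a full similarity matrix of 4-byte floats occupies about 50,000 x 50,000 x 4 bytes, or roughly 9.3 GBytes. That fits within the 75GByte partition described above, but it is far larger than the 3GByte per-process memory limit, so scores are better written out incrementally than held entirely in memory.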
Could our executable directory contain a binary executable and a shell-script that invokes the binary, such that the executable is instantiated from /algorithm_name by typing:
A: Yes.
Can the shell script include a definition of LD_LIBRARY_PATH? Could the shell-script consist of the following single line?
A: Yes.
Is it LD_LIBRARY_PATH or ld_library_path that should be set for Linux systems?
A: LD_LIBRARY_PATH should be set.
What is the shell on the Linux system?
A: The bash shell is being used.
If we have read the ICE 2006 documentation correctly, LogFile name is the only parameter in the parameter file that can actually vary during the ICE 2006 tests. Is this correct?
A: Yes.
Will the system libraries and libraries needed by C++ and C (which are not part of the BEE), such as
A: These libraries will not be available for dynamic linking. As stated in ICE 2006 FAQ #50, ICE 2006 participants must provide all the libraries needed by their executables. This policy has been adopted because of problems with different versions of libraries, linkers, loaders, and compilers in previous evaluations. See FAQ #63 on modifying the library search path.
If the submitted executables do not pass the conformance tests or if submitted executables produce fatal errors when tested on your platform, what course of action will you take? Will participants have the opportunity to correct "runtime" error conditions that stem from failures to read input files correctly, failures to write output files correctly, or failures to manage memory requirements properly?
A: If a group's executables fail the conformance tests, the group will be informed. Groups will be sent the resultant similarity matrices from the conformance tests along with the sig-sets. The images in the conformance test will be from ICE 2006. Groups will be given a reasonable opportunity to correct errors found in the conformance tests. If executables produce fatal errors, a reasonable effort will be made to ensure the failure was not due to a failure to read input files or write output files. Part of this effort may include contacting the group who provided the executable.
Are static (.a) versions of libxalan-c.so, libxalanMsg.so and libxerces-c.so available in the BEE download?
A: No.
Could you provide the output from the command "rpm -qa" run on the Linux version of the test machines?
A: The command "rpm -qa" gives a list of the libraries on our test machines. As explained in FAQ 50, and clarified in FAQs #63 and #67, participants need to provide their own libraries. In addition, the Face and Iris test lab consists of machines purchased at slightly different times. The software on the machines was the latest version available at time of delivery, therefore, there maybe slightly difference versions of libraries on the machines in the Face and Iris test lab.
Does the answer to ICE 2006 FAQ No. 42 mean that the output file names are given in the format "path + name"?
A: Yes.
Is it required to submit both a SingleIris and a TwoIris executable?
A: No. ICE 2006 participants are not required to submit both SingleIris and TwoIris executables.
What are the name and directory of the log file?
A: The name of the log file is specified in the experimental parameter file. The log file is to be placed in the output/ directory.