In support of efforts to create safe and trustworthy artificial intelligence (AI), NIST has established the U.S. Artificial Intelligence Safety Institute (USAISI). To support this Institute, NIST has created the U.S. AI Safety Institute Consortium. The Consortium brings together more than 280 organizations to develop science-based and empirically backed guidelines and standards for AI measurement and policy, laying the foundation for AI safety across the world. This will help ready the U.S. to address the capabilities of the next generation of AI models or systems, from frontier models to new applications and approaches, with appropriate risk management strategies.
On February 8, 2024, U.S. Secretary of Commerce Gina Raimondo announced the creation of the U.S. AI Safety Institute Consortium (AISIC). Housed under NIST, the Consortium will unite AI creators and users, academics, government and industry researchers, and civil society organizations in support of the development and deployment of safe and trustworthy artificial intelligence (AI).
Building on its long track record of working with the private and public sectors, and its history of delivering reliable, practical measurement and standards-oriented solutions, NIST works through the AISIC with research collaborators who can support this vital undertaking.
To create a lasting approach for continued joint research and development, the work of the consortium will be open and transparent and provide a hub for interested parties to work together in building and maturing a measurement science for trustworthy and responsible AI.
Consortium members' contributions will each support one of the Consortium's focus areas.
Organizations had 75 days (between Nov. 2, 2023, and Jan. 15, 2024) to submit a letter of interest as described in the Federal Register.
NIST received over 600 Letters of Interest from organizations across the AI stakeholder community and the United States. As of February 8, 2024, the consortium includes more than 200 member companies and organizations.
NIST will continue to onboard organizations that submitted Letters of Interest prior to the January 15, 2024, deadline into the Consortium. For questions, contact aisic [at] nist.gov.
There may be continuing opportunities to participate in the Consortium even after initial activity commences, for participants not selected initially or that submitted their letter of interest after the selection process. Selected participants will be required to enter into a consortium CRADA with NIST. At NIST's discretion, entities that are not permitted to enter into CRADAs pursuant to law may be allowed to participate in the Consortium under a separate non-CRADA agreement.
NIST cannot guarantee that all submissions, or the products proposed by respondents, will be used in consortium activities. Each prospective participant will be expected to work collaboratively with NIST staff and other project participants under the terms of the Consortium CRADA.
An evaluation copy of the Artificial Intelligence Safety Institute Consortium Cooperative Research and Development Agreement (CRADA) is now available.