NIST leads and participates in the development of technical standards, including international standards, that promote innovation and public trust in systems that use AI. A broad spectrum of standards for AI data, performance, and governance is, and increasingly will be, a priority for trustworthy and responsible AI. NIST carries out its work consistent with the US Government National Standards Strategy for Critical and Emerging Technology.
Global Engagement for AI Standards
Under the October 30, 2023, Presidential Executive Order, NIST developed a plan for global engagement on promoting and developing AI standards. The goal is to drive the development and implementation of AI-related consensus standards, cooperation and coordination, and information sharing. NIST released a draft plan on April 29, 2024, reflecting public and private sector input. On July 26, 2024, after considering public comments on the draft, NIST released A Plan for Global Engagement on AI Standards (NIST AI 100-5). More information is available here.
Ensuring Awareness and Federal Coordination in AI Standards Efforts
- In its role as federal AI standards coordinator, NIST works across the government and with industry stakeholders to identify critical standards development activities, strategies, and gaps. Based on priorities outlined in the NIST-developed “Plan for Federal Engagement in AI Standards and Related Tools,” NIST is tracking AI standards development opportunities, periodically collecting and analyzing information about agencies’ AI standards-related priority activities, and making recommendations through the interagency process to optimize engagement.
- On March 1, 2022, NIST delivered to Congress a report summarizing progress that federal agencies have made to implement the recommendations of the US Leadership in AI plan.
- NIST is facilitating federal agency coordination in the development and use of AI standards in part through the Interagency Committee on Standards Policy (ICSP), which it chairs. An ICSP AI Standards Coordination Working Group (AISCWG) aims to promote effective and consistent federal policies that leverage AI standards, raise awareness, and foster agencies’ use of AI standards to inform standards development. The group helps to coordinate government and private sector positions regarding AI international standards activities. NIST’s role in ensuring awareness and federal coordination of AI standards is explained in more detail here.
Encouraging International Standards Incorporation of the AI Risk Management Framework (AI RMF 1.0)
- Incorporation of the AI RMF in international standards will further the Framework’s value as a resource to those designing, developing, deploying, or using AI systems to help manage the many risks of AI and promote trustworthy and responsible development and use of AI systems.
- The AI RMF seeks to “Take advantage of and foster greater awareness of existing standards, guidelines, best practices, methodologies, and tools for managing AI risks....” AI RMF 1.0 takes into account and cites international standards and documents.
- As part of the AI RMF Roadmap, NIST is making it a priority to continue to align the AI RMF and related guidance with applicable international standards, guidelines, and practices. The roadmap specifically cites “Alignment with international standards and production of crosswalks to related standards (e.g., ISO/IEC 5338, ISO/IEC 38507, ISO/IEC 22989, ISO/IEC 24028, ISO/IEC DIS 42001, and ISO/IEC NP 42005).”
- The first two crosswalks to the AI RMF created by NIST are for ISO/IEC FDIS 23894, Information technology - Artificial intelligence - Guidance on risk management, and an illustration of how the NIST AI RMF trustworthiness characteristics relate to the OECD Recommendation on AI, the proposed EU AI Act, and several other key documents. Subsequently, NIST has posted additional AI RMF crosswalks in the NIST Trustworthy and Responsible AI Resource Center.