Why has NIST developed the Framework?
NIST aims to cultivate trust in the design, development, use, and evaluation of AI technologies and systems in ways that enhance economic security and improve quality of life. Congress directed NIST to collaborate with the private and public sectors to develop a voluntary AI RMF. The agency’s work on the AI RMF is consistent with recommendations by the National Security Commission on Artificial Intelligence and the Plan for Federal Engagement in Developing AI Technical Standards and Related Tools.
The Framework was developed in collaboration with the private and public sectors.
Will NIST provide additional guidance to help with using the AI RMF?
Yes. In collaboration with the private and public sectors, NIST produced a companion Playbook as a voluntary resource for organizations navigating the AI RMF functions. It contains actionable suggestions derived from industry best practices and research insights. Organizations seeking specific guidance on achieving AI RMF function outcomes may borrow as many, or as few, suggestions as apply to their industry use case or interests. Comments on the Playbook are welcome at any time and will be reviewed and integrated on a semi-annual basis.
The Playbook is part of the NIST Trustworthy and Responsible AI Resource Center. NIST has already published a variety of documents and carries out measurement and evaluation projects that inform AI risk management. See: https://www.nist.gov/artificial-intelligence