NIST’s AI portfolio includes building the scientific underpinning for measurements, evaluations, benchmarks, technical standards, and guidelines for the responsible use of AI across different contexts and sectors of the economy.
Trustworthy AI systems are demonstrated to be valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with harmful bias managed.
Reflecting private sector interest and a congressional mandate, NIST developed the voluntary AI Risk Management Framework (AI RMF) through collaborations with stakeholders across the public and private sectors. The AI RMF is a key piece of NIST’s AI efforts and helps drive NIST’s priorities in AI.
Among many other things, NIST develops:
NIST’s Trustworthy and Responsible AI Resource Center hosts documents, software, standards, and related tools that contribute to better understanding, identifying, measuring, and managing the various risks associated with AI systems.
Hardware for AI: