The U.S. AI Safety Institute develops and publishes risk-based mitigation guidelines and safety mechanisms to support the responsible design, development, deployment, use, and governance of advanced AI models, systems, and agents.
- Managing Misuse Risk for Dual-Use Foundation Models (Second Public Draft) – These draft guidelines identify best practices for developers of foundation models to manage the risk that their models will be deliberately misused to cause harm. We are soliciting public comment on this document until March 15, 2025, at 11:59 PM Eastern Time. To provide feedback, please email NISTAI800-1 [at] nist.gov. Comments may also be submitted under docket number NIST-2025-0001.