NIST has published NIST AI 100-2e2025, Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations.
Artificial Intelligence (AI) systems have been on a global expansion trajectory, with the pace of development and adoption accelerating in recent years. These systems are being developed and widely deployed across economies around the globe, leading to the emergence of AI-based services in many spheres of people's lives, both real and virtual. As AI systems permeate the digital economy and become essential parts of daily life, the need for their secure, robust, and resilient operation grows.
Despite the significant progress of AI and machine learning (ML) in different application domains, these technologies remain vulnerable to attacks. The consequences of such attacks become more dire when the systems are deployed in high-stakes domains. NIST's Trustworthy and Responsible AI Report, Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (NIST AI 100-2), targets this issue and offers voluntary guidance for identifying, addressing, and managing the risks associated with adversarial machine learning (AML). It also shares guidance for developing a common taxonomy and terminology of AML attacks and mitigations.
The primary intended audience for this report includes individuals and groups who are responsible for designing, developing, deploying, evaluating, and governing AI systems. NIST plans to update this report annually as new developments emerge. NIST is working with partners from the U.S. AI Safety Institute and the U.K. AI Security Institute, as well as industry and academia, to develop and maintain this report.