
NIST Trustworthy and Responsible AI Report Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations

NIST has published NIST AI 100-2e2025, Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations.

Artificial Intelligence (AI) systems have been on a global expansion trajectory, with the pace of development and adoption accelerating in recent years. These systems are being developed and widely deployed in economies across the globe, leading to the emergence of AI-based services across many spheres of people's lives, both real and virtual. As AI systems permeate the digital economy and become essential parts of daily life, the need for their secure, robust, and resilient operation grows.

Despite the significant progress of AI and machine learning (ML) in different application domains, these technologies remain vulnerable to attacks, and the consequences become more dire when the systems operate in high-stakes domains. NIST's Trustworthy and Responsible AI Report, Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (NIST AI 100-2) targets this issue and offers voluntary guidance on identifying, addressing, and managing the risks associated with adversarial machine learning (AML). It also shares guidance for the development of:

  • Standardized AML terminology to be used by the ML and cybersecurity communities
  • A taxonomy of the most widely studied and effective attacks in AML, including:
    • Evasion, poisoning, and privacy attacks for Predictive AI (PredAI) systems
    • Evasion, poisoning, privacy, and misuse attacks for Generative AI (GenAI) systems
    • Attacks against all viable learning methods (e.g., supervised, unsupervised, semi-supervised, federated learning, reinforcement learning) across multiple data modalities
  • A discussion of potential mitigations in AML and the limitations of some existing mitigation techniques
  • An Index and Glossary to help readers understand, navigate, and reference the taxonomy
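To make the evasion category in the taxonomy concrete: an evasion attack perturbs an input at inference time so that a trained model misclassifies it. The sketch below is not from the NIST report; it is a minimal, hypothetical illustration of the widely studied Fast Gradient Sign Method (FGSM) applied to a toy linear classifier, using only NumPy.

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """Fast Gradient Sign Method: nudge each input feature by eps
    in the direction that increases the model's loss."""
    return x + eps * np.sign(grad)

# Toy linear classifier: score = w . x, predicted label = sign(score).
# The weights, input, and epsilon here are illustrative values only.
w = np.array([0.5, -1.0, 2.0])
x = np.array([1.0, 1.0, 1.0])
y = 1  # true label in {-1, +1}

# For the loss -y * (w . x), the gradient with respect to x is -y * w.
grad_x = -y * w

x_adv = fgsm_perturb(x, grad_x, eps=0.1)
print(w @ x)      # clean score for the true class
print(w @ x_adv)  # adversarial score: lower, i.e., pushed toward misclassification
```

With a small epsilon the perturbed input stays close to the original, yet the classifier's confidence in the true class drops; scaled up to image or text models, such perturbations can be imperceptible to humans while flipping the prediction.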

The intended primary audience for this report includes individuals and groups who are responsible for designing, developing, deploying, evaluating, and governing AI systems. NIST plans to update this report annually as new developments emerge. NIST is working with partners from the U.S. AI Safety Institute and the U.K. AI Security Institute, as well as industry and academia, to develop and maintain this report.

Released March 24, 2025