
Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations

Published

March 24, 2025

Author(s)

Apostol Vassilev, Alina Oprea, Alie Fordyce, Hyrum Anderson, Xander Davies, Maia Hamin

Abstract

This NIST Trustworthy and Responsible AI report provides a taxonomy of concepts and defines terminology in the field of adversarial machine learning (AML). The taxonomy is arranged in a conceptual hierarchy that includes key types of ML methods, life cycle stages of attack, and attacker goals, objectives, capabilities, and knowledge. This report also identifies current challenges in the life cycle of AI systems and describes corresponding methods for mitigating and managing the consequences of such attacks. The terminology used in this report is consistent with the literature on AML and is complemented by a glossary of key terms associated with the security of AI systems. Taken together, the taxonomy and terminology are meant to inform other standards and future practice guides for assessing and managing the security of AI systems by establishing a common language for the rapidly developing AML landscape.
Report Number

NIST AI 100-2e2025

Keywords

artificial intelligence, machine learning, attack taxonomy, abuse, data poisoning, evasion, privacy breach, attack mitigation, large language model, chatbot

Citation

Vassilev, A., Oprea, A., Fordyce, A., Anderson, H., Davies, X. and Hamin, M. (2025), Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations, NIST Trustworthy and Responsible AI, National Institute of Standards and Technology, Gaithersburg, MD, [online], https://doi.org/10.6028/NIST.AI.100-2e2025, https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=959735 (Accessed April 1, 2025)

Issues

If you have any questions about this publication or are having problems accessing it, please contact reflib@nist.gov.

Created March 24, 2025, Updated March 25, 2025