
Poisoning Attacks against Machine Learning: Can Machine Learning be Trustworthy?

Published

Author(s)

Alina Oprea, Anoop Singhal, Apostol Vassilev

Abstract

Many practical applications benefit from Machine Learning (ML) and Artificial Intelligence (AI) technologies, but their security needs to be studied in more depth before these methods and algorithms are deployed in critical settings. In this article, we discuss the risk of poisoning attacks during the training of machine learning models and the challenges of defending against this threat.
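
The article itself does not include code; the following is a minimal illustrative sketch (not the authors' method) of one simple form of training-data poisoning, label flipping, using scikit-learn on a synthetic dataset. The dataset, model, and poisoning fractions are all assumptions chosen for illustration; the point is only to show how corrupting a fraction of training labels can degrade test accuracy.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic binary classification task (assumption: stands in for a real training set).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

def flip_labels(labels, fraction, rng):
    """Hypothetical poisoning: flip the labels of a random fraction of training points."""
    poisoned = labels.copy()
    n_flip = int(fraction * len(poisoned))
    idx = rng.choice(len(poisoned), size=n_flip, replace=False)
    poisoned[idx] = 1 - poisoned[idx]  # binary labels: 0 <-> 1
    return poisoned

rng = np.random.default_rng(0)
for frac in (0.0, 0.1, 0.3):
    y_poisoned = flip_labels(y_train, frac, rng)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"poisoned fraction={frac:.1f}  test accuracy={acc:.3f}")
```

Running the sketch typically shows accuracy dropping as the poisoned fraction grows; real-world poisoning attacks discussed in the article can be far more targeted and harder to detect than this random label flipping.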
Journal
Computer (IEEE Computer), Volume 55, Issue 11

Keywords

artificial intelligence technologies, machine learning trustworthiness, poisoning attacks, security

Citation

Oprea, A., Singhal, A. and Vassilev, A. (2022), Poisoning Attacks against Machine Learning: Can Machine Learning be Trustworthy?, Computer (IEEE Computer), [online], https://doi.org/10.1109/MC.2022.3190787, https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=934932 (Accessed November 20, 2024)

Issues

If you have any questions about this publication or are having problems accessing it, please contact reflib@nist.gov.

Created October 25, 2022, Updated May 1, 2024