Search Publications by: Apostol Vassilev (Fed)

Displaying 1 - 25 of 48

Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations

March 24, 2025
Author(s)
Apostol Vassilev, Alina Oprea, Alie Fordyce, Hyrum Anderson, Xander Davies, Maia Hamin
This NIST Trustworthy and Responsible AI report provides a taxonomy of concepts and defines terminology in the field of adversarial machine learning (AML). The taxonomy is arranged in a conceptual hierarchy that includes key types of ML methods, life cycle

Standards and Performance Metrics for On-Road Automated Vehicles

June 4, 2024
Author(s)
Craig I. Schlenoff, Zeid Kootbally, Prem Rachakonda, Suzanne Lightman, Apostol Vassilev, David A. Wollman, Edward Griffor
On September 5–8, 2023, the National Institute of Standards and Technology (NIST) held the second Standards and Performance Metrics for On-Road Automated Vehicles Workshop. This four-day virtual event provided updates on NIST's recent work in automated

Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations

January 4, 2024
Author(s)
Apostol Vassilev, Alina Oprea, Alie Fordyce, Hyrum Anderson
This NIST AI report develops a taxonomy of concepts and defines terminology in the field of adversarial machine learning (AML). The taxonomy is built on a survey of the AML literature and is arranged in a conceptual hierarchy that includes key types of ML

Poisoning Attacks against Machine Learning: Can Machine Learning be Trustworthy?

October 24, 2022
Author(s)
Alina Oprea, Anoop Singhal, Apostol Vassilev
Many practical applications benefit from Machine Learning (ML) and Artificial Intelligence (AI) technologies, but their security needs to be studied in more depth before the methods and algorithms are actually deployed in critical settings. In this article

Towards a Standard for Identifying and Managing Bias in Artificial Intelligence

March 15, 2022
Author(s)
Reva Schwartz, Apostol Vassilev, Kristen K. Greene, Lori Perine, Andrew Burt, Patrick Hall
As individuals and communities interact in and with an environment that is increasingly virtual, they are often vulnerable to the commodification of their digital exhaust. Concepts and behavior that are ambiguous in nature are captured in this environment

NIST Roadmap Toward Criteria for Threshold Schemes for Cryptographic Primitives

July 7, 2020
Author(s)
Luis Brandao, Michael S. Davidson, Apostol T. Vassilev
This document constitutes a preparation toward devising criteria for the standardization of threshold schemes for cryptographic primitives by the National Institute of Standards and Technology (NIST). The large diversity of possible threshold schemes, as

Leveraging Side-channel Information for Disassembly and Security

February 1, 2020
Author(s)
Jungmin Park, Fahim Rahman, Apostol Vassilev, Domenic Forte, Mark Tehranipoor
With the rise of Internet of Things (IoT), devices such as smartphones, embedded medical devices, smart home appliances as well as traditional computing platforms such as personal computers and servers have been increasingly targeted with a variety of

BowTie - a deep learning feedforward neural network for sentiment analysis

January 3, 2020
Author(s)
Apostol T. Vassilev
How to model and encode the semantics of human-written text and select the type of neural network to process it are not settled issues in sentiment analysis. Accuracy and transferability are critical issues in machine learning in general. These properties

RTL-PSC: Automated Power Side-Channel Leakage Assessment at Register-Transfer Level

July 11, 2019
Author(s)
Miao (Tony) He, Jungmin Park, Adib Nahiyan, Apostol Vassilev, Yier Jin, Mark Tehranipoor
Power side-channel attacks (SCAs) have become a major concern to the security community due to their non-invasive nature, low cost, and effectiveness in extracting secret information from hardware implementations of crypto algorithms. Therefore, it is