
Search Publications


Displaying 1 - 25 of 431

Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations

March 24, 2025
Author(s)
Apostol Vassilev, Alina Oprea, Alie Fordyce, Hyrum Anderson, Xander Davies, Maia Hamin
This NIST Trustworthy and Responsible AI report provides a taxonomy of concepts and defines terminology in the field of adversarial machine learning (AML). The taxonomy is arranged in a conceptual hierarchy that includes key types of ML methods, life cycle

Detection limits of AI-based SEM dimensional metrology

March 14, 2025
Author(s)
Peter Bajcsy, Brycie Wiseman, Michael Paul Majurski, Andras Vladar
The speed of in-line scanning electron microscope (SEM) measurements of linewidth, contact hole, and overlay is critically important for identifying the measurement area and generating indispensable process control information. Sample charging and damage

Semantics for Enhancing Communications- and Edge-Intelligence-enabled Smart Sensors: A Practical Use Case in Federated Automotive Diagnostics

March 10, 2025
Author(s)
Eugene Song, Thomas Roth, David A. Wollman, Eoin Jordan, Martin Serrano, Amelie Gyrard
Modern edge artificial intelligence (AI) chipsets and edge-intelligence-enabled smart sensor frameworks support real-time data processing and event detection at the signal source. Beyond just measuring local conditions and transmitting corresponding

NIST Open Media Forensics Challenge (OpenMFC Briefing for IIRD)

January 27, 2025
Author(s)
Haiying Guan
The rapid advancement of artificial intelligence (AI) has led to the emergence of several technologies, including Generative Adversarial Networks (GANs), deepfakes, generative AI, CGI, and anti-forensics techniques. These technologies pose a significant

Reflection of its Creators: Qualitative Analysis of General Public and Expert Perceptions of Artificial Intelligence

October 16, 2024
Author(s)
Theodore Jensen, Mary Frances Theofanos, Kristen K. Greene, Olivia Williams, Kurtis Goad, Janet Bih Fofang
The increasing prevalence of artificial intelligence (AI) will likely lead to new interactions and impacts for the general public. An understanding of people's perceptions of AI can be leveraged to design and deploy AI systems toward human needs and values

An Overarching Quality Evaluation Framework for Additive Manufacturing Digital Twin

September 2, 2024
Author(s)
Yan Lu, Zhuo Yang, Shengyen Li, Yaoyao Fiona Zhao, Jiarui Xie, Mutahar Safdar, Hyunwoong Ko
The key differentiation of digital twins from existing models-based engineering approaches lies in the continuous synchronization between physical and virtual twins through data exchange. The success of digital twins, whether operated automatically or with

A Plan for Global Engagement on AI Standards

July 26, 2024
Author(s)
Jesse Dunietz, Elham Tabassi, Mark Latonero, Kamie Roberts
Recognizing the importance of technical standards in shaping development and use of Artificial Intelligence (AI), the President's October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (EO 14110)

Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile

July 26, 2024
Author(s)
Chloe Autio, Reva Schwartz, Jesse Dunietz, Shomik Jain, Martin Stanley, Elham Tabassi, Patrick Hall, Kamie Roberts
This document is a cross-sectoral profile of and companion resource for the AI Risk Management Framework (AI RMF 1.0) for Generative AI, pursuant to President Biden's Executive Order (EO) 14110 on Safe, Secure, and Trustworthy Artificial Intelligence. The

Forecasting Operation of a Chiller Plant Facility Using Data Driven Models

July 23, 2024
Author(s)
Behzad Salimian Rizi, Afshin Faramarzi, Amanda Pertzborn, Mohammad Heidarinejad
In recent years, data-driven models have enabled accurate prediction of chiller power consumption and chiller coefficient of performance (COP). This study evaluates the usage of time series Extreme Gradient Boosting (XGBoost) models to predict chiller

An Adaptable AI Assistant for Network Management

July 3, 2024
Author(s)
Amar Abane, Abdella Battou, Mheni Merzouki
This paper presents a network management AI assistant built with Large Language Models. It adapts at runtime to the network state and specific platform, leveraging techniques like prompt engineering, document retrieval, and Knowledge Graph integration. The

Fiscal Year 2023 Cybersecurity and Privacy Annual Report

May 20, 2024
Author(s)
Patrick D. O'Reilly, Kristina Rigopoulos
During Fiscal Year 2023 (FY 2023) – from October 1, 2022, through September 30, 2023 – the NIST Information Technology Laboratory (ITL) Cybersecurity and Privacy Program successfully responded to numerous challenges and opportunities in security and privacy

An Adaptable AI Assistant for Network Management

April 12, 2024
Author(s)
Amar Abane, Abdella Battou, Mheni Merzouki
This paper presents a network management AI assistant built with Large Language Models. It adapts at runtime to the network state and specific platform, leveraging techniques like prompt engineering, document retrieval, and Knowledge Graph integration. The

2024 NIST Generative AI (GenAI): Data Creation Specification for Text-to-Text (T2T) Generators

April 1, 2024
Author(s)
Yooyoung Lee, George Awad, Asad Butt, Lukas Diduch, Kay Peterson, Seungmin Seo, Ian Soboroff, Hariharan Iyer
Generator (G) teams will be tested on their systems' ability to generate content that is indistinguishable from human-generated content. For the pilot study, the evaluation will help determine strengths and weaknesses in their approaches, including insights