
NIST AI Engagement

[Illustration: an outline of a face alongside icons representing areas of AI, including a heart (health), lock (cyber), windmills (energy), steering wheel (cars), and a manufacturing arm. Credit: N. Hanacek/NIST]

Sign up for AI email alerts here. If you have questions or ideas about how to engage with us on AI topics, or about NIST's AI activities, send us an email: ai-inquiries [at] nist.gov

NIST works with the AI community to identify the building blocks needed to cultivate trust and to ensure that AI systems are accurate, reliable, safe, secure and resilient, robust, and explainable and interpretable – and that they mitigate bias while also accounting for privacy and fairness. To foster collaboration, develop a shared understanding of what constitutes trustworthy AI, and bolster the scientific underpinnings of how to assess and assure the trustworthiness of AI systems, NIST has been organizing a series of workshops that bring together government, industry, academia, and other stakeholders from the U.S. and around the world. The workshops focus on advancing the development of AI standards, guidelines, and related tools.

Recent Workshops & Events 

  • ARIA Workshop | November 12, 2024 | Washington, DC (Hybrid) 

    NIST will convene a hybrid workshop on November 12, 2024, in Washington, D.C., to provide more detail about NIST's new ARIA (Assessing Risks and Impacts of AI) program. Participants will learn about ARIA's experimentation environment and NIST's approach to evaluation-driven research. ARIA is a NIST AI Innovation Lab program that gathers evidence about AI's risks to people and society as part of advancing the science and practice of AI risk measurement. Learn more.

  • Unleashing AI Innovation, Enabling Trust | September 24-25, 2024 | Washington, DC (Hybrid) 

    NIST convened a hybrid workshop on September 24-25, 2024, in Washington, D.C., to discuss recent efforts by the NIST AI Innovation Lab (NAIIL) to help unleash artificial intelligence (AI) innovations in ways that enable trust. Participants learned about recent and ongoing efforts contributing to NIST's vision for the work ahead, including opportunities to expand its collaborations with the AI community. Watch Recording.

Past Workshops & Events

Other Ways to Engage

NIST relies on and encourages robust interactions with companies, universities, nonprofits, and other government agencies in driving and carrying out its AI agenda. There are multiple ways to engage with NIST, including:

  • NIST Draft Reports: NIST counts on stakeholders to review drafts of reports on a variety of AI issues. Drafts typically are prepared based on input from private- and public-sector individuals and organizations, then posted for broader public review on NIST's AI website and via email alerts. Public comments help to improve these documents. Sign up for AI-related emails here.
  • Workshops: NIST convenes single-day, multi-day, and multi-week sessions with experts from diverse disciplines to tackle key questions in the field of trustworthy and responsible AI and related topics. Workshops are often hybrid and made readily accessible through online discussion forums.
  • Consortium: NIST is establishing the U.S. Artificial Intelligence Safety Institute (USAISI) and a related Consortium to promote the development and responsible use of safe and trustworthy AI. The Consortium will advance new measurement science – enabling the identification of proven, scalable, and interoperable techniques and metrics.
  • Requests for Information (RFIs): NIST sometimes uses formal RFIs to inform the public about its AI activities and gain insights into specific AI issues. For example, an RFI was issued to help develop the AI Risk Management Framework. Another was issued to assist in carrying out the President’s Executive Order 14110.
  • AI Visiting Fellows: Accomplished Visiting Fellows bring thought leadership to foundational research for trustworthy and responsible AI, use-inspired AI research, AI hardware research, and AI-related standards and evaluations conducted in NIST laboratories.
  • Student Programs: NIST offers a range of opportunities for students to engage with NIST on AI-related work. That includes the Professional Research Experience Program (PREP), which provides valuable laboratory experience and financial assistance to undergraduate, graduate, and post-graduate students.
  • Grants: NIST offers financial assistance to support collaborative research, including AI projects.


Created June 16, 2020, Updated November 12, 2024