A multidisciplinary team of computer scientists, cognitive scientists, mathematicians, and AI and machine learning specialists explores and defines the core tenets of explainable AI (XAI). The team aims to develop measurement methods and best practices that support the implementation of those tenets. Ultimately, NIST intends to develop a metrologist’s guide to AI systems that addresses the complex entanglement of terminology and taxonomy across the many layers of the AI field. AI must be explainable to society to enable understanding of, trust in, and adoption of new AI technologies, the decisions they produce, and the guidance they provide.
NIST held a virtual workshop on Explainable Artificial Intelligence (AI) on January 26-28, 2021. Explainable AI is a key element of trustworthy AI. As part of NIST’s efforts to provide foundational tools, guidance, and best practices for AI-related research, NIST released a draft report, Four Principles of Explainable Artificial Intelligence, for public comment. Informed by the comments received, this workshop delved further into developing an understanding of explainable AI. A summary of the workshop is available.
The final version of the report was published in September 2021. See below.
Psychological Foundations of Explainability and Interpretability in Artificial Intelligence (NISTIR 8367) (April 2021)
An interview between Dr. David Broniatowski and Natasha Bansgopaul discussing insights from Psychological Foundations of Explainability and Interpretability in Artificial Intelligence (NISTIR 8367) (April 2021), authored by Broniatowski.
Stay tuned for further NIST announcements and related activity by checking this page or by subscribing to Artificial Intelligence updates.
Please direct questions to ai-inquiries [at] nist.gov.