Remarks as prepared.
Welcome to the launch of the NIST Artificial Intelligence Risk Management Framework, or AI RMF 1.0!
With the completion of the AI RMF, we have carried out a directive in the National AI Initiative Act of 2020. Congress clearly recognized the need for this voluntary guidance and assigned it to NIST as a high priority.
We could not possibly have produced this framework without a lot of help. So I would like to thank everyone who has supported this effort.
Today we are joined by several important individuals, whom I would like to acknowledge now as I review our agenda.
Dr. Alondra Nelson, deputy director of the Office of Science and Technology Policy, will offer the White House’s perspective on the framework.
Deputy Secretary of Commerce Don Graves will be joining us in a few minutes to talk about how businesses can and hopefully will use the AI RMF 1.0.
We will share a video message from Congressman Frank Lucas, chairman of the House Science, Space, and Technology Committee.
We are also pleased to have Zoe Lofgren, ranking member of the House Science, Space, and Technology Committee, with us today to share a few remarks.
And then representatives from the business community and civil society will continue the discussion about putting the AI RMF into practice in two panel discussions.
As the federal laboratory with a mission focused on driving U.S. innovation and supporting economic security, NIST has a long-standing reputation for cultivating trust in technology. This work is critical in the AI space to ensure public trust in these rapidly evolving technologies.
AI technologies have significant potential to transform individual lives and even our society. They can bring positive changes to our commerce and health, our transportation and cybersecurity. AI technologies can drive inclusive economic growth and support scientific advancements that improve our world.
These same technologies also pose risks of negative impacts.
If we are not careful — and sometimes even when we are — AI systems can exacerbate biases and inequalities that already exist in society.
The good news is that understanding and managing the risks of AI systems will help to enhance their trustworthiness. This, in turn, will cultivate public trust in AI, driving innovation while preserving civil liberties and rights.
AI risk management can reinforce responsible use and practice by helping those who design, build, release, use and evaluate AI systems to think more critically about context and potential impacts.
The framework is intended for voluntary use. It provides a flexible but structured and measurable approach to understanding, measuring and managing AI risks. It is flexible to allow for innovation, and measurable because if you cannot measure something, you cannot improve it.
The flexibility also means it can be adapted by organizations of any size to jump-start or enhance their AI risk management approaches.
By taking a rights-affirming approach, the framework can maximize the benefits and reduce the likelihood and degree of harm these technologies may bring.
It addresses challenges unique to AI systems and encourages and equips different AI stakeholders to manage AI risks proactively and purposefully.
The AI RMF will help the numerous organizations that have developed and committed to AI principles to convert those principles into practice.
It is intended to be applied across a wide range of perspectives, sectors and technology domains, and to be adaptable to any AI technology or use case.
Allow me to illustrate how the AI RMF can help address issues related to trustworthiness and manage risk, using an example from medicine.
For medical diagnoses, trustworthy AI is fundamental. First and foremost, any AI application used in this setting must deliver valid and reliable results.
The system must be safe and not cause physical or psychological harm.
To be fair, and to manage bias, the application must not produce output that favors one group over others.
Privacy must be protected by ensuring that data, such as an individual's health information used in training and building the AI-based system, are properly safeguarded and that no personal information can be inferred through other means.
Approaches for communicating diagnoses to health professionals and to the patient should be explainable and interpretable. And these messages should be crafted considering the intended audience, be it a physician, technician or patient.
Moreover, such AI applications should be secure and resilient against adversarial attacks and other vulnerabilities of AI systems.
Finally, the AI system must be transparent and accountable. Transparency is often necessary to remedy incorrect AI system outputs or negative impacts. Keep in mind that a transparent system is not necessarily an accurate, privacy-enhanced, secure or fair system. However, since AI systems are often opaque, it is difficult to know which characteristics are in effect, especially over time as complex systems evolve.
Ultimately, addressing these characteristics individually may not ensure AI system trustworthiness; trade-offs are always involved. Not all characteristics apply in every setting, and some will be more or less important in any given situation.
The AI RMF provides organizations that design, build, deliver, use and evaluate AI with a methodology, a lexicon vital to communications, and actions to help AI systems meet these requirements.
As mentioned earlier, the AI RMF 1.0 has been developed through a consensus-driven, open, transparent, and collaborative process.
We heard from more than 240 organizations across private industry, academia, civil society and government. They shared their feedback through responses to a formal Request for Information, three workshops, public comments on a concept paper and two framework draft documents, discussions at multiple public forums, and many small group meetings.
This extensive engagement has informed the AI RMF 1.0 as well as the AI research and development and evaluation to be conducted by NIST and others.
The AI RMF 1.0 is now available, but our work is not complete. Now we want to see it adopted and put into practice!
And we need your feedback on how you use it, so that we can measure its effectiveness and make future revisions based on changes in the technology landscape and user experiences.
We also want your feedback on the companion voluntary AI RMF playbook, which offers ways to navigate and use the framework.
We plan to release a revised version of the playbook in a few months, and likely every six months after that, depending on contributions by others.
Stay tuned for the spring launch of an online NIST Trustworthy and Responsible AI Resource Center that will help you put the AI RMF 1.0 into practice. We also hope to add profiles showing how organizations put the AI RMF into practice, so please share yours with us.
And today we are also releasing a Roadmap for the AI Risk Management Framework that outlines the many opportunities to add to the knowledge base of AI risk management through research, standards, guidelines and best practices.
We are counting on the broad community to help us to refine those roadmap priorities and do a lot of the heavy lifting that will be called for. I pledge that NIST will do whatever we can.
And I should note that as important as it is, the framework is only a part of NIST’s broad and growing portfolio of AI-related work. We also conduct fundamental, applied and use-inspired research, with a heavy focus on measurement and evaluation, technical standards, and contributions to AI policy.
Let me close by saying that we hope that the framework will be widely incorporated into standards, best practices and guidelines as companies roll out their AI applications.
We are counting on you to put this AI Risk Management Framework into practice.
Finally, I cannot overstate the importance of the community’s involvement in all of these initiatives. I am grateful for your engagement and for trusting us.