NIST is hosting a workshop on Wednesday, January 17, 2024, from 9:00 AM to 1:00 PM EST to bring together industry, academia, and government to discuss secure software development practices for AI models. Attendees will gain insight into major cybersecurity challenges specific to developing and using AI models, as well as recommended practices for addressing those challenges. Feedback from these communities will inform NIST’s creation of SSDF companion resources to support both AI model producers and the organizations that adopt and incorporate those AI models into their own software and services.
Background
In October 2023, Executive Order 14110, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, tasked NIST with “developing a companion resource to the SSDF to incorporate secure development practices for generative AI and for dual-use foundation models.” NIST’s SSDF version 1.1 describes a set of fundamental, sound practices for general secure software development. The SSDF focuses on outcomes rather than specific tools and techniques, so it can be used for any type of software development, including AI models.
To provide software producers and acquirers with more information on secure development for AI models, NIST is considering the development of one or more SSDF companion resources on generative AI models and dual-use foundation models. These companion resources would be similar in concept and content to the Profiles for the NIST Cybersecurity Framework, Privacy Framework, and AI Risk Management Framework.
During the workshop, NIST is seeking feedback on several topics to help inform the development of future SSDF Profiles.
Questions about the workshop or NIST’s SSDF work? Contact us at ssdf [at] nist.gov.
Times | Speakers | Session Name/Information |
9:00 AM – 9:15 AM | Michael Ogata, NIST; Kevin Stine, NIST | Introduction and Overview |
9:15 AM | Martin Stanley, NIST | Session 1 - Secure Software Development Challenges with Large Language Models (LLMs) and Generative AI Systems. This session will discuss major cybersecurity challenges in the development and use of LLMs, dual-use foundation models, and generative AI systems. Attendees will identify and consider what the biggest challenges are and the potential impacts of not adequately addressing them. |
9:15 AM – 9:30 AM | Jonathan Spring, CISA | CISA Presentation for NIST Secure Software Development Workshop for Generative AI |
9:30 AM – 9:45 AM | Dave Schulker, CERT | Using System Theoretic Process Analysis to Advance Safety in LLM-enabled Software Systems |
9:45 AM – 10:00 AM | Henry Young, BSA | Cybersecurity for Generative AI: Leveraging Existing Tools and Identifying New Challenges |
10:00 AM – 10:15 AM | Martin Stanley, NIST; Jonathan Spring, CISA; Dave Schulker, CERT; Henry Young, BSA | Q&A |
10:15 AM | Apostol Vassilev, NIST | Session 2 - Secure Development of LLMs and Generative AI Systems. This session will explore recommended security practices for the development of LLMs, such as dual-use foundation models with billions of parameters. The focus will be on security practices that are specific to the LLM development lifecycle, rather than on practices generally used for all other types of software development. Attendees will share and gain a better understanding of the practices in use and the gaps that remain to be addressed by users of LLMs. |
10:15 AM – 10:30 AM | Nick Hamilton, OpenAI | Securing Large Language Model Development and Deployment: Navigating the Complexities of LLM Secure Development Practices to Align with the NIST Secure Development Framework |
10:30 AM – 10:45 AM | Mark Ryland, AWS | Secure Development of GenAI Systems: An AWS Perspective |
10:45 AM – 11:00 AM | Mihai Maruseac, Google | Secure AI Development @ Google |
11:00 AM – 11:15 AM | Apostol Vassilev, NIST; Nick Hamilton, OpenAI; Mark Ryland, AWS; Mihai Maruseac, Google | Q&A |
11:15 AM – 11:30 AM | Michael Ogata, NIST | Break |
11:30 AM | Harold Booth, NIST | Session 3 - Secure Use of LLMs and Generative AI Systems. This session will explore recommended security practices for reusing existing LLMs and generative AI systems as components of traditional software deployed within an organization. It will focus on security practices specific to LLMs and generative AI models as components integrated into other software, and the particular security challenges they bring, rather than on practices generally used for any traditional software reuse. Attendees will discuss recommendations and considerations for enhancing their existing secure software development practices, as well as additional security controls they may need to employ. |
11:30 AM – 11:45 AM | Karthi Natesan Ramamurthy, IBM | Foundation Models and their Use in Software Systems – Trust and Governance |
11:45 AM – 12:00 PM | David Beveridge, HiddenLayer | Secure Use of LLMs and GEN AI Systems |
12:00 PM – 12:15 PM | Vivek Sharma, Microsoft | NIST Secure Use of LLMs and Generative AI System |
12:15 PM – 12:30 PM | Harold Booth, NIST; Karthi Natesan Ramamurthy, IBM; David Beveridge, HiddenLayer; Vivek Sharma, Microsoft | Q&A |
12:30 PM – 12:45 PM | Michael Ogata, NIST | Closing and Next Steps |
1:00 PM |  | Adjourn |