
PSCR UAS Working Group

Cybersecurity and AI Risk Management for Uncrewed Aircraft Systems (UAS) in Public Safety

Call for Working Group Participants: psprizes [at] nist.gov (subject: Join the UAS CSAIRM Working Group)

[Image: a drone in front of the mountains. Credit: PSCR]

Overview

The Uncrewed Aircraft Systems (UAS) Portfolio of the National Institute of Standards and Technology (NIST) Public Safety Communications Research (PSCR) Division is leading a project to improve the overall level of Cybersecurity and Artificial Intelligence (AI) Risk Management within the UAS ecosystem, informing and supporting Public Safety UAS programs. This project leverages two newly published tools: the NIST AI Risk Management Framework (AI RMF), released in January 2023, and the NIST Cybersecurity Framework version 2.0 (CSF 2.0), in draft for public comment at the time of writing and scheduled for release in early 2024.

February 2024 Workshop


Introduction to the Workshop

On the 7th and 8th of February 2024, the UAS Portfolio hosted the Workshop on Cybersecurity and Artificial Intelligence (AI) Risk Management (CSAIRM) for UAS in Public Safety at the Montgomery County Fire & Rescue Training Academy in Gaithersburg, Maryland, USA. This workshop brought together a wide spectrum of stakeholders, including public safety end users, management, academia, and developers, with the following three main goals: 

  • Network and learn about each other’s capabilities and challenges. 
  • Begin to develop a roadmap for improving Cybersecurity and AI Risk Management in this domain. 
  • Develop a Top-10 list of questions that management personnel, such as fire and police chiefs, can ask their IT staff, vendors, and other people in the UAS ecosystem as they work to improve their risk management. 

Workshop Outcome

Many thanks to everyone who spoke at and participated in the workshop, and particularly thanks to the Montgomery County Fire & Rescue Training Academy staff for hosting us!

Over the coming weeks and months, we will be publishing various resources relating to the workshop, including the meeting minutes, recordings, slides, and summary documents. The event page will continue to be updated with these resources as they are published. Contact us at psprizes [at] nist.gov to be notified when new content is uploaded.

One of the main outcomes of the workshop event was the formulation of an initial “Top-10” list of questions that public safety management personnel, such as fire and police chiefs, could ask their vendors, IT staff, and other people in their UAS ecosystem to better understand their CSAIRM posture.

Below, we present the draft Top-10 list from the workshop. It has been lightly edited, grouping attendee ideas into topic areas and consolidating them with salient points raised during the workshop. It is not intended to be a complete list, nor is it final or relevant to all organizations or industries. Rather, it is a set of questions that may serve as useful starting points for discussion. In the future, it will be refined, expanded, and tailored to different sectors, with additional commentary and guidance issued. 

Preliminary Top-10 Questions

  1. How secure is the system? What is the attack surface? How, and in what timeframe, will we be notified of breaches of the various severity levels? What cybersecurity stress testing or “Red Teaming” has been performed, on both the system and systems on which it depends? 
  2. How and where is the data stored? What is the level of encryption and other security methods for the data and its derivatives, such as metadata, error logs, vendor telemetry, and temporary files, in transit and at rest? Who can decrypt or modify the data? 
  3. How can we determine if the AI is giving incorrect information? What are the possible mission-critical failure modes of the system? Do we have enough access to the system to determine if a decision that we do not agree with is due to an error on the part of the AI system or to otherwise perform root-cause analysis? How can we correct the behavior of the system? 
  4. What measures are taken to ensure an appropriate level of data privacy? What are the levels of data privacy that have been chosen and how were those decisions made? 
  5. In what physical locations is the AI processing being done? For instance, on the UAS, on the controller, on our own servers, or in a cloud datacenter in-state, in-country, or elsewhere? 
  6. What AI and cybersecurity challenges have other users run into with this system and what steps have they taken to mitigate risk? 
  7. Is information about the confidence that the AI system has in its decision available to the operator and/or investigator? What information is available to assist in interpreting this? 
  8. What will the system do in the event it encounters a situation that is unusual or unexpected? Is the system capable of detecting if it is operating in a situation that was not represented during its development, training, or testing?
  9. Where does additional input data to the system, such as maps and positioning data, come from? How is it updated and authenticated, and what measures are in place to address intentional and accidental interference, corruption, jamming, spoofing, or similar attacks? 
  10. What measures are taken to ensure continuity of critical operations in the event that the system undergoes planned or unplanned downtime or end-of-life? How likely is it that the information and systems required for recovery are affected by the same cyber attack as the main system? If the AI system suffers a failure, such as effects from a bad model or input data, what plans exist to roll back to a good model?
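As a concrete illustration of the kind of safeguard Question 9 probes, the sketch below shows one way a ground station might check the authenticity of a map update before loading it onto a UAS. This is a minimal, hypothetical example using an HMAC-SHA256 tag with a pre-shared key; the function, key, and payload names are ours for illustration, not part of any vendor API, and a production system would more likely use asymmetric signatures with managed key provisioning.

```python
import hashlib
import hmac

def verify_map_update(payload: bytes, tag: str, shared_key: bytes) -> bool:
    """Recompute the HMAC-SHA256 tag for a map payload and compare it,
    in constant time, to the tag delivered alongside the update."""
    expected = hmac.new(shared_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

# Hypothetical scenario: a ground station validates one map tile.
key = b"example-pre-shared-key"       # in practice, provisioned out of band
tile = b"tile:39.14,-77.22;v=42"      # placeholder map data
tag = hmac.new(key, tile, hashlib.sha256).hexdigest()

print(verify_map_update(tile, tag, key))         # True: payload matches its tag
print(verify_map_update(tile + b"!", tag, key))  # False: tampering is detected
```

The point of the sketch is not the specific primitive but the practice it represents: every external data feed the UAS consumes should arrive with some verifiable proof of origin and integrity, so that corrupted or spoofed inputs are rejected before they influence flight behavior.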

Get Involved

As we move forward with the UAS CSAIRM working group, we welcome the involvement of all stakeholders! Please send an email to psprizes [at] nist.gov to be kept informed and to offer suggestions, comments, and assistance in this topic area. We are particularly interested in opportunities to collaborate with partner organizations that are also looking to develop policies and guidance in this space and whose efforts may be dual-purposed.

[Image: a drone resting on a rock. Credit: PSCR]

Working Group Background

UAS, also called aerial drones or Remotely Piloted Aircraft Systems (RPAS), are seeing widespread adoption in public safety in particular and in broader society more generally. These technologies will revolutionize the way public safety prevents, mitigates, and responds to emergency incidents, reducing response times, protecting first responders, and improving outcomes for society. 

In general, UAS for current and upcoming public safety applications rely on two major technological developments: Artificial Intelligence (AI) and increasing levels of connectivity. However, the rapid pace of development has outstripped society’s ability to fully understand and manage the unique, novel risks associated with these technologies. As a result, the ad-hoc patchwork of organizational and local risk management policies varies immensely. Often, these technologies are used without full visibility into, or attention to, the risks incurred by public safety personnel and the general public. In other cases, adoption is slowed or limited by uncertainty, despite the risks being manageable, depriving public safety personnel and the community of capabilities that could improve their safety and well-being. 

The goal of this project is to leverage the AI RMF and CSF 2.0 to fill this information vacuum and provide all stakeholders, from the “boots on the ground” personnel to the incident commander, from the mayor to the congressperson, from the emergency dispatcher to the medical professional, and from the manufacturers to the researchers, with the tools to manage cybersecurity and AI risks associated with integrating UAS into public safety operations, alongside the other risks that they manage on a daily basis. 

Scope

UAS are used in a wide variety of public safety applications. This initial project scope covers UAS for public safety applications that satisfy criteria including: 

  • Small UAS, weighing less than 55 pounds as per Federal Aviation Administration (FAA) Part 107 Regulations. 
  • One UAS, or a small swarm (< 10) of multiple coordinated UAS. 
  • Autonomous or remotely piloted, within or Beyond Visual Line of Sight (LOS or BVLOS). 
  • Deployed by personnel ranging from UAS specialists to non-specialist public safety personnel. 
  • Manually deployed, or deployed from unattended, pre-installed “Drone in a Box” stations. 
  • Sensors that include ordinary cameras, thermal imagers, radio communications sensors, Chemical, Biological, Radiological, Nuclear, and Explosive (CBRNE) detectors, and sensors for 3D mapping. 
  • Deployed in response to an acute incident, in the recovery/documentation/investigation phase, or pre-deployed to gather information to reduce the probability and severity of, or improve potential future response to, an incident. 
  • Deployed to assist in an ongoing response, or deployed with the intent of being first to the scene (“Drone as First Responder”) to better prepare in-transit responders. 

Examples of applications and UAS characteristics that are not covered in this initial project include the following: 

  • UAS weighing 55 lbs or more as per FAA Part 107 Regulations. 
  • Intentional physical interaction with the environment beyond flying in the atmosphere and landing, such as by grasping, contact sampling, or spraying/dropping/firing objects or liquids. 
  • Applications where the UAS is likely to need to actively evade or deploy active physical or cyber countermeasures in response to a focused adversary. 
  • Deployed with the intent of being used as, or carrying, a weapon.

Intended Audience

It is anticipated that this project will initially focus on producing influential materials for a variety of audiences. These audiences may overlap, and it is possible that some guidance may simultaneously address multiple groups:

  • Non-technical politicians and the interested general public.
  • Public safety end users, managers, and procurement.
  • Vendors and researchers.
  • Risk management and insurance organizations.
  • Teaching academics.
Created November 20, 2023, Updated February 27, 2024