
SATE VI: Classic Track


Last update: 10/24/2019

The SATE VI Workshop took place on 19 September 2019 at MITRE in McLean, VA.

The presentations are available for download from the Workshop's Program on the workshop page.

Introduction

The Classic Track combines the C and Java Tracks from the past SATEs. In SATE VI, we injected realistic vulnerabilities into large programs to assess the ability of tools to find bugs that matter.

Participants should run their tools on these test sets (below) and return their results in the SATE format, along with more detailed reports in their own format if they wish. The SAMATE team will analyze the reports, focusing solely on warnings related to the bugs it injected into the test sets.

Participants have until December 29, 2018 to run their tools and submit their reports. Feel free to contact 'aure' [at] 'nist.gov' with any questions or inquiries.

Participants will be acknowledged by name on the SATE website and in publications. However, specific results will be anonymized to encourage participation. Participants are free to release their results as they see fit.

Participation will remain undisclosed until the report submission deadline, and participants who would like to withdraw must do so before the deadline. If a participant withdraws, their intention to participate and decision to withdraw will never be disclosed.

How to Participate

To register for SATE VI, simply send an email, including the tool name and the author/organization, to 'aure' [at] 'nist.gov', letting us know you would like to participate.

Read the documentation below describing how to download and compile the test cases. Then run your tools on the test cases and convert the reports to the SATE format described below. Participants can also provide a richer report in a format of their choice, preferably web-based. We strongly encourage participants to report which parts of the code base were analyzed by the tool and which were not. This can be on a file-by-file, function-by-function, or line-by-line basis.

Send the reports to 'aure' [at] 'nist.gov' by December 29, 2018. Reports too large for email will be dealt with on a case-by-case basis.

The SAMATE team will analyze the reports after the submission deadline and present preliminary results at a workshop in the Fall of 2019 (date TBD). Participants will have the opportunity to present and discuss their experience at the workshop. A report presenting the final results will be published in Q4.

Timeline

  1. 2018-10-16: Registration opens.
  2. 2018-10-29: Test cases are released. Participants begin their analysis.
  3. 2018-12-29: Tool run period ends. Participants submit their tool reports.
  4. 2019-01-02: NIST starts analyzing the reports and providing feedback to the participants.
  5. 2019, Fall: SATE VI Workshop (Date TBD)
  6. 2019, EOY: Publication of the SATE VI Report (Date TBD)

Conditions for Tool Runs and Submissions

Teams run their tools and submit reports following specified conditions:

  • Teams can participate in either language track or both.
  • Teams cannot modify the code of the test cases, except possibly for comments (e.g., annotations).
  • For each test case, teams do one or more runs and submit the report(s).
    • Teams are encouraged to do a custom run (e.g., the tool is configured with custom rules). For a custom run, specify the affected settings (e.g., custom rules) in enough detail so that the run can be reproduced independently.
    • Teams may do a run that uses the tool in default configuration.
  • Teams cannot do any hand editing of tool reports. 
  • Teams convert the reports to a common XML format. See SATE output format for a description of the SATE format.
    • Teams are also encouraged to submit the original reports from their tools, in addition to the reports in the common output format.
  • Teams specify the environment (including the operating system and version of compiler) in which they ran the tool.

Code Coverage

In past SATEs, we witnessed surprising behaviour from otherwise excellent tools. For example, a tool would miss a simple intra-procedural weakness that it is fully capable of finding.

To better understand tool behaviour, we want to know which parts of the test cases have been analyzed by your tool and which have not. Since this information is not typically provided in tool reports, we leave it to the participants to decide how to report their tool's code coverage. It could be on a line-of-code, function or file basis.

Reporting code coverage is optional, but we will use it to calculate recall only on code that has been analyzed, leaving out code that has been missed. (Recall on the full code base will also be calculated.)
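To make the two recall figures concrete, here is a small sketch of coverage-adjusted recall; the file names, bug counts, and coverage sets below are hypothetical, not actual SATE data:

```python
# Hypothetical sketch: recall restricted to analyzed files vs. the full code base.
# File names and bug counts are illustrative only.

injected_bugs = {"a.c": 3, "b.c": 2, "c.c": 5}   # file -> injected bugs
found_bugs = {"a.c": 2, "b.c": 2}                # bugs the tool reported correctly
analyzed_files = {"a.c", "b.c"}                  # coverage reported by the participant

def recall(bugs, found, files=None):
    """Recall = found / injected, optionally restricted to analyzed files."""
    keys = set(bugs) if files is None else (set(bugs) & files)
    total = sum(bugs[f] for f in keys)
    hit = sum(found.get(f, 0) for f in keys)
    return hit / total if total else 0.0

print(recall(injected_bugs, found_bugs))                  # → 0.4 (4 of 10 injected bugs)
print(recall(injected_bugs, found_bugs, analyzed_files))  # → 0.8 (4 of 5 on analyzed code)
```

A tool with low full-code-base recall but high coverage-adjusted recall is finding most of what it actually looks at, which is exactly the distinction this reporting is meant to surface.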

Any feedback you may have will be appreciated.

Docker

In SATE VI, the test cases are shipped as Docker containers to facilitate the participants' work. These containers are lightweight and contain all dependencies necessary to compile the test cases. They are preconfigured with proper compilation options and will compile the test cases automatically when they are built.

Instructions and links to install Docker are available here.
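The actual Dockerfiles are provided per test case in the sections below. Purely as an illustration of the pattern they follow, here is a hypothetical container definition; the base image, package list, and build commands are assumptions, not the NIST-provided contents:

```dockerfile
# Hypothetical sketch of a SATE-style test-case container.
# The real Dockerfiles provided by NIST differ in base image, packages, and paths.
FROM ubuntu:18.04

# Build dependencies for the test case (illustrative package list).
RUN apt-get update && apt-get install -y build-essential pkg-config

# Buggy and fixed source trees are baked into the image.
COPY sources_buggy /sources_buggy
COPY sources_fixed /sources_fixed

# Compile both versions at image-build time, with fixed compiler options,
# so every participant analyzes an identically configured build.
RUN cd /sources_buggy && ./configure && make
RUN cd /sources_fixed && ./configure && make

CMD ["bash"]
```

This is why the instructions below say the test case "will automatically compile while the image is being built": compilation happens in the `RUN` steps of `docker build`, not when the container is started.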

C Track

Wireshark 1.2.0

Wireshark is a network protocol analyzer. Its large code base, complexity, and attack surface make it an interesting candidate for static analysis testing. For SATE VI, we mined buffer errors and pointer issues from CVEs and the Wireshark bug tracker. We manually injected extra bugs to reach 30 buffer errors and 30 pointer issues. Our analysis will focus on these bugs only. The test case contains a buggy and a fixed version. Participants are to run their tool on both separately.

Instructions

  1. Download the Dockerfile.
     
  2. Create a Docker image from the Dockerfile:

    docker build -t sate6-wireshark -f Dockerfile.wireshark .

    The test case will automatically compile while the image is being built. Compilation instructions can be retrieved from the Dockerfile.
     

  3. Enter the Docker container:

    docker run -it sate6-wireshark bash

    You now have a shell running inside the container.
     

  4. The source code is located in directories:
    • /sources_buggy: Bug-riddled version of Wireshark.
    • /sources_fixed: "Bug-free" version of Wireshark.
       
  5. Run your tool on both code bases and send us the reports in the SATE format.

DARPA Cyber Grand Challenge Set

"After the Cyber Grand Challenge (CGC), DARPA released the source code for over 100 challenge sets (CS). These programs approximate real software with enough complexity and a sufficient variety of flaws to stress both manual and automated vulnerability discovery." -- Trail of Bits

The CGC test suite is a collection of programs riddled with security vulnerabilities. Participants are to run their tool on these programs. Note that no fixed version of the test suite is provided for SATE VI.

Instructions

  1. Download the Dockerfile.
  2. Create a Docker image from the Dockerfile:

    docker build -t sate6-cgc -f Dockerfile.cgc .

    The test case will automatically compile while the image is being built. Compilation instructions can be retrieved from the Dockerfile.
     

  3. Enter the Docker container:

    docker run -it sate6-cgc bash

    You now have a shell running inside the container.
     

  4. The source code is located in directories:
    • /sources_buggy: Bug-riddled version of the CGC test suite.
       
  5. Run your tool on the code base and send us the reports in the SATE format.

SQLite 3.21

SQLite is a relational database management system. In SATE VI, NIST used an automated bug injection tool, designed by GrammaTech, to inject buffer errors in the source code. The test case contains a buggy and a fixed version. Participants are to run their tool on both separately.

The bug injector is based on the Software Evolution Library and is developed independently from GrammaTech's static analyzer, CodeSonar. For more information about the GrammaTech bug injector, please visit this page: https://go.grammatech.com/bug-injector/

Instructions

  1. Download the Dockerfile.
     
  2. Create a Docker image from the Dockerfile:

    docker build -t sate6-sqlite -f Dockerfile.sqlite .

    The test case will automatically compile while the image is being built. Compilation instructions can be retrieved from the Dockerfile.
     

  3. Enter the Docker container:

    docker run -it sate6-sqlite bash

    You now have a shell running inside the container.
     

  4. The source code is located in directories:
    • /sources_buggy: Bug-riddled version of SQLite.
    • /sources_fixed: "Bug-free" version of SQLite.
       
  5. Run your tool on both code bases and send us the reports in the SATE format.

Java Track

DSpace 6.2

DSpace is an open source repository software package typically used for creating open access repositories. In SATE VI, we injected 30 cross-site scripting vulnerabilities in its code base. Our analysis will focus on these bugs only. The test case contains a buggy and a fixed version. Participants are to run their tool on both separately.

Instructions

  1. Download the Dockerfile.
     
  2. Create a Docker image from the Dockerfile:

    docker build -t sate6-dspace -f Dockerfile.dspace .

    The test case will automatically compile while the image is being built. Compilation instructions can be retrieved from the Dockerfile.
     

  3. Enter the Docker container:

    docker run -it sate6-dspace bash

    You now have a shell running inside the container.
     

  4. The source code is located in directories:
    • /sources_buggy: Bug-riddled version of DSpace.
    • /sources_fixed: "Bug-free" version of DSpace.
       
  5. Run your tool on both code bases and send us the reports in the SATE format.

Sakai 11.2

Sakai is a customizable learning management system. In SATE VI, we injected 30 SQL injection vulnerabilities in its code base. Our analysis will focus on these bugs only. The test case contains a buggy and a fixed version. Participants are to run their tool on both separately.

Instructions

  1. Download the new Dockerfile (the original file is available here, but is no longer working).
     
  2. Create a Docker image from the Dockerfile:

    docker build -t sate6-sakai -f Dockerfile.sakai .

    The test case will automatically compile while the image is being built. Compilation instructions can be retrieved from the Dockerfile.
     

  3. Enter the Docker container:

    docker run -it sate6-sakai bash

    You now have a shell running inside the container.
     

  4. The source code is located in directories:
    • /sources_buggy: Bug-riddled version of Sakai.
    • /sources_fixed: "Bug-free" version of Sakai.
       
  5. Run your tool on both code bases and send us the reports in the SATE format.

SATE VI Output Format

The SATE VI output format is a simplified XML format that lists warnings issued by static analysis tools. If need be, NIST is willing to work with participants to convert their tool output to the SATE format.

Alternatively, NIST will accept reports in the OASIS Static Analysis Results Interchange Format (SARIF). More information about the SARIF format can be found on the OASIS consortium website.

In addition to a report in the SATE VI or SARIF format, NIST encourages participants to submit their regular tool output report, preferably in an easily readable format (text, HTML, XML, etc.)

Format description

The XML format is described hierarchically below. Mandatory elements are labelled "M" and optional ones "O". Recurring elements are labelled "R" and single elements "S". For example, an optional single element would be labelled "OS" while a mandatory recurring element would be labelled "MR".

  • (MS) report: Document root.
    • (MS) tool_name: Name of the tool that produced this report.
    • (MS) tool_version: Version of the tool that produced this report.
    • (OR) weakness: Recurring element describing a single tool warning:
      • (MS) id: Unique ID for this warning.
      • (OS) tool_id: Tool-specific ID for this warning.
      • (MS) name: Name/class of the warning, e.g. "Buffer Overflow".
      • (MS) explanation: Original message from the tool, explaining the reasoning behind the warning.
      • (MS) grade: Element describing the severity of the warning:
        • (MS) severity: Warning severity, from 1 (most impactful) to 5 (least impactful).
        • (OS) tool_rank: Tool-specific severity for the warning.
      • (MR) trace: Recurring element describing a control/data flow leading to the weakness. It contains a sequence of locations in code:
        • (MR) location: Recurring element describing one step of the trace sequence. It represents a block of code, most often a single line of code.
          • (OS) cwe: CWE number if this location is key to the warning.
          • (MS) path: Full path to the file containing this location.
          • (MS) line: Line in the file for this location.
          • (OS) length: Length of the block described by this location (default: 1).
          • (OS) fragment: Code snippet of interest.
          • (OS) comment: Reason why this particular location was reported.
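As a minimal sketch of emitting a report in this hierarchy, the Python standard library is sufficient; the tool name, CWE, paths, and line numbers below are placeholders, not output from any real tool run:

```python
import xml.etree.ElementTree as ET

# Build a minimal SATE VI report: one weakness, one trace, one location.
# All element values are placeholders for illustration.
report = ET.Element("report")
ET.SubElement(report, "tool_name").text = "ExampleTool"
ET.SubElement(report, "tool_version").text = "0.1"

weakness = ET.SubElement(report, "weakness")
ET.SubElement(weakness, "id").text = "1"
ET.SubElement(weakness, "name").text = "Buffer Overflow"
ET.SubElement(weakness, "explanation").text = "Index may exceed buffer size."

grade = ET.SubElement(weakness, "grade")
ET.SubElement(grade, "severity").text = "2"          # 1 (most) to 5 (least impactful)

trace = ET.SubElement(weakness, "trace")             # one control/data flow path
loc = ET.SubElement(trace, "location")
ET.SubElement(loc, "cwe").text = "125"               # CWE number at the key location
ET.SubElement(loc, "path").text = "/sources_buggy/example.c"
ET.SubElement(loc, "line").text = "42"

print(ET.tostring(report, encoding="unicode"))
```

Every mandatory single ("MS") element above appears exactly once, and the optional ones (tool_id, tool_rank, length, fragment, comment) are simply omitted.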

Format Changes

The format has been significantly modified since SATE V, mostly to simplify it:

  • All attributes have been replaced by elements for homogeneity.
  • Some elements have been renamed:
    • tool_id (was: tool_specific_id)
    • explanation (was: output)
    • tool_rank (was: tool_specific_rank)
    • comment (was: explanation)
  • The trace element has been introduced to clarify the reporting of multiple control/data flow paths in a single warning. A trace is a sequence of location elements. Each trace corresponds to one of these separate paths. (See example below.)
  • CWE numbers should now be included in the location element instead of the weakness element, offering a more precise description of weakness chains and composites. (See example below.)
  • Some elements have been removed:
    • output and its subelements textoutput, htmloutput and xmloutput are now a simple string named explanation.
    • probability has been removed.
    • evaluation and other elements and attributes used to publish NIST analysis results have been removed.
    • Attribute type has been removed.
  • Note that some elements have been re-ordered.

Validation

NIST provides an XML schema to ensure the reports are properly formatted. The latest schema file can be downloaded here.

Several XML tools provide schema verification. Here's a simple example with xmllint (from package libxml2-utils):

xmllint --schema sate6-format.xsd tool_report1.xml

Example

This example is a report containing two warnings:

  1. A null pointer dereference.
  2. A buffer overread caused by an integer overflow, on two different paths.
<report>
  <tool_name>Tool Name</tool_name>
  <tool_version>1.0</tool_version>
  <weakness>
    <id>1</id>
    <tool_id>XYZ0001</tool_id>
    <name>BUG_NPD</name>
    <explanation>There's a NULL pointer dereference in your code because pointer "p" is null and dereferenced.</explanation>
    <grade>
      <severity>2</severity>
      <tool_rank>7/10</tool_rank>
    </grade>
    <trace>
      <location>
        <cwe>476</cwe>
        <path>/sources_buggy/some_path/buggy_file1.c</path>
        <line>456</line>
        <fragment>char* p = NULL;</fragment>
        <comment>Pointer "p" is declared and initialized to NULL.</comment>
      </location>
      <location>
        <path>/sources_buggy/some_path/buggy_file1.c</path>
        <line>467</line>
        <fragment>if (p == NULL)</fragment>
        <comment>Taking true branch.</comment>
      </location>
      <location>
        <cwe>476</cwe>
        <path>/sources_buggy/some_path/buggy_file1.c</path>
        <line>469</line>
        <fragment>p[3] = 'f';</fragment>
        <comment>Pointer "p" is NULL and dereferenced here.</comment>
      </location>
    </trace>
  </weakness>
  <weakness>
    <id>2</id>
    <tool_id>XYZ0002</tool_id>
    <name>BUG_BO</name>
    <explanation>A buffer overflow occurs in your code because an index was miscalculated on two paths.</explanation>
    <grade>
      <severity>1</severity>
      <tool_rank>9/10</tool_rank>
    </grade>
    <trace>
      <location>
        <cwe>190</cwe>
        <path>/sources_buggy/some_path/buggy_file2.c</path>
        <line>422</line>
        <fragment>unsigned int index = 0;</fragment>
        <comment>Unsigned integer "index" is declared and initialized to zero.</comment>
      </location>
      <location>
        <cwe>190</cwe>
        <path>/sources_buggy/some_path/buggy_file2.c</path>
        <line>434</line>
        <fragment>index--;</fragment>
        <comment>Decrementing "index" causes it to overflow.</comment>
      </location>
      <location>
        <cwe>125</cwe>
        <path>/sources_buggy/some_path/buggy_file2.c</path>
        <line>496</line>
        <fragment>x = buf[index];</fragment>
        <comment>"index" is larger than buffer "buf", causing an out-of-bounds read.</comment>
      </location>
    </trace>
    <trace>
      <location>
        <cwe>190</cwe>
        <path>/sources_buggy/some_path/buggy_file2.c</path>
        <line>422</line>
        <fragment>unsigned int index = 0;</fragment>
        <comment>Unsigned integer "index" is declared and initialized to zero.</comment>
      </location>
      <location>
        <cwe>190</cwe>
        <path>/sources_buggy/some_path/buggy_file2.c</path>
        <line>471</line>
        <fragment>index -= offset;</fragment>
        <comment>"offset" can be larger than "index", causing "index" to overflow.</comment>
      </location>
      <location>
        <cwe>125</cwe>
        <path>/sources_buggy/some_path/buggy_file2.c</path>
        <line>496</line>
        <fragment>x = buf[index];</fragment>
        <comment>"index" is larger than buffer "buf", causing an out-of-bounds read.</comment>
      </location>
    </trace>
  </weakness>
</report>
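On the consuming side, a report in this format can be walked with standard XML tooling. Here is a minimal sketch using an abbreviated inline report (a cut-down stand-in with the same structure, not the full example above):

```python
import xml.etree.ElementTree as ET

# Abbreviated report for illustration; mirrors the structure of a SATE report
# with one single-trace warning and one warning reported on two paths.
xml = """<report>
<tool_name>Tool Name</tool_name><tool_version>1.0</tool_version>
<weakness><id>1</id><name>BUG_NPD</name>
<trace><location><path>/sources_buggy/f.c</path><line>469</line></location></trace>
</weakness>
<weakness><id>2</id><name>BUG_BO</name>
<trace><location><path>/sources_buggy/g.c</path><line>434</line></location></trace>
<trace><location><path>/sources_buggy/g.c</path><line>471</line></location></trace>
</weakness>
</report>"""

root = ET.fromstring(xml)
for w in root.findall("weakness"):
    # Each trace is one control/data flow path leading to the same weakness.
    print(w.findtext("name"), "paths:", len(w.findall("trace")))
```

Counting trace elements per weakness, as above, is how a consumer distinguishes a single-path warning from one reported along several separate paths.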

 

Created March 22, 2021, Updated August 26, 2024