
Evaluating Bug Finders: Test and Measurement of Static Code Analyzers

Author(s)

Aurelien M. Delaitre, Bertrand C. Stivalet, Elizabeth N. Fong, Vadim Okun

Abstract

Software static analysis is one of many options for finding bugs in software. Like compilers, static analyzers take a program as input. This paper covers tools that examine source code, without executing it, and output bug reports. Static analysis is a complex and generally undecidable problem. Most tools resort to approximations to overcome these limitations, which sometimes leads to incorrect results. Therefore, tool effectiveness needs to be evaluated. Several characteristics of the tools should be examined. First, what types of bugs can they find? Second, what proportion of bugs do they report? Third, what percentage of findings is correct? These questions can be answered by one or more metrics. To calculate them, however, we need test cases with certain characteristics: statistical significance, ground truth, and relevance. Test cases with all three attributes are out of reach, but we can use combinations of two to calculate the metrics. The results in this paper were collected during Static Analysis Tool Exposition (SATE) V, where participants ran 14 static analyzers on the test sets we provided and submitted their reports to us for analysis. Tools had considerably different support for most bug classes. Some tools discovered significantly more bugs than others or generated mostly accurate warnings, while others reported incorrect findings more frequently. Using the metrics, an evaluator can compare candidates and select the tool that aligns best with his or her objectives. In addition, our results confirm that the bugs most commonly found by tools are among the most common and important bugs in software. We also observed that code complexity is a major hindrance for static analyzers, and we detail which code constructs tools handle well and which impede their analysis.
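The two quantitative questions in the abstract, what proportion of bugs a tool reports and what percentage of its findings is correct, correspond to recall and precision over a test suite with known ground truth. A minimal sketch of that computation follows; the function and the bug-location format are illustrative assumptions, not definitions from the paper.

```python
# Hypothetical sketch: scoring one static analyzer against a test suite
# with ground truth. Locations are modeled as "file:line" strings; the
# paper defines these metrics conceptually, not in code.

def evaluate_tool(reported_findings: set, ground_truth_bugs: set):
    """Return (recall, precision) for one tool's run.

    reported_findings: locations the tool flagged
    ground_truth_bugs: locations known to contain real bugs
    """
    true_positives = reported_findings & ground_truth_bugs
    recall = len(true_positives) / len(ground_truth_bugs)     # share of real bugs found
    precision = len(true_positives) / len(reported_findings)  # share of warnings that are correct
    return recall, precision

# Example: a tool flags 4 locations; 3 are real bugs out of 6 known bugs.
findings = {"a.c:10", "a.c:42", "b.c:7", "b.c:99"}
truth = {"a.c:10", "a.c:42", "b.c:7", "c.c:3", "c.c:18", "d.c:5"}
recall, precision = evaluate_tool(findings, truth)
print(recall, precision)  # 0.5 0.75
```

The trade-off the abstract describes falls out directly: a tool that reports many findings may raise recall while lowering precision, so an evaluator weighs the two numbers against his or her objectives.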
Conference Dates
May 23, 2015
Conference Location
Firenze
Conference Title
Complex FaUlts and Failures in LargE Software Systems (COUFLESS)

Keywords

software faults, software assurance, static analysis tools, software vulnerability

Citation

Delaitre, A., Stivalet, B., Fong, E. and Okun, V. (2015), Evaluating Bug Finders: Test and Measurement of Static Code Analyzers, Complex FaUlts and Failures in LargE Software Systems (COUFLESS), Firenze, [online], https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=918370 (Accessed November 21, 2024)

Created May 23, 2015, Updated May 4, 2021