This page credits those who contributed test suites (sets of test cases) to the SARD and includes a short description of each suite. These test suites represent considerable intellectual effort: reducing reported vulnerabilities to examples, classifying them, generating elaborations of particular flaws, devising corresponding correct examples, and so on. We gratefully acknowledge the people, groups, companies, and other organizations who have generously shared their work with everyone.
For the design philosophy of the dataset, please refer to the Software Assurance Reference Dataset. For user guidance and information on enhancements and known bugs, please refer to the SARD user manual.
Contributors are listed in alphabetical order:
- [ABM06] Fortify Software Inc., now HP Fortify, contributed ABM 1.0.1, a collection of small, synthetic C programs covering various software security flaws, each in flawed and flaw-free (good or fixed) forms. This is an update of [FSI05]. These test cases are in test suite 6.
- [BCS15] Bertrand C. Stivalet and Aurelien Delaitre designed the architecture and oversaw development of a test generator by TELECOM Nancy students to create 42 212 test cases in PHP. See Bertrand Stivalet and Elizabeth Fong, "Large Scale Generation of Complex and Faulty PHP Test Cases," 2016 IEEE International Conference on Software Testing, Verification and Validation (ICST), Chicago, IL. Test suite 103.
- [BCS16] Bertrand C. Stivalet and Aurelien Delaitre designed the architecture and oversaw development of a more modular and extensible test generator, based on [BCS15], by TELECOM Nancy students to create 32 003 test cases in C#. Test suite 105.
- [CAS10] National Security Agency's Center for Assured Software created over 45 000 test cases in C/C++ and 14 000 in Java covering over 100 CWEs, called the Juliet test suite. They can be compiled individually, in groups, or all together. The C/C++ or Java cases and supporting files can be downloaded from the Test Suites page. Individually they are in test suites 68 (C/C++) and 69 (Java). This was superseded by Juliet 1.1 [CAS12].
- [CAS12] National Security Agency's Center for Assured Software released Juliet 1.1, which extends the suite to over 57 000 test cases in C/C++ and almost 24 000 in Java. They can be compiled individually, in groups, or all together. It is described in IEEE Computer, October 2012. The C/C++ or Java cases and supporting files can be downloaded from the Test Suites page. This was superseded by Juliet 1.2 [CAS13].
- [CAS13] National Security Agency's Center for Assured Software updated its Juliet test suite to version 1.2. The new suite contains over 61 000 test cases in C/C++ and 25 000 in Java. The C/C++ or Java cases and supporting files can be downloaded from the Test Suites page. Individually they are in test suites 86 (C/C++) and 87 (Java). This was superseded by Juliet 1.3 [NIST17].
- [CAS20] National Security Agency's Center for Assured Software created almost 29 000 test cases in C# covering 105 CWEs, called the Juliet test suite for C# version 1.3. The C# test cases and supporting files can be downloaded from the Test Suites page.
- [DRDC06] Frédéric Michaud and Frédéric Painchaud, Defence R&D Canada (http://www.drdc-rddc.gc.ca/), created 25 C++ test cases. These test cases, plus a 26th with a main() including them all, are in test suite 62. Jeffrey Meister, NIST, entered them.
- [FSI05] Fortify Software Inc., now HP Fortify, contributed a collection of C programs that manifest various software security flaws, including (1) Buffer Overflow, (2) Format String Vulnerability, (3) Untrusted Search Path, (4) Memory Leak, (5) Double Free Vulnerability, (6) Race Condition, (7) Direct Dynamic Code Evaluation, and (8) Information Leak, among others.
- [HH11] Hamda Hasan contributed C# (including ASP.NET) test cases with cross-site scripting (XSS), SQL injection, command injection, and hard-coded password weaknesses. View these test cases.
- [IARPA12] The Intelligence Advanced Research Projects Activity (IARPA) created a test suite for Phase 1 of the Securely Taking On New Executable Software Of Uncertain Provenance (STONESOUP) program. The test suite consists of small C and Java programs, along with inputs that trigger the vulnerabilities and directions to build and execute the programs. It comprises five collections of test cases: memory corruption for C, null pointer dereference for C, injection for Java, numeric handling for Java, and tainted data for Java. The five collections (about 450 test cases) may be downloaded from the Test Suites page.
- [IARPA14] The Intelligence Advanced Research Projects Activity (IARPA) created a test suite for Phase 3 of the Securely Taking On New Executable Software Of Uncertain Provenance (STONESOUP) program. The test suite is a collection of 7770 C and Java test cases based on 16 widely used open source programs in which vulnerabilities have been seeded. IARPA STONESOUP documents are available here. It may be downloaded as a virtual machine from the Test Suites page. Alternatively, the test cases can be viewed and downloaded individually as test suite 102.
- [ITC14] Toyota InfoTechnology Center (ITC), U.S.A., contributed static analysis benchmarks for undefined behavior and concurrency weaknesses: 100 test cases in C and C++ containing a total of 685 pairs of intended weaknesses. The test cases are © 2012-2014 and distributed under the "BSD License." See Shin'ichi Shiraishi, Veena Mohan, and Hemalatha Marimuthu, "Test Suites for Benchmarks of Static Analysis Tools," IEEE Int'l Symp. on Software Reliability Engineering (ISSRE '15), DOI: 10.1109/ISSREW.2015.7392027. Test suite 104.
- [KLOC06] Klocwork Inc. shared 41 cases in C and C++ from their regression test suite. The test cases are © 2000-2005 Klocwork Inc. All rights reserved. See the test cases for details. Since then, some cases have been deprecated and replaced. Test suite 106.
- [MLL05a] MIT Lincoln Laboratory developed a comprehensive taxonomy of C program buffer overflows and 291 diagnostic C code test cases representing this taxonomy. Each test case has three flawed versions (with buffer overflows just outside, moderately outside, and far outside the buffer boundary) and a patched version (without buffer overflow); a rough sketch of this structure appears after the contributor list. Examples of using these test cases are in Kratkiewicz and Lippmann, "A Taxonomy of Buffer Overflows for Evaluating Static and Dynamic Software Testing Tools." Test suite 89.
- [MLL05b] MIT Lincoln Laboratory extracted 14 model programs from popular internet applications (BIND, Sendmail, WU-FTP) with known, exploitable buffer overflows. These programs retain the portions of code containing the overflows. Patched versions are also available. Examples of using these model programs are in Zitser, Lippmann, and Leek, "Testing Static Analysis Tools Using Exploitable Buffer Overflows From Open Source Code," DOI: 10.1145/1029894.1029911. These 28 test cases are test suite 88 and test cases 1283 to 1310.
- [MS06] Michael Sindelar, UMass-Amherst and NIST, wrote test cases for threading.
- [NIST17] Paul E. Black, Charles de Oliveira, and Eric Trapnell updated the Juliet 1.2 test suite [CAS13], originally from National Security Agency's Center for Assured Software, to version 1.3. They fixed more than a dozen problems affecting thousands of test cases, including getting all but Windows-specific cases to compile on Linux, and added 6140 cases of overflow or underflow from the pre- and post-increment and decrement operators. The C/C++ or Java cases and supporting files can be downloaded from the Test Suites page. Individually, they are in test suites 108 (C/C++) and 109 (Java).
- [RC06] Roderick Chapman, Altran Praxis, contributed an out-of-bounds array access case (1484) in which the violation occurs if the compiler generates code one way but not if it generates it another way; the C language does not specify which. An illustrative sketch of this kind of flaw appears after the contributor list.
- [RCS06] Robert C. Seacord contributed 69 examples from Secure Coding in C and C++. Romain Gaucher, NIST, wrote the descriptions and entered the examples.
- [SSW05] Secure Software Inc. published CLASP (Comprehensive, Lightweight Application Security Process) Volume 1.1 Training Manual, in 2005. Chapter 5, Vulnerability Root-Causes, has coding examples of software vulnerabilities. SARD adopted some of them as test cases.
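The four-version structure of the [MLL05a] cases can be conveyed with a minimal, hypothetical sketch. The real test cases vary many more taxonomy attributes and use different offsets and idioms, so the code below is only an illustration of the layout, not an actual SARD case:

```c
/* Hypothetical sketch only, not an actual [MLL05a] test case: each function
 * writes one character at a different distance from the end of a ten-element
 * buffer, mirroring the just/moderately/far outside versions plus the patch. */
void just_outside(void)       { char buf[10]; buf[10]   = 'A'; }  /* one element past the end    */
void moderately_outside(void) { char buf[10]; buf[17]   = 'A'; }  /* several elements past       */
void far_outside(void)        { char buf[10]; buf[4105] = 'A'; }  /* far beyond the end          */
void patched(void)            { char buf[10]; buf[9]    = 'A'; }  /* last valid element: no flaw */
```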
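The compiler-dependent access described in [RC06] is of a kind that a short, hypothetical sketch can illustrate (this is not case 1484): whether the index is out of bounds below depends on the order in which the compiler evaluates the function arguments, and C leaves that order unspecified.

```c
#include <stdio.h>

static int i = 0;
static int next(void) { return i++; }                    /* advances the shared index */
static int sum(int x, int y, int z) { return x + y + z; }

int main(void) {
    int a[2] = {10, 20};
    /* C does not specify the order in which sum()'s arguments are evaluated.
     * If a[i] is evaluated before either call to next(), i is still 0 and the
     * access is in bounds; if it is evaluated after both calls, i is 2 and
     * a[i] reads past the end of the array. */
    printf("%d\n", sum(next(), next(), a[i]));
    return 0;
}
```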
Non-NIST Publications on SARD Content
These publications either use SARD cases or comment directly on them. They are listed newest first.
Matteo Mauro, Regole di Programmazione per la Safety e Security: Analisi, Strumenti e Relazioni (Programming Rules for Safety and Security: Analysis, Tools and Relations), Bachelor Thesis, Università degli Studi di Firenze, 2018, unpublished. Mauro ran several static analyzers for MISRA rules on some Juliet test cases to study which MISRA rules can also be helpful for security.
Gabriel Díaz and Juan Ramón Bermejo, Static analysis of source code security: Assessment of tools against SAMATE tests, Information and Software Technology, 55(8):1462–1476, August 2013, DOI: 10.1016/j.infsof.2013.02.005. "The study compares the performance of nine tools (CBMC, K8-Insight, PC-lint, Prevent, Satabs, SCA, Goanna, Cx-enterprise, Codesonar) ... against SAMATE Reference Dataset test suites 45 and 46 for C language."
Anne Rawland Gabriel, NIST Tool Boosts Software Security, FedTech, 8 February 2013. "Using the SARD test suites for internal testing and evaluation allows our researchers to gain insight into how their technology fares against a wide range of vulnerabilities ..."
Robert Auger, NIST publishes 50kish vulnerable code samples in Java/C/C++, is officially krad, cgisecurity.com blog, 31 March 2011.
He calls the Juliet test suite "a fantastic project" and says, "If you're new to software security and wish to learn what vulnerabilities in code look like, this is a great central repository ..."
Cristina Cifuentes, Christian Hoermann, Nathan Keynes, Lian Li, Simon Long, Erica Mealy, Michael Mounteney, and Bernhard Scholz, BegBunch: benchmarking for C bug detection tools, Proc. 2nd International Workshop on Defects in Large Software Systems; held in conjunction with Int'l Symposium on Software Testing and Analysis (ISSTA 2009), Chicago, Illinois, July 2009.
Describes BegBunch and compares it with SARD and other collections.
Henny Sipma, SAMATE Case Analysis Report, Kestrel Technology, April 2008.
The description is "An application of CodeHawk to a NIST benchmark suite." The first page reads "CodeHawk Buffer-overflow Analysis Report: Benchmarks 115-1278". CodeHawk found a previously unrecognized underflow vulnerability in case 834.
John Anton, Eric Bush, Allen Goldberg, Klaus Havelund, Doug Smith, and Arnaud Venet, Towards the Industrial Scale Development of Custom Static Analyzers, Kestrel Technology, 2006.
"The SAMATE database will provide the basis for studying the specification language." Specifically uses cases 1314 and 54.
SARD Mentioned in Passing
Redge Bartholomew, Evaluation of Static Source Code Analyzers for Safety-Critical Software Development, 1st International Workshop on Aerospace Software Engineering (AeroSE 07), 21-22 May 2007.
Robert C. Seacord and Jason A. Rafail, Secure Coding Standards, Cyber Security and Information Intelligence Research Workshop (CSIIRW 2007), 14-15 May 2007.