Test Results for String Search Tool:
X-Ways Forensics


Text formatted like this (bold, italics and green) should be removed and replaced; such text contains instructions to the report writer.

Begin with a cover page, an introductory section and, optionally, a table of contents.

How to Read This Report



This report is organized into the following sections:
If sections are reorganized, update this list accordingly.
  1. Tested Tool Description. The tool name, version, and vendor information are listed.
  2. Testing Organization. Contact information and approvals.
  3. Results Summary. Identifies any significant anomalies observed in the test runs, provides a narrative of key findings indicating where the tool met expectations, summarizes any ways in which it did not, and records observations of interest about the tool or about testing it, including any observed limitations or organization-imposed restrictions on tool use.
  4. Test Environment & Selected Cases. Describes the hardware, software and support environment used in testing (e.g., the version of Federated Testing used, device firmware versions, etc.) and lists the applicable test cases selected from the Federated Testing String Search Test Suite.
  5. Test Result Details by Case. Automatically generated test results that identify anomalies.

Tested Tool Description

Tool Name: X-Ways Forensics
Tool Version: Insert tool version.
Vendor: Insert vendor name and contact information.

Testing Organization

The following items in this section may be included or omitted according to the organization's policy for tool test reports.
Organization conducting test: Insert Organization name.
Contact: Insert contact name.
Report date: Insert date.
Authored by: Insert author name.
Reviewed by: Insert name of reviewer.
Reviewed by date: Insert date.
Approved: Insert name of approving official.
Approved by date: Insert date.

This test report was generated using CFTT's Federated Testing Forensic Tool Testing Environment; see the Federated Testing Home Page.

Results Summary

Provide a narrative of key findings: did the tool meet expectations? If not, summarize how the tool did not meet expectations. Provide any observations of interest about the tool or about testing the tool, including any observed limitations or organization-imposed restrictions on tool use.

Test Environment & Selected Cases

This section describes test hardware, software, test data sets and test cases.

Test Hardware and Software

Hardware

List and describe any hardware used during testing in sufficient detail to repeat the tests.

Software

List and describe any additional software used during testing in sufficient detail to repeat the tests.

Test Data Sets

String search test data set package Version 1.1 was used. The package can be downloaded from either the CFTT web site (www.cftt.nist.gov, then select String Searching) or the CFReDS web site (www.cfreds.nist.gov). The package includes two dd files with known content: one test image contains target strings within FAT, exFAT and NTFS file systems (Windows); the other contains target strings within HFS+ journaled case-insensitive (OSXJ), HFS+ journaled case-sensitive (OSXC), ext4 and APFS (Apple File System) file systems (UNIX-like).

In general, each target string is encoded in ASCII and located in an active file and a recoverable deleted file in each partition of the test image. The Windows dd image also has a block of unallocated storage that contains the target strings without a file system. Some target strings are also encoded in Unicode UTF-8, UTF-16BE and UTF-16LE with a byte-order-mark. Test case FT-SS-09 covers specific situations: formatted strings, strings spanning file fragments, UTF-16 without a byte-order-mark, Unicode text with and without combining characters (diacritic marks), Unicode text with and without ligatures ("fi" as two characters and as one character) and strings located in inaccessible areas. Each instance of a target string is followed immediately by a unique string ID; the string ID helps identify the specific string matched by the search tool.
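These encoding variants matter because the same target string produces different byte patterns on disk. The following minimal sketch (Python) illustrates this with one of the Latin target strings; the string-ID layout shown in the final comment is hypothetical, as the test data defines its own IDs.

```python
# Minimal sketch: how one target string's bytes differ by encoding.
target = "garçon"

print(target.encode("utf-8"))      # b'gar\xc3\xa7on'
print(target.encode("utf-16-be"))  # big-endian, no byte-order-mark
print(target.encode("utf-16-le"))  # little-endian, no byte-order-mark
print(target.encode("utf-16"))     # Python's utf-16 codec prepends a BOM

# Each instance in the test data is followed immediately by a unique
# string ID, e.g. (hypothetical layout): target bytes + b"<id-0042>"
```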

Test Case Descriptions

The following table gives a brief description of available test cases in the data sets. Not all test cases are used for all data sets.
You can delete the row in the table for any cases not used.
Case                  | Case Description
FT-SS-01              | Search ASCII
FT-SS-02              | Search Ignore Case
FT-SS-03              | Search for Words
FT-SS-04              | Search Logical AND
FT-SS-05              | Search Logical OR
FT-SS-06              | Search Logical NOT
FT-SS-07-CJK-char     | Search Unicode Chinese/Japanese ideograms (Asian)
FT-SS-07-CJK-hangul   | Search Unicode CJK Korean Hangul (Asian)
FT-SS-07-CJK-kana     | Search Unicode CJK Japanese phonetic Kana (Asian)
FT-SS-07-Cyrillic     | Search Unicode Cyrillic (Russian)
FT-SS-07-Latin        | Search Unicode Latin (French & German)
FT-SS-07-NoBOM        | Search Unicode UTF-16 without a byte-order-mark
FT-SS-07-Norm         | Normalized Search of Unicode text with diacritic marks (NFC & NFD) and ligatures (NFKC & NFKD)
FT-SS-07-RTL          | Search Unicode RTL (Arabic)
FT-SS-08-Email        | Search Tool-defined Queries -- Email Address
FT-SS-08-Phone        | Search Tool-defined Queries -- Telephone Number
FT-SS-08-SS           | Search Tool-defined Queries -- Social Security Number
FT-SS-09-Doc          | Search Formatted Document Text
FT-SS-09-Frag         | Search Fragmented File
FT-SS-09-Lost         | Search Inaccessible (lost) Areas
FT-SS-09-MFT          | Search File in MFT
FT-SS-09-Meta         | Search file name substring in meta-data
FT-SS-09-Stem         | Search for matches to word stem
FT-SS-10-Hex          | Search Hexadecimal Character Match
FT-SS-10-Regex        | Search Pattern Character Match

Some test cases exercise specific features, e.g., logical conditions (AND, OR, NOT), built-in searches (email addresses, telephone numbers), etc.; a sketch of what such built-in queries might match follows below. Three test cases, FT-SS-09-Frag, FT-SS-09-Lost & FT-SS-09-MFT, are applied only to the Windows data set.
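For the FT-SS-08 cases, the tool supplies its own query definitions for artifacts such as email addresses, telephone numbers and Social Security numbers. As a rough illustration only, the following Python sketch uses simplified, hypothetical patterns; each tool defines its own query syntax and coverage, and the sample data is fabricated for the example.

```python
import re

# Illustrative patterns only -- not the tested tool's actual queries.
EMAIL = re.compile(rb"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
PHONE = re.compile(rb"\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}")
SSN   = re.compile(rb"\b\d{3}-\d{2}-\d{4}\b")

sample = b"Call (301) 975-2000 or write cftt@nist.gov; SSN 123-45-6789."
for name, pat in (("email", EMAIL), ("phone", PHONE), ("ssn", SSN)):
    print(name, [m.group() for m in pat.finditer(sample)])
```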

If a test case applies to a feature that is not supported by the tested tool, omit the case and list it here.

Test Result Details by Case (per Data Set)

This section presents test results grouped by function.

A string search tool may implement more than one search algorithm (also known as a search engine) for searching text. The two most common search engines are indexed search and live search. An indexed search reads all the acquired data once, before any searching, and builds an index of all words found; each query can then be looked up quickly in the index. A live search reads all the acquired data once for each query. The sketch below illustrates the difference.
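A minimal Python sketch of the two approaches, simplified to ASCII word tokens; real engines also handle multiple encodings, case folding, deleted content and unallocated space.

```python
import re
from collections import defaultdict

def live_search(data: bytes, needle: bytes):
    """Live search: scan all acquired data once per query."""
    return [m.start() for m in re.finditer(re.escape(needle), data)]

def build_index(data: bytes):
    """Indexed search: read the data once, recording each token's offsets."""
    index = defaultdict(list)
    for m in re.finditer(rb"[A-Za-z0-9]+", data):
        index[m.group().lower()].append(m.start())
    return index

data = b"The DireWolf and the WereWolf crossed paths."
idx = build_index(data)               # one full pass over the data
print(live_search(data, b"WereWolf")) # another full pass, per query
print(idx[b"direwolf"])               # fast lookup, no further passes
```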

This section presents test results by test image (Windows file systems, UNIX-like file systems, or both). For each test image there is a result table for each search engine tested. Each table shows, by test case, the number of expected search hits, the number of actual search hits and the number of strings missed (i.e., expected hits minus actual hits) for allocated files, deleted files and unallocated space.

The following search engines were tested: Live.

Results for Data Set: Windows

This section provides results for the Windows data set.

Results for Live Search of Windows Data Set

The table columns contain the following information:

  • Case: The test case identifier.
  • Expected String: The expected string that should be reported by the search.
  • Active Files: A group of three columns (Expected, Hits and Misses) giving the number of hits and misses when searching for the expected string in an active file.
  • Deleted Files: A group of three columns (Expected, Hits and Misses) giving the number of hits and misses when searching for the expected string in a deleted file.
  • Unallocated Space: A group of three columns (Expected, Hits and Misses) giving the number of hits and misses when searching for the expected string in unallocated space.
  • Expected: The number of instances of the expected string present in the group (i.e., active files, deleted files or unallocated space).
  • Hits: The number of times the expected string was found in the group.
  • Misses: The number of times the expected string was missed (not found) in the group.

Note: If a row identifies a test case, the results are a summary over all the strings that should be found.

In the Expected String column for test case FT-SS-09-Doc, each string is labeled to indicate features of the expected string: the file type (.doc, .docx or .html), the encoding of the string in the .doc file and, where the string has embedded formatting, the label Formatted. For example, the string "crossbow" has the substring "cross" formatted as bold and underlined; the sketch below shows why such embedded formatting can defeat a literal byte-level match.
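In the stored file, markup or formatting runs can interrupt the literal byte sequence of a string, so a raw byte-level search misses it unless the tool decodes the document format first. A minimal Python sketch; the HTML fragment is illustrative, not the actual test file content.

```python
import re

# "crossbow" with the substring "cross" bold and underlined:
html = b"<p><b><u>cross</u></b>bow</p>"
print(b"crossbow" in html)             # False: tags break the byte sequence

text = re.sub(rb"<[^>]+>", b"", html)  # crude tag stripping, for illustration
print(b"crossbow" in text)             # True once the document layer is decoded
```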

Case / Expected String              | Active Files      | Deleted Files     | Unalloc Space
                                    | Exp   Hit   Miss  | Exp   Hit   Miss  | Exp   Hit   Miss
FT-SS-01                            |  3     3     0    |  3     3     0    |  1     1     0
  DireWolf                          |  3     3     0    |  3     3     0    |  1     1     0
FT-SS-02                            | 15    15     0    | 15    15     0    |  5     5     0
  WOLF                              |  3     3     0    |  3     3     0    |  1     1     0
  wolf                              |  3     3     0    |  3     3     0    |  1     1     0
  Wolf                              |  3     3     0    |  3     3     0    |  1     1     0
  DireWolf                          |  3     3     0    |  3     3     0    |  1     1     0
  WereWolf                          |  3     3     0    |  3     3     0    |  1     1     0
FT-SS-03                            |  9     9     0    |  9     9     0    |  3     3     0
  WOLF                              |  3     3     0    |  3     3     0    |  1     1     0
  wolf                              |  3     3     0    |  3     3     0    |  1     1     0
  Wolf                              |  3     3     0    |  3     3     0    |  1     1     0
FT-SS-04                            |  3     3     0    |  3     3     0    |  0     0     0
  tiger                             |  3     3     0    |  3     3     0    |  1     0     1
FT-SS-05                            |  6     6     0    |  6     6     0    |  2     2     0
  DireWolf                          |  3     3     0    |  3     3     0    |  1     1     0
  WereWolf                          |  3     3     0    |  3     3     0    |  1     1     0
FT-SS-06                            | 12    12     0    | 12    12     0    |  0     0     0
  fox                               | 15    12     3    | 15    12     3    |  5     0     5
FT-SS-07-CJK-char                   | 18    18     0    | 18    18     0    |  6     6     0
  中国                              |  9     9     0    |  9     9     0    |  3     3     0
  東京                              |  9     9     0    |  9     9     0    |  3     3     0
FT-SS-07-CJK-hangul                 |  9     9     0    |  9     9     0    |  3     3     0
  서울                              |  9     9     0    |  9     9     0    |  3     3     0
FT-SS-07-CJK-kana                   | 18    18     0    | 18    18     0    |  6     6     0
  スバル                            |  9     9     0    |  9     9     0    |  3     3     0
  みつびし                          |  9     9     0    |  9     9     0    |  3     3     0
FT-SS-07-Cyrillic                   |  9     9     0    |  9     9     0    |  3     3     0
  Сибирь                            |  9     9     0    |  9     9     0    |  3     3     0
FT-SS-07-Latin                      | 18    18     0    | 18    18     0    |  6     6     0
  garçon                            |  9     9     0    |  9     9     0    |  3     3     0
  Schönheit                         |  9     9     0    |  9     9     0    |  3     3     0
FT-SS-07-NoBOM                      | 39    39     0    | 39    39     0    | 13    13     0
  Россия                            |  9     9     0    |  9     9     0    |  3     3     0
  فلافل                             |  9     9     0    |  9     9     0    |  3     3     0
  中國                              |  9     9     0    |  9     9     0    |  3     3     0
  QuarterHorse                      | 12    12     0    | 12    12     0    |  4     4     0
FT-SS-07-Norm                       | 75    75     0    | 75    75     0    | 25    25     0
  mañana (NFD)                      |  9     9     0    |  9     9     0    |  3     3     0
  infinity (No Ligature)            | 12    12     0    | 12    12     0    |  4     4     0
  Mäuse (NFD)                       |  9     9     0    |  9     9     0    |  3     3     0
  infinity (Ligature)               |  9     9     0    |  9     9     0    |  3     3     0
  Mäuse (NFC)                       |  9     9     0    |  9     9     0    |  3     3     0
  libertà (NFC)                     |  9     9     0    |  9     9     0    |  3     3     0
  libertà (NFD)                     |  9     9     0    |  9     9     0    |  3     3     0
  mañana (NFC)                      |  9     9     0    |  9     9     0    |  3     3     0
FT-SS-07-RTL                        |  9     9     0    |  9     9     0    |  3     3     0
  الكسكس                            |  9     9     0    |  9     9     0    |  3     3     0
FT-SS-09-Doc                        | 16    15     1    |  0     0     0    | 16    13     3
  longbow (.html)                   |  2     2     0    |  0     0     0    |  2     2     0
  shotgun (Formatted .doc UTF-16)   |  2     2     0    |  0     0     0    |  2     2     0
  revolver (.doc UTF-16)            |  2     2     0    |  0     0     0    |  2     2     0
  peroxide (.docx)                  |  2     2     0    |  0     0     0    |  2     1     1
  nitroglycerin (Formatted .docx)   |  2     2     0    |  0     0     0    |  2     1     1
  rifle (.doc UTF-8)                |  2     2     0    |  0     0     0    |  2     2     0
  crossbow (Formatted .html)        |  2     1     1    |  0     0     0    |  2     1     1
  flintlock (Formatted .doc UTF-8)  |  2     2     0    |  0     0     0    |  2     2     0
FT-SS-09-Frag                       |  2     2     0    |  0     0     0    |  0     0     0
  Washington                        |  1     1     0    |  0     0     0    |  0     0     0
  California                        |  1     1     0    |  0     0     0    |  0     0     0
FT-SS-09-Lost                       |  0     0     0    |  0     0     0    |  4     4     0
  SecretKey                         |  0     0     0    |  0     0     0    |  2     2     0
  disconnected                      |  0     0     0    |  0     0     0    |  2     2     0
FT-SS-09-MFT                        |  4     4     0    |  4     4     0    |  0     0     0
  bear                              |  4     4     0    |  4     4     0    |  0     0     0
FT-SS-09-Meta                       |  6     6     0    |  6     6     0    |  2     2     0
  cañón                             |  3     3     0    |  3     3     0    |  1     1     0
  thunderbird                       |  3     3     0    |  3     3     0    |  1     1     0
FT-SS-10-Hex                        |  3     3     0    |  3     3     0    |  1     1     0
  tiger                             |  3     3     0    |  3     3     0    |  1     1     0
FT-SS-10-Regex                      |  6     6     0    |  6     6     0    |  2     2     0
  DireWolf                          |  3     3     0    |  3     3     0    |  1     1     0
  WereWolf                          |  3     3     0    |  3     3     0    |  1     1     0

Meta-Data Results for Live Search of Windows Data Set

The following table presents search results for strings located in file system meta-data. The Case column identifies the test case, the String column identifies the search string, the Partition column identifies the partition (file system) where the string is located and the Seen column records whether the search tool reported at least one instance of the string (Yes or No) in meta-data.

Case           | String      | Partition | Seen
FT-SS-09-Meta  | thunderbird | ntfs      | Yes
FT-SS-09-Meta  | cañón       | fat32     | Yes
FT-SS-09-Meta  | cañón       | exfat     | Yes
FT-SS-09-Meta  | cañón       | ntfs      | Yes

Comments on Live Search of Windows Data Set

The following table presents any comments recorded during testing, by test case.

Case             | Comments
FT-SS-06         | UTF-16 strings are reported twice.
FT-SS-07-Latin   | UTF-16 strings are reported twice.
FT-SS-07-NoBOM   | UTF-16 strings for "QuarterHorse" are reported twice.
FT-SS-07-Norm    | UTF-16 strings normalized as NFC are reported twice.
FT-SS-09-Doc     | UTF-16 strings are reported twice.
FT-SS-09-Lost    | UTF-16 strings are reported twice.
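For context on the FT-SS-07-Norm comment: NFC and NFD are Unicode normalization forms that render identically but store different code point sequences, so a byte-level engine may treat each form as a distinct pattern. A minimal Python sketch, using sample strings drawn from the test case:

```python
import unicodedata

nfc = unicodedata.normalize("NFC", "mañana")  # ñ as one code point (U+00F1)
nfd = unicodedata.normalize("NFD", "mañana")  # n + combining tilde (U+0303)
print(nfc == nfd)                 # False: different code points, same rendering
print(len(nfc), len(nfd))         # 6 7
print(nfc.encode("utf-16-le"))    # different byte patterns on disk
print(nfd.encode("utf-16-le"))

# NFKC/NFKD additionally fold compatibility forms such as the "fi" ligature:
print(unicodedata.normalize("NFKD", "\ufb01"))  # one-character ligature -> "fi"
```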

END OF REPORT