
Media Forensics Challenge 2019

The Media Forensics Challenge 2019 (MFC2019) Evaluation is the second annual evaluation to support research and help advance the state of the art in image and video forensics technologies – technologies that determine the region and type of manipulations in imagery (image/video data) and the phylogenic process that modified the imagery. The MFC2019 evaluation is being designed based on experience from the MFC2018 Evaluation. We expect to continue support for the following tasks:

  • Image Manipulation Detection and Localization (Image MDL) - Given a single probe image, detect whether the probe was manipulated and provide localization mask(s) indicating where the image was modified (see the first sketch following this list).
  • Splice Detection and Localization (Image SDL) - Given two images, detect whether a region of a donor image has been spliced into a probe image and, if so, provide two masks indicating the region(s) of the donor image that were spliced into the probe and the region(s) of the probe image that were spliced from the donor.
  • Provenance Filtering (PF) - Given a probe image and a set of images representing a world (i.e., a large set of 5M+ images), return the top N images from the world data set that contributed to creating the probe image (see the second sketch following this list).
  • Provenance Graph Building (PGB) - Produce a phylogeny graph for a probe image.
    • Variation 1: End-to-End Provenance - Provenance output produced by processing the large (5M+ image) world data set.
    • Variation 2: Oracle Filter Provenance - Provenance output produced from a NIST-provided small collection of 200 images.
  • Video Manipulation Detection (Video MDL) and Temporal Localization - Detect whether the probe video was manipulated and provide a list of frame intervals indicating which frames in the video were modified.
  • Event Verification - Determine whether a probe is from a claimed event.
  • Camera Verification - Determine whether a probe matches a claimed camera fingerprint.
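
For the Image MDL task, system output boils down to a manipulation-confidence score per probe plus a grayscale localization mask. The sketch below is a toy Python system illustrating that output shape only; the file names, CSV columns, and mask polarity are assumptions, and the MFC19 Evaluation Plan defines the actual submission format.

```python
import csv
import os

import numpy as np
from PIL import Image
from scipy.ndimage import uniform_filter

def score_and_localize(probe_path):
    """Toy detector: treat strong deviation from the local mean as
    evidence of manipulation. A real system would use learned
    forensic features instead."""
    img = np.asarray(Image.open(probe_path).convert("L"), dtype=np.float32)
    residual = np.abs(img - uniform_filter(img, size=9))
    flagged = residual > residual.mean() + 2.0 * residual.std()
    confidence = float(flagged.mean())  # fraction of flagged pixels
    # Assumed mask polarity: black (0) = manipulated, white (255) = untouched.
    mask = np.where(flagged, 0, 255).astype(np.uint8)
    return confidence, mask

# Hypothetical probe list and output layout; the evaluation plan
# defines the real index files and submission format.
probes = [("probe_0001", "probes/probe_0001.jpg")]
os.makedirs("masks", exist_ok=True)
with open("image_mdl_output.csv", "w", newline="") as f:
    writer = csv.writer(f, delimiter="|")
    writer.writerow(["ProbeFileID", "ConfidenceScore", "OutputProbeMaskFileName"])
    for probe_id, path in probes:
        confidence, mask = score_and_localize(path)
        mask_file = os.path.join("masks", probe_id + ".png")
        Image.fromarray(mask).save(mask_file)
        writer.writerow([probe_id, "%.4f" % confidence, mask_file])
```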
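
Provenance Filtering, by contrast, is large-scale image retrieval: index every world image with a descriptor and return the probe's top N neighbors. The brute-force sketch below uses a toy color-histogram descriptor and is purely illustrative; searching a 5M+ image world set would call for an approximate nearest-neighbor index.

```python
import numpy as np
from PIL import Image

def descriptor(path, bins=8):
    """Toy global descriptor: a normalized joint RGB histogram."""
    rgb = np.asarray(Image.open(path).convert("RGB").resize((64, 64)))
    hist, _ = np.histogramdd(rgb.reshape(-1, 3).astype(np.float64),
                             bins=(bins,) * 3, range=((0, 256),) * 3)
    vec = hist.ravel().astype(np.float32)
    return vec / (np.linalg.norm(vec) + 1e-9)

def top_n(probe_path, world_paths, n=300):
    """Rank world images by cosine similarity to the probe descriptor."""
    world = np.stack([descriptor(p) for p in world_paths])
    sims = world @ descriptor(probe_path)
    order = np.argsort(-sims)[:n]
    return [(world_paths[i], float(sims[i])) for i in order]
```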

Prospective MFC participants can subscribe to the MFC mailing list for announcements by sending a request to the contact below and can take part in the evaluation by completing the registration and license agreements below.

Tentative Schedule

November 20, 2018
  • NC 2017 Data resources available
November 30, 2018
  • MFC18 Data resources available
  • MFC19 RecaptureDev dataset released
  • MFC19 Evaluation Plan released
  • MFC19 Evaluation Schedule finalized
January 31, 2019
  • Provenance world data distributed
February 8, 2019
  • Provenance world data unlocked at 2:00 pm ET
February 28, 2019
  • MFC19 EP1 Image Probes distributed
  • Oracle Provenance Data (without distractors) released
March 22, 2019
  • MFC19 EP1 Video Probes distributed
April 26, 2019
  • All image and video task submissions due to the NIST scoring server
  • External team submissions due to the NIST scoring server
May 3, 2019
  • Oracle Provenance Data (with distractors) released
May 10, 2019
  • Oracle Provenance Data (with distractors) submissions due to the NIST scoring server
May 24, 2019
  • Scores released to participants
  • NIST report on all NIST scoring server submissions

Documentation

The MFC19 Evaluation Plan details the structure of the evaluation tasks, data, and metrics.
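
Earlier evaluations in this series scored localization by comparing system masks against reference masks pixel by pixel, e.g., with the Matthews Correlation Coefficient (MCC). Assuming that convention carries over (the evaluation plan is authoritative, including its treatment of mask polarity and no-score zones), a minimal MCC computation over two mask images might look like:

```python
import numpy as np
from PIL import Image

def mask_mcc(system_mask_path, reference_mask_path, threshold=128):
    """Per-pixel MCC between two grayscale masks, assuming black
    (< threshold) marks manipulated pixels in both masks."""
    sys_m = np.asarray(Image.open(system_mask_path).convert("L")) < threshold
    ref_m = np.asarray(Image.open(reference_mask_path).convert("L")) < threshold
    tp = float(np.sum(sys_m & ref_m))    # manipulated, flagged
    tn = float(np.sum(~sys_m & ~ref_m))  # untouched, not flagged
    fp = float(np.sum(sys_m & ~ref_m))   # untouched, flagged
    fn = float(np.sum(~sys_m & ref_m))   # manipulated, missed
    denom = np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom > 0 else 0.0
```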

Signup Procedure

  1. Read the evaluation plan when it is available to become familiar with the evaluation.
  2. Sign and return the Media Forensics Data Use Agreement to mfc_poc [at] nist.gov.
  3. Sign and return the Media Forensics Challenge 2019 Participation Agreement to mfc_poc [at] nist.gov.
  4. Complete a Dry Run Evaluation.
    • The dry run is an opportunity for developers to make sure they can generate valid system output that the NIST scoring tools can score. Actual system performance is not of interest during the dry run, so developers may use any method to generate their output, e.g., a random system (see the sketch after this list) or a system trained on the dry run data. Instructions will be posted soon.
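
As an illustration, the "random system" mentioned above can be just a few lines; the index-file name and column names here are hypothetical stand-ins for whatever the evaluation plan and scoring-tool documentation actually specify.

```python
# A "random system" for the dry run: assign every probe a random
# manipulation-confidence score. File names and columns are hypothetical.
import csv
import random

random.seed(2019)  # reproducible dry-run output

with open("probe_index.csv", newline="") as src, \
     open("dry_run_output.csv", "w", newline="") as dst:
    reader = csv.DictReader(src, delimiter="|")
    writer = csv.writer(dst, delimiter="|")
    writer.writerow(["ProbeFileID", "ConfidenceScore"])
    for row in reader:
        writer.writerow([row["ProbeFileID"], "%.6f" % random.random()])
```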

Data Resources

By signing up for the evaluation, you'll receive a wealth of data resources for conducting your media forensics research. The data is designated as one of: (1) development resources accessible at signup, (2) past-year evaluation resources provided after completing a dry run evaluation, and (3) MFC'19 evaluation resources provided during the formal evaluation in April 2019. Here's a quick summary of the resources:

| Data Set Type | Data Set Name | Number of Forensic Probes (true manipulations and non-manipulations) | World Data Set Size | Reference Annotations | Supported Tasks |
| --- | --- | --- | --- | --- | --- |
| Development | NC2016 – Both Nimble Science and Nimble Web | 624 | | Full | MDL |
| Development | NC'17 Development Image Data | 3,500 | 100,000 | Full | All |
| Development | NC'17 Development Video Data | 213 | | Full | All |
| Development | MFC'18 Development Image and Video Data | TBD | TBD | Full | TBD |
| Past Evaluations | NC'17 Evaluation Images | 10,000 | 1,000,000 | Full for 1/3 subset | All |
| Past Evaluations | NC'17 Evaluation Videos | 1,000 | | Full for 1/3 subset | All |
| Past Evaluations | MFC'18 Evaluation Images | 50,000 | 1,000,000 | | |
| Past Evaluations | MFC'18 Evaluation Videos | 5,000 | | | |

Evaluation Tools

NIST-provided tools are described in the Evaluation Infrastructure Setup Instructions.

Contact

mfc_poc [at] nist.gov
