
MED 2013 Evaluation

The Multimedia Event Detection (MED) evaluation track is part of the TRECVID Evaluation. The 2013 evaluation will be the third MED evaluation, preceded by the 2011 and 2012 evaluations and the 2010 Pilot evaluation.

The goal of MED is to assemble core detection technologies into a system that can search multimedia recordings for user-defined events based on pre-computed metadata. The metadata stores developed by the systems are expected to be sufficiently general to permit re-use for subsequent user-defined events.

A user searching for events in multimedia material may be interested in a wide variety of potential events. Since building special-purpose detectors for each event a priori is intractable, technology is needed that can take as input a human-generated definition of the event, which a system then uses to search a collection of multimedia clips. The MED evaluation series will define events via an event kit, which consists of an event name, definition, explication (textual exposition of the terms and concepts), evidential descriptions, and illustrative video exemplars.
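As a rough illustration only, the sketch below models an event kit as a simple data structure. The field names mirror the components listed above but are assumptions; the official kit format distributed by NIST may differ.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class EventKit:
        """Illustrative container for an event kit's components.
        Field names are assumptions; the official kit format may differ."""
        name: str                   # e.g., "Birthday party"
        definition: str             # concise statement of what the event is
        explication: str            # textual exposition of terms and concepts
        evidential_descriptions: List[str] = field(default_factory=list)
        exemplar_clip_ids: List[str] = field(default_factory=list)  # illustrative videos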

The major changes for the 2013 evaluation include:

  • The 20 MED '12 test events will be this year's Pre-Specified events
  • 20 new Ad Hoc events will be released
  • Four contrastive evaluation conditions will be defined
  • A 1/3 subset of last year's test collection is a defined evaluation condition
  • The training exemplar conditions will be 100, 10, and 0 exemplars

MED Task Definitions

The 2013 evaluation will support two evaluation tasks: 

  • Pre-Specified Event MED: WITH knowledge of the pre-specified test event kits, construct a metadata store for the test videos; then, for each pre-specified test event kit, search the metadata store to detect occurrences of the test event.
  • Ad Hoc Event MED: WITHOUT knowledge of the ad hoc test event kits, construct a metadata store for the test videos; then, for each ad hoc test event kit, search the metadata store to detect occurrences of the test event.

The Pre-Specified event task is identical to the MED '12 task. Participants must build a system for at least one of the test events in order to participate in the evaluation and the TRECVID Conference.
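Both tasks share the same two-phase shape: metadata is pre-computed once over the test videos, and each event kit is then run as a query against the resulting store. The following is a minimal sketch of that protocol; extract_features, score_event, build_metadata_store, and search are hypothetical names for illustration, not part of any NIST-provided API.

    # Hypothetical two-phase MED pipeline. All names and signatures here
    # are illustrative assumptions, not a NIST-provided API.

    def extract_features(clip_path):
        """Placeholder for metadata extraction (real systems compute
        audio/visual features here)."""
        return {"path": clip_path}

    def score_event(features, event_kit):
        """Placeholder event scorer returning a detection score in [0, 1]."""
        return 0.0

    def build_metadata_store(clip_paths):
        """Phase 1: pre-compute metadata once over the test videos."""
        return {path: extract_features(path) for path in clip_paths}

    def search(store, event_kit):
        """Phase 2: run one event kit as a query against the store,
        returning (clip, score) pairs."""
        return [(clip, score_event(feats, event_kit))
                for clip, feats in store.items()]

    # The store is built once and re-used for every event kit:
    # store = build_metadata_store(test_clips)
    # for kit in event_kits:
    #     results = search(store, kit)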

Information Dissemination

NIST maintains an email discussion list to disseminate information. Send requests to join the list to med_poc at nist dot gov.

Evaluation Plan and Data Use Rules

MED system performance will be evaluated as specified in the evaluation plan, which contains the rules, protocols, metric definitions, scoring instructions, and submission instructions.
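The evaluation plan is authoritative for scoring. As a non-authoritative illustration of threshold-based detection scoring, the sketch below computes the probability of missed detection (PMiss) and the probability of false alarm (PFA) from per-clip scores and reference labels; the official F4DE scorer, not this code, defines the actual metrics.

    def pmiss_pfa(scores, labels, threshold):
        """Compute P(miss) and P(false alarm) at a given decision threshold.

        scores: detection score per clip; labels: True if the clip
        contains the event. Illustrative reimplementation only; the
        evaluation plan and F4DE define the official metrics.
        """
        misses = sum(1 for s, pos in zip(scores, labels) if pos and s < threshold)
        fas = sum(1 for s, pos in zip(scores, labels) if not pos and s >= threshold)
        n_pos = sum(labels)
        n_neg = len(labels) - n_pos
        pmiss = misses / n_pos if n_pos else 0.0
        pfa = fas / n_neg if n_neg else 0.0
        return pmiss, pfa

    # Example: pmiss_pfa([0.9, 0.2, 0.7], [True, False, True], 0.5) -> (0.0, 0.0)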

Data Resources

A collection of Internet multimedia (i.e., clips containing both audio and video streams) will be provided to registered MED participants. The data, which was collected by the Linguistic Data Consortium, consists of publicly available, user-generated content posted to various Internet video hosting sites.

MED '13 participants will receive the following training resources:

  • the MED10 data set,
  • the MED11 DEV-T set,
  • the MED11 Development and Test Collection,
  • the Kindred Collection, and
  • the "PROGRESS Test" Collection. This data set will be made available to previous MED participants that submitted results to NIST and to new participants that complete the MED13 Dry Run.

The evaluation plan and license information will specify usage rules of the data resources in full detail.

2013 Pre-Specified Event Kits

The twenty Pre-Specified events are drawn from the 2012 test events. The table below contains the event names.

MED11 Events:
  • Birthday party
  • Changing a vehicle tire
  • Flash mob gathering
  • Getting a vehicle unstuck
  • Grooming an animal
  • Making a sandwich
  • Parade
  • Parkour
  • Repairing an appliance
  • Working on a sewing project

MED12 Events:
  • Attempting a bike trick
  • Cleaning an appliance
  • Dog show
  • Giving directions to a location
  • Marriage proposal
  • Renovating a home
  • Rock climbing
  • Town hall meeting
  • Winning a race without a vehicle
  • Working on a metal crafts project
  • Event names need to be interpreted in the full context of the event definitions that will be made available as part of the event kits.

Video data

Clips will be provided in MPEG-4 formatted files. The video will be encoded to the H.264 standard, and the audio will be encoded using MPEG-4's Advanced Audio Coding (AAC) standard.
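As a sanity check on received clips, the sketch below shells out to ffprobe (assumed to be installed as part of FFmpeg) to report a file's video and audio codec names; for MED clips these should be h264 and aac respectively.

    import subprocess

    def stream_codec(path, stream="v:0"):
        """Return the codec name of the selected stream using ffprobe
        (assumes the FFmpeg ffprobe tool is on PATH)."""
        out = subprocess.run(
            ["ffprobe", "-v", "error", "-select_streams", stream,
             "-show_entries", "stream=codec_name",
             "-of", "default=noprint_wrappers=1:nokey=1", path],
            capture_output=True, text=True, check=True)
        return out.stdout.strip()

    # Expected for MED clips:
    # stream_codec("clip.mp4", "v:0") -> "h264"
    # stream_codec("clip.mp4", "a:0") -> "aac"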

Data Licensing

In order to obtain the corpora, ALL TRECVID-registered sites (including '12 participants) must complete an evaluation license with the LDC. Each site that requires access to the data, whether as part of a team or as a standalone researcher, must complete a license.

To complete the evaluation license follow these steps:

  1. Download the MED13 Evaluation License.
  2. Return the completed license to LDC's Membership Office via email at ldc at ldc dot upenn dot edu. Alternatively, you may fax the completed license to the LDC at 215-573-2175.
  3. When you send the completed license to the LDC, include the following information:
       • Registered TRECVID team name
       • Site/organization name
       • Data contact person's name
       • Data contact person's email

The designated data contact person for each site will receive instructions from the LDC about the specific procedures for obtaining the data packages when they are released.

Dry Run Evaluation

The dry run data resources are available from http://www-nlpir.nist.gov/projects/tv2013/med.data/. The username and password were provided during TRECVID registration. There are two relevant files:

In addition to the data files, the F4DE_3.0.1 release contains a scoring primer for MED '13 (in the file DEVA/doc/TRECVid-MED13-ScoringPrimer.html of the release) that demonstrates how to score a system and prepare a submission file.

Evaluation Tools

Evaluation scripts supporting the MED evaluation are included in the NIST Framework for Detection Evaluations (F4DE) Toolkit, Version 3.0.1 or later, available on the NIST MIG tools page.

The package contains an MED evaluation primer, found in DEVA/doc/TRECVid-MED13-ScoringPrimer.html within the distribution.

NIST has prepared a MER Workstation. The current release version is MERAPP13-v3.0.

Schedule

Consult the TRECVID Master schedule.

Revision History

  • March 19, 2013 - Initial page created.