The Multimedia Event Detection (MED) track is part of the TRECVID Evaluation. The 2017 evaluation will be the eighth MED evaluation, preceded by the 2011 through 2016 evaluations and the 2010 Pilot evaluation.
The goal of MED is to assemble core detection technologies into a system that can search multimedia recordings for user-defined events based on pre-computed metadata. The metadata stores developed by the systems are expected to be sufficiently general to permit re-use for subsequent user-defined events.
A user searching for events in multimedia material may be interested in a wide variety of potential events. Since building special-purpose detectors for each event a priori is intractable, technology is needed that can take as input a human-generated definition of the event, which a system will use to search a collection of multimedia clips. The MED evaluation series defines events via an event kit, which consists of an event name, definition, explication (textual exposition of the terms and concepts), evidential descriptions, and illustrative video exemplars.
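As a rough illustration only (not an official schema), the components of an event kit listed above could be represented with a simple record type; the field names below are hypothetical:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EventKit:
    """Illustrative stand-in for the event-kit components described above."""
    name: str                    # event name
    definition: str              # textual definition of the event
    explication: str             # exposition of the terms and concepts
    evidential_description: str  # observable evidence a relevant clip may contain
    exemplars: List[str] = field(default_factory=list)  # IDs/paths of illustrative videos
```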
NIST maintains an email discussion list to disseminate information. Send requests to join the list to med_poc at nist dot gov.
This page will be updated periodically with additional information and resources. When such updates occur, a notice will also be sent to the aforementioned mailing list.
MED system performance will be evaluated as specified in the MED17 Evaluation Plan. Please note that some content (specifically the submission format and instructions) is subject to change as we update our submission and scoring infrastructure for 2017.
Portions of the HAVIC collection of Internet multimedia (i.e., clips containing both audio and video streams) will be provided to registered MED participants. The data, which was collected by the Linguistic Data Consortium, consists of publicly available, user-generated content posted to various Internet video hosting sites.
The Yahoo Flickr Creative Commons 100M dataset (YFCC100M) is a large collection of images and video available on Yahoo! Flickr. All photos and videos listed in the collection are licensed under one of the Creative Commons copyright licenses. This dataset is available directly from Yahoo!. Only a subset of the YFCC100M videos will be used for this evaluation; this subset is to be determined.
The evaluation plan and license information will specify usage rules of the data resources in full detail.
HAVIC clips will be provided in MPEG-4 formatted files. The video will be encoded to the H.264 standard, and the audio will be encoded using MPEG-4's Advanced Audio Coding (AAC) standard. Please refer to the YFCC100M documentation regarding the YFCC100M video file format.
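As a minimal sketch for verifying these stream types locally, the snippet below inspects a clip with ffprobe (from FFmpeg, which is not part of the MED distribution and is assumed to be installed); the clip filename is hypothetical:

```python
import json
import subprocess

def probe_streams(path):
    """Return the per-stream metadata ffprobe reports for a clip, as parsed JSON."""
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-show_streams", "-of", "json", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)["streams"]

# Hypothetical clip name; a HAVIC clip is expected to show an H.264 video
# stream and an AAC audio stream inside its MPEG-4 container.
for stream in probe_streams("HVC123456.mp4"):
    print(stream["codec_type"], stream["codec_name"])
```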
In order to obtain the HAVIC corpora, ALL TRECVID-registered sites (including previous participants) must complete an evaluation license with the LDC. Each site that requires access to the HAVIC data, whether as part of a team or as a standalone researcher, must complete a license.
To complete the evaluation license, follow these steps:
This year, NIST will make the HAVIC resources available to sites that have successfully submitted a signed MED17 Data License agreement to the LDC. The data will be hosted on Amazon Web Services (AWS); to access it, you will need to create an AWS account if you or your site does not already have one. To request access to the data, please send a request to med_poc at nist dot gov and include the following information:
AWS root account canonical ID (an obfuscated form of your account number)
The canonical ID is found under a drop-down menu in the upper right-hand corner of the AWS console, under your username (a command-line alternative is sketched after this list):
[username] drop-down menu -> My Security Credentials -> Account Identifiers
Your site/organization name
Your team name (if different from site/organization name)
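If you prefer not to use the console, the canonical ID can also be retrieved programmatically. The sketch below uses boto3 (the AWS SDK for Python, not mentioned in the official instructions) and assumes the SDK is installed and credentials for the account in question are configured:

```python
import boto3  # assumes the AWS SDK for Python is installed and credentials are configured

# The Owner ID returned by the S3 API is the account's canonical user ID,
# the same obfuscated identifier shown under "Account Identifiers" in the console.
s3 = boto3.client("s3")
canonical_id = s3.list_buckets()["Owner"]["ID"]
print(canonical_id)
```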
The following archives contain the input files for both the Pre-Specified and Ad-Hoc Event portions of the MED17 evaluation.
There are two relevant archives for the dry run evaluation:
In addition to the data files, F4DE-3.5.0 contains a scoring primer for MED '17 in the file DEVA/doc/TRECVid-MED17-ScoringPrimer.html that demonstrates how to score a system and prepare a submission file.
Evaluation scripts to support the MED evaluation are within the NIST Framework for Detection Evaluations (F4DE) Toolkit; a link to the latest release can be found on the NIST MIG tools page.
MED17 requires F4DE version 3.5.0 or later. The list of F4DE releases can be found on the F4DE GitHub page.
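As a minimal sanity check of a local F4DE checkout, the sketch below verifies the 3.5.0 version floor and the presence of the scoring primer at the path given above; the checkout path and version string are hypothetical placeholders:

```python
from pathlib import Path

MIN_VERSION = (3, 5, 0)  # MED17 requires F4DE 3.5.0 or later

def parse_version(version):
    """Turn a dotted version string such as '3.5.0' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def check_f4de(f4de_dir, version):
    """Check the version floor and that the MED17 scoring primer is present."""
    primer = Path(f4de_dir) / "DEVA/doc/TRECVid-MED17-ScoringPrimer.html"
    assert parse_version(version) >= MIN_VERSION, "MED17 needs F4DE 3.5.0 or later"
    assert primer.is_file(), f"Scoring primer not found at {primer}"

# Hypothetical local path and version string; substitute your own checkout.
check_f4de("/opt/F4DE-3.5.0", "3.5.0")
```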
Consult the TRECVID Master Schedule.
Any mention of commercial products within NIST web pages is for information only; it does not imply recommendation or endorsement by NIST.