This website is currently in the process of being moved. Some links to our data may not work at this time. We apologize for the inconvenience.
This benchmark aims to provide tools to evaluate 3D Interest Point Detection Algorithms with respect to human-generated ground truth.
Please refer to the following paper for more information about this benchmark: Helin Dutagaci, Chun Pan Cheung, Afzal Godil, “Evaluation of 3D interest point detection techniques via human-generated ground truth” <filename: article_VC_interest_points.pdf>, The Visual Computer, 2012. [BibTeX <filename: visual_computer_IP_benchmark.bib>]
Using a web-based subjective experiment, human subjects marked 3D interest points on a set of 3D models. The models were organized in two datasets: Dataset A and Dataset B. Dataset A consists of 24 models, which were hand-marked by 23 human subjects. Dataset B is larger, with 43 models, and it contains all the models in Dataset A. The number of human subjects who marked all the models in this larger set is 16.
Some of the models are standard models that are widely used in 3D shape research, and they have been used as test objects by researchers working on the best view problem. Examples are the Armadillo, David’s head, the Utah teapot, the Bunny, etc. We chose some of the models from The Stanford 3D Scanning Repository and some others from the Watertight Models Track of SHREC 2007 (see the license <filename: AIM.txt>).
The dataset can be downloaded from the following link: MODEL DATASET <filename: MODEL_DATASET.zip>. The triangular mesh models are stored in MAT files. Please refer to the README <filename: README.pdf> for details.
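As a quick start, here is a minimal MATLAB sketch for loading and displaying one model. The file name ant.mat and the variable names vertices and faces are assumptions for illustration; the actual contents of the MAT files are documented in the README.

% Minimal sketch (assumed file and variable names; see the README for the actual format).
S = load('ant.mat');        % one model from MODEL_DATASET.zip (file name assumed)
V = S.vertices;             % assumed field name: n x 3 vertex coordinates
F = S.faces;                % assumed field name: m x 3 triangle vertex indices
trisurf(F, V(:,1), V(:,2), V(:,3), 'FaceColor', [0.8 0.8 0.8], 'EdgeColor', 'none');
axis equal; camlight; lighting gouraud;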
The interest points marked by the human subjects can be downloaded from the following link: HUMAN SUBJECTS’ INTEREST POINTS <filename: HUMAN_SUBJECTs_INTEREST_POINTS.zip>.
The text files are named with respect to the following template:
subjectname-modelname_points.txt
An example is jck-ant_points.txt <filename: jck-ant_point.txt>, where jck is the alias of the human subject and ant is the name of the 3D model. Each row in the text file gives the x, y, z coordinates of an interest point marked by the subject on the 3D model.
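As an illustration, the following MATLAB snippet reads one of these files and overlays the marked points on the corresponding mesh; it assumes the rows are plain whitespace-separated numbers and that the mesh has already been loaded and plotted as in the sketch above.

% Read the interest points marked by one subject on one model.
pts = load('jck-ant_points.txt');   % N x 3 matrix of x, y, z coordinates
fprintf('Subject jck marked %d interest points on the ant model.\n', size(pts, 1));
% Optionally overlay the points on the mesh plotted earlier.
hold on;
plot3(pts(:,1), pts(:,2), pts(:,3), 'r.', 'MarkerSize', 20);
hold off;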
We have compared six 3D Interest Point Detection algorithms. The interest points detected on the 3D models of the dataset can be downloaded from the link next to the corresponding algorithm. Please refer to the README for details.
The interest points for all six algorithms can also be downloaded as a single zip file: ALGORITHMs_INTEREST_POINTS <filename: ALGORITHMs_INTEREST_POINTS.zip>.
There are two steps for evaluating an “interest point detection algorithm” with respect to the interest points marked by human subjects: construction of the ground truth from the human-marked points, and computation of the evaluation measures against that ground truth.
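Both steps are implemented in the MATLAB code linked below and described in detail in the paper. Purely as a simplified illustration of the second step, the sketch below matches an algorithm's detections to ground-truth points within a radius r and reports false negative and false positive error rates; the function name simple_ip_errors and the use of Euclidean rather than geodesic distance are simplifying assumptions of this sketch, not the benchmark's definitions.

% Save as simple_ip_errors.m. Rough illustration only; not the benchmark's evaluation code.
% gt:  G x 3 ground-truth interest points (derived from the human markings)
% det: D x 3 interest points returned by an algorithm
% r:   acceptance radius (Euclidean here for simplicity)
function [fne, fpe] = simple_ip_errors(gt, det, r)
    % Pairwise distances between ground-truth and detected points (G x D matrix);
    % uses implicit expansion, so it requires MATLAB R2016b or later.
    D = sqrt(sum((permute(gt, [1 3 2]) - permute(det, [3 1 2])).^2, 3));
    hitGT  = any(D <= r, 2);    % ground-truth points with at least one detection within r
    hitDet = any(D <= r, 1);    % detections that fall within r of some ground-truth point
    fne = 1 - mean(hitGT);      % false negative error: fraction of missed ground-truth points
    fpe = 1 - mean(hitDet);     % false positive error: fraction of spurious detections
end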
The MATLAB code can be downloaded from the following link: MATLAB code <filename: CODES.zip>. Details on how to use the code are given in the README document <filename: README.pdf>.
The entire benchmark, including the output MAT files generated by the codes, can be downloaded as a single archive file: IP_BENCHMARK <filename: IP_BENCHMARK.zip>.
Helin Dutagaci, Chun Pan Cheung, Afzal Godil, “Evaluation of 3D interest point detection techniques via human-generated ground truth” <filename: article_VC_interest_points.pdf>, The Visual Computer, 2012. [BibTeX <filename: visual_computer_IP_benchmark.bib>]
For more information and feedback, please contact Helin Dutagaci or Afzal Godil.