The top three winners of PSCR's first data challenge, Advancing Methods in Differential Privacy, have been selected. The grand prize of $15,000 was awarded to a team from Georgia Tech. The runner-up team, from Purdue University, won $10,000, while the paper that earned honorable mention, a $5,000 award, came from a team at Westat Corporation. The Georgia Tech and Purdue (DPSyn) teams were also voted "People's Choice" by the HeroX community, a prize that earned each team an additional $5,000. Congratulations to the winners!
PSCR's second data challenge is due to launch in October. The Differential Privacy Synthetic Data Challenge will consist of a sequence of three marathon matches run on the Topcoder platform to collect, normalize, implement, and compare differentially private algorithms, with the goal of advancing research in the field of Differential Privacy. Participation in the first data challenge is not required. Learn more about the challenge on PSCR's Open Innovation page.
Our increasingly digital world turns almost all of our daily activities into data collection opportunities, from the obvious, such as entering information into a web form, to connected cars, cell phones, and wearables. Dramatic increases in computing power and algorithmic innovation can also be used to the detriment of individuals through linkage attacks: auxiliary, and possibly completely unrelated, datasets can be combined with records in a dataset containing sensitive information to re-identify unique individuals.
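As a rough illustration of the attack pattern, the sketch below joins a "de-identified" table with a public auxiliary table on shared quasi-identifiers. The records, column names, and quasi-identifiers (ZIP code, birth date, sex) are entirely hypothetical and are not drawn from any challenge dataset.

```python
# Minimal sketch of a linkage attack on hypothetical data.
import pandas as pd

# A "de-identified" dataset: names removed, sensitive attribute kept.
deidentified = pd.DataFrame({
    "zip":        ["30332", "30332", "47907"],
    "birth_date": ["1975-04-02", "1981-11-19", "1975-04-02"],
    "sex":        ["F", "M", "F"],
    "diagnosis":  ["hypertension", "asthma", "diabetes"],
})

# A public auxiliary dataset (e.g., a voter roll) that includes names.
auxiliary = pd.DataFrame({
    "name":       ["Alice Smith", "Carol Lee"],
    "zip":        ["30332", "47907"],
    "birth_date": ["1975-04-02", "1975-04-02"],
    "sex":        ["F", "F"],
})

# Joining on the shared quasi-identifiers re-attaches names to diagnoses.
reidentified = auxiliary.merge(deidentified, on=["zip", "birth_date", "sex"])
print(reidentified[["name", "diagnosis"]])
```

Even though the de-identified table contains no names, the combination of ZIP code, birth date, and sex is often unique enough to re-attach identities to sensitive attributes.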
This valid privacy concern unfortunately limits the use of data for research, including datasets within the Public Safety sector that might otherwise be used to improve the protection of people and communities. Due to the sensitive nature of the information contained in these datasets and the risk of linkage attacks, they cannot easily be made available to analysts and researchers. To make the best use of data that contains personally identifiable information (PII), it is important to disassociate the data from the PII. This creates a tradeoff between privacy and utility: the more a dataset is altered to protect privacy, the lower the utility of the de-identified dataset for analysis and research purposes.
Currently popular de-identification techniques are not sufficient: either the PII is not sufficiently protected, or the resulting data no longer represents the original data. Additionally, with current techniques it is difficult, or even impossible, to quantify the amount of privacy that is lost.
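Differential privacy, by contrast, bounds the privacy loss of a release by an explicit parameter, epsilon. As a minimal sketch of the idea (not a description of any competitor's algorithm; the dataset, query, and epsilon values below are illustrative assumptions), the Laplace mechanism answers a counting query with noise calibrated to the query's sensitivity:

```python
# Minimal sketch of the Laplace mechanism for a counting query.
# The data, query, and epsilon values are illustrative assumptions.
import numpy as np

def laplace_count(data, predicate, epsilon, rng):
    """Return a differentially private count of records matching predicate.

    A counting query changes by at most 1 when one record is added or
    removed, so its sensitivity is 1 and the noise scale is 1 / epsilon.
    """
    true_count = sum(1 for record in data if predicate(record))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

rng = np.random.default_rng(0)
ages = [34, 29, 41, 56, 23, 38]  # hypothetical sensitive records
# Smaller epsilon means more noise and a stronger privacy guarantee.
for epsilon in (0.1, 1.0):
    answer = laplace_count(ages, lambda a: a >= 30, epsilon, rng)
    print(f"epsilon={epsilon}: noisy count = {answer:.2f}")
```

Because the noise scale depends only on epsilon and the query's sensitivity, the privacy loss is quantifiable and can be budgeted across multiple queries, a guarantee that ad hoc suppression or generalization techniques cannot provide.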
This competition is about creating new methods, or improving existing ones, for data de-identification in a way that makes de-identification of privacy-sensitive datasets practical. A first phase hosted on HeroX will ask for ideas and concepts, while later phases executed on Topcoder will focus on the performance of the developed algorithms.