
Don't Use LLMs to Make Relevance Judgments

Author(s)

Ian Soboroff

Abstract

Making the relevance judgments for a TREC-style test collection can be complex and expensive. A typical TREC track usually involves a team of six contractors working for 2-4 weeks. Those contractors need to be trained and monitored. Software has to be written to support recording relevance judgments correctly and efficiently. The recent advent of large language models that produce astoundingly human-like flowing text output in response to a natural language prompt has inspired IR researchers to wonder how those models might be used in the relevance judgment collection process \cite{dagstuhl-report,llm4eval-cacm}. At SIGIR 2024, a workshop "LLM4Eval" provided a venue for this work, and featured a data challenge activity where participants reproduced TREC deep learning track judgments, as was done by Thomas et al. \cite{thomas2024}. I was asked to give a keynote at the workshop, and this paper presents that keynote in article form. The bottom-line-up-front message is: don't use LLMs to create relevance judgments for TREC-style evaluations.
Journal

Information Retrieval Research, Volume 1, Issue 1

Keywords

information retrieval, evaluation

Citation

Soboroff, I. (2025), Don't Use LLMs to Make Relevance Judgments, Information Retrieval Research, [online], https://doi.org/10.54195/irrj.19625, https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=958495 (Accessed March 31, 2025)

Issues

If you have any questions about this publication or are having problems accessing it, please contact reflib@nist.gov.

Created February 25, 2025, Updated March 26, 2025