
Evaluating Generative Ad Hoc Information Retrieval

Lukas Gienapp, Harrisen Scells, Niklas Deckers, Janek Bevendorff, Shuai Wang, Johannes Kiesel, Shahbaz Syed, Maik Fröbe, Guido Zuccon, Benno Stein, Matthias Hagen, and Martin Potthast.
47th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2024), pages 1916–1929. ACM, July 2024.
DOI: 10.1145/3626772.3657849
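
For citation, a BibTeX rendering of this entry; the citation key is a placeholder assumption, while the remaining fields follow the metadata above:

    @inproceedings{gienapp2024evaluating,
      author    = {Lukas Gienapp and Harrisen Scells and Niklas Deckers and
                   Janek Bevendorff and Shuai Wang and Johannes Kiesel and
                   Shahbaz Syed and Maik Fr{\"o}be and Guido Zuccon and
                   Benno Stein and Matthias Hagen and Martin Potthast},
      title     = {Evaluating Generative Ad Hoc Information Retrieval},
      booktitle = {Proceedings of the 47th International ACM SIGIR Conference
                   on Research and Development in Information Retrieval
                   (SIGIR 2024)},
      publisher = {ACM},
      year      = {2024},
      month     = jul,
      pages     = {1916--1929},
      doi       = {10.1145/3626772.3657849}
    }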

Abstract

Recent advances in large language models have enabled the development of viable generative retrieval systems. Instead of a traditional document ranking, generative retrieval systems often directly return a grounded generated text as a response to a query. Quantifying the utility of the textual responses is essential for appropriately evaluating such generative ad hoc retrieval. Yet, the established evaluation methodology for ranking-based ad hoc retrieval is not suited for the reliable and reproducible evaluation of generated responses. To lay a foundation for developing new evaluation methods for generative retrieval systems, we survey the relevant literature from the fields of information retrieval and natural language processing, identify search tasks and system architectures in generative retrieval, develop a new user model, and study its operationalization.
