Rizzo, D., Raganato, A., Viviani, M. (2024). Comparatively Assessing Large Language Models for Query Expansion in Information Retrieval via Zero-Shot and Chain-of-Thought Prompting. In Proceedings of the 14th Italian Information Retrieval Workshop (pp. 23-32). CEUR-WS.
Comparatively Assessing Large Language Models for Query Expansion in Information Retrieval via Zero-Shot and Chain-of-Thought Prompting
Rizzo D.; Raganato A.; Viviani M.
2024
Abstract
In our research, we aim to assess the effectiveness of Large Language Models (LLMs) in performing query expansion for Information Retrieval (IR). Several recent solutions proposed in the literature for this task have proven effective for specific LLMs, datasets, or prompt engineering techniques. In this paper, we deepen this analysis with a more comprehensive and up-to-date view of their effectiveness by comparing the results such solutions obtain under Zero-Shot (ZS) and Chain-of-Thought (CoT) prompting, thereby remaining agnostic to Few-Shot (FS) learning, which requires additional training data from the dataset used for evaluation, and by using a variety of LLMs, including models of the latest generation. The results obtained across the various LLMs generally demonstrate the superiority of recent LLM-based solutions for query expansion when employed in a prompt engineering scenario based on Zero-Shot learning. This showcases the intrinsic effectiveness of such recent LLMs, even those with a modest number of parameters.
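To make the prompting setup concrete, the sketch below illustrates how an LLM-generated expansion can be obtained with a Zero-Shot or Chain-of-Thought style prompt and appended to the original query before lexical retrieval (e.g., with BM25). This is a minimal illustration under stated assumptions: the prompt wording, the `expand_query` helper, the generic `llm` callable, and the `n_repeats` weighting are placeholders for this sketch, not the exact prompts or pipeline used in the paper.

```python
from typing import Callable

# Illustrative prompt templates (not the paper's exact prompts).
ZS_PROMPT = (
    "Write a short passage that answers the following query.\n"
    "Query: {query}\nPassage:"
)
COT_PROMPT = (
    "Answer the following query, giving the step-by-step rationale "
    "before the final answer.\nQuery: {query}\nAnswer:"
)


def expand_query(query: str,
                 llm: Callable[[str], str],
                 mode: str = "zs",
                 n_repeats: int = 5) -> str:
    """Generate an LLM expansion and append it to the original query.

    `llm` is any function mapping a prompt string to generated text.
    Repeating the original query `n_repeats` times is a common way to keep
    its terms dominant over the generated text when the expanded query is
    scored by a lexical ranker such as BM25.
    """
    prompt = (ZS_PROMPT if mode == "zs" else COT_PROMPT).format(query=query)
    expansion = llm(prompt)
    return " ".join([query] * n_repeats + [expansion])


# Example usage with a stand-in model:
if __name__ == "__main__":
    dummy_llm = lambda _prompt: "migraine treatments include triptans and NSAIDs"
    print(expand_query("how to treat a migraine", dummy_llm, mode="cot"))
```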
File | Access | Attachment type | License | Size | Format
---|---|---|---|---|---
Rizzo-2024-CEUR WS_14 It Informat Retrieval Ws-VoR.pdf | Open access | Publisher's Version (Version of Record, VoR) | Creative Commons | 557.03 kB | Adobe PDF