Comparatively Assessing Large Language Models for Query Expansion in Information Retrieval via Zero-Shot and Chain-of-Thought Prompting

Rizzo D.; Raganato A.; Viviani M.
2024

Abstract

In our research, we assess the effectiveness of Large Language Models (LLMs) for query expansion in the context of Information Retrieval (IR). Several solutions recently proposed and studied in the literature have proven effective for this task, but only with respect to specific LLMs, datasets, or prompt engineering techniques. In this paper, we deepen this analysis with a more comprehensive and up-to-date view of their effectiveness, comparing the results obtained by such solutions under Zero-Shot (ZS) and Chain-of-Thought (CoT) prompting, thus remaining agnostic with respect to Few-Shot (FS) learning, which requires additional training data from the dataset considered for evaluation, and employing a variety of LLMs, including models of the latest generation. Results obtained across the various LLMs generally demonstrate the superiority of recent LLM-based solutions for query expansion when employed in a prompt engineering scenario based on Zero-Shot learning. This showcases the intrinsic effectiveness of such recent LLMs, even those characterized by a modest number of parameters.
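To make the compared prompting strategies concrete, the following is a minimal, illustrative sketch of how Zero-Shot and Chain-of-Thought prompts for query expansion might be issued to an LLM and then combined with the original query. The prompt wording, the `generate` stub, and the repetition-based combination are assumptions for illustration only, not the exact prompts, models, or pipeline used in the paper.

```python
# Illustrative sketch only: the prompt texts and the `generate` stub are assumptions,
# not the paper's exact prompts or model calls.

def generate(prompt: str) -> str:
    """Placeholder for a call to any instruction-tuned LLM (API or local model)."""
    raise NotImplementedError("Plug in your preferred LLM backend here.")

def zero_shot_expansion(query: str) -> str:
    # Zero-Shot (ZS): ask the model directly, with no examples and no explicit reasoning steps.
    prompt = f"Write a passage that answers the following query: {query}"
    return generate(prompt)

def chain_of_thought_expansion(query: str) -> str:
    # Chain-of-Thought (CoT): ask the model to reason step by step before producing the text.
    prompt = (
        f"Answer the following query: {query}\n"
        "Give the rationale step by step before providing the final answer."
    )
    return generate(prompt)

def expanded_query(query: str, expansion: str, repeats: int = 5) -> str:
    # One common combination strategy in the literature (e.g., Query2Doc-style):
    # repeat the original query to keep its terms up-weighted, then append the generated text.
    return " ".join([query] * repeats + [expansion])
```

Repeating the original query before appending the generated passage is one way, used by Query2Doc-style approaches, to keep the original terms weighted more heavily than the expansion text when the expanded query is passed to a sparse retriever such as BM25.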
Type: paper
Keywords: Information Retrieval; Large Language Models; Natural Language Processing; Prompt Engineering; Query Expansion
Language: English
Conference: 14th Italian Information Retrieval Workshop, September 5-6, 2024
Year: 2024
Editors: Roitero, K.; Viviani, M.; Maddalena, E.; Mizzaro, S.
Published in: Proceedings of the 14th Italian Information Retrieval Workshop, CEUR-WS, Vol. 3802
Pages: 23-32
URL: https://ceur-ws.org/Vol-3802/
Access: open
Rizzo, D., Raganato, A., Viviani, M. (2024). Comparatively Assessing Large Language Models for Query Expansion in Information Retrieval via Zero-Shot and Chain-of-Thought Prompting. In Proceedings of the 14th Italian Information Retrieval Workshop (pp.23-32). CEUR-WS.
Files in this record:

File: Rizzo-2024-CEUR WS_14 It Informat Retrieval Ws-VoR.pdf
Access: open access
Attachment type: Publisher's Version (Version of Record, VoR)
License: Creative Commons
Size: 557.03 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/536207
Citations
  • Scopus: 0
  • Web of Science (ISI): ND