Leveraging Large Language Models for Medical Information Extraction and Query Generation

Georgios Peikos; Pranav Kasela; Gabriella Pasi
2024

Abstract

This paper introduces a system that integrates large language models (LLMs) into the clinical trial retrieval process, enhancing the effectiveness of matching patients with eligible trials while maintaining information privacy and allowing expert oversight. We evaluate six LLMs for query generation, focusing on open-source and relatively small models that require minimal computational resources. Our evaluation includes two closed-source and four open-source models, one of which was specifically trained on medical data, while the other five are general-purpose. We compare the retrieval effectiveness of LLM-generated queries against that of queries created by medical experts, as well as against state-of-the-art methods from the literature. Our findings indicate that the evaluated models achieve retrieval effectiveness on par with or better than expert-created queries, and that the LLMs consistently outperform standard baselines and other approaches from the literature. The best-performing LLMs exhibit fast response times, ranging from 1.7 to 8 seconds, and generate a manageable number of query terms (15-63 on average), making them suitable for practical implementation. Overall, our findings suggest that leveraging small, open-source LLMs for clinical trial retrieval can balance performance, computational efficiency, and real-world applicability in medical settings.
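As a rough illustration of the pipeline the abstract describes, the sketch below generates a keyword query from a patient note with an LLM and ranks trial descriptions with BM25. This is not the authors' implementation: the prompt wording, the `call_llm` stand-in, and the use of the rank_bm25 package are all assumptions; any small open-source model could be plugged in.

```python
# Minimal sketch, assuming LLM-based query generation followed by lexical
# (BM25) retrieval over trial descriptions. Requires: pip install rank-bm25
from rank_bm25 import BM25Okapi

# Hypothetical prompt; the paper's actual prompts are not reproduced here.
PROMPT = (
    "Extract the key medical concepts from the patient note below and "
    "return them as a short comma-separated keyword query.\n\nNote: {note}"
)

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for any local or hosted model client."""
    raise NotImplementedError("plug in your model client here")

def generate_query(patient_note: str) -> list[str]:
    # The paper reports queries of roughly 15-63 terms on average.
    keywords = call_llm(PROMPT.format(note=patient_note))
    return [t.strip().lower() for t in keywords.split(",") if t.strip()]

def rank_trials(query_terms: list[str], trials: list[str]) -> list[int]:
    # Tokenize trial descriptions and score them against the query terms.
    corpus = [doc.lower().split() for doc in trials]
    scores = BM25Okapi(corpus).get_scores(query_terms)
    # Return trial indices ordered from most to least relevant.
    return sorted(range(len(trials)), key=lambda i: scores[i], reverse=True)
```

Because only the generated query leaves the local environment (or none of it, with a local model), this kind of pipeline is consistent with the privacy and expert-oversight goals stated in the abstract.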
Type: paper
Subject: Computer Science - Information Retrieval
Language: English
Event: The 23rd IEEE/WIC International Conference on Web Intelligence and Intelligent Agent Technology - December 9-12, 2024
Year: 2024
URL: http://arxiv.org/abs/2410.23851v1
Access: open
Peikos, G., Kasela, P., Pasi, G. (2024). Leveraging Large Language Models for Medical Information Extraction and Query Generation. Presented at: The 23rd IEEE/WIC International Conference on Web Intelligence and Intelligent Agent Technology, December 9-12, 2024, Bangkok, Thailand.
Files in this item:

File: Peikos-2024-23 IEEE/WIC Int Conf-AAM.pdf
Access: open access
Attachment type: Author's Accepted Manuscript, AAM (Post-print)
License: Creative Commons
Size: 384.39 kB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/548721