Pasi, G., Jones, G., Curtis, K., Marrara, S., Sanvitto, C., Ganguly, D., et al. (2018). Evaluation of personalised information retrieval at CLEF 2018 (PIR-CLEF). In Experimental IR Meets Multilinguality, Multimodality, and Interaction: 9th International Conference of the CLEF Association, CLEF 2018, Avignon, France, September 10-14, 2018, Proceedings (pp. 335-342). Springer Verlag [10.1007/978-3-319-98932-7_29].
Evaluation of personalised information retrieval at CLEF 2018 (PIR-CLEF)
Pasi, Gabriella; Marrara, Stefania
2018
Abstract
The series of Personalised Information Retrieval (PIR-CLEF) Labs at CLEF is intended as a forum for the exploration of methodologies for the repeatable evaluation of personalised information retrieval (PIR). The PIR-CLEF 2018 Lab is the first full edition of this series after the successful pilot edition at CLEF 2017, and provides a Lab task dedicated to personalised search, while the workshop at the conference will form the basis of further discussion of strategies for the evaluation of PIR and suggestions for improving the activities of the PIR-CLEF Lab. The PIR-CLEF 2018 Task is the first PIR evaluation benchmark based on the Cranfield paradigm, with the potential benefit of producing evaluation results that are easily reproducible. The task is based on search sessions over a subset of the ClueWeb12 collection, undertaken by volunteer searchers using a methodology developed in the CLEF 2017 pilot edition of PIR-CLEF. The PIR-CLEF test collection provides a detailed set of data gathered during the activities undertaken by each subject during the search sessions, including their search queries and details of relevant documents as marked by the searchers. The PIR-CLEF 2018 workshop is intended to review the design and construction of the collection, and to consider the topic of reproducible evaluation of PIR more generally, with the aim of improving future editions of the evaluation benchmark.