
An approach to Evaluative AI through Large Language Models

Ermellino A.; Malandri L.; Mercorio F.; Nobani N.; Serino A.
2024

Abstract

eXplainable AI (XAI) has been gaining research interest across several AI applications. However, current XAI methods often fall short of involving the user in the decision-making process: they explain a decision the algorithm has already made, preventing the user from evaluating alternatives. In this setting, Evaluative AI encourages balanced human-AI collaboration by addressing over- and under-reliance on AI systems and involving the user in weighing the pros and cons of each recommendation. In this paper, we present EADS (Evaluative AI-based Decision Support), a framework that connects Evaluative AI with conversational explanations realised via Large Language Models (LLMs). The Evaluative AI approach enables users to contribute their domain knowledge and expertise to more effective and robust decision-making processes. Large Language Models enrich the explainer's output with natural-language conversational explanations that present the pros, cons, and neutral aspects of the ML model's alternatives, empowering users to evaluate options for informed, hypothesis-driven decision-making. We implemented the conversational framework according to our proposed formalization and conducted a user study, which provides evidence of its efficacy in enhancing decision-making processes. The results demonstrate that EADS, by combining human expertise with AI capabilities, is a promising avenue for improving the explainability, transparency, and overall efficacy of decision support systems across diverse domains.
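To make the workflow described in the abstract concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of an Evaluative-AI flow: an explainer's feature attributions for a candidate decision are split into pros, cons, and neutral aspects, and an LLM is then asked to phrase them conversationally without recommending a choice. All names (Option, split_evidence, build_prompt, call_llm) are illustrative assumptions, and the LLM call is a placeholder.

from dataclasses import dataclass

@dataclass
class Option:
    label: str                      # candidate decision, e.g. "approve loan"
    attributions: dict[str, float]  # feature -> signed contribution from an explainer

def split_evidence(opt: Option, eps: float = 0.05):
    """Partition feature attributions into pros, cons, and neutral aspects."""
    pros = {f: v for f, v in opt.attributions.items() if v > eps}
    cons = {f: v for f, v in opt.attributions.items() if v < -eps}
    neutral = {f: v for f, v in opt.attributions.items() if -eps <= v <= eps}
    return pros, cons, neutral

def build_prompt(opt: Option) -> str:
    """Compose an LLM prompt asking for a balanced, conversational explanation."""
    pros, cons, neutral = split_evidence(opt)
    return (
        f"Option: {opt.label}\n"
        f"Pros: {pros}\nCons: {cons}\nNeutral: {neutral}\n"
        "Explain these points conversationally, without recommending a choice, "
        "so the user can weigh this hypothesis against the alternatives."
    )

def call_llm(prompt: str) -> str:
    # Placeholder: a real system would query an LLM API of choice here.
    return f"[LLM explanation for]\n{prompt}"

if __name__ == "__main__":
    option = Option("approve loan", {"income": 0.42, "debt_ratio": -0.31, "age": 0.01})
    print(call_llm(build_prompt(option)))

Running the sketch prints a prompt populated with the pros/cons/neutral partition for one option; in the framework described above, the same step would be repeated for each alternative so the user can compare them.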
paper
Explainable AI; Human-Centered AI; Large Language Models
English
First Multimodal, Affective and Interactive eXplainable AI Workshop (MAI-XAI24 2024) co-located with 27th European Conference On Artificial Intelligence 19-24 October 2024 (ECAI 2024) - October 19, 2024
Proceedings of the First Multimodal, Affective and Interactive eXplainable AI Workshop (MAI-XAI24 2024) co-located with 27th European Conference On Artificial Intelligence 19-24 October 2024 (ECAI 2024)
CEUR Workshop Proceedings, Vol. 3803, pp. 1-15, 2024
https://ceur-ws.org/Vol-3803/
Ermellino, A., Malandri, L., Mercorio, F., Nobani, N., Serino, A. (2024). An approach to Evaluative AI through Large Language Models. In Proceedings of the First Multimodal, Affective and Interactive eXplainable AI Workshop (MAI-XAI24 2024) co-located with 27th European Conference On Artificial Intelligence 19-24 October 2024 (ECAI 2024) (pp.1-15). CEUR-WS.
Files in this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/526422