Ermellino, A., Malandri, L., Mercorio, F., Nobani, N., Serino, A. (2024). An approach to Evaluative AI through Large Language Models. In Proceedings of the First Multimodal, Affective and Interactive eXplainable AI Workshop (MAI-XAI24) co-located with the 27th European Conference on Artificial Intelligence (ECAI 2024), 19-24 October 2024 (pp. 1-15). CEUR-WS.
An approach to Evaluative AI through Large Language Models
Ermellino, A.; Malandri, L.; Mercorio, F.; Nobani, N.; Serino, A.
2024
Abstract
eXplainable AI (XAI) has been gaining research interest across several AI applications. However, current XAI methods often fall short of involving the user in the decision-making process: XAI explains to the user a decision already made by the algorithm, preventing the user from evaluating alternatives. In this setting, Evaluative AI encourages balanced human-AI collaboration by addressing the issues of over- and under-reliance on AI systems and involving the user in weighing the pros and cons of each recommendation. In this paper, we present EADS (Evaluative AI-based Decision Support), a framework that connects Evaluative AI with conversational explanations realised via Large Language Models (LLMs). The Evaluative AI approach enables users to actively contribute their domain knowledge and expertise to more effective and robust decision-making processes. Large Language Models enrich the explainer's output with natural-language conversational explanations that present the pros, cons, and neutral aspects of ML model alternatives, empowering users to evaluate options for informed, hypothesis-driven decision-making. After implementing the conversational framework according to our proposed formalization, we conducted a user study that provides compelling evidence of its efficacy in enhancing decision-making. The results demonstrate that EADS, through the fusion of human expertise and AI capabilities, presents a highly promising avenue for improving the explainability, transparency, and overall efficacy of decision support systems across diverse domains.