Meta-XAI for Explaining the Explainer: Unveiling Image Features Driving Deep Learning Decisions

Bianco, Simone
2025

Abstract

Deep learning has revolutionized computer vision by allowing neural networks to automatically learn features from data. However, the highly nonlinear nature of deep neural networks makes them difficult to interpret, raising concerns about potential biases in critical applications. To address this, researchers have advocated for eXplainable Artificial Intelligence (XAI). Many XAI techniques have been proposed, but they only highlight the image regions that influence a model's decision, without explaining why those regions are used. In this paper, we propose a post-hoc, model-agnostic meta-XAI method that explains why specific image regions drive a decision. The explanation is given in terms of human-interpretable image features, namely color, frequency, shape, shading, and texture: each feature is perturbed and the effect on the model's decision is measured, with results reported both as perturbation plots and as a visual summary based on the newly introduced normalized Area Under the Curve score. The experimental results confirm previous findings that deep vision models are biased towards texture, but also highlight the importance of color, frequency content, and perceptually salient structures in the final decision.
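
No code accompanies this record, but the perturbation protocol the abstract describes can be illustrated. The Python snippet below is a minimal, hypothetical sketch, not the paper's implementation: the model.predict interface, the desaturation perturbation, and the trapezoidal normalization are all assumptions made for illustration. It perturbs one feature (color) at increasing strengths, tracks the relative confidence of the originally predicted class, and condenses the curve into a single normalized area-under-the-curve value in the spirit of the score the paper introduces.

import numpy as np

def perturbation_curve(model, image, perturb, strengths):
    # Hypothetical interface: model.predict(image) returns a vector of
    # class probabilities. Track how confidence in the originally
    # predicted class changes as the perturbation strength grows.
    base = model.predict(image)
    cls = int(np.argmax(base))
    return np.asarray([
        float(model.predict(perturb(image, s))[cls]) / float(base[cls])
        for s in strengths
    ])

def normalized_auc(curve, strengths):
    # Trapezoidal area under the confidence-vs-strength curve, divided
    # by the strength span so scores are comparable across perturbation
    # types (illustrative normalization; the paper's exact definition
    # may differ).
    auc = float(np.sum((curve[1:] + curve[:-1]) / 2.0 * np.diff(strengths)))
    return auc / float(strengths[-1] - strengths[0])

def desaturate(image, strength):
    # Example color perturbation: blend an RGB image (H x W x 3, floats
    # in [0, 1]) towards its per-pixel mean, i.e. progressive
    # desaturation; strength=0 is the original, strength=1 is grayscale.
    gray = image.mean(axis=-1, keepdims=True)
    return (1.0 - strength) * image + strength * gray

# Usage sketch: in this formulation, a curve that stays near 1.0 (high
# normalized AUC) means the decision is insensitive to color, while a
# steep drop (low normalized AUC) means color matters. Repeating with
# frequency, shape, shading, and texture perturbations would give the
# per-feature comparison the abstract refers to.
# strengths = np.linspace(0.0, 1.0, 11)
# curve = perturbation_curve(model, image, desaturate, strengths)
# score = normalized_auc(curve, strengths)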
Journal article - Scientific article
Keywords: Deep learning, Convolutional neural networks, Explainable artificial intelligence
Language: English
Publication date: 13 Jan 2025
Year: 2025
Pages: 1-10
Citation: Bianco, S. (2025). Meta-XAI for Explaining the Explainer: Unveiling Image Features Driving Deep Learning Decisions. IEEE TRANSACTIONS ON ARTIFICIAL INTELLIGENCE, 1-10 [10.1109/tai.2025.3529397].
Files in this item:
There are no files associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/536041