
Piskovskyi, V., Chimisso, R., Patania, S., Foulsham, T., Vizzari, G., Ognibene, D. (2024). Generalizability analysis of deep learning predictions of human brain responses to augmented and semantically novel visual stimuli. Presented at: Human-inspired Computer Vision - ECCV 2024 Workshop - 29th September 2024, Milan, Italy [10.48550/arXiv.2410.04497].

Generalizability analysis of deep learning predictions of human brain responses to augmented and semantically novel visual stimuli

Patania, S.; Vizzari, G.; Ognibene, D.
Last
2024

Abstract

The purpose of this work is to investigate the soundness and utility of a neural network-based approach as a framework for exploring the impact of image enhancement techniques on visual cortex activation. In a preliminary study, we prepare a set of state-of-the-art brain encoding models, selected from the top 10 methods that participated in The Algonauts Project 2023 Challenge [16]. We analyze their ability to make valid predictions about the effects of various image enhancement techniques on neural responses. Since acquiring the actual data is infeasible due to the high cost of brain imaging procedures, our investigation builds on a series of experiments. Specifically, we analyze the ability of brain encoders to estimate the cerebral reaction to various augmentations by evaluating the response to augmentations targeting objects (i.e., faces and words) with known impact on specific areas. Moreover, we study the predicted activation in response to objects unseen during training, exploring the impact of semantically out-of-distribution stimuli. We provide relevant evidence for the generalization ability of the models forming the proposed framework, which appears promising for identifying the optimal visual augmentation filter for a given task, for model-driven design strategies, and for AR and VR applications.
poster + paper
Image Enhancement · Brain encoding · Generalizability
English
Human-inspired Computer Vision - ECCV 2024 Workshop - 29th September 2024
2024
2024
https://sites.google.com/view/hcvworkshop2024/accepted-papers
none
Files for this item:
There are no files associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/533842
Citations
  • Scopus ND
  • Web of Science ND
Social impact