Petilli, M., Rodio, F., Günther, F., & Marelli, M. (2024). Visual search and real-image similarity: An empirical assessment through the lens of deep learning. Psychonomic Bulletin & Review. https://doi.org/10.3758/s13423-024-02583-4

Visual search and real-image similarity: An empirical assessment through the lens of deep learning

Petilli M. A. (First); Rodio F. M. (Second); Marelli M. (Last)
2024

Abstract

The ability to predict how efficiently a person finds an object in the environment is a crucial goal of attention research. Central to this issue are the similarity principles initially proposed by Duncan and Humphreys, which outline how the similarity between target and distractor objects (TD) and between distractor objects themselves (DD) affect search efficiency. However, the search principles lack direct quantitative support from an ecological perspective, being a summary approximation of a wide range of lab-based results poorly generalisable to real-world scenarios. This study exploits deep convolutional neural networks to predict human search efficiency from computational estimates of similarity between objects populating, potentially, any visual scene. Our results provide ecological evidence supporting the similarity principles: search performance continuously varies across tasks and conditions and improves with decreasing TD similarity and increasing DD similarity. Furthermore, our results reveal a crucial dissociation: TD and DD similarities mainly operate at two distinct layers of the network: DD similarity at the intermediate layers of coarse object features and TD similarity at the final layers of complex features used for classification. This suggests that these different similarities exert their major effects at two distinct perceptual levels and demonstrates our methodology's potential to offer insights into the depth of visual processing on which the search relies. By combining computational techniques with visual search principles, this approach aligns with modern trends in other research areas and fulfils longstanding demands for more ecologically valid research in the field of visual search.
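
As a concrete illustration of the approach described in the abstract, the sketch below computes TD and DD similarities from the activations of a pretrained convolutional network. Everything specific in it is an assumption for illustration only: the choice of VGG-16, cosine similarity as the metric, the layer cut points, and the image file names are hypothetical, not the authors' actual pipeline.

```python
# Minimal sketch: estimating target-distractor (TD) and distractor-distractor (DD)
# similarity from the activations of a pretrained convolutional network.
# Architecture, metric, and layer choices are illustrative assumptions only.
import torch
import torch.nn.functional as F
from itertools import combinations
from PIL import Image
from torchvision import models, transforms

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def load(path):
    # Load an image and prepare it as a 1-image batch for the network.
    return preprocess(Image.open(path).convert("RGB")).unsqueeze(0)

@torch.no_grad()
def intermediate_features(x):
    # Coarse object features from a mid-level convolutional block
    # (the cut point 17 = end of VGG-16's third block; an arbitrary choice).
    return model.features[:17](x).flatten()

@torch.no_grad()
def final_features(x):
    # Classification-oriented features from the penultimate fully
    # connected layer of the network.
    x = torch.flatten(model.avgpool(model.features(x)), 1)
    return model.classifier[:-1](x).flatten()

def cos(a, b):
    return F.cosine_similarity(a, b, dim=0).item()

# Hypothetical stimulus files: one target image and three distractors.
target = load("target.jpg")
distractors = [load(p) for p in ("d1.jpg", "d2.jpg", "d3.jpg")]

# TD similarity at the final layers: mean target-distractor similarity.
t_final = final_features(target)
td = sum(cos(t_final, final_features(d)) for d in distractors) / len(distractors)

# DD similarity at the intermediate layers: mean pairwise distractor similarity.
d_mid = [intermediate_features(d) for d in distractors]
pairs = list(combinations(d_mid, 2))
dd = sum(cos(a, b) for a, b in pairs) / len(pairs)

print(f"TD similarity (final layer): {td:.3f}")
print(f"DD similarity (intermediate layer): {dd:.3f}")
```

Here the two measures are read out at the two depths the abstract highlights: TD similarity at the final, classification-oriented layers and DD similarity at an intermediate convolutional block. In practice one would compute both measures at every layer and relate each to search efficiency (e.g., reaction-time slopes) in a regression, rather than fixing the layers in advance.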
Journal article - Scientific article
Computer vision; Convolutional neural networks; Perceptual processing; Search efficiency; Visual search; Visual similarity
English
26 Sep 2024
2024
Open access
Files in this item:

Petilli-2024-Psychonomic Bulletin and Review-VoR.pdf
Access: open access
Description: CC BY 4.0. This article is licensed under a Creative Commons Attribution 4.0 International License. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Attachment type: Publisher’s Version (Version of Record, VoR)
Licence: Creative Commons
Size: 5.39 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/521758
Citations
  • Scopus: 0
  • Web of Science: 0