
Artificial Intelligence in the Eyes of Society: Assessing Social Risk and Social Value Perception in a Novel Classification

Gabbiadini, A.; Durante, F.; Baldissarri, C.; Andrighetto, L.
2024

Abstract

Artificial intelligence (AI) is a rapidly developing technology that has the potential to create previously unimaginable opportunities for our societies. Still, the public's opinion of AI remains mixed. Since AI has been integrated into many facets of daily life, it is critical to understand how people perceive these systems. The present work investigated the perceived social risk and social value of AI. In a preliminary study, AI's social risk and social value were first operationalized and explored by adopting a correlational approach. Results highlighted that perceived social value and social risk represent two significant and antagonistic dimensions driving the perception of AI: the higher the perceived risk, the lower the social value attributed to AI. The main study considered pretested AI applications in different domains to develop a classification of AI applications based on perceived social risk and social value. A cluster analysis revealed that in the two-dimensional social risk × social value space, the considered AI technologies grouped into six clusters, with the AI applications related to medical care (e.g., assisted surgery) unexpectedly perceived as the riskiest ones. Understanding people's perceptions of AI can guide researchers, developers, and policymakers in adopting an anthropocentric approach when designing future AI technologies to prioritize human well-being and ensure AI's responsible and ethical development in the years to come.
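For readers who want a concrete picture of the kind of analysis described above, the sketch below clusters ratings in a two-dimensional risk × value space with k-means in Python. It is an illustration only: the clustering algorithm, the number of placeholder applications, and the simulated ratings are assumptions, not the authors' method or data; only the six-cluster solution mirrors what the abstract reports.

```python
# Illustrative sketch only: the paper reports a cluster analysis of pretested AI
# applications in the perceived social risk x social value space; the algorithm
# (k-means here) and the simulated ratings are assumptions for demonstration.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical mean ratings on 1-5 scales for 12 placeholder AI applications:
# column 0 = perceived social risk, column 1 = perceived social value.
n_applications = 12
ratings = rng.uniform(1.0, 5.0, size=(n_applications, 2))

# Partition the risk x value space into six clusters, mirroring the solution
# reported in the main study.
model = KMeans(n_clusters=6, n_init=10, random_state=0).fit(ratings)

for app_idx, label in enumerate(model.labels_):
    risk, value = ratings[app_idx]
    print(f"application {app_idx:2d}: risk={risk:.2f}, value={value:.2f} -> cluster {label}")
```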
Type: Journal article - Scientific article
Keywords: Artificial Intelligence; social risk; social value
Language: English
Date of publication: 11 March 2024
Year: 2024
Volume: 24
Article number: 7008056
Access: Open access
Gabbiadini, A., Durante, F., Baldissarri, C., Andrighetto, L. (2024). Artificial Intelligence in the Eyes of Society: Assessing Social Risk and Social Value Perception in a Novel Classification. HUMAN BEHAVIOR AND EMERGING TECHNOLOGIES, 24 [10.1155/2024/7008056].
File in this record:
10281-466338_VoR.pdf
Attachment type: Publisher's Version (Version of Record, VoR)
License: Creative Commons
Access: Open access
Size: 833.44 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/466338
Citations
  • Scopus: 0
  • Web of Science (ISI): 0
Social impact