
Mentalistic stances towards AI systems

Silvia Larghi; Edoardo Datteri
2024

Abstract

AI systems, from computer-based conversational agents to embodied robots, may exhibit characteristics that, in some cases, lead people to attribute mental states and cognitive abilities to them (De Graaf & Malle, 2017). Although the literature on the attribution of mental states and cognition to AI systems is extensive and growing (Thellmann, 2022), research on this topic has mainly focused on whether people take an intentional stance (à la Dennett, 1971) towards artificial systems, attributing beliefs, desires and other propositional attitudes to the system in order to explain and predict its behavior. We suggest that people, in their interactions with AI systems, may occasionally adopt a mentalistic explanatory and predictive strategy that differs in interesting ways from the intentional stance: one that involves decomposing the system into functional modules processing representations. While the first approach (in Dennett’s framework, the adoption of the intentional stance) is more akin to folk psychology, the latter, dubbed here folk-cognitivist, is closer to classical cognitivist modeling. The distinction between the two strategies is explored and illustrated with reference to examples from ongoing empirical research on the topic. This distinction aims to contribute to our understanding of the dynamics of human-AI interaction, which may be relevant to various research fields, to the design of effective, safe and trustworthy interactive AI systems, to properly addressing the ethical, legal and social issues related to their use, and to their deployment as scientific tools for understanding human cognition (Wykowska, 2021; Ziemke, 2020).
abstract + slide
mental state attribution; intentional stance; philosophy of cognitive science; philosophy of AI
English
PHILTECH WORKSHOP – Topics in the Philosophy of Science and Technology for the ML era
2024
2024
Larghi, S., Datteri, E. (2024). Mentalistic stances towards AI systems. Intervento presentato a: PHILTECH WORKSHOP – Topics in the Philosophy of Science and Technology for the ML era, Milano, Italy.
Files for this product:
No files are associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/533723