Capraro, V., & Perc, M. (2021). Mathematical foundations of moral preferences. Journal of the Royal Society Interface, 18(175). https://doi.org/10.1098/rsif.2020.0880
Mathematical foundations of moral preferences
Capraro V; Perc M
2021
Abstract
One-shot anonymous unselfishness in economic games is commonly explained by social preferences, which assume that people care about the monetary pay-offs of others. However, during the last 10 years, research has shown that different types of unselfish behaviour, including cooperation, altruism, truth-telling, altruistic punishment and trustworthiness, are in fact better explained by preferences for following one's own personal norms, that is, internal standards about what is right or wrong in a given situation. Beyond better organizing various forms of unselfish behaviour, this moral preference hypothesis has recently also been used to increase charitable donations, simply by means of interventions that make the morality of an action salient. Here we review experimental and theoretical work dedicated to this rapidly growing field of research, and in doing so we outline mathematical foundations for moral preferences that can be used in future models to better understand selfless human actions and to adjust policies accordingly. These foundations can also be used by artificial intelligence to better navigate the complex landscape of human morality.
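To make the contrast drawn in the abstract concrete, a minimal sketch (not taken from the paper itself) of the two families of utility functions can be written as follows. A canonical social-preference model is Fehr-Schmidt inequity aversion, in which a player's utility depends on the monetary pay-offs of others, whereas a generic moral-preference model augments the material pay-off with a term rewarding compliance with one's personal norm; the weight \(\mu_i\) and the norm-compliance function \(P_i\) below are illustrative placeholders rather than the paper's own notation.

\[
U_i^{\text{social}}(x) \;=\; x_i \;-\; \alpha_i \max(x_j - x_i,\, 0) \;-\; \beta_i \max(x_i - x_j,\, 0),
\]
\[
U_i^{\text{moral}}(a) \;=\; \pi_i(a) \;+\; \mu_i \, P_i(a),
\]

where \(x_i, x_j\) are the monetary pay-offs of the two players, \(\pi_i(a)\) is player \(i\)'s material pay-off from action \(a\), \(P_i(a)\) measures how well \(a\) complies with player \(i\)'s personal norm, and \(\mu_i \ge 0\) is the weight placed on norm compliance. Under this reading, making the morality of an action salient can be interpreted as raising the effective \(\mu_i\), which is one hedged way to connect the donation interventions mentioned in the abstract to the formal framework.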