Shank, Daniel B. and Alexander Gott*. 2019. “People’s Self-Reported Encounters of Perceiving Mind in Artificial Intelligence.” Data in Brief 25: 1-5.
This article presents data from two surveys that asked about everyday encounters with artificial intelligence (AI) systems perceived to have attributes of mind. In response to specific attribute prompts about an AI, participants qualitatively described a personally known encounter with an AI. In Survey 1 the prompts asked about an AI planning, having memory, controlling resources, or doing something surprising. In Survey 2 the prompts asked about an AI experiencing emotion, expressing desires or beliefs, having human-like physical features, or being mistaken for a human. The original responses were culled based on the ratings of multiple coders to eliminate responses that did not adhere to the prompts. This article includes the qualitative responses, coded categories of those responses, quantitative measures of mind perception, and demographics. For an interpretation of these data related to people's emotions, see "Feeling our Way to Machine Minds: People's Emotions when Perceiving Mind in Artificial Intelligence" (Shank et al., 2019).
Shank, Daniel B., Christopher Graves*, Alexander Gott*, Patrick Gamez, and Sophia Rodriguez*. 2019. “Feeling our Way to Machine Minds: People’s Emotions when Perceiving Mind in Artificial Intelligence.” Computers in Human Behavior 98: 256-266.
It is now common for people to encounter artificial intelligence (AI) across many areas of their personal and professional lives. Interactions with AI agents may range from the routine use of information technology tools to encounters where people perceive an artificial agent as exhibiting mind. Combining two studies (usable N = 266), we explore people's qualitative descriptions of a personal encounter with an AI in which it exhibits characteristics of mind. Across a range of situations reported, a clear pattern emerged in the responses: the majority of people report their own emotions, including surprise, amazement, happiness, disappointment, amusement, unease, and confusion, in their encounter with a minded AI. We argue that emotional reactions occur as part of mind perception as people negotiate between the disparate concepts of programmed electronic devices and actions indicative of human-like minds. Specifically, emotions are often tied to AIs that produce extraordinary outcomes, inhabit crucial social roles, and engage in human-like actions. We conclude with future directions and the implications for ethics, the psychology of mind perception, the philosophy of mind, and the nature of social interactions in a world of increasingly sophisticated AIs.
Shank, Daniel B., Alyssa DeSanti*, and Timothy Maninger*. 2019. “When are Artificial Intelligence versus Human Agents Faulted for Wrongdoing? Moral Attributions after Individual and Joint Decisions.” Information, Communication & Society 22(5): 648-663.
Artificial intelligence (AI) agents make decisions that affect individuals and society and that can produce outcomes traditionally considered moral violations if performed by humans. Do people attribute the same moral permissibility and fault to AIs and humans when each produces the same moral violation outcome? Additionally, how do people attribute morality when an AI and a human jointly make the decision that produces that violation? We investigate these questions with an experiment that manipulates written descriptions of four real-world scenarios in which a violation outcome was originally produced by an AI. Our decision-making structures include individual decision-making (either AIs or humans) and joint decision-making (either humans monitoring AIs or AIs recommending to humans). We find that the decision-making structure has little effect on morally faulting AIs, but that humans who monitor AIs are faulted less than solo humans and humans receiving recommendations. Furthermore, people attribute more permission and less fault to AIs than to humans for the violation in both joint decision-making structures. This pattern of blame for joint AI-human wrongdoing suggests the potential for strategic scapegoating of AIs for human moral failings and the need for future research on AI-human teams.
Shank, Daniel B. and Alyssa DeSanti*. 2018. "Attributions of Morality and Mind to Artificial Intelligence after Real-World Moral Violations." Computers in Human Behavior 86:401-411.
The media has portrayed certain artificial intelligence (AI) software as committing moral violations, such as the AI judge of a human beauty contest being “racist” when it selected predominantly light-skinned winners. We examine people's attributions of morality for seven such real-world events that were first publicized in the media, experimentally manipulating the occurrence of a violation and the inclusion of information about the AI's algorithm. Both the presence of the moral violation and the information about the AI's algorithm increase participants' reporting of a moral violation occurring in the event. However, even in the violation outcome conditions, only 43.5 percent of participants reported that they were sure a moral violation occurred. Addressing whether the AI is blamed for the moral violation, we found that people attributed increased wrongness to the AI, but not to the organization, programmer, or users, after a moral violation. In addition to moral wrongness, the AI was attributed moderate levels of awareness, intentionality, justification, and responsibility for the violation outcome. Finally, the inclusion of the algorithm information marginally increased perceptions of the AI having mind, and perceived mind was positively related to attributions of intentionality and wrongness to the AI.