Dr. Daniel Shank

Assistant Professor

Specialization:

Social psychology, technology, groups, artificial intelligence

SIM-AI Research Lab

The Social Interaction and Morality of Artificial Intelligence (SIM-AI) lab conducts research on people’s interactions with and perceptions of AIs and other advanced computer systems, including how people judge their morality and mental capacity. Other research projects examine the affective impressions of teams, groups, and organizations. Questions we try to address include: What causes people to perceive real-world artificial intelligences or their behavior as moral? What causes people to perceive them as having mind, mental abilities, or intentions? When do people blame AIs or hold them responsible, and how is this process similar to or different from blaming humans? When and why do people unplug smart home hubs such as Google Home and Amazon Alexa? How do artificial agents and their behaviors alter impressions of the teams they are on? How do people interact when placed on a team with AIs, and how does this alter their perceptions, attributions, and blame? What factors enable people to trust the advice of AI for personal decisions? How do people judge artistic creations by humans and by AI differently? How do the behaviors of an organization’s members alter the image of the organization?

RESEARCH ASSISTANT INFORMATION

Contact Dr. Daniel Shank at shankd@mst.edu for a list of current project openings.

Duties: Research assistants will primarily be involved in conducting background literature searches, running experiments, coding qualitative data, developing stimuli, developing experiments, and programming experiments in Qualtrics. Advanced research assistants may be involved in analyzing results, writing up findings, presenting findings at conferences, and developing new research questions.

Volunteering and Pay: For unpaid positions, research assistants may join a project as a volunteer for 10 hours per week or in fulfillment of Psych 5000/4099 (Special Problems). For paid positions, the undergraduate hourly rate is $9.00/hour for approximately 10 hours per week; the salaried graduate rate is approximately $1,607 per month for 15 hours/week and includes a tuition waiver. Generally, paid positions are offered only after a period of unpaid work demonstrating the necessary abilities or on the basis of previous research experience. Paid positions are also contingent on available funding.

Start Date: Research assistant positions may begin at any time, but they work best when started at the beginning of a new semester.

Location: Most work can be done remotely, with the exception of on-campus experiments.

Contact: If interested, please send a resume/CV to Dr. Shank at shankd@mst.edu.

Recent SIM-AI Lab Publications

*Students at the time

Morality and Mind of Artificial Intelligence

Shank, Daniel B. and Alexander Gott*. 2019. “People’s Self-Reported Encounters of Perceiving Mind in Artificial Intelligence.” Data in Brief 25:1-5.

This article presents data from two surveys that asked about everyday encounters with artificial intelligence (AI) systems that are perceived to have attributes of mind. In response to specific attribute prompts about an AI, participants qualitatively described a personally known encounter with an AI. In survey 1 the prompts asked about an AI planning, having memory, controlling resources, or doing something surprising. In survey 2 the prompts asked about an AI experiencing emotion, expressing desires or beliefs, having human-like physical features, or being mistaken for a human. The original responses were culled based on the ratings of multiple coders to eliminate responses that did not adhere to the prompts. The article includes the qualitative responses, coded categories of those responses, quantitative measures of mind perception, and demographics. For an interpretation of these data related to people's emotions, see “Feeling our Way to Machine Minds: People’s Emotions when Perceiving Mind in Artificial Intelligence” (Shank et al., 2019).

Shank, Daniel B., Christopher Graves*, Alexander Gott*, Patrick Gamez, and Sophia Rodriguez*. 2019. “Feeling our Way to Machine Minds: People’s Emotions when Perceiving Mind in Artificial Intelligence.” Computers in Human Behavior 98:256-266.

It is now common for people to encounter artificial intelligence (AI) across many areas of their personal and professional lives. Interactions with AI agents may range from the routine use of information technology tools to encounters where people perceive an artificial agent as exhibiting mind. Combining two studies (usable N = 266), we explore people's qualitative descriptions of a personal encounter with an AI in which it exhibits characteristics of mind. Across a range of situations reported, a clear pattern emerged in the responses: the majority of people report their own emotions, including surprise, amazement, happiness, disappointment, amusement, unease, and confusion, in their encounter with a minded AI. We argue that emotional reactions occur as part of mind perception as people negotiate between the disparate concepts of programmed electronic devices and actions indicative of human-like minds. Specifically, emotions are often tied to AIs that produce extraordinary outcomes, inhabit crucial social roles, and engage in human-like actions. We conclude with future directions and the implications for ethics, the psychology of mind perception, the philosophy of mind, and the nature of social interactions in a world of increasingly sophisticated AIs.

Shank, Daniel B., Alyssa DeSanti*, and Timothy Maninger*. 2019. “When are Artificial Intelligence versus Human Agents Faulted for Wrongdoing? Moral Attributions after Individual and Joint Decisions.” Information, Communication & Society 22(5):648-663.

Artificial intelligence (AI) agents make decisions that affect individuals and society, and these decisions can produce outcomes traditionally considered moral violations if performed by humans. Do people attribute the same moral permissibility and fault to AIs and to humans when each produces the same moral violation outcome? Additionally, how do people attribute morality when an AI and a human jointly make the decision that produces that violation? We investigate these questions with an experiment that manipulates written descriptions of four real-world scenarios in which, originally, a violation outcome was produced by an AI. Our decision-making structures include individual decision-making (either AIs or humans) and joint decision-making (either humans monitoring AIs or AIs recommending to humans). We find that the decision-making structure has little effect on morally faulting AIs, but that humans who monitor AIs are faulted less than solo humans and humans receiving recommendations. Furthermore, people attribute more permission and less fault to AIs than to humans for the violation in both joint decision-making structures. The blame for joint AI-human wrongdoing suggests the potential for strategic scapegoating of AIs for human moral failings and the need for future research on AI-human teams.

Shank, Daniel B. and Alyssa DeSanti*. 2018. "Attributions of Morality and Mind to Artificial Intelligence after Real-World Moral Violations." Computers in Human Behavior 86:401-411. 

The media has portrayed certain artificial intelligence (AI) software as committing moral violations, such as the AI judge of a human beauty contest being “racist” when it selected predominantly light-skinned winners. We examine people's attributions of morality for seven such real-world events that were first publicized in the media, experimentally manipulating the occurrence of a violation and the inclusion of information about the AI's algorithm. Both the presence of the moral violation and the information about the AI's algorithm increase participants' reporting that a moral violation occurred in the event. However, even in the violation outcome conditions, only 43.5 percent of the participants reported that they were sure a moral violation occurred. Addressing whether the AI is blamed for the moral violation, we find that people attributed increased wrongness to the AI (but not to the organization, programmer, or users) after a moral violation. In addition to moral wrongness, the AI was attributed moderate levels of awareness, intentionality, justification, and responsibility for the violation outcome. Finally, the inclusion of the algorithm information marginally increased perceptions of the AI having mind, and perceived mind was positively related to attributions of intentionality and wrongness to the AI.

Smart Home Technology

Wright, David and Daniel B. Shank. 2019. “Smart Home Technology Diffusion in a Living Laboratory.” Journal of Technical Writing and Communication. Advance online publication. 

Smart home products continue to rise in popularity but have yet to achieve widespread adoption. Beyond general surveys, there is little research on how the general population perceives the benefits of different smart home devices. Using a living laboratory of five solar houses that we equipped with a range of smart home devices, we assessed how university student residents learn about, use, and gain interest in adopting smart home technology. Analysis of the data confirms that users find lifestyle benefits to be the most important motivators for adopting smart home technology. Yet without training in using that technology, these benefits do not outweigh the risks associated with learning to operate it.

Affective Impressions of Groups and Organizations

Shank, Daniel B., Sarah Hercula, and Brent Curdy. Forthcoming. “The Effect of Noun Phrase Grammar on the Affective Meaning of Social Identity Concepts.” Journal of Research Design and Statistics in Linguistics and Communication Science.

We examine the influences of determiners (a/an, the, and all) and grammatical number (singular or plural) on the affective meaning of social identity concepts. Some linguistic evidence suggests that changes in the grammatical form of a noun phrase may shift its affective meaning, while other research highlights the importance of context for such shifts. We conceptualize and measure affective meaning in terms of evaluation (goodness), potency, and activity drawn from research in affect control theory (ACT), a social psychological theory of culture and language. In two experiments, participants rate 28 social identity concepts, which are either count or collective nouns, presented in one of five grammatical forms. In congruence with ACT, the data support that the bulk of a concept’s affective meaning is carried by the noun itself, rather than by the grammatical features of the noun phrase in which the concept is expressed.

Shank, Daniel B. and Dawn T. Robinson. 2019. “Who’s Responsible? Representatives’ Autonomy Alters Customers’ Emotion and Repurchase Intentions toward Organizations.” Journal of Consumer Marketing 36(1):155-167. 

This paper presents and tests a model of how the autonomy of an organization’s representative alters the effects of customer experiences on customer emotions and repurchasing intentions toward the organization. Specifically, the paper offers a moderated mediation model whereby representative autonomy alters attributions of organizational responsibility, which moderate the effect of service experience on emotion, and emotion mediates the effects of service experiences on repurchasing intentions. Study 1 is a laboratory experiment (N = 115) in which participants engaged in a multi-round product purchasing task through an online representative of a company. Study 2 is a vignette experiment (N = 393) in which participants responded to situations of purchasing either a car, furniture, a haircut, or a vacation package from a representative of a company. In both studies, representative autonomy information was manipulated to be either low or high, and customer experience was manipulated to be either positive or negative. Measures included responsibility, emotion toward the organization, and repurchase intention. Structural equation models support the proposed model: in the presence of information about representative autonomy, the link between customer experience and repurchasing intent is amplified and mediated by emotion toward the organization. Because of the experimental approach, the findings may not be generalizable, but the experimental method allows for a controlled test of the process, ordering, and relationships among variables. Understanding how representatives’ autonomy ultimately alters repurchasing, and how this process involves responsibility attributions, contributes to both practice and theory.

Shank, Daniel B. and Alexander Burns*. 2018. “Comparing Groups’ Affective Sentiments to Group Perceptions.” Current Research in Social Psychology 26(5):55-66.

Affect control theory focuses on interaction among individuals, not groups. Groups, like individual identities, vary in affective sentiments across the dimensions of evaluation, potency, and activity, but a separate literature shows the importance of the group perceptions of entitativity, homogeneity, essentialism, and agency. Therefore, to consider affect control theory’s applicability to groups, we compare these principal group perceptions to affective sentiments for 64 group concepts. The results reveal that affective sentiments correlate with all four group perceptions in meaningful ways.