Psychological Perspectives Relevant to Artificial Sentience
Janet Pauketat
Research Fellow
August 08, 2021

Edited by Ali Ladak and Jamie Harris. Many thanks to Jacy Reese Anthis and Silvio Curtis for reviewing and providing feedback.

Abstract

At Sentience Institute, we operate under the principle that moral circle expansion is a key strategy for reducing s-risks, particularly those related to artificial sentience. Like SI, psychologists have been studying moral circle expansion and the moral inclusion of humans and nonhumans alike. In this post, we provide an overview of the psychological science of moral inclusion. Then we review psychological research on some traditional correlates of the moral consideration of artificial entities, on social robot and human relations, and on the connections between psychology and artificial intelligence (AI). We then provide an overview of the interdisciplinary literature on interpersonal interactions with artificial entities. Finally, we point to research that could aid our nascent understanding of artificial sentience and consider how knowledge about moral inclusion could be applied to improve the future. We hope that you will gain a sense of where psychological science has been building knowledge about moral inclusion and artificial entities, the fundamental connections between psychology and AI, and what areas of research seem promising to advance our moral consideration of sentient artificial entities.

*Please note that this post focuses on some of the psychological perspectives on artificial entities that have had more traction and are relevant to the moral consideration of sentient artificial entities. This review is by no means exhaustive of the work on human-robot interaction (HRI), human-computer interaction (HCI), or the psychological study of robots and AIs.

Table of Contents

Introduction

How have psychological scientists conceived of moral inclusion?

How are artificial entities studied in psychological science?

Psychological correlates related to the moral consideration of artificial entities

Social robots and humans

AIs and humans

Psychology, neuroscience, and AI

Interpersonal interactions with artificial entities

What psychological science research would help better inform our understanding of artificial sentience?

How could we apply knowledge about the moral inclusion of artificial entities to improving the future?

Further Reading

Introduction

When you think about artificial entities, many possible representations might come to mind, from robot vacuums to advanced language generators such as GPT-3 to future whole brain emulations. In the future, the numbers of such artificial entities with some degree of sentience, that is, the capacity to have positive and negative experiences, could be vast. The large number of entities combined with the possibility that they will not be granted sufficient moral consideration poses a risk of astronomical suffering. In this post we review the psychological science of moral inclusion and psychological perspectives on artificial entities, focusing on research relevant to the moral consideration of future sentient artificial entities.

How have psychological scientists conceived of moral inclusion?

Moral exclusion has been a hallmark of human history, leading to and justifying atrocities against human and nonhuman beings alike.[1] Psychologists have thought of the moral circle as a boundary separating entities worthy of moral consideration (i.e., those who are included) from entities unworthy of moral consideration (i.e., those who are excluded).[2] Psychological science research supports the idea that humans have a moral circle in which a variety of other beings are either included to some degree or excluded entirely.[3] The table below details some common measurements of moral inclusion used in psychological science.

Table 1: Measures of moral inclusion

| Measure | Original citation | Example instructions/item | Example entities studied | Example studies using the measure |
| --- | --- | --- | --- | --- |
| Moral Concern | Laham, 2009 | Below is a list of entities. Please select those that you feel morally obligated to show concern for. | Companion animals, wild animals, food/meat animals, whole brain emulation, robot, young girl, baby, brain-dead person | Camacho, 2019; Ladak et al., 2021; Leite et al., 2019; Loughnan et al., 2010 |
| Moral Expansiveness Scale | Crimston et al., 2016 | In which circle of moral concern would you put the following entities? | Family member, co-worker, chicken, chimpanzee, redwood tree, coral reef | Bastian et al., 2019; Neldner et al., 2018; Takamatsu, 2020 |
| Moral Inclusion/Exclusion of Other Groups Scale | Passini & Morselli, 2017 | (values held by this group represent a threat to our well-being) <---> (values held by this group represent an opportunity for our well-being) | Ethnic or cultural groups different from your own | Passini & Villano, 2018 |
| Moral Regard for Outgroups | Reed & Aquino, 2003 | To what extent do you believe you have a moral or ethical obligation to show concern for the welfare and interests of [people from another country]? | People from another country, strangers, people who practice a different religion than you, people of different ethnicities than you, refugees, the environment | Bratanova et al., 2012; Leavitt et al., 2012; Takamatsu, 2020 |
| Mutualism | Teel & Manfredo, 2009 | I view all living things as part of one big family. | Wildlife (e.g., wolves, polar bears, lynx) | Manfredo et al., 2020; Vaske et al., 2011 |
| Scope of Justice/Moral Exclusion Scale | Opotow, 1993 | I believe that considerations of fairness apply to [Muslims] too. | Muslim people, Jewish people, Roma people, insects | Hadarics & Kende, 2019; Hadarics & Kende, 2018 |
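
To make the checklist-style measures above more concrete, here is a minimal, hypothetical sketch of how responses to a measure like Laham's (2009) moral concern checklist might be scored. The entity list follows the example entities in Table 1, but the simple proportion score is an illustrative assumption, not the published scoring protocol.

```python
# Hypothetical scoring sketch for a checklist measure of moral concern
# (in the style of Laham, 2009). The proportion score is an illustrative
# assumption, not the published scoring protocol.

ENTITIES = [
    "companion animal", "wild animal", "food/meat animal",
    "whole brain emulation", "robot", "young girl", "baby",
    "brain-dead person",
]

def moral_inclusion_score(selected: set) -> float:
    """Proportion of listed entities the respondent feels morally
    obligated to show concern for."""
    return len(selected & set(ENTITIES)) / len(ENTITIES)

# Example: a respondent who selects only humans and companion animals.
respondent = {"companion animal", "young girl", "baby"}
print(f"Moral inclusion score: {moral_inclusion_score(respondent):.2f}")
```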

Psychologists have also studied how various values, personality orientations, perceptions, attitudes, and emotions are related to moral inclusion. Below we summarize some of these relationships.

Table 2: Psychological correlates of moral inclusion

| Construct | Definition | Relationship to moral inclusion | Relevant citation(s) |
| --- | --- | --- | --- |
| Anthropomorphism | The attribution of human-like physical features and mental capacities to nonhuman entities | A stronger tendency to anthropomorphize predicts greater moral consideration of nonhuman agents. | Waytz et al., 2010 |
| Compassion | A moral emotion felt in response to others’ suffering coupled with a motivation to help | Compassion motivates moral judgments and actions that serve to reduce suffering, especially suffering as a result of unjustified harms. | Goetz et al., 2010 |
| Empathy | “Empathy is a stimulated emotional state that relies on the ability to perceive, understand and care about the experiences or perspectives of another” and incorporates cognitive, affective, and motivational components (Young et al., 2018). | Empathy typically predicts moral care and helping behavior. However, as in the “meat paradox,” empathy may be undermined by moral disengagement. | Camilleri et al., 2020; Wilhelm & Bekkers, 2010 |
| Global Citizenship | “awareness, caring, and embracing cultural diversity while promoting social justice and sustainability, coupled with a sense of responsibility to act” (Reysen & Katzarska-Miller, 2013, p. 858) | Stronger global citizenship predicts morally expansive attitudes and behaviors such as pro-environmentalism, support for peace, and human rights activism. | McFarland et al., 2019 |
| Mind Perception | The attribution of beliefs, intentions, and mental states such as thoughts and emotions to others | More mind perception is linked with greater moral consideration. | Wang & Krumhuber, 2018 |
| Moral Identity | “the degree to which being a moral person is important to an individual’s identity” (Hardy & Carlo, 2011, p. 212) | Moral identity is weakly associated with moral behaviors, including avoiding antisocial actions like aggression, behaving ethically, and prosocial actions like volunteering. | Hertz & Krettenauer, 2016 |
| Moral Foundations | A model of moral judgment suggesting that moral behavior is based on variation in the strength of care/harm, fairness/cheating, loyalty/betrayal, authority/subversion, and sanctity/degradation values | Holding stronger care/harm and fairness/cheating values is linked with a more expansive moral circle (i.e., showing concern for friends in addition to family, for the world in addition to the nation, and for nonhuman animals in addition to humans). | Waytz et al., 2019 |
| Moral Motives | A model of moral judgment suggesting that moral behavior comes from the intersection of approach-avoidance motives and focus of moral concern | Approach motives combined with a focus on others underpin the desire to help others and treat them fairly. | Janoff-Bulman & Carnes, 2013 |
| Unconditional Respect | A moral orientation towards the recognition of another’s fundamental integrity and autonomy as a rational being | Unconditional respect predicts empathy and positive intentions towards human outgroups. | Lalljee et al., 2009 |
| Social Dominance Orientation | “the degree to which individuals desire and support a group-based hierarchy and the domination of ‘inferior’ groups by ‘superior’ groups” (Sidanius & Pratto, 1999, p. 48) | A stronger social dominance orientation predicts less moral consideration. | Passini & Morselli, 2016 |
| Speciesism | “the assignment of different moral worth based on species membership” (Caviola et al., 2019, p. 1011) | Higher speciesism is associated with a variety of prejudices (e.g., racism, sexism), a stronger preference for meat snacks, and donating to and investing time in human causes rather than nonhuman causes. | Caviola et al., 2019 |
| Universalism | “defining goal: understanding, appreciation, tolerance, and protection for the welfare of all people and for nature” (Schwartz, 2012, p. 7) | Moral consideration is linked with societally- and individually-held universal values. | Schwartz, 2007 |
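
The entries in Table 2 summarize correlational findings. As a rough, purely illustrative sketch of what a statement like "a stronger tendency to anthropomorphize predicts greater moral consideration" means empirically, the snippet below computes a bivariate Pearson correlation on made-up numbers; the cited studies use much larger samples and more sophisticated models.

```python
# Illustrative only: toy numbers, not data from any cited study. Shows the
# kind of bivariate association (here, anthropomorphism with moral
# inclusion) that the correlational findings in Table 2 summarize.
from statistics import correlation  # Pearson's r; requires Python 3.10+

anthropomorphism = [2.1, 3.4, 4.0, 1.8, 3.9, 2.7, 4.5, 3.1]        # 1-5 scale
moral_inclusion = [0.25, 0.50, 0.75, 0.25, 0.63, 0.38, 0.88, 0.50]  # 0-1 score

r = correlation(anthropomorphism, moral_inclusion)
print(f"Pearson r = {r:.2f}")  # positive r: more anthropomorphism, more inclusion
```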

How are artificial entities studied in psychological science?

Interest is increasing in the psychological antecedents and outcomes of human-robot interaction (HRI) and human-computer interaction (HCI). Some of this psychological research may be particularly important to consider in preparation for interacting with sentient artificial entities. We first review research on three consequential psychological correlates studied in relation to the moral consideration of artificial entities. Then, we present some of the psychological science on human-social robot interactions. Next, we examine some of the psychological science informing our understanding of the relationship between humans and AIs. We conclude this section by summarizing foundational interdisciplinary HRI research on interpersonal interactions with artificial entities.

Psychological correlates related to the moral consideration of artificial entities

Two constructs have been particularly important in the psychological study of the moral consideration of artificial entities: anthropomorphism and mind perception. A third construct, substratism, is emerging as important to moral consideration.

Anthropomorphism involves granting human-like qualities to nonhuman beings, ranging from designing physical qualities like bipedalism and two forward-facing eyes to attributing mental capacities like having intentions and feeling complex emotions. Anthropomorphism can be thought of as the opposite of dehumanization, in which human-like qualities are denied to other humans,[4] leading to moral exclusion. The humanization of robots entails making robots more human-like in their physical, behavioral, and mental attributes, and its effects are intertwined with psychological anthropomorphism.[5] Psychological research has shown that humans are less willing to sacrifice a humanized robot than a machine-like robot during a moral dilemma and are less likely to sacrifice any robot to which emotional capacity has been attributed. Similarly, research has shown that humans trust anthropomorphized self-driving cars more than non-anthropomorphized self-driving cars or non-autonomous cars.

Mind perception involves perceiving that another entity has cognitive and experiential capacities. For instance, humans recognize that other humans think and feel. Humans also recognize that many nonhuman animals, like elephants and dolphins, think and feel. Studies on perceiving the minds of artificial entities have shown that humans tend to attribute cognitive capacities to them, like problem-solving and memory, but not experiential capacities like having emotions.[6] 

An important comment on psychological terminology needs to be made here before continuing. “Affect” refers to a broad category of emotions, feelings, sentiments, and moods. Psychologists typically consider affect to vary along valence (positive, negative) and arousal/activation (high, low) dimensions. “Emotions” are brief, intense states arising from and directed toward a stimulus, like anger in response to being insulted. “Feeling” is a somewhat muddier term but tends to refer to a state that is longer-lasting than an emotion, requires subjective awareness, and can serve as a source of information about the self and the world. “Sentiments” are enduring, organized systems of emotions about a given stimulus such as long-standing hatred of another social group. “Mood” refers to a longer-lasting, diffuse state that may not have a specific cause, like being in a happy mood.[7]

In psychological science, “suffering” is not necessarily thought of as analogous to “pain”, nor is it typically conceptualized as an emotion like “distress”. Instead, the American Psychological Association’s (APA) definition emphasizes that “suffering” is the “experience” of pain or distress. We can consider “pain”, “distress”, and “suffering” in terms of traditional psychological theories of emotion. There is a clear biological basis for pain, distress, and suffering, consistent with Barrett’s theory of constructed emotion and with research showing the integration of negative affect and pain in the brain.[8] Pain, distress, and suffering qualify as moderate activation/arousal and unpleasant/negative valence on Russell’s circumplex model of core affect. The APA’s emphasis on “experience” might also suggest the presence of a cognitive appraisal of “pain” that results in “suffering”, consistent with cognitive appraisal theories of emotion and with findings that appraisals shape chronic pain experiences and coping. It is also worth noting that “suffering” may be a protracted experience resulting from ongoing situations like chronic pain or difficult living conditions. In this case, “suffering” might be classified as more akin to a feeling, sentiment, or mood than to an emotion.
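
As a rough computational illustration of the circumplex framing just described, the sketch below represents affective states as points on Russell's valence and arousal dimensions. The numeric coordinates are assumptions chosen only to match the qualitative placement in the text (pain, distress, and suffering as negatively valenced and moderately aroused), not empirically derived values.

```python
# Minimal sketch of Russell's circumplex model of core affect: each state is
# a point in a two-dimensional valence x arousal space. Coordinates are
# illustrative assumptions matching the qualitative description above,
# not empirically derived values.
from dataclasses import dataclass

@dataclass
class AffectiveState:
    name: str
    valence: float  # -1 (unpleasant) to +1 (pleasant)
    arousal: float  # -1 (deactivated) to +1 (activated)

states = [
    AffectiveState("pain", valence=-0.8, arousal=0.4),
    AffectiveState("distress", valence=-0.7, arousal=0.5),
    AffectiveState("suffering", valence=-0.8, arousal=0.3),
    AffectiveState("happy mood", valence=0.7, arousal=0.2),
]

for s in states:
    print(f"{s.name:>10}: valence={s.valence:+.1f}, arousal={s.arousal:+.1f}")
```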

The APA’s emphasis on experience is consistent with philosophical and clinical perspectives on suffering as an emergent property of painful sensation. This definition also accords with humans’ tendency to think that if a being does not “feel” or is not aware of their affective states then they cannot “suffer.”[9] For instance, one study found that meat consumption was linked to the denial of the mental states (e.g., “pain”) thought to be necessary for nonhuman animals to suffer. This type of sentience denial has been used to justify the maltreatment of numerous nonhuman animals[10] as well as humans in other social groups throughout history.[11] 

Some of our research suggests that sentient artificial entities may be subject to substrate-based moral exclusion. Substratism entails the devaluing of artificial entities based on their artificial (e.g., silicon-based), non-carbon material composition in a psychological process similar to speciesism.[12] Speciesist attitudes prioritize the moral value of humans over nonhuman animals and some nonhuman animals over other nonhuman animals. Speciesism and substratism can be thought of as prejudiced attitudes towards nonhumans. Psychological research has shown that a preference for human social hierarchy in which (supposedly) powerful human groups dominate (supposedly) weaker human groups (a personality trait psychologists refer to as social dominance orientation) underpins speciesism.[13] Both social dominance orientation and speciesism contribute to moral exclusion.[14] It seems likely that substratism may have a similar effect on the moral exclusion of artificial entities.

Social robots and humans

Note. This section highlights a small portion of the research on social robots relevant to moral consideration, coming from a psychological science perspective. For more information on the extensive literature on social robots from educational, clinical, developmental, and HRI perspectives, see the section below on interpersonal interactions and the following links to meta-analytic and review articles: Admoni and Scassellati (2017), Belpaeme et al. (2018), Broekens et al. (2009), Campa (2016), Chen et al. (2018), Costescu et al. (2014), Hancock et al. (2011), Hancock et al. (2020), Hirt et al. (2021), Hung et al. (2019), Leichtmann and Nitsch (2021), Pu et al. (2019), Rosanda and Starcic (2020), Scoglio et al. (2019), Song and Luximon (2020), Stower et al. (2021).

Perhaps when you think about robots, the first image that comes to your mind is of an industrial working arm bolting car parts together. Indeed, robots like this have been used for a long time in industrial settings like car manufacturing. However, humans are social beings who have been designing robots for healthcare, education, entertainment, and social companionship purposes. These social robots are embodied, at least partly autonomous, and designed to routinely interact with humans.[15] For example, several companion robots that are intended to help relieve anxiety, depression, and loneliness are currently available for humans to bring into their homes.

Psychologists have suggested that the increasing extent and quality of social interaction between humans and social robots necessitates more consideration of their moral status and their rights based on their social roles,[16] for instance considerations of citizenship rights. That is, we need to think more about social robots’ rights because we form bonds with them and consider them to have certain social standings.

Social psychological research has suggested that humans will relate to social robots based on the same principles witnessed in human intergroup relations. For instance, positive emotions felt towards robots, much like positive emotions felt towards a human outgroup, have been found to predict a greater willingness to interact with them.[17] This research suggests that human-robot intergroup relations may be an important area of study that could be applied to understanding how humans extend moral consideration to sentient artificial entities.

Psychologists have also shown that humans attribute more emotional capacity to social robots than to robots with an economic purpose. Humans protected social robots over economic robots from the harm of receiving painful electric shocks in one study. In this study, recognizing the emotional capacity of these social robots increased concern for their well-being. An implication of this research is that humans’ recognition of artificial sentience may be critical for their moral inclusion.

AIs and humans

Since the advent of AI, there have been numerous questions about the biases of AIs. Recently, there have been high-profile examples of AIs committing moral violations against humans, from making racist healthcare decisions to autonomous vehicle crashes resulting in human deaths to generating overly sexualized images of women. Psychological research has shown that humans will hold AIs accountable for these moral violations, including attributing to them awareness of the violation, intentionality, and blame, despite humans’ general discomfort with machines making moral decisions. The converse of this relationship,[18] that AIs could experience harm or suffering as a result of their own or others’ actions against them, has not been considered in depth in psychological science. However, one study showed that people who read about the intentional physical abuse of a complex social robot by a scientist attributed more mind and more capacity to feel pain to the robot than did people who read that the scientist performed their job satisfactorily. A similar study showed that humans who saw a facial wound on a human-like robotic avatar or on a human avatar attributed more mind and capacity to feel pain to that avatar than to unharmed robotic or human avatars. The human avatar was also evaluated as having more mind and capacity to feel pain than the robotic avatar. These studies suggest that visible, physical harms to human-like entities may increase their moral consideration to some degree. Whether these effects generalize to all harms and all AIs or are limited to physical wounds, social robots, or other human-like entities is unknown.[19]

Psychology, neuroscience, and AI

Psychology and neuroscience have contributed substantially to how we think about knowledge constructs critical to AI development such as problem-solving, learning and memory, emotion recognition, and neural networks.[20] If we accept that the field of artificial intelligence aims to create intelligent machines and programs,[21] then we also benefit from considering the connections between how psychological science explains cognition and emotion and how artificial intelligence expresses cognitive and emotional functions.

Reinforcement learning (RL), an increasingly prominent type of artificial intelligence, is closely related to theories of instrumental learning in psychology, in which human and nonhuman animals learn from a series of rewards and punishments. RL algorithms learn appropriate behaviors by optimizing specific rewards in their environments in pursuit of their goals.[22] For example, when an AI defeats a human in a game of chess, we would consider this a successful application of RL in which the AI has optimized the positive rewards in its system. What happens when the AI instead fails? The AI might face impeded goal achievement and continual punishment in the system for failing to optimize. A human or nonhuman animal in these conditions might exhibit frustration, anger, or anxiety at failing to achieve their goal, with suffering as a possible consequence. Given that the fundamental learning processes of these AIs mirror those that lead to suffering in natural entities, some have suggested that this merits giving them moral consideration.
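
To make the analogy concrete, here is a minimal, self-contained sketch of the reward-and-punishment loop described above, in the style of tabular Q-learning. It is an illustration of the general RL scheme under assumed toy parameters (a five-state corridor, +1 reward at the goal, -1 otherwise), not a model of any particular system discussed in the cited work.

```python
# Minimal tabular Q-learning sketch: an agent updates the estimated value of
# its actions from rewards (+1 for reaching the goal) and punishments (-1
# otherwise), mirroring the instrumental-learning analogy described above.
# Purely illustrative toy setup; not drawn from any cited study.
import random

N_STATES, ACTIONS = 5, ["left", "right"]   # a tiny corridor; goal at state 4
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2      # learning rate, discount, exploration

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Move along the corridor; reward +1 at the goal, -1 elsewhere."""
    nxt = min(state + 1, N_STATES - 1) if action == "right" else max(state - 1, 0)
    reward = 1.0 if nxt == N_STATES - 1 else -1.0
    return nxt, reward

for _ in range(500):                        # episodes of trial-and-error learning
    state = 0
    while state != N_STATES - 1:
        action = (random.choice(ACTIONS) if random.random() < EPSILON
                  else max(ACTIONS, key=lambda a: q[(state, a)]))
        nxt, reward = step(state, action)
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        # Nudge the value estimate toward the reward plus discounted future value.
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = nxt

# Learned policy for non-goal states: the agent should come to prefer "right".
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)})
```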

Interpersonal interactions with artificial entities

Facilitating harmonious, productive interactions between humans and artificial entities has been a multidisciplinary topic of research including some psychological studies. Below we summarize some influential studies from this domain. We seek to provide a general overview of the outcomes of interpersonal interactions that researchers have focused on or that showcase unique areas of growing scholarship. Within this survey of the literature we endeavor to showcase highly cited papers (i.e., well-known, widely regarded, generative), novel approaches, and outcomes relevant to moral consideration. Additionally, where possible, we prioritized studies in which humans directly interacted with artificial entities. The following list is by no means exhaustive of the research on humans’ interactions with artificial entities.

Table 3: Interpersonal interaction studies

| Outcome | Entity type | Effect | Study (Google Scholar “cited by” index on 4 August 2021) |
| --- | --- | --- | --- |
| Attribution of intelligence | Computer program, functional robot, human-like robot | Humans attributed more intelligence to the human-like robot following interaction in a Prisoner’s Dilemma game. | Hegel et al., 2008 (103) |
| Compliance | Nursebot robot | Humans complied more with robots displaying social cues congruent with their jobs (e.g., a serious robot for an exercise instructor). | Goetz et al., 2003 (788) |
| Trust | Virtual mechanomorphic guidance robot | Initial trust in an unknown robot decreased when the robot made a single mistake, especially in a high-risk situation. | Robinette et al., 2017 (80) |
| Willingness to interact | Mechanomorphic iRobot Create robot, Beam+ telepresence robot, human-like Nao robot, human-like Baxter robot, seal-like Paro robot, dinosaur-like Pleo robot | Following interaction with robots, humans who felt more positively about robots were more willing to interact with robots in future situations. | Smith et al., 2020 (6) |
| Perceived threat | Mechanomorphic iRobot Create robot | Humans felt more threatened by entitative (similar in appearance and action) robot groups than by diverse (different in appearance and action) robot groups. | Fraune et al., 2017 (35) |
| Negative attitudes | Human-like Robovie robot | Humans with more negative attitudes towards robots took longer to talk to a co-present robot. | Nomura et al., 2004 (144) |
| Social eye gaze | Animal-like IPRobot-PHONE | Mutual gaze (i.e., eye contact) with the robot increased positive evaluations of the robot. | Yonezawa et al., 2007 (82) |
| Collaborative performance | Human-like Robovie robot, human-like Geminoid robot | Human-robot pairs in a game performed better when the robot revealed its choice by briefly looking at it before the human had to make a congruent choice. | Mutlu et al., 2009 (195) |
| Moral standing | AIBO robot dog | Children rated a robot dog as deserving only slightly less moral standing than a live dog (e.g., children said it was not okay to hit either the robot dog or the live dog). | Melson et al., 2009 (143) |
| Language learning | Human-like Robovie robot | Children who formed a relationship with the robot over a two-week interval at school showed increased learning of a second language. | Kanda et al., 2004 (1097) |
| Cognitive learning | Animal-like Keepon robot | Interacting with an embodied robot providing personalized feedback improved adult humans’ performance on a puzzle task. | Leyzberg et al., 2012 (241) |
| Interaction preference | Embodied and virtual human-like Bandit robot | Older adults preferred interacting with the embodied robot exercise coach over the virtual robot coach. | Fasola & Matarić, 2013 (334) |
| Bullying | Mechanomorphic DustBot robot, mechanomorphic Piero robot | Service robots (e.g., trash removal) operating in an urban environment demonstration were subject to aggressive behaviors like kicking and slapping, especially by younger adults, when the robots were unsupervised by human controllers. | Salvini et al., 2010 (74) |
| Abuse | Human-like robot | Adults were more likely to deliver intense shocks to robots than to humans, even when the robot uttered statements like, “That was too painful, the shocks are hurting me.” | Bartneck & Hu, 2008 (76) |

What psychological science research would help better inform our understanding of artificial sentience?

At Sentience Institute, some of our research aims to understand how and why humans might extend moral consideration to artificial entities based on humans’ ability to take the perspective of nonhumans, on human psychological characteristics predictive of moral consideration, and on artificial entities’ features. We hope that this research will grow the conversation amongst researchers around the importance of studying the moral consideration of artificial entities.

More research is needed on the moral consideration of artificial entities, especially regarding future entities with the potential to have some degree of sentience. Although psychological science has begun to investigate the moral consideration of current artificial entities and has begun contributing to the research on social interactions between artificial entities and humans, there is very little research that focuses, either explicitly or implicitly, on topics consequential to the moral consideration of future sentient artificial entities. For instance, research could ask when, why, and how people might or might not value the sentience of artificial entities, which framings and types of interventions (e.g., social norms, values, mindsets) might best increase the moral inclusion of sentient artificial entities, and which psychological and situational factors might increase moral circle inclusion in the present and for the long-term future.[23] Another open question is whether there is a capacity limit for the moral consideration of nonhumans that will affect how nonhuman animals and sentient artificial entities are treated.

Psychological research could also ask whether experiences of time and psychological distance shape the moral inclusion of artificial entities, given that artificial sentience is tied to future artificial entities, and could study the types of psychological processes that influence which types of artificial entities will be granted moral consideration. What types of threats to humanity will increase or decrease moral consideration? Will interventions need to be different for embodied social robots who routinely interact with humans and virtual AI algorithms who are largely unseen by humans? There may be vastly more AI algorithms than social robots in the future, at risk of receiving less consideration, just as insects are more numerous than other animals but receive less consideration than mammals like chimpanzees, elephants, and dolphins.

Likewise, little is known about substratism and the psychological tendency to delegitimize entities based on their material composition. We can seek some insight from studies of social stigmas eliciting threat such as stigma associated with prosthetics.[24] We can also seek insight from studies of the (un)naturalness of cultured meat.[25] Research growth in the area of substratism could more accurately illuminate the moral exclusion of current and future artificial entities who are materially different from fully biological entities and “unnatural” in the origin of their sentience and the material composition of their minds.

Another neglected area important for advancing the moral consideration of artificial entities regards how humans think about the autonomy of social robots. One HRI study found that describing artificial entities as being “fully autonomous” led to greater support for their rights (e.g., the right to free speech).[26] Extrapolating from HRI research such as this, humans may perceive robots as more emotional than they were designed to be and grant them more moral consideration because of their emotional and autonomous behavior.[27] However, another study found that autonomous behavior increased perceived threats to human uniqueness and safety, leading to more negative attitudes towards robots and opposition to robotics research. Is the autonomous behavior of a virtual AI (e.g., an algorithm or program) as threatening as the autonomous behavior of an embodied AI? Is autonomous behavior more or less threatening if an entity is sentient? Does an entity’s expression of sentience signify greater autonomy? More studies on the interaction between autonomy and sentience are needed to disentangle the positive and negative effects that autonomy has on humans’ moral consideration of sentient artificial entities.

Finally, how should we think about entities who can take moral actions but who are not given moral consideration even when others are responsible for teaching them immoral, unfair, or biased strategies? For instance, is it acceptable to blame an AI for making a biased decision based on information and goals provided exclusively by humans? Should humans destroy the AI for succeeding if the outcome is dangerous for humans? What if the AI was able to process pain, feel distressed, and suffer or to experience joy and feelings of success?

How could we apply knowledge about the moral inclusion of artificial entities to improving the future?

Understanding the psychological science behind how people include artificial entities in their moral circles could be used to encourage moral circle expansion and foster a more morally inclusive society. Knowing more about how, when, and why people include artificial entities may help us to build better interventions promoting individuals’ moral circle expansion. For instance, social psychological interventions, ranging from brief cognitive, emotional, or social ‘nudges’ to extensive campaigns that change social norms, have been employed to reduce human prejudices against other humans, to encourage pro-environmental behavior, and to reduce speciesism.

Additionally, research-based evidence from intervention studies may aid legislators and policy makers in designing and implementing effective, tractable societal solutions that expand the moral circle to include sentient AIs without threatening or devaluing the interests of human and nonhuman animals. Some scholars have begun to consider the socio-political and legal structures that may need to be in place for society to successfully integrate sentient artificial entities, such as by modeling legal frameworks for the protection of artificial entities on existing legislation about animal abuse and nonhuman animals as sentient property. This strategy, however, positions the moral consideration of sentient AIs relative to the needs of humans, as has largely been the case with animals.

Building knowledge and working towards the moral inclusion of artificial entities may also help reduce the dehumanization of other human groups and the moral exclusion of nonhuman animals. The prosocial treatment of social robots may generalize to more prosocial treatment of humans and nonhuman animals. One study suggested that a robot workforce could prompt a sense of shared humanity that would reduce human-human intergroup prejudice. However, some research has also found that workforce automation may lead to increased human intergroup prejudice. Understanding when and why moral consideration generalizes to humans and nonhuman animals after interaction with social robots and AIs might help us to explain and promote moral circle expansion.

We feel confident in suggesting that furthering the psychological science behind how humans perceive and interact with sentient artificial entities as individuals and as members of a group will help us to build the epistemological foundation for interventions and strategies that serve to benefit future humans and sentient artificial entities.

Further Reading

Moral exclusion and injustice: An introduction

Toward a psychology of moral expansiveness

Moral circle expansion: A promising strategy to impact the far future

Moral maturation and moral conation: A capacity approach to explaining moral thought and action

Holding robots responsible: The elements of machine morality


[1] See a recent review by Anthis and Paez (2021).

[2] Paraphrasing from Laham (2009). 

[3] See Crimston et al. (2016) and Crimston et al. (2018).

[4] See Epley and Eyal (2019) for more on this dynamic.

[5] Following from Giger et al. (2019), Nijssen et al. (2020), and Nijssen et al. (2019).

[6] For example, Gray et al. (2007) and Gray and Wegner (2012).

[7] See these articles for more details on terminology: Schwarz and Clore (2007), Russell (2005), Russell (2003), Bertocci (1940).

[8] See also Buhle et al. (2013) and Kragel et al. (2018).

[9] See a commentary on this subject in this post by Kotzmann (2020).

[10] For instance, chickens are one of the most numerous birds on the planet but they are categorized as a commodity rather than an animal (Marino, 2017) and are subject to difficult lives primarily lived on factory farms.

[11] See Loughnan et al. (2014) and Haslam and Loughnan (2014) for reviews of relevant research.

[12] Paraphrasing from this post on artificial sentience.

[13] See also Dhont et al.’s (2016) studies on the Social Dominance Human-Animal Relations Model.

[14] See Caviola et al. (2019).

[15] See Darling (2012) for more on social robots.

[16] See Hildt (2019) for a review and commentary.

[17] See Smith et al. (2021).

[18] For more information on this duality see Gray et al. (2007) and Bigman et al. (2019).

[19] See Tanibe et al. (2017) for a related study showing how perspective-taking and imagining a benevolent interaction with a human-like housework robot increases perception of that robot’s mind.

[20] See this handbook for a review of the history of psychology, this seminal book on neural pathways, this paper for a review of the psychology of AI, and this historical perspective on cognitive science and AI.

[21] See also Russell & Norvig, 1995.

[22] See Tomasik (2014) for more depth.

[23] For more information on the value of the future, see this article from Tobias Baumann (2020) and this post from William MacAskill (2019).

[24] For more on product-related stigma, see this article by Schröppel et al. (2021).

[25] For more on “naturalness” see this book by Alan Levinovitz (2021).

[26] Summarized from a study by Lima et al. (2020).

[27] Paraphrasing from Darling (2012).
