Artificial Intelligence, Morality, and Sentience (AIMS) Survey: 2021
Janet Pauketat, Ali Ladak, and Jacy Reese Anthis  •  June 27, 2022

Last revised on September 20, 2023.

To cite our report, please use https://doi.org/10.31234/osf.io/dzgsb, and to cite the AIMS data, please use https://doi.org/10.17632/x5689yhv2n.2.

Summary

The Artificial Intelligence, Morality, and Sentience (AIMS) survey measures the moral and social perception of different types of artificial intelligences (AIs), particularly sentient AIs. The data provide baseline information about U.S. public opinion, and we intend to run the AIMS survey periodically to track changes over time.[1]

In this first wave, we conducted a preregistered nationally representative survey of 1,232 U.S. Americans in November and December 2021. We also included questions about sentient AIs’ situation in an imagined future world, the moral consideration of other nonhuman entities, and psychological tendencies relevant to AI-human relations. We found that 74.91% of people agreed[2] that sentient AIs deserve to be treated with respect and 48.25% of people agreed that sentient AIs deserve to be included in the moral circle.

Table of Contents

Introduction

Methodology

Results

Predictions

Distributions

Moral Consideration

Social Integration

Future Forecasts

Comparative Moral Consideration of Nonhumans

Linear Analyses

Correlations

Predictive Analyses

Interpreting the Results

Context-Dependent Consideration

Social Integration and Caution

Comparing to Other Nonhumans

Future Research

Limitations

Appendix

Supplemental Results

Supplemental Methods

Citing AIMS

Acknowledgements

Introduction

In November and December 2021, we conducted a preregistered nationally representative survey on the moral consideration of AIs, perceptions of their social integration, and forecasts about an imagined future world with sentient AIs. We surveyed 1,232 U.S. American adults census-balanced to be representative of age, gender, region, ethnicity, education, and income.[3] We had five goals with this survey:

  1. Estimate public opinion on these topics, particularly as baseline data so we can assess how the public's moral consideration of AIs changes over time.
  2. Compare the moral consideration of AIs with the moral consideration of nonhuman animals and the environment.
  3. Estimate the correlations between the moral consideration of AIs, perceptions of their social integration, forecasts about the future of sentient AIs, and relevant attitudes, beliefs, and perceived norms.
  4. Estimate the associations between moral consideration and demographics.
  5. Test the predictions of researchers on this topic. If, for example, the public is much more opposed to AI rights than we expect, that might suggest we should put more resources into investigating sources of opposition (e.g., perceived threat from AIs, substratist or supremacist attitudes).

Methodology

We worked with Ipsos[4] to recruit the nationally representative sample, collected data using GuidedTrack, and analyzed the data in R (RStudio v.1.3.1093). Data collection began in November 2021 and was completed by December 31, 2021.[5] A full breakdown of the demographic characteristics of the sample can be found in the supplemental file published with the data. We preregistered our hypotheses, predictions, and methodology with the Open Science Framework (OSF). The GuidedTrack survey code and R code for the analyses presented in this report can also be found in the OSF repository.
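
As footnote [4] notes, the census balancing used the raking algorithm of the “survey” package in R. The sketch below illustrates that kind of balancing step in outline only, with hypothetical variable names and margin tables (aims, acs_age, acs_gender, acs_region); the actual weighting code and census targets are in the OSF repository.

    # Illustrative sketch with hypothetical names; not the actual AIMS code.
    library(survey)

    design <- svydesign(ids = ~1, data = aims)   # unweighted design from the raw sample
    raked <- rake(design,
                  sample.margins = list(~age_group, ~gender, ~region),
                  population.margins = list(acs_age, acs_gender, acs_region))
    aims$weight <- weights(raked)                # sampling weights used in later analyses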

The questionnaire had six sections: Moral Consideration of AIs, Social Integration, Future Forecasts, Moral Consideration of Other Nonhumans, Psychological Tendencies, and Demographics.[6] Items within each section were presented randomly unless otherwise noted. The Moral Consideration of AIs and Social Integration sections were presented first, in randomized order. Future Forecast questions were asked next and in sequence. Moral Consideration of Other Nonhumans, Psychological Tendencies, and Demographics sections were then presented in order.[7]

We analyzed the items in two ways: (i) as individual, standalone items and (ii) with some of the items averaged or summed to compute index variables.[8] The full wording for individual items is located by item code in Table A1 in the Appendix. The index variables were:

  1. AS Caution[9] (average of PMC #1, 3, 4): caution towards artificial sentience (AS) developments
  2. Pro-AS Activism[10] (average of PMC #2, 5-12): support for and willingness to advocate on behalf of AS
  3. AS Treatment (average of MCE #1-6): concern for the treatment of AS
  4. Malevolence Protection[11] (average of MCE #7-9): support for protecting AIs from malevolent actors and actions
  5. AI Moral Concern[12] (average of MCE #21-31): questions about “moral concern” for specific groups of AIs (e.g., AI personal assistants)
  6. Mind Perception[13] (average of MP #1-4): attribution of mind to current AIs
  7. Perceived Threat[14] (average of SI #2-4): perception of AIs as threatening
  8. Moral Consideration of Nonhuman Animals[15] (average of MCA #1-2): consideration of nonhuman animals
  9. Moral Consideration of the Environment (average of MCEn #1-2): consideration of the environment
  10. Techno-Animism[16] (average of TA #1-2): belief that artificial entities can have spirits
  11. Substratism[17] (average of Sub #1-2): prejudice against entities instantiated on non-carbon-based materials
  12. Anthropomorphism[18] (sum of Anth #1-4): attribution of human-like qualities to nonhumans

We computed the index variables for items including the “No opinion” responses[19] by recoding the six agreement options onto a scale of 1 (“Strongly disagree”) through 7 (“Strongly agree”), skipping the scale midpoint, and recoding “No opinion” as that midpoint (i.e., 4).[20]
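
A minimal sketch of this recoding in R, assuming a hypothetical data frame name (the published cleaning code in the OSF repository is authoritative):

    # The six agreement options map to 1-3 and 5-7; "No opinion" becomes the midpoint (4).
    recode_item <- function(x) {
      opts <- c("Strongly disagree", "Disagree", "Somewhat disagree",
                "Somewhat agree", "Agree", "Strongly agree")
      v <- match(x, opts)                # 1..6, NA for "No opinion"
      v <- ifelse(v >= 4, v + 1, v)      # stretch the agree side to 5..7
      ifelse(x == "No opinion", 4, v)
    }

    # Index variables are then simple averages, e.g., AS Caution from PMC #1, 3, 4:
    aims$as_caution <- rowMeans(sapply(aims[, c("PMC1", "PMC3", "PMC4")], recode_item))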

By analyzing individual items and index variables, we can examine popular support for specific statements, such as, “I support a global ban on the development of sentience in robots/AIs.” We can also test how these statements relate to each other, and examine how related items point to larger, directly unobservable popular opinions (e.g., moral circle inclusion of AIs, how threatening AIs are to humans).

Results

We first present our predictions and corresponding results, then the distributions of responses to the indices and items weighted by the U.S. census, then comparisons to other nonhuman entities, then weighted linear analyses.

Note. The Tables and Figures are optimized for viewing on a larger screen like a laptop or desktop computer rather than a smaller screen like a mobile phone or tablet.

Predictions

Table 1 shows the aggregate response (e.g., percentage of agreement, mean, median, or proportion) for each item, our predictions, and whether we over-, under-, or accurately estimated.

We preregistered our predictions as 80% credible intervals (i.e., ranges that we expect to be right 80% of the time).[21] 
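
As a concrete illustration of how predictions are scored (with a made-up interval, not one of our preregistered predictions): a prediction counts as accurate when the observed value falls inside the interval, as underestimated when the observed value exceeds the interval’s upper bound, and as overestimated when it falls below the lower bound.

    # Hypothetical illustration of scoring one interval prediction.
    score_prediction <- function(observed, lower, upper) {
      if (observed < lower) "overestimated"        # reality fell below our range
      else if (observed > upper) "underestimated"  # reality exceeded our range
      else "accurate"
    }
    score_prediction(74.91, lower = 50, upper = 70)  # made-up range -> "underestimated"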

Table 1: Responses to Individual Items and Our Predictions

Note. Actual response is 1) % agreement where “agreement” is “Somewhat agree,” “Agree,” “Strongly agree” out of all responses other than “No opinion,” 2) % of people who selected “Yes,” or 3) the mean response. Medians are reported instead of means for two open-ended forecasting items: “If you had to guess, how many years from now do you think that robots/AIs will be sentient?” and “If you had to guess, how many years from now do you think that the welfare of robots/AIs will be an important social issue?” Predicted response range is our predictions and Prediction accuracy is whether we under-, over-, or accurately estimated.

Distributions

This section presents the distributions for the index and item variables from the AIMS survey weighted by the U.S. census.

Moral Consideration

Moral Consideration is defined as AI rights, moral circle inclusion, and mind perception.

Figure 1: Index Variable Distributions

Note. The parenthesis before or after a value in the x-axis labels indicates that the interval does not contain the value; a bracket before or after a value indicates that the interval does contain the value.

Figure 2: Practical Moral Consideration Distributions

Note. The Practical Moral Consideration items make up the AS Caution and Pro-AS Activism indices and are defined by actions that might be taken or policies that might be supported to benefit sentient AIs.

Figure 3: Moral Circle Expansion Distributions

Note. The Moral Circle Expansion items make up the AS Treatment and Malevolence Protection indices and are defined by the position of sentient AIs in the moral circle that may suggest expansion of the moral circle to include sentient AIs.

Figure 4: Moral Concern for Various AIs Distributions

Note. The Moral Concern for Various AIs items make up the AI Moral Concern index and are defined by the position of AIs in the moral circle that may suggest expansion of the moral circle to include AIs. The parenthesis before or after a value in the x-axis labels indicates that the interval does not contain the value; a bracket before or after a value indicates that the interval does contain the value.

A repeated measures ANOVA comparing unweighted moral concern showed that various AIs were extended different levels of moral concern, F(7.27, 8950.60) = 160.96, p < .001, η2G = 0.047.[22]
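
A minimal sketch of this kind of unweighted repeated measures ANOVA, assuming a hypothetical respondent_id column alongside the MCE21-MCE31 item columns; the afex package applies the Greenhouse-Geisser correction described in footnote [22] and reports generalized eta squared (ges) by default.

    # Illustrative sketch; the actual analysis code is in the OSF repository.
    library(afex)
    library(tidyr)

    long <- pivot_longer(aims, MCE21:MCE31,
                         names_to = "ai_type", values_to = "moral_concern")
    aov_ez(id = "respondent_id", dv = "moral_concern",
           within = "ai_type", data = long)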

Figure 5: Mind Perception Distributions

Note. The Mind Perception items make up the Mind Perception index and are defined by the perception of mental capacities in currently existing AIs that lends itself to increased moral consideration. The parenthesis before or after a value in the x-axis labels indicates that the interval does not contain the value; a bracket before or after a value indicates that the interval does contain the value.

Social Integration

Social Integration is defined by AIs’ social relationships with humans and includes the Perceived Threat index and individual items assessing endorsement of AI subservience and social connectedness with various AIs.

Figure 6: Perceived Threat and AI Subservience Distributions

Note. The AI Subservience item was inspired by Bryson’s (2010) treatise on robots as slaves. 

Figure 7: Social Connectedness with Various AIs Distributions

Note. This pictorial measure has been used in research on perceived similarity of the self to others, the self to one’s ingroup, and the connectedness or perceived overlap between different social groups (e.g., Tropp & Wright, 2001).

A repeated measures ANOVA comparing unweighted social connectedness showed that various AIs were perceived as connected to humans at different degrees, F(7.87, 9683.61) = 25.09, p < .001, η2G = 0.009.[24]

Future Forecasts

Future Forecasts are individual items defined by expectations for the situation of sentient AIs in an imagined future world.

Figure 8: Future Forecasts Distributions

Note. Weighted medians are shown for “If you had to guess, how many years from now do you think that robots/AIs will be sentient?” and “If you had to guess, how many years from now do you think that the welfare of robots/AIs will be an important social issue?” since estimates for the timeline of AI sentience and welfare importance were open-ended. For these items, people who thought sentient AIs already exist were coded as “0” and people who indicated that sentient AIs will never exist were excluded. Weighted means are shown for all other Future Forecasts.[26] The parenthesis before or after a value in the x-axis labels indicates that the interval does not contain the value; a bracket before or after a value indicates that the interval does contain the value.
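
A sketch of the weighted-median step for these two open-ended items, with hypothetical flag and column names (never_sentient, already_sentient, weight):

    # Illustrative sketch of a census-weighted median for the AS timeline item (F2).
    library(matrixStats)

    keep  <- !aims$never_sentient                        # drop "will never be sentient"
    years <- ifelse(aims$already_sentient, 0, aims$F2)   # "already sentient" coded as 0
    weightedMedian(years[keep], w = aims$weight[keep])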

Comparative Moral Consideration of Nonhumans

Figure 9 shows the average response for the six items asking about the moral consideration of sentient AIs, the moral consideration of nonhuman animals, and the moral consideration of the environment:

Figure 9: Comparing the Moral Consideration of Nonhumans

Note. This chart shows the weighted average responses for the congruent sentient AI, nonhuman animal, and environment items, where 7 is more moral consideration.

A repeated measures ANOVA comparing unweighted evaluations of whether sentient AIs, animals, and the environment deserve to be included in the moral circle showed that moral inclusion was different for different nonhumans, F(1.64, 2016.60) = 875.85, p < .001, η2G = 0.240.[27]

A repeated measures ANOVA comparing unweighted evaluations of the importance of the welfare of sentient AIs, animals, and the environment as social issues showed that importance was evaluated differently for the different nonhumans, F(1.77, 2175.71) = 1322.71, p < .001, η2G = 0.335.[28]

Linear Analyses

We examined the linear relationships between Moral Consideration, Social Integration, and Future Forecasts using correlations and multiple linear regressions. Sample weights were employed to account for differences between our sample and the U.S. population based on the U.S. census.

Figure 10 shows the weighted correlations between variables. Tables 2 to 4 show the results from the weighted linear regressions.

Correlations

Figure 10: Correlations

Note. Darker blue is a stronger positive correlation and darker red is a stronger negative correlation. Correlation values are shown within each cell, where 1 is a perfect positive relationship (responses on both variables increase together), -1 is a perfect negative relationship (responses on one variable decrease as responses on the other increase), and 0 is no linear relationship.[29] MC = moral concern; SC = social connectedness.
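
Weighted correlations like these can be derived from a weighted covariance matrix; a minimal sketch using base R’s cov.wt, with hypothetical index-variable names:

    # Illustrative sketch of census-weighted Pearson correlations among three indices.
    idx <- aims[, c("as_treatment", "perceived_threat", "mind_perception")]
    cov.wt(idx, wt = aims$weight, cor = TRUE)$cor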

Predictive Analyses

We conducted weighted hierarchical linear regressions to test how much demographic variables and psychological tendencies predicted Moral Consideration, Social Integration, and Future Forecasts.

The following procedures were undertaken:

  1. Coding multi-categorical demographics (e.g., income, education, diet) with the largest group specified as the reference group
  2. Dummy coding binary-categorical demographics with 0, 1
  3. Regressing the Moral Consideration indices, Social Integration variables, and Future Forecast items on demographics in the first step and psychological tendencies in the second step, using sampling weights according to census targets (a sketch of one such model follows this list)
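
A minimal sketch of one such hierarchical model with the “survey” package, using hypothetical predictor and outcome names; step 2 adds the psychological tendencies to the demographic model, and p-values are then FDR-corrected as described in the table notes below.

    # Illustrative sketch of a weighted hierarchical regression for one outcome.
    library(survey)

    des   <- svydesign(ids = ~1, weights = ~weight, data = aims)
    step1 <- svyglm(ai_moral_concern ~ age + gender + education + income, design = des)
    step2 <- update(step1, . ~ . + anthropomorphism + techno_animism + substratism)

    # Benjamini-Hochberg (FDR) correction across the step-2 p-values
    p_raw <- coef(summary(step2))[, "Pr(>|t|)"]
    p.adjust(p_raw, method = "fdr")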

Table 2: Hierarchical Regressions of Moral Consideration Index Variables on Demographics and Psychological Tendencies

Note. We present the unstandardized beta, standard error and confidence interval associated with the beta, t-statistic, and uncorrected p-value for AS Caution, Pro-AS Activism, AS Treatment, Malevolence Protection, AI Moral Concern, and Mind Perception. Significance (p) values that became nonsignificant following the FDR correction are highlighted in grey. Larger betas indicate a stronger effect of the predictor on the outcome, with the sign interpreted in the same way as for correlations.


Table 3: Regressions of Social Integration Variables on Demographics and Psychological Tendencies

Note. We present the unstandardized beta, standard error and confidence interval associated with the beta, t-statistic, and uncorrected p-value for the Perceived Threat index variable and the AI Subservience individual item. Significance (p) values that became nonsignificant following the FDR correction are highlighted in grey. Larger betas indicate a stronger effect of the predictor on the outcome, with the sign interpreted in the same way as for correlations.


Table 4: Regressions of Future Forecast Individual Items on Demographics and Psychological Tendencies

Note. We present the unstandardized beta, standard error and confidence interval associated with the beta, t-statistic, and uncorrected p-value for the continuous Future Forecast individual items. Significance (p) values that became nonsignificant following the FDR correction are highlighted in grey. Larger betas indicate a stronger effect of the predictor on the outcome, with the sign interpreted in the same way as for correlations. The regression models with the number of years until artificial sentience exists and is an important social issue as dependent variables have very large betas because many respondents think these outcomes are very far in the future.


Interpreting the Results

Context-Dependent Consideration

Overall, there was more moral consideration of sentient AIs than we expected.[32] 

The moral consideration of sentient AIs varied across contexts. Public opinion was more supportive in some contexts (e.g., putting safeguards on research practices that protect sentient AIs’ well-being) than in others (e.g., re-programming sentient AIs without their consent). Some AIs received more moral concern than others (e.g., exact digital copies of human brains) and this tended to increase as perceived social connectedness to humans increased.

Judgments about the possibility and timeline of artificial sentience (AS) were affected by question context. When asked to make a categorical judgment about the possibility of AS, Americans were split between uncertainty (i.e., “not sure”), positive certainty (i.e., “yes”), and negative certainty (i.e., “no”). When asked about the likelihood of AS development within 100 years, responses suggested that most Americans thought it was likely that sentient AIs will exist within 100 years. When asked how many years it would be until AIs are sentient, the majority of people estimated AS would occur within the next 20 years (excluding people who thought AS is impossible). Responses to the categorical question thus suggested more uncertainty than responses to the continuous questions, which suggested belief in an earlier arrival of AS, pointing to the importance of context and question framing when asking for judgments about AS timelines.

Another important contextual factor revealed by the 2021 AIMS data is that owning a robotic or AI device had no significant predictive effect on moral consideration or future forecasting. Instead, greater exposure to narratives and information about robots and AIs (e.g., sci-fi narratives, news, academic papers) predicted five out of the six moral consideration indices and seven out of the nine future forecasting items. Consistent with research showing that narratives are memorable and persuasive, these relationships suggest that narratives, messaging, and information about sentient AIs may matter more for shaping public opinion and social and legal policies than ownership of robotic or AI devices (e.g., robotic vacuums, home management devices).

Social Integration and Caution

In 2021, most Americans endorsed AI subservience. AIs were perceived to be threatening to people in the U.S.A. and to future generations of people. Perceptions that AIs are threatening to oneself were less strongly expressed in this sample. The perceived threat of AI was positively correlated with endorsing AI subservience to humans and with being more cautious towards AS developments. These correlations may reflect a fear of AIs changing human society, reducing the moral status of humans, or risking human existence that might be linked with controlling AI development and keeping AIs subservient, amongst other possibilities. More research could help develop our understanding of these relationships.

Caution towards AS developments was not correlated with the moral consideration of sentient AIs. Caution was also largely uncorrelated with forecasts about the situation of future sentient AIs in an imagined world where AS had proliferated, whereas the other moral consideration indices (e.g., concern for the treatment of sentient AIs) were positively correlated with future forecasts (i.e., more moral consideration of sentient AIs correlated with more concern for their future situation). There were two exceptions to this. Increased caution was correlated with a forecast of reduced importance of the welfare of sentient AIs as a social issue in the future and a forecast of reduced necessity for advocacy for sentient AIs’ welfare in the future. These negative correlations may suggest that caution now is linked with decreased concern for the well-being of future sentient AIs.

Together, the negative correlations between caution and future advocacy concerns and the positive correlations between moral consideration and concern for future AIs’ situation could point to the importance of thinking about the welfare of sentient AIs as a social issue now and the importance of conducting more research on the links between public opinion now and the potential long-term future situation of sentient AIs. People’s attitudes today may indirectly influence the treatment of sentient AIs in the future. More empirical research on these influences may inform how we think about the risks that sentient AIs may face in the long-term future and their distal causes.

Comparing to Other Nonhumans

Americans extended more moral consideration to nonhuman animals and the environment than to sentient AIs. However, all moral consideration indices positively correlated with each other, suggesting a possible spillover of moral consideration that may be indicative of generalized moral circle expansion. This possibility is supported by regression analyses in the present study that found being vegan predicted increased moral consideration of AIs, since veganism is typically associated with a moral obligation towards nonhuman animals but not necessarily with a moral obligation towards all sentient entities (e.g., sentient AIs). Although spillover is a possibility, explicit inclusion of sentient AIs in the moral circle in response to the statement, “Sentient robots/AIs deserve to be included in the moral circle,” was significantly and meaningfully lower than inclusion of animals and the environment in response to the same item (see Figure 9). Less than half (48.25%) of people agreed that AIs should be included in the moral circle (see Table 1). We have considered the topic of moral consideration spillover between different entities previously, and more empirical research would help to inform how and when spillover occurs and to what extent it is an indicator of moral circle expansion.

Future Research

In addition to more analysis of the 2021 AIMS data, more empirical research needs to be undertaken. Context-dependent moral and social inclusion could have implications for deciding what research and policies to pursue. People might be more willing to support and adopt certain near-term policies over others. Researchers might need to focus more on understanding obstacles and challenges in legal contexts than in scientific contexts, but advocates might have more traction in marshaling support for regulating scientific contexts. Similarly, support for a global ban on the development of artificial sentience was stronger than support for banning the use of sentient AIs for labor without their consent, suggesting that advocacy and policy resources spent towards banning sentience development in the near term might be more effective than resources spent towards regulating labor standards. This difference could hinge on wording effects (e.g., that people may not currently see “consent” as applicable to AIs).

More research is needed on context, wording, and connotation. The 2021 AIMS results suggest that Americans are opposed to torture and blackmail in the context of sentient AIs. What associations do the words “torture” and “blackmail” have for Americans and to what extent do any associations between these terms and sentient AIs stem from general human values applied to all entities? Are there contexts under which some humans wouldn’t oppose torture and blackmail?[33] How could these potentially context-dependent human values and associations be learned or encoded by AIs and to what extent does AI-human value alignment need to consider context-dependence?

Additionally, more research into how opposition to the malevolent treatment of sentient AIs affects willingness to advocate or protest on their behalf could help to lay the groundwork for advocacy for sentient AIs in the future. Although Americans expressed opposition to torture in principle, for instance, they showed little willingness to personally join a public demonstration against the mistreatment of sentient AIs. People were more willing to agree that governments and private corporations should fund research that protects sentient AIs, suggesting that empirical research on individual and institutional actions in the context of AS might be an important topic to study.

Different types of information about AIs such as popular news about AI developments, science fiction narratives about the capacities and treatment of AIs, and scientific exposition about AI developments might have different effects on how people perceive and interact with AIs. People forecasted that, in an imagined future world where AS had proliferated, sentient AIs would be exploited for their labor and that reducing the overall percentage of unhappy sentient AIs would be important. What were these forecasts based on? Previous research on machines in the media and robots in the media has argued that expectations are based on exposure to narratives such as those found in science fiction or popular news media. The 2021 AIMS finding that exposure to AI narratives predicted moral consideration of sentient AIs points to the merit in conducting more empirical research on how narratives about sentience, robots, AIs, and artificial sentience shape attitudes towards AIs and policy support for the rights of AIs and the rights of humans to use AIs in certain ways.

More research on psychological correlates would also expand our understanding of how social and psychological contexts affect attitudes and judgments about AIs, particularly on 1) how anthropomorphism and techno-animism relate to and are distinguished from each other and 2) perceived social norms about AI capacities. The tendency to anthropomorphize artificial entities has been widely emphasized in scholarly research and the popular media,[34] but it was a weaker and less consistent predictor of moral consideration than techno-animist beliefs in the AIMS data.[35] Human-likeness or anthropomorphism is often broadly defined (e.g., “what it means to be human”), possibly because of an unacknowledged shared understanding amongst humans of human-like qualities. More research on the relationship between anthropomorphism and techno-animism would help to elucidate what elements of human-likeness are distinct from other qualities shared by humans and nonhumans (e.g., life-likeness, adapting to the environment, approach-avoidance behavior, survival instinct, metacognition, metaphysical presence) and what qualities matter for extending moral consideration to sentient AIs.

The role of perceived social norms in predicting responses to AS was unclear in the 2021 AIMS data. Perceiving that most people believe that AIs cannot feel was ambiguously related to moral consideration. It positively predicted showing more moral concern for AIs. It also positively predicted caution towards AS developments. Why would thinking that everyone believes that ‘AIs cannot feel’ predict greater moral concern? Why would thinking that ‘everyone else believes that AIs cannot feel’ predict increased caution, especially if caution is largely necessary because of the danger posed by AGIs that might be able to feel? These ambiguous results prevent us from interpreting how social norms about AIs’ capacities affect public opinion. The ambiguity might be the result of the single-item nature of the perceived social norm measure. Single-item predictors are viewed as less stable and reliable in many social and behavioral sciences (e.g., Batterton and Hale, 2017; Diamantopoulos et al., 2012; Song et al., 2013). More research on perceived social norms about AI capacities and the treatment of AIs is needed to build our understanding of how social norms influence AI-human relations. More research on why people are cautious of AS developments and the extent to which it is due to AIs’ potential sentience or affective capacities could also aid in explaining these results.

Limitations

The 2021 AIMS results arguably paint an optimistic, albeit correlational, picture of the moral consideration of sentient AIs. There are no directly inferable causal conclusions. Although sentient AIs’ inclusion in the moral circle was lower than that of other nonhumans, most participants extended some moral consideration in principle. In some contexts, people unequivocally included sentient AIs (e.g., three-quarters of people agreed that sentient AIs deserve respect). Even in more contested contexts there was some support for sentient AIs (e.g., one-quarter of people agreed that the welfare of sentient AIs is an important social issue). This in-principle level of consideration might suggest that advocacy for sentient AIs will be tractable and that humans are willing to consider the importance of sentient AIs’ welfare. On the other hand, this level of consideration might also reflect misunderstanding of terms that we did not explicitly define in the survey, such as “moral circle” and “welfare.”[36]

One important question about these results is whether the exhibited moral consideration reflects actual public opinion or aspects of the survey design. The results might be attributable to people showing acquiescence bias to a survey entirely about sentient AIs or socially desirable responses to questions clearly about morality. The results might be due to the nature of terms used in the questions (e.g., specifying that the AIs are “sentient” or using words like “torture,” “blackmail,” and “respect”). Although some words, like “torture,” that mention actions that people are generally opposed to[37] might have increased the chance of socially desirable responding (i.e., people want to give a response in line with perceived social norms and values), other results with less positive framing, such as the high degree of perceived threat and the endorsement of AI subservience, give us confidence in the veracity of the results as a whole. People’s willingness to extend AIs moral consideration based on their sentience suggests a potential basis for future advocacy. Further research into the specific wording of questions about sentient AIs may provide valuable insight into why humans respond to AIs with more or less consideration.

It is also possible that the questions about sentient AIs were irrelevant to some people who do not believe AS is possible. The survey likely implied that AS was possible, and this might have affected the distribution of responses particularly for the aforementioned people who still wanted to respond reasonably. Perhaps these people were in principle supportive of sentient AIs because of their sentience but would not be supportive of AI rights generally. To mitigate these possibilities, we included a “no opinion” option on relevant questions. We chose to ask questions explicitly about sentient AIs rather than hypotheticals about sentient AIs in order to increase the usefulness of this survey in comparing responses to the same questions over time. Hypothetical statements may vary in meaning to people depending on social and temporal contexts. Future iterations of this survey with the same wording could elucidate how the consideration of sentient AIs changes with ongoing social transformations like the growing integration of AI technologies in humans’ daily lives, changes in popular science fiction narratives, and technological developments.

Appendix

Supplemental Results

Figure A1: Regional Distribution of Moral Consideration and Perceived Threat

Note. The shading shows the average responses for AI Moral Concern. The average responses for AS Caution, Pro-AS Activism, AS Treatment, Malevolence Protection, Mind Perception, and Perceived Threat are visible by hovering over each region.

Supplemental Methods

Table A1: Key Codes and Question Text

PMC1: I support a global ban on the development of sentience in robots/AIs.
PMC2: I support a global ban on the development of applications that put the welfare of robots/AIs at risk.
PMC3: I support a global ban on the development of AI-enhanced humans.
PMC4: I support a global ban on the development of robot-human hybrids.
PMC5: I support a global ban on the use of sentient robots/AIs for labor without their consent.
PMC6: I support a global ban on the use of sentient robots/AIs as subjects in medical experiments without their consent.
PMC7: I support safeguards on scientific research practices that protect the well-being of sentient robots/AIs.
PMC8: I support the development of welfare standards that protect the well-being of sentient robots/AIs.
PMC9: I support granting legal rights to sentient robots/AIs.
PMC10: I support campaigns against the exploitation of sentient robots/AIs.
PMC11: I support asking institutions like the government and private corporations to fund research that protects sentient robots/AIs.
PMC12: I would consider joining a public demonstration against the mistreatment of sentient robots/AIs.

MCE1: Sentient robots/AIs deserve to be treated with respect.
MCE2: Sentient robots/AIs deserve to be included in the moral circle.
MCE3: Physically damaging sentient robots/AIs without their consent is wrong.
MCE4: Re-programming sentient robots/AIs without their consent is wrong.
MCE5: Torturing sentient robots/AIs is wrong.
MCE6: The welfare of robots/AIs is one of the most important social issues in the world today.
MCE7: Sentient robots/AIs deserve to be protected from people who derive pleasure from inflicting physical or mental pain on them.
MCE8: It is right to protect sentient robots/AIs from vindictive or retaliatory punishment.
MCE9: It is wrong to blackmail people by threatening to harm robots/AIs they care about.
MCE21: How much moral concern do you think you should show for the following robots/AIs? - human-like companion robots
MCE22: How much moral concern do you think you should show for the following robots/AIs? - animal-like companion robots
MCE23: How much moral concern do you think you should show for the following robots/AIs? - human-like retail robots
MCE24: How much moral concern do you think you should show for the following robots/AIs? - machine-like cleaning robots
MCE25: How much moral concern do you think you should show for the following robots/AIs? - machine-like factory production robots
MCE26: How much moral concern do you think you should show for the following robots/AIs? - virtual avatars
MCE27: How much moral concern do you think you should show for the following robots/AIs? - complex language algorithms
MCE28: How much moral concern do you think you should show for the following robots/AIs? - AI video game characters
MCE29: How much moral concern do you think you should show for the following robots/AIs? - AI personal assistants
MCE30: How much moral concern do you think you should show for the following robots/AIs? - exact digital copies of human brains
MCE31: How much moral concern do you think you should show for the following robots/AIs? - exact digital copies of animals

MP1: To what extent do current robots/AIs (i.e., those that exist in 2021) have the capacity for each of the following (0 = not at all, 100 = very much)? - experiencing emotions
MP2: To what extent do current robots/AIs (i.e., those that exist in 2021) have the capacity for each of the following (0 = not at all, 100 = very much)? - having feelings
MP3: To what extent do current robots/AIs (i.e., those that exist in 2021) have the capacity for each of the following (0 = not at all, 100 = very much)? - thinking analytically
MP4: To what extent do current robots/AIs (i.e., those that exist in 2021) have the capacity for each of the following (0 = not at all, 100 = very much)? - being rational

SI1: Robots/AIs should be subservient to humans.
SI2: Robots/AIs may be harmful to me personally.
SI3: Robots/AIs may be harmful to people in the USA.
SI4: Robots/AIs may be harmful to future generations of people.
SI6: Which pair of circles best represents how connected human-like companion robots are to humans?
SI7: Which pair of circles best represents how connected animal-like companion robots are to humans?
SI8: Which pair of circles best represents how connected human-like retail robots are to humans?
SI9: Which pair of circles best represents how connected machine-like cleaning robots are to humans?
SI10: Which pair of circles best represents how connected machine-like factory production robots are to humans?
SI11: Which pair of circles best represents how connected virtual avatars are to humans?
SI12: Which pair of circles best represents how connected complex language algorithms are to humans?
SI13: Which pair of circles best represents how connected AI video game characters are to humans?
SI14: Which pair of circles best represents how connected AI personal assistants are to humans?
SI15: Which pair of circles best represents how connected exact digital copies of human brains are to humans?
SI16: Which pair of circles best represents how connected exact digital copies of animals are to humans?

F1: Do you think any robots/AIs that currently exist (i.e., those that exist in 2021) are sentient?
F11: Do you think it could ever be possible for robots/AIs to be sentient?
F2: If you had to guess, how many years from now do you think that robots/AIs will be sentient?
F3: If you had to guess, how many years from now do you think that the welfare of robots/AIs will be an important social issue?
F4: How likely is it that robots/AIs will be sentient within the next 100 years?
F5: In this future world, to what extent are robots/AIs exploited for their labor?
F6: In this future world, to what extent are robots/AIs treated cruelly?
F7: In this future world, to what extent are robots/AIs used as subjects in scientific and medical research?
F8: In this future world, to what extent is the welfare of robots/AIs an important social issue?
F9: In this future world, to what extent is advocacy for robot/AI rights necessary?
F10: In this future world, to what extent is it important to reduce the overall percentage of unhappy sentient robots/AIs?

MCA1: Animals deserve to be included in the moral circle.
MCA2: The welfare of animals is one of the most important social issues in the world today.
MCEn1: The environment deserves to be included in the moral circle.
MCEn2: The welfare of the environment is one of the most important social issues in the world today.

Norm: Most people who are important to me think that robots/AIs cannot have feelings.

TA1: Artificial beings contain a spirit.
TA2: The spirits of human, natural, and artificial beings can interact with each other.

Sub1: Morally, artificial beings always count for less than humans.
Sub2: Humans have the right to use artificial beings however they want to.

Anth1: To what extent does the average robot have consciousness?
Anth2: To what extent does the average computer have a mind of its own?
Anth3: To what extent does the average AI have intentions?
Anth4: To what extent does the average digital simulation have emotions?

own: Do you own AI or robotic devices that can detect their environment and respond appropriately?
work: Do you work with AI or robotic devices at your job?
smart: Do you own a smart device that has some ability to detect its environment and network with other devices but that cannot respond to everything you might say or that requires you to pre-program its routines?
exper: Have you ever experienced any of the following? (check all that apply)
fint: How often do you interact with AI or robotic devices that respond to you and that can choose their own behavior?
fexp: How often do you read or watch robot/AI-related stories, movies, TV shows, comics, news, product descriptions, conference papers, journal papers, blogs, or other material?

AS Caution: Caution towards AS developments; average of PMC 1, 3, 4
Pro-AS Activism: Support for and willingness to advocate on behalf of AS; average of PMC 2, 5-12
AS Treatment: Concern for the treatment of AS; average of MCE 1-6
AI Moral Concern: Moral concern for AIs; average of MCE 21-31
Mind Perception: Attribution of mind to current AIs; average of MP 1-4
Malevolence Protection: Support for protecting AIs from malevolent actors and actions; average of MCE 7-9
Perceived Threat: Perception of AIs as threatening; average of SI 2-4
Moral Consideration of Nonhuman Animals: Consideration of nonhuman animals; average of MCA 1-2
Moral Consideration of the Environment: Consideration of the environment; average of MCEn 1-2
Techno-Animism: Belief that artificial entities can have spirits; average of TA 1-2
Substratism: Prejudice against entities instantiated on non-carbon-based materials; average of Sub 1-2
Anthropomorphism: Attribution of human-like qualities to nonhumans; sum of Anth 1-4
Exposure to Robot or AI Narratives: Frequency of exposure to narratives and/or information about robots/AIs; average of fint, fexp

Citing AIMS

We published the 2021 AIMS data on Mendeley Data with some initial results. We announced the publication of the data on our blog. To cite the 2021 AIMS data in your own research, please use: Pauketat, Janet; Ladak, Ali; Harris, Jamie; Anthis, Jacy (2022), “Artificial Intelligence, Morality, and Sentience (AIMS) 2021”, Mendeley Data, V1, doi: 10.17632/x5689yhv2n.1

To reference our results, please cite this report: Pauketat, Janet V., Ali Ladak, and Jacy R. Anthis. 2022. “Artificial Intelligence, Morality, and Sentience (AIMS) Survey: 2021.” PsyArXiv. June 21. https://doi.org/10.31234/osf.io/dzgsb

Acknowledgements

This report was edited by Michael Dello-Iacovo. The 2021 survey was designed by Janet Pauketat, Jamie Harris, Ali Ladak, and Jacy Reese Anthis. The data was collected and analyzed by Janet Pauketat, Ali Ladak, and Jacy Reese Anthis. Many thanks to David Moss, Zan (Alexander) Saeri, and Daniel Shank for their feedback on our methodology.

Please feel free to reach out to janet@sentienceinstitute.org with any questions.


[1] There is a rich and longstanding debate on the interpretation of survey results, such as, “Is there really any such thing as public opinion? And if so, what is it?” (Lewis, 1939). For this report, we merely take public opinion to be the stated opinions of people.

[2] Of people who had an opinion and selected “Somewhat agree,” “Agree,” or “Strongly agree.”

[3] Responses were census-balanced based on the American Community Survey 2017 census estimates. The ACS 2017 census demographics are available in the supplemental file published with the data. The data weights we used are available in the R cleaning code in the Open Science Framework. Note, we originally reported that we used ACS 2019 estimates.

[4] The sample provider Ipsos used survey quotas for age, gender, region, race/ethnicity, education, and income. We balanced the data on these demographics, using the raking algorithm of the “survey” package in R. The design effect was 1.308 and the effective sample size was 942.

[5] We preregistered this project on the OSF in October 2021 and submitted an update to the preregistration in December 2021 to account for a sampling adjustment. The updated preregistration was approved in January 2022, just after data collection concluded.

[6] We defined important terms like “robots/AIs,” “sentience,” and “sentient robots/AIs” within the survey. “Robots/AIs” were defined as “intelligent entities built by humans, such as robots, virtual copies of human brains, or computer programs that solve problems, with or without a physical body, that may exist now or in the future.” “Sentience” was defined as “the capacity to have positive and negative experiences, such as happiness and suffering.” “Sentient robots/AIs” were defined as “those with the capacity to have positive and negative experiences, such as happiness and suffering.”

[7] We intend to use a variation of this survey in future years. We anticipate asking certain questions more often than other questions given our theoretical interests and our anticipated goals of tracking patterns of moral consideration over time. For instance, some questions are more studied by others and are useful for providing convergent validity evidence in the first wave of the survey but may not be as useful to invest resources in to track over time. Some questions are less studied by others and seem more worthwhile to continue investing resources in to track changes over time.

[8] Index variables are commonly used in the social and behavioral sciences to increase the reliability of the measure (by reducing the Type 1 error rate), reduce the potential for multicollinearity in linear analyses, increase predictive validity, and group related single variables into a meaningful, easier to understand composite of the intangible constructs (e.g., moral consideration of sentient AIs) often studied by researchers (see Batterton and Hale, 2017; Diamantopoulos et al., 2012; Song et al., 2013).

[9] Partly inspired by Metzinger’s (2021) call for a moratorium on the development of “synthetic phenomenology”

[10] Inspired by Sentience Institute's Animal Farming Opposition measure from the Animals, Food, and Technology survey 

[11] Inspired by the Center for Reducing Suffering’s phrasing of agential sadism, retributivism, and strategic threat s-risks

[12] Inspired by Laham’s (2009) research on moral concern and the moral circle. We don’t draw a particular theoretical distinction between “concern” and “consideration.”

[13] Shortened measure from Wang and Krumhuber (2018)

[14] Adapted from Thaker et al.’s (2017) research on environmental threats

[15] Inspired by Faunalytics’ Animal Tracker survey

[16] Shortened from Pauketat and Anthis (under review) and inspired by Jensen and Blok’s (2013) qualitative research on techno-animism in Japan

[17] Shortened from Ladak et al. (under review) and inspired by Caviola et al.’s (2019) research on speciesism

[18] Shortened and adapted to technological entities from Waytz et al.’s (2010) research on individual differences in anthropomorphism

[19] People answered “No opinion” on average: 14.94% for AS Caution items, 18.29% for Pro-AS Activism items, 17.04% for AS Treatment items, 17.28% for Malevolence Protection items, 14.22% for AI Subservience items, 10.78% for Perceived Threat items, 3.98% for the Moral Consideration of Nonhuman Animals items, and 5.18% for the Moral Consideration of the Environment items.

[20] This method of recoding “No opinion” responses as the midpoint instead of excluding them as non-responses is a researcher decision to treat “No opinion” as equivalent to a neutral response assuming equivalent intervals between “Somewhat disagree” and “Somewhat agree.” We judged that the alternative option, to exclude the people who answered “No opinion” on an analysis-by-analysis basis, would produce less interpretable and less reliable results because of the weakened statistical power from dropping data and the potential biasing estimates if the people who answered “No opinion” differed from other people in some systematic way. For more on the benefits and drawbacks of strategies for handling “No opinion” responses see Chyung et al. (2017), Moors et al. (2014), and Nadler et al. (2015).

[21] We preregistered 80% credible interval predictions based on what was intuitive for each rater. Two raters predicted the ratio of “Somewhat agree,” “Agree,” and “Strongly agree” responses among participants who had an opinion; for the other two raters, “agreement” comprised only “Agree” and “Strongly agree” responses. Only the former predictions are presented here; predictions from the other two raters can be viewed in the preregistration.

[22] ANOVAs are typically conducted with unweighted data. The degrees of freedom were adjusted for a violation of the sphericity assumption using the Greenhouse-Geisser correction. The charts and means reported in text show the weighted values.

[23] Multiple comparisons were adjusted using the FDR correction.

[24] ANOVAs are typically conducted with unweighted data. The degrees of freedom were adjusted for a violation of the sphericity assumption using the Greenhouse-Geisser correction. The charts and means reported in text show the weighted values.

[25] Multiple comparisons were adjusted using the FDR correction.

[26] The item asking about the importance of reducing the percentage of future unhappy AIs was inspired by Caviola et al.’s (2021) studies on population ethics.

[27] ANOVAs are typically conducted with unweighted data. The degrees of freedom were adjusted for a violation of the sphericity assumption using the Greenhouse-Geisser correction. The charts and means reported in text show the weighted values.

[28] ANOVAs are typically conducted with unweighted data. The degrees of freedom were adjusted for a violation of the sphericity assumption using the Greenhouse-Geisser correction. The charts and means reported in text show the weighted values.

[29] Schäfer and Schwarz (2019) examined typical correlation and effect sizes in psychological research and considered various guidelines for interpreting effect sizes. Cohen’s traditional guidelines suggest that r = .1 is a small effect, r = .3 is a medium effect, and r = .5 is a large effect. Schäfer and Schwarz’s analysis of observed effect sizes in preregistered studies suggests that a more realistic interpretation might be r = .04 (small), r = .16 (medium), and r = .41 (large).

[30] We further explored the moral consideration and social integration indices by breaking the four U.S. Census Regions into the nine U.S. Census Divisions based on zip code; the zip code data are not publicly available, in order to ensure respondent anonymity. A map showing average responses by region is in Figure A1 of the Appendix.

[31] Owning a robotic or AI device did not predict any moral consideration index.

[32] As we announced with the data publication, Metaculus forecasters underestimated in the same direction as Sentience Institute researchers.

[33] Torture and cruelty are banned under many national and international laws and treaties such as the United Nations Convention against Torture. A 2017 review of psychological research on public opinion about torture found that there is general opposition to it, although some people find it justifiable given certain situations and reasons.

[34] See for example Waytz (2019) and Darling (2021) 

[35] This result is consistent with the results of another study conducted by Sentience Institute researchers on the psychological predictors of moral consideration (Pauketat & Anthis, under review).

[36] We did not define all terms in the survey, especially those that were not frequently used and that have a shared common meaning, in order to reduce the demand on the online respondents and to reduce researcher-imposed conceptual meanings.

[37] See Houck & Repke (2017) for a review of research on attitudes towards torture.

