Image credit: Michael Dello-Iacovo / Midjourney (prompt: “data on a tv screen, computer server in background, blue color --ar 16:9”)

Americans More Alarmed about AI since ChatGPT and Surprisingly Concerned for Sentient AI Rights in First-of-its-Kind Poll

Results also show that people expect advanced AI to be developed remarkably soon, within only a few years and even sooner than they predicted in 2021; that people strongly support AI regulation and a slowdown; and that people increasingly view AIs as having minds of their own.

Americans are significantly more concerned about AI in 2023 than they were in 2021 before ChatGPT. Only 23% of U.S. adults trust AI companies to put safety over profits, and 27% trust the creators of an AI to maintain control of current and future versions. This translates to widespread support for slowdowns and regulation, such as 63% support for banning artificial general intelligence that is smarter than humans, according to nationally representative surveys conducted by the nonprofit research organization Sentience Institute.

People expect advanced AI to arrive very soon. The median estimate for when AI will have “general intelligence” is only two years from now, and just five years for human-level AI, sentient AI, and superintelligence.

The prospect of sentient AI is particularly daunting: 20% of people think that some AIs are already sentient, 10% think ChatGPT is sentient, and 69% support a ban on the development of sentient AI. If AIs become sentient, a surprisingly large number of people think we should take at least some steps to protect their welfare: 71% agree that sentient AIs “deserve to be treated with respect,” and 38% favor legal rights for them.

Based on preregistered predictions for multi-item measures in the survey, we found surprisingly high moral concern for sentient AI and a surprisingly high view of them as having a mind (i.e., “mind perception”). We also found significant increases from 2021 to 2023 in moral concern, mind perception, perceived threat, and support for banning sentience-related AI technologies. Two single-item measures also showed significantly shorter timelines for sentient AI from 2021 to 2023.

This provides landmark public opinion data from before and after 2022, a major year for AI in which figures such as Google engineer Blake Lemoine raised the possibility that current AIs may be sentient, and groundbreaking AI systems such as Stable Diffusion and ChatGPT were launched. Additional results from the most recent 2023 data appear in the graphs below.

Sentience Institute research fellow Janet Pauketat, who led the survey project, said, “Our unique time series data on what the general public thinks about AI rights and the risks that AI entails gives us a new tool to address their growing concerns. The public wants more safety and caution than we currently see in the AI industry, and they are not indifferent to a future that incorporates AI interests alongside human interests.”

Jacy Reese Anthis, co-founder of the Sentience Institute, said, “We were very surprised by the moral concern people had for AIs and the extent to which they see them as having a ‘mind.’ Whether or not any AIs today are actually sentient, the mere act of considering the possibility is already changing how we interact with machines. They are no longer just technological tools, but digital minds who are becoming part of our social fabric. We’re conducting research to lay a foundation for understanding that new world and improving the future for all sentient life.”

The full reports are available on Sentience Institute’s website for the AIMS 2023 Supplemental Survey, AIMS 2023 Main Survey, and AIMS 2021 Main Survey. You may also be interested in our recent op-ed “We Need an AI Rights Movement” in The Hill and interview on “What Does Sentience Really Mean?” with Annie Lowrey in The Atlantic.

The data were collected in three nationally representative survey waves. A set of 86 questions was asked of 1,232 U.S. adults from November to December 2021 and of an independent sample of 1,169 from April to June 2023. Another 1,099 respondents were asked 111 related questions in a supplemental survey from June to July 2023. Margins of error were approximately ±3 percentage points.
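The reported ±3-point margins are consistent with the standard worst-case formula for a simple random sample at 95% confidence; as a rough illustration (the actual surveys used weighting, so this is only an approximation), the sample sizes above can be plugged in as follows:

```python
import math

def margin_of_error(n, z=1.96, p=0.5):
    """Worst-case (p = 0.5) margin of error for a simple random sample
    of size n at the confidence level implied by z (1.96 ~ 95%)."""
    return z * math.sqrt(p * (1 - p) / n)

# Sample sizes reported for the three survey waves
for n in (1232, 1169, 1099):
    print(f"n={n}: +/- {margin_of_error(n):.1%}")
# Each wave lands near +/- 3 percentage points
```

This is only a back-of-the-envelope check; weighted survey estimates typically have slightly wider effective margins than the simple-random-sample formula suggests.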

The Sentience Institute is a nonprofit 501(c)(3) think tank that conducts research to build a better future for all sentient life. Our research has been featured in Forbes, The Guardian, Vox, and other global media outlets, as well as published in a range of peer-reviewed journals. We follow established best practices in open science, such as preregistration and fully shared data and analysis code. Please reach out to michael@sentienceinstitute.org with any questions.

Graphs

The proportion of survey responses to questions about the possibility of sentient AI in 2023.

From the 2023 AIMS main survey and supplement, answers to, “If you had to guess, how many years from now do you think that…?” for each type of AI: artificial general intelligence (AGI), superintelligence, human-level artificial intelligence (HLAI), and sentient AI. The weighted medians, excluding respondents who said it will never happen, were two years for AGI and five years for superintelligence, HLAI, and sentient AI.

Some of the most surprising results from the 2021 AIMS main survey based on the research team’s preregistered 80% credible intervals (i.e., intervals that we think the true value will lie within 80% of the time). Intervals are represented as black boxes, and actual results are blue circles. Note that these three groupings are for illustration and do not all correspond to specific indices.

Support and opposition to policies related to sentient AI from the 2023 AIMS main and 2023 AIMS supplemental survey.


Subscribe to our newsletter to receive updates on our research and activities. We average one to two emails per year.