Research Agenda
Photo: Calves in a sale yard (Jo-Anne McArthur / We Animals)

Last substantive update: January 1, 2023.

Table of contents

Introduction

In-progress projects

AI alignment and evolutionary debunking paper

AI alignment and the ethical treatment of digital minds paper

Animals, Food, and Technology (AFT) survey

Artificial Intelligence, Morality, and Sentience (AIMS) survey

Autonomy and sentience of AIs experiments

Cautious longtermism book chapter

Dancing qualia and theories of consciousness paper

Digital minds cause profile

Features of artificial entities and moral consideration experiment

Follow-up AFT and AIMS experiment(s)

Individuation of animals experiment

Moral spillover blog post and experiment(s)

Perspective-taking experiment

Simulations and catastrophic risks report

Substratism literature review and experiment(s)

Top priority projects

Construal level theory, moral consideration, and threat experiment

Individuation of AIs experiment

Moral messaging of cultured meat experiment

Perception of emotions in AI experiment

Possible effects of future technologies and social transformations on human values

Possible medium-term futures of AI sentience and consciousness

Possible value reflection AI processes

Uncanny valley effect and the moral consideration of AIs experiment

Other projects

Case studies

Social movements

Other

Experimental studies

Expert interviews

Literature reviews

Surveys

Other

Introduction

We select research projects based on their expected impact, as detailed on the Perspective page. Our main research program is in artificial intelligence, asking questions such as "What Features of AIs Indicate Sentience and Moral Relevance?" and "What Do People Think about the Moral and Social Inclusion of AI?" We have also researched the expansion of humanity’s moral circle to include farmed animals and other related topics. This research agenda contains specific in-progress projects we are working on, high-priority potential projects we are not yet working on but consider of particular interest, and a range of other potential projects. We are a publicly funded nonprofit organization, so your donation can help us complete more of this research, and we are keen to support external researchers interested in these projects.

In-progress projects

AI alignment and evolutionary debunking paper

Evolutionary debunking arguments claim that the evolutionary origins of moral beliefs render them epistemically defective. Some evolutionary debunking arguments aim to debunk all moral beliefs, while others target particular moral beliefs (e.g., anti-consequentialist beliefs, beliefs in speciesism). How should such arguments be taken into account in aligning AI with human values?

AI alignment and the ethical treatment of digital minds paper

This early-stage project explores the prospects for solving the problem of aligning AI with human values while also ensuring that we expand the moral circle to include future digital minds.

Animals, Food, and Technology (AFT) survey

We are collecting longitudinal nationally representative data on US attitudes towards Animals, Food, and Technology (AFT). We collected data in 2019, 2020, and 2021, and we next plan to collect data in 2023. We are also interested in various supplemental surveys and different survey populations (e.g., other countries, experts).

Artificial Intelligence, Morality, and Sentience (AIMS) survey

We are collecting longitudinal nationally representative data on US attitudes towards Artificial Intelligence, Morality, and Sentience (AIMS). Our first data collection was in 2021, and we next plan to collect data in 2023. We are also interested in various supplemental surveys (e.g., attitudes towards large language models such as GPT-3) and different survey populations (e.g., other countries, or NeurIPS/ICML experts, comparable to AI Impacts’ survey).

Autonomy and sentience of AIs experiments

We are conducting a series of experiments to disentangle and analyze the relationship between perceived autonomy and sentience in AIs, particularly in relation to moral consideration and perceptions of threat.

Cautious longtermism book chapter

We are contributing a book chapter to an upcoming anthology on the future of design and human civilization. The chapter will discuss existential threats and the importance of approaching our radical future with cautious longtermism: before humanity colonizes the universe, we must ensure that the future we would build is one worth living in.

Dancing qualia and theories of consciousness paper

This paper analyzes the implications of “dancing qualia,” in which changes in experience result from function-preserving manipulation of a mind (e.g., replacing neurons with silicon chips). It argues that dancing qualia do not have the implications they are usually thought to have but that they nonetheless have a range of ramifications for theories of consciousness in the metaphysics of mind, the philosophy of perception, and the science of consciousness.

Digital minds cause profile

We are writing a series of blog posts on artificial sentience that we hope will culminate in a cause profile of the issue. As of November 2021, we have published The Importance of Artificial Sentience, Psychological Perspectives Relevant to Artificial Sentience, Prioritization Questions for Artificial Sentience, and The Terminology of Artificial Sentience.

Features of artificial entities and moral consideration experiment

Artificial entities can be designed with a range of features, such as a humanlike appearance, language abilities, or emotional expression. This study uses conjoint analysis to estimate the effects of a range of features on the moral consideration of artificial entities.
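To make the approach concrete, here is a minimal sketch of how conjoint-style estimates can be computed from simulated ratings of hypothetical profiles. The attribute names, effect sizes, rating scale, and the use of an OLS estimator are illustrative assumptions, not the actual study design.

```python
# Illustrative conjoint-style analysis with simulated data (not the study itself).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_profiles = 2000  # hypothetical number of rated profiles

# Independently randomized binary features of each profile (assumed attribute names).
df = pd.DataFrame({
    "humanlike_appearance": rng.integers(0, 2, n_profiles),
    "language_ability": rng.integers(0, 2, n_profiles),
    "emotional_expression": rng.integers(0, 2, n_profiles),
})

# Simulated moral-consideration rating on a 1-7 scale with made-up effect sizes.
rating = (
    3.5
    + 0.4 * df["humanlike_appearance"]
    + 0.6 * df["language_ability"]
    + 0.8 * df["emotional_expression"]
    + rng.normal(0, 1, n_profiles)
)
df["moral_consideration"] = rating.clip(1, 7)

# With independently randomized attributes, a regression of ratings on the
# attributes recovers each feature's average effect on moral consideration.
model = smf.ols(
    "moral_consideration ~ humanlike_appearance + language_ability + emotional_expression",
    data=df,
).fit()
print(model.params)  # estimated per-feature effects
```

Because the attributes are randomized independently of one another, the regression coefficients in a design like this can be read as the average effect of adding each feature to a profile.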

Follow-up AFT and AIMS experiment(s)

We are conducting a series of experiments to gain a deeper understanding of the AFT and AIMS survey results. What are the contextual factors affecting support for policies affecting animals and artificial entities? How supportive are people of actual policy changes? How does support for a ban vary between the framings of a ban on production/selling vs. a ban on consumption/purchasing?

Individuation of animals experiment

How do humans perceive nonhuman animals who may be homogeneous in appearance or behavior as individuals rather than as a group or an exemplar of a group? What effect does individuation have on the moral consideration of a specific individual? What effect does individuation have on the moral consideration of the whole species or group?

Moral spillover blog post and experiment(s)

We are working on a blog post that summarizes the literature on moral spillover, such as increased moral concern for an individual in one group leading to more moral concern for individuals in different but related groups, as well as designing experiments to better understand this phenomenon.

Perspective-taking experiment

Taking the perspective of another sentient being is an important social phenomenon. In this study, we explore the relationship between perspective taking, moral consideration, and the moral circle. How does the effect of perspective taking on moral consideration change if beings are more psychologically distant?

Simulations and catastrophic risks report

The possibility of simulations, particularly technology that simulates sentient minds, raises many important questions for the likelihood of and possible reduction strategies for catastrophic risks. This report synthesizes scattered literature on the topic and proposes tentative strategic implications and promising research directions.

Substratism literature review and experiment(s)

Recent social psychology research has developed a rich conceptualization and contextualization of “speciesism,” discrimination against certain species, a concept originally developed in the philosophical literature. Similarly, humans can discriminate on the basis of substrate, such as against digital entities implemented on computer hardware instead of biological nervous systems. This project will seek to understand substratism in a manner comparable to the five studies conducted by Caviola, Everett, and Faber (2019) in “The Moral Standing of Animals: Towards a Psychology of Speciesism.”

Top priority projects

These are projects we hope to work on in the near future. Please contact us if you would like to discuss collaboration or your own work on them.

Construal level theory, moral consideration, and threat experiment

Psychological research has shown that humans can think at different levels of abstraction that shape their psychological distance to entities like AIs and farmed animals. These levels of abstraction, known as construal level, can shift between concrete (detailed, local, close) and abstract (general, global, distant) ways of thinking. How does thinking in concrete and close or abstract and distant ways shape the perception that AIs who exist now are threatening to humans? What about AIs who exist in the future? What are the effects of concrete and abstract ways of thinking on the moral consideration of AIs?

Individuation of AIs experiment

How do humans perceive AIs who may be homogeneous in appearance or behavior as individuals rather than as a group or an exemplar of a group? What effect does individuation have on the moral consideration of a specific individual? What effect does individuation have on the moral consideration of the whole group or category of AIs?

Moral messaging of cultured meat experiment

How does telling people that cultured meat is morally driven vs. not morally driven affect its favorability? How does knowledge of moral opposition to it affect this?

Perception of emotions in AI experiment

Recognizing emotions from facial expressions facilitates human interpersonal interactions, and face perception differs for human ingroups and outgroups. The faces of AIs embodied in robots have been shown to be important for human-AI interactions. How AIs see and process human faces to recognize human emotions has also been important for developing computer vision. Less work has investigated how humans perceive and recognize the facial expressions of AIs embodied in robots. Do humans perceive emotions differently in the faces of AIs compared to humans? How similar is emotion recognition for humanlike robot faces and human faces?

Possible effects of future technologies and social transformations on human values

There are a number of technologies and social transformations that could have important, substantial effects on human values. For example, if moral progress occurs largely due to people with old moral values passing away and young people with new views coming into social power, would human life extension slow down moral progress? Other changes of interest include cryonics, blockchain, nuclear fusion, surveillance and smart cities, various forms of transformative AI, climate change, and the reduction of poverty and global increase in GDP.

Possible medium-term futures of AI sentience and consciousness

We analyze the plausible ways in which AIs could become sentient or conscious: their likelihood, defining features, and implications. For example, is it more likely that sentience will emerge in language models, robots, game-playing reinforcement learners, or ensemble models? How will humans react to the emergence of sentience in each context?

Possible value reflection AI processes

Many discussions of “safe” or “aligned” AI futures involve different value reflection processes, such as coherent extrapolated volition, debate, indirect normativity, iterated amplification, or a period of long reflection. What will happen to values during such reflection processes? In particular, how much do the inputs of this process (i.e., the state of human values at the beginning) matter for the outputs?

Uncanny valley effect and the moral consideration of AIs experiment

Robots’ and AIs’ humanlike appearance and capacity to experience prompt feelings of eeriness among humans, a phenomenon known as the uncanny valley effect. Moral decisions made by AI agents who resemble humans are judged as less moral than the same decisions made by humans or non-humanlike AIs, a moral uncanny valley effect. How pervasive and impactful is the uncanny valley effect on the moral consideration of AIs? How does the increasing social integration of AIs affect the strength of the uncanny valley effect?

Other projects

These are projects we do not immediately plan to work on but may if we had more resources. Please still feel free to contact us if you would like to discuss collaboration or your own work on them.

Case studies

Social movements

We are also interested in aggregating and analyzing existing case studies, given that several have now been published.

Other

Experimental studies

Expert interviews

Literature reviews

Surveys

Other

