We recently published our foundational questions summary page, which catalogs the evidence on each side of important debates in the field of effective animal advocacy. The summary page currently runs over 9,000 words, the literature it references is much longer, and much of that literature in turn draws on fields like history and psychology. The personal experiences of advocates, companies, and nonprofits also provide a wealth of evidence on the impacts of different strategies. Given this abundance of information, researchers, and especially advocates without the spare time to consume all of the literature, should account for the informed opinions of others in the field. To help them do that, we conducted a brief survey of the people we believe have most thoroughly considered these issues, asking their opinions on the debates currently listed on our summary page, such as:
Do we need more confrontation (e.g. protests, disruptions) or nonconfrontation (e.g. online ads, corporate outreach) in the farmed animal advocacy movement?
0 (confrontation is much more effective)
5 (we have the right balance now)
10 (nonconfrontation is much more effective)
In this post, we detail our methodology and include at the bottom a table of the overall scores for each question. We also have a Google Sheet with the exact questions and data for anyone who’s curious.
We sent the survey to 21 people, listed at the bottom of this post, of whom 15 responded; 5 of those chose not to identify their responses with their names, usually because they didn’t want their personal views too strongly associated with those of the organization they represent. Most of them work professionally in effective altruism or animal advocacy research, and the rest are impact-focused nonprofit leaders who frequently engage and help with research. We chose not to use a strict criterion such as working full-time in research, but that was our main consideration. We recognize our subjective approach here has the downside of potentially increasing the risk of bias, such as unintentionally choosing respondents based on how much they agree with our own views, but a simpler, more objective criterion like holding a full-time research position would not accurately capture EAA expertise. One approach we considered was a pre-survey survey to select the respondents for the main survey, asking a question like, “Which 10 people do you think have most thoroughly considered these issues?” We felt the time cost for this (both for us and the researchers we reached out to) would lead to only a very limited benefit, so we decided not to pursue it.
We included detailed instructions on how to interpret the survey questions, which were framed in terms of which approach (e.g. confrontation vs. nonconfrontation) would be more useful on the margin. That is, where the respondent thinks a small, unspecialized resource would do the most good in the relevant movement (e.g. farmed animal advocacy, effective animal advocacy), e.g. where they would like a single $1,000 anonymous donation to go. The respondents were asked to assume that this money will not have any effects on the organization it’s given to outside of the increase in funding, and that there are organizations that can use funding for every specific intervention, to avoid concerns like, “Well, I think confrontation is very effective, but all the organizations currently implementing that strategy are doing other things I think are very ineffective, so I wouldn’t actually make a donation to support confrontation in the real world.”
We also asked about the confidence each respondent had in their answers. Few researchers have thoroughly investigated all the questions in our list, so this was our attempt to weight respondents by their expertise on individual questions. We considered an alternative approach of having Sentience Institute or another party assess each respondent’s expertise for each question, but we worried that this was too susceptible to bias, and such assessments could still be applied to the results after the data was collected. By collecting self-reported confidence, we risked a bias towards people who expressed undue confidence, e.g. because they wanted to unfairly influence the overall survey outcomes or because they are more confident than their expertise warrants. We explicitly asked respondents to avoid this, but we still see it as an important qualification to the results. If you would prefer to look at the responses without confidence weighting, or to apply your own third-party expertise assessments, you’re welcome to do so.
Similarly, we asked respondents to consider the best implementations of each strategy, since this seems like the more relevant consideration for impact-focused advocates. For example, if one thinks that most confrontational activism is ineffective, but thinks that large marches are actually extremely effective relative to nonconfrontation, they should probably answer that confrontation is more effective.
The scale for both the main questions and the questions about confidence was 0-10, and respondents were asked to normalize their answers such that an answer of 5 on a main question meant both approaches were equally effective. The normalization for confidence was based on the average survey response, across all respondents, being a 5. This means that if a respondent thinks they have much less expertise than other respondents for all questions, they should be comfortable answering with confidences of 0, 1, and 2, and similarly for a respondent with much more expertise having confidences of 8, 9, and 10. They should also not worry about absolute confidence in their positions. For example, they might be deeply uncertain about advocacy strategies in general, only having something like 55% confidence in all of their positions, but they could still answer with higher numbers. This does require some speculation about the other respondents, but it seems like the best way to gather useful, accurate confidence information.
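Concretely, the confidence weighting described above amounts to a weighted mean with each respondent’s self-reported confidence as the weight. A minimal sketch in Python, using made-up scores and confidences rather than the real survey data (which is in the linked Google Sheet):

```python
# Sketch of the confidence-weighted average described above.
# Scores: 0 = first approach much more effective, 10 = second approach,
# 5 = the right balance. Confidences: 0-10, normalized so the average
# respondent across all questions is a 5.

def weighted_average(scores, confidences):
    """Confidence-weighted mean of survey scores for one question."""
    total_weight = sum(confidences)
    if total_weight == 0:
        return None  # no respondent reported any confidence
    return sum(s * c for s, c in zip(scores, confidences)) / total_weight

# Hypothetical responses to one question from four respondents:
scores = [3, 6, 7, 5]
confidences = [8, 2, 5, 5]

unweighted = sum(scores) / len(scores)            # 5.25
weighted = weighted_average(scores, confidences)  # 96 / 20 = 4.8
print(unweighted, weighted)
```

Note how the high-confidence respondent’s score of 3 pulls the weighted average below the unweighted one, which is the intended effect of the weighting.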
Other important qualifications to these results include:
The following table has the unweighted and confidence-weighted average scores for each question, as well as whether there was at least 80% agreement in favor of one side or the other within the debate. We also have a Google Sheet with the exact question wordings and individual scores. Note that some respondents chose not to have their names associated with their responses, and that a score closer to 10 means respondents favored the approach listed second, while a score closer to 0 means they favored the approach listed first. We’ve color-coded the quantitative scores to help people quickly read the table and focus on the issues where respondents think the evidence suggests we need to see the biggest changes.
Strong: 0.0-3.0 or 7.0-10.0
Moderate: 3.1-4.0 or 6.0-6.9
Confrontation vs. nonconfrontation
Consistent vs. varying messaging
Institutional vs. individual (at least 80% agreement, towards institutional)
Influencer vs. general public
Broad vs. animal-focused messaging
Left-wing vs. nonpartisan messaging
Momentum vs. complacency from welfare reforms (at least 80% agreement, towards momentum from welfare reforms)
Reducetarian vs. vegan ask
Social change vs. food technology
Long-term vs. short-term focus
Less vs. more animal messaging (at least 80% agreement, towards more animal messaging)
Less vs. more environmental messaging
Less vs. more health messaging
Less vs. more farmed animal focus
Less vs. more wild animal focus
Less vs. more general antispeciesism focus
Less useful vs. more useful existing social movement evidence
Less useful vs. more useful existing EAA RCT evidence
Less useful vs. more useful intuition/speculation/anecdotes evidence
Less useful vs. more useful external findings evidence
Below are the 21 people to whom we sent the survey, based on our best guess of the amount of time they’ve spent thinking critically about these issues. 15 responded. We expect there to be substantial disagreement in the animal advocacy community about who qualifies for this list. We did our best to select respondents objectively, but of course this is a very difficult assessment and our selection was likely imperfect.