As a research organization dedicated to rigor and collaboration, we feel it’s important to share not only our accomplishments but also our mistakes, including those that would have been difficult to avoid ahead of time, even with more effort and insight, but that nonetheless seem to be mistakes in hindsight. Our decision to have a dedicated page for these mistakes was inspired by the mistakes pages of GiveWell and Animal Charity Evaluators. A small number of mistakes are omitted to protect privacy.
2022
- We found out that we could have gotten zip code data for previous iterations of the Animals, Food, and Technology (AFT) survey, which would have been useful for analysis. We got zip code data for the 2021 iteration, but we should have pursued this sooner.
- We were rejected for a large research grant after a suboptimal pitch in which we emphasized our open-mindedness and flexibility, while the funder was actually looking for the opposite emphasis: our commitment, core competencies, and the specific, concrete ideas we planned to execute. It is unclear to what extent we should change our grantwriting strategy for other funders. We also received feedback that raising funds for a specific researcher, whom we could hire if a specific funder contributed, may have been a more effective strategy.
- We had a cautious runway (over 12 months of expenses) prepared for events like the collapse of FTX, one of the largest funders in effective altruism, but when the collapse occurred, this runway proved more of a liability than we expected: some of the “bridge” funders who stepped in to help organizations harmed by the collapse prioritized organizations with less runway.
2021
- In our perspective-taking study, we did not sufficiently account for the contentious methodological debate around mediation analysis. For example, we probably should have collected demographic data at baseline and controlled for it, and we probably should have chosen a different sensitivity analysis.
- We were overly ambitious with the number of predictors we included in our survey on the predictors of moral consideration of AIs. Although the predictors are all interesting and the results are compelling, there are too many to fully unpack in the paper, making it denser than is desirable. This might reduce the survey’s overall impact in the long term.
- We underestimated the number of terms and academic citations we needed to include in The Terminology of Artificial Sentience blog post, which we had initially intended to be part of the Assessing Sentience in Artificial Entities blog post. This led to a longer research phase than anticipated, and reviewing and editing the posts took more time than it should have, both because of the decision to split the piece in two and because of the added length of the terminology post.
- In the AIMS 2021 survey, we preregistered our 80% credible intervals for responses to each item. However, the wording of the four items assessing moral consideration of nonhuman animals and the environment was changed after we made our predictions and before we preregistered the materials. We didn’t reevaluate the predictions for the new wordings, so our predictions for these four items do not correspond to the actual wording of the items measured in the survey.
2020
- While we have received positive feedback on the SI Podcast, it may not have been worth the effort we put into it, given that it has not reached a large audience. Relatedly, it may have been a mistake not to put more effort into promoting it to particular audiences (e.g., skeptics and humanists) where it could have gained momentum.
- We incorrectly entered payroll information for an employee, which was not as easily fixable as we expected because of issues in the payroll software codebase. This created a time sink of contacting various customer support teams to fix the payroll records.
2019
- We had errors in the initial publication of our global and US factory farming estimates: using an incorrectly low estimate of global egg-laying hens, erroneously swapping two variables within the fish estimates, and using a US sales figure for meat chickens instead of the inventory figure.
- Our workers’ compensation policy was cancelled, and on the advice of our broker, we paid for a new policy ourselves. However, the payment did not itself constitute purchasing the policy, which resulted in a fine. Our mistake was not seeking a second opinion on how to acquire a new policy after the initial one was cancelled.
- We significantly underestimated the time a large research project would take, leading to a backlog and less time spent on other important projects.
2018
- Jacy should have titled his TEDx talk “The Future Is Vegan.” While the talk racked up tens of thousands of views, the title “The End of Animal Farming” kept it out of the top search results for common relevant queries, most of which include the word “vegan” (e.g., “vegan TED talks”), and presumably also limited its appearance in autoplay and the sidebar.
- From July to November, Jacy probably spent a little too much time speaking for local effective altruism and animal advocacy groups, given this was less cost-effective than expected.
- The book might have done better with the title “The End of Meat,” which would have prioritized virality over intellectual rigor. (“The Future Is Vegan” was also a possibility, but probably would have narrowed the audience too much.)
- A major media outlet expressed interest in an op-ed centered on the book, and Jacy thinks it would have been more likely to be published if he had spent more time corresponding with the editor and pushing for its publication, which would have been worthwhile.
- Some of Jacy’s book interviews could have been improved if he had rehearsed and outlined responses to common questions; in other words, Jacy thinks he should have focused more on ‘sticking to talking points.’
- Jacy should not have started the large-scale messaging strategies RCT in 2018 (now a collaborative study with two academics), as we have since deprioritized direct, short-term experimental research like this, and it would have been better tackled by the quantitative researcher we expect to hire in 2019.
- Kelly failed to notice one moderately sized donation and consequently did not thank the donor or follow up with them in a timely fashion.
- After our 501(c)(3) application was returned due to a form error, Kelly should have conducted the rest of the application process on her own or with a hired legal service instead of the pro bono service we were using, in order to avoid several months of waiting.
- Our series of three blog posts on the tractability of changing the course of history arguably should have been one report, but by the time we realized this, it would have required too much editing to be worthwhile.
2017
- We did not sufficiently review our first fundraising pitch, shared in May 2017, to ensure it was clear, free of typos, and specific about which research projects were our top priority.
- We failed to include one full-time EAA researcher in our “Effective Animal Advocacy Researcher Survey June 2017” who should have been included. Fortunately, we were able to send that person the survey before we published the results.
- When we initially transferred the Research Network from Sentience Politics, we sent out emails from multiple addresses and should have instead used a single point of contact.
- Two generous lawyers handled our incorporation and 501(c)(3) application pro bono. However, the process took a long time, and in retrospect, it might have been better to do it ourselves, even though the likelihood of making an error would have increased. The delay in incorporation postponed when we could make our first hire, and while our fiscal sponsor, the Centre for Effective Altruism, has been very generous in continuing to accept earmarked donations for us for longer than anticipated, receiving our 501(c)(3) status sooner would have spared them some time.
- In our first researcher job interviews, we took too few notes on each applicant, overestimating how many details we’d remember without writing them down. We didn’t want to make candidates nervous, and we have personally had plenty of interviews where the interviewer didn’t take (or appear to take) notes, which made it easy to overestimate how much we’d remember. However, our applicant pool was small, we used an evaluation chart with predetermined criteria, and our application form and an editing project heavily informed our evaluations, so the lack of more detailed interview notes probably did not significantly affect the process.
- We underestimated the initial administrative time costs of establishing the organization, which led us to rush to complete one of our initial research projects, our social movement case study of the British antislavery movement, by the end of November. The project could also have been somewhat smaller (i.e., the research phase could have stopped earlier in light of diminishing returns on significant new information).
- In our “Survey of US Attitudes Towards Animal Farming and Animal-Free Food October 2017,” we should have included two additional questions despite the added cost: political affiliation and self-identification as vegetarian. We also should have phrased the last two questions about humane farming more similarly (either both “treated well” or both “treated humanely”). Seeking additional peer review might have caught these issues, but we’re not sure.
- We overestimated how much interest in the “Survey of US Attitudes Towards Animal Farming and Animal-Free Food October 2017” we’d get from mainstream media outlets. In retrospect, we should not have sent it to those outlets because that might have reduced our chances of getting coverage from them for future projects.
- In our social media posts and press release about the “Survey of US Attitudes Towards Animal Farming and Animal-Free Food October 2017,” we should have put more emphasis on the strategic implication that we should focus more on changing institutions. We partly made this mistake because we expected more of our coverage to come from mainstream outlets, which wouldn’t be as interested in the strategic takeaways. We also should have included the animated video Mercy For Animals made about the survey results in the press releases we sent following its publication, but we hadn’t looked carefully enough at the MFA blog post to see it at the time.