In the effective altruism (EA) community (one of Sentience Institute’s two core communities along with animal advocacy), many are focused on impacting the far future — on making a difference to what the universe looks like millennia from now and beyond, when potentially astronomically more sentient beings could exist than do today. Sentience Institute’s goal of moral circle expansion is largely motivated by the importance of the moral circle in determining what sort of society humanity builds in the long term.
The default approach in EA to impacting the far future has been to work on reducing extinction risk, increasing the likelihood that humanity survives and expands its reach to the stars. The most common approach has been artificial intelligence alignment, working to ensure that advanced, powerful AI implements the values of its creators.
I’ve written a blog post for the EA community arguing that EA should place greater priority on improving the welfare of the sentient beings who might exist in the far future, rather than just increasing the likelihood that they exist. Specifically, the post compares the strategies of moral circle expansion and AI alignment.
The post is written for a very specific audience: people involved in the effective altruism community who are familiar with cause prioritization and arguments for the overwhelming importance of the far future. It may read as strange or confusing to people without that domain knowledge. If that’s you, but you still want to read the post, please consider reading the articles linked in its Context section to get your bearings.
To read the post on the Effective Altruism Forum, click here. Please consider leaving your thoughts on the topic in the comments section.