We should say more than “x-risk is high”

Base Rates
Dec 16, 2022


Epistemic status: outlining a take that I think is maybe 50% likely to be right.

Some people have recently argued that, in order to persuade people to work on high-priority issues such as AI safety and biosecurity, effective altruists only need to point to how high existential risk (x-risk) is, and don’t need to make the case for longtermism or broader EA principles. For example:

  • Neel Nanda argues that if you believe the key claim that “there is a >=1% chance of AI causing x-risk and >=0.1% chance of bio causing x-risk in my lifetime”, this is enough to justify the core action-relevant points of EA.
  • AISafetyIsNotLongtermist argues that the chance of the author dying prematurely because of AI x-risk is sufficiently high (~41%, conditional on their dying within the next 30 years) that the pitch for reducing this risk need not appeal to longtermism.

The generalised argument, which I’ll call “x-risk is high”, is fairly simple:

  • 1) X-risk this century is, or could very plausibly be, very high (>10%).
  • 2) X-risk is high enough that it matters to people alive today — e.g. it could result in their premature death.
  • 3) The above is sufficient to motivate people to take high-priority paths to reduce x-risk. We don’t need to emphasise anything else, including the philosophical case for the importance of the long-run future.

I think this argument holds up. However, I think that outlining the case for longtermism (and EA principles more broadly) is better for building a community of people who will reliably choose the highest-priority actions and paths to do the most good, and that this, in turn, is better for the world and for keeping x-risk low in the long run. Here are three counterpoints to relying only on “x-risk is high”:

Our situation could change

Trivially, if we successfully reduce x-risk or, after further examination, determine that overall x-risk is much lower than we thought, “x-risk is high” loses its force. If top talent, policymakers or funders convinced by “x-risk is high” learn that x-risk this century is actually much lower, they might move away from these issues. This would be bad because any non-negligible amount of x-risk is still unsustainably high from a longtermist perspective.

Our priorities could change

In the early 2010s, the EA movement was much more focused on funding effective global health charities. What if, at that time, EAs had stopped explaining the core principles of EA and had instead made the following argument, “effective charities are effective”?

  • Effective charities are, or could very plausibly be, very effective.
  • Effective charities are effective enough that donating to them is a clear and enormous opportunity to do good.
  • The above is sufficient to motivate people to take high-priority paths, like earning to give. We don’t need to emphasise anything else, including the case for effective altruism.

This argument probably differs in important respects from “x-risk is high”, but it illustrates how the EA movement could have “locked in” its approach to doing good if it had made this argument. If EAs had started using “effective charities are effective” instead of explaining the core principles of EA, it might have taken a lot longer for the movement to identify x-risks as a top priority.

Our priorities may change again, such that “x-risk is high” starts to look naïve. We can imagine some scenarios where this is the case: for example, we might learn that promoting economic growth, accelerating key technologies or ensuring global peace are more effective longtermist interventions than directly reducing x-risk.

We lose what makes EA distinctive

(I’m least certain about this argument)

Other movements make arguments similar to “x-risk is high”. Extinction Rebellion, a climate activist group, regularly draws attention to the risks the planet faces as a result of climate change in order to motivate political activism. The group has had limited success (at most, by introducing more ambitious climate targets to the conversation) and has also attracted criticism for overstating the risks of climate change.

I think “x-risk is high” is much more robust than claims that climate change will imminently destroy life on Earth. But other people might not notice the difference, and I worry that by (only) using “x-risk is high”, we risk being dismissed as alarmists. Such a dismissal would be unfair, and I’m sympathetic to the idea that both the EA movement and XR should sound the alarm, because we are collectively failing to respond to these issues quickly enough. But if we don’t make a more robust case than “x-risk is high”, that criticism could become more potent.

Takeaway

Outlining the case for longtermism, and explaining how longtermism implies that reducing x-risk should be a top priority even if x-risk is low, is a much more robust strategy:

  • If we successfully reduce x-risk or, after further examination, determine that overall x-risk is much lower than we thought, it’s still clear that we should prioritise reducing x-risk.
  • If other actions beyond directly reducing x-risk become top priority, those convinced by the case for longtermism are more likely to pivot appropriately. Moreover, if longtermism itself proves to be less robust than we thought, those convinced by the core principles of EA are more likely to pivot appropriately too.
  • EA retains the intellectual rigour that has gotten us to where we are now, and that rigour is on display. I think this rigour is the reason we attract many smart people to high-priority paths (though I’m unsure of this).

My thanks to Lizka for running the Draft Amnesty Day on the EA forum and prompting me to share this draft.

Written by Base Rates

Longtermist, reader of books, giver of unprompted advice
