Extinction Rebellion — commendations and implications

I’m intrigued by the new climate action movement Extinction Rebellion. The campaign group make the familiar (and justifiable) claims that climate change constitutes an unprecedented global emergency and that urgent action is necessary. Their rhetoric also contains some rather surprising warnings, such as “to survive, it’s going to take everything we’ve got”. I think this message captures something true but, as I’ll argue, it also has unintended implications.

Extinction Rebellion marching somewhere, passionately

I think it’s good to bring up the topic of existential risk in the debate about climate change. Existential risks are risks which could wipe out or permanently derail human civilisation, dramatically curbing its vast potential. Climate change appears to be such a risk, so it’s very much worth worrying about. Framing the problem in this way is also an effective message, a grander and more universalistic version of the ‘rally around the flag’ tactic. The campaign’s appeal to humanity’s collective agency stirs up our survival instinct and unites us against common threats — a rousing feeling that sci-fi films like Independence Day and Interstellar draw upon to great effect.

The movement has already encountered some criticism, however. James Butler claims that the campaign fails to formulate clear political demands. He writes:

If we accept that we have a very limited time to bring emissions under control, then we must also accept that the vehicles for that enormous transformation will be those that already exist, however imperfectly. That includes national legislatures, international bodies and political parties.

This seems like a fair point. A more interesting criticism, however, is that if the Extinction Rebellion movement is concerned with preserving humanity’s future, it might do better by targeting other problems.

The core question here is whether climate change really constitutes an existential risk: could climate change actually wipe out humanity (or at least its potential)? Researcher John Halstead did the unenviable work on this question. His analysis of potential emissions pathways and how they affect the probability and extremity of warming (including the rather terrifying prospect of more than 9 degrees of warming) led him to conclude the following:

The fact that warming takes a long time to occur and gives us time to respond is a major consideration. In the absence of runaway effects, the [existential] risk of climate change seems relatively small.

What does this mean? It means that, whilst extreme climate change would be a catastrophic disaster which would radically alter the course of human civilisation, it probably won’t finish us off. This is a fairly bold claim, and I encourage you to read Halstead’s very approachable Google Doc to judge for yourself, but I think the strength of this conclusion becomes clear once the risks from climate change are compared to other existential risks. These include the threat of a deadly manmade pandemic, a destructive nuclear war and the rise of uncontrolled or misaligned advanced artificial intelligence. I think these risks are more dangerous for two key reasons:

  1. They are comparatively neglected.
  2. They could materialise very, very quickly.

Climate change, as severe a threat as it is, is not neglected. Global climate finance sits at the $930bn mark¹. Meanwhile, biosecurity, nuclear security and AI safety remain small, under-funded fields battling rapid advancements in their respective technologies. Even if climate change were twice as deadly as all of these threats combined (it almost definitely isn’t), we should still be scrambling to address this imbalance.

Even the most extreme and worrying trajectories for climate change indicate that change would happen relatively slowly. It would be a disaster in slow motion. This doesn’t mean that its effects are necessarily avoidable, but we would see them coming. Other existential risks could materialise quite suddenly. Take Ebola, for example: a small-scale, naturally occurring pandemic managed to take the modern world by surprise. Thankfully, the response was (eventually) effective and, because the virus could only be passed on via bodily fluids, it didn’t spread quickly. We should count ourselves lucky — a more dangerous, airborne virus could do a lot more damage before we’re able to respond.

We were also probably all a little terrified earlier this year when a nuclear war almost erupted between the US and North Korea over a few tweets. It’s common knowledge that the nuclear arsenals of the great powers can be deployed with ease and at short notice. A nuclear exchange doesn’t look likely right now, but the prospect should still terrify us.

The dangers of advanced artificial intelligence are totally new to us. However, because we know so little about this risk, we also won’t know when it’s coming. As Eliezer Yudkowsky argues, there is no fire alarm signalling the emergence of an artificial general intelligence whose goals and values may not align with our own.

Extreme climate change would be bad — extremely bad, in fact — and I’m glad that ordinary people are taking to the streets to remind us of this. But if it’s extinction we’re talking about, they might do better to draw attention to more pressing concerns.

  1. UNFCCC Standing Committee on Finance, “Biennial Assessment and Overview of Climate Finance Flows,” 2016, 6, http://unfccc.int/cooperation_and_support/financial_mechanism/standing_committee/items/10028.php.

Longtermist, reader of books, giver of unprompted advice