This is going to be a series of three articles. The first (this one) discusses and justifies Toby Ord’s estimate that the human race has about a 1 in 6 chance of destroying itself in the next 100 years (a large portion of this article is functionally a summary of his book “The Precipice”). The second will cover what the future might look like if we don’t, and explain why that risk is likely to decline. The third will introduce the beginnings of a framework for “time justice” as a way of conceptualizing how the conclusions of the first two parts should influence personal and public decision making in the present.
Well, we’ve made it this far, right?
We’ve been around for about 2,000 centuries (200,000 years). That alone puts a probabilistic cap on any kind of extinction event whose likelihood hasn’t significantly changed recently (which applies to most natural risks). If there were even a 1% risk of extinction from natural causes each century, the chance of our surviving to the present would be only about 0.0000002% (roughly 1 in 500 million). Even a 0.34% risk per century would leave us just a 0.1% chance of being here now. So, logically, the default assumption should be that the natural risk of human extinction is low (survivorship bias notwithstanding).
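This back-of-the-envelope anthropic argument is easy to check. A minimal sketch (the function name is mine; the per-century risks are the ones from the paragraph above):

```python
def chance_still_here(risk_per_century, centuries=2000):
    """Probability of surviving 2,000 centuries (~200,000 years),
    assuming an independent extinction risk each century."""
    return (1 - risk_per_century) ** centuries

print(chance_still_here(0.01))    # ~2e-9, roughly 1 in 500 million
print(chance_still_here(0.0034))  # ~0.001, i.e. about 0.1%
```

The fact that we are here to run this calculation is weak evidence that the true natural risk per century sits well below 1%.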
Unfortunately, many of the existential risks to humanity are anthropogenic (human-made), and have more or less arisen as real possibilities only in the past century. If you’re familiar with the Fermi Paradox and the idea of the “Great Bottleneck,” this makes sense.
To put it simply, it goes like this: given how big and old the universe is, if life develops with some reasonable frequency on planets that have the right conditions, and that life has some chance in some cases of expanding into multi-stellar civilizations, there should be a lot of them by now. The universe is so big, and so old, that even if those chances are really small, we should have encountered one of these civilizations. Why haven’t we?
The great bottleneck is one of the explanations of this surprising lack of company — perhaps all or very nearly all civilizations face some set of circumstances before spreading to other solar systems that wipes them out (or prevents them from doing so).
From a technological perspective, we are pretty damn close to expanding to other solar systems. It isn’t even really beyond our technological means right now — terraforming notwithstanding — so if there’s a great bottleneck with a high chance of nipping our multi-stellar future civilization in the bud, it’s probably now.
Look around. This is the great bottleneck.
It actually makes sense, given that the biggest risks to human civilization are not natural risks, but human-created (anthropogenic) ones. In order to understand our risk of getting bottlenecked here (be it near-total extinction, perpetual civilizational collapse, or terminal dystopian outcomes), we have to include estimates of existing natural risks and layer on top of them the newer anthropogenic risks that have increased the overall probability of total/near-total destruction.
Most of this first article will be dedicated to reviewing and explaining those existential risks to humanity, and estimates of their probability. The most reasonable and comprehensive exercise of this sort (that I’ve encountered) is Toby Ord’s The Precipice, which I highly recommend. From this book we get the probability estimates cited throughout this article (which will largely summarize the book’s explanations of those probabilities, as it is important to take them seriously in order to move forward with the conversation).
How We Die
That 1 in 6 probability, for those who do not mess around with probabilities, is largely a result of the fact that combining various largely uncorrelated ways an outcome can occur does a really good job of raising the overall probability of that outcome occurring.
Playing multiple simultaneous rounds of Russian roulette with a bunch of revolvers that have between ten and a billion chambers, and only one bullet each, quickly gets you to a probability on the order of magnitude of 1 in 10. It’s just like how playing multiple simultaneous rounds of Russian roulette with regular six-chamber revolvers would quickly make one of them blowing your face off a near statistical certainty. We’ll get to that later.
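To make the roulette analogy concrete, here is a minimal sketch that treats the per-risk estimates cited throughout this article as independent and combines them. (Ord’s headline 1 in 6 is a considered judgment, not this naive product, but the product lands in the same ballpark.)

```python
# Per-century existential risk estimates, as cited throughout this
# article (treated here as independent -- a simplification).
risks = {
    "asteroid/comet impact": 1 / 1_000_000,
    "supervolcanic eruption": 1 / 10_000,
    "stellar explosion": 1 / 1_000_000_000,
    "nuclear war": 1 / 1_000,
    "climate change": 1 / 1_000,
    "other environmental damage": 1 / 1_000,
    "natural pandemic": 1 / 10_000,
    "engineered pandemic": 1 / 30,
    "unaligned AI": 1 / 10,
    "other anthropogenic risks": 1 / 50,
}

survival = 1.0
for p in risks.values():
    survival *= 1 - p  # the chance this particular revolver doesn't fire

total_risk = 1 - survival
print(f"combined risk this century: ~{total_risk:.0%}")  # ~15%
```

Notice that the total is dominated by the two or three largest revolvers; the one-in-a-million risks barely move the number.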
This is the same magic of probability that allows uncorrelated assets to dramatically improve the return profile (in terms of Sharpe ratio) of an investment portfolio, which is what Bridgewater calls “the holy grail of investing.”
Except, in this case, we’re talking about a higher probability of ending human civilization, rather than a higher probability of making Ray Dalio another billion dollars. So, perhaps, it’s more “the unholy grail of the end of human civilization” that we’re focusing on today.
And the goal here is less to justify that specific 1 in 6 number (we are compounding estimates, each with a pretty wide margin of error), and more to make a reasonable case that it’s not absurd to think the risk is around that order of magnitude, before we move to the next article. Doing that requires some risk-by-risk exploration of these estimates. So, here we go.
Natural Existential Risks
Asteroids and Comets
This is one of our higher-confidence estimates. We have mapped the locations and trajectories of most of the near-Earth asteroids large enough to cause a major extinction event. Most of the remaining risk comes from uncertainty about how complete our catalogue is, and from the margin of error in our projections of their future trajectories.
By way of example, we are fairly confident that we’ve mapped 95% of the asteroids bigger than 1 kilometer across, so all of the risk for that class comes from the remaining unmapped asteroids we assume exist. And for the larger asteroids over 10 km (which have more destructive power, but are also easier to spot), we really do think we’ve found them all.
These remaining uncertainties leave us with a risk estimate of about 1 in 120,000 for the medium-sized guys and about 1 in 150 million for the big boys. You can lay a bit more uncertainty on top of that. For some asteroids (a few kilometers across, maybe), we could use existing technology (or, with enough advance warning, develop technology) to alter their orbits and make them miss us: maybe nuclear explosions, kinetic impactors, ion beams, etc. Unfortunately, any technology we could use to alter an asteroid’s trajectory away from earth could, in the wrong hands, also be used to alter a trajectory toward earth (a technologically advanced anarcho-primitivist death cult, maybe?).
The actual civilization-ending mechanism here is not so much turning the surface of the earth into lava on impact as the slightly less exciting prospect of blowing enough debris into the upper atmosphere to darken it, dramatically cooling the world and making it markedly less habitable.
In any case, these numbers work out to a 1 in 1,000,000 risk.
Supervolcanic Eruptions

74,000 years ago the Toba supervolcano erupted in Indonesia and lowered global temperatures by several degrees for several years. Ash could be found as far away as Africa; India was covered in a meter-thick layer of it. For a much more recent example, the Tambora eruption in 1815 lowered global temperatures by 1 degree Celsius.
Magnitude 8–9 volcanic eruptions occur at a rate of about 1 in 200 per century (with a confidence interval of around 1 in 50 to 1 in 500), with larger ones (magnitude 9+) like Toba at a rate of about 1 in 800 (with a wider confidence interval of 1 in 600 to 1 in 60,000).
Still, Tambora-like eruptions aren’t existential threats to humanity, and you can even make the case that not all Toba-level ones would be either (so we can discount part of this risk).
On top of those, the geological record includes a different kind of eruption that would almost certainly do us in. About 250 million years ago, the eruption of the Siberian Traps created a lava flow the size of Europe and, through its gas emissions, may have caused the end-Permian extinction (the largest extinction in the history of the earth). Something similar would probably more or less do us in today. These are called flood basalt events, and one of that size has, at the highest estimate, about a 1 in 200,000 chance of happening per century.
Toby Ord blends these into a total 1 in 10,000 chance, which we’ll run with for the purposes of this article.
Stellar Explosions and Gamma Ray Bursts
These are actually two separate types of risk, and astronomers have made estimates for each of them (although gamma-ray bursts are pretty poorly understood). The actual mechanism for harm is basically cooking away our ozone layer, greatly increasing the earth’s surface exposure to stellar radiation. Our “catastrophic” outcome here from “The Precipice” is defined as an event that destroys 30% or more of the ozone layer.
Specifically, for supernovae, we can do better than simply using the probability of a catastrophic event in an average century: we can look at potential supernova candidates within 100 light years of us.
These risks work out (low confidence) to about 1 in 50 million for supernovae and 1 in 2.5 million for gamma ray bursts.
These are pretty unlikely, and some of them may not even be truly “existentially” threatening, leaving us with a combined estimate on the order of 1 in a billion.
Anthropogenic Existential Risks
Nuclear Weapons

The big fear with nuclear weapons is the creation of nuclear winter: smoke lofted high above the clouds cannot be rained out of the atmosphere (the mechanism is actually pretty similar to a large meteor strike). A large-scale nuclear war could clog the upper layers of the atmosphere with smoke that persists for years, blackening the sky, chilling the earth, and causing persistent, global, massive crop failure.
Unfortunately, our understanding of this phenomenon is extremely limited — in part because worrying about nuclear war isn’t nearly as sexy as it used to be. Since 1991, only two climate models of the effects of a full-scale nuclear war have been published, despite the fact that missiles have remained minutes away from launch for that entire period.
And we’ve gotten much closer to nuclear war in our history than most people like to admit. Beyond famous examples like Stanislav Petrov, and other incidents of the US or Russia believing for a few moments that missiles had been launched from the other side, there are also more flavorful stories like that of Valentin Savitsky, captain of the nuclear-armed Soviet submarine B-59, which was being hit with depth charges by US warships in 1962. Savitsky gave the order to use the submarine’s tactical nuclear torpedo (with the strength of the Hiroshima bomb); it was only by happenstance that his submarine was the one carrying the commander of the flotilla, Vasili Arkhipov, whose consent was also required — and who vetoed the order. There are a lot of these close calls.
Obviously, quantifying such risks is a bit squishy, but Toby comes out to 1 in 1,000, which, to me, seems on the optimistic side — especially considering likely nuclear proliferation in the near future. I still think it’s fair to go with his estimate, given that in many nuclear-winter scenarios civilization could survive in geographies well positioned with respect to fallout (due to strategic unimportance and tempering geographic factors), like New Zealand.
Climate Change

Let’s move to one of today’s more popular impending crises: climate change. Most of what folks talk about regarding climate change really doesn’t hit the mark for an existential threat. Major changes in our climate system could raise sea levels, change weather patterns, and shift what crops grow where. While this may impoverish some people and lead to instability and international strife, it doesn’t constitute an existential threat in itself. The Paleocene–Eocene Thermal Maximum, when temperatures rose 9–14 degrees Celsius above pre-industrial levels, caused major climate changes but did not lead to any mass extinctions (incidentally, it was also linked to a greenhouse effect very much like our own climate impact, but on a much larger scale: around 1,600 ppm of atmospheric carbon versus today’s rise from 280 ppm to 412 ppm).
Crops are more sensitive to reductions in temperature than to increases, and most of our food base would be fine. Ocean levels could rise a hundred meters and most of the earth’s land area would remain. Some areas may become uninhabitable due to water scarcity, but other areas would likely see increased rainfall.
With all that said, international strife caused by these changes, or unpredictable ecosystem collapses, could result in more apocalyptic outcomes. And some corner cases, like runaway greenhouse effects from melting permafrost or from the release of methane clathrate deposits in the ocean, present some possibility of triggering a civilization-ending event. We’re going with an estimate of 1 in 1,000 for this one too (although I think it is probably less).
Other Environmental Damage
Humans have caused a lot of damage to the environment. But folks who compare this damage to mass extinctions are exaggerating. The big five “mass extinction events” in geological history each eliminated more than 75% of all species on earth. We have lost about 1% of species since humans came around. This stuff is pretty sensationalized.
There are articles, as an example, about how the extinction of honeybees could cause massive crop failure — while studies estimate that the disappearance of all honeybees and other pollinators would cause only a 3–8% reduction in crop yields. A huge bummer, but not an existential threat to humanity. Toby puts the risk of this bucket at 1 in 1,000, which seems a bit overestimated to me. But, hey, we’re working in orders of magnitude here.
Pandemics (Engineered or Otherwise)
Well, this one is obviously topical. I think our recent experience with a pandemic quickly going global (albeit not a civilization-threatening one) demonstrates that a higher degree of interconnection has made us more exposed to the risk of a global pandemic. At the same time, our ability to rapidly develop vaccines and treatments has also increased — along with our fundamental understanding of how diseases spread. You only have to go back two centuries to find people who believed that miasmic gases caused tuberculosis. And now it has taken us under a year to develop multiple effective vaccines for COVID-19.
But if we are concerning ourselves with civilization-ending pandemics, we are unlikely to find one in nature. Even the bubonic plague, which killed 25–50% of the population of Europe, had a profound effect on history but didn’t end it. A plague would have to be unnaturally virulent and deadly to actually end civilization. Speaking of “unnatural,” however, the same technological advancement that has made us able to effectively prevent and treat disease has also allowed us to make even deadlier plagues.
We can separate the risk here into intentional and unintentional creation and release of a civilization ending pandemic. Both risks are pretty high.
The root of the unintentional risk is that we, now, as a species, do a lot of extremely dangerous viral research and have a pretty garbage track record of doing it safely. We all have the image from Hollywood of these super-sterile hyper-secure facilities where viral research is done with minimal risk of it escaping (until the villain or whatever has some scheme to get the virus into the world). These facilities do exist!
In fact, there is a name for them: BSL-4 (Biosafety Level 4) facilities. They carry the highest level of biosafety certification. Just like in the movies, they are used for dangerous viral research, including work with SARS, the 1918 flu, H5N1, anthrax, etc., and they experiment at times with making these viruses more contagious or deadly (incidentally, they’re also used for extraterrestrial samples, which is cool).
The bad news is that even BSL-4 labs have stupid breaches all the god damn time. The 2007 foot-and-mouth outbreak in the UK came from a lab with a leaky pipe that dripped into groundwater.
So, take your imagined Hollywood style super secure lab, and imagine two lab assistants walking by an exposed leaking pipe and one of them turns to the other and says,
“Should we do something about that?”
The other guy shrugs, saying,
“We’ll report it after we finish these tests. It’ll be fine.”
I mean, damn. If that isn’t some ham-fisted foreshadowing.
And the actual lab that released the foot-and-mouth outbreak has had its license renewed since then.
There’s another issue: a lot of dangerous research is done at less secure labs. Labs doing gain-of-function H5N1 research weren’t even required to be BSL-4 rated. And various less secure entities often handle dangerous viruses. In 2015, the US military’s Dugway Proving Ground accidentally sent live anthrax (instead of inactivated anthrax) to 192 labs in 8 countries. And that was just, I guess, an oopsie.
So, it’s worth taking the risk of us accidentally releasing a virus of enhanced virulence and deadliness seriously. But, then there is also the risk of the intentional creation and release of biologically modified viruses.
There has been a lot of theorizing around the risk of nuclear proliferation. Beyond the risks of nuclear war between countries, concerns about nuclear weapons falling into non-state actors’ hands justify much of the security around nuclear weapons, and are a large part of the drive to restrict which countries have nuclear programs. It’s not just about those countries having the weapons; it’s that those countries create additional vectors of risk for others to get access to nuclear materials (or worse, already-made nuclear warheads). Poorly guarded, discarded uranium in abandoned Soviet facilities, as an example, is one of the greatest nuclear threats, and turning it into a “dirty bomb” is probably attainable for a well-funded non-state (likely terrorist) entity. Due to these concerns, various accords, norms, laws, etc. try to mitigate this risk. There is a lot more that could be done, but the Cold War put enough of a scare into a generation of now-adults that people at least remember what it was like to take the risk seriously. We are far more poorly guarded against biological threat proliferation.
The international body that monitors biological weapon proliferation risk, the Biological Weapons Convention, has an annual budget of $1.4 million — about the annual budget of a typical McDonald’s. Meanwhile, the availability of gene editing has greatly increased: online services allow you to upload genetic sequences and have them synthesized and sent to your home address (made possible by the plummeting cost of DNA sequencing and synthesis over the past 20 years). Some services make an effort to filter out dangerous orders, but those reviews are imperfect and in the best cases cover only ~80% of orders. High school and undergrad students now sometimes use CRISPR in science projects. This is like everyone in the world having affordable access to nuclear weapons: all you need is some publicly available knowledge, some money, internet access, and a mailing address.
So, with natural pandemics unlikely to create a real problem (~1 in 10,000), I think Toby Ord’s estimate of ~1 in 30 for engineered pandemics is, horrifyingly, quite reasonable.
Artificial Intelligence

This is far and away the largest threat to human civilization. I’m going to keep this one brief, as I imagine that most people reading this already have a strong opinion, and making the full case would take another article about the length of this one.
Incidentally, there is less disagreement on this point than public debates would imply. Most of the debate about the threat of artificial general intelligence is about its degree of imminence: some researchers think it could take decades until we reach general AI. With that said, most researchers agree that we can build it, that we will, and that it poses an existential threat to humanity. In the 100-year time frame we are talking about for the purposes of this article, a few decades still counts as imminent.
Rather than go into minutiae about the risk from AI, I will point you to the book Life 3.0 by Max Tegmark for a colorful narrative of how an AI takeover could take place, and pull this summary of a 2016 survey of leading AI researchers from The Precipice to give you a sense of the broad agreement on these points:
“…70 percent of the researchers agreed with Stuart Russell’s broad argument about why advanced AI might pose a risk; 48 percent thought society should prioritize AI safety research more (only 12 percent thought less). And half the respondents estimated that the probability of the long-term impact of AGI being ‘extremely bad (e.g., human extinction)’ was at least 5 percent.”
So…yikes. As with the others, we are accepting Toby’s estimate that existential risk from AI is at ~1 in 10 over the next century.
Other Anthropogenic Risks
There are a few of these (a nanotechnology apocalypse like Hugh Howey’s Wool series, scientific experiments gone wrong, etc.), but I’ll focus a bit of discussion on one I find particularly interesting: global dystopian outcomes.
A global dystopia is a brand new possibility. It is a new thing for humans to even have enough cross-communication that most nations (or even continents) are aware of the existence of all the others. It is an even newer thing that nations can send people anywhere in the world, and newer still that it can be done in hours rather than months. It is, above all, an extremely new thing that information, messages, and media can be sent anywhere in the world simultaneously.
Technology has given tools both to oppression and to resistance in a measure we have yet to fully understand, with tools of surveillance, propaganda and projection of global power on one side and encryption, dispersion of communication channels, and ease of organization on the other. We do not yet know where this balance will fall out.
Even some of the most securely anti-central-power inventions, like bitcoin, have not yet been tested in the way they likely one day will be. The clash of these two fundamental principles of human organization, each with its own new weapons and tools, has not yet happened. Could China use its control of fiber optic cables to functionally cut off its own internet from the rest of the world, preventing non-Chinese bitcoin nodes from communicating with Chinese miners, then mine a long enough chain to reconnect to the rest of the world’s internet and rewrite weeks, months, or years of transactions? Control of infrastructure, and the ability to influence people’s actions with threats of imprisonment or violence, are effective tools against encryption. Folks with a more conventional mindset tend to vastly underestimate the potential of the internet and encryption to fundamentally change (and decentralize) how humans organize and interact — but cypherpunks also tend to underestimate the power of guns and infrastructure.
In any case, beyond the risk of an enforced dystopia a la 1984, there are also risks of a global monoculture that fails at interstellar expansion (for religious reasons, or simply by having bad cultural encoding for technological progress); of bad systemic dynamics bringing us down from the current level of surplus and security that allows for space programs; or of technological innovation that short-circuits individual human desires in a way that prevents expansion (say, 40% of the human race living in a VR videogame with their pleasure sensors maximally stimulated, a la the rat with the cocaine lever).
This one is hard to put a number on. The risk is brand new and very poorly understood; we are living through the process of figuring it out right now. Toby Ord lumps this in with a somewhat miscellaneous list of anthropogenic risks and gives them an overall probability of 1 in 50.
So, Who Cares?
Per our opening discussion of the bottleneck, we don’t need to go extinct to miss out on the majority of our future potential as a species. Simply remaining earth-locked would doom us much more quickly than going multi-stellar. On earth, an absolute ceiling exists ~5 billion years from now, when the planet will be swallowed by the sun as it becomes a red giant, but even a billion years before that we would be unlikely to survive the runaway greenhouse effects caused by the sun’s increasing brightness. Around a billion years from now, plate tectonics are also likely to fail, triggering a great cooling that would kill us. That’s probably a reasonable ceiling if none of the aforementioned risks play out. But our likelihood of getting to that point, with all of the other ways we could kill ourselves in any given century before then, is extremely low.
The vast majority of the potential for human life exists out there in the rest of the universe, not here on earth. But as long as we are stuck on earth, that entire future potential could be dashed by a single planet-wide catastrophe. So scenarios that simply keep us on earth and prevent future progress are a serious risk to human potential. Creating a culture or government, or being hit with a disaster, that delays extraterrestrial expansion by 500 or 1,000 years massively decreases our chances of achieving that expansion at all.
If we reduce our overall risk estimate to a 1 in 10 chance of destroying ourselves each century (a 9 in 10 chance of survival), we can see how these risks compound. If we wait five centuries for interplanetary/interstellar expansion (which will diversify our risk and reduce our chance of dying), our chance of survival goes from 90% to (.9⁵) 59%. If we wait 1,000 years, it goes to (.9¹⁰) 35%. The clock is ticking (using the actual 1 in 6 estimate, those probabilities of survival fall to 40% and 16%, respectively).
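The compounding above is just repeated multiplication. A minimal sketch (the function name is mine):

```python
def survival_odds(centuries, risk_per_century):
    """Chance of making it `centuries` more centuries, assuming an
    independent chance of self-destruction in each one."""
    return (1 - risk_per_century) ** centuries

# Rounded 1-in-10 per-century estimate:
print(survival_odds(5, 0.10))    # ~0.59
print(survival_odds(10, 0.10))   # ~0.35

# Ord's 1-in-6 estimate:
print(survival_odds(5, 1 / 6))   # ~0.40
print(survival_odds(10, 1 / 6))  # ~0.16
```

The takeaway: every extra century spent earth-locked multiplies in another factor of (1 − risk), so delay is itself a risk.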
Right now we are in a potentially singular moment of opportunity. We have the means, the peace, and the wealth to achieve interplanetary expansion as a step towards interstellar expansion, and no time to waste. The next article in this series will be a (hopefully much shorter) discussion of what that could look like, and just how feasible it is.