
The Precipice: Existential Risk and the Future of Humanity – Book Review

Humanity is still in its infancy. At barely 200,000 years old, our brief time on Earth pales in comparison both to the billions of years of cosmic and planetary history that came before us and to the hundreds of millions of years (or longer) that potentially lie ahead of us.

At the same time, while humanity is relatively young, with its greatest discoveries, inventions, and moral progress potentially still ahead of it, we have also reached a threshold at which we hold the power both to cause and to prevent our own extinction.

In The Precipice, Oxford philosopher Toby Ord synthesizes over a decade of research to argue for the urgency of preparing for and reducing the risk of existential catastrophe, defined as any event, natural or human-caused, that would destroy humanity's long-term potential, whether through the extinction of the species or through unrecoverable civilizational collapse. These catastrophes include asteroid and comet strikes, supervolcanic eruptions, climate change, nuclear war, pandemics, unaligned artificial intelligence (AI), and more.

We predictably underestimate these risks because, by definition, they have never occurred; if they had, we would not be here to discuss them (survivorship bias). This creates the impression that they will never happen, or that their probability is close to zero. But according to Ord, not only are existential catastrophes less remote than we might think, we have actually come close to facing them in the recent past. The near misses with deliberate or accidental nuclear war, during the Cuban Missile Crisis, for instance, remind us just how close we have already come.

So these risks, while seemingly remote, are nevertheless real and deserve far more attention than they are given. As Ord points out, the world is currently so oblivious to these risks that it spends more money every year on ice cream than on the research that could help us understand the events that could end our very existence (for example, the Biological Weapons Convention has four employees and an annual budget of about $1.4 million, less than that of the average McDonald's restaurant). This general underestimation of risk, along with the market's undervaluation of public goods, could very well be our undoing.

Having outlined the risks and described what's at stake, Ord proceeds to present the moral case for the prevention of existential catastrophe, which should be fairly clear: the prevention of billions of deaths and the protection of humanity's future potential. Although this is difficult to argue against, it does raise the following question: Why should we care about future generations?

As Ord points out, we should care about those separated from us by time in the same way we care about those separated from us by space: just as geographical location does not make a human life more or less valuable, neither should location in time. Additionally, we have benefited in numerous ways from the work, dedication, and innovations of our ancestors, and we are in a position to pass these benefits on to future generations without causing undue harm and destruction. This is our obligation to the continuation of the human project.

Of course, the philosophy of moral obligation is complex, but to me, the case for our obligation to protect humanity's future potential is far stronger than the case against it, which ultimately betrays a fundamental lack of gratitude, basic decency, and respect for the overall human project. Since we have benefited in countless ways from the inherited culture and discoveries of past generations, we can fulfill our duty by "paying it forward": using our scientific knowledge to protect future generations from existential risk.

The moral case is further solidified by its asymmetrical nature. As Ord writes:

“The case for making existential risk a global priority does not require certainty, for the stakes aren’t balanced. If we make serious investments to protect humanity when we had no real duty to do so, we would err, wasting resources we could have spent on other noble causes. But if we neglect our future when we had a real duty to protect it, we would do something far worse—failing forever in what could well be our most important duty. So long as we find the case for safeguarding our future quite plausible, it would be extremely reckless to neglect it.”

Having established the moral case for the prevention of existential catastrophes, Ord proceeds to analyze the existential risks themselves, what the science tells us about them, the probability of each catastrophe actually occurring, and what our short- and long-term plan of action should be. 

Admittedly, it’s in this part of the book that Ord seems (at times) to be arguing against his prior points by showing us how unlikely the risks really are and how improbable it would be for many catastrophes—even nuclear winter and extreme climate change—to actually wipe out all of humanity. (Even the Black Death, which killed half the population of Europe, didn’t wipe out humanity’s future potential.) And so you’re left with the impression that existential risk, while entirely possible and worthy of our attention, is in fact a fairly remote possibility. 

That is, until Ord greatly exaggerates the dangers of AI, which he takes to represent the single greatest risk humanity faces. If that is true, it is actually good news, because, in my estimation, the prospect of a general AI enslaving humanity is far more remote than that of an asteroid strike.

Let’s review some of the risks in a little more detail.

Starting with natural risks, Ord shows us that even though our scientific understanding is incomplete, our best science, along with our understanding of mass extinction events through analysis of the fossil record, indicates that natural events with the potential to cause mass extinction (asteroid/comet impacts and supervolcanic eruptions) occur only once every million years, or even once every hundreds of millions of years. The probability of humanity facing an existential catastrophe of this sort over the next hundred years is therefore estimated at maybe 1 in 10,000, in other words, very remote. This does not mean that unforeseen risks do not exist, or that we should stop studying volcanoes or asteroids; it only means that we can safely assume that humanity will not go extinct or suffer civilizational collapse from natural causes in the next 100 years.
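To see roughly where an estimate like this comes from, here is a back-of-the-envelope sketch (my own illustration, not a calculation from the book): if extinction-level natural events occur on average once every million years, and that rate is treated as roughly constant, the chance of at least one such event in any given 100-year window works out to about 1 in 10,000.

```python
# Back-of-the-envelope illustration (mine, not the book's): converting a
# long-run event frequency into an approximate probability over one century.
import math

def chance_in_window(mean_years_between_events: float, window_years: float = 100.0) -> float:
    """Probability of at least one event in the window, assuming a constant
    (Poisson) rate of one event per `mean_years_between_events` years."""
    expected_events = window_years / mean_years_between_events
    return 1 - math.exp(-expected_events)

# An extinction-level impact or eruption roughly once per million years:
print(chance_in_window(1_000_000))  # ~0.0001, i.e. about 1 in 10,000
```

The same arithmetic also shows why still rarer events, say once every hundred million years, add almost nothing to the per-century risk.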

Ord next considers anthropogenic existential risks, including nuclear war, climate change, pandemics, and unaligned general AI. Again, it should be pointed out that Ord is considering only existential risk: the permanent wiping out of humanity or of civilization. While the consequences of climate change are likely to be disastrous, Ord is concerned only with the question of whether it would end humanity's future potential. In this respect, climate change likely falls short, as Ord himself details. While there is a possibility of a runaway greenhouse effect, the science suggests that this effect, the one climate scenario that would truly present an existential risk, is unlikely to occur.

This does make you wonder why Ord decided to take this particular perspective. Setting the bar at the end of humanity seemingly downplays very serious risks. Wouldn't it be preferable to view these risks as global risks, catastrophes that could affect the entire planet or very large segments of the human population? I'm not sure what you gain by setting the bar so high. We should be paying attention to all global problems, not just the ones that would permanently wipe us out. Nevertheless, Ord insists, for the purposes of this book, on considering only the risks that could end humanity's entire potential, and climate change and nuclear war are not likely candidates. But then again, neither are engineered pandemics or artificial intelligence, as far as I can tell.

Of course others will disagree, and readers can make their own decisions. But when Ord starts describing AI systems that will use the internet to accumulate power and financial resources for the purpose of enslaving humanity, he has lost me. I'm not interested in reading about highly speculative risks, especially when they overshadow the discussion of more immediate ones like the elimination of jobs by AI and automation. AI, climate change, and nuclear war may not wipe out humanity, but there are plenty of other disastrous scenarios that fall short of extinction and that go undiscussed because they don't meet Ord's end-of-the-world criteria.

The other issue with the book is the assignment of probabilities. With natural risks, this is less of a problem. Since the world has experienced comet strikes, volcanic eruptions, and mass extinction events in the past, we can get a rough estimate of the frequency of such occurrences. And so Ord's claim that there is a 1 in 10,000 chance of humanity suffering a natural existential catastrophe in the next hundred years is reasonable.

But what about events that have never occurred? Ord tells us that there is a 1 in 10 chance that unaligned or malicious AI wipes out humanity or causes civilizational collapse in the next century (a 10 percent chance!). But where does this number really come from? While it's important to be precise, you can't achieve this level of precision simply by assigning a specific number to what you subjectively believe to be the case.

Since we are so far away from any scenario in which AI takes control of humanity, it's impossible to tell what the actual probability is: maybe it's 1 in 10, maybe it's 1 in 10,000, but there are many reasons to think the risk is lower than Ord claims. It's highly likely that if we ever develop the capability to build a general AI system (see Steven Pinker's analysis in his book Enlightenment Now), we will be prudent and capable enough to also build in the appropriate safeguards. Further, there is no more reason to think the AI systems we create will be malicious than to think they will be benevolent. After all, computer code does not have consciousness or inherently evil motivations, outside of the movies. So there is very little reason for me to give this number any credence at all, even if some AI researchers happen to agree with it.

But AI does not need to enslave humanity for us to be worried about it. And that's what's frustrating about this book. We should be discussing the more immediate effects of AI on things like the elimination of jobs, or the effects of climate change on the poorest regions of the world. By setting the bar at extinction-level events that are truly remote, we avoid the conversations about the things that matter now. We should instead reframe these existential risks as global risks and set out to solve them with global solutions. As we learn more about these risks and solve the more immediate and localized problems, the existential risks in these areas should fall as well.


The Precipice: Existential Risk and the Future of Humanity is available on Amazon.com.