Guest Post: High Risk, Low Reward: A Challenge to the Astronomical Value of Existential Risk Mitigation


Written by David Thorstad, Global Priorities Institute; Junior Research Fellow, Kellogg College

This post is based on my paper “High risk, low reward: A challenge to the astronomical value of existential risk mitigation,” forthcoming in Philosophy and Public Affairs. The full paper is available here, and I have also written a blog series about the paper here.

Derek Parfit (1984) asks us to compare two scenarios. In the first, a war kills 99% of all living humans. This would be a great catastrophe – far beyond anything humanity has ever experienced. But human civilization could, and likely would, be rebuilt.

In the second scenario, a war kills 100% of all living humans. This, Parfit urges, would be a far greater catastrophe, for in this scenario the entire human civilization would cease to exist. The world would perhaps never again know science, art, mathematics or philosophy. Our projects would be forever incomplete, and our cities ground to dust. Humanity would never settle the stars. The untold multitudes of descendants we could have left behind would instead never be born.

This thought has driven many philosophers to emphasize the importance of preventing existential risks, risks of catastrophes involving “the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development” (Bostrom 2013, p. 15). For example, we might regulate weapons of mass destruction or seek to reduce what some see as a risk of extinction caused by rogue artificial intelligence.

Many philosophers think two things about existential risk. First, it is not only valuable, but astronomically valuable to do what we can to mitigate existential risk. After all, the future may hold unfathomable amounts of value, and existential risks threaten to reduce that value to naught. Call this the astronomical value thesis.

Second, increasingly many philosophers hold that humanity faces high levels of existential risk. In his bestselling book, The Precipice, Toby Ord (2020) puts the risk of existential catastrophe by 2100 at one in six: Russian roulette. Attendees at an existential risk conference at Oxford put existential risk by 2100 at nearly one in five (Sandberg and Bostrom 2008). And the Astronomer Royal, Martin Rees (2003), puts the risk of civilizational collapse by 2100 at fifty-fifty: a coinflip. Let existential risk pessimism be the claim that per-century levels of existential risk are very high.

Surely the following is an obvious truth: existential risk pessimism supports the astronomical value thesis. If we know anything about risks, it is that it is more important to mitigate large risks than it is to mitigate small risks. This means that defenders of the astronomical value thesis should be pessimists, aiming to convince us that humanityโ€™s situation is dire, and opponents should be optimists, aiming to convince us that things really are not so bad.

In my paper, I argue that every word in the previous paragraph is false. At best, existential risk pessimism has no bearing on the astronomical value thesis. Across a range of modelling assumptions, matters are worse than this: existential risk pessimism strongly reduces the value of existential risk mitigation, often strongly enough to scuttle the astronomical value thesis singlehandedly. (See end notes for examples, and see the full paper for further details).

In the full paper, I explore a range of models and argue that there is only one viable way to reconcile existential risk pessimism with the astronomical value thesis. This is the time of perils hypothesis, on which levels of existential risk are high now, but will soon drop to a permanently low level if only we survive the next few perilous centuries. However, I argue, the time of perils hypothesis is unlikely to be true, so there is likely an enduring tension between existential risk pessimism and the astronomical value thesis.

This tension has important philosophical implications. First, it means that unless more is said, many parties to debates about existential risk may have been arguing on behalf of their opponents. To many, it has seemed that a good way to support the moral importance of existential risk mitigation is to make alarmist predictions about the levels of existential risk facing humanity today, and that a good way to oppose the moral importance of existential risk mitigation is to argue that existential risk is in fact much lower than alarmists claim. However, unless more is said, matters are exactly the reverse: arguing that existential risk is high strongly reduces the value of existential risk mitigation, whereas arguing that existential risk is low strongly increases the value of existential risk mitigation.

Second, there has been a wave of recent support for longtermism, the doctrine that positively influencing the long-term future is a key moral priority of our time. When pressed to recommend concrete actions we can take to improve the long-term future of humanity, longtermists often point to existential risk mitigation. By the astronomical value thesis, longtermists hold, existential risk mitigation is very important. But this paper suggests an important qualification, since many longtermists are also pessimists about existential risk. As we have seen, existential risk pessimism may well be incompatible with the astronomical value thesis, in which case the value of existential risk mitigation may be too low to provide good support for longtermism.

End notes

The core modelling claim of the paper is that (1) at best, existential risk pessimism is irrelevant to the astronomical value thesis, and that (2) in most cases existential risk pessimism tells strongly against the astronomical value thesis. While full technical details are contained in the main paper, here are some models to illustrate claims (1) and (2).

On (1): To illustrate the best case, suppose that humanity faces a constant level of risk r per century. Suppose also that each century of existence has constant value v, if only we live to reach it. And suppose that all existential catastrophes lead to human extinction, so that no value will be realized after catastrophe. Then, it can be shown that the value of reducing existential risk in our century by some fraction f is fv. In this model, pessimism has no bearing on the astronomical value thesis, since the starting level r of existential risk does not affect the value of existential risk mitigation. Moreover, the value of existential risk reduction is capped at v, the value of a single century of human life. Nothing to sneeze at, but hardly astronomical.
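The constant-value model is easy to check numerically. The sketch below is my own illustration of the model described above, not code from the paper; the helper `expected_value` simply sums each century's value weighted by the probability of surviving long enough to reach it. Whatever baseline risk r you plug in, the gain from cutting this century's risk by a fraction f comes out to f·v:

```python
# Illustrative sketch of the constant-value model (my own code, not the
# paper's): risk r per century, each century worth v if reached, and an
# extinction catastrophe forfeits all later value.

def expected_value(risk_at, v=1.0, horizon=10_000):
    """Sum of each century's value, weighted by survival probability.

    risk_at(t) gives the existential risk in century t (t = 0 is ours).
    """
    total, p_alive = 0.0, 1.0
    for t in range(horizon):
        p_alive *= 1.0 - risk_at(t)  # probability of surviving century t
        total += p_alive * v          # value of century t, if reached
    return total

r, f, v = 0.2, 0.5, 1.0  # baseline risk, fractional reduction, century value

baseline = expected_value(lambda t: r, v)
mitigated = expected_value(lambda t: r * (1 - f) if t == 0 else r, v)

gain = mitigated - baseline
print(gain)  # ~ f * v = 0.5, independent of the baseline risk r
```

Re-running with a different r (say 0.01 instead of 0.2) leaves the gain unchanged, which is exactly the irrelevance claim in (1).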

On (2): Making the model more realistic only serves to heighten the tension between pessimism and the astronomical value thesis. For example, suppose that centuries grow linearly in value over time, so that if this century has value v, the next century has value 2v, then 3v and so on. Keep the other modelling assumptions the same. Now, it can be shown that the value of reducing existential risk in our century by some fraction f is fv/r.

In this model, pessimism tells against the astronomical value thesis: if you think that existential risk is now 100 times greater than I think it is, you should be 100 times less enthusiastic about existential risk mitigation. Moreover, the value of existential risk mitigation is capped at v/r. For the optimist, this quantity may be quite large, but not so for the pessimist. For example, if we estimate per-century risk r at 20%, then the value of existential risk mitigation is capped at five times the value of a single century – again, nothing to sneeze at, but not yet astronomical.
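The same numerical sketch adapts to the linear-growth model (again, my own illustrative code rather than the paper's): making century t worth (t+1)·v instead of a constant v, the gain from the same intervention comes out to fv/r, so a higher baseline risk directly shrinks the value of mitigation:

```python
# Linear-growth variant of the end-notes model (my own illustrative
# code): century t is worth (t + 1) * v, risk is r per century, and an
# extinction catastrophe forfeits all later value.

def expected_value(risk_at, value_at, horizon=10_000):
    """Sum of each century's value, weighted by survival probability."""
    total, p_alive = 0.0, 1.0
    for t in range(horizon):
        p_alive *= 1.0 - risk_at(t)      # probability of surviving century t
        total += p_alive * value_at(t)   # value of century t, if reached
    return total

r, f, v = 0.2, 0.5, 1.0  # baseline risk, fractional reduction, base value

linear = lambda t: (t + 1) * v  # this century worth v, the next 2v, ...
baseline = expected_value(lambda t: r, linear)
mitigated = expected_value(lambda t: r * (1 - f) if t == 0 else r, linear)

gain = mitigated - baseline
print(gain)  # ~ f * v / r = 2.5; doubling r would halve the gain
```

Setting f = 1 (eliminating this century's risk entirely) recovers the cap of v/r mentioned above: five centuries' worth of value when r is 20%.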


References

Bostrom, Nick, “Existential risk prevention as global priority,” Global Policy 4.1 (2013): 15–31.

Ord, Toby, The Precipice (New York: Bloomsbury, 2020).

Parfit, Derek, Reasons and Persons (Oxford: Oxford University Press, 1984).

Rees, Martin, Our Final Hour (New York: Basic Books, 2003).

Sandberg, Anders and Bostrom, Nick, “Global catastrophic risks survey,” Technical Report 2008-1 (2008), Future of Humanity Institute.


Rising seas will cut off many properties before theyโ€™re flooded

[Image: a road with a low-lying section under water. Caption: If this road is your only route to the outside world, it might not matter that your house didn't flood. (credit: Maurice Alcorn / EyeEm)]

Climate change produces lots of risks that are difficult to predict. While it will make some events (heatwaves, droughts, extreme storms, etc.) more probable, all of those events depend heavily on year-to-year variation in the weather. So, while the odds may go up, it's impossible to know when one of these events will strike a given location.

In contrast, sea level rise seems far simpler. While there's still uncertainty about just how quickly ocean levels will rise, other aspects seem pretty predictable. Given a predicted rate of sea level rise, it's easy to tell when a site will end up underwater. And that sort of analysis has been done for various regions.

But having a property above water won't be much good if nearby flooding cuts off access to a hospital, a grocery store, electricity, or other services. It's entirely possible for rising seas to leave a property high and dry but uninhabitable, its connections to essential services severed. A group of researchers has analyzed the risk of isolation driven by sea level rise and shown that it's a major contributor to the future risks the US faces.


โŒ