Stop Worrying about Asteroids. Existential Risks are Looming

Neil Raden
6 min read · Jul 14, 2021

In a paper “Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards,” Nick Bostrom describes a particularly frightening outlook for the survival of humankind. From the Abstract:

“Because of accelerating technological progress, humankind may be rapidly approaching a critical phase in its career. In addition to well-known threats such as the nuclear holocaust, the prospects of radically transforming technologies like nanotech systems and machine intelligence present us with unprecedented opportunities and risks. Our future, and whether we will have a future at all, may well be determined by how we deal with these challenges.”

I suspect that if you surveyed people about what could wipe out mankind, beyond asteroids and nuclear holocaust, the answers would be scattered all over the map.

“In radically transforming technologies, a better understanding of the transition dynamics from a human to a “posthuman” society is needed. Of particular importance is to know where the pitfalls are: how things could go terminally wrong. While we have had long exposure to various personal, local, and endurable global hazards, this paper analyzes a recently emerging category: existential risks. These are threats that could cause our extinction or destroy the potential of Earth-originating intelligent life. Some of these threats are relatively well known, while others, including some of the gravest, have gone almost unrecognized. Moreover, existential risks have a cluster of features that make ordinary risk management ineffective. A final section of this paper discusses several ethical and policy implications. A clearer understanding of the threat picture will enable us to formulate better strategies.”

Whoa.

So where do the drivers of our economies fit into this? According to Bostrom, both government and the private sector need to be aware of these often slim probabilities. They should envision how their products and services can be applied to thwart or minimize these threats, and take care not to cause them. In the mad rush to Artificial General Intelligence, a poorly programmed superintelligence would be a case of “we have met the enemy, and he is us.”

It’s a long paper, but I’ll give the salient points. Let us start with the definition:

Existential risk — One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.

Bostrom notes that a species-destroying comet or asteroid impact, though extremely rare, was the only existential risk until the mid-twentieth century, and certainly not one that was within our power to do anything about.

So the first man-made existential risk was obviously the buildup of the nuclear arsenals of the U.S. and the Soviet Union. When asked the probability of nuclear war, JFK reportedly put it at “somewhere between one out of three and even.”

To classify existential risks, Bostrom uses four categories:

Bangs — Earth-originating intelligent life goes extinct in relatively sudden disaster resulting from either an accident or a deliberate act of destruction.

Crunches — Humans’ potential to develop into post-humanity is permanently thwarted, although human life continues in some form.

Shrieks — Some form of post-humanity is attained, but it is an extremely narrow band of what is possible and desirable.

Whimpers — A posthuman civilization arises but evolves in a direction that leads gradually but irrevocably to either the complete disappearance of the things we value or to a state where those things are realized to only a minuscule degree of what could have been achieved.

Have you ever considered any of these, other than Bangs?

Some of the causes for Bangs are amusing. I won’t describe them all:

· Nanotechnology deliberately used for the destruction of humanity

· Worldwide Nuclear War

· Simulation shutdown: This is my favorite one. Or is it? In a future with effectively unlimited computing power, people run fine-grained simulations of past human civilizations. The simulation is switched off (or someone trips over the power cord) and we disappear. Or, put another way, all of our minds are just simulations.

· Genetically Engineered Biological Agents

· Badly programmed superintelligence that goes haywire

· Something Unforeseen: gradual loss of human fertility and various religious doomsday scenarios

· Physics Disaster: I like this one. A particle accelerator creates a black hole that consumes the planet, or triggers the decay of our vacuum state into a “true” vacuum, producing an expanding bubble that would sweep through the galaxy and beyond, tearing all matter apart as it proceeds.

· Naturally Occurring Pathogens

· Asteroid or Comet Impact

· Runaway Global Warming

There is far too much material in the paper to summarize here, but it is a pretty easy read. Scary. Annoying. One section, about retaining a last-resort readiness for preemptive action, is a little bit Dr. Strangelove. Because negotiations between nation-states are often unsuccessful, there is the possibility that a powerful nation or coalition may need to act unilaterally and preemptively, possibly with nuclear force.

Finally, he poses an example in which advanced nanotechnology has just been developed in a leading lab: a device that can build “an extensive range of three-dimensional structures — including rigid parts — to atomic precision. Given a detailed specification of the design and construction process, some feedstock chemicals, and a supply of energy, it can be replicated.” Naturally, the design plans are already circulating on the Dark Web, and somebody would then build and release nanobots capable of destroying the biosphere. What should be done?

There is no question that the technology would be acquired by “rogue nations,” hate groups, and perhaps eventually lone psychopaths. A coalition of nations might try to persuade the bad actors to halt proliferation of the device, but that would surely fail, and the powerful nations would be obliged to use force, or the threat of force, to eliminate the danger. Should they take action to eliminate the threat?

A preemptive strike on a sovereign nation is barely conceivable. Still, in the extreme case he outlines, where a Dr. Evil is about to unleash a device that would destroy the world and a failure to act would with high probability lead to existential catastrophe, it is a move that must be taken. In other words, reduce the nation to ashes, probably unleashing counterattacks from adversaries.

Bostrom writes, “Whatever moral prohibition there typically is against violating national sovereignty is overridden in this case by the necessity to prevent the destruction of humankind.” This is an ethical dilemma that almost no one thinks about.

We should hopefully never be placed in a situation where this becomes necessary. But the storyline here, like it or not, is that we have to make room in our moral and strategic thinking for this situation.

We have to develop universal recognition of the ethical issues before a scenario like this appears. Without solid public support, democracies will find it difficult to act decisively. Waiting is not an option, because waiting might itself be the end.

I need to point out that Nick Bostrom is not a science fiction writer. He is a philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, superintelligence risks, and the reversal test. In 2011, he founded the Oxford Martin Programme on the Impacts of Future Technology, and he is the founding director of the Future of Humanity Institute at Oxford University.

This isn’t funny. The ethics of A.I. are still in their infancy and have not expanded much beyond machine learning bias. We are not ready for machines that understand context, morals, and consequences, but we’re rushing forward anyway. And we cannot prevent bad actors, or merely stupid or greedy ones, from destroying the world. Don’t forget, some people have been willing to take that chance before: when the first atomic bomb was detonated at Alamogordo, the scientists could not entirely rule out that it would set off an uncontrolled chain reaction and burn off the atmosphere.

--

Neil Raden

Consultant, Mathematician, Author; focused on Analytics, Applied AI Ethics, and Data Architecture