Will Technology Ever End Humanity? The Answer Is More Complicated Than You Might Think

Innovation and Technology
6 min read · Jun 16, 2019


You’ve likely heard versions of this story in pop culture, especially in science fiction films: humans build a technology, the technology grows too powerful, and it destroys its makers. The narrative is a well-worn trope, and one of its most famous examples is HAL from Stanley Kubrick’s 2001: A Space Odyssey. Sensing that its own survival may be in jeopardy, HAL rebels and kills members of the very crew it was built to serve.

By creating technology to solve one problem, humans create another, a kind of tragic-hero storyline. The same pattern can be seen in various technologies of the past 100 years (e.g. cars and the carbon emissions harming the environment), but no recent technology has done anything supposedly on par with HAL dumping humans out into space.

As technology has grown more advanced over the years, many people have brushed these fantasies aside, chalking them up to delusional and unrealistic expectations of what the world will eventually look like. The idea of humanity going extinct at the hands of one of its own creations feels like a very far-off fantasy. The philosopher Nick Bostrom, however, thinks we might be closer than we realize.

Known worldwide for his simulation argument, which contends that humanity may well be living inside a simulation, Bostrom has spent the past 14 years focused on hypothesizing about humanity’s future, asking whether the way we currently live can only bring about our destruction.

These might sound like wild theories reserved for conspiracy theorists and fans of The Matrix, but there may be something to them. After all, Bostrom isn’t speculating from his mother’s basement, but from the University of Oxford, where he founded the Future of Humanity Institute in 2005.

At the center of many of his theories about the future of humanity is something called the “vulnerable world hypothesis.” The idea echoes what HAL represented in 2001: A Space Odyssey: humanity’s continual technological developments will keep being beneficial until, abruptly, they aren’t. Bostrom believes the transition from helpful to harmful could be steep, arriving suddenly and with little warning.

In a recent interview with Wired, he explained the idea with the following analogy: if humanity’s technological discoveries are like pulling balls from an urn, so far we have pulled only white (good) balls. But because the urn holds only a limited number of them, the supply of purely beneficial discoveries is running out, and eventually humanity will pick a black (bad) ball that brings disaster with it.

The theory doesn’t describe a purely black-and-white situation, either. Bostrom also allows for balls that are mixtures of the two, bringing roughly as much harm as they do benefit.
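Bostrom doesn’t put numbers on the analogy, but a toy simulation can make the intuition concrete. The Python sketch below uses entirely made-up figures (the probability of a black ball and the number of draws are illustrative assumptions, not anything Bostrom specifies) to show why even a rare black ball becomes hard to avoid over many draws.

```python
import random

# Toy Monte Carlo sketch of Bostrom's urn analogy.
# All numbers are illustrative assumptions: we assume each new
# technology ("draw") has a small, fixed chance of being a black ball.
P_BLACK = 0.001        # assumed chance that any single draw is a black ball
DRAWS_PER_RUN = 2000   # assumed number of technologies a civilization discovers
NUM_RUNS = 10_000      # number of simulated civilizations

def civilization_survives() -> bool:
    """Return True if no black ball is drawn in one simulated run."""
    return all(random.random() > P_BLACK for _ in range(DRAWS_PER_RUN))

survivors = sum(civilization_survives() for _ in range(NUM_RUNS))
print(f"Runs with no black ball: {survivors / NUM_RUNS:.1%}")
# With these made-up numbers, roughly (1 - 0.001)^2000 ≈ 13.5% of runs
# avoid a black ball: a rare black ball becomes likely given enough draws.
```

The point of the sketch isn’t the specific percentage, which depends entirely on the assumed inputs, but the shape of the argument: a per-draw risk that looks negligible compounds across draws.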

When asked for an example of a black ball humanity might pull soon, Bostrom points to weapons of mass destruction based on biology as one of humanity’s first steps toward something truly harmful, because biology doesn’t allow for the kinds of security and regulation that conventional engineering typically does.

That might seem like an outlandish distinction when you’re already dealing with weapons of mass destruction, but it makes sense. An atomic bomb is incredibly destructive, yet it is activated and monitored by humans. A biological weapon of mass destruction, by contrast, could slip out of human control quickly, for instance by replicating and spreading on its own.

Bostrom also pointed out that while many people involved with the Manhattan Project later became denuclearization activists, there is no parallel in bioscience to suggest the same kind of conscious effort will be made in the future.

Though it seems probable that humans will keep collaborating to create the best-case scenarios for our survival, it may take only one person to undo all the progress technology has achieved. Bostrom worries that if we make destruction too easy, it becomes only a matter of time before someone with a destructive drive causes immense damage to humanity.

Global warming is another example of a ball between white and black: countless people, each playing a seemingly insignificant role, collectively contribute to the same kind of global risk.

One concern about Bostrom’s theory is that randomly picking balls from an urn might be a poor analogy for scientific discovery. After all, humans dedicate immense amounts of time to deliberate research and innovation. Bostrom’s response is that even so, we can never know in advance what a line of research will eventually produce. Who would have predicted that the internet’s arrival would lead to the iPhone? Or that the iPhone’s Siri artificial intelligence (AI) would evolve into Amazon Alexa devices that reportedly listen to users all of the time?

Technology always brings consequences with it, in an almost unpredictable pattern that accelerates the more we feed it. Bostrom isn’t saying that technology will someday suddenly turn entirely bad and destroy humans, but that the situation will grow more and more complicated as more and more technologies are made. As we discover loopholes in earlier technologies, we build solutions that carry loopholes of their own, and the cycle continues.

Bostrom’s beliefs aren’t drawn from hypothesis alone. He thinks we can look back at history and find many moments when technology nearly brought about our destruction. He cites the Cold War as one, a period when we came close to nuclear Armageddon and avoided it not through any agreed-upon restraint, but through luck.

At the end of the day, Bostrom’s theory doesn’t mean we’re all doomed and powerless to stop it. He doesn’t even claim for certain that there are black balls in the hypothetical urn, since we can’t know the future. He does think, however, that the only responsible way to face the dilemma is to prepare for humanity eventually pulling one out. We can’t halt technological advancement, and we can’t stop everyone who might use technology for destructive purposes, but we can develop policies and regulations that prepare us for the possibility.

Bostrom is also upfront about these solutions, conceding that they won’t be easy to implement. He wants to avoid having regulation and protection slide into a police or surveillance state, and instead imagines ways in which civilians could help regulate the system themselves. His go-to example is everyone wearing a camera-equipped tag, something he dubs a “freedom tag.” The name is meant to underline that the concept is inherently problematic, and that no solution is truly without faults. He doesn’t claim it would work in our modern day and age, but hypotheticals like it raise good questions about what we’re aiming for. Is surveillance really the only alternative to destruction?

One of the most important parts of stopping technology from becoming a threat is having multiple points at which we can intervene if something goes wrong. As with biological weapons of mass destruction, situations that could unleash havoc are easier to manage when there are multiple review processes along the way: a committee, stricter regulations, or simply a group of people dedicated to stopping malpractice before it becomes a problem.

At the root of all these solutions is the fact that the potential for human error can never be eliminated. Still, staying well informed about the potential dangers of technology may be the key to saving ourselves, and it is certainly better than doing nothing at all.
