
What Is the Technological Singularity and Why Are Bill Gates and Elon Musk Afraid of It?

This potential doomsday tech scenario could be just around the corner

Bill Gates and Elon Musk

Photos: Flickr-Ministerie van Buitenlandse Zaken, NORAD and USNORTHCOM

LatinAmerican Post | Juan Manuel Londoño


How far are we from creating true artificial intelligence? According to experts on the subject, we could see this technology, which for now exists only in science fiction, before the end of this century.

In a 2012 study that surveyed more than 550 artificial intelligence researchers, 50% of the participants said we could see this technology by 2040, and 90% said it would arrive by 2076. Ten percent of those interviewed even said that by 2030 we would already have the first true artificial intelligence.

And while it may sound like an exciting prospect, for some of the world's smartest people the advent of artificial intelligence spells trouble for humanity. This is due to the concept of the "technological singularity." What does this term mean? We explain it below.

Technological Singularity: the End of the Human Age

In simple terms, the technological singularity is a point in time at which technological development becomes uncontrollable and irreversible, resulting in unpredictable changes for humanity. In theory, we would reach this point when an artificial intelligence capable of improving itself is created. This artificial intelligence would create progressively more intelligent versions of itself until it surpassed collective human intelligence, reaching capacities that human beings would be unable to predict.

Some of the world's smartest people fear that this scenario could be disastrous for humanity. The late Stephen Hawking, for example, once warned that "the development of full artificial intelligence could be the end of the human race." "Humans, who are limited by slow biological evolution, could not compete and would be overtaken," he explained to the BBC.

Other figures such as Bill Gates and Elon Musk have also raised concerns about the prospect of artificial intelligence. "I am one of those who is concerned about superintelligence. First, machines will do a lot of work for us and they won't be super smart. That should be positive if we manage it well. A few decades after that, they will be smart enough to be a concern," Gates wrote in a Reddit forum in 2015. For his part, Musk has asserted that the development of artificial intelligence "is the greatest existential threat to humanity."


What Is the Problem?

The fundamental problem with an artificial intelligence superior to humans is that this new intelligence would not necessarily share our society's values or goals. This means it might see our needs as irrelevant, at best, or see us as competition for its survival on the planet, at worst.

However, as we have mentioned several times in this article, the biggest cause for concern is uncertainty. A sufficiently powerful artificial intelligence would be capable of making changes on a global scale that we could not predict, for reasons we would not be able to discern. It would mean the loss of our position at the top of the evolutionary chain.
