The Traps of Artificial Intelligence: How Not to Fall for the 'Deepfake'?

From memes to coups, the uses of 'deepfake' and altered images are endless. To avoid fraud or 'fake news,' it is essential to know how to identify them.

Swapping faces into an original piece of art using a neural network

Photo: Stephen Wolfram

LatinAmerican Post | Julián Gómez



Just as Kendrick Lamar made the music video for "The Heart Part 5" go viral with a 'deepfake,' this technological advance has also been used in harmful ways. This week it was learned that the British 'gamer' known as Sunpi had to pay 500 GBP to have a pornographic video taken down from the internet that showed her face superimposed with this Artificial Intelligence.

As with most AI advances, many entities are calling for regulation of the 'deepfake,' the falsification of audiovisual content showing people speaking or acting. Besides fueling 'fake news,' the technique is an ideal tool for fraud, extortion, and cybercrime. This is why governments such as the UK's penalize those who create pornography with it without consent.

Many people use this technique for memes or humorous images. One example is the Pope wearing a modern coat more worthy of Europe's great catwalks than of his outfits in the Vatican. Others, however, use it to spread 'fake news,' such as the viral photograph of Donald Trump in handcuffs. That image began as an experiment and circulated amid the accusation the former president of the United States faced over a payment to a porn actress during the 2016 elections.

How to Recognize a 'Deepfake'?

As a large part of the population is susceptible to falling for 'deepfakes,' several tricks allow you to detect them without being an expert. It is crucial to distinguish the real from the fake: Ali Bongo, president of Gabon, after being out of the public eye for some time due to his health, reappeared in a video that many classified as a 'deepfake.' The uncertainty contributed to a coup attempt, and the president later clarified that his appearance was unusual because he had suffered a stroke.

One of the essential tricks for detecting a 'deepfake' is to watch for artificial movements in the person on screen, easiest to spot in a lack of blinking or eye movement. You should also look at the person's skin tones, shadows, and lighting. Since the 'deepfake' specialty is the image, it also helps to listen to the audio in videos: pay attention to whether the intonation matches the speaker's appearance or whether the volume sounds off.
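The blinking cue above can even be checked programmatically. A common approach in the research literature is the Eye Aspect Ratio (EAR): from six landmark points around each eye, it measures how open the eye is, and a video whose EAR almost never dips is suspicious. The sketch below assumes the landmarks have already been extracted by a face-landmark library such as dlib or MediaPipe (not shown); the coordinates and threshold are illustrative.

```python
import math

def eye_aspect_ratio(eye):
    """Eye Aspect Ratio from six (x, y) landmarks ordered p1..p6
    around the eye: (|p2-p6| + |p3-p5|) / (2 * |p1-p4|).
    A blink drives the EAR toward zero; an unblinking deepfake
    face keeps it high frame after frame."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

def blink_count(ear_series, threshold=0.21):
    """Count blinks in a per-frame EAR series: each run of frames
    below the threshold counts as one blink. A near-zero count over
    many seconds of footage is a deepfake warning sign."""
    blinks, below = 0, False
    for ear in ear_series:
        if ear < threshold and not below:
            blinks, below = blinks + 1, True
        elif ear >= threshold:
            below = False
    return blinks
```

For example, landmarks of a wide-open eye yield an EAR around 0.3 or higher, while a closed eye falls well under 0.2, so thresholding the per-frame series makes missing blinks measurable rather than a matter of squinting at the screen.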

Other tricks for detecting it lie in body proportions and unusual postures. With faces, the forehead and cheeks are usually key, but the lips can also give it away when the person speaks.

With still images, detection is more complex, but the same tricks apply: skin tones, shadows, and the proportions of each person's body parts. It is also worth checking whether the images carry watermarks. As an extra resource, you can turn to a detector of images produced by Generative Adversarial Networks (GANs), which will try to establish whether the image was generated with artificial intelligence.


Where Does the 'Deepfake' Come From?

In 1997 came the first antecedent of what we now know as the 'deepfake.' It arose from the Video Rewrite program, in which its creators, Christoph Bregler, Michele Covell, and Malcolm Slaney, used archive footage to make a person appear to pronounce words they had not said. Over the years, this technology has advanced and been perfected to the point of making the difference between a real video and one altered by artificial intelligence almost undetectable.

2014 was a critical year for this technology, with the creation of the Generative Adversarial Network (GAN). This architecture pits two algorithmic systems against each other: one, the generator, fabricates images, while the other, the discriminator, detects the fakes, pushing the generator ever closer to a convincing result.
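The adversarial tug-of-war described above can be sketched in miniature. The toy below is not a real GAN (those are deep neural networks trained by backpropagation on images); it is a one-dimensional illustration of the idea: a "generator" with a single parameter tries to produce samples the "discriminator" scores as real, while the discriminator keeps refining its own model of the real data. All names and numbers are illustrative.

```python
import random

def train_toy_gan(real_mean=5.0, steps=5000, lr=0.01, seed=0):
    """Toy 1-D adversarial loop: real data are noisy samples around
    real_mean; the generator's only parameter g is the mean of its
    fakes. Each step, the discriminator updates its running estimate
    of "real," and the generator nudges g in whichever direction
    makes its fakes score higher."""
    rng = random.Random(seed)
    g = 0.0        # generator parameter: mean of its fake samples
    d_mean = 0.0   # discriminator's running estimate of the real data
    for t in range(1, steps + 1):
        real = real_mean + rng.gauss(0, 1)  # a "real" sample
        fake = g + rng.gauss(0, 1)          # the generator's attempt
        # Discriminator update: refine its model of what "real" looks like.
        d_mean += (real - d_mean) / t
        # Discriminator score: higher means the sample looks more real.
        score = lambda x: -abs(x - d_mean)
        # Generator update: finite-difference hill climb on its score.
        if score(fake + lr) > score(fake):
            g += lr
        elif score(fake - lr) > score(fake):
            g -= lr
    return g
```

After training, the generator's parameter ends up near the real mean: its fakes have become statistically hard to tell apart from the real samples, which is exactly the equilibrium a full-scale GAN aims for with images instead of numbers.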

The danger of irresponsible or harmful use lies in how it can alter the perception of reality. Its impact is already so great that even the truth is doubted, as in the case of the president of Gabon. The tool also holds power over the reputations of public figures through pornography, and governments' restrictive measures have only just begun.