Since 2019, deepfakes have made their way onto our mobile devices and televisions through social networks and other information channels, thanks to advances in artificial intelligence and machine learning (specifically deep learning).

However, their use is not limited to spreading hoaxes and fake news under the falsified identity of a celebrity or opinion leader: some organizations and companies fear they could also be used to impersonate customers and users in order to commit fraud.

In this article we will explain the deepfake concept, delve into its possible use to falsify identities, and review the anti-fraud techniques that prevent deepfakes from being used as a criminal weapon.

What is a deepfake?

The term deepfake combines the two words it is formed from: “deep” (from deep learning) and “fake”. Deep learning is an artificial intelligence technique that automatically learns models from huge amounts of data.

Static and moving images of a subject are analysed to understand and replicate their gestures and appearance, sometimes with results surprisingly close to the real person. The output is a video that looks real but is entirely computer-generated (CGI).

Thus, we can see Barack Obama making statements about UFOs or Angela Merkel issuing false information about Covid-19.

Deepfakes and identity theft

Many organizations and companies are wondering whether these techniques pose a real risk to their eKYC processes. Given that videos of public figures are created to spread hoaxes and fake news, they ask whether similar videos could be created to falsify identities in everyday customer onboarding processes.

This is where biometric processes come into play: they verify people's identities with advanced techniques and precise mathematical models that meet high-security criteria, and can determine whether the input they receive is a natural person being recorded in real time or a computer-generated video being played back.
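As a minimal sketch (not any vendor's actual method) of one passive liveness heuristic: natural camera input shows continuous small variations between consecutive frames, while a frozen or looped replay tends to repeat near-identical ones. This assumes opencv-python and numpy are available, and the threshold is purely illustrative.

```python
# Illustrative passive liveness heuristic: measure frame-to-frame variation.
# A score near zero suggests frozen or duplicated frames, one plausible
# (but not sufficient) red flag for a replayed clip.
import cv2
import numpy as np

def frame_variation_score(video_path: str, max_frames: int = 150) -> float:
    """Return the mean per-pixel difference between consecutive frames."""
    cap = cv2.VideoCapture(video_path)
    prev, diffs = None, []
    while len(diffs) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None:
            diffs.append(cv2.absdiff(gray, prev).mean())
        prev = gray
    cap.release()
    return float(np.mean(diffs)) if diffs else 0.0

score = frame_variation_score("capture.mp4")  # hypothetical file name
print("suspiciously static input" if score < 0.5 else "natural motion detected")
```

Real systems combine many such signals with trained models; a single heuristic like this is only a starting point.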

In addition, experts point out that training the algorithms that later generate a fake video of a person requires thousands or tens of thousands of high-quality, diverse video, photo and audio samples of that person. It is almost impossible to obtain such samples for someone who is not exposed to the media, and even if moderately significant data about an “anonymous” person were obtained, the result would probably be very poor.

Key elements of anti-deepfake security

ID document

Although it seems increasingly feasible to generate credible deepfake videos in which a person's specific actions and words can be scripted, that alone is not enough to “hack” or “cheat” a high-security biometric facial recognition system.

Still, let us hypothetically assume that deepfake techniques could bypass the biometric recognition filter. Even then, the face alone is not enough: before showing one's face to the camera and smiling to complete the identity verification process, the user must present a valid, original ID card or document that has not been altered or modified.

Despite the fact that the falsification of identity documents goes back a long way, there are two characteristics that protect them against forgery in online identity verification processes:

First, current identity documents and passports are increasingly tamper-proof and include dozens of “hidden” elements that make them very difficult to counterfeit. Second, the star feature is the hologram: although not all online identity verification processes take this aspect into account, the hologram is key to verifying the integrity and originality of the ID document.
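As an illustration only (not a production document check), one way the hologram can be exploited in video is that a genuine hologram shimmers and changes colour as the document tilts, whereas a flat copy, a screen replay or a synthetic overlay stays almost constant. The sketch below assumes opencv-python and numpy; the hologram region coordinates are hypothetical and would in practice come from document-template detection.

```python
# Illustrative hologram check: measure temporal colour variation inside the
# (assumed) hologram region across the frames of a document-capture video.
import cv2
import numpy as np

def hologram_variation(video_path: str, region=(100, 50, 80, 80)) -> float:
    """Mean temporal variance of hue inside the (x, y, w, h) hologram region."""
    x, y, w, h = region
    cap = cv2.VideoCapture(video_path)
    hues = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        patch = frame[y:y + h, x:x + w]
        hsv = cv2.cvtColor(patch, cv2.COLOR_BGR2HSV)
        hues.append(hsv[:, :, 0].astype(np.float32))
    cap.release()
    if len(hues) < 2:
        return 0.0
    return float(np.var(np.stack(hues), axis=0).mean())

# A near-zero score means the "hologram" never shimmers, which is consistent
# with a photocopy, a screen replay, or a synthetic overlay.
print(hologram_variation("document_capture.mp4"))  # hypothetical file name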

Streaming Video

We have already warned about the lack of security in selfie-based identity verification solutions, which in addition do not comply with the standard regulations on digital identification. Unlike streaming video identification, where timestamps and the validity of the identification are checked in real time, other solutions do not provide the level of security necessary for this type of procedure.

Committing fraud with a deepfake would be infeasible in a real-time video identification process, since the artificial intelligence behind the method would recognize that a pre-recorded video is being played back. In addition, the timing constraints of each step of the process, and the checks applied at each one, make it impossible for a previously generated video to keep up with a real-time video identification.
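A minimal sketch of this timing idea, using only the Python standard library: the server issues a random, short-lived challenge that must be satisfied within a narrow window, so a video rendered in advance cannot match it. The gesture names and time limits are illustrative assumptions, not any specific provider's protocol.

```python
# Illustrative challenge-response timing check for real-time identification.
import secrets
import time

CHALLENGES = ("smile", "turn_head_left", "turn_head_right", "blink_twice")
MAX_RESPONSE_SECONDS = 10  # illustrative window

def issue_challenge() -> dict:
    """Create an unpredictable challenge bound to a session nonce and a timestamp."""
    return {
        "nonce": secrets.token_hex(16),
        "action": secrets.choice(CHALLENGES),
        "issued_at": time.time(),
    }

def verify_response(challenge: dict, nonce: str, observed_action: str) -> bool:
    """Accept only if the nonce matches, the action matches, and it arrived in time."""
    in_time = (time.time() - challenge["issued_at"]) <= MAX_RESPONSE_SECONDS
    return (secrets.compare_digest(nonce, challenge["nonce"])
            and observed_action == challenge["action"]
            and in_time)

# The observed action would come from the video-analysis step of the session.
ch = issue_challenge()
print(verify_response(ch, ch["nonce"], ch["action"]))  # True when answered live
```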

A key element in the video is the background and how the face relates to it. The artificial intelligence applied to the streaming video can reveal that the background is an embedded static image, something common in deepfakes. Lighting is another telling aspect: it may be apparent that a shadow in the room affects the rest of the elements in the video but not the face.
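The following sketch illustrates the background check under stated assumptions (opencv-python with its bundled Haar face detector; arbitrary example thresholds): if the face region shows normal motion while everything around it is pixel-for-pixel static, the background may be an embedded still image, a pattern common in deepfake composites.

```python
# Illustrative comparison of motion inside vs outside the detected face region.
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_vs_background_motion(video_path: str, max_frames: int = 120):
    """Return (mean face-region motion, mean background motion) across frames."""
    cap = cv2.VideoCapture(video_path)
    prev = None
    face_motion, bg_motion = [], []
    for _ in range(max_frames):
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None:
            diff = cv2.absdiff(gray, prev)
            mask = np.zeros_like(diff, dtype=bool)
            for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
                mask[y:y + h, x:x + w] = True
            if mask.any() and (~mask).any():
                face_motion.append(diff[mask].mean())
                bg_motion.append(diff[~mask].mean())
        prev = gray
    cap.release()
    return float(np.mean(face_motion or [0])), float(np.mean(bg_motion or [0]))

# An animated face over a frozen background is a red flag (thresholds illustrative).
face_m, bg_m = face_vs_background_motion("session.mp4")  # hypothetical file name
print("possible static background composite" if face_m > 1.0 and bg_m < 0.1
      else "no obvious background anomaly")
```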

In any case, for risky processes that require high security, a team of qualified agents validates the video in a back-office process, completely eliminating the risk of a deepfake falsifying an identity.

Two-factor authentication

An additional security element is the so-called second factor, which in this case is really a fourth factor, since, as we have seen, three controls already precede it (ID document, face and smile, and real-time timestamps).

Two-factor authentication adds an extra layer of security through different methods, such as an OTP (One-Time Password) sent via SMS, email or another channel, or, for example, a personal PIN or a fingerprint.
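A minimal sketch of the OTP idea, using only the Python standard library: a short-lived numeric code is generated server-side, delivered over a separate channel (SMS, email, app), and verified with an expiry check. The code length and lifetime are illustrative assumptions.

```python
# Illustrative one-time-password issue/verify flow with an expiry window.
import secrets
import time

OTP_LIFETIME_SECONDS = 300  # 5 minutes, illustrative

def generate_otp() -> dict:
    """Create a 6-digit one-time password together with its issue time."""
    return {"code": f"{secrets.randbelow(1_000_000):06d}", "issued_at": time.time()}

def verify_otp(stored: dict, submitted_code: str) -> bool:
    """Accept only an unexpired code, compared in constant time."""
    fresh = (time.time() - stored["issued_at"]) <= OTP_LIFETIME_SECONDS
    return fresh and secrets.compare_digest(stored["code"], submitted_code)

# The code would be delivered to the user out of band before verification.
otp = generate_otp()
print(verify_otp(otp, otp["code"]))  # True while the code is still valid
```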

In conclusion, streaming video identification is a multi-factor process (MFA, Multi-Factor Authentication) that is impractical to attack with deepfakes.

Do deepfakes pose a risk to online identification processes?

The answer is no. As we have seen, high-security real-time video identification processes are equipped with various anti-spoofing systems and techniques and perform liveness tests to prevent any type of fraud or phishing derived from deepfakes or any other method.

In addition, all systems that comply with the standard regulations on digital identity, such as AML5 and eIDAS, are protected against fraud and criminal intent.

Download here the definitive guide on the regulations that set the security standards and discover how to acquire clients in seconds.

What about authentication processes?

Authentication is always a process subsequent to video identification: a user cannot authenticate without having previously registered.

Given that the video identification process cannot be falsified, and that authentication is based on a previously validated video identification, falsifying an authentication with a deepfake is not achievable.
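A minimal sketch of how an authentication step can build on a previously validated identification: the face embedding stored at enrolment is compared with a freshly captured one using cosine similarity. The embedding model (`embed_face`) is a hypothetical placeholder, not any vendor's actual model, and the threshold is an illustrative assumption.

```python
# Illustrative authentication against the embedding stored at enrolment.
import numpy as np

MATCH_THRESHOLD = 0.8  # illustrative; real systems tune this to error-rate targets

def embed_face(image: np.ndarray) -> np.ndarray:
    """Placeholder for a face-embedding network; returns a fixed-size vector."""
    raise NotImplementedError("plug in a real face-embedding model here")

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def authenticate(enrolled_embedding: np.ndarray, live_image: np.ndarray) -> bool:
    """Match a live capture against the embedding stored after video identification."""
    return cosine_similarity(enrolled_embedding, embed_face(live_image)) >= MATCH_THRESHOLD
```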

SmileID, for example, bases its biometric facial recognition and authentication system on the data obtained from VideoID, in addition to integrating all its anti-spoofing systems.

Request a free demo of eID anti-deepfake solutions now.