Personal information is a valuable asset and a key area of national security. The growing importance of information at the turn of the 20th and 21st centuries made its protection necessary, and with the digitization and globalization of information, that protection took on a new dimension. In modern times, dominated by the Internet, which has become the main source of information for many, it is difficult to distinguish real information from ‘fake news’.

However, digitization did not stop at generating false information; fake videos, called deepfakes, appeared as well. A deepfake is a recording prepared with the use of artificial intelligence in which the face of a specific person can be inserted into any video material. The term can also refer to other digital materials produced using artificial intelligence algorithms. The first counterfeit videos began to appear at the end of 2017, although the exact date the technique emerged is unknown. While deepfakes seemed fun and easy to identify at first, they became a problem over time. They have been linked to both politics and financial fraud, which can have a huge impact on society and our identity.

How is a deepfake made?

To create a convincing video, you need thousands of photos and videos of the target person on which the deepfake will be based. Knowledge of deep learning algorithms and face mapping is also necessary. Using these data and algorithms, a computer program learns to imitate the person’s voice, facial expressions, movements, mannerisms, intonation, and vocabulary. It does this through an artificial neural network that simulates the learning process that takes place in the brain.
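The idea of learning a person’s appearance from many samples can be illustrated with a deliberately tiny sketch. This is not a real deepfake pipeline: faces are stand-in lists of numbers, “training” is just averaging, and all names are illustrative. It only shows the conceptual split that face-swap systems rely on, a shared encoding of expression combined with a person-specific reconstruction:

```python
# Toy sketch of the shared-encoder / per-person-decoder idea behind
# face-swap deepfakes. Faces are lists of floats (stand-ins for pixel
# data); the "model" here is purely illustrative.

def mean_face(faces):
    # "Training": learn a person's average appearance from many samples.
    n = len(faces)
    return [sum(col) / n for col in zip(*faces)]

def encode(face, identity_mean):
    # Shared encoder: strip away identity, keep only the "expression"
    # (the deviation from that person's average face).
    return [x - m for x, m in zip(face, identity_mean)]

def decode(latent, identity_mean):
    # Person-specific decoder: re-apply the target person's identity.
    return [d + m for d, m in zip(latent, identity_mean)]

# Toy training data: several samples each of person A and person B.
faces_a = [[1.0, 2.0, 3.0], [1.2, 2.2, 3.2], [0.8, 1.8, 2.8]]
faces_b = [[5.0, 6.0, 7.0], [5.1, 6.1, 7.1], [4.9, 5.9, 6.9]]

mean_a = mean_face(faces_a)
mean_b = mean_face(faces_b)

# Swap step: encode B's face (keeping B's "expression") and decode it
# with A's identity -- producing "A" performing B's expression.
expression_b = encode(faces_b[0], mean_b)
fake = decode(expression_b, mean_a)
print(fake)
```

Real systems replace the averaging with deep neural networks trained on thousands of samples, which is why so much source material is needed.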

Deepfake and identity theft

As with other types of identity theft, fraudsters use deepfakes to impersonate someone in order to open an account, access one, or access a service. Cybercriminals collect data for identity theft in many ways, including via social media. Identity theft using deepfake technology can take many forms, such as using the victim’s voice to change a bank password or asking subordinates or family for an immediate transfer. Such an event took place in August 2019, when an employee of a British company was tricked into sending cybercriminals £200,000. The victim was convinced that she was talking to her boss and carrying out his order for an urgent transfer to a supplier. The boss’s voice was so well forged (the right accent, tone of voice, and style of speaking) that the employee had no suspicions.

Deepfake and identity verification

Do deepfakes really pose a threat to consumers? Many organizations and companies wonder whether these techniques threaten eKYC processes and whether such videos could be created to spoof identities in customer engagement processes.

Those who use video verification solutions to onboard clients have little to fear from deepfakes. Video verification backed by a specialist, combined with biometric processes that verify identity using advanced techniques, can recognize whether a recording is captured in real time or computer-generated. In addition, thousands of diverse, high-quality video, photo and audio samples are required to train the algorithms. Obtaining such samples from social media alone is almost impossible, and the resulting video quality would be very low.

But let’s assume a video did fool the biometric filters. That is still only one part of the identity verification process: a valid identity document must also be presented. These documents are secured with holograms and many other elements that make them very difficult to counterfeit. The biggest obstacle for a fraudster is convincing the qualified specialist who checks the recording that the video has not been tampered with.

Video verification

In live video verification during a video call, cheating the procedure with deepfakes is practically impossible. AI that supports the process can detect that a video has been computer-generated. During the call, the specialist pays attention to the background and how the person interacts with it, whether light and shadows look realistic, and whether there are any strange disturbances in the recording. With deepfakes, it is also worth paying attention to the appearance of teeth and eyes, as well as details of clothing; these elements are often difficult to animate. Additionally, generating a convincing fake in real time during a live procedure is virtually impossible.
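One of the cues mentioned above, strange disturbances in the recording, can be approximated in software by flagging abrupt frame-to-frame jumps. The sketch below is a minimal, assumed illustration, not a production liveness check: frames are flat lists of pixel intensities, and the threshold and function names are made up for the example:

```python
# Hedged sketch: flag frames that differ abnormally from the previous
# one, a crude stand-in for detecting glitches in a video stream.
# Frames are flat lists of pixel intensities in [0, 1].

def frame_diff(a, b):
    # Mean absolute difference between two frames.
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def flag_disturbances(frames, threshold=0.5):
    # Return indices of frames whose change from the previous frame
    # exceeds the threshold (an illustrative cutoff, not a real one).
    return [i for i in range(1, len(frames))
            if frame_diff(frames[i - 1], frames[i]) > threshold]

# Smooth "video" with one glitchy frame injected at index 2.
frames = [[0.10, 0.20, 0.30],
          [0.12, 0.21, 0.31],
          [0.90, 0.95, 0.05],  # glitch
          [0.14, 0.23, 0.33]]

flags = flag_disturbances(frames)
print(flags)
```

Note that both the jump into the glitchy frame and the jump back out are flagged, so a real system would cluster adjacent flags and combine this cue with others (lighting, eyes, teeth) rather than rely on it alone.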

Legal regulations

We cannot predict how deepfake technology will develop further, but it has clearly gained popularity and will be used more often. According to Technology Review, deepfakes were named among the biggest threats on the web in 2019. Existing legal norms often do not keep up with technological progress, and more and more countries are trying to address the problem. China has introduced government regulations on fake news and deepfakes: material produced with artificial intelligence must be properly labeled, and failure to disclose this is to be treated as a crime. In the United States, sanctions have been introduced for disseminating deepfakes depicting politicians during elections; media coverage and films created as satire or parody are exempt from this ban. In Europe, work on laws regulating artificial intelligence is carried out by a special commission, the European Group on Ethics.

How to deal with deepfakes?

How can we recognize when we are dealing with a joke and when with the manipulation of someone else’s image? What can be done to prevent deepfakes from being used in harmful ways? It is already clear that we cannot prevent the creation of deepfakes or entirely ban their sharing on social media. A promising approach is technology that detects fake videos using artificial intelligence. To feel safe, it is worth trusting identity verification companies that use artificial intelligence and are able to prevent theft and fraud.

