
UN AI Adviser Warns About the Destructive Use of Deepfakes


Neil Sahota, an artificial intelligence (AI) expert and adviser to the United Nations, recently raised concerns about the increasing threat posed by highly realistic deepfakes. In an interview with CTVNews.ca on Friday, Sahota highlighted the risks associated with these manipulated media creations.

Sahota described deepfakes as digital replicas or mirror images of real-world individuals, often created without their consent and used primarily to deceive or mislead others. The rise of deepfakes has already produced viral fake content on a wide range of topics, from political simulations to fabricated celebrity endorsements.

While famous individuals have often been the primary targets, Sahota emphasized that ordinary people are also vulnerable to this form of manipulation. He noted that deepfakes initially gained traction through the distribution of revenge porn, underscoring the need for vigilance.

To identify manipulated media, Sahota advised individuals to pay attention to subtle inconsistencies in video and audio content. Signs to watch out for include unusual body language, odd shadowing effects, and discrepancies in the spoken words. By maintaining a vigilant eye and questioning the authenticity of media, individuals can become better equipped to identify potential deepfake content.

As deepfake technology continues to advance, Sahota’s warnings serve as a reminder of the critical need to exercise caution and skepticism when consuming digital media, as well as the urgent need for proactive measures to address the risks associated with deepfakes.

Not enough

Sahota also argued that policymakers are not currently doing enough to educate the public about the dangers of deepfakes and how to spot them. He recommended implementing a content verification system that would use digital tokens to authenticate media and identify deepfakes.

“Even celebrities are trying to figure out a way to create a trusted stamp, some sort of token or authentication system so that if you’re having any kind of non-in-person engagement, you have a way to verify,” he told CTVNews.ca.

“That’s kind of what’s starting to happen at the UN-level. Like, how do we authenticate conversations, authenticate video?”
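Sahota does not spell out how such a token system would work, but the general idea can be illustrated with a short, hypothetical sketch: a trusted issuer binds a token to the exact bytes of a media file, and anyone holding the matching key can later check whether the file has been altered. The function names and the use of a shared-key HMAC below are assumptions for illustration only; a real-world system would more likely rely on public-key signatures or embedded watermarks.

```python
import hashlib
import hmac
import secrets

# Hypothetical illustration only: a trusted issuer derives a token from the
# media bytes using a secret key; a verifier holding the same key can confirm
# the file has not changed since the token was issued.

ISSUER_KEY = secrets.token_bytes(32)  # placeholder for the issuer's secret key

def issue_token(media_bytes: bytes) -> str:
    """Issuer side: bind a token to the exact contents of a media file."""
    return hmac.new(ISSUER_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify_token(media_bytes: bytes, token: str) -> bool:
    """Verifier side: recompute the token and compare in constant time."""
    expected = hmac.new(ISSUER_KEY, media_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token)

if __name__ == "__main__":
    original = b"...raw video bytes..."
    token = issue_token(original)

    print(verify_token(original, token))                           # True: media untouched
    print(verify_token(b"...manipulated video bytes...", token))   # False: content changed
```

Any edit to the file, even a single altered frame, changes the computed value and causes verification to fail, which is the property a "trusted stamp" of the kind Sahota describes would depend on.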