FBI Warns of Increasing Threat: AI Deepfakes and Sextortion Cases Surge
In a recent Public Service Announcement (PSA), the Federal Bureau of Investigation (FBI) cautioned the American public about a rise in sextortion and online harassment facilitated by artificial intelligence (AI) deepfake technology. Malicious actors are exploiting AI algorithms to manipulate digital media, creating realistic deepfakes that they use to target unsuspecting victims.
Deepfakes are synthetic content generated through AI and machine learning techniques, capable of making individuals appear to say or do things they have never done. These deceptive creations have become alarmingly authentic, leading to severe consequences for those targeted.
The FBI has been receiving reports from victims, including minors and non-consenting adults, whose social media content has been maliciously altered into explicit material. These doctored images and videos often circulate widely on social media platforms or end up on pornographic websites.
According to the PSA, malicious actors send the doctored material directly to victims for sextortion or harassment, while in other cases victims only discover it themselves on the internet. “Once circulated, victims can face significant challenges in preventing the continual sharing of the manipulated content or removal from the internet,” the FBI emphasized.
With readily available tools such as Adobe’s Photoshop, which now incorporates generative AI capabilities, and image generators like OpenAI’s DALL-E, manipulating images has become increasingly accessible and convenient.
Neil Sahota, a United Nations AI adviser, highlighted the severity of the issue, stating, “We hear the stories about the famous people, but it can actually be done to anybody. And deepfake actually got started in revenge porn. You really have to be on guard.”
Sahota further advised individuals to remain vigilant and look for irregularities in videos and audio that may indicate manipulation. Unusual body language, odd shadowing, or inconsistencies in speech patterns are red flags signaling potential deepfake content.
The FBI has noticed a surge in victims reporting such crimes. Perpetrators typically demand payment in money or gift cards, or coerce victims into providing genuine sexually themed images or videos. They also intimidate victims by threatening to share the manipulated content with the victim’s friends and family on social media.
The FBI clarified that sextortion, which involves extorting money or sexual favors from victims through threats of public exposure, violates several federal criminal statutes. Malicious actors are primarily motivated by the desire to coerce victims into providing more illicit content, to harass them, or to extract as much money as possible.
The FBI’s warning urges individuals to exercise caution online, emphasizing the importance of being mindful when posting or sharing personal photos, videos, and information via social media, dating apps, and other online platforms.
“Advancements in content creation technology and the widespread availability of personal images online provide new opportunities for malicious actors to identify and target victims. This exposes individuals to potential embarrassment, harassment, extortion, financial losses, or prolonged victimization,” the FBI stated.
The agency’s message serves as a call to action, encouraging the public to remain vigilant and take necessary precautions to safeguard their online presence and personal information from these emerging threats.