Artificial Intelligence (AI) opens up many possibilities. One of them is the ability to create content featuring real people who had no part in its creation.
Imagine watching a video of a popular celebrity doing something amusing, only to find out that it's not real. You might say, “But the video was real! It was him, it was her, or it was them.” Of course, that is what they want you to believe.
So what's going on here? It's a deepfake, and we thought it was time to shed light on the AI deepfake scam. While it's distinct from the recent E-ZPass toll scam we discussed, the two share the commonality of impersonation. But first, let's take a closer look at the term "deepfake," addressing the question: What is a deepfake?
What Is A Deepfake?
A deepfake refers to an image, a video, or an audio file that has been edited or generated using artificial intelligence to misrepresent a real person. It involves the use of deep learning algorithms to produce content that can be deeply damaging to the reputation, safety, and well-being of those depicted.
It includes:
- Face swapping in videos
- Voice cloning
- Audio dubbing and lip-syncing
- Body reenactment
Over the years, and particularly in 2024, the emergence of deepfakes has become a growing concern for many people. The very idea that your videos, images, and audio recordings could be easily altered to create a completely different, convincing narrative is disturbing.
What's more disturbing is the malicious content people create using deepfakes of known individuals, and the consequences that follow. This raises serious ethical questions and highlights the core problem with deepfake scams: they persist to the detriment of their victims.
AI Deepfake Scam Alert
An AI deepfake scam is a scam that involves the use of AI deepfake generators to create images, videos, and audio recordings featuring the identities of well-known individuals in order to deceive and defraud an audience.
People involved in this type of scam misuse tools like DeepfakeWeb to generate content from real images of a celebrity available online, making the celebrity a victim of impersonation.
With these altered or generated images and recordings, scammers create sexually explicit content and use it for misinformation, paid services, investment schemes, and romance scams. They may also request real content from their victims in return and use what they receive to blackmail them and extort money.
Below are some examples of AI deepfake scams featuring some of the most popular individuals and celebrities around the world, including Taylor Swift, Jenna Ortega, and Billie Eilish.
Taylor Swift Deepfake
Taylor Swift, a popular musician, has been a victim of AI deepfake exploitation for more than a year now. The first reports of the Taylor Swift deepfake surfaced in January 2024 and were circulated across social media.
The deepfake consisted of AI-generated, sexually explicit images of her, which drew massive views across platforms such as 4chan and X. One image reportedly received approximately 50 million views in less than 24 hours before it was removed from X (formerly Twitter).
The good news is that Taylor Swift decided to fight back, combating the fake news and misinformation spreading through the AI-generated content impersonating her.
Jenna Ortega Deepfake
Jenna Ortega, the then-teenage star of Wednesday, was also a target of AI deepfake exploitation, this time on social media platforms including Facebook and Instagram.
The Jenna Ortega deepfake was the exploitation of a teenager through AI-generated images: blurred versions of explicit images were used in adverts promoting an AI application called Perky AI.
We learned the AI app was later suspended by Meta, but one can only wonder about the effect such exploitation had on the young actress. We do know that she deleted her X (formerly Twitter) account, and her distaste for seeing herself used in such inappropriate content may well have been a reason for it.
Billie Eilish Deepfake
The Billie Eilish deepfake is another example of an AI deepfake scam that you should know about.
Like Swift and Ortega, the 23-year-old American songwriter and musician Billie Eilish had her own share of exploitation through the same nonconsensual AI-generated content.
In one instance, it was reported that inappropriate AI-generated content superimposing her face on another person's body circulated on TikTok and garnered millions of views before it was removed. That was just one of several instances.
Basically, these scammers use images of public figures to generate content that sparks public interest and draws attention to their channels and marketplaces, where they find people to defraud. In the process, they tarnish the reputations of the celebrities involved.
Likewise, people engaged in the act may directly target actors, actresses, and other public figures for reasons unrelated to money. Regardless, the deceitfulness of such acts is something we condemn, and it raises questions about what the future of AI holds for deepfake scams.
People need to understand that not everything they see online is real: several AI tools have been developed to generate fake images, videos, and audio recordings that clone well-known figures in order to misinform and cause harm, financial and otherwise.
The good news is that one such platform, Mr. Deepfake, has been shut down, and that is a win for the public. We hope regulations will be put in place and enforced to tackle such activities.
Until then, be vigilant. Verify as much as you can. Report deceitful acts to the appropriate authorities, and use our Scam Dictionary to learn about scam tactics. Remember, your safety comes first.