Scammers are increasingly turning to AI-generated celebrity deepfakes to make fraud feel familiar, personal, and urgent—an approach that’s now showing up in TikTok ads. According to authentication company Copyleaks, the latest wave of scams uses realistic videos featuring well-known public figures, including Taylor Swift and Rihanna, to promote “shady services” and rewards programs that ultimately funnel users toward third-party sites designed to extract personal information.
What makes these campaigns particularly effective isn’t just the technology behind them. It’s the packaging. The ads are built to look like something viewers already trust: a celebrity speaking in a familiar interview format—a red-carpet moment, a podcast appearance, or a talk-show segment. In other words, the content doesn’t resemble a typical scam pitch. It resembles entertainment—something people might watch without suspicion. Then, at the moment attention is highest, the ad pivots into a call to action: watch more, provide feedback, earn rewards, and move quickly before the opportunity disappears.
Copyleaks says many of the videos appear to be AI-manipulated versions of real footage. That matters because it changes how people evaluate authenticity. When a deepfake is convincing enough to mimic lighting, facial movement, and speech patterns, the usual “does this look real?” instinct becomes unreliable. And when the scam includes recognizable branding elements—sometimes even TikTok’s official branding—users may assume they’re interacting with a legitimate platform feature rather than a lure.
The result is a fraud funnel that looks less like a con and more like a marketing campaign. But the mechanics are consistent: the ad draws you in, the next step pushes you off-platform, and the off-platform destination asks for information that can be used for identity theft, account takeover, or other forms of financial exploitation.
A familiar face, a familiar format, a new kind of risk
Celebrity impersonation has long been a staple of online fraud. What’s changed is the speed and scale at which scammers can generate convincing video content. With AI video tools, fraudsters don’t need to secure footage, negotiate rights, or even find a real clip that matches their message. They can create a new “moment” on demand—one that fits the exact narrative they want to sell.
Copyleaks describes ads that often show celebrities in interview-style contexts. This is a strategic choice. Interview settings are inherently persuasive: they imply credibility, spontaneity, and direct communication. A red-carpet backdrop suggests mainstream media coverage. A podcast studio suggests a casual but informed conversation. A talk-show set implies a vetted environment where the speaker is being asked questions by professionals.
In a traditional scam, the pitch is obvious. In these deepfake ads, the pitch is disguised as a normal media interaction. The viewer isn’t being told, “Give us your data.” Instead, they’re being told, “Here’s an opportunity,” delivered through a voice and face they recognize.
Copyleaks also notes that some ads include TikTok branding. That detail is especially concerning because it blurs the line between platform content and external promotions. Even if the branding appears only briefly or inconsistently, it can still reduce skepticism. Users may interpret the presence of familiar logos as a sign that the ad is authorized or monitored.
Then comes the redirect. Users are sent to third-party services that request personal information. At that point, the scam’s true purpose becomes clear: the ad is not the product. The user’s data is.
Rewards programs as the bait
Many of the ads promote rewards programs that claim users can earn money by watching TikTok content and giving feedback. This is a common pattern in online fraud because it offers a simple promise with a low barrier to entry. The user doesn’t have to download suspicious software immediately or click a link that screams danger. They just have to follow instructions that sound reasonable: watch, respond, participate.
The “earn money” framing also exploits a psychological shortcut. People are more likely to comply when the potential reward feels immediate and attainable. If the ad suggests that feedback is part of a legitimate system—like a survey, a creator program, or a marketing research initiative—the user may not question why a celebrity would be involved or why the process requires leaving TikTok.
Copyleaks’ description points to a key element of the scam: the ads often encourage users to take actions that keep them engaged long enough to reach the next stage of the funnel. Watching content and providing feedback can feel like participation rather than extraction. But once the user is redirected to a third-party site, the scam can shift from “engagement” to “verification,” where personal details are collected under the guise of eligibility.
This is where deepfakes become more than a novelty. They become a trust engine. The celebrity face and voice provide credibility, while the rewards narrative provides motivation. Together, they lower resistance and increase conversion rates.
The “urgent nudge” problem: when persuasion is engineered
One of the most unsettling aspects of these campaigns is how deliberately they are engineered to prompt action. Copyleaks reports examples in which realistic AI avatars of celebrities appear to urge viewers to act immediately. That urgency is not accidental. Scammers know that hesitation reduces conversions. So they design the message to feel time-sensitive, emotionally compelling, and personally relevant.
Deepfakes can amplify this effect because they allow fraudsters to tailor the delivery. A scam can be adjusted to match what a particular audience is likely to respond to—tone, pacing, and phrasing can all be tuned. Even small changes can make the message feel more authentic, more “in character,” and more likely to be believed.
This is one reason the threat is evolving beyond simple misinformation. The goal isn’t only to trick someone into believing something false. The goal is to get them to do something—click, sign up, submit data, or transfer money. Deepfakes are particularly dangerous in this context because they can simulate direct communication. Instead of reading a suspicious text message, the victim sees a celebrity “speaking” to them.
That sense of direct address is powerful. It bypasses some of the skepticism people apply to generic scams. When the message feels like it’s coming from a trusted figure, the user’s internal alarm system may not trigger until it’s too late.
Why TikTok is a high-value target
TikTok’s format—short videos, rapid scrolling, and algorithm-driven discovery—creates conditions that scammers can exploit. Ads blend into the feed, and the platform’s engagement mechanics encourage quick consumption. If a deepfake ad looks like entertainment, it can travel further before anyone flags it.
Additionally, TikTok’s global reach means scammers can target multiple regions with minimal friction. Fraud campaigns can be localized with different languages, different celebrity choices, or different reward narratives. The same underlying infrastructure can be reused across markets.
Copyleaks’ report suggests that the scams are not isolated incidents. They reflect a broader trend: as AI video becomes easier to generate and harder to detect, platforms face increasing pressure to improve authentication and moderation. But moderation alone may not be enough. Even if some scam ads are removed, the underlying technique can be replicated quickly by other actors.
In other words, the barrier to entry for scammers is dropping. The barrier to defense must rise.
The authentication gap: why detection is hard
Deepfake detection is a moving target. As generation models improve, artifacts that once made fakes easier to spot—odd lip movement, inconsistent lighting, unnatural motion—become less visible. Meanwhile, scammers can iterate rapidly, producing new versions that evade detection systems.
Authentication companies like Copyleaks exist because the industry recognizes this gap. But authentication is not a single switch. It’s a pipeline: verifying provenance, tracking content sources, and ensuring that platforms can reliably distinguish original media from synthetic or manipulated content.
When deepfakes are used in ads, the challenge becomes even more complex. Ads are dynamic, and they can be created and updated quickly. A scammer can test multiple variations, learn what converts, and then scale what works. Even if a platform blocks one campaign, another can appear with a slightly different video or a different redirect destination.
This is why the problem isn’t only about whether a deepfake can be detected. It’s about whether the system can respond fast enough to prevent harm.
The redirect layer: where the real damage happens
The deepfake video is the hook. The redirect is the mechanism of harm.
Copyleaks notes that users are redirected to third-party services that ask for personal information. That step is crucial because it shifts the risk from “being fooled” to “being compromised.” Personal information can be used for identity fraud, account recovery attempts, phishing, or direct financial theft depending on what the scam collects.
In many scams, the third-party site is designed to look legitimate. It may use polished branding, familiar interfaces, and language that mimics real programs. It may also include forms that request details such as names, email addresses, phone numbers, or payment-related information. Sometimes the site claims users must verify eligibility to receive rewards. Other times it frames the process as a necessary step to “confirm feedback.”
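The shape of the funnel—an ad that starts on the platform and lands somewhere else—is also something defenders can inspect mechanically. As a hedged sketch (the domain allowlist and URL chain below are invented for illustration; real ad-review systems use far richer signals), flagging a redirect chain that leaves the platform might look like:

```python
from urllib.parse import urlparse

# Hypothetical allowlist of platform-owned domains; purely illustrative.
TRUSTED_DOMAINS = {"tiktok.com", "bytedance.com"}

def registered_domain(url: str) -> str:
    """Crudely reduce a URL to its registrable domain (last two labels)."""
    host = urlparse(url).hostname or ""
    parts = host.split(".")
    return ".".join(parts[-2:]) if len(parts) >= 2 else host

def flags_off_platform(chain: list[str]) -> bool:
    """True if a chain starts on a trusted domain but ends off-platform.

    This is the fraud-funnel signature described above: the ad is the
    hook, and the final destination is where data is collected.
    """
    if not chain:
        return False
    return (registered_domain(chain[0]) in TRUSTED_DOMAINS
            and registered_domain(chain[-1]) not in TRUSTED_DOMAINS)

# Example chain with made-up tracker and landing-page URLs:
chain = [
    "https://www.tiktok.com/ads/rewards-program",
    "https://track.example-redirector.net/r/abc",
    "https://rewards-claim.example.com/verify",
]
print(flags_off_platform(chain))  # True: the funnel leaves the platform
```

The heuristic is naive—legitimate ads also lead off-platform—but it captures the structural point: the dangerous step is not the video itself, it is the handoff to an unvetted destination that asks for personal information.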
Even if the user never receives money, the data they provided can still be valuable. Fraudsters can monetize it directly or sell it to other criminal groups.
This is why the presence of TikTok branding in some ads is so alarming. It can make the redirect feel safer. Users may assume that if TikTok appears in the ad, the destination is vetted. But the reality is that the ad is often just a delivery vehicle.
A unique take on the “celebrity economy” of scams
There’s a broader cultural shift happening alongside the technical one. Celebrities have become a kind of universal interface for trust. People don’t just follow celebrities for entertainment; they also associate them with endorsements, values, and legitimacy. That trust is now being weaponized.
Deepfake scams represent a new phase of the celebrity economy: not just fake endorsements in text form, but synthetic video that imitates the intimacy of direct communication. The scam doesn’t merely borrow a famous face; it borrows the trust that face has accumulated and repackages it as a personal appeal.
