87 deepfake scam rings taken down across Asia in Q1 2025: Bitget report

The rise of AI technology has fueled a parallel surge in AI-enabled fraud. In Q1 2025 alone, 87 deepfake-driven scam rings were dismantled. This alarming statistic, revealed in the 2025 Anti-Scam Month Research Report co-authored by Bitget, SlowMist, and Elliptic, underscores the growing danger of AI-driven scams in the crypto space.

The report also reveals a 24% year-on-year increase in global crypto scam losses, reaching a total of $4.6 billion in 2024. Nearly 40% of high-value fraud cases involved deepfake technologies, with scammers increasingly using sophisticated impersonations of public figures, founders, and platform executives to deceive users.

Distribution of causes of security incidents in 2024. Source: SlowMist

Related: How AI and deepfakes are fueling new cryptocurrency scams

Gracy Chen, CEO of Bitget, told Cointelegraph: “The speed at which scammers can now generate synthetic videos, coupled with the viral nature of social media, gives deepfakes a unique advantage in both reach and believability.”

Defending against AI-driven scams goes beyond technology; it requires a fundamental change in mindset. In an age where synthetic media such as deepfakes can convincingly imitate real people and events, trust must be earned through transparency, constant vigilance, and rigorous verification at every stage.

Deepfakes: An Insidious Threat in Modern Crypto Scams

The report details the anatomy of modern crypto scams, pointing to three dominant categories: AI-generated deepfake impersonations, social engineering schemes, and Ponzi-style frauds disguised as DeFi or GameFi projects. Deepfakes are particularly insidious.

AI can simulate text, voice messages, facial expressions, and even actions. For example, scammers circulate fake video endorsements of investment platforms by public figures such as Singapore’s Prime Minister and Elon Musk to exploit public trust on Telegram, X, and other social media platforms.

Fake video of Singapore’s Prime Minister Lee Hsien Loong. Source: Lianhe Zaobao

AI can even simulate real-time reactions, making these scams increasingly difficult to distinguish from reality. Sandeep Nailwal, co-founder of the blockchain platform Polygon, raised the alarm in a May 13 post on X, revealing that bad actors had been impersonating him via Zoom. He said several people had contacted him on Telegram, asking whether he was on a Zoom call with them and whether he had asked them to install a script.

Related: AI scammers are now impersonating US government bigwigs, says FBI

The SlowMist CEO also issued a warning about Zoom deepfakes, urging users to pay close attention to the domain names of Zoom links to avoid falling victim to such scams.

SlowMist CEO’s warning about Zoom deepfakes. Source: @evilcos
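That domain check is simple enough to automate. The snippet below is a minimal sketch, not SlowMist’s tooling: the is_legit_zoom_link helper and the zoom.us allowlist are illustrative assumptions showing how a meeting link’s hostname can be verified before it is clicked.

```python
from urllib.parse import urlparse

def is_legit_zoom_link(url: str) -> bool:
    """Hypothetical helper: accept a link only if its host is zoom.us or a
    subdomain of it, rejecting lookalikes such as 'zoom-us-meeting.com'."""
    host = (urlparse(url).hostname or "").lower()
    return host == "zoom.us" or host.endswith(".zoom.us")

print(is_legit_zoom_link("https://us02web.zoom.us/j/123456789"))   # True
print(is_legit_zoom_link("https://zoom-us.meeting-join.com/j/1"))  # False
```

The same idea applies to any platform impersonation: check the registrable domain itself, not just whether the brand name appears somewhere in the URL.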

New Scam Threats Call for Smarter Defenses

As AI-powered scams grow more advanced, users and platforms need new strategies to stay safe. Deepfake videos, fake job tests, and phishing links are making it harder than ever to spot fraud.

For institutions, regular security training and strong technical defenses are essential. Businesses are advised to run phishing simulations, protect email systems, and monitor code for leaks. Building a security-first culture—where employees verify before they trust—is the best way to stop scams before they start.
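On the “monitor code for leaks” point, even a lightweight secret scan over a repository can catch obvious slips before they ship. The sketch below is a hedged illustration, not tooling from the report: the scan_repo helper and the regex patterns are assumptions, and dedicated scanners such as gitleaks or truffleHog cover far more cases.

```python
import re
from pathlib import Path

# Illustrative patterns only; real scanners ship far more comprehensive rules.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Private key block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "Generic API key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{20,}['\"]"),
}

def scan_repo(root: str) -> None:
    """Walk a source tree and flag lines that look like leaked credentials."""
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix in {".png", ".jpg", ".zip"}:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            for label, pattern in SECRET_PATTERNS.items():
                if pattern.search(line):
                    print(f"{path}:{lineno}: possible {label}")

scan_repo(".")
```

Running a check like this in continuous integration is one practical way to turn the report’s “verify before you trust” culture into a routine control rather than a one-off exercise.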

Scam prevention guidelines. Source: 2025 Anti-Scam Month Research Report

Chen offers everyday users a straightforward approach: “Verify, isolate, and slow down.” She added:

“Always verify information through official websites or trusted social media accounts—never rely on links shared in Telegram chats or Twitter comments.”

She also stressed the importance of isolating risky actions by using separate wallets when exploring new platforms.

Magazine: Baby boomers worth $79T are finally getting on board with Bitcoin


