The first half of 2025 has proven brutal for the blockchain industry, with over $2.37 billion lost to security breaches, scams, and sophisticated fraud, much of it fueled by emerging technologies such as AI.
According to SlowMist’s mid-year Blockchain Security and Anti-Money Laundering (AML) Report, the sector suffered these losses across 121 security incidents from January through June. Despite fewer incidents than in the same period of 2024, financial losses soared by nearly 66%, a sign that attacks have become more severe and costly.
DeFi and Centralized Exchanges Under Fire
Decentralized Finance (DeFi) remains the prime target for hackers, accounting for roughly 76% of all incidents and around $470 million in losses. Centralized exchanges (CEXs), however, were hit even harder in dollar terms, suffering a staggering $1.883 billion in losses across just 11 major incidents, evidence that attackers increasingly focus on high-value targets.
Account takeovers topped the list of attack vectors, closely followed by vulnerabilities in smart contracts.
AI Supercharges Scams and Phishing Attacks
Beyond traditional exploits, 2025 has seen a surge in scams directly targeting individual users, many of them powered by advances in generative AI. SlowMist’s report shines a spotlight on the evolving tactics cybercriminals are using:
🔹 Phishing with EIP-7702 Exploits
With Ethereum’s Pectra upgrade introducing new contract delegation capabilities under EIP-7702, scammers have quickly adapted. On May 24, a user lost $146,551 after being tricked by the Inferno Drainer group. The attackers exploited MetaMask’s support for EIP-7702 delegation, using a legitimate-looking contract to obtain bulk token approvals and drain the victim’s funds.
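For context on how such delegations can be spotted: under EIP-7702, a delegated account’s on-chain code is exactly the 3-byte designator `0xef0100` followed by the 20-byte address of the delegate contract. A wallet or auditor that fetches an account’s code (e.g. via `eth_getCode`) can therefore tell whether, and to where, an EOA has delegated. The sketch below is illustrative only; the function name and sample address are hypothetical.

```python
# Illustrative sketch: recognizing an EIP-7702 delegation designator.
# Per the EIP, a delegated EOA's code is 0xef0100 || 20-byte delegate address
# (23 bytes total). A plain EOA has empty code.

DELEGATION_PREFIX = bytes.fromhex("ef0100")

def parse_delegation(code: bytes):
    """Return the delegate address (0x-hex) if `code` is an EIP-7702
    delegation designator, else None."""
    if len(code) == 23 and code.startswith(DELEGATION_PREFIX):
        return "0x" + code[3:].hex()
    return None

# Hypothetical example: an EOA whose code delegates to some contract.
sample_code = DELEGATION_PREFIX + bytes.fromhex("11" * 20)
print(parse_delegation(sample_code))  # 0x1111...1111 (the delegate)
print(parse_delegation(b""))          # None (undelegated EOA)
```

In practice, a wallet could run a check like this before the user signs bulk approvals and warn when the delegate contract is unknown or unverified.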
🔹 Deepfake Scams
AI-generated deepfakes are driving a new breed of “trust-based scams.” In one high-profile case earlier this year, Mehdi Farooq, a partner at Hypersphere Ventures, lost his entire crypto portfolio after joining a fake Zoom call featuring deepfake versions of trusted contacts. Similar scams have surfaced using AI-generated videos of Elon Musk and government officials in Singapore to promote fake investment schemes.
🔹 Telegram Fake Safeguard Attacks
Scammers have been using fake X (formerly Twitter) accounts posing as crypto influencers to lure victims into Telegram groups. Once there, unsuspecting users are prompted to click “Tap to verify” links, which execute malicious PowerShell commands. These attacks have resulted in full device compromises, allowing hackers to steal wallet files, private keys, and even take over Telegram accounts on both Windows and macOS devices.
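The final step of this lure is persuading the victim to run an attacker-supplied command. As a rough illustration of the kind of heuristic a security tool might apply, the sketch below flags strings that resemble obfuscated PowerShell or download-and-execute one-liners. The patterns are illustrative, not an exhaustive or production-grade detector.

```python
import re

# Illustrative heuristics only: a few markers common to malicious
# one-liners (encoded payloads, hidden windows, inline execution,
# download-and-run pipelines). Real detection needs far more signals.
SUSPICIOUS_PATTERNS = [
    r"powershell(\.exe)?\s+-.*(enc|encodedcommand)",   # base64-encoded payload
    r"-windowstyle\s+hidden",                          # hide the console window
    r"invoke-expression|\biex\b",                      # execute a built string
    r"downloadstring|invoke-webrequest",               # fetch remote code
    r"curl\s+.*\|\s*(sh|bash)",                        # pipe-to-shell on macOS
]

def looks_like_payload(text: str) -> bool:
    """Return True if the text matches any suspicious-command heuristic."""
    t = text.lower()
    return any(re.search(p, t) for p in SUSPICIOUS_PATTERNS)

print(looks_like_payload("powershell -WindowStyle Hidden -enc SQBFAFgA"))  # True
print(looks_like_payload("echo hello"))                                    # False
```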
🔹 Malicious Browser Extensions
Attackers have increasingly hidden malware in browser extensions masquerading as “Web3 security tools.” One notable example is the “Osiris” extension, where attackers hijacked a legitimate developer’s Chrome Web Store account through an OAuth phishing exploit. They then pushed a malicious update that ultimately targeted over 2.6 million users, stealing private keys and login credentials.
🔹 LinkedIn Recruitment Phishing
Phishing attacks on LinkedIn have exploded this year, with scammers posing as blockchain startups to lure engineers into downloading malware. The fraudsters share professional-looking project briefs and then send victims to repositories hosting encrypted malicious payloads. Once installed, these backdoors harvest sensitive data like host details, credentials, SSH keys, and macOS Keychain data.
🔹 Social Engineering at Scale
One of the biggest social engineering incidents of the year involved Coinbase. Hackers bribed overseas customer support staff for internal user data and then posed as Coinbase representatives via spoofed phone calls and phishing messages. The elaborate ruse led to more than $100 million in user losses.
🔹 Supply Chain Attacks via AI Tools
Developers searching for “unlimited access” to AI models through unofficial channels have inadvertently opened themselves to supply chain attacks. In one documented case, a startup lost hundreds of thousands of dollars after installing malicious npm packages generated by an unauthorized AI tool. These packages planted backdoors, granting attackers remote access and enabling credential theft across systems used by over 4,200 developers, mainly on macOS.
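A common delivery mechanism in npm supply-chain attacks of this kind is the install-time lifecycle script, which runs automatically during `npm install`. As a hedged sketch (the manifest contents below are hypothetical), a simple audit step is to list any such hooks before installing:

```python
import json

# A package.json-style manifest (contents hypothetical, for illustration).
manifest = json.loads("""
{
  "name": "example-app",
  "dependencies": { "left-pad": "^1.3.0" },
  "scripts": {
    "postinstall": "node ./scripts/setup.js",
    "build": "tsc"
  }
}
""")

# Lifecycle hooks that npm runs automatically at install time; these are
# a frequent payload-delivery point in supply-chain attacks.
RISKY_HOOKS = {"preinstall", "install", "postinstall", "prepare"}

def risky_scripts(pkg: dict) -> dict:
    """Return the subset of scripts that would execute during install."""
    return {k: v for k, v in pkg.get("scripts", {}).items() if k in RISKY_HOOKS}

print(risky_scripts(manifest))  # {'postinstall': 'node ./scripts/setup.js'}
```

Running `npm install --ignore-scripts` disables these hooks entirely, and many dependency-audit tools flag packages that declare them.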
🔹 Jailbroken Large Language Models
The rise of “jailbroken” large language models (LLMs) is also raising red flags. Tools like WormGPT help criminals write malware and phishing emails, while FraudGPT generates convincing fake project documentation and phishing websites. DarkBERT, trained on dark web data, is used for precision-targeted social engineering campaigns, and GhostGPT crafts realistic deepfake scams impersonating executives from crypto exchanges.
Looking Ahead
The explosive growth in blockchain adoption and the surge of institutional capital flowing into crypto markets have made the sector an ever more attractive target for attackers. As AI continues to evolve, security experts warn that scams and cybercrime will only become more sophisticated, making vigilance and robust security measures more critical than ever.