
How AI Amplifies the Social Engineering Threat to Web3

Category: AI

POSTED BY: Rob Behnke

02.24.2026

Social engineering has long been a go-to technique for cybercriminals targeting the Web3 space. Tricking individuals or projects into installing malware or handing over private keys is often faster and easier than trying to identify and exploit a smart contract vulnerability.

Social engineering is an evolving threat, maturing from the error-ridden emails of years ago to the sophisticated scams of today. This transformation has dramatically accelerated in recent years as AI gives scammers the tools needed to become more convincing than ever before.

Mandiant Sheds Light on AI-Enabled UNC1069 Attack

A report published by Mandiant (now part of Google) in February 2026 reveals that AI has become a key element of APTs’ toolkits. Modern social engineering attacks targeting the Web3 space now use AI-generated deepfakes to impersonate trusted parties on video calls.

UNC1069 is an APT group linked to North Korea that has been active since at least 2018. Mandiant’s investigation uncovered an attack campaign including the following steps:

  • Compromised Telegram Account: The APT group had previously compromised a Telegram account owned by an executive of a cryptocurrency company. This account was used to communicate with the target and build rapport, leading to a Zoom meeting.

  • Spoofed Zoom Link: The attackers sent the victim a Calendly link to schedule a thirty-minute call. Instead of leading to Calendly, the link directed the victim to a spoofed Zoom domain (zoom[.]uswe05[.]us), a website hosted by the attackers.

  • Deepfake Video: Instead of a Zoom meeting, the target was shown a likely deepfake video of the CEO of another cryptocurrency company. The video was designed to make it appear that the two were on a call experiencing audio issues.

  • ClickFix Deployment: To “fix” the audio issues, the target was instructed to run a series of troubleshooting commands displayed on the “Zoom” site. These included a command designed to download and execute ClickFix malware on the target’s system.
  • Multi-Stage Malware Infection: The malicious command in this “troubleshooting” code downloaded and executed malware masquerading as a patch for audio issues. This malware kicked off a string of malicious downloads that deployed members of seven different malware families on the targeted Mac computer. These malware families included backdoors and data stealers designed to extract and exfiltrate credentials, browser data, and Telegram and Apple Notes user data from the system.
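The spoofed link in the campaign above worked because “zoom” appeared in the hostname even though the site was not actually on a Zoom domain. The core defensive check can be sketched in a few lines of Python; this is a minimal illustration with a hand-picked allowlist, not production tooling, which would consult the Public Suffix List to extract registrable domains correctly:

```python
from urllib.parse import urlparse

# Illustrative allowlist of legitimate Zoom registrable domains.
TRUSTED_DOMAINS = {"zoom.us", "zoom.com"}

def registrable_domain(hostname: str) -> str:
    """Naive eTLD+1 extraction: the last two labels of the hostname.
    Real tooling should use a Public Suffix List library instead."""
    labels = hostname.lower().rstrip(".").split(".")
    return ".".join(labels[-2:]) if len(labels) >= 2 else hostname.lower()

def is_spoofed_meeting_link(url: str) -> bool:
    """Flag URLs whose registrable domain is not a trusted Zoom domain,
    even when 'zoom' appears somewhere in the hostname."""
    host = urlparse(url).hostname or ""
    return registrable_domain(host) not in TRUSTED_DOMAINS

# The attacker-controlled link resolves to the registrable domain
# 'uswe05.us', not 'zoom.us', despite the 'zoom' prefix.
print(is_spoofed_meeting_link("https://zoom.uswe05.us/j/123"))   # True (spoofed)
print(is_spoofed_meeting_link("https://us05web.zoom.us/j/123"))  # False (legitimate)
```

The key point: what matters is the registrable domain at the end of the hostname, not whether a trusted brand name appears anywhere in the URL.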

While this attack campaign included a deepfake video that was likely generated by AI, this isn’t the only way that UNC1069 and related groups leverage AI capabilities. The APT is known to use Gemini and other AI systems to develop tools, perform reconnaissance, and support operational research. Additionally, Bluenoroff, a related group, has been found to use AI to doctor images and generate deepfake videos to support its attack campaigns.

AI Advancements Enhance Social Engineering Attacks

GenAI is playing an increasingly central role in modern life. Many people use AI to perform research, propose ideas, write emails, and handle a variety of other everyday tasks.

The problem is that these same capabilities are also useful for cybercriminals, and, as AI improves, so will the caliber of social engineering and other attacks.

Some ways that improved AI enhances cybercrime include:

  • Deeper Reconnaissance: Cybercriminals commonly need to research their targets to identify a potential vulnerability or build a target profile to craft a convincing phishing email. With AI, a single prompt might surface major security gaps or suggest how to maximize the chances that a target will click on a link.

  • Convincing Emails: While modern AI-generated text often “sounds like AI,” these systems are getting better. And as we grow accustomed to inboxes filled with AI-generated text, AI-written phishing content may blend in more easily than ever. With GenAI, attackers can rapidly craft highly targeted emails that avoid the misspellings and grammatical errors that made phishing attacks obvious in the past.

  • Real-time Deepfakes: The UNC1069 attack campaign detailed by Mandiant relied on a fake Zoom link that directed the user to a prerecorded deepfake video. As AI-generated video becomes more realistic, live deepfakes will be increasingly feasible, allowing attackers to carry out these attacks via Zoom and other real videoconferencing platforms.

  • AI Vulnerability Scanning: The release of Claude Code Security sent shockwaves through the cybersecurity space as AI demonstrated the ability to identify previously unknown vulnerabilities in audited code. While this is good news for defenders, the same capability can accelerate attackers’ efforts to identify exploitable vulnerabilities in deployed smart contracts.

  • Automated Attacks: Everyone wants to be able to automate the boring parts of their job with AI, and cybercriminals are no exception. As agentic AI matures, autonomous systems will be able to manage cyberattacks from end to end, making decisions, writing code, and taking actions in the blink of an eye. As a result, attacks will become faster and more numerous, making it much harder to find and address an intrusion in time.

  • Malware Development: Many companies are bragging about how their software developers no longer write code, handing the task off to AI instead. Malware developers can do the same, allowing them to write custom malware targeted to each cyberattack campaign.

Protecting Against AI-Enhanced Cyberattacks

As AI matures, cyberattacks using it will become increasingly sophisticated. Without the ability to reliably identify phishing emails or differentiate between real videos and deepfakes, security programs need processes and solutions in place to reduce the impact of successful attacks.

In most cases, the end goal of social engineering campaigns in the Web3 space is to gain access to the private keys that control on-chain accounts. 

Security best practices that can help to mitigate these risks include:

  • Endpoint security solutions designed to identify and block infostealer malware

  • Storing keys in cold wallets that isolate them from potentially infected systems

  • Multi-signature wallets that require compromising multiple keys to authorize blockchain transactions

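The value of multi-signature wallets in the list above comes down to a simple threshold rule. The sketch below illustrates the m-of-n approval logic in Python; it is a hypothetical simplification (real multisig wallets verify cryptographic signatures on-chain rather than checking names against a set):

```python
# Hypothetical m-of-n multisig approval check. The signer names and
# threshold are illustrative, not drawn from any real wallet.

AUTHORIZED_SIGNERS = {"alice-key", "bob-key", "carol-key"}  # n = 3 signers
THRESHOLD = 2                                               # m = 2 required

def transaction_approved(signatures: set[str]) -> bool:
    """A transaction executes only when at least THRESHOLD distinct
    authorized signers have approved it, so stealing a single key
    (e.g., via an infostealer) is not enough to move funds."""
    valid = signatures & AUTHORIZED_SIGNERS
    return len(valid) >= THRESHOLD

# One compromised key alone cannot authorize a transaction:
print(transaction_approved({"alice-key"}))             # False
print(transaction_approved({"alice-key", "bob-key"}))  # True
```

This is why multisig raises the bar for social engineering: the attacker must compromise several independently held keys, ideally stored on separate devices, rather than one.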
Halborn offers advisory services that can help organizations develop security strategies and controls to address top on-chain and off-chain security risks. To help protect your project from the growing threat that AI poses to Web3 security, get in touch.

Disclaimer

The information in this blog is for general educational and informational purposes only and does not constitute legal, financial, or professional advice. Halborn makes no representations as to the accuracy or completeness of the content, which may be updated or changed without notice.