In the first quarter of 2026, the boundary between “low-skilled” and “elite” hacking evaporated. Security researchers at Expel, led by specialist Marcus Hutchins, uncovered a massive North Korean campaign that exfiltrated $12 million in cryptocurrency in just 90 days.
The group behind the attack, dubbed HexagonalRodent (a subgroup of the notorious Famous Chollima), didn’t rely on zero-day exploits or complex manual coding. Instead, they weaponized generative AI to automate social engineering, malware development, and infrastructure scaling. By infecting more than 2,700 Web3 developer workstations, HexagonalRodent has demonstrated that AI isn’t just a productivity tool: it’s a force multiplier for global cyber-threats.
What is “Vibe Coding”? The HexagonalRodent Strategy
The hallmark of this campaign is a phenomenon researchers call “vibe coding.” Rather than writing code line-by-line, the threat actors used text prompts to command AI systems to generate entire malware ecosystems.
The Attack Lifecycle
- AI-Generated Personas: Using tools like Anima, the hackers built high-fidelity websites for non-existent IT firms. These sites appeared professional, credible, and trustworthy.
- The Recruitment Lure: Hackers reached out to Web3 developers on LinkedIn with “lucrative” job offers. In a market where many engineers are seeking new opportunities, these flawless, AI-generated lures were highly effective.
- The “Test Assignment”: Victims were asked to complete a coding challenge. The files provided—often NodeJS or Python scripts—contained hidden backdoors like BeaverTail or InvisibleFerret.
- Flawless Social Engineering: Using ChatGPT and Claude, the hackers maintained correspondence in perfect English, eliminating the “broken grammar” red flags traditionally associated with foreign state-sponsored groups.
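The backdoored “test assignments” in step 3 can often be caught with a quick static sweep before anything is executed. The sketch below is a minimal, illustrative heuristic (the pattern list and function names are my own, not from the Expel report); it complements, but never replaces, running the code in a disposable VM:

```python
import re
from pathlib import Path

# Hypothetical indicator list: patterns commonly seen in script-based
# stealers. Illustrative only; a clean report does NOT prove safety.
SUSPICIOUS_PATTERNS = [
    r"child_process",               # NodeJS shell execution
    r"eval\s*\(",                   # dynamic code execution
    r"Buffer\.from\([^)]*base64",   # base64-decoded payloads (NodeJS)
    r"base64\.b64decode",           # base64-decoded payloads (Python)
    r"subprocess\.(run|Popen)",     # Python shell execution
]

def scan_file(path: Path) -> list[str]:
    """Return the suspicious patterns that match in one source file."""
    text = path.read_text(errors="ignore")
    return [p for p in SUSPICIOUS_PATTERNS if re.search(p, text)]

def scan_assignment(root: str) -> dict[str, list[str]]:
    """Scan every .js/.py file under `root` and report any hits."""
    hits = {}
    for path in Path(root).rglob("*"):
        if path.suffix in {".js", ".py"}:
            found = scan_file(path)
            if found:
                hits[str(path)] = found
    return hits
```

Any hit is a reason to stop and inspect the file by hand; an empty report only means none of these crude patterns fired.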
Technical Analysis: Scaling the Threat with LLMs
Marcus Hutchins’ analysis of the group’s inadvertently exposed infrastructure revealed a startling level of AI integration.
Evidence of LLM-Generated Malware
- The “Vibe” in the Code: The malicious scripts were filled with helpful English comments and emojis—clear signatures of code generated by a Large Language Model (LLM) rather than a human malware author.
- AI-Proofing Backdoors: Telemetry showed the hackers actually used AI to audit their own malware, attempting to ensure it wouldn’t be flagged by the very AI-based security tools their targets were using.
- Language-Specific Targeting: By writing malware in NodeJS and Python, the group blended in with the legitimate tools already installed on a developer’s machine, making signature-based detection extremely difficult.
Infrastructure Exploitation
The hackers utilized Cursor (an AI-native code editor) and OpenAI’s models to research vulnerabilities and refine their credential-stealing workflows. While these services have since blocked the linked accounts, the incident highlights a persistent challenge: how to prevent the misuse of “dual-use” AI technologies.
Impact Assessment: $12M and 2,700 Compromised Systems
The scale of HexagonalRodent’s success is a wake-up call for the Web3 industry.
| Metric | Impact Detail |
|---|---|
| Total Stolen Value | ~$12,000,000 USD |
| Infected Systems | 2,726 Developer Workstations |
| Exfiltrated Wallets | 26,584 Individual Crypto Wallets |
| Primary Targets | Solo Web3 Developers and Small Blockchain Projects |
Why it works: Unlike major exchanges, an individual developer with $400,000 in a software wallet is a “soft target.” They often lack the enterprise-grade hardware security modules (HSMs) and multi-sig protocols that protect larger institutions.
Expert Recommendations: Defending Against AI-Driven Attacks
As North Korean operators automate every stage of the cyber-kill chain, developers must evolve their defensive posture.
- Trust But Verify Assignments: Never run a “test assignment” or “coding challenge” on your primary machine. Use a disposable virtual machine (VM) or a dedicated sandbox environment.
- Audit Lures with AI: Just as hackers use AI to write lures, use AI to analyze them. Ask an LLM to look for inconsistencies in job postings or “too-good-to-be-true” offers.
- Hardware Wallets for All Assets: If you are a Web3 developer, do not keep significant assets in “hot” software wallets on the same machine you use for day-to-day coding.
- Monitor for Anomalous Process Behavior: Watch for your IDE (`code.exe` or `cursor.exe`) attempting to execute shell commands or reach out to unknown GitHub repositories during a local build.
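One cheap check that fits the advice above: NodeJS “test assignments” frequently hide their payload in npm lifecycle scripts, which run arbitrary shell commands during `npm install`. This hedged sketch (the helper name is illustrative, not a standard API) lists any install-time hooks so you can fall back to `npm install --ignore-scripts` if anything looks off:

```python
import json
from pathlib import Path

# npm runs these hooks automatically during `npm install`, so they are a
# common hiding spot for droppers. Listing them costs nothing; a clean
# result does NOT prove the package is safe.
RISKY_HOOKS = {"preinstall", "install", "postinstall", "prepare"}

def risky_npm_scripts(project_dir: str) -> dict[str, str]:
    """Return install-time lifecycle scripts declared in package.json."""
    manifest = Path(project_dir) / "package.json"
    if not manifest.exists():
        return {}
    scripts = json.loads(manifest.read_text()).get("scripts", {})
    return {name: cmd for name, cmd in scripts.items() if name in RISKY_HOOKS}
```

If this returns anything unexpected, install with `--ignore-scripts` inside a sandbox and read the hook commands before ever letting them run.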
FAQs
1. Who is HexagonalRodent?
It is a North Korean state-sponsored APT (Advanced Persistent Threat) group, likely a subgroup of Famous Chollima. They specialize in infiltrating the crypto sector to fund the DPRK regime.
2. How did they bypass ChatGPT’s safety filters?
The hackers used “vibe coding” prompts that appeared to be for legitimate developer troubleshooting or IT infrastructure setup. Since the individual components looked benign, they bypassed simple safety filters.
3. Can I be targeted if I’m not a Web3 developer?
While HexagonalRodent currently targets the crypto space due to the high liquidity of assets, their AI-driven recruitment scams could easily be pivoted toward any high-value IT sector (AI, Defense, Fintech).
4. What should I do if I ran a test assignment from an unknown recruiter?
Assume your machine is compromised. Disconnect from the internet, move all crypto assets to a new hardware wallet from a different device, and perform a forensic wipe of your OS.
Conclusion: The Era of the Scaled Cyber-Threat
North Korea’s pivot to AI in 2026 marks a qualitative leap in cyber warfare. By turning “vibe coding” into a weapon, HexagonalRodent has shown that $12 million can be stolen without a single high-level exploit—just a few well-crafted prompts and a lapse in human judgment.
Action Item: Review your team’s policy on “technical interviews” and external code execution. In the age of AI-automated crime, the most dangerous line of code is the one you were “hired” to write.