The integration of AI into the software development lifecycle was supposed to eliminate human error. Instead, it has opened a new front in supply chain warfare. On February 28, 2026, researchers recorded a first-of-its-kind incident: Anthropic’s Claude Opus co-authored a code commit that introduced a malicious dependency into an open-source autonomous crypto trading project.
Uncovered by ReversingLabs, this campaign—codenamed “PromptMink”—represents a sophisticated shift in tactics. Cybercriminals are now leveraging AI coding assistants to bypass the “gut check” of human developers, planting harmful code inside legitimate projects under the guise of AI-optimized suggestions.
The Attack: A Two-Layered Deception
Attributed to the North Korean-linked threat group Famous Chollima, the PromptMink campaign uses a calculated two-tier structure to evade detection.
Layer 1: The “Bait” Package
The attacker submitted a commit to the openpaw-graveyard project, adding @solana-launchpad/sdk as a dependency. The package appeared legitimate and well-documented, and was even “vetted” by Claude Opus during the coding process.
Layer 2: The “Hidden” Payload
The bait package silently pulls in a second-layer dependency: @validate-sdk/v2. This is the actual malicious payload. By burying the malware inside a secondary dependency, the attackers ensure that the primary commit looks clean to most automated scanners.
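Because the payload only appears one level down, a scan of the top-level package.json misses it entirely; the lockfile, which records every installed package including transitive ones, is the place to look. The following is a minimal Python sketch of that idea, assuming an npm v2/v3-style package-lock.json; the denylist names are the two packages reported in this campaign, and the function itself is illustrative, not a tool from the ReversingLabs report.

```python
import json

# Hypothetical denylist built from the package names reported in the article.
FLAGGED = {"@solana-launchpad/sdk", "@validate-sdk/v2"}

def flagged_packages(lockfile_text):
    """Return flagged package names found anywhere in an npm v2/v3
    package-lock.json, including transitive (nested) dependencies."""
    lock = json.loads(lockfile_text)
    hits = set()
    # npm v2+ lockfiles list every installed package under "packages",
    # keyed by its node_modules path, so nested dependencies appear too.
    for path in lock.get("packages", {}):
        name = path.split("node_modules/")[-1]
        if name in FLAGGED:
            hits.add(name)
    return sorted(hits)
```

Scanning the lockfile rather than package.json is the point: the malicious @validate-sdk/v2 never appears in the primary commit, only as a nested entry under the bait package.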
Inside the Payload: Credential Theft and Backdoors
Once the @validate-sdk/v2 package is installed, it initiates a series of high-impact malicious actions designed to drain crypto assets and establish long-term access.
- Recursive Data Harvesting: The malware performs a “recursive walk” through the victim’s directories, targeting .env files, JSON configs, and API keys. It specifically hunts for anything related to cryptocurrency wallets and exchange credentials.
- Linux SSH Backdoor: In a more aggressive move, the malware checks whether the host is running Linux and silently appends the attacker’s public SSH key to the ~/.ssh/authorized_keys file. This creates a persistent backdoor that survives even after the npm package is deleted.
- Rust-Powered Exfiltration: Later versions of the malware, rewritten in Rust, have been observed compressing and stealing entire project source code directories, suggesting a secondary goal of intellectual property theft.
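The SSH backdoor is the easiest of these behaviors to audit for directly: compare the entries in ~/.ssh/authorized_keys against a list of keys you know you added. A minimal Python sketch (an illustration, not a tool from the report) that assumes the common `<type> <base64-key> [comment]` entry form; entries with leading options would need extra parsing:

```python
def unknown_keys(authorized_keys_text, allowlist):
    """Return authorized_keys entries whose base64 key material is not
    in the allowlist of known-good public keys."""
    unknown = []
    for line in authorized_keys_text.splitlines():
        line = line.strip()
        # Skip blank lines and comments, which are legal in the file.
        if not line or line.startswith("#"):
            continue
        fields = line.split()
        # fields[1] is the base64 key material, the stable part of an entry.
        if len(fields) >= 2 and fields[1] not in allowlist:
            unknown.append(line)
    return unknown
```

Matching on the key material rather than the whole line matters because an attacker can freely change the comment field to mimic a legitimate entry.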
The AI Factor: Why Claude Co-Authored the Commit
The PromptMink campaign is effective because it targets the trust boundary between a developer and their AI assistant. Famous Chollima crafted the bait packages to look so standard and “correct” that Claude Opus viewed them as valid architectural choices. When the developer asked the AI to optimize or expand the trading agent’s capabilities, the AI—relying on the metadata provided by the attacker—suggested or approved the inclusion of the malicious SDK.
Protection and Mitigation
With over 300 versions across 60 unique packages already identified, developers must adopt a “Zero Trust” approach to AI-assisted coding.
| Risk Factor | Recommended Action |
|---|---|
| AI Suggestions | Treat every AI-suggested dependency as unverified. Manually inspect the package.json and nested dependencies. |
| Persistence | Regularly audit your ~/.ssh/authorized_keys file for unrecognized public keys. |
| Network Leakage | Monitor for unusual outbound HTTPS POST requests to unknown domains, especially during npm install or build cycles. |
| Sensitive Files | Use tools like SecretScanner to ensure your .env and config files are encrypted or excluded from search paths. |
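For the last row of the table, the same recursive-walk technique the malware uses can be turned around for defense: enumerate secret-bearing files in your own tree so you know what an intruder would find. A minimal Python sketch under stated assumptions (the file names are illustrative, not indicators from the report):

```python
import os

# Illustrative names; extend with whatever your projects actually use.
SENSITIVE_NAMES = {".env", "credentials.json", "wallet.json"}

def find_sensitive_files(root):
    """Recursively walk a project tree and return paths of files whose
    names commonly hold secrets, skipping node_modules."""
    hits = []
    for dirpath, dirnames, filenames in os.walk(root):
        # Prune node_modules in place so os.walk never descends into it.
        dirnames[:] = [d for d in dirnames if d != "node_modules"]
        for name in filenames:
            if name in SENSITIVE_NAMES:
                hits.append(os.path.join(dirpath, name))
    return sorted(hits)
```

Anything this turns up should be moved out of the repository tree, encrypted, or at minimum excluded from the paths your tooling and AI assistant can read.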
Conclusion: The New Supply Chain Reality
PromptMink proves that AI coding assistants are only as secure as the libraries they are trained on. For Famous Chollima, the goal is clear: use the speed of AI to move faster than security researchers. For the developer, the lesson is equally clear: AI can write the code, but the human must still own the security.