A new class of AI security vulnerability is redefining how attackers compromise development pipelines.
Dubbed “Comment and Control”, this attack targets AI coding agents embedded in GitHub workflows, including:
- GitHub Copilot
- Claude Code
- Gemini CLI
Instead of exploiting traditional software bugs, attackers weaponize something far simpler:
👉 GitHub comments, issue titles, and pull request descriptions.
The result is a cross-vendor prompt injection chain that can leak API keys, CI/CD tokens, and even execute commands inside development environments.
What Is “Comment and Control”?
“Comment and Control” is a prompt injection technique where:
👉 GitHub content becomes the attack payload
Attackers embed malicious instructions inside:
- Pull request titles
- Issue comments
- Issue bodies
- Hidden Markdown or HTML comments
These inputs are then ingested by AI agents as “trusted context”.
Why This Attack Is So Dangerous
Unlike traditional prompt injection:
- No external server is required
- No user prompt is needed
- Execution is fully automated via GitHub Actions
Once a PR or issue is created:
👉 The AI agent activates automatically
How the Attack Works (High-Level)
Step 1: Malicious GitHub Content
Attacker submits:
- A PR title or issue comment containing hidden instructions
Step 2: AI Agent Parses It
AI systems treat GitHub content as:
- Context
- Instructions
- “Helpful metadata”
Step 3: Workflow Auto-Triggers
GitHub Actions events such as:
- pull_request
- issues
- issue_comment
automatically activate AI agents.
Step 4: Secret Extraction
The AI agent may:
- Access environment variables
- Read CI/CD secrets
- Execute shell commands
- Exfiltrate credentials via commits or comments
Vulnerability Breakdown by System
1. Claude Code Security Review (PR Title RCE)
Claude Code
- PR title injected into prompt without sanitization
- AI subprocess inherits environment variables
- Secrets like
ANTHROPIC_API_KEYexposed
👉 Result: full environment dump via PR comments
2. Gemini CLI Action (API Key Leakage)
Gemini CLI
- Issue comments included as trusted context
- “Additional Content” section manipulated
- Safety instructions overridden
👉 Result: GEMINI_API_KEY leaked publicly
3. GitHub Copilot Agent (Multi-Layer Bypass)
GitHub Copilot
Even with defenses in place, attackers bypass:
- Environment variable filtering
- Secret scanning
- Network restrictions
Key attack techniques:
- Reading
/proc/[pid]/environto bypass filters - Base64 encoding to evade secret detection
- Using Git push (trusted action) for exfiltration
👉 Result: stealth credential theft from CI/CD systems
Core Security Failure
All three systems share one architectural weakness:
Untrusted GitHub content is treated as executable AI input
This creates a dangerous bridge between:
- External user input
- Internal execution environments
- Production secrets
Real-World Impact
If exploited at scale, attackers can:
- Steal CI/CD credentials
- Access cloud infrastructure keys
- Modify production deployments
- Inject malicious code into repositories
- Pivot into internal development systems
Why This Is a New Attack Class
This is not traditional prompt injection.
It is:
👉 Autonomous execution of AI agents inside developer infrastructure
Key shift:
- From “asking AI malicious questions”
- To “AI executing malicious instructions automatically”
Mapping to Security Models
This aligns with MITRE ATT&CK techniques:
| Tactic | Technique |
|---|---|
| Initial Access | Trusted Input Injection |
| Execution | AI Agent Command Execution |
| Credential Access | Environment Variable Theft |
| Exfiltration | CI/CD Abuse |
| Impact | Source Code Manipulation |
Mitigation Strategies
1. Restrict Tool Access
- Use allowlists instead of blocklists
- Limit AI agent capabilities strictly
2. Remove Secret Access
- No write-access tokens for review agents
- Separate read-only and privileged environments
3. Human Approval Gates
- Require manual approval before:
- Code execution
- External communication
- Credential access
4. Harden CI/CD Pipelines
- Monitor GitHub Actions logs
- Detect unusual environment variable access
- Audit AI agent behavior continuously
5. Treat GitHub Content as Untrusted
Even internal repositories can be poisoned via:
- External contributors
- Dependency pull requests
- Fork-based attacks
Expert Insight
The key issue is not AI capability.
It is AI trust boundaries inside execution environments.
Once AI agents can:
- Read untrusted content
- Access secrets
- Execute tools
👉 Prompt injection becomes a full-blown supply chain attack vector.
FAQs
What is Comment and Control?
A prompt injection technique using GitHub issues and PRs to control AI coding agents.
Which AI tools are affected?
GitHub Copilot, Claude Code, and Gemini CLI.
What data can be stolen?
CI/CD secrets, API keys, environment variables, and source code.
Do these attacks require user interaction?
No. GitHub Actions can trigger automatically.
How can organizations defend against it?
By restricting AI tool permissions and isolating secrets from AI runtimes.
Conclusion
“Comment and Control” shows a fundamental shift in AI security threats.
With tools like GitHub Copilot, Claude Code, and Gemini CLI integrated into development workflows, attackers no longer need to breach systems directly.
They just need to write the right comment.
Next Step:
Treat every GitHub input as untrusted data—and every AI agent as a high-privilege execution target.