The AI Proxy Trap: CVE-2026-42208 Enables SQL Injection in LiteLLM

In the gold rush of generative AI, LiteLLM has emerged as a critical piece of infrastructure, acting as a universal proxy for over 100 language models. With over 22,000 GitHub stars, it is the “air traffic control” for enterprise AI, managing API keys for OpenAI, Anthropic, and AWS Bedrock.

However, a critical vulnerability tracked as CVE-2026-42208 has turned this gateway into a backdoor. This pre-authentication SQL injection allows unauthorized attackers to reach into the heart of the proxy’s database and exfiltrate the very credentials it was meant to protect.


The Vulnerability: One Single Quote to Rule Them All

The flaw is a classic CWE-89 (SQL Injection) found in the proxy’s authentication logic. When a client sends a request, LiteLLM verifies the Authorization: Bearer header against its PostgreSQL backend.

The Technical Breakdown

In affected versions, LiteLLM concatenated the user-supplied token directly into a SQL query string instead of using parameterized bindings. An attacker simply needs to append a single quote (') to their “token” to break the SQL syntax and inject a UNION SELECT statement.

Example Payload: Authorization: Bearer sk-litellm' UNION SELECT ... --

Because this happens during the verification step, it is a “pre-auth” exploit. Any HTTP client that can reach the LiteLLM port can execute this attack without needing a valid password or key.
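To make the root cause concrete, here is a minimal sketch (not LiteLLM’s actual code) of the difference between concatenating a bearer token into a SQL string and binding it as a parameter. It uses Python’s stdlib `sqlite3` purely for illustration; LiteLLM’s backend is PostgreSQL, but the CWE-89 pattern and its fix are identical.

```python
# Minimal sketch of the vulnerable vs. parameterized query pattern.
# sqlite3 is used only for illustration; the flaw is database-agnostic.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tokens (token TEXT, key_name TEXT)")
conn.execute("INSERT INTO tokens VALUES ('sk-valid-123', 'prod-key')")

payload = "x' UNION SELECT token, key_name FROM tokens --"

# Vulnerable pattern: the attacker-supplied token is spliced into the
# query string, so the quote breaks out of the literal and the UNION runs.
vulnerable = conn.execute(
    "SELECT token, key_name FROM tokens WHERE token = '" + payload + "'"
).fetchall()

# Safe pattern (what fully parameterized queries look like): the token is
# passed as a bound parameter and is treated strictly as data, never as SQL.
safe = conn.execute(
    "SELECT token, key_name FROM tokens WHERE token = ?", (payload,)
).fetchall()

print(vulnerable)  # the UNION leaks every stored token
print(safe)        # no rows: the payload matches nothing as a literal
```

The bound-parameter form is the fix: the same payload that dumps the whole table in the concatenated version returns zero rows once it is treated as an opaque value.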


Active Exploitation: 36 Hours from Patch to Attack

The speed of the threat landscape in 2026 is staggering. The Sysdig Threat Research Team observed the first active exploitation attempts just 36 hours after the vulnerability was published.

Targeted Data Extraction

Unlike generic botnets that “spray and pray,” the attackers behind this campaign showed surgical precision. They targeted three specific tables:

  1. LiteLLM_VerificationToken: Stores virtual API keys and the system’s “Master Key.”
  2. litellm_credentials: Stores the actual raw API keys for OpenAI, Azure, and AWS.
  3. litellm_config: Contains sensitive environment variables and proxy settings.

The attackers even adapted their payloads to match the specific database schema of LiteLLM, indicating they had studied the open-source code to maximize their haul.
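The general shape of such a schema-aware payload can be sketched as follows. Note this is a hypothetical reconstruction: the column names below are illustrative guesses, not the exact fields the attackers enumerated.

```python
# Hypothetical sketch of a UNION-based payload builder. Table and column
# names are illustrative; the real campaign tailored these to LiteLLM's
# open-source schema.
def union_payload(table: str, columns: list[str]) -> str:
    """Build an Authorization header value that injects a UNION SELECT."""
    return "Bearer x' UNION SELECT {} FROM {} --".format(
        ", ".join(columns), table
    )

header = union_payload("LiteLLM_VerificationToken", ["token", "key_name"])
print(header)
```

Because the injected `SELECT` must match the column count of the original query, studying the public source code is exactly what lets an attacker get a working payload on the first try.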


The “Blast Radius”: Beyond a Simple Leak

A compromise of a LiteLLM instance is functionally equivalent to a major cloud account breach. Once an attacker has your master keys:

  • Financial Theft: They can run massive inference workloads on your bill, potentially costing tens of thousands of dollars within hours.
  • Data Exfiltration: They can intercept any prompts and completions passing through the proxy, exposing company secrets and customer data.
  • Lateral Movement: In many enterprise setups, these API keys are linked to broader AWS/Azure IAM roles, providing a pivot point into the rest of your cloud infrastructure.

Critical Remediation Steps

The maintainers have released LiteLLM v1.83.7, which fully parameterizes all database queries.

1. Immediate Patching

If you are running versions 1.81.16 through 1.83.6, you must update to v1.83.7 or later immediately.
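A quick self-check for whether a deployed version falls inside the affected range (1.81.16 through 1.83.6) can be sketched with plain stdlib version-tuple comparison:

```python
# Sketch: check whether a LiteLLM version string is in the affected range.
# Assumes plain dotted numeric versions (no pre-release suffixes).
def parse(v: str) -> tuple[int, ...]:
    return tuple(int(p) for p in v.split("."))

def is_affected(version: str) -> bool:
    return parse("1.81.16") <= parse(version) <= parse("1.83.6")

print(is_affected("1.82.0"))   # True: patch required
print(is_affected("1.83.7"))   # False: patched release
```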

  • Workaround: If you cannot patch, set disable_error_logs: true in your general settings. This closes one of the primary paths the unauthenticated input uses to reach the vulnerable query.
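In a LiteLLM proxy `config.yaml`, the workaround setting would sit in the `general_settings` block, roughly like this (verify the exact key placement against your release’s documentation):

```yaml
# Mitigation sketch for a LiteLLM proxy config.yaml.
# Key placement based on LiteLLM's general_settings block; confirm
# against the docs for your deployed version.
general_settings:
  disable_error_logs: true
```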

2. Mandatory Credential Rotation

Assume compromise if your instance was internet-facing. You must rotate:

  • All LiteLLM “Virtual Keys.”
  • The system “Master Key.”
  • All upstream provider keys (OpenAI, Anthropic, AWS, etc.).

3. Network Hardening

AI gateways should never be directly exposed to the public internet. Use a VPN, a zero-trust tunnel (like Cloudflare Tunnel), or a mutually authenticated reverse proxy to restrict access to trusted internal users only.


Conclusion: The New Tier-1 Security Target

As AI becomes integrated into every corporate process, tools like LiteLLM become “Tier-1” targets. They are the new “Domain Controllers” for the AI era. This incident proves that as the value of AI credentials grows, the window for patching is shrinking.
