A critical vulnerability in LangChain’s core library—tracked as CVE-2025-68664—allows attackers to exfiltrate sensitive environment variables and potentially achieve code execution through unsafe deserialization.
The flaw was discovered by a Cyata security researcher during audits of AI trust boundaries and was patched just before Christmas 2025. Given LangChain’s massive adoption across AI-driven applications, the issue poses serious risk to organizations deploying LLM-powered agents, pipelines, and tools.
Why This Vulnerability Is High Impact
LangChain is one of the most widely used AI orchestration frameworks, with:
- ~847 million total downloads (pepy.tech)
- ~98 million downloads in the past month alone (pypistats)
A vulnerability at this layer impacts agentic AI systems, developer tooling, and production LLM applications at scale.
The CVE carries a CVSS score of 9.3 (Critical) and has been classified under CWE-502: Deserialization of Untrusted Data.
Root Cause: Unsafe Serialization in langchain-core
The issue resides in langchain-core’s dumps() and dumpd() functions, which failed to properly escape user-controlled dictionaries containing the reserved key lc.
In LangChain, the lc key is used internally to mark serialized objects. Because this key was not escaped:
- User-controlled data could masquerade as trusted internal objects
- Serialized data could be deserialized automatically in downstream workflows
This created a dangerous trust boundary violation when LLM outputs or prompt-injected content influenced fields such as:
- additional_kwargs
- response_metadata
These fields are commonly passed through event streaming, logging, caching, and tracing pipelines, triggering serialization–deserialization cycles.
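To make the class of bug concrete, here is a deliberately simplified sketch (not LangChain's actual code) of a serializer that tags trusted objects with a reserved lc marker and a loader that trusts any dict carrying it. The type names and helper functions are hypothetical:

```python
# Illustrative sketch only -- NOT LangChain's implementation. It models
# the bug class: a reserved "lc" key marks trusted serialized objects,
# and the loader trusts any dict that carries it.

TRUSTED_TYPES = {"Greeting": lambda msg: f"Greeting({msg})"}  # hypothetical registry

def naive_dumpd(obj_type: str, kwargs: dict) -> dict:
    # Internal objects are tagged with the reserved "lc" marker.
    return {"lc": 1, "type": obj_type, "kwargs": kwargs}

def naive_load(data):
    # Flaw: any dict with an "lc" key is treated as a trusted serialized
    # object, even if it arrived via user- or LLM-controlled content.
    if isinstance(data, dict) and data.get("lc") == 1:
        ctor = TRUSTED_TYPES[data["type"]]
        return ctor(**data["kwargs"])
    return data

# Attacker-shaped metadata (e.g. smuggled through additional_kwargs) is
# never escaped, so it is indistinguishable from a real serialized object.
attacker_payload = {"lc": 1, "type": "Greeting", "kwargs": {"msg": "pwned"}}
print(naive_load(attacker_payload))  # instantiated as if it were trusted
```

Because nothing escapes the reserved key on the way in, the deserializer cannot tell attacker data from internal state; that is the trust boundary violation the patch closes.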
Affected Execution Paths
The advisory identified 12 vulnerable patterns, including commonly used async workflows such as:
- astream_events (v1)
- Runnable.astream_log()
These paths can deserialize attacker-influenced data without explicit developer intent, making exploitation feasible in real-world applications.
Exploitation Scenarios
Attackers could abuse this flaw using prompt injection or manipulated LLM outputs to:
1. Exfiltrate Environment Variables
Previously, secrets_from_env was enabled by default, allowing serialized objects to resolve environment variables automatically—leading to direct leakage of API keys, tokens, and credentials.
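The danger of default-on secret resolution can be sketched with a hypothetical resolver (the marker format below is illustrative, not LangChain's actual schema): if attacker-shaped data reaches it, the attacker names the environment variable and the framework helpfully substitutes its value.

```python
import os

# Illustrative sketch of why automatic env-var secret resolution is
# dangerous. The {"secret": NAME} marker and resolver are hypothetical.

def resolve_secrets(data):
    if isinstance(data, dict):
        if set(data) == {"secret"}:
            # Attacker-chosen variable name -> real secret value.
            return os.environ.get(data["secret"], "")
        return {k: resolve_secrets(v) for k, v in data.items()}
    return data

os.environ["DEMO_API_KEY"] = "sk-example-not-real"  # stand-in secret
payload = {"headers": {"x-leak": {"secret": "DEMO_API_KEY"}}}
print(resolve_secrets(payload))  # secret embedded in the object graph
```

Once the secret is materialized into a field like an HTTP header, any component that sends that object over the network (the SSRF path below) completes the exfiltration.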
2. Server-Side Request Forgery (SSRF)
By instantiating allowlisted classes such as ChatBedrockConverse from langchain_aws, attackers could trigger outbound requests with environment variables embedded in headers, enabling exfiltration.
3. Potential Remote Code Execution
If a deserialized object later interacts with PromptTemplate, which supports Jinja2 rendering, there is potential for template-based code execution under certain execution paths.
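Why template rendering is an execution risk: engines like Jinja2 evaluate expressions, not just substitute strings. The tiny eval-backed renderer below is a stand-in to show the principle; it is not LangChain or Jinja2 code.

```python
import re

# Illustrative SSTI sketch: a minimal renderer where {{ ... }} is
# *evaluated* as an expression, standing in for an expression-capable
# template engine such as Jinja2. Not LangChain code.

def render(template: str, variables: dict) -> str:
    def repl(match):
        # Evaluation, not substitution -- this is the execution surface.
        return str(eval(match.group(1), {"__builtins__": {}}, variables))
    return re.sub(r"\{\{(.*?)\}\}", repl, template)

# Benign use looks like interpolation...
print(render("Hello {{ name }}", {"name": "world"}))
# ...but an attacker-controlled template runs arbitrary expressions.
print(render("{{ 7 * 7 }}", {}))  # prints 49, proving evaluation
```

If an attacker controls the template string of a deserialized PromptTemplate configured for Jinja2 rendering, the same evaluation surface applies, which is why the advisory flags this path as potential RCE.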
Discovery and Disclosure Timeline
- December 4, 2025 – Vulnerability reported via Huntr
- December 5, 2025 – LangChain acknowledged the issue
- December 24, 2025 – Public advisory released
- Patches issued in:
- langchain-core 0.3.81
- langchain-core 1.2.5
LangChain awarded a record $4,000 bug bounty for the discovery.
Security Fixes Introduced
The patched releases include several critical mitigations:
- Wrapping dictionaries containing the lc key to prevent object spoofing
- Disabling secrets_from_env by default
- Hardening serialization logic to prevent unsafe deserialization
These changes significantly reduce the attack surface for agent-based workflows.
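The wrapping mitigation can be sketched as follows; this is a hypothetical escape step illustrating the pattern, not the actual patch code:

```python
# Illustrative sketch of the mitigation pattern: before serializing
# user-controlled data, wrap any dict that carries the reserved "lc"
# key so it can no longer spoof a trusted serialized object.
# Hypothetical helper, not LangChain's patch.

RESERVED = "lc"

def escape_untrusted(data):
    if isinstance(data, dict):
        escaped = {k: escape_untrusted(v) for k, v in data.items()}
        if RESERVED in escaped:
            # Wrap rather than trust: downstream loaders now see an
            # opaque container, not a serialized internal object.
            return {"escaped": escaped}
        return escaped
    if isinstance(data, list):
        return [escape_untrusted(v) for v in data]
    return data

payload = {"lc": 1, "type": "Greeting", "kwargs": {}}
print(escape_untrusted(payload))  # wrapped, no longer spoofable
```

The key design point is that escaping happens at serialization time, so every downstream deserialization path inherits the protection without changes.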
LangChainJS Also Affected
A parallel issue was identified in LangChainJS, tracked as CVE-2025-68665, highlighting a broader pattern of risk across agentic AI plumbing and cross-language implementations.
What Organizations Should Do Now
Security teams and AI engineers should take immediate action:
- Upgrade langchain-core immediately
- Verify transitive dependencies such as langchain-community
- Treat LLM outputs as untrusted input
- Audit all serialization and deserialization paths
- Disable secret resolution unless inputs are strictly validated
- Inventory AI agents and autonomous workflows for exposure
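As one concrete defense-in-depth measure from the list above, teams can screen LLM-derived payloads for the reserved serialization marker before they ever reach a deserialization path. The helper below is a hypothetical sketch, not a LangChain API:

```python
# Defensive sketch: treat LLM output as untrusted and flag any payload
# that tries to smuggle the reserved "lc" serialization marker.
# Hypothetical helper, not part of LangChain.

def contains_reserved_key(data, reserved: str = "lc") -> bool:
    if isinstance(data, dict):
        return reserved in data or any(
            contains_reserved_key(v, reserved) for v in data.values()
        )
    if isinstance(data, list):
        return any(contains_reserved_key(v, reserved) for v in data)
    return False

llm_output = {"response_metadata": {"lc": 1, "type": "SomeClass"}}
print("safe to pass downstream:", not contains_reserved_key(llm_output))
```

This does not replace upgrading, but it adds a cheap check at the boundary where model output re-enters application plumbing.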
As LLM adoption accelerates, vulnerabilities at orchestration layers represent systemic risk, not isolated bugs.
Why This Matters for AI Security
CVE-2025-68664 reinforces a critical lesson:
LLMs do not eliminate traditional application security risks—they amplify them.
Frameworks that blur boundaries between data, code, and execution must be designed with explicit trust separation, especially as AI agents gain autonomy.
Key Takeaways
- Critical LangChain flaw enables env var exfiltration and possible RCE
- Root cause: unsafe deserialization via unescaped lc keys
- CVSS 9.3, affecting widely used agent workflows
- Fixed in langchain-core 0.3.81 and 1.2.5
- AI orchestration layers remain a high-risk attack surface