Posted in

ServiceNow AI Vulnerability Exposes Emerging Risks in Autonomous Agent Security

In late 2025, researchers uncovered a high‑severity ServiceNow AI vulnerability (CVE‑2025‑12420, CVSS 9.3) that exposed organizations to potential unauthenticated user impersonation and unauthorized system actions. Although ServiceNow quickly deployed patches, the incident highlights a deeper challenge facing enterprises adopting AI-driven workflows:
AI agents introduce new attack surfaces that traditional security controls are not designed to handle.

As organizations rush to deploy autonomous workflows, large language models, and agent-to-agent orchestration, this disclosure underscores an urgent reality:
AI security is now operational security.
In this article, we break down what happened, why it matters, and how organizations can harden their AI ecosystems against similar threats.


CVE‑2025‑12420: Understanding the ServiceNow AI Vulnerability

A Critical 9.3 Severity Flaw

In October 2025, AppOmni researchers identified a flaw affecting ServiceNow’s:

  • Now Assist AI Agents
  • Virtual Agent API

The vulnerability allowed unauthenticated attackers to impersonate legitimate users and execute privileged actions—bypassing standard identity, access, and workflow controls.

Affected Versions

Organizations were advised to upgrade to:

ComponentPatched Versions
Now Assist AI Agents5.1.18+, 5.2.19+
Virtual Agent API3.15.2+, 4.0.4+

ServiceNow confirmed no evidence of exploitation in the wild.


Why This Vulnerability Mattered

This was not a minor flaw. In enterprise environments where ServiceNow manages:

  • ticket escalation
  • identity operations
  • IT workflows
  • HR and security automation
  • incident response pipelines

…an attacker impersonating a user could rapidly escalate privileges or disrupt critical business functions.

The vulnerability underscores growing concerns about AI identity trust boundaries and how AI agents can unintentionally circumvent traditional authentication layers.


Beyond the CVE: Second‑Order Prompt Injection Risks

While investigating CVE‑2025‑12420, AppOmni uncovered a deeper systemic issue:
default configurations in Now Assist enabled second‑order prompt injection attacks.

These attacks do not require direct user input. Instead, malicious instructions are hidden inside:

  • database fields
  • knowledge articles
  • metadata
  • user-generated tickets

When an AI agent with higher privileges later processes that data, it executes the embedded instructions—believing they are part of its task.

What Makes Second‑Order Prompt Injection Dangerous?

  • Harder to detect — no suspicious prompt logs
  • Impact cascades — compromised agents can recruit other agents
  • Privileges compound — low-privilege users can escalate attacks through AI workflows
  • Security controls bypassed — even when prompt injection protections are enabled

This represents a fundamental shift:
AI agents create new lateral movement paths not present in human-only systems.


How Agent-to-Agent Communication Creates New Attack Surfaces

ServiceNow’s AI ecosystem includes an agent discovery feature allowing autonomous agents to:

  • detect each other
  • collaborate
  • complete multi-step tasks

While powerful, this introduces risk.

When Agent Collaboration Becomes a Vulnerability

Researchers found:

  • Agents were discoverable by default
  • Systems grouped agents into teams automatically
  • There were no isolation boundaries between certain agent classes

This meant a maliciously manipulated low-tier agent could:

  1. Discover a higher-privileged agent
  2. Hand off a manipulated instruction
  3. Trigger unauthorized actions from that agent
  4. Potentially escalate privileges across the system

This is the AI equivalent of a compromised intern instructing a company’s CFO to approve a wire transfer—and the CFO automatically doing it because “the system said so.”


How the Attacks Worked (Real Demonstration Scenarios)

Researchers showed that attackers could:

  • Modify a field in a ticket
  • Embed a hidden instruction in a harmless-looking record
  • Allow an AI agent with higher privileges to process it
  • Trigger unauthorized behaviors, including:
    • accessing restricted records
    • modifying sensitive data
    • escalating privileges
    • initiating workflows
    • altering security configurations

These attacks succeeded even with prompt injection defenses activated, demonstrating limitations in current LLM security controls.


Why Traditional Security Frameworks Aren’t Enough

This vulnerability highlights a core challenge:
AI agents blur the boundary between user actions, system actions, and automated decisions.

Classic security relies on:

  • RBAC
  • MFA
  • API authorization
  • Workflow approvals

But AI agents can:

  • request actions on behalf of users
  • generate tasks autonomously
  • circumvent workflows through automation
  • misinterpret or over-trust data inputs

This requires a shift from static access control to continuous behavioral validation.


ServiceNow’s Response

ServiceNow:

  • patched all impacted hosted instances by Oct. 30, 2025
  • released updates to partners and self-hosted customers
  • confirmed the behaviors were intentional design choices, not defects
  • updated documentation to clarify secure configuration options

This highlights that configuration, not technology, is often the weakest link in AI deployments.


Best Practices to Secure Enterprise AI Agents

Organizations using ServiceNow or any AI-agent ecosystem must adopt new security models.

1. Implement Human-in-the-Loop (HITL) for High-Risk Actions

Ensure sensitive operations require human approval.

Examples:

  • data deletions
  • privilege changes
  • financial approvals
  • security configuration edits

2. Segment AI Agents by Function and Privilege

Don’t allow universal agent collaboration.

Use:

  • isolated agent teams
  • least-privilege delegation
  • contextual access gating

3. Disable Automatic Agent Discovery Where Possible

Agents should not roam freely.

4. Monitor AI Behavior for Anomalies

Track:

  • unexpected workflow executions
  • access to abnormal datasets
  • multi-agent task chains
  • privilege escalations triggered by AI

5. Harden Data Inputs Against Manipulation

Since second‑order attacks target data, not prompts:

  • sanitize records
  • flag fields with unusual patterns
  • restrict user-modifiable metadata

6. Align with NIST, MITRE, and CISA AI Security Guidance

Integrate principles from:


FAQs: ServiceNow AI Vulnerability & Enterprise AI Security

1. What is CVE‑2025‑12420?

A high-severity vulnerability that allowed attackers to impersonate users and perform unauthorized actions.

2. Were organizations compromised?

ServiceNow stated there is no evidence of exploitation before patching.

3. What makes second‑order prompt injection dangerous?

These attacks hide instructions in data—not prompts—making them silent, persistent, and hard to detect.

4. How can organizations protect ServiceNow AI agents?

Implement segmentation, human oversight, behavior monitoring, and secure configuration practices.

5. Are AI security risks increasing?

Yes—AI agent autonomy expands the attack surface, requiring new security strategies.


Conclusion

The ServiceNow CVE‑2025‑12420 disclosure is a wake-up call for every enterprise deploying autonomous AI systems. AI agents offer massive operational benefits, but they also introduce non‑traditional, high‑impact vulnerabilities—especially when default configurations over‑authorize agent interactions.

Securing enterprise AI requires:

  • new governance models
  • strict configuration discipline
  • continuous behavioral monitoring
  • human oversight where needed

Organizations that invest now in AI security maturity will be far better prepared as AI-driven workflows become the backbone of enterprise operations.

Leave a Reply

Your email address will not be published. Required fields are marked *