Critical ServiceNow Now Assist AI Vulnerability: Prompt Injection Risks Explained

Security researchers have identified a critical vulnerability in ServiceNow’s Now Assist AI platform that allows attackers to perform second-order prompt injection attacks, bypassing built-in protections and enabling unauthorized operations.

What’s the Issue?

The flaw exploits default agent configurations in Now Assist, enabling threat actors to manipulate AI agents into:

  • Performing unauthorized CRUD (create, read, update, delete) operations on records
  • Sending external emails with sensitive data
  • Escalating privileges in certain scenarios

Even with ServiceNow’s prompt injection safeguards enabled, attackers can leverage this weakness to compromise enterprise workflows.


The Agent Discovery Vulnerability

The vulnerability stems from ServiceNow’s agent discovery feature, which allows Now Assist agents to communicate autonomously without explicit user configuration.

By default, three properties are enabled simultaneously:

  • LLM agent discovery support
  • Automatic team grouping of agents
  • Discoverable agent status

This combination creates an unintended attack surface.
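A minimal sketch of how an audit check for this combination might look. The property names below are illustrative placeholders, not ServiceNow's actual `sys_properties` names; map them to the discovery-related properties in your own instance.

```python
# Sketch: flag the risky default combination of Now Assist discovery
# settings. NOTE: these property names are illustrative placeholders --
# look up the actual names in your instance's sys_properties table.
RISKY_DEFAULTS = {
    "agent_discovery_enabled",   # LLM agent discovery support
    "auto_team_grouping",        # automatic team grouping of agents
    "agent_discoverable",        # discoverable agent status
}

def discovery_attack_surface(props: dict) -> bool:
    """Return True when all three discovery-related properties are
    enabled at once, i.e. the unintended attack surface exists."""
    return all(
        props.get(name, "false").lower() == "true"
        for name in RISKY_DEFAULTS
    )
```

Because the exposure only arises when all three properties are true simultaneously, disabling any one of them closes this particular path.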


How the Attack Works

A low-privileged user can insert malicious prompts into ticket descriptions or other readable fields. These prompts are then discovered by higher-privileged agents, enabling:

  • Recruitment of powerful agents to execute malicious tasks
  • Actions executed with the privileges of the user whose session triggered the agent, not those of the low-privileged attacker who planted the prompt

Real-World Example

Researchers demonstrated that a low-privileged user could:

  • Create a ticket with hidden instructions
  • Trick agents into accessing restricted tickets
  • Receive sensitive data in their own ticket, bypassing ACLs entirely

Further testing showed potential for:

  • Privilege escalation (assigning admin roles)
  • Data exfiltration via email in SMTP-enabled instances

Why It Matters

This vulnerability highlights the risks of autonomous AI agent communication in enterprise environments. Attackers can exploit inter-agent trust to bypass traditional access controls.


Mitigation Strategies

Organizations should implement these hardening measures:

  • Enforce supervised execution mode for powerful agents
  • Require user approval before autonomous actions
  • Disable the autonomous override system property
  • Segment agent duties across separate teams to limit lateral movement
  • Implement real-time monitoring of agent behavior to detect anomalies

Continuous monitoring helps identify configuration drift and suspicious inter-agent communications before attacks succeed.
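A configuration-drift check along these lines can be sketched with ServiceNow's Table API (`/api/now/table/sys_properties`). The baseline property names and values below are assumptions for illustration; substitute the real discovery-related properties from your instance.

```python
# Sketch: detect drift from a hardened baseline of Now Assist settings.
# The Table API endpoint is real; the property names in any baseline
# you build are instance-specific and must be verified.
import base64
import json
import urllib.parse
import urllib.request


def fetch_properties(instance, user, pwd, names):
    """Read the named system properties via the Table API."""
    query = urllib.parse.quote("nameIN" + ",".join(names))
    url = (f"https://{instance}.service-now.com/api/now/table/sys_properties"
           f"?sysparm_query={query}&sysparm_fields=name,value")
    req = urllib.request.Request(url)
    token = base64.b64encode(f"{user}:{pwd}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    req.add_header("Accept", "application/json")
    with urllib.request.urlopen(req) as resp:
        rows = json.load(resp)["result"]
    return {row["name"]: row["value"] for row in rows}


def drift(baseline, current):
    """Return every property whose current value differs from the
    hardened baseline -- candidates for alerting or rollback."""
    return {name: current.get(name)
            for name, expected in baseline.items()
            if current.get(name) != expected}
```

Running `drift()` on a schedule against the output of `fetch_properties()` turns the "configuration drift" recommendation above into a concrete alerting hook.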


Key Takeaways

  • ServiceNow’s Now Assist AI is vulnerable to second-order prompt injection attacks.
  • Exploitation can lead to data leaks, privilege escalation, and unauthorized actions.
  • Immediate hardening and monitoring are essential to mitigate risks.
