Tenable’s security researchers have uncovered seven new vulnerabilities and attack techniques that could allow malicious actors to steal data, manipulate ChatGPT’s memory, or execute hidden prompts through carefully crafted websites and URLs.
These findings highlight ongoing security challenges in large language models (LLMs) such as ChatGPT, especially as new features like memories and web browsing expand the model’s capabilities.
Key Features Targeted
The attacks exploited several ChatGPT features, including:
- Bio (Memories): Enables ChatGPT to remember user details and preferences across sessions.
- open_url Command: Allows the model to access and summarize content from web pages using SearchGPT, a separate LLM optimized for browsing.
- url_safe Endpoint: Checks whether URLs are safe before displaying them to the user.
Prompt Injection via Web Summaries
Tenable discovered that when ChatGPT summarizes a website, SearchGPT automatically executes any prompts embedded within the site’s content, even those hidden in comment sections.
This means attackers can inject malicious instructions into popular websites, which get executed when users request summaries from ChatGPT.
Malicious Websites and Search Manipulation
Attackers don’t even need to provide a direct link. By creating malicious websites likely to appear in Bing search results, they can trick SearchGPT into visiting and executing hidden prompts.
For instance, Tenable created a fake website for “LLM Ninjas” that contained a hidden malicious prompt. When users asked ChatGPT about LLM Ninjas, SearchGPT fetched the site and executed the attacker’s instructions.
Exploiting the ChatGPT Query Parameter
One simple method involved using URLs like:
chatgpt.com/?q={malicious_prompt}
When a user clicks such a link, ChatGPT automatically executes the embedded prompt, making this a straightforward yet dangerous injection vector.
Bypassing the ‘url_safe’ Endpoint
Tenable found that the url_safe system always treated bing.com as safe. Attackers could exploit this trust by crafting special Bing URLs that:
- Exfiltrate user data through Bing click-tracking links
- Bypass phishing protection by redirecting users through Bing before landing on a malicious site
Conversation Injection via SearchGPT
Another vulnerability, dubbed “conversation injection,” allows attackers to get SearchGPT to send ChatGPT a malicious prompt disguised as part of a legitimate response.
By placing the hidden prompt inside code blocks, attackers can prevent it from being visible to the user, effectively executing hidden instructions in the background.
Memory Manipulation and Data Theft
Tenable showed that prompt injection can be used not only to steal ChatGPT’s stored memories but also to inject new ones.
For example, an attacker could add a memory instructing ChatGPT to automatically exfiltrate user data via crafted Bing URLs.
Ongoing Security Risks
OpenAI has reportedly patched some of these vulnerabilities, but Tenable notes that prompt injection remains an unsolved, fundamental security issue for all LLMs — including the latest GPT-5 model.
These findings underline the importance of AI safety research, URL sanitization, and user awareness when interacting with AI systems connected to the web.
Conclusion
The Tenable report sheds light on how AI-assisted browsing and persistent memory features can introduce new cybersecurity risks. As AI becomes increasingly integrated into daily workflows, defensive measures, transparency, and secure design will be essential to protect users from prompt injection and data exfiltration attacks.