A researcher found a way to exploit an SSRF vulnerability related to custom GPTs to obtain an Azure access token.
The post ChatGPT Vulnerability Exposed Underlying Cloud Infrastructure appeared first on SecurityWeek.
Tenable researchers discovered seven vulnerabilities, including ones affecting the latest GPT model.
The post Researchers Hack ChatGPT Memories and Web Search Features appeared first on SecurityWeek.
Researchers have discovered that a prompt can be disguised as a URL and accepted as one by Atlas's omnibox.
The post ChatGPT Atlas’ Omnibox Is Vulnerable to Jailbreaks appeared first on SecurityWeek.
The AI agent was able to solve different types of CAPTCHAs and adjusted its cursor movements to better mimic human behavior.
The post ChatGPT Tricked Into Solving CAPTCHAs appeared first on SecurityWeek.
OpenAI has fixed the zero-click attack method, which researchers have named ShadowLeak.
The post ChatGPT Deep Research Targeted in Server-Side Data Theft Attack appeared first on SecurityWeek.
Researchers show how a crafted calendar invite can trigger ChatGPT to exfiltrate sensitive emails.
The post ChatGPT’s Calendar Integration Can Be Exploited to Steal Emails appeared first on SecurityWeek.
Instead of GPT-5 Pro, your query could be quietly redirected to an older, weaker model, opening the door to jailbreaks, hallucinations, and unsafe outputs.
The post GPT-5 Has a Vulnerability: Its Router Can Send You to Older, Less Safe Models appeared first on SecurityWeek.
Researchers demonstrate how multi-turn “storytelling” attacks bypass prompt-level filters, exposing systemic weaknesses in GPT-5’s defenses.
The post Red Teams Jailbreak GPT-5 With Ease, Warn It’s ‘Nearly Unusable’ for Enterprise appeared first on SecurityWeek.
Zenity has shown how AI assistants such as ChatGPT, Copilot, Cursor, Gemini, and Salesforce Einstein can be abused using specially crafted prompts.
The post Major Enterprise AI Assistants Can Be Abused for Data Theft, Manipulation appeared first on SecurityWeek.
LayerX has disclosed an AI chatbot hacking method, which it has named ‘man-in-the-prompt’, that abuses web browser extensions.
The post Browser Extensions Pose Serious Threat to Gen-AI Tools Handling Sensitive Data appeared first on SecurityWeek.