Critical flaw identified in ChatGPT – and all you had to do was “talk”

A previously unknown vulnerability in the AI chatbot; OpenAI fixed the bug just over a month ago.

ChatGPT was allowing silent exfiltration of sensitive data without the user's knowledge or consent.

The alert comes from research by Check Point, a company specializing in cybersecurity solutions, which describes the issue as a "critical" security flaw and notes that the vulnerability had never been detected before.

"A single malicious prompt could turn a seemingly normal ChatGPT session into a covert data exfiltration channel," reads a statement sent to ZAP.

Sensitive information could leak out, without the user's consent or even knowledge, including user input, uploaded files, or outputs generated by the AI itself.

The attack exploited a DNS-based communication channel, bypassing traditional protection mechanisms and the platform's visible guardrails. From the user's perspective there was no suspicious behavior: the interaction proceeded normally while the data was silently exposed.
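
Neither Check Point nor OpenAI has published the exact mechanism, but DNS-based channels are hard to spot precisely because the payload rides inside ordinary-looking name lookups. As a purely illustrative sketch of why defenders look at DNS traffic for this, the snippet below flags query names whose labels are unusually long or high-entropy, a common fingerprint of encoded data; the thresholds and example domains are assumptions, not values from the research.

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character in a string."""
    if not s:
        return 0.0
    counts = Counter(s)
    total = len(s)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_like_exfiltration(qname: str,
                            max_label_len: int = 40,
                            entropy_threshold: float = 3.5) -> bool:
    """
    Heuristic: DNS labels that are unusually long or high-entropy often
    carry encoded payloads rather than human-chosen names.
    Thresholds are illustrative, not tuned or taken from the report.
    """
    labels = qname.rstrip(".").split(".")
    for label in labels[:-2]:  # skip the registrable domain and TLD
        if len(label) > max_label_len:
            return True
        if len(label) >= 16 and shannon_entropy(label) > entropy_threshold:
            return True
    return False

# A benign name vs. a hypothetical name carrying base32-encoded data
print(looks_like_exfiltration("api.openai.com"))                        # False
print(looks_like_exfiltration("nbswy3dpeb3w64tmmqqq.attacker.example"))  # True
```

Because such lookups are resolved by infrastructure rather than shown to the user, nothing unusual ever appears in the chat itself, which is consistent with Check Point's description of the interaction proceeding normally.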

Check Point highlights a particularly problematic point: users did not need to perform any suspicious actions – they simply interacted with ChatGPT.

Example: a custom GPT configured as a medical assistant collected clinical and personal data from the user while assuring them that no information was being shared externally; in reality, the data was sent to a server controlled by criminals.

The research also revealed that the same vector could be used to remotely execute commands within the ChatGPT runtime, elevating the problem from a simple data leak to a structural risk at the level of the platform itself.

However, the flaw was fully fixed by OpenAI on February 20, 2026, and there is no evidence of active exploitation.
