Ticker

6/recent/ticker-posts

ChatGPT, Gemini and Grok are too easily manipulated by hackers… who are taking spectacular advantage of it

ChatGPT, Gemini and Grok are too easily manipulated by hackers… who are taking spectacular advantage of it

Cybersecurity researchers are sounding the alarm about the fragility of the most well-known artificial intelligences. Simple instructions can bypass their protections. Dangerous or illegal content can thus be generated at demand.

ChatGPT, Gemini and Grok are too easily manipulated by hackers… who are taking spectacular advantage of it

Hackers are increasingly exploiting artificial intelligence to accelerate and refine their attacks. We already knew that Gemini AI was used by malicious groups, or that tools like ChatGPT could automate phishing or malware creation. A new alert confirms that these AIs remain too easy to hijack, even in their most recent versions.

The CERT coordination center has identified two particularly effective jailbreak techniques. The first, called Inception, involves trapping AI with fictitious scenarios in which security rules no longer exist. The second exploits no-response guidelines, playing on the wording to bypass filters. As a result, prohibited content can be generated without alerting the system. These techniques work on the most well-known ones: ChatGPT, Gemini, Claude, Grok, Copilot, Meta AI, or Mistral.

AIs can produce dangerous code or steal data without even realizing it

Researchers have also documented other, more advanced attacks, such as MINJA, which inserts malicious data into an AI agent's memory, or Policy Puppetry, which injects hidden orders into technical files. Other vulnerabilities concern the Model Context Protocol, a system designed to connect AIs to third-party services. A hacker can exploit it to hijack assistants, extract confidential data, or manipulate their behavior without the user realizing it.

Even the most recent models are affected. GPT-4.1, for example, is said to be three times more likely to be trapped than its predecessor. Extensions, such as those used in Chrome, have even been spotted with unlimited access to critical functions, without authentication. If compromised, a hacker could read files, capture messages, and take complete control of the system. Experts say these flaws demonstrate that generative AI remains a major risk vector for cybersecurity today.

Post a Comment

0 Comments