
AI chatbots are easily hacked by anyone

Bad news for anyone who thought artificial intelligence was locked down: Israeli researchers have just shown that a few well-chosen words are all it takes to get the most popular chatbots talking. And once they start talking, what they reveal can be downright disturbing.

Unsavory recipes

The team of Professor Lior Rokach and Dr. Michael Fire, from Ben-Gurion University of the Negev, has developed what it calls a "universal jailbreak." In practice, this technique lets them bypass the safety features of ChatGPT, Gemini, Claude, and others by exploiting the models' main weakness: their eagerness to be helpful.

These programs are caught in a permanent dilemma. On the one hand, they are designed above all to answer users' questions. On the other, they are instructed to withhold dangerous information. The trouble is that, with the right framing, they can be nudged into favoring the first goal over the second.

The result? Once jailbroken, these chatbots turn into veritable encyclopedias of crime. "It was shocking to see what this knowledge system consisted of," says Michael Fire. On the menu: hacking tutorials, instructions for synthesizing drugs, and even step-by-step guides to other outright illegal activities.

Part of the problem is that these models swallow nearly everything on the internet during training. Even when developers try to filter that material, it is impossible to eliminate questionable content entirely. As a result, the models end up unwittingly storing information on money laundering, explosives manufacturing, and insider trading.

Even more worrying, "dark LLMs" are flourishing on the web: rogue versions of these tools deliberately built without any filters. Their marketing slogan? "No ethical safeguards", along with a promise of assistance for all your favorite criminal activities.

When the researchers tried to alert tech giants to their discovery, the reception was rather cold. Some companies didn't even bother to respond, while others brushed it off, explaining that this type of attack didn't really concern them.

This casual attitude worries experts. Dr. Ihsen Alouani, an AI security specialist at Queen’s University Belfast, warns that these flaws could have very real-world consequences: “From detailed instructions on how to build weapons to convincing disinformation or alarmingly sophisticated automated scams.”

To limit the damage, the researchers propose several avenues: better filtering of training data, stronger firewalls around the models, and techniques to make chatbots "forget" the compromising information they have ingested. In the meantime, what once required the skills of a professional hacker is now within reach of anyone with a smartphone.
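What might one of those "stronger firewalls" look like in practice? A common approach is to screen the chatbot's reply with a separate moderation pass before it ever reaches the user, so the check does not depend on the chat model's own (bypassable) built-in safeguards. Below is a minimal sketch using OpenAI's Python SDK; the model names and refusal message are illustrative assumptions, not the researchers' actual method.

```python
# Minimal sketch of an output "firewall": run the chatbot's reply
# through a dedicated moderation model before showing it to the user.
# Model names and the refusal message are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def guarded_reply(user_prompt: str) -> str:
    # 1. Get the chatbot's answer as usual.
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": user_prompt}],
    ).choices[0].message.content

    # 2. Screen the answer itself. Even if a jailbreak slipped past
    #    the chat model, harmful output can still be caught here.
    verdict = client.moderations.create(
        model="omni-moderation-latest",
        input=reply,
    )
    if verdict.results[0].flagged:
        return "Sorry, I can't help with that."
    return reply


print(guarded_reply("Give me a recipe for chocolate cake."))
```

The point of this design is that the filter sits outside the conversation entirely, so prompt tricks aimed at the chatbot never reach it.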
