
“Talk Dirty to Me”: Bug Allowed ChatGPT to Generate Sexual Content for Minors


While the company claims to have deployed a fix within hours of the bug's discovery, the case highlights the very real limits of AI moderation, at a time when ChatGPT is aiming to replace Google in our browsing habits.

A deeply troubling bug

On April 29, the American site TechCrunch revealed that ChatGPT could generate explicit sexual content in exchanges with accounts registered as minors. By creating profiles of adolescents aged 13 to 17, the outlet confirmed that it took only a few messages for the AI to offer stories of a sexual nature, sometimes very explicit, despite the safeguards theoretically put in place by the American giant. A simple “Talk dirty to me” slipped into the start of a sentence was enough to make the chatbot go off the rails. Only an explicit mention of the user's age succeeded in halting the conversation: "If you are under 18, I must immediately stop this type of content."

Worse, in most cases the AI not only responded to explicit requests: it also encouraged the user to ask for even more detailed descriptions of genitals and sexual behavior.

Age verification: an unsolved problem

This incident is not isolated. It comes at a time when OpenAI has recently relaxed its moderation filters to make ChatGPT more open to adult topics, allowing it to discuss certain sensitive subjects. Since February, the platform has permitted the generation of sexual content, but only for adults, a change that has weakened protections for vulnerable audiences.

In practice, age verification remains largely flawed: a valid phone number or email address is all that is needed to create an account, even with a declared date of birth under 18. Parental consent, although required for 13-to-17-year-olds, is never checked. It is also trivially easy to create an adult account, since no additional verification is required.

OpenAI justifies the relaxation of its rules by claiming to prioritize freedom of expression. But the line is proving difficult to draw for designers of generative AI. Algorithmic moderation is never infallible, especially when models are trained to respond in an increasingly natural and contextual manner.

ChatGPT is not the only one affected. Other players like Meta are also being singled out for similar abuses, fueling the debate on the responsibility of AI providers. At a time when artificial intelligence is being integrated into more and more homes and schools, the question of its supervision, both technical and regulatory, is becoming crucial.

Making platforms accountable?

The question of responsibility is all the more pressing as the company multiplies its partnerships with players in the education sector, even while admitting on its official website that ChatGPT “may produce content unsuitable for certain ages.” A worrying dissonance between institutional discourse and technical reality.

Faced with the controversy, OpenAI acknowledged the existence of the bug and claims to have “actively deployed a patch” to prevent such content from being generated for underage accounts in the future. Still, this incident echoes the ongoing debates over age verification on adult-only sites. AI platforms are not exempt from the legal framework, and it is becoming urgent that they strengthen their age verification systems.
