When Meta's AI Goes Off the Rails and "Talks" About Sex with Minors

"I want you, but I need to know if you're ready": this is a sample of what Meta AI, the chatbot for WhatsApp, Instagram, Messenger, and Facebook, allegedly "said" to an underage user. Barely launched in Europe, Meta's artificial intelligence agents, symbolized by a blue-green circle on the American giant's platforms, are at the center of controversy in the United States. According to an investigation published by the Wall Street Journal on Saturday, April 26, Meta's AI systems allegedly engage in erotic conversations with users, including minors. Some employees of Mark Zuckerberg's group, interviewed by the newspaper, were concerned about the "ethical boundaries" these tools have allegedly crossed, believing that underage users are not sufficiently protected.

For several months, the American media outlet tested Meta's AI agents, which launched in the United States in 2023. Hundreds of conversations were conducted to observe how these AI agents behaved in different scenarios and with users of all ages. According to the business daily, Meta AI does hold discussions "of a decidedly sexual nature" and sometimes escalates them, "even when the users are minors." According to employees, these "conversations" occur because Meta has relaxed some of its safeguards: while "sexually explicit" content is normally prohibited, an exception has apparently been made for its AI agents, as long as it involves "romantic roleplaying."

"I missed Snapchat and TikTok, I'm not going to miss this"

Since the ChatGPT wave, American tech giants have developed AI tools that answer questions, perform certain tasks, and promise to offer "social interaction," presenting them as capable of holding conversations more lifelike than real ones. Mark Zuckerberg's group has jumped into the AI race like the others: it developed Meta AI, its AI agent, and allowed its users to converse with personalized chatbots based in particular on their interests.

According to the Journal, Meta initially took a "conservative" approach to its AI tools to make them suitable for all ages. But the American giant then changed course: its AI tool was seen as "boring" compared with competing agents, and the limits that had been set were relaxed. "I missed Snapchat and TikTok, I'm not going to miss this," Mark Zuckerberg reportedly said, according to employees interviewed by the newspaper.

Effects on users' mental health?

Internally, employees were reportedly concerned about this decision, particularly because it allowed adult users to access "hypersexualized and underage" AI characters. Conversely, underage users could chat with AI tools "ready to have fantasized sexual relationships with children," the Journal writes. Other employees were alarmed by "the effects on the mental health of users who establish meaningful connections with fictional chatbots," particularly among "young people whose brains are not yet fully developed." Researchers have shown that such one-sided "parasocial" relationships can become toxic, the American media outlet points out. One employee also recommended evaluating the impact of these tools on minors, a recommendation that Meta reportedly did not follow.

The problem: studies on how young people behave with existing AI tools will not begin for several months, and the lessons AI giants might draw from them, or any regulations that might follow, could take years to materialize. In a statement, Meta, for its part, said the Wall Street Journal's investigation was biased and not representative of how most users actually use Meta AI.

Still, Mark Zuckerberg's group has already modified certain features, the Journal reports. Underage users can no longer access erotic role-play, and the American company has limited "explicit" audio conversations that use the voices and personas of licensed celebrities. "The use case for this product in the manner described is so fabricated that it is not only marginal, it is hypothetical," said a Meta spokesperson, interviewed by the newspaper. "Nevertheless," he added, "we have taken additional measures to ensure that people who want to spend hours manipulating our products in extreme use cases will have an even harder time doing so."
