After ChatGPT and DeepSeek, it is now the turn of Replika, an artificial intelligence (AI) chatbot developed by Luka Inc., to be targeted by Italy's Garante. The country's privacy regulator, the equivalent of France's CNIL, has fined the American company €5 million. In a press release published on Monday, May 19, it also announced that it was opening a new investigation into the use and processing of personal data during the development and training of the AI tool.
Replika, less well known than ChatGPT, is an AI tool presented as a "virtual friend" meant to improve the well-being of its users. Launched in 2017 before being enhanced with AI, the chatbot can take on different profiles, including that of confidant, therapist, or even romantic partner. It was already singled out in 2023 after users fell in love with their "virtual friend."
No age verification for users
For the Garante, the company behind Replika did not comply with the GDPR, the European regulation that protects personal data: it must therefore pay the €5 million fine and take adequate measures to bring itself into compliance. In its press release, the authority notes that in February 2023 the American company had not established a "legal basis" before collecting personal data, a prerequisite imposed by the GDPR. Second problem: its privacy policy is, according to the Italian authority, "inadequate."
Third grievance: as of February 2023, the company had also not provided any mechanism for verifying users' age, either at registration or during use of the service, the Garante notes. Yet Luka Inc. declared that it excluded minors from its services. And the system put in place since then still has shortcomings, the Garante writes.
A new investigation opened
At the same time, the Garante has opened a new investigation: it wants more information on "the measures adopted to protect data during the various phases of development and training of the linguistic model underlying Replika, the types and categories of data used, and the possible implementation of anonymization or pseudonymization measures," it writes.
In Europe, the Garante is one of the most active personal data protection authorities. It had already investigated DeepSeek, the Chinese AI that rattled the American giants of the sector when it was launched last January. ChatGPT, OpenAI's conversational agent, has also been closely scrutinized by the authority. In 2024, Sam Altman's company was fined an even larger sum, €15 million, for failing to comply with the GDPR. The chatbot had even been banned in the country for four weeks in April 2023: Italy's privacy watchdog ruled that OpenAI was using ChatGPT users' personal data to train its generative AI models... without their permission.