Are consumers sufficiently warned about the risks of "hallucinations" and fabricated results when using DeepSeek, ChatGPT's Chinese rival? This is the question posed by the Italian competition and consumer protection authority, the AGCM, on Monday, June 16. The antitrust authority explains on its website that it has opened an investigation into the Chinese artificial intelligence (AI) startup, suspected of "unfair commercial practices." It believes that users were not sufficiently warned that this generative AI tool could produce false information.
This is not the first time that DeepSeek, the Chinese chatbot that is shaking up American AI giants, has been in the crosshairs of an Italian authority. Last January, the competitor of Gemini and Le Chat was the subject of several questions from the authority in charge of personal data, the equivalent of the CNIL in the country.
Were users sufficiently warned about the chatbot's hallucination risks?
But this time, it is the authority responsible for consumer protection and competition that is banging its fist on the table. In its letter, the AGCM writes that DeepSeek is suspected of "not informing in a sufficiently clear, immediate and intelligible manner that users of its AI models could be victims of what are called, in technical jargon, 'hallucinations'," a term that the authority then defines: "situations in which, in response to a user command (a prompt), the AI model generates one or more results containing inaccurate, misleading or invented information."
While the general conditions of use (CGU) do state that these risks exist, the AGCM believes this information is not sufficiently accessible, since users must visit the CGU page to find it. They may therefore assume that the information provided by the Chinese conversational agent is reliable. DeepSeek has thirty days to respond to the antitrust authority's questions.
First action initiated last January against the Chinese chatbot
This is the second action initiated against DeepSeek in the country. Last January, the Italian equivalent of the CNIL had already looked into the Chinese chatbot, that time in the area of personal data. It estimated that the generative AI application potentially posed risks to the data of millions of people in Italy. The Italian authority wanted to know "what personal data was collected" when an Italian user used DeepSeek, whether on the website or the application.
The Italian data protection authority also requested information on the training sources, the purposes of data collection, the legal basis chosen for processing personal data, and the location of the servers where the data collected by the chatbot is stored. Twenty days later, it ordered DeepSeek to block access to its chatbot because the Chinese company had not addressed its concerns regarding personal data protection. When contacted, DeepSeek had not responded to our request for comment at the time of publication. As a reminder, in 2023, ChatGPT was suspended for a month in Italy after failing to meet the expectations of the Garante, the Italian equivalent of the CNIL.