Lucie, the catastrophic French AI under a deluge of mockery

A few days ago, the French company Linagora released Lucie, a generative artificial intelligence developed with the French National Center for Scientific Research (CNRS). This AI, which presents itself as an alternative to the likes of ChatGPT or Google Gemini, is intended to enter the "world of education" later this year.

Presented as the very first 100% French, open-source AI model "aligned with European values", it is financed by the State through the France 2030 investment plan, which aims to strengthen France's competitiveness in the field of AI.

As part of this first test phase, Internet users were invited to chat with Lucie, just as they would with ChatGPT or any other chatbot. Within a few days, feedback flooded social media. And it was not good at all…

Cow eggs and Adolf Hitler

Lucie quickly became an object of mockery among Internet users. People who interacted with the AI found that it piled up errors and absurdities. For example, Lucie sometimes proved incapable of doing basic arithmetic, or of explaining its result. On social media, examples abound. In the cases highlighted by testers, the AI keeps spreading false information and making historical and factual errors.

In fact, Lucie seems to be at the level of a generative AI launched two years ago, such as the first public version of ChatGPT, based on the GPT-3 model. Like that early model, Lucie gives its interlocutors absurd answers. The AI can, for instance, assume that cow eggs exist… and confuse them with chicken eggs. In short, the result falls far short of the expectations of users now accustomed to more sophisticated models, such as GPT-4o.

Worse, it turns out that Lucie lacks the safeguards meant to prevent abuses and other controversial remarks. Vincent Flibustier, a social media trainer and creator of the satirical newspaper NordPresse, even managed to make it speak in the manner of Adolf Hitler. A shame for an AI intended for the world of education.

For innovation expert Alain Goudey, the project "was clearly not ready for general public release". Lucie failed the usual battery of tests he runs on models to probe their capabilities.

A premature testing phase

Faced with the outcry caused by Lucie's responses, Linagora chose to cut its losses. The company suspended the experimental phase after 48 hours, although the test was supposed to last a month. It states that it is "temporarily closing access to the Lucie.chat platform" and that Lucie is primarily an "academic research project aimed at demonstrating the capabilities to develop digital commons of generative AI."

The French company, which specializes in free software, explains that Lucie is based on an AI model that is still completely raw. For the moment, the model has no safeguards at all to block inappropriate behavior. As it acknowledges, "the answers generated by LUCIE are therefore not guaranteed and some contain biases and errors".

Linagora says it is well aware that "the 'reasoning' capabilities (including on simple mathematical problems) or the ability to generate code of the current version of 'Lucie' are unsatisfactory", but had hoped "that a public launch of the Lucie.chat platform was nevertheless possible in the logic of openness and co-construction of Open Source projects". The firm admits that the test was "premature", given that "the instruction phase was only partial".

A simple communication error?

In other words, the model has not yet been fully trained. The test phase was precisely intended to allow a "collection of instruction data" to complete Lucie's training. Unlike AIs like ChatGPT, Lucie is restricted to French-language, open-source data, which imposes limitations on it, the company adds. The Ministry of National Education is not yet involved in the project, although Linagora is responsible for targeting "use cases related to education".

The company pleads a communication failure. For Linagora, it is "obvious that we have not communicated and clarified well enough what LUCIE can or cannot do in its current state, as well as the nature of the work carried out so far".

In response to the mockery from some Internet users, Damien Lainé, head of research and development (R&D) engineering at Linagora, points out that Lucie is not yet an AI in the strict sense. From a technical point of view, it is still only an "interface for interacting with a probabilistic language model, which predicts words based on a given sequence, all within a limited context window".
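Lainé's description of a model that "predicts words based on a given sequence" can be illustrated with a deliberately simplified sketch. The toy bigram model below (an illustration only, not Lucie's actual implementation, which uses a far larger neural network) predicts the next word purely from observed word-to-word frequencies:

```python
from collections import Counter, defaultdict

# Toy illustration: a language model predicts the next word from the words
# that came before it. Here the "context" is just one previous word (a bigram
# model); real models like Lucie condition on a much longer context window.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word follows which in the corpus.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word after `word`, or None if unseen."""
    counts = following.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

print(predict_next("the"))  # picks the most frequent follower of "the"
```

A model like this has no notion of truth or reasoning, only statistical co-occurrence, which is why, as Lainé suggests, errors and absurd continuations are expected at this stage.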

In other words, we should not expect reasoning abilities from Lucie. Lainé adds that the project stands out from behemoths like ChatGPT through "total transparency concerning all the data used for its training". According to him, it is this "transparency that gives value to the initiative."

Despite the fiasco of the last few days, work on Lucie continues.

Source: Lucie.chat
