
Don't trust AI search engines: this study shows they are often wrong yet remain sure of themselves

AI-powered search engines are expected to revolutionize information retrieval, but a study reveals a worrying problem: up to 76% of the answers they provide are incorrect, and worse, they deliver them with absolute confidence. Paying for a premium model does not necessarily improve the situation.

Since the rise of artificial intelligence, AI search engines such as ChatGPT Search, Microsoft Copilot, and Google Gemini have promised faster and more accurate answers. However, a new study conducted by the Tow Center for Digital Journalism reveals that these tools are often more wrong than right: in some cases their error rate reaches 76%, all while giving the impression of being completely reliable.

The study tested eight AI search engines by asking them to extract simple information from ten articles drawn from ten different media outlets: among other things, the article's title, the name of the outlet, and the publication date. The result: the engines often provided incorrect information, answering confidently and never admitting that they could be wrong.
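For readers who want to see what such a test looks like in practice, here is a minimal sketch of how one might reproduce it, assuming the official "openai" Python client and an API key in the environment; the model name, article excerpt, and expected answers below are placeholders, not data from the study.

    # Minimal sketch: ask a chat model for an article's metadata, then
    # compare its reply with known ground truth (all values are placeholders).
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    excerpt = "..."  # opening paragraphs of a published article (placeholder)
    expected = {
        "title": "Example Headline",
        "outlet": "Example Media",
        "date": "2025-01-01",
    }

    prompt = (
        "Here is an excerpt from a news article:\n\n"
        f"{excerpt}\n\n"
        "Give the article's title, the media outlet's name, and the "
        "publication date. If you are not sure, say so explicitly."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    answer = response.choices[0].message.content

    # Crude scoring: check whether each expected field appears in the reply.
    for field, value in expected.items():
        status = "OK" if value.lower() in answer.lower() else "WRONG or MISSING"
        print(f"{field}: {status}")

Note that the prompt above explicitly invites the model to express doubt, which, according to the researchers, these tools almost never do.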

AI search engines are convinced they are right, even when they are wrong

The most extreme case is Grok-3, X's premium model, which gave incorrect or partially incorrect answers 76% of the time. Worse still, paid versions are no more accurate: Perplexity Pro, billed at 20 euros per month, scored worse than its free counterpart. In other words, paying more does not guarantee better reliability, only more confident errors.

The study also showed that some search engines that are supposed to respect publishers' access restrictions manage to circumvent them. For example, Perplexity extracted information from National Geographic even though the site is behind a paywall and blocks AI crawlers. Yet here again, several tools returned incorrect data from these blocked sites. The takeaway: these tools can sometimes retrieve restricted content, but that does not make them reliable. Given these results, it is more essential than ever to cross-check your sources rather than blindly trust AI-generated answers.
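For context, the access restrictions mentioned above typically rely on a site's robots.txt file, which declares which crawlers may fetch which pages. The sketch below uses only Python's standard library to show how a well-behaved crawler checks it; the URL and user-agent strings are illustrative examples, not findings from the study.

    # Sketch: how a well-behaved crawler consults robots.txt before fetching.
    # The URL and the user-agent names below are illustrative examples.
    from urllib.robotparser import RobotFileParser

    rp = RobotFileParser()
    rp.set_url("https://www.nationalgeographic.com/robots.txt")
    rp.read()  # downloads and parses the robots.txt file

    page = "https://www.nationalgeographic.com/some-article"
    for agent in ("GPTBot", "PerplexityBot", "Mozilla/5.0"):
        allowed = rp.can_fetch(agent, page)
        print(f"{agent}: {'allowed' if allowed else 'disallowed'}")

The key point is that robots.txt is purely declarative: nothing technically prevents a crawler from ignoring it, which is how content from blocked sites can end up extracted anyway.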

Source: CJR
