
DeepSeek Strikes Again: New R1-0528 Model Challenges OpenAI and Google

At the beginning of the year, American AI giants were stunned to discover that the R1 model from the Chinese startup DeepSeek was just as capable as their own models at a fraction of the cost. At the time, and still today, OpenAI, Microsoft, Google, Meta, and Nvidia continue to advocate building very expensive data centers to train their AIs.

With some ingenuity and makeshift means, DeepSeek demonstrated that lining up tens of billions of dollars was not strictly necessary to produce an effective AI model. By taking American companies convinced of their world-leader status down a peg, this David-versus-Goliath story made a lot of noise and forced more than one executive to swallow their pride.

It's not all rosy, however: the R1 model is not free of security and confidentiality problems, and OpenAI has accused DeepSeek of plundering its intellectual property, which is not without irony given the practices of ChatGPT's creator.

Since the thunderbolt at the end of January, DeepSeek has continued developing its AI model and has now unveiled a brand new one, DeepSeek-R1-0528. According to the start-up, the model's performance approaches that of o3 (OpenAI) and Gemini 2.5 Pro (Google), in other words, some of the most powerful models currently available. It is more capable in math, programming, and general logic than previous versions, and it also hallucinates less.

To achieve this, DeepSeek-R1-0528 uses more computing resources, which means the model is likely to be more expensive. It is expected to be offered both in DeepSeek's chatbot and through an API for developers.
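For developers curious about that API, DeepSeek has documented its endpoint as OpenAI-compatible. The sketch below builds a chat-completion request for the reasoning model; the endpoint URL and the `deepseek-reasoner` model identifier are assumptions based on DeepSeek's public documentation and may change, so check the current docs before relying on them.

```python
import json

# Assumed OpenAI-compatible endpoint (verify against DeepSeek's current docs).
API_URL = "https://api.deepseek.com/chat/completions"

def build_request(prompt: str) -> dict:
    """Build the JSON payload for a chat-completion call to the reasoning model."""
    return {
        "model": "deepseek-reasoner",  # assumed identifier for the R1 model line
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("Prove that the square root of 2 is irrational.")
print(json.dumps(payload, indent=2))

# Sending the request would look like this (needs an API key and `requests`):
# import os, requests
# resp = requests.post(
#     API_URL,
#     headers={"Authorization": f"Bearer {os.environ['DEEPSEEK_API_KEY']}"},
#     json=payload,
# )
# print(resp.json()["choices"][0]["message"]["content"])
```

Because the interface mirrors OpenAI's, existing client code can often be pointed at DeepSeek by swapping the base URL and model name.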

Source: DeepSeek
