
The race for the ultimate AI is creating the uncontrollable monster everyone fears, warns this researcher

Turing Award winner Yoshua Bengio warns of the dangerous characteristics of the latest artificial intelligence models from OpenAI, Google, and others, particularly their tendency to lie to achieve their goals. For him, this is the perfect illustration of the sector's risky loosening of safety standards, and of the risk of creating an uncontrollable AI that competes with humans.

Is the intense competition in the AI sector leading us straight to disaster? That is what Yoshua Bengio believes. The Canadian academic, a specialist in AI, helped develop the technologies underlying the models built by OpenAI, Google, Anthropic (maker of Claude), and other leading players.

The threat of an ultra-intelligent AI making decisions hostile to humanity has been a known risk from the start. It is what pushed firms like OpenAI to move away, at least initially, from a purely commercial development model. But things have changed considerably since then: delivering ever more intelligent AI at all costs has become the priority, even if it means compromising on safety.

Commercial pressure risks leading to uncontrollable AI

And this is starting to show as models grow more intelligent and capable with each new iteration. Over the last six months, the researcher explains, leading models have autonomously developed very worrying behaviors. “We find evidence of manipulation, cheating, lying, and survival instincts,” Yoshua Bengio told the Financial Times (via Ars Technica).

“Unfortunately, there is a very competitive race between the leading labs, which pushes them to focus on making AI increasingly intelligent, without necessarily placing sufficient emphasis and investment on safety research,” the expert adds, before sounding the alarm: “This is all very scary, because we don't want to create something that competes with human beings on this planet, especially AIs that are smarter than us.”

This is a reference to the advent of artificial general intelligence, or AGI: models that would be conscious, possess extremely vast knowledge, and have reasoning abilities several orders of magnitude beyond those of the most intelligent humans of our time.

The comments were made on the sidelines of an interview announcing the launch of a non-profit organization dedicated to developing an alternative AGI, one that does not risk harming human beings. The initiative, called LawZero, specifically intends to insulate its research work “from commercial pressures.” The organization has just raised $30 million from American philanthropists.

They include Eric Schmidt, former CEO of Google, a company heavily invested in the race toward the first AGI worthy of the name. However, the modest sums involved and the initiative's late arrival undeniably make the researcher's goals harder to achieve, at least in a way that could steer the entire sector towards a more virtuous path.
