Behind the announced investments of €109 billion for France and €200 billion for the European Union, did the "AI Action Summit" in Paris, which ended on Tuesday, February 11, make any headway on safety in the face of the risks posed by AI systems? Have states around the world reached a consensus on this highly sensitive issue?
For some experts, the Paris AI Summit brought no progress at all: on the contrary, they see a step backwards in how seriously the subject is being taken. This despite Emmanuel Macron's statements advocating a "trust framework" necessary for the development of AI, and despite a promising start to the summit.
"I didn't come here to talk about AI security"
On February 6, ahead of the event, an "AI Safety Report" was published, a sort of "synthesis of the scientific literature on AI safety", the equivalent of an IPCC report for AI. Parallel events and conferences were also organized, for example by the NGO Pause IA.
But that was reckoning without the combative speech of American Vice President J.D. Vance, who came to sweep away any hope of progress on this issue. "I didn't come here to talk about AI safety," the technophile declared at the close of the event on Tuesday, February 11. It is time to take risks and set aside excessive caution, he argued in essence, taking a swipe along the way at the Europeans' desire to "tighten the screws" on the American AI and digital giants with "excessive" and "onerous" regulation.
Yet AI safety had been the central topic of the Bletchley summit, held in 2023 in the United Kingdom. This time the subject was relegated to the background, if not to the very bottom of the agenda: it is barely mentioned in the Paris summit's final declaration.
A step backwards for some experts
This shift can be explained by several factors. First, the new American administration is championing deregulation. Donald Trump has removed the few safeguards put in place by Joe Biden in this area and has entered into a standoff with the rest of the world, including Europe, to ensure that no regulatory constraint hinders the development of America's champions.
The country also refused to sign the summit's final declaration. In this non-binding text, the sixty or so signatories (including the European Union and China) commit to promoting "safe and trusted" AI. Alongside the United States, the United Kingdom and the United Arab Emirates also chose not to sign, proof that a consensus, even on a simple declaration of goodwill, is struggling to emerge.
Similarly, although a dedicated French body, INESIA (National Institute for the Evaluation and Security of AI), was created ahead of the summit, it has very few resources and no funding of its own. For some, such as the Center for AI Safety (CeSIA), a French association of researchers and experts, this amounts to a real step backwards.
In a press release published late on Tuesday, the association regrets "the striking gap between massive investments in AI and the resources allocated to securing it." Yoshua Bengio, winner of the 2018 Turing Award and founder and scientific director of Mila, the Quebec Artificial Intelligence Institute, echoed the same sentiment, writing on his X account that the summit "missed the opportunity" to "realistically address the urgent issue of the risks associated with the rapid development of cutting-edge models."
"Science shows that AI poses major risks in a time frame that requires world leaders to take them much more seriously," he added.
Dario Amodei, head of the AI company Anthropic, also deplored this "missed opportunity." "Time is running out," he wrote in a statement: "international discussions" must "address in more detail the growing security risks of this technology."
No risk assessment of AI tools by public authorities
All the more so since regulators have few means of assessing the risks inherent in a generative AI tool before it is made available to users. They are confronted with the "black box" phenomenon that AI giants refuse to open, as was already the case with social networks.
Without that access and cooperation from technology companies, it is difficult to measure and assess the risks. AI companies are left to evaluate their own work, as Rishi Sunak, then the British prime minister, lamented in 2023 during the Bletchley summit. Two years later, we are still at the same point, despite the AI Act, the European regulation on AI, and despite a code of practice on artificial intelligence that has been in development since September 2024.
For Mathias Cormann, Secretary-General of the OECD, who spoke at a round table on February 11 during the summit, "the OECD has developed principles on AI, recently updated, to facilitate the development of safe, trusted, human-centered AI, but we need (...) to systematically identify risks and deploy approaches to mitigate them. At the OECD, we have an incident tracking system (...), we have a kind of observatory to identify the policy measures put in place." But "we need (...) a governance framework, a general regulatory framework. We are far from that today, and we need to catch up so that we can safely reap the benefits of AI."
We need to "be concerned today about current AI tools," according to this professor
For Paul Salmon, a professor at the University of the Sunshine Coast in Australia, the "indifference of governments and the public to AI safety issues" can be explained by several misconceptions. Writing in The Conversation, he explains that we must "be concerned today about current AI tools." These technologies "are already causing significant harm to human beings and society," he adds, citing in particular "interference in elections, replacement of human labor, biased decision-making, deepfakes, disinformation and misinformation."
Safety was not the only topic to be sidelined during the summit. As noted by Contexte on Wednesday, February 12, the commitment to "the protection of personal data and privacy," present in the initial version of the summit's final declaration, ultimately disappeared from the text signed by the sixty or so countries. In a press release published on the first day of the summit, the Defender of Rights, the French authority responsible for protecting citizens' rights and freedoms, nevertheless insisted "that fundamental rights not be forgotten at this important moment."
