
Experts from around the world have called for stronger artificial intelligence (AI) regulation to prevent a potential “loss of control” as global leaders gather in Paris for a high-profile summit on AI governance. Unlike previous summits in Britain and South Korea, France, which is co-hosting the 2025 event with India, is emphasising AI “action” rather than safety concerns alone.
President Emmanuel Macron's AI envoy Anne Bouverot stated: "We don't want to spend our time talking only about the risks. There's the very real opportunity aspect as well."
However, Max Tegmark, head of the Future of Life Institute, urged France not to let slip its opportunity to lead on global AI regulation.
"France has been a wonderful champion of international collaboration and has the opportunity to lead the rest of the world," Tegmark said.
Tegmark's institute is backing a new platform, Global Risk and AI Safety Preparedness (GRASP), which aims to map potential AI risks across the globe. GRASP coordinator Cyrus Hodes said: "We've identified around 300 tools and technologies in answer to these risks."
The first International AI Safety Report was also released, backed by 30 countries along with the United Nations (UN), European Union (EU), and Organisation for Economic Co-operation and Development (OECD). The report highlighted numerous risks, from fake online content to cyberattacks.
Report coordinator Yoshua Bengio cautioned about a long-term “loss of control” over AI systems that could develop “their own will to survive.”
Tegmark warned that artificial general intelligence (AGI), AI capable of matching or exceeding human intelligence, could arrive sooner than expected.
"The big problem now is that a lot of people in power still have not understood that we're closer to building artificial general intelligence than to figuring out how to control it," he remarked.
To reduce these risks, Stuart Russell of the University of California, Berkeley, called for government regulation, particularly of AI-controlled weapons. Tegmark, for his part, drew a parallel between AI oversight and nuclear safety.
"Before somebody can build a new nuclear reactor... they have to demonstrate that it is safe. It should be the same for AI," Tegmark said.