EU MPs Call for Global AI Summit and Stronger Rules


A group of twelve Members of the European Parliament has urged world leaders to host a summit focused on guiding the development of advanced artificial intelligence systems. They argue that AI technologies are evolving more rapidly than anticipated and call for proactive rules. Google CEO Sundar Pichai has publicly admitted that concerns about artificial intelligence have kept him up at night, warning that poorly managed AI could be deeply harmful.

The twelve MEPs, who are crafting EU technology rules, addressed an open letter to U.S. President Joe Biden and European Commission President Ursula von der Leyen. They urged greater responsibility from AI companies to foster fairer trade and safer products.

We need clear rules

The letter, co-authored by MEP Kosma Złotowski, stresses that the rapid and often unbounded progress of technology poses broad social, economic, and legal challenges. In recent months, AI has sparked enthusiasm and anxiety alike, with users worried about the protection of their rights and personal data. The authors emphasize that fear of innovation must not drive policy; instead, manufacturers and distributors of AI-based products should anticipate and minimize risks to health, safety, and privacy.

In a discussion with PAP, Złotowski reiterated that a new AI law being shaped by the European Parliament could set the highest possible standards for the field.

Recognizing that the EU is only one player in a global tech race, the letter calls for close cooperation with democratic partners, especially the United States, to establish minimum standards for reliable artificial intelligence. The signatories urge von der Leyen and Biden, among others, to convene a high-level world summit to agree on a first set of rules governing the development, oversight, and deployment of powerful AI systems.

As Złotowski explained in an interview with PAP, the letter also asks both democratic and non-democratic states to exercise restraint as they pursue very capable AI tools.

Reuters notes that the letter arrived days after Elon Musk and more than a thousand technology figures called for a pause of roughly six months on developing systems more powerful than OpenAI’s GPT-4. That open letter, published in March by the Future of Life Institute, warned that AI could spread misinformation at unprecedented speed and that machines could outsmart humans if left unchecked.

These governance concerns are echoed by major technology leaders, who warn of the potential for harm when AI is not carefully managed.

Google’s perspective on AI governance

Pichai has said that AI poses risks serious enough to keep leaders awake at night and has argued for a global regulatory framework, likening it to international treaties governing critical technologies. He cautioned that, without proper oversight, competitive pressure could push safety questions aside.

The risk, as he described it, is that AI can do real damage when handled poorly, and many unknowns remain as the technology accelerates. He stressed the need for a global framework to ensure safe deployment and accountability as governments seek consistent standards.

Alphabet’s Bard, launched in March, aims to capitalize on the surge of interest in AI that followed the debut of OpenAI’s ChatGPT. Pichai noted that governments will likely need a global regulatory approach as development proceeds. Earlier in the year, thousands of AI researchers and supporters, including Elon Musk, had signed the open letter calling for a pause on building very large AI systems to give society time to catch up. Asked whether regulation comparable to nuclear nonproliferation might be needed, Pichai answered affirmatively.

Both ChatGPT and Bard rely on large language models trained on vast amounts of web data to generate diverse outputs, from poetry to code. Pichai acknowledged that AI can spread misinformation and that the models sometimes produce answers that are inaccurate or that even their developers cannot fully explain, a limitation often described as the “black box” problem.

He argued that the publicly released version of Bard is safe, noting that more advanced iterations have been held back for testing. He also conceded that even developers do not fully understand how these models arrive at certain responses. Human minds, in his words, are not entirely transparent either, a remark suggesting some humility about the current state of AI explainability.

Asked why Google released Bard despite this incomplete understanding, Pichai returned to the comparison with the human brain. He acknowledged a mismatch between the pace of AI advancement and society’s ability to adapt, while expressing cautious optimism as public awareness of the risks grows.

In closing, Pichai recognized ongoing questions about AI’s societal impact and the need for thoughtful governance as the technology moves forward. The conversation underscores a broader push toward international cooperation in setting standards while the field continues to evolve at a rapid pace. (Source: wPolityce)
