A group of 12 MEPs called on world leaders to hold a summit to find ways to control the development of advanced artificial intelligence (AI) systems. According to them, these technologies are developing faster than expected. Google CEO Sundar Pichai admitted that concerns about artificial intelligence kept him awake at night and that the technology could be “deeply harmful” if implemented poorly.
In an open letter, twelve MEPs working on EU technology rules called on US President Joe Biden and European Commission President Ursula von der Leyen to convene a summit, saying that companies developing artificial intelligence need to act more responsibly.
“We need clear rules”
The letter was co-written by MEP Kosma Złotowski (PiS).
The rapid and uncontrolled development of technology is a huge social, economic and legal challenge. In recent months we have seen new examples of the use of artificial intelligence that generate enthusiasm but also raise concerns about whether our rights as users and consumers of these tools are adequately protected. Stoking fear of innovation, however, is a road to nowhere. We need clear regulations requiring manufacturers and distributors of AI-based products and services to anticipate and minimize threats to our health, safety or personal data
the MEP told PAP.
He added that the new artificial intelligence law, which the EP is working on, could be the right tool to ensure the highest possible standards in this area.
However, we are aware that the European Union is only one participant in the global technological race, so we need to work closely with our democratic partners, especially the US, to set minimum standards for creating reliable artificial intelligence. Therefore, as rapporteurs working on the AI bill, we call on President von der Leyen and President Biden, among others, to convene a high-level world summit to agree on a first set of rules for the development, control and deployment of high-powered AI
– added Złotowski in an interview with PAP.
The letter urged democratic and “undemocratic” countries to exercise restraint in their pursuit of very powerful AI.
As Reuters points out, the MEPs’ letter comes weeks after Twitter owner Elon Musk and more than 1,000 tech industry figures demanded a six-month pause in developing systems more powerful than OpenAI’s ChatGPT.
That open letter, published in March by the Future of Life Institute (FLI), warned that artificial intelligence could spread disinformation at an unprecedented rate and that, if left unchecked, machines could outsmart humans and take their place.
The Google chief’s concerns
Google CEO Sundar Pichai admitted that concerns about artificial intelligence kept him awake at night and that the technology could be “deeply harmful” if implemented poorly.
Speaking to CBS’s 60 Minutes, Pichai also called for a global regulatory framework for artificial intelligence (AI) similar to the nuclear weapons treaties because, he warned, competition to produce more advanced technology could lead to security concerns being pushed aside.
It can be very harmful if implemented poorly and we don’t have all the answers yet – and the technology is moving fast. So does that keep me awake at night? Absolutely
– he said.
Alphabet Inc., Google’s parent company, launched the Bard chatbot in March, capitalizing on the global popularity of ChatGPT, which was developed by US technology company OpenAI and unveiled last November.
Pichai said that as AI develops, governments will need to come up with a global framework for regulating it. In March, thousands of AI experts, researchers and supporters, including Twitter owner Elon Musk, signed a letter calling for the development of “giant” AIs to be halted for at least six months, fearing that development of the technology could get out of hand.
Asked whether regulation similar to that governing nuclear weapons might be necessary, Pichai said: “We would need that.”
The AI technology used in ChatGPT and Bard, known as a large language model, is trained on massive datasets drawn from the internet and can generate authoritative-sounding answers to user questions in a variety of formats, from poems to academic essays and software code.
Pichai noted that AI can wreak havoc by producing misinformation.
AI makes it possible to easily create a video in which Scott [Pelley] says something, or I say something, that we never said. And it can look realistic. But, you know, on a societal scale, it can do a lot of damage
he explained.
He gave assurances that the version of artificial intelligence Google has made public through the Bard chatbot is safe, and said the company was holding back more advanced versions of Bard for testing.
Pichai admitted that Google doesn’t fully understand how its AI technology produces certain answers.
There’s an aspect of this that we, everyone in this field, call the “black box.” You know, you don’t fully understand it. And you can’t quite tell why it said that or why it got it wrong
he said.
When asked why Google made Bard public if it doesn’t fully understand how the technology works, Pichai replied:
Let me put it this way. I think we also don’t fully understand how the human mind works.
Pichai acknowledged that the public does not seem ready for rapid advances in AI. He said there “appears to be a mismatch” between the pace at which society thinks about and adapts to change and the pace at which AI is advancing.
He added, however, that people have at least become more alert to the potential dangers.
Compared to any other technology, I’ve seen more people worry about it earlier in its lifecycle. So I feel optimistic
he argued.
kk/PAP
Source: wPolityce