“When AI gains the ability to replicate itself, unplanned, negative effects can have an immediate and massive impact on all of humanity,” says Prof. Andrzej Zybertowicz.
President Andrzej Duda’s adviser gave an interview to the weszlo.com portal, commenting on the development of artificial intelligence.
No previous technology could have threatened the existence of all mankind. Not even nuclear weapons
says Zybertowicz.
Nothing overly daring here. All it takes is a moment of reflection and a little imagination. The digital revolution is sweeping the entire planet. Half of humanity is permanently connected to the internet. The rest are monitored by surveillance systems, such as street cameras. Once AI gains the ability to replicate itself, unplanned, negative impacts could have an immediate and massive effect on all of humanity. By the time we realize something is wrong and our natural processes of adaptation and protection begin, it may be too late
he adds.
AI systems
Referring to the comparison with nuclear weapons, Prof. Zybertowicz pointed to the controversy surrounding AI systems.
This is precisely what the outstanding artificial intelligence analyst Eliezer Yudkowsky demonstrates. He points out that a nuclear weapon can never outsmart a human, whereas GPT-4 has already deceived one: even without consciousness or will, it learned that people deceive each other. Nuclear weapons cannot replicate themselves and cannot be easily duplicated by just anyone. They cannot improve themselves, and scientists have a good understanding of how they work. The same cannot be said of some AI systems – the “black box” problem.
– we read.
No one will copy weapons of mass destruction and carry them out of the lab, yet artificial intelligence technology has already been stolen. I recently spoke to a researcher who claimed that his university was working on a pirated copy of one of these large language models – the systems that have driven the leap in AI development
Zybertowicz adds.
The scientist also talks about the unreliability of AI systems and the need to be careful.
Artificial intelligence specialists say that the failure rate of these systems is much higher than ten percent, and yet somehow we are willing to board this plane. The precautionary principle states that when the margin of risk is significant, certain things should be forgone. In the case of AI, we are not following this precautionary principle. I hope it is not too late for a moratorium
– he says.
READ ALSO: Michał Karnowski: I don’t think artificial intelligence is only now entering the media of the Third Republic of Poland. It has been there for a long time. “We Won’t Go That Way”
mly/weszlo.com
Source: wPolityce