Scientists predicted a ‘nuclear catastrophe’ due to artificial intelligence


More than a third of computer scientists surveyed said the development of artificial intelligence is not good for humanity and could lead to a "nuclear-level disaster." The figures come from a survey conducted by Stanford University's artificial intelligence institute.

"36 percent disapprove of the development of AI and warn that decisions made by AI could lead to a catastrophe on the scale of a nuclear disaster," the study says.

According to the report, nearly three-quarters of researchers in natural language processing, a field of computer science central to developing artificial intelligence, warn of imminent "revolutionary social change" driven by the technology.

At the same time, the vast majority of researchers say artificial intelligence will have a positive impact on the future of humanity, though they still see risks in its development.

"The ethical issues surrounding AI have become more visible to the general public. Startups and large corporations compete to deploy and launch generative models, and the technology is no longer controlled by a small group of contributors," the report states.

At the current pace of development, 57% of the scientists surveyed consider a transition from generative AI to artificial general intelligence possible: a system that could accurately mimic or even exceed the capabilities of the human brain.

Eliezer Yudkowsky, an American artificial intelligence researcher who has been warning about the dangers of the technology since the early 2000s, has called for an urgent shutdown of all AI systems. In his view, the most likely outcome of developing artificial intelligence under any circumstances is that "everyone in the world will die."
