Academics and executives of the leading companies in the development of artificial intelligence (AI) have warned of the existential threat that, in their view, this technology poses to humanity. In a statement of just 22 words, researchers and CEOs say that mitigating the risks of this emerging technology should be a global priority, equating its importance with that of pandemics and nuclear war.
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," reads the open letter. The statement does not specify whether it alludes to a 'Terminator'-style scenario in which the machine becomes conscious and kills us all. It does not include climate change among the priorities, nor does it recommend actions to minimize the supposed threat.
Among the more than 200 people who signed this declaration are Sam Altman, CEO of OpenAI, the creator of ChatGPT and a company part-owned by Microsoft, and Demis Hassabis, CEO of Google DeepMind. There are also academics, such as the computer science professors Yoshua Bengio and Geoffrey Hinton, who less than a month ago resigned as vice president of engineering at Google. Both have received the Turing Award, considered the Nobel Prize of computer science, for their work on deep learning, the technology behind this AI.
A contentious debate
The letter aims to "open up the debate" about the "most serious risks" of a technology still in development, a complex discussion that has already generated several controversies. Some of these signatories published another letter at the end of March demanding a six-month pause in the training of cutting-edge AI systems. However, critical experts in the field have denounced that these supposed future risks of AI remain science fiction, and that talking about them serves to obscure the real impact this technology already has, whether on labor, on its potential for disinformation, or on its water and electricity consumption.
Other experts are skeptical of this latest warning. "You have to be very clear that artificial intelligence does not have an ontological entity that will end humanity; it is another matter if someone programs it for that, but there the problem will be the person, not the technology," Ulises Cortés, scientific coordinator of high-performance artificial intelligence at the Barcelona Supercomputing Center, told EL PERIÓDICO. "Many of the signatories got rich off AI tools, selling hype and exploiting other people's data," he adds.
These and other researchers find it hard to accept the equation of artificial intelligence with nuclear weapons. "Pandemics and nuclear war are two dangers based on theoretical and empirical evidence, while the risk of human extinction rests on a hypothetical superintelligence, which is fuzzy, completely obscure and unsupported by evidence," says Ariel Guersenzvaig, professor at ELISAVA and expert in design ethics and technology. "We didn't stop doing physics because it made the creation of the atomic bomb possible," adds Cortés.
A diversion maneuver
Although most of the signatories are American, many voices on the other side of the Atlantic have criticized such an apocalyptic declaration. "The AI bros are shouting: 'Look, a monster!'," tweeted Emily Bender, professor of computational linguistics at the University of Washington.
Over the past two weeks, Altman has appeared before the United States Senate and toured Europe, where he asked leaders, including the Spanish prime minister, Pedro Sánchez, and the French president, Emmanuel Macron, to create a body to oversee the safety of AI projects. Like others, Guersenzvaig sees in these requests from the creator of ChatGPT a strategy to influence and soften the future regulation of the tools they have deployed. "They make us talk about the technology, not about who uses it and for what purpose," he adds. Next to the hypothetical extinction of the human race, all other dangers seem small.