OpenAI has announced the creation of an independent watchdog board that will focus on safety and security issues in the field of artificial intelligence. With it, the company aims to demonstrate its commitment to the ethical and safe development of neural networks, The Verge reports.
The new committee has been given unprecedented powers, including the right to delay the release of AI models on safety grounds. Its members include OpenAI board members Zico Kolter, Adam D'Angelo, Paul Nakasone and Nicole Seligman. OpenAI CEO Sam Altman is not on the list.
The new body will be briefed by company leadership on safety evaluations of major AI model releases and will oversee their launches together with the full board. The full OpenAI board will also receive regular reports on safety and security issues.
The Verge writes that the degree of independence of the new committee is questionable, since all of its members sit on the OpenAI board of directors. As an example of a more independent arrangement, journalists point to the oversight board of Meta (recognized as extremist and banned in Russia), whose members are entirely separate from the IT giant's board of directors.
Earlier it was reported that lawmakers want to add deepfakes to the Criminal Code of the Russian Federation.
Source: Gazeta
Jackson Ruhl is a tech and sci-fi expert who writes for "Social Bites". He brings his readers the latest news and developments from the world of technology and science fiction.