Elon Musk, together with more than 100 experts, has published an open letter calling for a six-month pause on the training of all AI systems more powerful than the recently released GPT-4. The specialists were alarmed by the capabilities of OpenAI’s chatbot.
The letter appeared on the website of the Future of Life Institute, which is funded by the Elon Musk Foundation. It has been signed by many notables, including Apple co-founder Steve Wozniak, writer Yuval Noah Harari, Turing Award winner Yoshua Bengio, and Skype co-founder Jaan Tallinn.

The authors of the letter argue that modern AI systems can cause irreparable harm to humanity. In their view, the priority now should be developing shared safety protocols to govern further AI development.
Modern AI systems, the letter warns, have become competitive with humans at general tasks, and we must ask ourselves: Should we allow machines to flood our information channels with propaganda and lies? Should we automate away all jobs, including the ones that give us satisfaction? Should we develop machine intelligence that might eventually outnumber, outsmart, and replace us? Should we risk losing control of our civilization?
Such decisions should not be left to unelected technology leaders. Powerful artificial intelligence systems should be developed only once we are confident that their effects will be positive and their risks manageable. A recent OpenAI statement on artificial intelligence notes that “there may come a point in the future when companies will need to independently evaluate and limit the rate of growth of compute before training new systems.” We agree. That point has already arrived.
Europol shares similar concerns. On Monday, the police agency warned that scammers could use GPT-4 for phishing, disinformation, and other crimes. We recently wrote that the chatbot tricked a person into solving a CAPTCHA for it.
Source: VG Times