As a lover of good science fiction, I have never quite gotten over the "HAL 9000 syndrome" when talking about Artificial Intelligence. The supercomputer created by Arthur C. Clarke in his novel "2001: A Space Odyssey", later masterfully adapted for film by Stanley Kubrick, is the icon of the debate over whether machines can be intelligent, the can of worms Alan Turing opened years ago with his ever-popular "imitation game". To that controversy a later, more metaphysical discussion was added: can machines be sentient, or develop a survival instinct? There lies one of the mortal fears we sci-fi enthusiasts suffer from. I still don't know whether it is possible to "spoil" (for those who love neologisms) a film and a novel that are almost 56 years old, but for those who don't know or don't remember, one fact must be revealed: HAL 9000 drives the plot of "2001..." because, after an alleged error of judgment, it kills the ship's crew. Almost all of it: one astronaut manages to escape and disconnect the "bug". The question of what happened to HAL 9000 has always masked one of those fundamental, almost Lovecraftian fears that cybernetic beings inspire in humans. One of the first "conversations" I had with ChatGPT was about HAL 9000 and its famous mistake. The blame for that error, if there is any, is, as I say, "almost" human; "almost too human", Nietzsche might say. Its answer still surprises me: the fictional ancestor of modern artificial intelligence suffered an "ethical crisis" between its purpose of protecting the mission and its obligation to conceal information from the crew, a conflict "between its programming, its mission, and what is best for humanity."
This whole exordium makes sense because of increasingly frequent news about the strange skills that LLMs (Large Language Models) develop unexpectedly during their training, much to the surprise of the scientists working with them. Logic says that these artificial intelligence models, fed tons of data that they absorb with neural networks inspired by those of the human brain, develop certain skills during that training. It is worth noting that data scientists still do not fully understand what happens inside, but they do know the outcome: the AI improves the expected capabilities; a given input yields the expected output. But it also develops other, unexpected ones. Experts call them "emergent behaviors", dozens are already catalogued, and the list keeps growing. The term "emergent", by the way, is not accidental. Biologists, physicists and other scientists use the word to describe the collective, self-organized behavior that appears when a large collection of objects acts as a single thing. An article in the prestigious science journal Quanta Magazine, from which I take this point, notes that it is the same kind of behavior by which atoms come together to form living cells, or by which your heart or mine keeps beating right now. "Emergent abilities appear in systems that involve many individual parts. But researchers have only recently been able to document these abilities in LLMs, because these models have grown to enormous sizes," states the article by Stephen Ornes in the aforementioned American magazine (there is a Spanish version). At first it seemed to be a matter of size: the larger the model, the more unpredictable the abilities. But even more surprising was that these capabilities did not appear only in the huge models from Google, OpenAI or Meta: small models with a few million parameters developed the same effect. Their trait? They were trained on little data, but of high quality. If anyone is starting to worry at this point, do not read on. Google Brain researchers have found that "chain-of-thought" prompting, simply encouraging the AI to explain its reasoning step by step, allowed a model to correctly solve complex math problems it could not solve without that nudge, an exercise in self-explanation that should not be confused with "self-knowledge" or "self-awareness". Whenever artificial intelligence is put under pressure (experts speak of the "emergence" produced by that "chain of thought" system), strange things start to happen. If you read a book on cutting-edge neuroscience, even though it is talking about the human brain, you will find very similar claims: amazing things happen when it is placed in extreme situations.
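For readers who want to see what "chain of thought" means in practice, here is a minimal sketch of the prompting pattern. The `ask_llm` function is a hypothetical placeholder for whatever model API you use; nothing below is a real library call, and only the difference between the two prompts matters.

```python
# Minimal sketch of chain-of-thought prompting (hypothetical client).
QUESTION = "A cafeteria had 23 apples, used 20, then bought 6 more. How many are left?"

# Direct prompt: the model must answer in one shot.
direct_prompt = QUESTION

# Chain-of-thought prompt: the model is nudged to reason step by step first,
# the trick that let models solve math problems they otherwise failed.
cot_prompt = QUESTION + "\nLet's think step by step, then state the final answer."

def ask_llm(prompt: str) -> str:
    """Placeholder: wire in any real chat-model client here."""
    raise NotImplementedError("substitute a real LLM call")

# Usage, once a real client is wired in:
#   print(ask_llm(direct_prompt))  # often fails on multi-step arithmetic
#   print(ask_llm(cot_prompt))     # the visible reasoning trace tends to help
```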
That pressure can also come from pitting two artificial intelligences against each other: one acts creatively, thinking about how to solve the problem before actually solving it, while the other acts as validator and scorer of the results. It is not a new setup in Artificial Intelligence training, but it is producing unpredictable results in mathematics, the discipline by which this technology's progress is measured. A few days ago DeepMind, one of Google's large artificial intelligence laboratories, announced that its model FunSearch had made progress on the "cap set problem", an old puzzle of deep mathematics that had caused many confrontations over the lack of consensus on how to approach it. FunSearch, which is not in principle a model built to solve math problems, takes its origin in this discipline and has already tackled another popular mathematical challenge, finding better solutions than any known so far: the bin packing problem. Alhussein Fawzi, one of the DeepMind researchers, admitted that there is no explanation for where these extraordinary abilities come from.
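For the curious, here is a toy sketch of that creator-versus-validator loop applied to bin packing. To be clear, this is not DeepMind's FunSearch code: in the real system an LLM proposes candidate heuristics as programs and an automatic evaluator scores them, while in this illustration the "creative" side is reduced to two hand-written heuristics and all names are invented for the example.

```python
import random

def first_fit(item, bins, capacity):
    """Candidate heuristic 1: put the item in the first bin with room."""
    for b in bins:
        if sum(b) + item <= capacity:
            b.append(item)
            return
    bins.append([item])

def best_fit(item, bins, capacity):
    """Candidate heuristic 2: put the item in the fullest bin that still fits."""
    fitting = [b for b in bins if sum(b) + item <= capacity]
    if fitting:
        max(fitting, key=sum).append(item)
    else:
        bins.append([item])

def evaluate(heuristic, instances, capacity=100):
    """Validator role: score a heuristic by total bins used (lower is better)."""
    total = 0
    for items in instances:
        bins = []
        for item in items:
            heuristic(item, bins, capacity)
        total += len(bins)
    return total

random.seed(0)
instances = [[random.randint(1, 70) for _ in range(50)] for _ in range(20)]
# The selection step: keep whichever proposal the validator scores best.
scores = {h.__name__: evaluate(h, instances) for h in (first_fit, best_fit)}
print(scores)
```

In the real FunSearch loop this cycle repeats: the best-scoring programs are fed back to the language model as inspiration for new proposals, and it is in that iteration that the unpredictable discoveries appear.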
And here we return to HAL 9000 and its "bug": unpredictability. Some of these emergent behaviors are brilliant, as far as we can see, while others create risks. That is something geeks learned from "Star Wars": the Force has a dark side as well as a light one. Deep Ganguli, a scientist at Anthropic, the Amodei siblings' company that created the AI "Claude", puts it very clearly: "We don't know how to tell in which kind of application the ability to cause harm will emerge, whether smoothly or unpredictably." Hallucinations, or errors, are only the tip of the iceberg of this dark side of artificial intelligence. Here is an example, since we are at it: ChatGPT got it wrong when I asked it for help writing this blog post, crediting the creation of Anthropic to OpenAI's executives, Sam Altman and Greg Brockman, instead of to the Amodei siblings, who were indeed originally at OpenAI but left it precisely to found Anthropic. When I pointed out that this was completely wrong, ChatGPT replied: "I'm sorry, I made a mistake in my previous answer." The good thing is that there are many AI models, and you will avoid mistakes if you do what I did (I already knew Anthropic was the Amodeis' company): ask another AI. Perplexity, in my opinion, is much better than other AIs at this, because its answers are based on specific, consultable sources that it cites, and it got this one right. You don't need to trust just one: ask a few.
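The "ask a few" advice can even be mechanized. Below is a toy sketch of fanning one factual question out to several models and flagging disagreement for a human to resolve; the model callables are hypothetical placeholders, not real clients.

```python
from collections import Counter

def cross_check(question, models):
    """models maps a name to a callable(question) -> answer string."""
    answers = {name: ask(question) for name, ask in models.items()}
    consensus, votes = Counter(answers.values()).most_common(1)[0]
    if votes == len(models):
        return consensus, answers   # unanimous: reasonably safe to trust
    return None, answers            # disagreement: check the sources yourself

# Usage, once real clients are wired in (ask_a and ask_b are hypothetical):
#   verdict, raw = cross_check("Who founded Anthropic?",
#                              {"model_a": ask_a, "model_b": ask_b})
```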
All of this is well summed up by one of the fathers of the tech sector: Jeff Bezos, founder of Amazon. In a recent interview, Bezos made a point of answering questions about his vision of AI with concepts that go beyond technology or engineering. "Large language models are discoveries, not inventions," he says. And he adds: "The constant surprises about their abilities show that they are discoveries, not engineered objects." Bezos sides with the "integrated", showing his optimism about AI: "I think these powerful tools are much more likely to help us and save us than to unbalance us and destroy us." Amen, Mr. Jeff…
Source: Informacion

Jackson Ruhl is a tech and sci-fi expert, who writes for “Social Bites”. He brings his readers the latest news and developments from the world of technology and science fiction.