A programmer who goes by the nickname “jbetker” and works at OpenAI wrote an interesting post on his blog in June that triggered a heated debate on X (one of these days we will have to stop calling Elon Musk’s platform “the old Twitter”). It took several months, five to be exact, for the shockwave to reach artificial intelligence fans. But a powerful idea always makes its own way: the tweet that spread it on the old bird network racked up several million responses in five days. What did jbetker say? Essentially, that all the LLMs (Large Language Models) we interact with today, and that let us explore the wonders of generative AI, converge on the same point regardless of their training strategies, which turn out to be a secondary factor. These models work their magic with data. Their creators are amazed at the capabilities they have achieved, which even exceed their expectations, but there is nothing beyond that initial data. Artificial intelligence does not think in the way we humans understand thinking; these models are stochastic parrots subject to chance: “a bitter lesson,” in jbetker’s words. Another X user, posting under the handle “nobody”, sums it up from the other side: “So this means that a generative model can never move beyond the data it was trained on. So we will never get to a state of reasoning.”

jbetker’s comment, whose author worked specifically on the development of DALL-E, OpenAI’s image generator, has a corollary that can be expressed by slightly tweaking that old Vainica Doble song: it’s all in the data. Or by turning to Groucho Marx, who was right once again: starting from nothing, you can only reach the highest peaks of misery. AI would be nothing without data, without the corpus, as it is technically known, without the trillions of data points that make up the long-form version of human knowledge. It is therefore striking that the “new oil” of the fourth industrial revolution, as many call this data, lets its owners run the engine of Artificial Intelligence without paying a single euro to the world’s altruistic donors of that fuel.

So we can take a deep breath. The so-called AGI (Artificial General Intelligence) that Sam Altman pursues with such zeal through his mysterious Q Star project (also written Q*), and that its critics fear like Terminator’s “Judgment Day,” is still far away. But the problem is not trivial: the possibility that AI will at some point become smarter than humans, especially at valuable tasks such as mathematics, is what lay behind the attempted “coup” by part of the OpenAI board to fire Altman. The story has so much to it that some are already dreaming of a new Netflix series.

Two tests, one gamified with millions of participants and the other conducted in the laboratory, proved something we had perhaps forgotten since the days of the Alan Turing test: people are still much smarter.

An interesting gamification project based on the Turing test, with millions of volunteers taking part, showed that people are not so easily fooled and that they “catch” an artificial intelligence posing as a human six times out of ten; we are still smarter than we thought. But beware, do not doze off, because the machines are lurking: note the solid internal logic and inherent cockiness of ChatGPT when the well-known technology site Microsiervos asked it about the outcome of this experiment: “There is room for improvement.” A study from Cornell University points the same way with another gamified test: only 41% of the AI models analyzed pass the famous test devised by Alan Turing, patron saint of artificial intelligence and decoder of Enigma, while humans, at 63 points, remain 22 points ahead. Let’s take a deep breath again.

At a glance, AI may be nothing more than data mixed in a magic shaker, yes, but its shine hides our inferiority complex: we humans do not give enough value to our own abilities, or to where they come from. In another post on his blog, the aforementioned jbetker bristles at people who compare the intelligence of a three-year-old child to that of an AI model. In defense of the AI, which he presents as the “weak side” of this comparison, the programmer argues that this is so for one reason: the child has far more data than the machine. To demonstrate it, he translates into computing terms the amount of information a human receives during three years of life, and it turns out to be more than an AI like LLaMA 65B, created by Meta (65 billion parameters), has processed. And by a wide margin. Now extrapolate that figure to a 20-, 30- or 40-year-old person: the billions of parameters of some models suddenly look less impressive. We should be more aware that the thing that worries us so much has not yet reached our level, and not the other way around. But for how long?
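To get a feel for the kind of back-of-the-envelope calculation jbetker describes, here is a minimal sketch in Python. The sensory bandwidth figures are illustrative assumptions of ours, not jbetker’s exact numbers; the point is only to show how quickly raw sensory input dwarfs a model’s parameter count.

```python
# Rough, illustrative estimate of the raw sensory data a child receives
# in three years, compared with the size of a 65B-parameter model.
# The bandwidth figures below are assumptions for illustration only.

SECONDS_PER_YEAR = 365 * 24 * 3600

# Assumed average sensory bandwidth, in bytes per second.
VISION_BPS = 1_000_000   # ~1 MB/s of (already compressed) visual input
AUDIO_BPS = 50_000       # ~50 kB/s of auditory input
OTHER_BPS = 10_000       # touch, proprioception, etc.

years = 3
total_bytes = (VISION_BPS + AUDIO_BPS + OTHER_BPS) * SECONDS_PER_YEAR * years

LLAMA_65B_PARAMS = 65_000_000_000  # parameters in Meta's LLaMA 65B
# Assume ~2 bytes per parameter (fp16 weights) for a size comparison.
model_bytes = LLAMA_65B_PARAMS * 2

print(f"Child, {years} years: ~{total_bytes / 1e12:.0f} TB of sensory input")
print(f"LLaMA 65B weights:  ~{model_bytes / 1e9:.0f} GB")
print(f"Ratio: roughly {total_bytes / model_bytes:.0f}x")
```

Even with these deliberately conservative assumptions, the child’s three years of raw input come to around a hundred terabytes, several hundred times the size of the model’s weights; extrapolated to a 20- or 40-year-old, the gap only grows.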