One of the most persistent problems of current Artificial Intelligence, which depends on training artificial neural networks that mimic the physical networks of our brain, is the "black box". It is a problem that the major companies have yet to solve, even as they have spent the past year in an intense race to pioneer this technology. Essentially, when we say "black box" we are talking about a mystery: during the training of an artificial intelligence we know the system's input and output, the data going in and the results coming out, but we do not know how that process occurs or how the AI arrived at its conclusions. Other experts think the "black box" is a myth: the problem is not that the system is opaque, but that the tools for interpreting its inner workings are not yet sufficiently developed. In any case, Google's own CEO, Sundar Pichai, said in an interview on the prestigious television program "60 Minutes" that there is something there "that you don't fully understand", and the keys to it are still far from being found.
Understanding these keys to Artificial Intelligence's internal process is no small matter. Without full transparency about the path the AI took to get where it did, a serious crisis of confidence opens up. And if the questions the AI must answer are as sensitive as those of medicine, law or the environment, anxiety sets in. In other words: the current behavior of AI is somehow reminiscent of bygone times when humanity trusted in magic and divination, which of course it did not understand, even when the results were encouraging. Hardly what one would expect in an age of science and reason like ours.
Actually, if we think about it for a moment, it is not so strange that this happens. Neural networks are "machine learning" algorithms, computer systems capable of learning by themselves, that copy the behavior of biological brains. Or, to be precise, that copy what little we know of the physical brain's behavior. Artificial neurons try to organize themselves in layers, as biological neurons do, following a process described more than 130 years ago by that forgotten genius of Spanish science, Santiago Ramón y Cajal. The number of layers is so large, and the process so complex and so tremendously expensive in energy, that it can be difficult, if not impossible, to retrace the steps taken and the decisions that followed. It is as if Turing's "imitation game" had turned into a sarcastic paradox: AI has advanced by replicating the enigma that our brains still are to scientists, and by dragging in the "black boxes" of the original brain it has created a new black box as incomprehensible as real gray matter. Pure poetry, of the kind that cannot be deciphered.
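To make that opacity concrete, here is a minimal sketch in Python (a hypothetical illustration, not taken from any real model): a toy two-layer network whose input and output are perfectly visible while its inner weights mean nothing to a human reader.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer network: 4 inputs -> 8 hidden units -> 1 output.
# Real models stack many such layers and billions of such weights.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def forward(x):
    h = np.maximum(0, x @ W1 + b1)   # hidden layer (ReLU activation)
    return h @ W2 + b2               # output layer

x = np.array([0.2, -1.3, 0.7, 0.5])  # the "input" we can see
y = forward(x)                        # the "output" we can see
print(y)

# What we cannot see is why: the weights are just numbers, with no
# human-readable meaning attached to any single one of them.
print(W1)
```

Scale those few dozen weights up to billions, stacked across hundreds of layers, and the impossibility of retracing the steps becomes clear.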
Ordinary mortals have grown used to living with technologies that work without our understanding how they do it. Computer scientists are not exempt either, because they learn this "black box" concept in their first year at university, and it does not apply only to artificial intelligence: there may be a module, object or component they need to use without knowing its inner workings. But if even the high priests of Artificial Intelligence don't understand it 100 percent, something is starting to go alarmingly wrong. And not only because of how complicated it is to get inside that "black box" and discover its secrets, but because haste and vested interests, as Benavente would have put it, have turned the evolution of artificial intelligence into a mad flight forward, where the result matters more than any analysis of the process. Whether because it is impossible to understand, or because "it is mine and I don't want anyone digging around in the mystery", the companies that create artificial intelligence treat it as intellectual property. Meanwhile, in the United States the Administration has already approved 581 AI models involved in medical decisions, many of them trained with black boxes whose workings we still do not fully understand.
The good news is that technological and scientific advances do not take long to raise firewalls against these situations. There are already AI researchers who specialize in what is called "interpretability", whose aim, in essence, is to turn that black box into a completely transparent container. Another concept is "explainability", a kind of "how-to" of the algorithm that even banks are beginning to offer their customers.
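As a rough illustration of what "interpretability" buys (a minimal sketch assuming scikit-learn; the researchers mentioned use their own tools and models), compare a neural network, whose thousands of weights resist inspection, with a small decision tree that can be printed as explicit rules:

```python
# The same data, two models: one opaque, one whose reasoning can be read.
from sklearn.datasets import load_breast_cancer
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)
feature_names = list(load_breast_cancer().feature_names)

black_box = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500).fit(X, y)
# black_box.coefs_ holds thousands of weights: they predict, but explain nothing.

interpretable = DecisionTreeClassifier(max_depth=3).fit(X, y)
# The tree prints as explicit if/then rules that a human can audit:
print(export_text(interpretable, feature_names=feature_names))
```

Whether such transparent models can match black boxes in accuracy is precisely the question researchers like the one below have made their specialty.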
“Machine learning is designed to be a black box: very complex predictive models.”
The American Cynthia Rudin is one of these researchers. In an interesting interview in Quanta Magazine she points to the root of the problem: the machine learning that allows ChatGPT to generate acceptable poems is designed to be a "black box". "These are predictive models that are too complex for any human to understand," Rudin says. Or they are someone's proprietary "secret sauce". The initial idea was that there was no need to fiddle with the data; the algorithm would handle all of that in the background. "It was very elegant in concept, but it made it difficult to understand what was going on." Rudin knows what she is talking about because, as the Spanish saying goes, she was a cook before she was a friar: she has hands-on experience. After graduating she worked with the New York electric company on a "rarity": an AI model to predict which manholes were most susceptible to fires or explosions, something affecting at least 1% of Manhattan's underground pipes. While developing the model they worked with a great deal of data and systematized it, but the model came to rely on information about "neutral wires" that changed the entire end result. Because they knew what was inside the model, the power company realized that variable should not be there and that the problem could be fixed. Otherwise it would have been a disaster. "It's really hard to troubleshoot models if you don't know what's in them," Rudin tells the Quanta Magazine journalist.
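Rudin's anecdote shows why it matters to see inside. Here is a hypothetical sketch of the idea (the feature names and data are invented; this is not the actual utility model): with an interpretable linear model, the "insides" are a handful of coefficients a domain expert can read, so a variable like the "neutral wires" that is distorting the result stands out immediately.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented features for a manhole-risk toy example.
features = ["cable_age_years", "num_past_events", "neutral_wires", "load_kw"]

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))
# Simulate a data leak: the "neutral_wires" column accidentally drives
# the label, the kind of artifact only inspection would reveal.
y = (X[:, 0] + 2.0 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Because the model is interpretable, its "insides" are four numbers
# an engineer can read and question:
for name, coef in zip(features, model.coef_[0]):
    print(f"{name:>16}: {coef:+.2f}")
# A large weight on "neutral_wires" is the red flag the utility caught.
```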
The catch is that training an "interpretable" model is much slower, and therefore much more expensive: you have to check how the AI will "reason" (the quotation marks are necessary) and make sure the results are correct. That seems essential in high-stakes decisions. But presumably nothing terrible happens if an AI uses its immense capacity for calculating probabilities to decide who gets sneaker ads online and who doesn't. And if the system works, no one will worry that everything is controlled by a "black box" model.
The problem is precisely that: accepting a result without knowing where it comes from and without questioning why it is so. Neither ChatGPT nor Bard nor any other generative AI is an oracle or a god that cannot be questioned. Fredric Brown, one of the great science fiction writers, imagined a supercomputer being asked the question (what else could it be?): "Is there a God?" Imagine what it answered.