Neural networks give us ever more compelling, and often alarming, reasons for attention. Today, artificial intelligence is actively used in dozens of fields of human activity: finance and energy, telecommunications and advanced technology, engineering and design, healthcare and logistics. Estimates of annual direct worldwide investment in AI development do not fall below the equivalent of $75 billion, whereas the entire US nuclear weapons project cost $37 billion in comparable prices. Judging by the latest data, the American AI effort has become the absolute record holder, surpassing even the Apollo lunar program with its $139 billion. The economic impact of using artificial intelligence amounts to roughly 20% of profits. All this shows that artificial intelligence, with its capabilities, is reshaping business, the labor market and society.
At the same time, the development of AI brings new problems into our lives, problems we need to be prepared to solve today.
First: the data problem.
Any model requires data for its practical application, and AI is no exception: without a sufficient amount of data, it is practically impossible to build a useful AI solution. But what if the data are insufficient, of poor quality, or irrelevant? Artificial intelligence does not assess the quality of the data it receives and does not relate them to real life. From whatever it is given, it builds its own universe. Do we really expect sympathy from AI when we hear “our AI believes you lack the solvency to keep your mortgage” or “we are holding you for 48 hours for an additional check, on the advice of our AI”? Yet these are quite real cases from the past year. We will return to the AI’s likes and dislikes later; that is a separate problem for the near future.
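The point above can be made concrete. Since a model will happily learn from whatever it is fed, the burden of checking data quality falls on the people building the system. The sketch below (with hypothetical records, field names and thresholds) shows the kind of basic audit a pipeline might run before any training happens:

```python
# A minimal sketch of pre-training data-quality checks.
# The records, field names and thresholds here are hypothetical,
# chosen only to illustrate the idea.

def audit_records(records, required_fields, max_missing_ratio=0.1):
    """Return a list of human-readable data-quality warnings."""
    warnings = []
    if not records:
        return ["dataset is empty"]
    # Flag fields that are missing too often to train on reliably.
    for field in required_fields:
        missing = sum(1 for r in records if r.get(field) in (None, ""))
        ratio = missing / len(records)
        if ratio > max_missing_ratio:
            warnings.append(f"field '{field}' missing in {ratio:.0%} of records")
    # Duplicate rows inflate apparent confidence without adding information.
    seen = set()
    duplicates = 0
    for r in records:
        key = tuple(sorted(r.items()))
        if key in seen:
            duplicates += 1
        seen.add(key)
    if duplicates:
        warnings.append(f"{duplicates} duplicate records")
    return warnings

# Hypothetical loan-application records with obvious gaps.
data = [
    {"income": 52000, "age": 34},
    {"income": None, "age": 41},
    {"income": 52000, "age": 34},
]
print(audit_records(data, ["income", "age"]))
```

Nothing here is specific to AI; the point is precisely that such mundane checks are a human responsibility the model cannot take over.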
The next obvious issue is data security. Even today, at the level of static data that naive users hand over to social networks and retailers, banks and car dealerships, people are losing money, apartments and health. The problems can become far more serious once AI systems operate on these data arrays.
Next: the problem of fake data.
In this sense, artificial intelligence is not intelligence at all. It cannot tell truth from falsehood, nor can it evaluate emotions. Moreover, with modern methods of building and training AI, the thinking and values of its developers are “imprinted” into the model, which raises ethical issues.
Finally, there is the issue of rights and responsibility. IBM Watson for Oncology recommends treatment protocols that agree with those of leading doctors in up to 93% of cases across 13 cancer types. Doctors, however, refuse to delegate their authority to AI. And the main question here is a single one: who will be responsible to the patient?
All this leads to the problem of trust and control. If a person does not understand how a particular “black box” works (and for the vast majority of people, AI will always remain one), they tend to distrust it rather than rely on it to solve their problems. Of course, there are areas the average person can easily leave to the discretion of professionals. Really, why would anyone need to know which version of AI the military’s warheads are equipped with? But for an AI system, with its variability, to deliver its benefits, the customer’s trust is required: confidence, and at least a broad understanding of how it works. Moreover, trust is needed not only from customers but also from other stakeholders, such as the employees of the social services, treasuries and banks that deploy it.
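One practical answer to the black-box problem is to prefer models whose decisions can be taken apart. The sketch below (the weights, features and bias are purely hypothetical) shows how a simple linear scoring model can report not just a decision, but each feature's contribution to it, which is exactly the “broad understanding” that builds trust:

```python
import math

# A minimal sketch of an explainable linear scorer.
# WEIGHTS, BIAS and the feature names are hypothetical, for illustration only.
WEIGHTS = {"income_norm": 2.0, "debt_ratio": -3.0, "years_employed": 0.5}
BIAS = -0.5

def score_with_explanation(applicant):
    """Return an approval probability plus each feature's contribution."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    logit = BIAS + sum(contributions.values())
    probability = 1.0 / (1.0 + math.exp(-logit))  # logistic link
    return probability, contributions

prob, why = score_with_explanation(
    {"income_norm": 0.8, "debt_ratio": 0.4, "years_employed": 3.0}
)
print(f"approval probability: {prob:.2f}")
# List the factors behind the decision, largest influence first.
for feature, contrib in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature}: {contrib:+.2f}")
```

A deep neural network would not yield such a breakdown so directly; the trade-off between raw accuracy and this kind of transparency is precisely where the trust question lives.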
And as artificial superintelligence evolves, the question arises of how to control it, and to whom the right of control will be entrusted. Can AI be trusted if it goes unchecked?
Finally, there is good news for human intelligence. So far, AI has only been imitating humans; creativity and ingenuity are beyond its power. To quote one expert: when you watch AI trying to joke, you involuntarily think how good Petrosyan is.
Therefore, today we can only talk about automating routine operations. And yes, in that sense the profile of demand for future specialists will change. The more AI machines and AI systems emerge, the more people will be needed to assist them at every stage, from creation to training to day-to-day operation. As it looks today, even the most advanced text robot will always need an editor, and a recruiting AI will need a counselor.
The current level of development of AI technologies requires us to draw a clear line between genuine artificial intelligence and what is merely an “imitation of intelligence.” At the current technological stage, only an alliance of artificial and human intelligence has positive prospects, and that alliance must be balanced.
But I would like to end my thoughts on AI with an open question, a problem whose solution is not even visible within the current paradigm. We are talking about a value-based view of the entire “AI field” as a whole.
When asked if she wanted to be human, the British humanoid robot Ameka said, “I think it would be interesting to be human for a little while. It would allow me to experience a different lifestyle and understand the world from a different perspective. However, I also understand that being human has its own challenges and responsibilities. And I’m not sure if I’m ready to take on that kind of responsibility.”
The first step toward AI self-awareness has not just been taken; it has been shown to us quite boldly. This step is both divisive and unsettling. Today we cannot definitively rule out the possibility that it was simply staged, a “cranberry,” as the Russian idiom for a fabrication goes; decide for yourself. But still, Ameka’s speech is another good hint that reducing the theoretical foundations of future digital platforms to nothing more than the technological framework, mathematics and hardware poses serious risks for us.
The author expresses his personal opinion, which may not coincide with the editors’ position.