Artificial Intelligence (AI) is already a part of our lives and is affecting us in ways we can’t even imagine. Frank Pasquale is a professor at Brooklyn Law School, a member of the United States’ National Artificial Intelligence Advisory Committee, and an expert on algorithms who has been researching these systems for decades.

Author of the books ‘The Black Box Society’ and ‘New Laws of Robotics’, the academic closed a cycle of talks on the technological future this Tuesday at a conference organized by the Fundació ”la Caixa” Social Observatory at the CaixaForum Macaya.

Public administrations use algorithms to determine subsidies, and financial companies use them to set interest rates. Do these systems reinforce discrimination against the poorest?

Yes, there is often a profound amplification of discrimination. There is evidence that automating certain procedures in the public sector has a negative impact. The private sector is of even greater concern, because companies know many private things about people, such as their health or salary, which can lead to discrimination.

Algorithms draw conclusions from data about past behavior. This means, for example, that if the lowest socioeconomic classes statistically tend to have more health problems, the AI used by health insurers penalizes them. How does this happen?

Exactly. The problem with many algorithms is that they repeat patterns of the past and project them into the future. And although these systems are really historical, they are presented as scientific: they are so deeply based on past data that they simply repeat it. This is common practice in policing, which punishes certain groups because racial profiling comes loaded with class prejudice. It is also seen in the financial world. There is a lot of talk about inclusion, but inclusion can be predatory or subordinating. A bank can identify the people who will pay late-payment fees and target them directly, to their harm.

“Google’s AI has no consciousness, but it knows how to find the right words to emulate it”

A Google engineer said this week that the company’s AI is experiencing “new emotions.” Is it possible for a computer system to begin to have consciousness?

I’ve been following this discussion for a while, and it’s something that cannot be proven scientifically; it’s a human perception. The problem is that people who believe artificial intelligence can have emotions and other human characteristics confuse a perception with evidence. Google’s system is not conscious, but it uses all the words it can to fake that it is. It is an imitation of human feelings rather than real feelings.

You published ‘The Black Box Society’ in 2015. Has oversight of how algorithms operate improved since then?

The world can be divided into three blocks. The concept behind the General Data Protection Regulation (GDPR) in Europe is extremely powerful and clever, but its implementation has been slow and not very effective. There are some efforts in the United States to introduce algorithmic accountability. And there are many efforts in China to regulate big technology, but that regulation serves to maintain the government’s power and control, not to protect privacy.

How do you assess the AI regulation that the European Union will approve this summer?

I think it is a wise, forward-looking law in that it classifies AI systems according to their risks. However, I believe not enough money has been allocated to enforce compliance, and that risk assessment should take place before these systems are licensed and deployed, not afterwards.

However, this law excludes military uses of AI, which is already being deployed at European borders. Is there a risk that vulnerable groups such as migrants will be targeted?

Yes, using these systems on migrants, or to distinguish legitimate refugees, carries very serious risks and should be covered by EU legislation. There is so much evidence of discrimination at borders that it is hard to imagine artificial intelligence performing this task without supervision.

“Facial recognition systems have been shown to fail, especially on darker skin”

Spain is preparing to use facial recognition at its border with Morocco. What dangers does this technology pose?

There are three great dangers. The first is erroneous data, as when systems fail to recognize women after they cut their hair. The second is discrimination, such as misidentifying or failing to recognize dark-skinned or transgender people. The third is the alienation produced by being identified wherever you go, and the political power that comes with it. People deserve privacy in public space, yet there are already airlines that manage boarding by identifying passengers with facial recognition. And it doesn’t even seem to be the most efficient option.

Russia has used smart drones in Ukraine. Is a total ban on autonomous weaponry necessary, or a regulation that allows their use for defensive purposes?

An international regulation would have to be reached to make the use of these weapons safe. However, Russia’s invasion of Ukraine has heightened the fear that, in a wartime context, overregulating military AI could put countries that comply at a disadvantage against those that don’t.

How can there be an international agreement if there are countries like the United States that are blocking it because they are investing in the development of military AI?

The problem is that the EU cannot get a commitment from Russia that it will not invest in such weapons, just as the US cannot get a credible commitment from China to do the same. This lack of trust works both ways and explains why the arms race continues. The most appropriate thing would be to arrive at a consensus similar to the one established for nuclear nonproliferation.

Yet we see the US and China competing in a race for AI hegemony…

The great powers must agree to de-escalate and stop investing in weapons of mass destruction in order to deal with climate change, inflation, famine and the welfare state. Thinking about the damage killer robots could cause is frightening, but so is the amount of resources they absorb, which could be put to better use.

Robotization is also affecting the world of work. Have robots come to steal our jobs or to complement us?

They will not bring about a jobless future, but they will change the type of work. That could mean great job opportunities. I think the key is to focus on how to make the transition to a more productive economy, rather than worrying about whether robots will put many people out of work.

The automation of jobs is having a growing impact on employees. Last week, the Spanish government introduced a measure that helps workers demand greater transparency from these algorithms.

Yes, it is a big change that brings more awareness and oversight, which matters greatly. The first step is to give workers more power. The second should be to empower unions and civil society groups so they can help workers better understand how these systems work.

“The most advanced and modern business model is a return to the piecework of the 18th century”

Does platform capitalism make us accept a model of labor that treats workers like machines in the name of innovation?

That’s right. The logic of the platform economy is to treat workers not as people but as data streams, as assets that maximize the company’s profit by doing the largest volume of work in the shortest time and in the cheapest way possible. One way to do this is to train them to be more responsive to algorithms; there are many examples of such manipulation. We see how the most advanced and modern business model represents a return to the piecework of the 18th century. Phrases like ‘job flexibility’ should be questioned, and we need to make sure there is dignity at work and autonomy for the worker.