Artificial intelligence (AI) is already woven into daily life, shaping choices and outcomes in subtle ways. Researchers and policymakers have spent decades studying these systems, with scholars at institutions like Brooklyn Law School contributing to the broader discussion about algorithmic influence and governance. The AI conversation goes beyond technology; it touches on law, ethics, and public policy as communities in Canada and the United States seek practical safeguards and fair use.
A recent conference hosted by a major philanthropic foundation focused on social innovation explored how algorithms interact with public administration and private sector practices. The discussions highlighted the lasting impact of automated decision-making on subsidies, financial products, and everyday services, urging observers to consider how such systems might reinforce inequalities.
Public administrations use algorithms to determine subsidies, and financial companies use them to set interest rates. Do these systems reinforce discrimination against the poorest?
Discrimination can intensify when automation drives outcomes. Automated decisions in the public sector can heighten existing inequities, while private-sector data practices raise the risk of biased judgments, because sensitive information about individuals can shape decisions in opaque ways.
Algorithms draw conclusions from data about past behavior. If, for example, statistically disadvantaged socioeconomic groups tend to have more health problems, the AI used by health insurers may penalize them. How can such systems advance responsibly?
A core challenge is that many algorithms mirror historical patterns and project them into the future. When systems appear scientific because they rely on historical data, they can perpetuate cycles of bias. In practice, biased policing, prejudiced credit or insurance decisions, and similar outcomes have shown how past data can influence present decisions. Privacy considerations, transparency, and accountability are essential to mitigating these effects and ensuring fair treatment across sectors.
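To make the mechanism concrete, here is a minimal, hypothetical sketch (not any insurer's actual model) of how a score learned purely from historical records reproduces past disparities: if one group shows more recorded claims in the training data, the learned "risk" rate penalizes every new member of that group, regardless of individual circumstances.

```python
# Hypothetical illustration: a group-level claim rate estimated from biased
# historical data is projected onto all future applicants from that group.
import random

random.seed(0)

# Synthetic history: group B shows more recorded claims, partly reflecting
# unequal access to care rather than individual behavior.
history = [("A", random.random() < 0.10) for _ in range(1000)] + \
          [("B", random.random() < 0.30) for _ in range(1000)]

# "Training" here is just estimating a claim rate per group from the records.
rates = {}
for group in ("A", "B"):
    outcomes = [claim for g, claim in history if g == group]
    rates[group] = sum(outcomes) / len(outcomes)

# Scoring a new applicant uses only the group rate, so the historical
# disparity is carried forward into every future premium.
def premium(group, base=100.0):
    return base * (1 + rates[group])

print(f"Group A premium: {premium('A'):.2f}")
print(f"Group B premium: {premium('B'):.2f}")
```

The point of the sketch is that nothing in the scoring step is malicious; the unfairness is inherited from the data the system was asked to mirror, which is why transparency about inputs and auditing of outcomes matter.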
“Google’s AI has no consciousness, but it knows how to find the right words to emulate it”
A Google engineer said this week that the company’s AI is experiencing “new emotions.” Is it possible for a computer system to begin to have consciousness?
This topic is debated, but there is no scientific evidence that machines possess genuine consciousness or feelings. What some observers describe as emotions often reflects sophisticated language generation that mimics human dialogue. AI can simulate sentiment, but it does not truly experience it.
You published ‘The Black Box Society’ in 2015. Has there been any progress in the auditing of algorithms since then?
The landscape shows three distinct blocs of regulation and oversight. Europe’s GDPR remains a bold framework, though implementation has sometimes lagged. In the United States, accountability efforts are ongoing, with attempts to assign responsibility for what algorithms do, while China continues to regulate tech power, though the intent often centers on state control rather than privacy protection.
How do you assess the AI regulation that the European Union will approve this summer?
A thoughtful approach would classify AI systems by risk, prioritizing governance where risk is highest. Adequate funding for compliance and proactive risk assessment before licensing are crucial steps to prevent issues from arising after deployment.
However, this law excludes the military use of AI currently deployed on European borders. Is there a risk of targeting vulnerable groups such as immigrants?
There are serious concerns about using these systems on migrants or to decide who qualifies as a refugee. Discrimination at borders is well documented, and it is hard to imagine unsupervised AI delivering fair, humane outcomes.
Facial recognition systems have been shown to fail, especially on darker skin.
Spain is preparing to use facial recognition at its border with Morocco. What dangers does this technology pose?
Three major risks stand out. First, error: these systems misidentify people, especially women or those whose appearance has changed, and flawed data can normalize those mistakes. Second, discrimination remains a persistent threat, particularly against darker-skinned or transgender individuals. Third, pervasive identification erodes privacy and empowers authorities in ways that limit free movement and personal autonomy. Even in routine travel contexts, facial recognition raises questions about convenience versus privacy.
Russia used smart drones in Ukraine. Is a total ban necessary, or should regulation allow defensive use with safeguards?
Global norms would help, but wartime dynamics complicate agreements. In practice, a balance between safety and strategic interests is required, recognizing that blanket bans may disadvantage those who comply while others rush ahead with innovation.
How can there be an international agreement if major players hesitate, such as the United States, which continues to invest in military AI?
Trust gaps hinder consensus. Without credible commitments from key nations, arms control remains elusive. The aim should be a broad agreement that echoes nonproliferation principles to curb risks while allowing constructive progress.
Yet the USA and China continue to compete in a race for AI hegemony…
A prudent path asks for de-escalation and a shift toward collaboration on global challenges such as climate change, inflation, and social welfare. Focusing on shared interests helps reduce the temptation to pour scarce resources into weaponized technologies.
Robotization is also reshaping the business landscape. Have machines come to steal our jobs or to complement them?
Automation will reshape jobs rather than erase them. It can unlock new opportunities, especially when workers are prepared to transition to more productive roles. The goal is to guide this shift with a focus on skills, training, and a fair transition while preserving human dignity in the workplace.
Transparency is essential. Empowering workers, supporting unions, and enabling civil society groups to explain how systems operate can help create accountability and trust in automated decision-making.
“The most advanced and modern business model is a return to the piecework of the 18th century”
Does platform capitalism compel a labor model that treats workers as mere cogs in a machine in the name of innovation?
The platform-driven economy often treats workers as data streams and profit targets. This environment can push people to respond quickly to algorithms and accept precarious conditions. Questions about dignity, autonomy, and fair compensation remain central as society renegotiates what work should feel like in the digital era.