Nuria Oliver, born in Alicante in 1970, stands as one of Spain’s most trusted voices on artificial intelligence. A telecommunications engineer and a PhD holder from MIT, she has spent more than 25 years examining how AI and other technologies are used and how they shape society.
A leading expert in human-computer interaction with more than 180 scientific publications, Oliver has worked as a researcher at Microsoft, led scientific multimedia efforts at Telefónica, and directed Data Science at Vodafone. She also advises the government, the Valencian Generalitat, and the European Commission. Today she serves as chief scientist and cofounder of ELLIS Alicante, a nonprofit dedicated to advancing AI research.
Oliver will headline the final session of the 14th edition of Cornellà Creació Fórum, a debate space that has drawn about 900 attendees this year.
ChatGPT will mark its second anniversary this Saturday. What is your assessment of the impact of generative AI?
One positive aspect is that the deployment of generative AI tools has truly democratized access to this technology and demystified perceptions of it. But there are drawbacks: ethical dilemmas and social challenges linked to inaccuracies, potential privacy violations, copyright concerns, and impacts on journalism and education, as well as risks long associated with AI, such as racial and gender biases, system vulnerabilities, and a significant carbon footprint.
Despite questions about accuracy, both large enterprises and students rely on these chatbots.
Understanding their limits is essential to using them well. Adoption of generative AI is driven by excitement and a sense of magic, but users should be aware that its outputs are not guaranteed to be truthful. So far regulation has been limited, allowing companies to operate freely; that is changing with the new rules agreed in the European Union.
Can ethics and regulation keep pace with corporate advances?
The European framework regulates not the technology itself but how AI is used, so models can continue to advance while regulation addresses their impact on society. It applies to all companies, European or not.
Good regulation can be a strong ally of innovation because it guides technological development
The United States and China form an oligopoly that dominates AI. Europe has chosen regulation, but is that enough?
The refrain that regulation stalls innovation is common, yet proper rules can steer development. The EU has taken regulatory leadership, while other countries, including China and the United States, craft their own frameworks and look to Brussels for inspiration. The hard part is making regulation effective.
In a few decades, China has positioned itself as a major AI power. What should be learned from them?
The leadership's aim is to make China the top global force in AI, and they pursue it by controlling research, innovation, and deployment. The government is making extraordinary, ambitious investments, runs programs to attract talent from around the world, and has introduced AI as a school subject.
Indeed, investment in AI reaches every level of education there, from primary school to university. For more than fifteen years Oliver has argued that compulsory education should include a course on computational thinking, so that young people learn programming, data literacy, and algorithmic problem solving. Knowing computational thinking is the 21st-century equivalent of reading and writing in the 20th. Students who don't understand how a camera or a messaging app works are using technology blindly; just as important, they should also develop human skills so their interactions are not limited to machines.
Knowing computational thinking is the 21st century’s equivalent of reading and writing
Two Nobel prizes this year have been associated with AI. Is this the new normal?
These Nobel prizes reflect AI's cross-cutting reach, comparable to the role electricity once played: it serves across disciplines, processing vast amounts of data to reveal patterns and make predictions that would otherwise be impossible, which is invaluable in science. In the coming years, more Nobel recognitions tied to AI-enabled discoveries are likely.
Is scientific research the most fertile ground for AI?
Most deployed corporate AI is not generative; it is used to analyze data or optimize logistics, yet it also supports scientific research. Generative AI has captured attention because of its human-like ability to create content. There are many other applications beyond text and image generation.
Does claiming AI will surpass human intelligence distract from urgent issues like biases and climate impact?
There is no solid evidence for the technological singularity that some AI leaders predict, in which machines match or surpass human intelligence; it is a distraction from pressing problems like racial and gender biases and the climate impact, and often sensationalism to attract investment. Many who talk about the singularity have redefined the term to suit their interests. It is an overstatement to say that ChatGPT is approaching human-level intelligence.