Google executive warns of AI chatbot 'hallucinations'


AI chatbots like ChatGPT can pose risks to users because of flaws in their responses, including "hallucinations," Prabhakar Raghavan, senior vice president and head of search at Google, said in an interview with the German newspaper Welt am Sonntag.

"The kind of artificial intelligence we are talking about can sometimes lead to something we call hallucinations. For the user, this means the chatbot gives a convincing but completely fabricated answer."

Raghavan said it is the developers' task to keep such cases to a minimum.

"The enormous language models underlying this technology make it impossible for humans to monitor every conceivable behavior of the system. But we want to test the technology at a large enough scale that the answers ultimately become as accurate as possible," Raghavan added.

The Google executive warned that users should always remain alert when presented with results by an AI-powered chatbot.

“My goal is to be a leader in the chatbot industry in terms of information integrity and responsibility. Only in this way can we maintain the public’s trust,” he said.

According to journalists, Google fears losing its dominance in web search after Microsoft invested $10 billion in OpenAI, the developer of ChatGPT. The chatbot can not only hold a dialogue but also write text on its own at the user's request.

It was earlier reported that China's largest technology companies, including the owner of AliExpress, are planning to create their own analogues of ChatGPT.
