Various neural networks, including large language models such as ChatGPT and GigaChat, are actively used in many fields, including medicine. However, most doctors remain wary of relying on artificial intelligence as an assistant. As Vladislav Tushkanov, head of the machine learning technologies research and development group at Kaspersky Lab, told socialbites.ca, such caution is justified because of the possibility of hallucinations in artificial intelligence.
Hallucination is a technical term (applied here to artificial intelligence) that describes a situation where a large language model produces text containing seemingly real information that does not actually correspond to reality.
“For example, this is what happened to me: I asked a large language model to recommend music, and it generated a non-existent album title for a band I like. I spent 10 minutes searching for this album on the Internet before accepting that it was a hallucination of the large language model,” Tushkanov said.
However, when it comes to fields such as cybersecurity, medicine or law, the cost of such mistakes is extremely high. Therefore, according to the expert, we cannot trust these models 100%.
“Unfortunately, the problem of hallucinations is a fundamental one. There is no escaping it; it is a feature of how these systems are designed. The risk of hallucinations will not be eliminated entirely until we find a way to produce language that does not rely on autoregressive language models,” the expert said.
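To illustrate why autoregressive generation makes hallucinations hard to rule out, here is a minimal Python sketch. It is not any real model: the toy vocabulary and probabilities are invented for illustration. The point is structural: at every step the generator only picks a plausible next token given the tokens so far, and nothing in the loop checks whether the resulting statement is factually true.

```python
import random

# Toy next-token distributions keyed by the last three tokens of the context.
# All entries are invented; they stand in for what a trained model would learn.
NEXT_TOKEN_PROBS = {
    ("the", "band", "released"): {"the": 0.6, "an": 0.4},
    ("band", "released", "the"): {"album": 0.7, "single": 0.3},
    ("released", "the", "album"): {"'Midnight": 0.5, "'Echoes": 0.5},
    ("the", "album", "'Midnight"): {"Garden'": 1.0},   # a fluent but unverified title
    ("the", "album", "'Echoes"): {"of": 1.0},
    ("album", "'Echoes", "of"): {"Tomorrow'": 1.0},    # equally fluent, equally unverified
}

def sample_next(context):
    """Sample the next token from the toy distribution for the last 3 tokens."""
    probs = NEXT_TOKEN_PROBS.get(tuple(context[-3:]))
    if probs is None:
        return None  # no known continuation; stop generating
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

def generate(prompt, max_new_tokens=10):
    """Autoregressive loop: each sampled token is appended and fed back as context."""
    tokens = prompt.split()
    for _ in range(max_new_tokens):
        nxt = sample_next(tokens)
        if nxt is None:
            break
        tokens.append(nxt)
    return " ".join(tokens)

if __name__ == "__main__":
    # Either completion reads fluently; neither is checked against any
    # real discography, which is the essence of a hallucination.
    print(generate("the band released"))
```

In this sketch the "album title" emerges purely from token-by-token plausibility, which is why no amount of tuning the sampling step alone can guarantee factual output.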
Read more about the areas where large language models are being implemented, the professions most at risk of disappearing because of artificial intelligence, and the fundamental changes artificial intelligence will bring to people's lives in Tushkanov's interview with socialbites.ca.
Earlier, an artificial intelligence robot that expresses emotions and adapts to the user was presented.