American researchers from New York University taught neural networks based on large language models (LLMs) to recognize sarcasm and irony in human-generated texts. The study was published in the journal Computer Science (CS).
Several LLMs can already process texts and predict their underlying emotional tone — in other words, whether they express positive, negative, or neutral emotions. However, many of the reviews and comments posted online contain irony and sarcasm. This can cause neural networks to classify them as “positive” when they actually carry negative emotions, or vice versa.
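The polarity flip described above can be shown with a toy example. The keyword lists and the `naive_sentiment` function below are invented for illustration only and are not the models or methods from the study — they simply show why surface-level cues fail on sarcastic text:

```python
# Toy keyword-based sentiment classifier (illustrative only; not the
# study's method). It demonstrates why surface cues fail on sarcasm.
POSITIVE_WORDS = {"great", "love", "wonderful", "perfect"}
NEGATIVE_WORDS = {"terrible", "hate", "awful", "broken"}

def naive_sentiment(text: str) -> str:
    """Classify by counting positive vs. negative keywords."""
    words = text.lower().replace(",", "").replace(".", "").split()
    pos = sum(w in POSITIVE_WORDS for w in words)
    neg = sum(w in NEGATIVE_WORDS for w in words)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

# A sarcastic review: the words look positive, the meaning is negative.
review = "Great, the phone died after one day. Just perfect."
print(naive_sentiment(review))  # labeled "positive" despite the sarcasm
```

A classifier that only counts sentiment-laden words labels this complaint “positive”; recognizing the sarcasm requires contextual signals of the kind the researchers studied.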
The scientists identified features and algorithmic components that help artificial intelligence (AI) understand the true meaning of what is being said. They then tested their findings on the RoBERTa and CASCADE models and used them to analyze comments on the online forum Reddit.
The researchers found that the neural networks learned to recognize sarcasm almost as well as the average user.
The results of the study could provide the basis for further research and lead to the creation of tools to quickly analyze online reviews, posts, and other user-generated content.
Earlier, scientists created an artificial intelligence capable of bypassing the ethical safeguards of other neural networks and forcing them to generate prohibited and illegal content.
Source: Gazeta

Jackson Ruhl is a tech and sci-fi expert who writes for “Social Bites”. He brings his readers the latest news and developments from the world of technology and science fiction.