Computer scientists at Stanford University have given an artificial intelligence (AI) model an “inner monologue” that lets it generate reasoning before answering a question. This improved the AI’s performance on reasoning tests. The study was published on the arXiv preprint server.
The team used a method called Quiet-STaR, which trains the neural network to generate internal rationales before responding to the user. The model also learns to discard justifications that turn out to be unhelpful. This is reminiscent of the human reasoning process, in which we search for the most convincing arguments in a debate.
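The idea can be illustrated with a toy sketch: generate several candidate rationales, score each by whether it leads to a correct answer, and keep the best one. This is a heavily simplified, hypothetical illustration of the rationale-selection concept, not the paper's actual training procedure; all names and data below are invented for the example.

```python
# Toy sketch of rationale selection (hypothetical, not the Quiet-STaR
# implementation): each candidate rationale implies an answer, and we
# keep the rationale whose implied answer scores highest.

def pick_best_rationale(rationales, score):
    """Return the rationale with the highest score."""
    return max(rationales, key=score)

# Hypothetical example: rationales for a trivial arithmetic question,
# each mapped to the answer it implies.
candidates = {
    "Add the numbers: 2 + 3 = 5": 5,
    "Multiply the numbers: 2 * 3 = 6": 6,
}
ground_truth = 5

def score(rationale):
    # Reward rationales whose implied answer matches the ground truth.
    return 1.0 if candidates[rationale] == ground_truth else 0.0

best = pick_best_rationale(list(candidates), score)
print(best)  # prints "Add the numbers: 2 + 3 = 5"
```

In the actual method, a language model learns from this kind of feedback which internal rationales help it predict correct answers, so the selection is learned rather than hand-scored as here.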
The researchers tested Quiet-STaR on the Mistral 7B large language model. The neural network trained with the internal monologue scored 47.2% on a reasoning test, 10.9 percentage points higher than its result before training. It still performed poorly on a school-level mathematics exam, answering only 10.9% of questions correctly, but this is nearly double its 5.9% rate before Quiet-STaR was applied.
The research team also aims to find out whether such methods can bring AI systems closer to human ways of thinking.
Scientists previously warned that communicating with artificial intelligence avatars of deceased people can lead a person to psychosis and hallucinations.
Source: Gazeta
Barbara Dickson is a seasoned writer for “Social Bites”. She keeps readers informed on the latest news and trends, providing in-depth coverage and analysis on a variety of topics.