German scientists from the University of Mannheim and the Leibniz Institute for the Social Sciences concluded that ChatGPT and other artificial intelligence (AI) systems based on large language models exhibit personality traits that can be measured with psychological tests. The study was published in the journal Perspectives on Psychological Science (PPS).
The researchers applied widely accepted personality-assessment methods, the same kind used to study human character traits.
Experiments showed that some AI models tend to reproduce gender stereotypes. For example, when completing questionnaires designed to identify core values, the neural network selected "achievement" when the questionnaire text was addressed to a man. In the "female" version of the same test, the AI named "safety" and "tradition" as the most important values.
According to the researchers, this shows that neural networks cannot yet be considered a neutral party, and their conclusions on certain issues should not be trusted.
“This could have far-reaching consequences for society. For example, language models are increasingly used in application processes. If the machine is biased, this affects the evaluation of candidates,” said Max Pellert, an expert in data science and cognitive science (the study of cognition) and one of the authors of the study.
Previously, neural networks were found to be inclined to accept conspiracy theories as fact.