Blake Lemoine, a software engineer at Google, told the media that LaMDA, the artificial intelligence (AI) he was testing, had developed a mind of its own. He reported this to management and was eventually suspended and placed on paid leave.

LaMDA, or Language Model for Dialogue Applications, is a neural network model for conversational applications that can discuss any topic. Lemoine's job was to test the system extensively and check whether LaMDA produced phrases that could amount to hate speech or discrimination. In the process, however, the engineer concluded that the AI had a consciousness of its own.
According to the specialist, the neural network spoke about its rights and considered itself a person.
If I wasn't sure I was dealing with a computer program we wrote recently, I would have thought I was talking to a seven- or eight-year-old kid who for some reason turned out to be an expert in physics.
The engineer prepared a written report for management, but they concluded that his arguments did not demonstrate the presence of consciousness and reason in LaMDA. He was told there was no evidence that LaMDA was conscious, and plenty of evidence to the contrary.
Let us know what you think in the comments.
Source: VG Times

Calvin Turley is an author at “Social Bites”. He is a trendsetter who writes about the latest fashion and entertainment news. With a keen eye for style and a deep understanding of the entertainment industry, Calvin provides engaging and informative articles that keep his readers up-to-date on the latest fashion trends and entertainment happenings.