IDCA News
30 Aug 2022
Google Fires Employee for Claiming AI Chatbot is Sentient
Google has fired a senior software engineer who claimed that LaMDA, the company's artificial intelligence chatbot, is a self-aware person with human-like abilities.
Google, which had placed engineer Blake Lemoine on leave last month, said he violated company policies and that his claims about LaMDA (Language Model for Dialogue Applications) were wholly unfounded.
"It's regrettable that despite lengthy engagement on this topic, Blake still chose to violate clear employment and data security policies that include the need to safeguard product information," Google commented.
Last year, Google said it had developed LaMDA as a conversational technology built on Transformer-based language models, capable of learning to talk about virtually any topic.
Lemoine, an engineer in Google's Responsible AI group, described the system he had been working with as sentient, able to perceive and express emotions on a par with a human.
"If I didn't know what it was, which is this computer program we built recently, I'd think it was a seven-year-old, eight-year-old kid that happens to know physics," Lemoine, 41, told the Washington Post.
Lemoine compiled a transcript of his conversations with the AI system, in which he at one point asked it what it was afraid of.
A spokesperson for Google and several leading scientists refuted Lemoine's statements, insisting that LaMDA is just a complex algorithm designed to generate coherent human language.