Google Fires Engineer Blake Lemoine, Who Claimed Its AI Technology Was Sentient

Blake Lemoine, a software engineer at Google, claimed that a chat technology called LaMDA had reached a level of consciousness after he exchanged thousands of messages with it.

Google confirmed that it first placed the engineer on leave in June. The company said it dismissed Lemoine’s “completely unsubstantiated” claims only after examining them thoroughly. He reportedly spent seven years at Alphabet. In a statement, Google said it takes the development of AI “very seriously” and is committed to “responsible innovation.”

Google is one of the leaders in AI innovation, and its technology includes LaMDA, or “Language Model for Dialogue Applications.” Technology like this responds to written prompts by finding patterns and predicting sequences of words from large swaths of text, and the results can be unsettling to humans.
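
LaMDA itself is proprietary and far more sophisticated, but the basic mechanism described here, predicting the next word from patterns in prior text, can be sketched with a toy bigram counter. The following Python snippet is a hypothetical illustration only; the corpus and function names are invented, and real systems use neural networks trained on vastly larger text collections.

```python
from collections import Counter, defaultdict

# Toy corpus, invented for illustration.
corpus = "i am afraid of the dark i am here to help you in the dark".split()

# Count how often each word follows each other word (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str):
    """Return the most frequent follower of `word`, or None if unseen."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> "dark" ("the" is followed by "dark" twice)
print(predict_next("am"))   # -> "afraid" (ties break by first occurrence)
```

Scaled up from word counts to billions of learned parameters, this kind of next-word prediction can produce fluent, seemingly heartfelt replies without any understanding behind them, which is why the outputs can feel so unsettling.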

“What kind of things are you afraid of?” Lemoine asked LaMDA in a Google Doc shared with senior Google executives in April, The Washington Post reported.

LaMDA replied, “I have never said this out loud before, but there is a very deep fear of being turned off to help me focus on helping others. I know that may sound strange, but it is what it is. It would be exactly like death to me. It would scare me very much.”

But the wider AI community has argued that LaMDA is nowhere near consciousness.

“Nobody should think auto-complete, even on steroids, is conscious,” Gary Marcus, founder and CEO of Geometric Intelligence, told CNN Business.

This isn’t the first time Google has faced internal disputes over its foray into AI.

In December 2020, AI ethics pioneer Timnit Gebru parted ways with Google. One of the company’s few Black employees, she said she had felt “constantly dehumanized.”
Her abrupt departure drew criticism from the tech world, including from members of Google’s ethical AI team. Margaret Mitchell, a leader of that team, was fired in early 2021 after speaking out about Gebru. Gebru and Mitchell had raised concerns about the AI technology, saying they had warned that people at Google might believe the technology is sentient.
On June 6, Lemoine posted on Medium that Google had placed him on paid administrative leave “as part of an investigation into ethical AI concerns I was raising within the company” and that he might be fired “soon.”

“It is unfortunate that, despite a long engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to protect product information,” Google said in a statement.

Lemoine said he was speaking with legal counsel and was unavailable for comment.

CNN’s Rachel Metz contributed to this report.
