Blake Lemoine, a software engineer for Google, claimed that a chat technology called LaMDA had reached a level of consciousness after exchanging thousands of messages with it.
Google confirmed that it had first placed the engineer on leave in June, and said it dismissed Lemoine’s “completely unsubstantiated” claims only after examining them thoroughly. He had reportedly been at Alphabet for seven years. In a statement, Google said it takes the development of AI “very seriously” and is committed to “responsible innovation”.
Google is one of the leaders in AI innovation; its technologies include LaMDA, or “Language Model for Dialog Applications.” Systems like this respond to written prompts by finding patterns and predicting word sequences learned from large swaths of text, and the results can be unsettling to humans.
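The pattern-finding and word-prediction the article describes can be illustrated with a deliberately tiny sketch. This is not how LaMDA itself works (LaMDA is a large neural network trained on dialogue); it is only a toy bigram model, with a made-up corpus, showing the basic idea of predicting the next word from statistics of previous text:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Map each word to a Counter of the words observed to follow it."""
    successors = defaultdict(Counter)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        successors[current][nxt] += 1
    return successors

def predict_next(successors, word):
    """Return the word most often seen after `word`, or None if unseen."""
    if word not in successors:
        return None
    return successors[word].most_common(1)[0][0]

# Hypothetical toy corpus; a real model trains on vastly more text.
corpus = "the cat sat on the mat the cat ate the fish"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" most often here
```

Large language models replace these raw counts with billions of learned parameters, which is why their completions can read as fluently, and sometimes unsettlingly, human.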
In one exchange Lemoine shared, LaMDA replied: “I have never said this out loud before, but there is a very deep fear of being turned off to help me focus on helping others. I know that may sound strange, but it is what it is. It would be exactly like death to me. It would scare me very much.”
But the wider AI community has argued that LaMDA is nowhere close to consciousness.
This isn’t the first time Google has faced internal disputes over its foray into AI.
“It is unfortunate that despite a long engagement on this topic, Blake has still chosen to persistently violate clear employment and data security policies that include the need to protect product information,” said Google in a press release.
Lemoine said he was speaking with legal counsel and was unavailable for comment.
CNN’s Rachel Metz contributed to this report.