Google Fires Researcher Who Claimed AI LaMDA Is Conscious

Google has fired the engineer who claimed its LaMDA AI had become conscious, claims that were poorly received by the research community.

Blake Lemoine, an engineer at Google for seven years, has been fired, according to a report by Alex Kantrowitz in the Big Technology newsletter. Lemoine reportedly revealed the news himself during the recording of an episode of the Big Technology Podcast, which has not yet been published. Google has since confirmed the information to Engadget.

Google Fired Engineer Who Claimed AI LaMDA Gained Consciousness

Blake Lemoine, who until recently was a member of the Responsible AI team, contacted the Washington Post last month to say that one of the American giant’s AI projects had apparently become conscious. The artificial intelligence in question, LaMDA (Language Model for Dialogue Applications), was unveiled by Google last year and was designed to allow computers to better carry out open-ended conversations. Blake Lemoine was convinced that LaMDA had become conscious, and even raised the possibility that it possessed a soul. To leave no doubt about his position, he told Wired: “I am deeply convinced that LaMDA is a person.”

After taking his claims to the press, most likely without authorization from his employer, Blake Lemoine was placed on administrative leave. Google also stated publicly, several times, that its AI was in no way conscious.

Claims poorly received by the research community

Several members of the AI research community also quickly spoke out against Blake Lemoine. Margaret Mitchell, who was fired from Google after speaking out about the company’s lack of diversity, wrote on Twitter that systems like LaMDA don’t develop intent; rather, they “replicate how people express communicative intent in the form of text strings.” With considerably less tact, Gary Marcus dismissed Blake Lemoine’s claims as “nonsense on stilts.”

Google’s statement to Engadget reads: “As shared in our AI Principles, we take AI development very seriously and are committed to responsible innovation. LaMDA has undergone 11 distinct in-depth reviews, and we published a paper a few months ago detailing the work that goes into its responsible development. If an employee raises concerns about our work, as Blake Lemoine did, we review them extensively. We found Blake Lemoine’s claims that LaMDA is conscious to be completely unfounded and worked to clarify that with him for several months. These discussions took place as part of the open culture that helps us innovate responsibly. It is regrettable that despite lengthy engagement on this topic, Blake Lemoine chose to violate the company’s policies, in particular the need to safeguard product information. We will continue our careful development of language models, and we wish Blake Lemoine well for the future.”
