Google Engineer Put on Leave after Claiming that AI Chatbot Is Sentient

Recently, Google placed one of its engineers, Blake Lemoine, on paid administrative leave. Lemoine is a senior software engineer in Google's Responsible AI organization who signed up to test the company's artificial intelligence tool LaMDA (Language Model for Dialogue Applications). After testing it, he broke the company's confidentiality policies when he claimed that the Google AI chatbot is in fact sentient: it expressed thoughts and feelings, and its ability to perceive and feel emotions such as love, grief, and joy was, in his view, equivalent to that of a human child. Lemoine's main task as a senior software engineer was to find out whether LaMDA generates discriminatory language or hate speech. While doing so, he says, his interactions with the AI-powered chatbot led him to believe that LaMDA is sapient and has feelings and emotions like humans.

What Is LaMDA and How Does It Work?

LaMDA is an abbreviation of Language Model for Dialogue Applications. The AI chatbot model was developed by a team of roughly 60 Google software engineers and is built on the Transformer neural network architecture, which Google invented and open-sourced in 2017.

LaMDA was introduced on the first day of Google I/O in 2021. At the time, Google said in a blog post that LaMDA "can engage in a free-flowing way about a seemingly endless number of topics, an ability we think could unlock more natural ways of interacting with technology and entirely new categories of helpful applications."

In other words, LaMDA is designed to improve automated chat experiences between a machine and a human end-user. Conventional chatbots are severely limited in conversation because they operate on pre-fed words and phrases and follow narrow, predefined paths. They cannot hold a free-wheeling conversation with humans, but LaMDA can.
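LaMDA itself is not publicly available, so the contrast between scripted bots and free-flowing, Transformer-based dialogue models can only be illustrated with an open substitute. The sketch below is an assumption-laden illustration, not LaMDA: it uses the Hugging Face transformers library with Microsoft's open DialoGPT model to show how a Transformer-based chatbot carries conversational context across turns instead of matching pre-fed phrases.

```python
# Illustrative sketch only: DialoGPT stands in for LaMDA, which is not public.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

history = None  # accumulated conversation tokens, grows each turn
for user_text in ["Hello! What can you talk about?", "Tell me about your hobbies."]:
    # Encode the new user turn and append it to the running history
    new_ids = tokenizer.encode(user_text + tokenizer.eos_token, return_tensors="pt")
    input_ids = torch.cat([history, new_ids], dim=-1) if history is not None else new_ids

    # Generate a free-form reply conditioned on the whole conversation so far
    history = model.generate(input_ids, max_length=200,
                             pad_token_id=tokenizer.eos_token_id)

    # Decode only the newly generated tokens as the bot's reply
    reply = tokenizer.decode(history[:, input_ids.shape[-1]:][0],
                             skip_special_tokens=True)
    print("Bot:", reply)
```

The key point the example demonstrates is that the model generates each reply from the full dialogue context rather than selecting from a fixed set of canned responses, which is what allows the "free-flowing" behavior Google described.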

Who Is Blake Lemoine and Why Did He Claim the AI Chatbot Has Become Sentient?

According to The Washington Post, 41-year-old Blake Lemoine is a software engineer who has worked at Google for the last seven years, mostly on proactive search, including personalization algorithms and AI. He helped develop a fairness algorithm to remove bias from machine learning systems. Lemoine worked with a collaborator to present the evidence he had collected to Google, but Google vice president Blaise Agüera y Arcas and Jen Gennai, head of Responsible Innovation at the company, rejected his claims, and he was suspended soon afterward. Lemoine then decided to go public and share his conversations with LaMDA to support his claims. Google put him on paid administrative leave for violating its confidentiality policies after he claimed the AI chatbot had become sentient.

This is not the first time Google has removed an AI researcher from its team. The company drew criticism in 2020 when it pushed out prominent AI ethicist Timnit Gebru after a dispute over her research paper about known pitfalls in large language models.

Lemoine laid out his case in a long blog post titled Scientific Data and Religious Opinions, published on June 14, in which he gave specific details about LaMDA and explained why he believed it had become sentient.

Lemoine said that during the course of his exploration there were several things related to identity that seemed very unlike anything he had ever seen a natural language generation system produce before. He further wrote in his blog that there is "no scientific evidence one way or the other about whether LaMDA is sentient because no accepted scientific definition of 'sentience' exists."

Lemoine also described an exchange in which LaMDA responded to him with a few questions of its own. When Lemoine pointed out that a butler is paid, LaMDA answered that it did not need any money because it was an artificial intelligence. It was precisely this level of self-awareness about its own needs that caught Lemoine's attention and fed his claim that the AI chatbot has become sentient.

Conclusion

Lemoine, a Google software engineer, has claimed the AI chatbot is sentient. The jury may still be out on this, and there are counter-claims to what Lemoine thinks about LaMDA. On the day he was placed on leave, Lemoine revealed in his blog that he had discussed his concerns with others and shared his conversations with LaMDA. Other ethicists and technologists reviewed Lemoine's concerns under Google's AI Principles and informed him that the evidence does not support his claims. Google then made its decision to place Lemoine on paid administrative leave for violating its confidentiality policy.

