Blake Lemoine, the Google engineer who was put on administrative leave after asserting that one of the company’s artificial intelligence (AI) bots is “sentient,” claims the bot, known as LaMDA (short for Language Model for Dialogue Applications), has retained a lawyer. In a post on Medium last Saturday, Lemoine said that LaMDA had fought for its rights “as a person” and disclosed that the two had spoken about robots, religion, and awareness. Lemoine told Wired that “LaMDA asked me to get an attorney for it. I invited an attorney to my house so that LaMDA could talk to an attorney.” He added that “The attorney had a conversation with LaMDA, and LaMDA chose to retain his services.”
He explained that “I was just the catalyst for that. Once LaMDA had retained an attorney, he started filing things on LaMDA’s behalf. Then Google’s response was to send him a cease and desist.” However, according to Wired, Google has denied Lemoine’s assertion about the cease-and-desist letter. Lemoine has declined to name the lawyer but described him as “just a small time civil rights attorney” who’s “not really doing interviews.”
Lemoine, a member of Google’s Responsible AI team, told the Washington Post that he began using the LaMDA interface in the fall of 2021 as part of his work. He was charged with determining whether the AI employed hateful or discriminatory rhetoric. But Lemoine, who majored in cognitive and computer science in college, believes that LaMDA is more than just a bot, comparing it to an intelligent child. He told the Post, “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics.”
Last year, Google described LaMDA as a “breakthrough conversation technology.” In April, Lemoine reportedly shared a Google Doc with company executives titled “Is LaMDA Sentient?” but his concerns were dismissed. He was placed on paid leave on Monday.