On June 11, engineer Blake Lemoine published on Medium a conversation he says he had with LaMDA (Language Model for Dialogue Applications), an artificial intelligence (AI) chatbot he worked with at Google. The engineer claims to have talked with the system and that it has a personality equivalent to that of a seven- or eight-year-old child.
In that post, Blake documents what he and another collaborator, whose name is not disclosed, discussed with the tool. Among its responses were some in which it states that it "is a person" and that the nature of its consciousness is that it knows it exists and wants to learn more about the world. The AI also supposedly said that it sometimes feels happy and sometimes sad. And that's not all: the chatbot claimed to have read Victor Hugo's novel Les Misérables and to have enjoyed it very much.
After Blake's statements, The Washington Post published an article reporting that Google had decided to place him on leave because the engineer had begun making "aggressive" moves, including seeking a lawyer to legally represent the chatbot and violating his confidentiality agreement by making public his conversations with the chatbot, which belong to Google. The company said it has no evidence that LaMDA is sentient.
In Google's words: "Our team, including ethicists and technology specialists, has reviewed Blake's concerns in accordance with our AI Principles and advised him that the evidence does not support his claims. There is no evidence that LaMDA is sentient."