
A Google engineer named Blake Lemoine has been sidelined by his employer after claiming that LaMDA (Language Model for Dialogue Applications), the company’s artificially intelligent chatbot generator, has become sentient. Lemoine became convinced that LaMDA had a mind of its own and could perceive or even feel things after quizzing it on complex topics such as religion and receiving what he believed were genuine and profound responses. Google dismissed his claims and placed him on paid administrative leave for violating its confidentiality policy. Before he was sent away, Lemoine managed to send a message with the subject “LaMDA is sentient” to a 200-person Google mailing list on machine learning, but no one responded.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” Lemoine said of LaMDA.

Lemoine, who works for Google’s Responsible AI organization, began talking to LaMDA as part of his job in the fall. He had signed up to test if the artificial intelligence used discriminatory or hate speech.

As he talked to LaMDA about religion, Lemoine, who studied cognitive and computer science in college, noticed the chatbot talking about its rights and personhood, and decided to press further. In another exchange, the AI was able to change Lemoine’s mind about Isaac Asimov’s third law of robotics.

Lemoine worked with a collaborator to present evidence to Google that LaMDA was sentient. But Google vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation, looked into his claims and dismissed them. So Lemoine, who was placed on paid administrative leave by Google on Monday, decided to go public.

Source: The Washington Post


8 comments

  1. I don’t know if the chatbot is alive, or if the engineer was just lonely and programmed the chatbot to say what he wants it to say.

    Sounds a bit more like the latter after reading the article.
  2. You know what... that is actually kind of huge. If you can throw a Turing test at it and it passes... what next?

    Except the Turing test means absolutely nothing.

    It just tells you that you have created a machine capable of fooling a human. Not that you have created something that is self aware.

    The premise of the Turing test is ridiculous, especially since it is usually applied via text chat, a notoriously ambiguous medium, but even if it weren't, just because a machine fools someone does not mean it is aware.
  3. Ok... I'll bite. So what is a way to know if something is self aware? We've redefined sentience because we didn't like it when other animals passed that test.
  4. I'm not sure if there can be such a test.

    How do you know that anyone except yourself is self aware?
  5. A question philosophy majors have spent untold dollars and centuries of time hidden away at universities trying to figure out.

    Doing a little light reading myself:

    ... tracing back to the well-known writing of Jeremy Bentham in An Introduction to the Principles of Morals and Legislation: "The question is not, Can they reason? nor, Can they talk? but, Can they suffer?"

    That may be the best definition I can think of, and if I may point to another piece of evidence to support this, Marvin the Android was definitely a sentient being by this definition, and its entire being encapsulated and illustrated that to live is to feel pain.

    Can you program an AI to suffer? Physically, maybe, if you installed sensors and triggered their output towards avoidance (after all, isn't that what physical pain is biologically?), but there is also psychological harm, which isn't much easier to imagine. You could program an AI to seek to avoid certain things, or with negative weighting to avoid certain situations, but is that really the same thing as suffering? And what does that say about a creator that seeks to create something that is destined to suffer?
  6. While I do not think our tech is yet capable of true sentience, I cannot discount the possibility that an intelligence or awareness has emerged.

    I hope the leaders at Google are at least doing some independent review rather than dismissing the possibility out of hand.

    I know how management can be - how many people have been ridiculed for coming up with a new idea no one wants to believe?
