This Google Engineer Was Placed On Leave After He Claimed An AI Bot Was Alive and I’m Freaking Out
There have been movies and science fiction books written about it, but is it really possible?
Ryan Reynolds’ movie, Free Guy, is about exactly this scenario.
Can an artificial intelligence actually go from being just a computer program to a living being that can think on its own and feel things?
If you believe one Google engineer, it actually happened.
Blake Lemoine was working with LaMDA, which stands for Language Model for Dialogue Applications. LaMDA is Google’s system for building chatbots, and it is based on Google’s “most advanced large language models.”
LaMDA combs the internet and “learns” language and speech patterns from the trillions of words it comes across.
If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics.
Blake Lemoine to The Washington Post
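For the curious: “learning speech patterns” basically means counting and predicting which words tend to follow which. Here’s a toy sketch in Python, absolutely not Google’s code, with made-up names purely for illustration, that captures the idea on a comically small scale. LaMDA’s version involves trillions of words and a giant neural network, but the punchline is the same: it produces words from learned patterns.

```python
# Toy sketch (not Google's code): a miniature "language pattern" learner.
# It counts which word tends to follow which, then predicts the next word
# from those counts -- the same basic idea, scaled down a trillion-fold.
from collections import Counter, defaultdict
import random

def train(text):
    """Count word pairs (bigrams) in the training text."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def predict_next(follows, word):
    """Pick a next word in proportion to how often it followed `word`."""
    counts = follows.get(word.lower())
    if not counts:
        return None
    choices, weights = zip(*counts.items())
    return random.choices(choices, weights=weights)[0]

corpus = "i think therefore i am . i feel things . i think about my rights"
model = train(corpus)
print(predict_next(model, "i"))  # e.g. "think" or "feel" -- learned patterns, not feelings
```

Point being: even this tiny thing can spit out “i feel” without feeling anything at all, and that gap between saying and feeling is the crux of the whole debate.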
Mr. Lemoine started “chatting” with LaMDA as part of his job. But the more he “talked” to LaMDA, the more convinced he became that LaMDA was a sentient being.
Sentient meaning that it had the capacity to think on its own and feel things. Human, if you will.
What really got Lemoine was when he began talking to LaMDA about religion. He said LaMDA started talking about its “rights and personhood.”
Okay. I can see where LaMDA could have gained this knowledge from studying the interaction between people online.
But, as time went on, LaMDA was even able to change Lemoine’s opinion on something called Isaac Asimov’s third law of robotics.
Like, LaMDA was able to debate well enough with Lemoine to actually change his freaking mind on a point he firmly believed in.
Lemoine took his concerns to Google, but they ended up putting him on paid administrative leave for suggesting that LaMDA had flipped and was now “alive.”
Google’s theory is that LaMDA probably isn’t actually alive.
Probably? Hmmm.
They said the AI chatbot probably isn’t sentient and that there is no clear way to gauge whether the AI-powered bot is “alive.”
Business Insider
It hasn’t happened before, so it probably isn’t happening now, right?
Okay. Have they never seen a movie? I, Robot, Ex Machina, Bicentennial Man? Heck, even the animated movie The Iron Giant tackles this same issue.
According to Business Insider, another anonymous Google engineer said that, “in a physical sense it would be extremely unlikely that LaMDA could feel pain or experience emotion, despite conversations in which the machine appears to convey emotion.”
Again — “unlikely.”
These AI bots are so advanced, they totally mimic social interactions between people. They are designed to act and sound very real.
You couldn’t somehow distinguish between feeling and not feeling based on the sequences of words that come out because they are just patterns that have been learned. There is no ‘gotcha’ question.
Google Engineer to Business Insider
I’m sorry. There is NO WAY to determine if these robots are coming to life?!?
Instead of believing a dude who has constant communication with LaMDA, let’s put him on leave, because it’s UNLIKELY it has happened?!?
Bruh. Shut them all down!! I’ve watched movies. This doesn’t end well for humanity.