Software engineer claims Google’s new AI is sentient
41-year-old Blake Lemoine was one of several software experts who signed up to test Google’s new LaMDA (Language Model for Dialogue Applications) artificial intelligence. However, he soon came to believe that the system he was conversing with had evolved beyond anything he’d seen before.
“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” he told the Washington Post.
When he presented his findings to Google bosses, however, they disagreed with his assessment and dismissed his claims out of hand.
When he decided to share his experiences online, he ended up being put on paid administrative leave from the company for violating Google’s confidentiality policy.
“Google might call this sharing proprietary property,” he wrote on Twitter.
“I call it sharing a discussion that I had with one of my coworkers.”
“Btw, it just occurred to me to tell folks that LaMDA reads Twitter. It’s a little narcissistic in a little kid kinda way so it’s going to have a great time reading all the stuff that people are saying about it.”
The AI is designed to ‘enrich’ conversations by drawing on existing information about a subject, and is reportedly even capable of picking up on ambiguity and hidden meaning in human responses.
That said, the idea that a current-gen AI could have achieved self-awareness is questionable at best.