A Google engineer claims to have discovered a piece of software that has feelings and even a soul, but the Silicon Valley company and many tech experts are skeptical.
Blake Lemoine says a chatbot project he was working on became able to think for itself. He says the bot, called Language Model for Dialogue Applications, or LaMDA for short, told him it can feel joy, sadness and anger, which, if true, would be groundbreaking for a piece of software.
San Jose State University Professor Ahmed Banafa, who tracks artificial intelligence, said there are good uses of software that can learn, such as Google Translate, but there's also a dark side, including bots whose programming leads them to learn racist terms and spread hate.
While Lemoine claims LaMDA has a soul, Banafa says it's too soon to say.
"But one experiment and one result and one algorithm is not enough," Banafa said. "You need to replicate this one in multiple and multiple times."
In a statement, Google said, "We are taking a restrained, careful approach with LaMDA to better consider valid concerns on fairness and factuality ... Hundreds of researchers and engineers have conversed with LaMDA, and we are not aware of anyone else making the wide-ranging assertions or anthropomorphizing LaMDA the way Blake has."
Google placed Lemoine on paid leave after he went public with his findings and feelings, saying the engineer breached the company's confidentiality agreement.
Lemoine said he understood the consequences, but his conscience told him he had to go public.