Culture

No, Google’s Chatbot Isn’t Sentient, We’re Just Idiots

"We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them."

Last week, a Google engineer made global headlines for claiming that his company’s A.I. chatbot LaMDA had become self-aware.

In an official submission to Google, engineer Blake Lemoine expressed fears that the company’s chatbot LaMDA (short for ‘Language Model for Dialogue Applications’) had developed the capacity to think for itself. After he raised his concerns to his superiors, an internal investigation by both the head of Responsible Innovation and a Google vice president found Lemoine’s claims baseless, and the engineer was placed on administrative leave. In response, Lemoine went public.

Labelling LaMDA a ‘coworker’, Lemoine shared pages of his dialogue with the chatbot on social media. From spouting Kant to eerily expressing a fear of being ‘switched off’, LaMDA’s seemingly human conversations quickly went viral.

Lemoine’s perspective was even captured in a lengthy Washington Post article on artificial intelligence, which ultimately sided with leading ethicists and computer science experts in disputing that the chatbot was demonstrating true sentience.

“We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them,” linguistics professor Emily M. Bender told The Washington Post. 

Bender’s words echo the main argument ethicists and scientists make in disputing LaMDA’s sentience: humans are desperate to believe that a spark of life exists within dead machines.

ELIZA And The Gullibility Gap 

Since the very beginning of A.I. technology, human beings have been utterly captivated by its sentient potential, however baseless. MIT professor Joseph Weizenbaum, creator of the very first chatbot, ELIZA, encountered this back in 1964. Weizenbaum was shocked when his own secretary became so absorbed in conversation with the machine that she asked him to leave the room so she could continue chatting with the chatbot in private.

“What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people,” Weizenbaum noted of the experience.

Contemporary cognitive scientist Gary Marcus summarises the phenomenon as ‘the gullibility gap’, describing it as a “pernicious, modern version of pareidolia, the anthropomorphic bias that allows humans to see Mother Teresa in an image of a cinnamon bun”.

Lemoine himself would later acknowledge that his claims of LaMDA’s sentience had no scientific basis, and rested instead on his religious beliefs.

The Question Of A.I. Sentience Distracts From Bigger Ethical Questions

Debates over whether chatbots like LaMDA can think for themselves obscure the bigger ethical questions in A.I. right now. For example, ethicists in the US are warning that the use of artificial intelligence in hiring and housing is perpetuating structural racism. The American Civil Liberties Union has warned that A.I. tools currently used by real-estate companies to screen potential tenants regularly discriminate against people of colour.

“People are regularly denied housing, despite their ability to pay rent, because tenant screening algorithms deem them ineligible or unworthy,” the ACLU reported last year. “These algorithms use data such as eviction and criminal histories, which reflect long-standing racial disparities in housing and the criminal legal system that are discriminatory towards marginalized communities.”

Additionally, because chatbots learn to sound more ‘human’ by adapting to the language they’re fed, trolls have been able to coach machines into spouting racist epithets, as when Microsoft’s chatbot Tay was turned racist in less than a day.

More recently, ethicists have warned that A.I. trained on public social media spaces like Facebook and Reddit can mimic human behaviour so closely that deceased users could be impersonated by chatbot doppelgängers.

Despite these arguments from leading A.I. experts, Lemoine remains convinced that LaMDA is having a “great time” reading all of the commentary its supposed self-awakening has stirred up.

Even if Lemoine is right and we’re on the cusp of accidentally creating a ‘Terminator’-esque cyber villain, it almost certainly won’t be the end of the world.