Do Humans Dream of Electric Minds? How language influences our thinking about AI

“One of our persisting challenges is excessive hallucinations.”

I’ll cut right to the chase: AI systems are nothing like human beings, but our language makes us think they are. That’s a problem, for us and for the AI systems we’re building.

When ChatGPT was introduced to the public in November 2022, people were baffled to discover that when you asked the system a question, it would often return text presenting information that appeared true at first blush but on closer scrutiny turned out to be only partially true or even entirely fabricated. A barrage of news articles and social media commentary followed about how the AI systems were “lying” or intentionally “deceiving” us, as if these passive computer systems were acting out of some form of malice.

AI experts explained that this is a well-known phenomenon called “hallucinations.” The term quickly took root in the public consciousness and provided a platform from which our common understanding of these technologies would grow:

“If a machine can hallucinate like me, it must have a mind like mine.”

A relatable metaphor can be a useful way of explaining something complex by referring to something similar and less complex. Metaphors are found throughout our everyday language: “You are an angel for doing this,” “she was on fire today,” “I am toast.” Taken literally these are nonsensical statements, yet their real-world meanings are easily understood by people with sufficient language skills and shared cultural and societal experience.

Metaphor can also be a useful tool for explaining complex concepts without requiring the listener to understand the full complexity. When our son complains about his hands hurting after he’s been coloring for a long time and I tell him it’s because they are “tired,” he understands he needs to give them a “rest” without needing to understand the physiological causes of muscle fatigue. When a TV show is abruptly interrupted by an error message and I tell him it’s because our TV can’t talk to the streaming service, he understands this is a communication problem without needing to understand the intricacies of HTTP, DNS servers, or packet loss.

So when seemingly all-knowing AI systems inexplicably fabricate information, it’s easier to explain what happened through the metaphor of hallucination than it is to explain the inner workings of computer systems that even the people who build them do not fully understand.

Throughout the history of the science of artificial intelligence, we’ve used metaphorical language rooted in human cognition and behaviour to explain how these systems operate. The term “artificial intelligence” is itself a metaphor, describing systems whose capabilities go beyond traditional computer systems and are “smart” the way humans are smart. Saying AI systems “learn” about the world through “training” uses education metaphors to simplify the enormously complex machine learning algorithms and processes that go into building their models. Saying AIs have “knowledge,” “reasoning” capabilities, and the ability to “follow instructions” uses metaphor to explain their often surprising power. We use the metaphor of human communication and interaction when we tell people to “have conversations” with the systems and refer to them as individuals with human traits like attitudes and emotions.

When we use these metaphors to describe AI, people get enough of an understanding of what’s going on to be able to speak about these systems and see how they can fit into their lives and work without having to understand their technical underpinnings.

The problem is that by using anthropomorphic language – metaphors referencing human traits – we construct an image in our minds of these systems as variants of ourselves: machines that are intelligent like us humans, that learn about the world through training like us humans, that hold knowledge and reason like us, that follow instructions like us, and that have conversations like us. And when some of those systems use our own very human language as both input and output, our metaphors get validation in the real world, and we start thinking of the machines and their software as living, conscious agents even when we know they are not.

No wonder, then, that when an AI outputs information that looks true but turns out to be a fabrication, we continue the pattern and describe the machine as a liar.

We could have, and probably should have, chosen more technical language for these machines, but in doing so we’d have missed out on the magic and the marketing. “This is the courtyard and Juliet is a human on the balcony” pales beside the evocation of “This is the east and Juliet is the sun!” because the language we use colours and shapes our understanding of the world. So now that we’ve chosen human metaphors to describe systems of non-linear computing algorithms that process information and build network models, perform advanced retrieval from data graphs and calculate responses based on neural networks, take input and produce output in the form of tokens, and generate statistically plausible but sometimes fabricated token sequences that reproduce human language, we must always be on guard against the hallucinations our language conjures within us.

I fear in our attempt to make AI more understandable we have committed an unintentional act of self-deception. The metaphor of humanity rides too close to our dreams of machines built in our image, and our language makes us confuse those dreams with reality.

In the Age of AI, our biggest challenge may be overcoming our own excessive hallucinations.

Cross-posted to LinkedIn.

By Morten Rand-Hendriksen

Morten Rand-Hendriksen is a Senior Staff Instructor at LinkedIn Learning (formerly lynda.com), specializing in AI, bleeding-edge web technologies, and the intersection between technology and humanity. He also occasionally teaches at Emily Carr University of Art and Design. He is a popular conference and workshop speaker on all things tech ethics, AI, web technologies, and open source.