Tools and Materials: A Mental Model for AI

“Language shapes the way we think, and determines what we can think about.”

Benjamin Lee Whorf

Before we begin, I asked ChatGPT to rewrite this article at a 4th grade reading level. You can read the result here.

Artificial? Yes. Intelligent? Not even close. It is not without reason that things like ChatGPT are called “AI” or “Artificial Intelligence.” We humans have a propensity for anthropomorphizing – attributing human characteristics to – things that are not human. Thus if we are told something is intelligent – say, a very large computer system we can submit questions to and get answers from – we look for intelligence in that thing. And if that thing is trained on our own language and art and mathematics and code, it will appear intelligent to us because its training materials came from intelligent beings: us.

“Artificial Intelligence” is a clever marketing term for computer models designed to appear intelligent even though they are not.

So, as we crash headfirst into the AI present and future, we need to reset our mental model before we start believing these things we call “Artificial Intelligences” are actually intelligent (again, they are not).

Tools and Materials

I propose we all start thinking of these things we call “AI” as tools and materials. Because that’s what they are and that’s how we’ll end up using them.

Sometimes we’ll use them as tools the same way we use our phones and computers and the apps on them as tools. Sometimes we’ll use them and what they produce as materials the same way we use printed fabrics and code snippets to create things. And sometimes we’ll use them as both tools and materials the same way we use a word processing application: first as a tool with which we write a body of text, then as a material when its thesaurus function helps us find more fanciful words and phrases.

Here are some basic examples to help you build the mental model:

AI as a tool performs a task for us:

  • Fill out tax forms, write contracts and legal documents.
  • Summarize text, rewrite text to a specific reading level.
  • Write code.
  • Online shopping, including booking flights and hotels.
  • Any interaction with a customer service representative (CSR).
  • Magic eraser for images, video, and audio.

AI as a material generates something for us:

  • Simple stories.
  • Plot lines for stories.
  • News articles and summaries.
  • Images and other art.
  • Variants of a layout, or a theme, or an image, or a painting.

Thinking of AI as tools and materials rather than intelligent things with magical human-like powers is an essential mental shift as we figure out how to fit these things into our lives and our world. We have to move away from the linguistic trick their creators foisted upon us with their naming, and move towards the practical realities of what these things really are:

AIs are enormously complex prediction machines, built by ingesting all available writings, imagery, and other human-made materials and filtering that data through pattern-matching algorithms. Given a prompt, they produce the statistically most plausible continuation of it.

They are regurgitation machines echoing our own works back to us.

And just like we are drawn to our own image every time we pass a mirrored surface, we are drawn to the echoes of ourselves in the output of these machines.
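The “regurgitation” point can be made concrete with a toy sketch. This is a deliberately simplified illustration – a bigram Markov chain, not how modern language models are actually built – but it shows the core dynamic: the generator can only ever emit words it has seen in its training text, recombined according to observed patterns. The corpus and function names here are hypothetical examples.

```python
import random

def train_bigrams(text):
    """Build a table mapping each word to every word observed directly after it."""
    words = text.split()
    table = {}
    for current, following in zip(words, words[1:]):
        table.setdefault(current, []).append(following)
    return table

def generate(table, start, length=8, seed=0):
    """Emit text by repeatedly picking a word that followed the current word
    in the training data. Nothing new is ever invented."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = table.get(out[-1])
        if not followers:
            break  # dead end: the current word never appeared mid-sentence
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
table = train_bigrams(corpus)
print(generate(table, "the"))
```

Every word in the output comes straight from the training corpus; only the arrangement varies. Scale that idea up by many orders of magnitude and you have the echo effect described above.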

Shallow Work and Human Creativity

Asked for one word to describe AIs, my immediate answer is “shallow.” You’ve probably felt this yourself without being able to put your finger on it. Let me explain:

There is a bland uniformity to AI output. It’s easiest to notice in generative AI images. Once you’ve been exposed to enough of them, they start taking on a very specific “AI-ness.” For all their variety, there is something recognizable about them – some defining feature that sets them apart from what we recognize as human-made images. That thing is shallowness.

AIs are conservative in the sense that they conserve and repeat what already exists. They don’t come up with anything new. They are also conformist in the sense that they lean towards what is predominant, what there is more of. They are swayed by trends and popularity and amplify whatever majority opinion they find in their training data.

This makes their output bland and uniform and shallow like a drunk first-year philosophy student at a bar: The initial conversation may be interesting, but after a few minutes you notice there’s little substance behind the bravado. I’ve been that drunk first-year philosophy student so I know what I’m talking about.

This means while AIs are great at doing shallow rote work, they have no ability to bring anything new to the table. They lack creativity and ingenuity and lateral thinking skills because these skills require intelligence. And AIs are not intelligent; they just play intelligent on TV.

Will an AI take my job?

Our instinctual response to any new technology is “will it take my job?” It’s a valid question: Jobs are essential for us to be able to make a living in this free-market capitalist delusion we call “modern society,” yet job creators have a tendency to let go of expensive human workers if they can replace them with less expensive alternatives – like self-checkout kiosks that constantly need to be reset by a staff member because you put the banana in the bagging area before you chose whether to donate $2 to a children’s charity, or automated “voice assistants” that never have the answers to your customer service questions and only pass you to an actual human once you’ve repeated the correct incantation of profanity (try it, it totally works!).

So now that we have these things some clever marketing people have told us to call “AI,” are they coming for your job? Well, that depends:

If your job is shallow and constitutes mainly rote work, there’s a good chance an AI will enter your life very soon – as in within months – and become part of the toolkit you use to get your job done quicker. And if it turns out that AI can be trained to do your job without your intervention (by having you use it and thereby training it), there’s a non-zero chance it will eventually replace you. That chance hinges more on corporate greed than it does on AI ability, though.

If your job involves any type of creative, or deep, or lateral, or organizational, or original, or challenging, or novel thinking, AI will not take your job because AI can’t do any of those things. You’ll still work with AI – probably within months – and the AI may relieve you of a lot of the rote work that currently takes your attention away from what you were actually hired to do, but the AI is unlikely to replace you. Unless corporate greed gets in the way. Which it often does, because of the aforementioned free-market capitalist delusion we call “modern society.”

What we all have to come to terms with today is that we’re long past the point of no return when it comes to AI. While no technology is inevitable, technology often becomes so entrenched it is impossible to … un-entrench it. That’s where we are with AI. No matter where you live and what you do for work, for school, or in your own time, you’re already interacting with AIs in more ways than you can imagine. And these AIs are going to become part of your work, your school, and your home life whether you want them or not.

Our job now is to talk to one another about what role these things called “AI” are going to play in our lives. How do we use them in ways that don’t take jobs away from the humans who need them the most – the historically marginalized and excluded people who tend to hold jobs comprising mainly shallow rote work? How do we build them in ways that don’t cannibalize the creative works of artists and writers and coders and teachers? How do we incorporate AI into education to improve learning outcomes for students and build a more informed and skilled populace? How do we wrench control over our AI future from the surveillance capitalists and longtermists currently building the world to their libertarian techno-utopian visions?

How do we use AI and all technology to create human flourishing and build futures in which we all have the capabilities to be and do what we have reason to value?

If we don’t talk about the future, the future becomes something that happens to us. Let’s have this conversation.

Cross-posted to LinkedIn.

By Morten Rand-Hendriksen

Morten Rand-Hendriksen is a staff author at LinkedIn Learning, specializing in WordPress and web design and development, and an instructor at Emily Carr University of Art and Design. He is a popular speaker and educator on all things design, web standards, and open source. As the owner and Web Head at Pink & Yellow Media, a boutique digital media company in Burnaby, BC, Canada, he has created WordPress-based web solutions for multinational companies, political parties, banks, small businesses, and bloggers alike. He also contributes to the local WordPress community by organizing Meetups and WordCamps.