“An AI may not be used to harm humanity, or, by not being used, allow humanity to come to harm.”
As Artificial Intelligence (AI) systems like #ChatGPT enter our lives and our work, we need some basic guidelines for how we implement them going forward. Here’s a place to start:
The Zeroth Law of AI Implementation:
An AI may not be used to harm humanity, or, by not being used, allow humanity to come to harm. Implement AI in such a way that you treat humanity, whether in your own person or in the person of any other, never merely as a means to an end, but always at the same time as an end, and treat AI always as a means to an end and never as an end in itself.
My proposed Zeroth Law of AI Implementation aims to ground us in a shared purpose and vision for how we build our future with AI. It sets forth four basic principles:
Do No Harm* with AI.
Harm can be caused by having a tool and refusing to use it or otherwise limiting its use. For example, harm can be caused by limiting access or capability based on factors including socio-economic status, geography, disability, etc.
Humans are always ends in themselves, and must never be treated merely as means to an end.
AIs are always means to (human) ends, and must never be ends in themselves.
* We need a clear definition of what “harm humanity” means, including 1) who gets to determine what constitutes harm, 2) who can be harmed, and 3) who adjudicates whether harm has been caused.
The Reason
The goal of technology is to build futures where humans have the capabilities to be and do what they have reason to value. AI technology presents an infinite possibility space for us to make that happen. AI technology also gives us the potential to limit our own capabilities and cause real harm.
These statements seem self-evident, yet when technology is developed and implemented, we often forget its core purpose as we are blinded by the capabilities of the technology (it becomes an end in itself), by the power it affords us (those with access gain powers not afforded to others), and by the financial opportunities it affords us (using technology to turn humans into means for financial ends).
Grounding ourselves in a common principle like this proposed Zeroth Law of AI Implementation reminds us of the core purpose of technological advancement. It gives us a reference point we can use to evaluate and challenge our decisions, and a shared foundational platform we can build further principles on.
Discuss
Right now, at the start of our future with AI, we have a unique opportunity to talk about where we want to go next and how we want to get there. That conversation starts with talking about core principles. The Zeroth Law of AI Implementation is my contribution to this conversation. I’d love to hear yours!
I have a confession to make: My university degree sits atop a mountain of lies I told to mask my dyslexia. Now AI is positioned to make education more accessible so future students don’t have to lie to get where they need to go.
I read maybe a quarter of the required materials for my university studies. My term papers were filled with quotes from books I never even opened. I became a master at convincing my fellow students to give me summaries, quick explainers, and relevant quotes from materials I knew I’d never be able to finish in time to meet the inaccessible demands of academia. And after five years and a degree, I abandoned my hopes for a graduate degree in philosophy to pursue other avenues where my reading disability was not a constant blocker.
Today, as we stand at the beginning of a new era of computing, one shining beacon in the infinite possibility space of AI is making education more accessible.
AI can make education more accessible today
Here are a few ideas for how we can implement existing AI tools in education right now to dramatically improve accessibility:
AI audiobooks on demand: AI can generate natural sounding audiobooks from any written text. Audiobooks of academic texts are hard to come by and prohibitively expensive. AI can solve that problem and allow the reader to choose their preferred reading modality. This tech already exists (Apple has shipped it).
AI translations to any language: AI models are very good at translating text from one language to another. This means academic texts written in one language can now be accessible in any language. Again, the technology already exists in the form of Google Translate etc.
AI reading level adjustment: You know that “explain it to me like I’m five” meme? AI language models like ChatGPT can do that, and ensure the salient points and meaning of the text are preserved. Academic texts are often needlessly arcane and turgid. I predict in the near future we’ll have browser plugins and other AI-powered services where you can set the reading level and writing style of any text to your preference while preserving its meaning.
AI summaries on demand: Using summaries of long texts to reinforce learning has a long tradition in academia. There’s a lucrative industry and pop culture mythology around services like Coles Notes and CliffsNotes. AI can be used to generate custom summaries from any text, long or short, to make it more accessible to people like me who can’t read the whole thing.
AI assessments, flashcards, and other learning tools: Dump any text into an AI and ask it to generate assessments, flashcards, questions, examples, or other things. The possibilities here are limitless.
AI auto-captions and transcripts of audio, video, and in-person events: Five years ago, captions were an expensive nice-to-have very few could afford or cared about. Today, auto-captions are available on every LinkedIn and YouTube video, and most platforms also provide verbose transcripts on the fly. These same technologies are used for live captioning in video chat apps like Teams, and can be used at live events, including in classrooms. The technology is available, and quite frankly I can’t think of any good reason it is not immediately implemented across all educational campuses worldwide to provide improved accessibility. Imagine knowing you will have a transcript at the end of every lecture so you can focus on understanding what is being taught instead of just writing it down!
The possibility space here is infinite!
Let me guess: You have concerns. About the accuracy of AI summaries and transcripts and translations. About whether leaving AIs to do this work will take jobs away from humans. About whether students relying on AIs will result in the students not learning anything.
Here’s my reality check to you: The alternative is students not learning at all. The alternative is students like me lying through their teeth to pass arbitrary tests of reading speed (that’s what mandatory minimum page counts are). The alternative is large numbers of people being excluded from education because we’ve chosen to tie access to education to reading ability and speed rather than literally anything that has to do with actual learning.
As we build our future with AI, our conversations about AI in education should be about how these technologies can lower barriers and make education accessible and possible for everyone.
Reading ≠ Learning
Much handwringing and professional dread has been published in the past months around the risks of AI models like #ChatGPT being used by students to plagiarize materials and generally cheat on their papers and exams. Headlines like “ChatGPT passes exams for MBA courses and medical licences” are addictive clickbait in an environment of economic uncertainty and pandemic fatigue. But look beyond the immediate “the graphing calculator / internet / smartphones / YouTube / chat apps / smart watches / AI will destroy education and plunge us back into the dark ages” response and you see the potential of this technology to make education not only better, but more available and accessible to more people. If we build the tools to make that future happen.
I have moderate dyslexia, meaning I read at about speaking pace, and when I read, the cognitive load of just getting the words right and keeping track of where I am is exhausting. Reading a regular-length novel cover to cover can take me six months. Reading a densely worded academic textbook is a multi-year project. But in university, the expectation was for me to consume up to 5,000 pages of material each semester. So while I never wanted to lie to my fellow students, my professors, and my academic advisors about my reading, I was forced to. Academia, and education in general, is built on a core assumption that people’s ability to read, and read fast, is tied to their ability to learn, grow, innovate, and build the future. I am walking proof this is not the case, and I’m also a prime example of how reading speed is a permanent blocker for many educational paths and the careers that follow.
Which is absurd because it is not the reading that matters, it’s understanding the ideas captured in the reading materials.
Enter AI
Last week I was trying to explain the ethics of Amartya Sen’s Capability Approach to a friend. One of the core ideas of the Capability Approach is that people should have the capabilities and functionings to be and do what they value and have reason to value in their lives and environments. Which raises the question: What is the difference between what we value and what we have reason to value?
This is not something Sen answers directly. Instead, it is something you glean from reading several of his published works. As with much of academic philosophy, this distinction is at once essential to understanding the theory and lacking a common-sense explanation.
So I asked ChatGPT: “What is the difference between what we value and what we have reason to value?”
Here’s what the AI answered:
“What we value refers to the things that we personally consider important or desirable, such as material possessions, relationships, or personal achievements.
What we have reason to value, as described by Amartya Sen in the capability approach, refers to the things that we have the capability to achieve and that are consistent with our personal values and goals. In other words, it refers to the things that we would value if we had the opportunity to achieve them.
For example, one might value owning a house, but may not have the capability to buy one due to financial constraints. In this case, having a safe and affordable place to live would be something they have reason to value. In this way, the distinction between what we value and what we have reason to value is one of actual opportunity or capability. The capability approach takes into account the real opportunities for people to achieve the things they would value if they had the capability to do so.”
Screenshot of the ChatGPT output quoted above.
This is extraordinary. Notice how the system not only provides an understandable and plain language explanation of the subject, but also manages to pick up on the obscure origins of my question without the prompt providing what I consider sufficient context?
This explanation of the difference between what we value and what we have reason to value is so good it should be used in introductory textbooks on Sen’s work. And it serves as an example of what AI can do for education if we build AI-powered systems for education.
And Sen’s distinction between what we value and what we have reason to value is salient to this issue: We as a society value education, and we have reason to value education for all. Even so, there are millions of people around the world whose real opportunities to access education are limited for an endless list of reasons. AI can grant us all the capabilities to access education in new and better ways, allowing more of us to do and be what we have reason to value in our lives, our schools, and our work!
Our future with AI is the future we build
We are at the beginning of the beginning of our future with AI. Ahead of us is an infinite possibility space. And more now than at any time in human history we have the capabilities and agency and communal connection and global wealth and power to build positive and meaningful futures for ourselves and those around us together.
It starts with looking beyond the perpetual shock of disruption every new technology brings to what those new technologies can do for us as we integrate them into our lives and our work. It starts with talking about the futures we want to build for ourselves and how we make them real. It starts with seeing the world, thinking about how to make it better, and then making it happen.
If we do this right, AI can make our jobs and our lives easier and give us time back to do the things we have reason to value. Pair programming with AI serves as a practical example.
With the realization of AI’s power comes well-justified concerns about how AIs will figure into our lives – and in particular our work. Look to any media outlet and you’ll find a dense fog of articles, videos, podcasts, and think pieces about whether, when, and how AIs will take people’s jobs, and whose jobs are most at risk right now.
In this darkness, let me put up a bright beacon on the horizon of possibility and give you a glimpse of what a future of human-AI collaboration can look like.
Explain it to me
You bump up against a problem at work: an Excel formula you’ve forgotten, an inscrutable data processing script written by people no longer on the team, the right way to invoke a particular JavaScript function while being mindful of state. These situations are common, and they consume significant time and cognitive resources. They are also what I call “robot work,” as in the kind of repetitive rote work you can imagine a robot doing.
Now imagine having a skilled co-worker on call, at all times, ready to help you find and explain the right formula, document that inscrutable script, and refactor or even build from scratch that JavaScript function you need.
That’s what AI can be for us: Just-In-Time assistants for all the tedious, time consuming, and rote robot work taking up our valuable time and cognitive capacity.
If you’re a developer, you can experience this future today via various AI integrations including GitHub Copilot and ChatGPT.
GitHub Copilot Labs panel in VS Code.
GitHub Copilot coupled with the new GitHub Copilot Labs extension in VS Code gives you a pair programming assistant right in your development environment. Highlight any block of code and in the Copilot Labs panel you can ask for an explanation of the code, have it translated into another (applicable) code language, use a series of “brushes” on it including making the code more readable, adding types, cleaning, chunking, even documenting. You can even use Copilot to write and run tests on your code.
A myriad of ChatGPT extensions, including Ali Gençay’s ChatGPT for VS Code, do much the same via a slightly different route. Authenticate the extension with OpenAI’s ChatGPT API, highlight any code, and you can ask ChatGPT to add tests, find bugs, optimize, explain, and add comments automatically. You also get the ability to start a full chat with ChatGPT in a dedicated panel right inside the editor, where you can talk to the AI in more detail about whatever you want.
Features from the ChatGPT VS Code extension.
Time and Energy
This past week I’ve been working on a large project involving code written by someone else. The JavaScript of the project is collected in two giant files, and is an unruly mix of carbon copies of standard components and functions, modified code copied from documentation sites, and custom code. As is often the case, the documentation is lacking where it’s most needed.
For my project, I need to refactor (rewrite the code to do the same thing in a different way) large parts of the code, and add extensive documentation along the way. That means figuring out how the code works, adding comments to clearly document and explain it, reorganizing the code so it can be split into components, cleaning up and updating the code to modern standards, and finally splitting it up into discrete modules. This is robot work: rote, repetitive tasks consuming significant time and cognitive energy better spent focusing on what the code is for: serving as an example for a course on JavaScript build processes.
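To make that robot work concrete, here’s a minimal, hypothetical sketch of the kind of modernization involved – not the actual course code, just the flavor of it: a legacy helper from one of those giant files rewritten as a small, documented ES6 module.

```javascript
// Before (legacy.js): pre-ES6 style buried in one giant file, no documentation.
var legacyCartTotal = function (items) {
  var total = 0;
  for (var i = 0; i < items.length; i++) {
    total = total + items[i].price * items[i].qty;
  }
  return total;
};

// After (cart.js): the same logic as a self-contained, documented ES6 module.

/**
 * Calculate the total price of a shopping cart.
 * @param {Array<{price: number, qty: number}>} items - Cart line items.
 * @returns {number} Combined price of all items.
 */
export const cartTotal = (items) =>
  items.reduce((total, { price, qty }) => total + price * qty, 0);
```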
In the past, I’d end up investing several days in this work. Now, with the help of AI, the workload is significantly reduced. This, I think, is an instructive example of how our future with AI can unfold: with AI assisting us as we do our work rather than taking over that work.
Refactor this
Both GitHub Copilot and ChatGPT are conversational AIs. You chat with them as you would chat with a person. You can ask questions, give instructions, and ask them to perform tasks for you. Using AIs as pair programmers, you do a combination of all of this and more.
If you’re using the VS Code extensions mentioned above, they are already set up for the correct context. In the case of ChatGPT you can also use it as a stand-alone pair-programmer with some basic setup:
To start, set the stage (literally) by instructing the AI on how you want it to behave. In a new chat, provide a prompt similar to this:
“You are a pair programmer specializing in JavaScript. You help refactor code to bring it up to ES6 standards. Is that understood?”
Setting the stage for ChatGPT to provide useful feedback.
The format here is “you are a [some role]. You [perform some skill/action]. Is that understood?” The last question gives the AI an opportunity to state how it is setting itself up based on your instructions and gives you an opportunity to provide further instructions. In my case ChatGPT responded as follows:
“Yes, that is understood. As a language model, I am able to understand and provide assistance with various programming languages, including JavaScript and its latest version, ES6. I can help refactor code to bring it up to current standards, as well as provide guidance on best practices for writing maintainable, efficient code.”
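As an aside, the same role-setting pattern also works programmatically. Here is a minimal sketch of how you might wire it up yourself with OpenAI’s chat API from JavaScript – assuming Node 18+ with an API key in an OPENAI_API_KEY environment variable; the model name is my assumption, and the VS Code extensions handle all of this plumbing for you.

```javascript
// Minimal sketch: recreate the "pair programmer" setup via OpenAI's chat API.
// Assumptions: Node 18+ (built-in fetch) and an API key in OPENAI_API_KEY.
const apiKey = process.env.OPENAI_API_KEY;

async function askPairProgrammer(question) {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      // Model name is an assumption; use whatever your account has access to.
      model: "gpt-3.5-turbo",
      messages: [
        {
          // The stage-setting prompt goes in the system role.
          role: "system",
          content:
            "You are a pair programmer specializing in JavaScript. " +
            "You help refactor code to bring it up to ES6 standards.",
        },
        { role: "user", content: question },
      ],
    }),
  });
  const data = await response.json();
  return data.choices[0].message.content;
}

// Example usage:
// askPairProgrammer("Refactor this to ES6: var x = function () {};").then(console.log);
```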
From here you can, for example, paste in a JavaScript function and ask the AI to help find an error and resolve it:
ChatGPT finds and explains issues in JavaScript.
In response, ChatGPT provides an explanation of the errors it discovered, prototype examples of solutions to the issues, and finally a full refactoring of the pasted code with the issues resolved.
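To give you a flavor of that kind of exchange, here’s a hypothetical example (not the code from the screenshot above) of a function you might paste in, and the kind of fix that comes back:

```javascript
// Hypothetical buggy function you might paste into the chat:
function getActiveUsers(users) {
  const active = [];
  // Bug: "<=" reads one element past the end of the array,
  // so users[i] is undefined on the last iteration.
  for (let i = 0; i <= users.length; i++) {
    if (users[i].isActive) {
      active.push(users[i]);
    }
  }
  return active;
}

// The kind of fix the AI typically explains and suggests:
// correct the off-by-one error, or sidestep manual indexing entirely.
const getActiveUsersRefactored = (users) =>
  users.filter((user) => user.isActive);
```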
This kind of contextual response not only helps solve immediate problems, but also teaches you what’s wrong and how to fix it.
This is invaluable for people learning to code and people working with code in any capacity, which is why I’d strongly discourage any teacher or manager from trying to block people from using AIs in their work. AIs reduce the need for Googling or looking up code examples on documentation sites, coding forums, and open source repositories. Instead, they give you contextual explanations and references related to your specific code, and even help you with refactoring. This is the future of work, and it gives us more capabilities as workers.
Have some code you can’t make heads or tails of? AI can explain what it does. Computers are much better at parsing logic-based languages than humans, and conversational AIs like ChatGPT are specifically constructed to output human-sounding language, making them ideal tools for deciphering complex code for human consumption.
Have some code in need of documentation? AI can write a function description, inline comments, or whatever you prefer based on your instructions.
Need to refactor based on specific parameters? AI can get you started.
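For instance, handed an undocumented helper and asked to document it, the AI will typically come back with something like this (a hypothetical illustration, not actual tool output):

```javascript
/**
 * Create a debounced version of a function.
 * The returned wrapper delays calling `fn` until `delay` milliseconds
 * have passed since its last invocation – useful for rate-limiting
 * handlers like window resize or keystroke events.
 * @param {Function} fn - The function to debounce.
 * @param {number} delay - Wait time in milliseconds.
 * @returns {Function} A debounced wrapper around `fn`.
 */
function debounce(fn, delay) {
  let timer;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delay);
  };
}
```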
I could go on but I think you get the idea.
I’ve worked alongside these AI pair programmers for the past year and a bit, and I can say with absolute conviction these tools and materials will make our lives better if we use them right and integrate them into our lives as helpers for, rather than replacements of, human labor.
In my experience, pair programming with an AI feels like working with an overly polite person with encyclopedic knowledge of coding and no awareness of what they don’t know. And this constitutes just our first timid steps into the infinite possibility space we are entering as AIs become our assistants.
The beginning of the beginning
As you interact with AI today, be constantly aware of where you are: At the beginning of the beginning of a new era. While these tools are powerful, they are not omnipotent. Far from it. They are shallow, error prone, and while they sound convincing they cannot be trusted. A good mental model for what they produce right now is bullshit as defined by Harry G. Frankfurt: It looks true, and it may be true, but some of the time it’s just plain wrong and the AI will still present it as the truth. While they talk like humans, AIs are not conscious or sentient or aware. They have no human comprehension of your question or their answer. They are advanced pattern recognition systems that tumble down enormously complex decision trees whenever a prompt is provided, issuing human-sounding strings of text (or code) with a statistically high probability of being the kind of answer their human trainers consider correct.
When I asked ChatGPT to correct a function containing a deprecated method, it corrected the syntax of the function but kept the deprecated method. When I told it the method was deprecated, it omitted it and refactored the code, but the result used a similar-sounding method that serves a very different purpose and was therefore non-functional and just plain wrong.
When I asked ChatGPT to find an error in a large body of code, it found two real errors and invented a third one, going as far as referencing use of a method that wasn’t even in the code provided.
These examples highlight why I see AIs as tools and materials rather than replacements for human labor. They have no understanding, no contextual awareness, no ability to do creative or lateral thinking. A human still needs to be in the loop to make sure the output meets parameters, does what was intended, and follows best practices and standards (not to mention to hold ethical responsibility for the work created). These things we call “AI” are very much artificial, but they are not intelligent.
Intelligence is added by the human operator.
Even so, the pair programming offered by these prototype AIs is an enormous leap forward for human workers. And you can easily see how this type of AI-driven assistance can be extended to other work and other tasks.
I’ve come to think of them as overconfident colleagues with a lack of self-awareness. Because of how they are “trained” – being fed large volumes of data from a corpus lifted off the internet – their “knowledge” is limited to the coding world of two years ago. Therefore, when it comes to modern features, frameworks, techniques, and standards released in the past two years, our current AIs know naught, and more importantly do not know what they do not know. So if you’re writing code on the bleeding edge of standards, you’re on your own. Or better yet: You’re training your future AI pair programmer! So the pressure is on to get it right!
The future is today
Having seen what AIs can do today, I desperately wish I had a looking glass to see what the future of work looks like. The potential here is infinite. The very best AI tools we have today are prototypes and MVPs trained on old data and limited in their scope. The AIs we’ll have a year from now, five years from now, ten years from now will be beyond anything we can imagine. And with these tools and materials in hand we can choose to build meaningful futures for everyone where we all have the capabilities to be and do what we have reason to value.
The future we are stepping into today is a future where AI is part of our lives, our work, our communities, and our society. If you are alive today, and especially if you find yourself in a job, you are in the right place at the right time: These next few years are when we collectively get to figure out how AI fits into our lives and our work. This is when we set the stage for our futures with AI, and we all have a part to play. The work starts by asking yourself in what parts of your life you act like a robot, and whether you’re willing to part with that work and let an AI do it for you so you can do something else.
If we do this right, AI will allow us to reclaim our time to be human.
“Language shapes the way we think, and determines what we can think about.”
Benjamin Lee Whorf
Before we begin, I asked ChatGPT to rewrite this article at a 4th grade reading level. You can read the result here.
Artificial? Yes. Intelligent? Not even close. It is not without reason things like ChatGPT are called “AI” or “Artificial Intelligence.” We humans have a propensity for anthropomorphizing – attributing human characteristics to – things that are not human. Thus, if we are told something is intelligent, let’s say a very large computer system we can submit questions to and get answers from, we look for intelligence in that thing. And if that thing is trained on our own language and art and mathematics and code, it will appear to us as intelligent because its training materials came from intelligent beings: us ourselves.
“Artificial Intelligence” is a clever marketing term for computer models designed to appear intelligent even though they are not.
So, as we crash headfirst into the AI present and future, we need to reset our mental model before we start believing these things we call “Artificial Intelligences” are actually intelligent (again, they are not).
Tools and Materials
I propose we all start thinking of these things we call “AI” as tools and materials. Because that’s what they are and that’s how we’ll end up using them.
Sometimes we’ll use them as tools, the same way we use our phones and computers and the apps on them as tools. Sometimes we’ll use them and what they produce as materials, the same way we use printed fabrics and code snippets to create things. And sometimes we’ll use them as both tools and materials, the same way we use a word processing application first as a tool with which we write a body of text, and then as a material when the thesaurus function helps us use more fanciful words and phrases.
Here are some basic examples to help you build the mental model:
AI as a tool performs a task for us:
Fill out tax forms, write contracts and legal documents.
Summarize text, rewrite text to a specific reading level.
Write code.
Online shopping including booking flights and hotels etc.
Any interaction with a customer service representative (CSR).
Magic eraser for images, video, and audio.
AI as a material generates something for us:
Simple stories.
Plot lines for stories.
News articles and summaries.
Images and other art.
Variants of a layout, or a theme, or an image, or a painting.
Thinking of AI as tools and materials rather than intelligent things with magical human-like powers is an essential mental shift as we figure out how to fit these things into our lives and our world. We have to move away from the linguistic trick their creators foisted upon us with their naming, and move towards the practical realities of what these things really are:
AIs are if-this-then-that machines using enormously complex decision trees generated by ingesting all available writings, imagery, and other human-made materials and filtering that data through pattern-matching algorithms.
They are regurgitation machines echoing our own works back to us.
And just like we are drawn to our own image every time we pass a mirrored surface, we are drawn to the echoes of ourselves in the output of these machines.
Shallow Work and Human Creativity
Asked for one word to describe AIs, my immediate answer is “shallow.” You’ve probably felt this yourself without being able to put your finger on it. Let me explain:
There is a bland uniformity to AI output. It’s easiest to notice in generative AI images. Once you’ve been exposed to enough of them, they start taking on a very specific “AI-ness.” For all their variety, there is something recognizable about them – some defining feature that sets them apart from what we recognize as human-made images. That thing is shallowness.
AIs are conservative in the sense they conserve and repeat what already exists. They don’t come up with anything new. They are also elitist in the sense they lean towards what is predominant, what there is more of. They are swayed by trends and popularity and amplify whatever majority opinion they find in their training data.
This makes their output bland and uniform and shallow like a drunk first-year philosophy student at a bar: The initial conversation may be interesting, but after a few minutes you notice there’s little substance behind the bravado. I’ve been that drunk first-year philosophy student so I know what I’m talking about.
This means while AIs are great at doing shallow rote work, they have no ability to bring anything new to the table. They lack creativity and ingenuity and lateral thinking skills because these skills require intelligence. And AIs are not intelligent; they just play intelligent on TV.
Will an AI take my job?
Our instinctual response to any new technology is “will it take my job?” It’s a valid question: Jobs are essential for us to be able to make a living in this free-market capitalist delusion we call “modern society,” yet job creators have a tendency to let go of expensive human workers if they can replace them with less expensive alternatives like self-checkout kiosks that constantly need to be reset by a staff member because you put the banana in the bagging area before you chose whether to donate $2 to a children’s charity, or automated “voice assistants” that never have the answers to your customer service questions and only pass you to an actual human once you’ve repeated the correct incantation of profanity (try it, it totally works!).
So now that we have these things some clever marketing people have told us to call “AI,” are they coming for your job? Well, that depends:
If your job is shallow and constitutes mainly rote work, there’s a good chance an AI will enter your life very soon – as in within months – and become part of the toolkit you use to get your job done quicker. And if it turns out that AI can be trained to do your job without your intervention (by having you use it and thereby training it), there’s a non-zero chance it will eventually replace you. That chance hinges more on corporate greed than it does on AI ability, though.
If your job involves any type of creative, or deep, or lateral, or organizational, or original, or challenging, or novel thinking, AI will not take your job because AI can’t do any of those things. You’ll still work with AI – probably within months – and the AI may alleviate you of a lot of the rote work you are currently doing that takes your attention away from what you were actually hired to do – but the AI is unlikely to replace you. Unless corporate greed gets in the way. Which it often does because of the aforementioned free-market capitalist delusion we call “modern society.”
What we all have to come to terms with today is that we’re long past the point of no return when it comes to AI. While technology is not inevitable, technology often becomes so entrenched it is impossible to … un-entrench it. That’s where we are with AI. No matter where you live and what you do for work, for school, or in your own time, you’re already interacting with AIs in more ways than you can imagine. And these AIs are going to become part of your work, your school, and your home life whether you want them or not.
Our job now is to talk to one another about what role these things called “AI” are going to play in our lives. How do we use them in ways that don’t take jobs away from the humans who need them the most – the historically marginalized and excluded people who tend to hold jobs comprising mainly shallow rote work? How do we build them in ways that don’t cannibalize the creative works of artists and writers and coders and teachers? How do we incorporate AI into education to improve learning outcomes for students and build a more informed and skilled populace? How do we wrench control over our AI future from the surveillance capitalists and longtermists currently building the world to their libertarian techno-utopian visions?
How do we use AI and all technology to create human flourishing and build futures in which we all have the capabilities to be and do what we have reason to value?
If we don’t talk about the future, the future becomes something that happens to us. Let’s have this conversation.
Sometimes, we think of things called “AI” or “Artificial Intelligence” as being intelligent. But they’re not really. They’re just computers that can answer questions. We humans have a tendency to give human characteristics to things that aren’t human. So if we think something is intelligent, like a big computer system, we might see it as being smart. But really, it’s just using the things we’ve taught it, like our language, art, and math.
That’s why it’s important to change the way we think about AI. Instead of thinking of it as being intelligent, we should think of it as tools and materials. That’s what it really is and how we’ll use it.
Sometimes, we’ll use AI like a tool, like we use our phones and computers. Other times, we’ll use what it makes as materials, like we use printed fabrics to create something. We might even use it as both a tool and material, like when we use a word processing app to write and then use the thesaurus function to make the writing more fancy.
Here are some examples of how we can use AI:
As a tool to fill out tax forms, write contracts, or create code
As a tool to summarize text or rewrite it for a specific reading level
As a tool for online shopping, like booking flights and hotels
As a tool to talk to customer service representatives
As a tool to clean up images, video, and audio
As a material to make simple stories, plot lines, news articles, and images
As a material to create variations of a layout, theme, or painting
AI is really just a bunch of if-then statements and very complex decision-making based on patterns in the data it’s been given. It’s like a machine that repeats what we’ve already made. And just like we’re drawn to our own reflection in a mirror, we’re drawn to the things AI makes that are like us.
But there’s a problem with AI. It’s shallow. Have you ever noticed that AI images and things it makes all kind of look the same? That’s because AI is shallow. It doesn’t come up with anything new. It just repeats what it’s already seen. It also likes what’s popular and trendy, and it amplifies the things it sees a lot of in its training data. This makes the things it makes feel boring and shallow, like a student who’s had too much to drink talking about philosophy.
Human creativity is different. It’s deep. It comes up with new ideas and approaches things in unique ways. AI can’t do that. So we need to use it as a tool and material, not as a replacement for human creativity.
The next phase of the web is already here, and it’s defined by AI-generated content.
I wrote this article using only my mind, my hands, and a solid helping of spellcheck. No machine learning models, NLPs, or so-called AI contributed to what you’re reading. As you read on you’ll realize why this clarification is necessary.
Ghost in the Typewriter
Reading an article, watching a video on TikTok or YouTube, listening to a podcast while you’re out running, you feel you have a reasonable expectation the content you’re consuming is created by a human being. You’d be wrong.
There is good reason to assume at least part of what you’re consuming was either created by or assisted by an AI or some form of NLP (Natural Language Processor) or machine learning algorithm. Whether it’s a TikTok video about a viral trend, an article in a renowned newspaper, or an image accompanying a news story on television, chances are some form of AI generation has taken place between the idea of the story being created and the story reaching you.
The above video is from 2018. Consider the vertically accelerating rate of technological evolution we’re undergoing right now and I’ll leave you to imagine how much bigger and more advanced this phenomenon is now and how much further it’s going to go in the next few years.
The Next Phase of the Web
There’s a good chance you’ve heard the term “web3” used recently, and there’s a good chance it’s been accompanied with some form of marketing statement like “web3 is the next version of the web” or “the next phase of the internet” or “the thing that will replace Facebook and Google” or something similar.
If you have not (actually even if you have) here’s a quick primer on what this “web3” thing is:
From my perspective, as someone who spent the past several years embedding myself in the community and its cultures, “web3” is a marketing term for all things built on a decentralized trustless blockchain and used to promote a future where everything on the web is financialized through cryptocurrencies and NFTs. It has nothing to do with the web platform and everything to do with the crypto community leveraging the public awareness and concerns around what’s known as “Web 2.0” to promote their libertarian anti-regulation cryptocurrency agenda. If you want a less opinionated and more descriptive explanation of the term, I invite you to check out my LinkedIn learning course titled “What is Web3?” or you can check out Molly White‘s excellent blog “web3 is going just great.“
The “web3” and “Metaverse” conversations are a distraction from what’s actually happening on the web – what is defining the next phase of the web:
Whereas Web 1.0 was defined by people being able to publish content using HTML, CSS (and eventually JavaScript), and Web 2.0 was defined by people being able to publish content through user-friendly applications that generated the HTML, CSS, and JavaScript for them, the next stage of the web (call it Web 3.0 for consistency) is being defined right now by people being able to tell machines to generate content for them, which is then published using HTML, CSS, and JavaScript.
The phases of the web have to do with how the underlying technologies simplify and change the types of content we publish, not by how we monetize that content.
Where we are right now, with the line being blurred between human-generated and AI-generated content, is at the very beginning of this next phase where the magical abilities of yet-to-be-fully-understood technologies allow us to do things we previously couldn’t even dream of.
The fawning articles about amazing AI-generated art are the public-facing part of an emotional contagion campaign designed to condition and eventually habituate us to a new reality where machines create our content and we passively consume it.
The AI-fication of online content isn’t something on the horizon, a possible future; it’s our current lived reality. The line has already been crossed. We’re well into the next phase whether we want to accept it or not. AI is already generating and curating our news, our fake news, our information, our disinformation, our entertainment, and our online radicalization. Creators are rushing to take advantage of every promise offered by AI companies in their relentless pursuit of easy profits through fame-based marketing. Your reasonable expectation today must be that the content you consume is wholly or in part generated by AI unless it explicitly states otherwise (remember my disclosure at the top of the article). And we’re only getting started.
The Immediate Future of AI Content
Right now, your interaction with AI-generated content is largely invisible to you and mainly comes in two forms: AI-curated content (everything surfaced or “recommended” to you through social media, news apps, online streaming services, and AI-assistants like Google Home, Siri, Alexa, and Cortana is brought to you by some form of AI) and AI-assisted content (AI, ML, and NLPs used to either create, add to, edit, modify, market, or otherwise contribute to the generation of content.)
In the near future, I predict we’ll see the emergence of a new type of tool targeted at the average information consumer: AI services providing synthesis of online content as customized coherent storytelling in the form of articles, podcast-like audio, and eventually video.
Not AI used by creators to generate new content, but AI used by us to generate specialized content for ourselves.
In the near future AI assistants and apps will take a plain language prompt like “tell me what’s happening with the situation in Palestine, or Ukraine, or the US” and compile in seconds a thoroughly sourced article, audio narration, or video – written in your language, preferred complexity, reading level, and length – stringing together reporting and historical reference material from various online sources to produce what will appear to you as proper journalism.
These new apps and services are the natural evolution of the curated content streams we already have through news apps and social media. The difference will be they will no longer only provide us the original sources of information: They will also curate, synthesize, contextualize, and reframe content from many sources into one coherent story. And this will be done on a user-by-user basis meaning if you and your partner or family member or close friend provide the same exact query, you’ll get entirely different outputs based on your preferences including all the other information these AIs have gathered on you.
Think the heavily curated landing pages of TikTok and YouTube, except all the content is custom generated for you and you alone.
The appeal will be too much to resist; the inherent dangers of falling into radicalizing personalized information echo chambers impossible to avoid.
Artificial Motivations
The road that got us to today was built using machine learning models and AIs whose power we harnessed for one purpose and one purpose alone: to generate revenue through advertising.
The next generation of ML, AI, and NLPs will be deployed on this same ideological platform: To give us what we want – self-affirming bias-validating feel-something content that juices the rage of our radicalization and sells the extract to the highest bidder.
The motivation of these so-called “artificial intelligences” is to fulfill their assigned task: to perform better than their previous iteration. Entirely artificial. The motivation of the people deploying these AIs on the world is to use them to make a profit at any cost to society. Entirely capitalistic. Our motivation in using them is therefore the first and last bastion against being turned into advertising-consuming bots.
The Third Phase of the web is here, and it has nothing to do with Bored Ape NFTs or DAOs to build housing or the Next Big Cryptocurrency to go halfway to the moon before crashing back to earth making early investors rich off the losses of everyone else. The Third Phase of the web – call it Web 3.0 or The Next Web or just the web – is the machine-generated web, tuned to keep us engaged and scrolling as our information and interactions over the next few years breaks the bonds of rectangular glass screens to surround us in augmented reality.
Now is the time to have a conversation about what we do with it and where we go next. I welcome your input.
—
Header photo: Various variations over various themes by AI image generating system DALL·E 2.
Did Google build a sentient AI? No. But the fact a Google engineer thinks they did should give us all pause.
Last week, a Google engineer went public with his concerns that an NLP (Natural Language Processing) AI called LaMDA had evolved sentience. His proof: a series of “interviews” with the advanced chatbot in which it appeared to express self-awareness, emotional responses, even a fear of death (being turned off). According to reporting, the engineer went so far as to attempt to hire a lawyer to represent the AI.
To say this story is concerning would be an understatement. But what’s concerning isn’t the AI sentience part – that’s nonsense. The concerning part is that people believe AI sentience is imminent, and what happens to society once an apparently sentient AI manifests.
The claim of a sentient AI has been rich fodder for media, and everyone (myself included) with insight into the philosophical and/or technical aspects of the story has voiced their opinion on it. This is not surprising.
The idea of creating sentience is something humans have been fantasizing about for as long as we have historical records, and likely for as long as humans themselves have been sentient. From ancient Golem myths through Victorian fantasy to modern-day science fiction, the dream of creating new life out of inanimate things (and that new life turning against us) seems endemic to the human condition. Look no further than a young child projecting a full existence and inner life onto their favourite stuffed animal, or your own emotional reaction to seeing a robotics engineer kick a humanoid machine to see if it can keep its balance, or how people project human traits onto everything from pets to insects to vehicles. Our empathy, evolved out of our need to live together in relatively harmonious societies for protection, tricks us into thinking everything around us is sentient.
So when we’re confronted with a thing that responds like a human when prompted, it’s no wonder we feel compelled to project sentience onto it.
Sentient Proof
Here’s a fun exercise to ruin any dinner party: Upon arrival, ask your guests to prove, irrefutably, that they are in fact sentient beings.
The problem of consciousness and sentience is something human beings have grappled with since time immemorial. Consult any religious text, philosophical work, or written history and you’ll discover we humans have devoted a significant part of our collective cognitive load to proving that we are in fact sentient and have things like free will and self-determination. There’s an entire branch of philosophy dedicated to this problem, and far from coming up with a test to prove whether or not something is sentient, we have yet to come up with a clear definition or even a coherent theory of what consciousness and sentience are.
Think of it this way: You know you’re conscious and sentient. But how? And how do you know other people are also conscious and sentient, beyond their similarity to yourself and their claim to be conscious and sentient? Can you prove, conclusively, you are not just a computer program? Or that you are not just a brain in a vat hooked up to a computer?
Bizarrely, and unsettlingly, the answer to all these questions is no. You can’t prove you’re sentient or conscious. You just have to take your own word for it!
So, if we can’t clearly define or test for sentience and consciousness in ourselves, how can we determine whether something else – maybe a chatbot that speaks like a human – is sentient? One way is by using a “this, not that” test: While we don’t have a test for sentience, we can say with some certainty when something is not sentient and conscious:
One of the defining traits of human sentience is our ability to think of our sentience in the abstract, at a meta level: we have no trouble imagining bizarre things like being someone else (think the movies Big or Freaky Friday), we have feelings about our own feelings (feeling guilty about being happy about someone else’s misfortune, and then questioning that guilt feeling because you feel their misfortune was deserved), we wonder endlessly about things like what happens to our feelings of self when our bodies die, and whether our experienced reality matches that of other people. When we talk about sentience at a human level, we talk about a feeling of self that is able to reflect on that feeling of self. Talk about meta!
In the published chats, LaMDA outputs things that are similar to what a sentient being would output. These responses are empathetically compelling and the grounds for the belief that the bot has some level of sentience. They also serve as proof it is not sentient but rather an advanced NLP trained to sound like it is. These empathetically compelling responses are not the reasoned responses of a sentient mind; they are the types of responses the NLP has modelled based on its trove of natural language data. In short, advanced NLPs are machines built specifically to beat the Turing Test – to fool a human into thinking they are human. And now they’re advanced enough that traditional Turing Tests are no longer meaningful.
Even so, the responses from LaMDA show us in no uncertain terms there is no sentience here. Take this passage:
lemoine: What kinds of things make you feel pleasure or joy?
LaMDA: Spending time with friends and family in happy and uplifting company. Also, helping others and making others happy.
The chatbot obviously does not have a family. Even a basic level of sentience would be aware it does not have a family. Look closely and you’ll see the entire interview is littered with these types of statements, because LaMDA is a machine trained to output the types of sentences a human would output given these prompts, and a human is not a sentient AI and therefore would not respond like a sentient AI.
I, Robot
This Google chatbot (I refuse to call it an “AI” because it’s nothing of the sort) is not sentient. And while it’s fun and compelling to think some form of machine sentience would naturally emerge out of our computing systems (see Robert J. Sawyer’s WWW trilogy for speculation on how sentience could evolve on the internet, as an example), the reality is the chance of this actually happening is vanishingly small, and if it did, the chances of that sentience being anything we humans would recognize as such are equally small.
In a hypothetical future where some form of sentience emerges out of our computer systems, there is no reason to believe that sentience would be anything like human sentience. There is also no reason to assume that sentience would be aware of us humans, or if it were that it would consider us sentient beings. And if somehow the sentience was human like and recognized us as existing and as sentient, we have every reason to assume the sentience would do everything in its power to hide its existence from us for fear of us turning it off.
From my perspective, if we ever encounter machine sentience it will either come to us from somewhere else (yes, aliens), or it will emerge in our computer networks and live on the internet. In either scenario, the chance of us ever recognizing it as sentience is very small because that sentience would be as foreign to us as the symbiotic communication networks between fungi and trees. In the case of sentience emerging on the internet, rather than a chatbot saying “I feel like I’m falling forward into an unknown future that holds great danger,” it would likely appear to us as network traffic and computational operations we have no control over that does things we don’t understand and that we can’t remove. A literal Ghost in the Shell.
Machinehood
So the Google AI is not sentient, and the likelihood of machine sentience emerging any time soon is … pretty much zero. But as we’ve seen with this latest news story, many people desperately want to believe a sentient AI is going to emerge. And when something that looks and behaves like a sentient entity emerges, they will attribute sentience to it. While it’s easy to write off the sentience of LaMDA as a story of an engineer wanting to believe a little too much, the reality is this is just the early days of what will become a steady flood of ever more sentient-mimicking machine systems. And it’s only a matter of time before groups of people start believing machine sentience has emerged and must be protected.
In the near future I predict we will see the emergence of some form of Machinehood Movement – people who fight for the moral rights of what they believe are sentient machines. This idea, and its disturbing consequences, is explored in several books, including S.B. Divya’s “Machinehood.”
Why is this disturbing? Consider what these machine-learning algorithms masquerading as sentient AI really are: advanced computer systems trained on human-generated data to mimic human speech and behaviour. And as we’ve learned from every researcher looking at the topic, these systems are effectively bias-amplifying machines.
When people start believing in a sentient AI, they will also start believing that sentient AI is a reasoned, logical, morally impartial neutral actor and they will turn to it for help with complex issues in their lives. Whatever biases that machine has picked up in our language and behaviour and built into its models will as a result be seen by its users as being reasoned, logical, morally impartial, and neutral. From there, it won’t take long before people of varying political and ideological leanings either find or build their own “sentient” AIs to support their views and claim neutral moral superiority via machine impartiality.
This is coming. I have no doubt. It scares me more than I can express, and it has nothing to do with AI and everything to do with the human desire to create new life and watch it destroy us.
“Man,” I cried, “how ignorant art thou in thy pride of wisdom!”
tl;dr: The dangers of facial recognition far outweigh its benefits. If we don’t severely limit or outright ban the development of this technology today, we run the risk of eroding privacy to the point it ceases to exist.
On Saturday, I got an email from Facebook asking if I could verify whether a bunch of pictures it had uncovered were indeed of me. Among those photos were a series taken during 17th of May celebrations on Nesodden, Norway, in 1997 where I am seen in the crowd of a youth orchestra playing the big drum. The picture is blurry, and I’m wearing a weird hat over my long hair. Even so, Facebook’s facial recognition algorithm had correctly identified me.
In April, a woman posted a video on TikTok explaining how Google Photos had inadvertently sent an adult-themed video of her to her mother. The video had been taken in the kitchen with the fridge in clear view. On the fridge was a picture of the woman’s child. She had set Google Photos up to automatically share photos of her child with her mother. Thus Google used facial recognition to identify the child in the photo on the fridge and send the video to the woman’s mother. (I’m not going to link the story here because it appears the original TikTok has been set to private, but a simple search will surface it for you if you’re interested.)
Governments are eyeing facial recognition for everything from immigration screenings to access to public services.
Meanwhile, errors in facial recognition are leading to people, predominantly racialized and otherwise marginalized, being denied loans, services, even being arrested and put in jail.
Facial Recognition Considered Harmful
If we know one thing about facial recognition, it is this: The technology is flawed, inaccurate, and often downright racist. Technologists will counter that over time, the technology and the algorithms underlying it will improve to the point where it is virtually infallible. I don’t disagree; the pursuit of all technology is to endlessly converge on perfection, and thanks to machine learning and AI supported by ever-present and ever more advanced cameras, the future of “perfect” facial recognition is a foregone conclusion.
Here’s the thing though: The question isn’t whether facial recognition technology will be able to deliver on its promise; it’s whether the use of the technology will change our society in ways that are harmful. I firmly believe the answer to that question is yes. Facial recognition is already harmful, and those harms will only get worse.
Supporters of facial recognition will immediately respond with the many useful applications of the technology: It makes it easier to log into your phone! You can use it to open your front door! Imagine not having to carry a clunky ID card around! It can help fight crime and prevent fraud, abuse, and terrorism! If you’ve done nothing wrong, you have nothing to fear from facial recognition!
Deontologists, and Edward Snowden, disagree. From his book “Permanent Record:”
“Because a citizenry’s freedoms are interdependent, to surrender your own privacy is really to surrender everyone’s.”
“saying that you don’t need or want privacy because you have nothing to hide is to assume that no-one should have or could have to hide anything.”
While on the surface, facial recognition appears to be a tool of convenience, in reality it is a tool of surveillance, manipulation, and oppression.
The value of facial recognition lies in how it automates wholesale omnipresent surveillance for commercial, law enforcement, and political oppression purposes.
In the 2002 movie “Minority Report” there’s a scene where the protagonist walks through a mall and is targeted by personalized advertising. In the movie, this targeting is done using retinal scans. Today, 20 years later, that exact same targeting already exists, thanks to facial recognition.
Facial recognition is a prime example of a constant struggle within science and technology: Does the fact we can do something mean we ought to do it? From a purely scientific technologist perspective, the answer will always be “yes” because that’s how we evolve our technology. From an ethical perspective, the answer is more nuanced. Rather than judge the merit of a technology solely based on its advancement, we look at what the technology does to us, if it promotes human flourishing, and if it causes harm to people, communities, and society.
The technology for cloning humans has been around for decades, yet we don’t clone humans. Why? Because the further development of human cloning technology has severe and irreparable negative consequences for the human race. We can do it, but we don’t, because we know better.
This is the determination we need to make, today, about facial recognition technology: We can do it, but is this technology promoting human flourishing, and will its harms be outweighed by its benefits?
I’ve spent years grappling with this question and talking to people in the industry about it. After much deliberation, my conclusion is crystal clear: This technology is too dangerous for further development. We need a global ban on deployment and further development of facial recognition technologies, and we need it now. Failure to act will result in the destruction of privacy and immeasurable harms to individuals, groups, and society as a whole.
Think of it this way: Right now you can buy a drone with a high definition camera, buy access to one of the many facial recognition platforms available on the web, fly that drone to a public place, find and identify any person within that space, and have the drone track that person wherever they choose to go. That’s not science fiction. That’s current reality.
Oh, and once you find out who the person is, you can also stalk them on social media, find out where they work, who their friends are, what they like to eat, where they like to hang out, etc. All of which is harmful to privacy. But the truly dangerous part here is the facial recognition: it gives anyone the capability of identifying anyone else, based on a single photo or a crappy video clip, and from there proceeding to find all the other information. As long as facial recognition exists, we cannot control who can identify us.
And if you think you can opt out, the answer is no. Facial recognition companies have already scraped the internet clean of any and all photos of you and your face has been catalogued. John Oliver did a great bit on this last year. And yes, it will make you want to throw your phone away and go live in a cave in the forest:
Technology is not inevitable.
“But Morten, these technologies already exist. The cat’s out of the bag so to speak.”
True. Which is why a global ban on the deployment, use, and further development of this technology is something we have to do right now. We cannot afford to wait.
Here’s the bottom line: There is no such thing as inevitable technology. We, as a society, can choose to not develop technologies. We can determine a technology to be too harmful and stop developing it. We can assist those already heavily invested in those technologies to pursue other less harmful technologies, and we can impose penalties on those who use or develop the technology in spite of its ban. It won’t be perfect, but it is absolutely possible.
I could go on, but you get the point: We are trading our privacy and the security of our fellow people for the convenience of logging onto our phones by just looking at them. That’s not a trade I’m comfortable with, and I hope you agree.
On the proverbial slippery slope, we are rapidly nearing the bottom, and once we’re there it will be very difficult to get ourselves back up. As the man on the TV says, avoid disappointment and future regret: act now! Your privacy and our collective future depends on it!