“An AI may not be used to harm humanity, or, by not being used, allow humanity to come to harm.”
As Artificial Intelligence (AI) systems like #ChatGPT enter our lives and our work, we need some basic guidelines for how we implement them going forward. Here’s a place to start:
The Zeroth Law of AI Implementation:
An AI may not be used to harm humanity, or, by not being used, allow humanity to come to harm. Implement AI in such a way that you treat humanity, whether in your own person or in the person of any other, never merely as a means to an end, but always at the same time as an end, and treat AI always as a means to an end and never as an end in itself.
My proposed Zeroth Law of AI Implementation aims to ground us in a shared purpose and vision for how we build our future with AI. It sets forth four basic principles:
Do No Harm* with AI.
Harm can be caused by having a tool and refusing to use it or otherwise limiting its use. For example, harm can be caused by limiting access or capability based on factors including socio-economic status, geography, disability, etc.
Humans are always ends in themselves, and must never be treated merely as means to an end.
AIs are always means to (human) ends, and must never be ends in themselves.
* We need a clear definition of what “harm humanity” means, including 1) who gets to determine what constitutes harm, 2) who can be harmed, and 3) who adjudicates whether harm has been caused.
The goal of technology is to build futures where humans have the capabilities to be and do what they have reason to value. AI technology presents an infinite possibility space for us to make that happen. AI technology also gives us the potential to limit our own capabilities and cause real harm.
These statements seem self-evident, yet when technology is developed and implemented, we often forget its core purpose as we are blinded by the capabilities of the technology (it becomes an end in itself), by the power it affords us (those with access gain powers not afforded to others), and by the financial opportunities it affords us (using technology to turn humans into means for financial ends).
Grounding ourselves in a common principle like this proposed Zeroth Law of AI Implementation reminds us of the core purpose of technological advancement. It gives us a reference point we can use to evaluate and challenge our decisions, and a shared foundational platform we can build further principles on.
Right now, at the start of our future with AI, we have a unique opportunity to talk about where we want to go next and how we want to get there. That conversation starts with talking about core principles. The Zeroth Law of AI Implementation is my contribution to this conversation. I’d love to hear yours!
While the book doesn’t teach you how to be perfect, you’ll be a better person for reading it.
If ever I teach an intro to moral philosophy class, this book will be prerequisite reading. Sold as a fun book about ethics from the creator of the TV show “The Good Place,” this is actually a solid introduction to the academic subject of ethics, sprinkled with humour and real-life anecdotes to make it relatable.
“How To Be Perfect” is a semi-autobiographical story about a TV writer who goes on a journey through moral philosophy to try to figure out how to be a better person – and, maybe more importantly, how to teach his young children how to be the best they can be. Not to spoil anything, but at the end of the book there’s an entire section where the author talks to his kids about how to be good people, and it is wonderful.
The book introduces a variety of branches of moral philosophy with questions like “Should I lie and tell my friend I like her ugly shirt?” and “Do I have to return my shopping cart to the shopping cart rack thingy?” and “Should I punch my friend in the face for no reason?” And this is where the book truly shines: It succeeds at framing real moral problems in a comedic yet relatable way and introducing ethics to people in a way that actually makes practical sense to them.
Something we all need more of.
I suggested “How To Be Perfect” to my design ethics book club as a light read for the holidays. Two chapters in I dreaded the comments I’d get from my friends. “Light read? I bet Kant would have some opinions on passing off a textbook as an enjoyable holiday treat!” Then I continued reading and realized I’d sold my friends and the book short.
“How To Be Perfect” is an imperfect but damn fine effort at making the exceptionally challenging and often mind-numbingly turgid topic of ethics and moral philosophy fun and engaging. If you’re interested in ethics at all, and you’ve wondered where to start or worried it would be either too boring or too depressing, I recommend this book. In fact I recommend this book, period. And I’m not just saying that because I am a philosopher by education and deeply fascinated by ethics.
This book sets out to do something moral philosophy sorely needs: Make ethics make sense, in a human and relatable way. Moral philosophy has a bad tendency of being at the same time overbearingly moralistic (“here’s how you’re doing everything wrong in your life, and here are some impossible standards you must follow to right yourself!”), philosophically partisan (“my form of ethics, in my specific interpretation, is the only real ethics. All other ethics are wrong!”), and fundamentally unrelatable (“Act only according to that maxim whereby you can, at the same time, will that it should become a universal law.”) Michael Schur tries (and mostly succeeds) to balance on a knife’s edge: staying true to the academic foundations of moral philosophy while framing the many theories covered in real-world scenarios, funny anecdotes, personal experiences, and a heavy helping of yelling through a bullhorn at the ivory towers of academic philosophy.
This last point is probably best exemplified in the chapter on charity, where Schur points out how moral philosophers of different traditions will contort themselves into Gordian knots over the moral failings of massively wealthy people using charitable giving as a self-congratulatory popularity contest, while in the real world the money they raise actually does some good.
Schur also does something extraordinary in the book: He tries (and I sincerely hope he succeeds!) to introduce a new term both to philosophy and to our common language: “Moral Exhaustion.” Let me quote from the book:
“even if we scale the triple-peaked mountain of Daily Stress, Serious Problems, and Circumstance, and (running on 5 percent battery power) try our very best to do the harder/better thing, we often fail miserably despite our best intentions. It. Is. Exhausting.”
Michael Schur, How To Be Perfect
I think moral exhaustion is a great description of the malaise we are all feeling in our lives and our work today, and I’m now using the term freely in my everyday language thanks to this book.
One major problem with moral philosophy (aka ethics) – and I say this as someone who studied moral philosophy for years at university – is its detachment from the real world and its separation into distinct traditions. You are either a Utilitarian or a Deontologist, a Virtue ethicist or a Contractualist, and whatever position you hold, you must defend your tradition against the others. (I am oversimplifying here, but this is a real struggle. Call it trauma from years of being an analytical philosopher taught by a faculty almost entirely composed of Kantians.) Through the book, Schur attempts to line up these and other moral philosophy traditions and theories and thread a needle straight through them to show that rather than treating ethics as One Theory to Rule Them All, we are best served with an Everything, Everywhere, All At Once approach to our decision making.
As an introduction to ethics and moral philosophy, “How To Be Perfect” does a good job introducing the main branches of western philosophy (Virtue, Duty, and Consequentialist ethics), newer traditions like Contractualism, and even non-western traditions including Ubuntu and Buddhist ideas. This breadth stems from the impressive research Schur did while writing the TV comedy show “The Good Place,” which in reality is a covert psy-op to secretly educate people about ethics by making ethics fun.
Side note: Watching “The Good Place” I would typically at least once in every episode jump up and yell “ARE YOU KIDDING ME?!?!?!? They are doing a WHOLE EPISODE on [insert obscure moral philosophy thing]???!?!?!” To which my wife of endless patience would say “Sit down and watch the show.” Point being that show was astounding and if you haven’t watched it, I cannot recommend it enough. Because it is hilarious. And well written. And exceptionally acted. And also, it contextualizes ethics in a way that just makes sense.
Another side note: I recommend getting the audiobook version of this book. It is narrated by the author and the entire leading cast of “The Good Place,” with snarky footnotes from the book’s academic advisor Todd May and even occasional cameos.
That said, in my opinion he makes two significant blunders – one historical and one a lack of foresight:
He writes off Heidegger’s works due to their impenetrability and his much discussed association with Nazism, ignoring the enormous impact Heidegger had on moral and other philosophy. As one of the members of my book club said “I wish he (Schur) would go beyond just hints and snarky remarks to actually explain why he sidesteps Heidegger. I felt like he was making excuses for not reading the work.”
Schur also spends a fair bit of time towards the end of the book celebrating the works of Peter Singer and the effective altruism movement he helped inspire. Anyone paying attention to the collapse of crypto and the bizarre politics driving many Silicon Valley founders will know Singer’s ideals have become a breeding ground for … let’s call them problematic ideas from white men of enormous wealth and power about how we should structure and organize our society today to protect the people of tomorrow. I can’t help but think that had “How To Be Perfect” been written in 2022, that entire section of the book would have been very different. In honesty, my critique on this point is itself an anachronism.
Let me be perfectly clear here: I consider these issues minor to the point of being irrelevant. This book is not an academic textbook, it’s a deeply personal book about morals and ethics that tries to do right by the subject matter and the reader and succeeds more than any similar book I’ve ever read.
If you’re still with me at this point, you’re definitely the type of person who will enjoy this book, so go out and get it in whatever format you prefer. If on the other hand you are looking for a book to give to your friend who refuses to return their shopping cart to the shopping cart shed thingy, or to subtly tell your family member that it’s not OK to tell people their shirt is ugly even if it is, chances are it’ll be a nice decoration on a shelf and will eventually end up in a donation box. “How To Be Perfect” is not light reading for an airplane ride, in spite of how it’s marketed. It is so much more, and because of this it demands much more from the reader. Just like real life demands so much more from us all. And that is why this book is worth reading.
I have a confession to make: My university degree sits atop a mountain of lies I told to mask my dyslexia. Now AI is positioned to make education more accessible so future students don’t have to lie to get where they need to go.
I read maybe a quarter of the required materials for my university studies. My term papers are filled with quotes from books I never even opened. I became a master at convincing my fellow students to give me summaries, quick explainers, and relevant quotes from materials I knew I’d never be able to finish in time to meet the inaccessible demands of academia. And after 5 years and a degree, I abandoned my hopes for a graduate degree in philosophy to pursue other avenues where my reading disability was not a constant blocker.
Today, as we stand at the beginning of a new era of computing, one shining beacon in the infinite possibility space of AI is making education more accessible.
AI can make education more accessible today
Here are a few ideas for how we can implement existing AI tools in education right now to dramatically improve accessibility:
AI audiobooks on demand: AI can generate natural-sounding audiobooks from any written text. Audiobooks of academic texts are hard to come by and prohibitively expensive. AI can solve that problem and allow the reader to choose their preferred reading modality. This tech already exists (Apple has shipped it).
AI translations to any language: AI models are very good at translating text from one language to another. This means academic texts written in one language can now be accessible in any language. Again, the technology already exists in the form of Google Translate etc.
AI reading level adjustment: You know that “explain it to me like I’m five” meme? AI language models like ChatGPT can do that while ensuring the salient points and meaning of the text are preserved. Academic texts are often superfluously arcane and turgid. I predict in the near future we’ll have browser plugins and other AI-powered services where you can set the reading level and writing style of any text to your preference while preserving its meaning (see the sketch after this list).
AI summaries on demand: Using summaries of long texts to reinforce learning has a long tradition in academia. There’s a lucrative industry and pop culture mythology around services like Coles Notes and CliffsNotes. AI can be used to generate custom summaries from any text, large or small, to make it more accessible to people like me who can’t read the whole thing.
AI assessments, flashcards, and other learning tools: Dump any text into an AI and ask it to generate assessments, flashcards, questions, examples, or other things. The possibilities here are limitless.
AI auto-captions and transcripts of audio, video, and in-person events: Five years ago, captions were an expensive nice-to-have very few could afford or cared about. Today, auto-captions are available on every LinkedIn and YouTube video, and most platforms also provide verbose transcripts on the fly. These same technologies are used for live captioning in video chat apps like Teams, and can be used at live events including in classrooms. The technology is available, and quite frankly I can’t think of any good reason this tech is not immediately implemented across all educational campuses worldwide to provide improved accessibility. Imagine knowing you will have a transcript at the end of every lecture so you can focus on understanding what is being taught instead of just writing it down!
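To make the reading-level idea concrete, here is a minimal sketch of what such a service could look like under the hood, assuming the openai Python package and an API key. The model name, prompt wording, and function name are mine, for illustration only:

```python
# Minimal sketch: on-demand reading-level adjustment via a language model.
# Assumes `pip install openai` and a valid API key; prompts are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: you bring your own key

def adjust_reading_level(text: str, level: str = "grade 8") -> str:
    """Ask the model to rewrite text at a target reading level while
    preserving its salient points and meaning."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You rewrite academic text at a requested reading "
                        "level while preserving its meaning and key points."},
            {"role": "user",
             "content": f"Rewrite the following at a {level} reading level:\n\n{text}"},
        ],
    )
    return response.choices[0].message.content

print(adjust_reading_level("Epistemic access is mediated by modality.", "grade 5"))
```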
The possibility space here is infinite!
Let me guess: You have concerns. About the accuracy of AI summaries and transcripts and translations. About whether leaving AIs to do this work will take jobs away from humans. About whether students relying on AIs will result in the students not learning anything.
Here’s my reality check to you: The alternative is students not learning at all. The alternative is students like me lying through their teeth to pass arbitrary tests of reading speed (that’s what mandatory minimum page counts are). The alternative is large numbers of people being excluded from education because we’ve chosen to tie access to education to reading ability and speed rather than literally anything that has to do with actual learning.
As we build our future with AI, our conversations about AI in education should be about how these technologies can lower barriers and make education accessible and possible for everyone.
Reading ≠ Learning
Much handwringing and professional dread has been published in the past months around the risks of AI models like #ChatGPT being used by students to plagiarize materials and generally cheat on their papers and exams. Headlines like “ChatGPT passes exams for MBA courses and medical licences” are addictive clickbait in an environment of economic uncertainty and pandemic fatigue. But look beyond the immediate “the graphing calculator / internet / smartphones / YouTube / chat apps / smart watches / AI will destroy education and plunge us back into the dark ages” response and you see the potential of this technology to make education not only better, but more available and accessible to more people. If we build the tools to make that future happen.
I have moderate dyslexia, meaning I read at about speaking pace, and when I read, the cognitive load of just getting the words right and keeping track of where I am is exhausting. Reading a regular-length novel cover to cover can take me 6 months. Reading a densely worded academic textbook is a multi-year project. But in university, the expectation was for me to consume up to 5,000 pages of material each semester. So while I never wanted to lie to my fellow students, my professors, and my academic advisors about my reading, I was forced to. Academia, and education in general, is built on a core assumption that people’s ability to read, and read fast, is tied to their ability to learn, grow, innovate, and build the future. I am walking proof this is not the case, and I’m also a prime example of how reading speed is a permanent blocker for many educational paths and the careers that follow.
Which is absurd because it is not the reading that matters, it’s understanding the ideas captured in the reading materials.
Last week I was trying to explain the ethics of Amartya Sen’s Capability Approach to a friend. One of the core ideas of the Capability Approach is that people should have the capabilities and functionings to be and do what they value and have reason to value in their lives and environments. Which raises the question: What is the difference between what we value and what we have reason to value?
This is not something Sen answers directly. Instead it is something you glean from reading several of his published works. As with much of academic philosophy, this distinction is at the same time essential to understanding and lacking in common sense explanation.
So I asked ChatGPT: “What is the difference between what we value and what we have reason to value?”
Here’s what the AI answered:
“What we value refers to the things that we personally consider important or desirable, such as material possessions, relationships, or personal achievements.
What we have reason to value, as described by Amartya Sen in the capability approach, refers to the things that we have the capability to achieve and that are consistent with our personal values and goals. In other words, it refers to the things that we would value if we had the opportunity to achieve them.
For example, one might value owning a house, but may not have the capability to buy one due to financial constraints. In this case, having a safe and affordable place to live would be something they have reason to value. In this way, the distinction between what we value and what we have reason to value is one of actual opportunity or capability. The capability approach takes into account the real opportunities for people to achieve the things they would value if they had the capability to do so.”
This is extraordinary. Notice how the system not only provides an understandable and plain language explanation of the subject, but also manages to pick up on the obscure origins of my question without the prompt providing what I consider sufficient context?
This explanation of the difference between what we value and what we have reason to value is so good it should be used in introductory textbooks on Sen’s work. And it serves as an example of what AI can do for education if we build AI-powered systems for education.
And Sen’s distinction between what we value and what we have reason to value is salient to this issue: We as a society value education, and we have reason to value education for all. Even so, there are millions of people around the world whose real opportunities to access education are limited for an endless list of reasons. AI can grant us all the capabilities to access education in new and better ways, allowing more of us to do and be what we have reason to value in our lives, our schools, and our work!
Our future with AI is the future we build
We are at the beginning of the beginning of our future with AI. Ahead of us is an infinite possibility space. And more now than at any time in human history we have the capabilities and agency and communal connection and global wealth and power to build positive and meaningful futures for ourselves and those around us together.
It starts with looking beyond the perpetual shock of disruption every new technology brings to what those new technologies can do for us as we integrate them into our lives and our work. It starts with talking about the futures we want to build for ourselves and how we make them real. It starts with seeing the world, thinking about how to make it better, and then making it happen.
In this darkness, let me put up a bright beacon on the horizon of possibility and give you a glimpse of what a future of human-AI collaboration can look like.
Explain it to me
That’s what AI can be for us: Just-In-Time assistants for all the tedious, time consuming, and rote robot work taking up our valuable time and cognitive capacity.
If you’re a developer, you can experience this future today via various AI integrations including GitHub Copilot and ChatGPT.
GitHub Copilot coupled with the new GitHub Copilot Labs extension in VS Code gives you a pair programming assistant right in your development environment. Highlight any block of code and in the Copilot Labs panel you can ask for an explanation of the code, have it translated into another (applicable) code language, use a series of “brushes” on it including making the code more readable, adding types, cleaning, chunking, even documenting. You can even use Copilot to write and run tests on your code.
A myriad of ChatGPT extensions, including Ali Gençay’s ChatGPT for VS Code, do much the same via a slightly different route. Authenticate the extension with OpenAI’s ChatGPT API, highlight any code, and you can ask ChatGPT to add tests, find bugs, optimize, explain, and add comments automatically. You also get the ability to start a full chat with ChatGPT in a dedicated panel right inside the editor where you can talk to the AI in more detail about whatever you want.
Time and Energy
In the past, I’d end up investing several days doing this kind of work. Now, with the help of AI, the workload is significantly reduced. This, I think, is an instructive example of how our future with AI can unfold: with AI assisting us as we do our work rather than taking over that work.
Both GitHub Copilot and ChatGPT are conversational AIs. You chat with them as you would chat to a person. You can ask questions, give instructions, and ask them to perform tasks for you. Using AIs as pair programmers you do a combination of all of this and more.
If you’re using the VS Code extensions mentioned above, they are already set up for the correct context. In the case of ChatGPT you can also use it as a stand-alone pair-programmer with some basic setup:
To start, set the stage (literally) by instructing the AI on how you want it to behave. In a new chat, provide a prompt similar to this:
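For example (the role and skills can be whatever fits your project; this one mirrors my JavaScript use case):

```
You are an expert JavaScript developer. You review code, find errors and
deprecated methods, and suggest fixes and refactoring. Is that understood?
```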
The format here is “you are a [some role]. You [perform some skill/action]. Is that understood?” The last question gives the AI an opportunity to state how it is setting itself up based on your instructions and gives you an opportunity to provide further instructions. In my case, ChatGPT confirmed the setup and asked me to provide the code I wanted reviewed.
Next, paste in the code you want help with. In response, ChatGPT provides an explanation of the errors it discovered, prototype examples of solutions to the issues, and finally a full refactoring of the pasted code with the issues resolved.
This kind of contextual response not only helps solve immediate problems, but also teaches you what’s wrong and how to fix it.
This is invaluable for people learning to code and people working with code in any capacity, which is why I’d strongly discourage any teacher or manager who is right now trying to figure out how to block people from using AIs in their work. AIs reduce the need for Googling or looking up code examples on documentation sites, coding forums, and open source repositories. Instead they give you contextual explanations and references related to your specific code, and even help you with refactoring. This is the future of work, and it gives us more capabilities as workers.
Have some code you can’t make heads or tails of? AI can explain what it does. Computers are much better at parsing logic-based languages than humans, and conversational AIs like ChatGPT are specifically constructed to output human-sounding language, making them ideal tools for deciphering complex code for human consumption.
Have some code in need of documentation? AI can write a function description, inline comments, or whatever you prefer based on your instructions.
Need to refactor based on specific parameters? AI can get you started.
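As an illustration, here is a minimal sketch of the documentation case driven through OpenAI’s API directly rather than an editor extension. It assumes the openai Python package and an API key; the JavaScript snippet and prompt wording are mine, for illustration only:

```python
# Minimal sketch: asking a conversational AI to document a function.
# Assumes `pip install openai` and a valid API key.
import openai

openai.api_key = "YOUR_API_KEY"

# A hypothetical function in need of documentation.
code_snippet = """
function debounce(fn, delay) {
  let timer;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delay);
  };
}
"""

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "You are an expert JavaScript developer. You write "
                    "accurate, concise JSDoc comments for code you are given."},
        {"role": "user", "content": f"Document this function:\n{code_snippet}"},
    ],
)

print(response.choices[0].message.content)
```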
I could go on but I think you get the idea.
I’ve worked alongside these AI pair programmers for the past year and a bit, and I can say with absolute conviction these tools and materials will make our lives better if we use them right and integrate them into our lives as helpers for, rather than replacements of, human labor.
In my experience, pair programming with an AI feels like working with an overly polite person with encyclopedic knowledge of coding and no awareness of what they don’t know. And this constitutes just our first timid steps into the infinite possibility space we are entering as AIs become our assistants.
The beginning of the beginning
As you interact with AI today, be constantly aware of where you are: At the beginning of the beginning of a new era. While these tools are powerful, they are not omnipotent. Far from it. They are shallow, error prone, and while they sound convincing they cannot be trusted. A good mental model for what they produce right now is bullshit as defined by Harry G. Frankfurt: It looks true, and it may be true, but some of the time it’s just plain wrong and the AI will still present it as the truth. While they talk like humans, AIs are not conscious or sentient or aware. They have no human comprehension of your question or their answer. They are advanced pattern recognition systems that tumble down enormously complex decision trees any time a prompt is provided, issuing human-sounding strings of text (or code) with a statistically high probability of being the kind of answer their human trainers consider correct.
When I asked ChatGPT to correct a function containing a deprecated method, it corrected the syntax of the function but kept the deprecated method. When I told it the method was deprecated, it omitted it and refactored the code, but the result used a similar-sounding method that serves a very different purpose and was therefore non-functional and just plain wrong.
When I asked ChatGPT to find an error in a large body of code, it found two real errors and invented a third one, going as far as referencing use of a method that wasn’t even in the code provided.
These examples highlight why I see AIs as tools and materials rather than replacements for human labor. They have no understanding, no contextual awareness, no ability to do creative or lateral thinking. A human still needs to be in the loop to make sure the output meets parameters, does what was intended, and follows best practices and standards (not to mention hold ethical responsibility for the work created). These things we call “AI” are very much artificial, but they are not intelligent.
Intelligence is added by the human operator.
Even so, the pair programming offered by these prototype AIs is an enormous leap forward for human workers. And you can easily see how this type of AI-driven assistance can be extended to other work and other tasks.
I’ve come to think of them as overconfident colleagues with a lack of self-awareness. Because of how they are “trained” – being fed large volumes of data from a corpus lifted off the internet – their “knowledge” is limited to the coding world of two years ago. When it comes to modern features, frameworks, techniques, and standards released in the past two years, our current AIs know naught, and more importantly do not know what they do not know. So if you’re writing code on the bleeding edge of standards, you’re on your own. Or better yet: You’re training your future AI pair programmer! So the pressure is on to get it right!
The future is today
Having seen what AIs can do today, I desperately wish I had a looking glass to see what the future of work looks like. The potential here is infinite. The very best AI tools we have today are prototypes and MVPs trained on old data and limited in their scope. The AIs we’ll have a year from now, five years from now, ten years from now will be beyond anything we can imagine. And with these tools and materials in hand we can choose to build meaningful futures for everyone where we all have the capabilities to be and do what we have reason to value.
The future we are stepping into today is a future where AI is part of our lives, our work, our communities and our society. If you are alive today, and especially if you find yourself in a job, you are in the right place at the right time: These next few years are when we collectively get to figure out how AI fits into our lives and our work. This is when we set the stage for our futures with AI, and we all have a part to play. The work starts by asking yourself in what parts of your life you act like a robot, and whether you’re willing to part with that work and let an AI do it for you so you can do something else.
If we do this right, AI will allow us to reclaim our time to be human.
This is a book about death. Let me say that up front. It’s also a book about life after death – as in the lives of the people whose loved ones are facing or have passed the threshold of death. As post-pandemic novels go, this one ranks among the best I’ve read.
A pandemic. A dying child. A VR suicide group. A slow recovery. A spaceship. Thousands of years of longing experienced in a single lifetime.
“How High We Go In The Dark” reads like a series of short stories, tied together by time and global events. It’s contemplative in a way readers of “Station Eleven,” “Severance,” and “The Memory Police” will recognize. What sets this book apart from the others is its lack of a central protagonist or linear threaded story. In “How High We Go In The Dark” each chapter is a first-person narrative of a character met once, diary-like in its presentation. Each chapter stands alone and can be read as a singular unit. This is accentuated by the audiobook having a different narrator for each chapter.
Every word steeped in melancholy and longing, this book is not for those who seek joy and excitement. It is rooted in a deep sense of grief for a dying people, a dying planet, reflecting the nebulous grief and loss of past normality we’ve all experienced over the 3+ years of the COVID-19 pandemic.
What’s missing is relief: reading “How High We Go In The Dark” is riding perpetually just behind a cresting wave – feeling its resolution within reach but never quite getting there.
Artificial? Yes. Intelligent? Not even close. It is not without reason things like ChatGPT are called “AI” or “Artificial Intelligence.” We humans have a propensity for anthropomorphizing – attributing human characteristics to – things that are not human. Thus if we are told something is intelligent, let’s say a very large computer system we can submit questions to and get answers from, we look for intelligence in that thing. And if that thing is trained on our own language and art and mathematics and code, it will appear to us as intelligent because its training materials came from intelligent beings: us.
“Artificial Intelligence” is a clever marketing term for computer models designed to appear intelligent even though they are not.
So, as we crash headfirst into the AI present and future, we need to reset our mental model before we start believing these things we call “Artificial Intelligences” are actually intelligent (again, they are not).
Tools and Materials
I propose we all start thinking of these things we call “AI” as tools and materials. Because that’s what they are and that’s how we’ll end up using them.
Sometimes we’ll use them as tools the same way we use our phones and computers and the apps on them as tools. Sometimes we’ll use them and what they produce as materials the same way we use printed fabrics and code snippets to create things. And sometimes we’ll use them as both tools and materials the same way we use word processing applications: first as a tool with which we write a body of text, and then as a material as the thesaurus function helps us use more fanciful words and phrases.
Here are some basic examples to help you build the mental model:
AI as a tool performs a task for us:
Fill out tax forms, write contracts and legal documents.
Summarize text, rewrite text to a specific reading level.
Online shopping including booking flights and hotels etc.
Any interaction with a customer service representative (CSR).
Magic eraser for images, video, and audio.
AI as a material generates something for us:
Plot lines for stories.
News articles and summaries.
Images and other art.
Variants of a layout, or a theme, or an image, or a painting.
Thinking of AI as tools and materials rather than intelligent things with magical human-like powers is an essential mental shift as we figure out how to fit these things into our lives and our world. We have to move away from the linguistic trick their creators foisted upon us with their naming, and move towards the practical realities of what these things really are:
AIs are if-this-then-that machines using enormously complex decision trees generated by ingesting all available writings, imagery, and other human-made materials and filtering that data through pattern-matching algorithms.
They are regurgitation machines echoing our own works back to us.
And just like we are drawn to our own image every time we pass a mirrored surface, we are drawn to the echoes of ourselves in the output of these machines.
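To make the “regurgitation machine” idea concrete, here is a toy sketch. A real language model is vastly larger and more sophisticated than this word-pair Markov chain, but the underlying principle is similar: every “new” word is an echo of patterns already present in the training text.

```python
# Toy illustration of statistical text regurgitation (not a real LLM):
# a Markov chain that can only ever emit word sequences it has already seen.
import random
from collections import defaultdict

training_text = (
    "we are drawn to our own image and we are drawn to the echoes of ourselves"
)

# Record which words follow which in the training text.
follows = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word].append(next_word)

# Generate "new" text by always echoing an observed continuation.
word = "we"
output = [word]
for _ in range(12):
    if word not in follows:
        break
    word = random.choice(follows[word])
    output.append(word)

print(" ".join(output))
```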
Shallow Work and Human Creativity
Asked for one word to describe AIs, my immediate answer is “shallow.” You’ve probably felt this yourself without being able to put your finger on it. Let me explain:
There is a bland uniformity to AI output. It’s easiest to notice in generative AI images. Once you’ve been exposed to enough of them, they start taking on a very specific “AI-ness.” For all their variety, there is something recognizable about them – some defining feature that sets them apart from what we recognize as human-made images. That thing is shallowness.
AIs are conservative in the sense they conserve and repeat what already exists. They don’t come up with anything new. They are also elitist in the sense they lean towards what is predominant, what there is more of. They are swayed by trends and popularity and amplify whatever majority opinion they find in their training data.
This makes their output bland and uniform and shallow like a drunk first-year philosophy student at a bar: The initial conversation may be interesting, but after a few minutes you notice there’s little substance behind the bravado. I’ve been that drunk first-year philosophy student so I know what I’m talking about.
This means while AIs are great at doing shallow rote work, they have no ability to bring anything new to the table. They lack creativity and ingenuity and lateral thinking skills because these skills require intelligence. And AIs are not intelligent; they just play intelligent on TV.
Will an AI take my job?
Our instinctual response to any new technology is “will it take my job?” It’s a valid question: Jobs are essential for us to be able to make a living in this free-market capitalist delusion we call “modern society,” yet job creators have a tendency to let go of expensive human workers if they can replace them with less expensive alternatives like self-checkout kiosks that constantly need to be reset by a staff member because you put the banana in the bagging area before you chose whether to donate $2 to a children’s charity, or automated “voice assistants” that never have the answers to your customer service questions and only pass you to an actual human once you’ve repeated the correct incantation of profanity (try it, it totally works!)
So now that we have these things some clever marketing people have told us to call “AI,” are they coming for your job? Well, that depends:
If your job is shallow and constitutes mainly rote work, there’s a good chance an AI will enter your life very soon – as in within months – and become part of the toolkit you use to get your job done quicker. And if it turns out that AI can be trained to do your job without your intervention (by having you use it and thereby training it), there’s a non-zero chance it will eventually replace you. That chance hinges more on corporate greed than it does AI ability though.
If your job involves any type of creative, or deep, or lateral, or organizational, or original, or challenging, or novel thinking, AI will not take your job because AI can’t do any of those things. You’ll still work with AI – probably within months – and the AI may alleviate you of a lot of the rote work you are currently doing that takes your attention away from what you were actually hired to do – but the AI is unlikely to replace you. Unless corporate greed gets in the way. Which it often does because of the aforementioned free-market capitalist delusion we call “modern society.”
What we all have to come to terms with today is we’re long past the point of no return when it comes to AI. While technology is not inevitable, technology often becomes so entrenched it is impossible to … un-entrench it. That’s where we are with AI. No matter where you live and what you do for work, for school, or in your own time, you’re already interacting with AIs in more ways than you can imagine. And these AIs are going to become part of your work, your school, and your home life whether you want them or not.
Our job now is to talk to one another about what role these things called “AI” are going to play in our lives. How do we use them in ways that don’t take jobs away from the humans who need them the most – the historically marginalized and excluded people who tend to hold jobs comprising mainly shallow rote work? How do we build them in ways that don’t cannibalize the creative works of artists and writers and coders and teachers? How do we incorporate AI into education to improve learning outcomes for students and build a more informed and skilled populace? How do we wrench control over our AI future from the surveillance capitalists and longtermists currently building the world to their libertarian techno-utopian visions?
How do we use AI and all technology to create human flourishing and build futures in which we all have the capabilities to be and do what we have reason to value?
If we don’t talk about the future, the future becomes something that happens to us. Let’s have this conversation.
Sometimes, we think of things called “AI” or “Artificial Intelligence” as being intelligent. But they’re not really. They’re just computers that can answer questions. We humans have a tendency to give human characteristics to things that aren’t human. So if we think something is intelligent, like a big computer system, we might see it as being smart. But really, it’s just using the things we’ve taught it, like our language, art, and math.
That’s why it’s important to change the way we think about AI. Instead of thinking of it as being intelligent, we should think of it as tools and materials. That’s what it really is and how we’ll use it.
Sometimes, we’ll use AI like a tool, like we use our phones and computers. Other times, we’ll use what it makes as materials, like we use printed fabrics to create something. We might even use it as both a tool and material, like when we use a word processing app to write and then use the thesaurus function to make the writing more fancy.
Here are some examples of how we can use AI:
As a tool to fill out tax forms, write contracts, or create code
As a tool to summarize text or rewrite it for a specific reading level
As a tool for online shopping, like booking flights and hotels
As a tool to talk to customer service representatives
As a tool to clean up images, video, and audio
As a material to make simple stories, plot lines, news articles, and images
As a material to create variations of a layout, theme, or painting
AI is really just a bunch of if-then statements and very complex decision-making based on patterns in the data it’s been given. It’s like a machine that repeats what we’ve already made. And just like we’re drawn to our own reflection in a mirror, we’re drawn to the things AI makes that are like us.
But there’s a problem with AI. It’s shallow. Have you ever noticed that AI images and things it makes all kind of look the same? That’s because AI is shallow. It doesn’t come up with anything new. It just repeats what it’s already seen. It also likes what’s popular and trendy, and it amplifies the things it sees a lot of in its training data. This makes the things it makes feel boring and shallow, like a student who’s had too much to drink talking about philosophy.
Human creativity is different. It’s deep. It comes up with new ideas and approaches things in unique ways. AI can’t do that. So we need to use it as a tool and material, not as a replacement for human creativity.
How social media verification was destroyed by hubris.
By destroying any meaning the Verified badge on Twitter may have had, Elon Musk has taught us all a lesson: Verified status with self-ID must be free, optional, and universally available across all social media platforms. Better yet, the web should have a free, optional, and universal self-ID verification system social media platforms and other tools can use to verify users. If we don’t know who we’re talking to, bad things will happen.
The tweet, posted by a newly verified account bearing the name of the drug manufacturer, said insulin would now be provided for free.
The problem: The account was fake. It was impersonating the drug company and had bought the Verified badge for $8.
In the early days of November 2022, Twitter Owner and CEO Elon Musk murdered social media verification for the lols. Lashing out at what he described as the “Lords and Peasants” system of verification, he changed the meaning of the blue tick next to a user’s name from “has provided us a copy of their ID to verify their identity” to “is paying $8/month for a blue check next to their name.”
The result: an immediate flood of impersonation accounts on the platform, and a subsequent erosion of any trust the Verified label might have created on the platform.
Turns out in spite of right-wing conspiracies claiming the opposite, the Blue Tick was not in fact a status badge given to liberals – it was a badge informing users the account was verified as representing who it claimed to represent. You know, verification. Shocking.
In the immediate aftermath of all this, Twitter rolled out a new “Official” badge. Which Elon personally pulled minutes later. Then reinstated because, again, it turns out the Verified badge actually served a purpose and was not in fact a “Lords and Peasants” system.
As I said at the start of all this, Elon appears to be doing 1st year design student back-of-the-napkin iterative design in public, and he’s receiving a failing grade at it. But what do I know, I’m just a university teacher specializing in this exact subject.
Here’s how I imagine it all went down:
Twitter employee, cowering behind a chair: Lord Musk, it appears the blue check you thought was a vanity badge actually serves a vital function!

Elon the Ignoble: Thou darest speak?!? What say you, serf?

Twitter employee, now using the lid of a garbage can as a shield: We need to keep the verified system to prevent impersonations on the platform.

Elon the Ignoble: Heresy! We the King make no mistakes! That’s Official!

Now Former Twitter employee, being led out of the building by HR: Someone is going to impersonate a pharma company and tank their stock!

Elon the Ignoble: ???
From the initial 2009 rollout as a band-aid to prevent celebrities and brands from suing the platform over allowing impersonation accounts, to the 2016 release of a public application process where applicant accounts “determined to be of public interest” would get the badge, what was on the back-end a Verified ID system was given the public image of a “Verified Awesome by Us” badge.
Due to the inscrutable black-box process of Verified, people built myths around the system and started believing the Verified badge gave users powers and prestige. And when people believe something gives others power and prestige, those others get power and prestige, even if no actual power or prestige is bestowed on them in reality.
Which is how the right-wing conspiracy theory that the Verified status was only granted to liberal accounts (utter nonsense, easily disproven by who is verified on the platform) wormed its way into the brain of the new Twitter CEO and led him to think of it as a Lords and Peasants system rather than what it actually always was: a verification system.
Not that it matters now. Verified is dead. It cannot be resurrected. It has lost all meaning. Which may or may not have been Elon’s intent all along. Who knows.
The Need for Verification Online
Watching Musk iterate his way to the irrevocable demolition of the Bird App in real time is a heady, bordering on out-of-body experience. Gavin Belson masquerading as Tony Stark is either so blinded by hubris he is unable to recognize he has no idea what he’s doing and has systematically fired everyone who does, or hell-bent on burning down the global digital public square he spent $44 billion on just to see what it looks like. Either way, the consequences of his folly will impact us all.
The impersonation of a pharmaceutical company making billions off predatory pricing on life-saving products that should be provided at-cost may be a fitting critique of the late-stage capitalist hellscape we’re all living through, but it is also the eviscerated body of the canary in our social media coal mines.
In the near future in the wake of war, famine, or a natural disaster, someone will create a Twitter account impersonating a government or critical aid organization and provide harmful or even deadly misinformation to the victims. Until November 2022, people knew with some certainty if the account telling them to seek shelter, move their family, or send money somewhere had a blue check, they could trust it. That trust is now gone for those in the know. But for the millions of casual users of Twitter who are not aware of Musk’s amateur-dentist-with-a-jackhammer approach to service design, a blue check still means trust, and they will be led straight into the maw of whatever evil paid Elon his $8 monthly identity tax.
The Oligarch of Folly
If we can learn anything from these last chaotic weeks, let it be this: Wealth does not imply wisdom. More likely it implies a propensity towards destroying everything to get what you want.
When Musk started talking about his desire to buy Twitter (only to moments later try to back out of the whole thing), Muskovites (the people who believe their idol can Do No Wrong) celebrated the move claiming it would bring “true free speech” to the platform. In the few weeks he’s been at the helm, he has imposed authoritarian and dictatorial rule on the platform by firing the majority of the staff, banning people and behaviours for personal reasons, and destroying much of the social infrastructure the platform was built on because he didn’t like the way it looked. He seems hell bent on proving himself uniquely unqualified for the job he has bought, and chronically unwilling to accept his own limitations.
Elon Musk destroyed Verified because he didn’t bother to understand it. I shudder to think what he’ll set his eyes on next.
Emblematic of the fractured nature of social media, the first semi-official statement from the new self-described “Chief Twit” was three photos of dense text, without the necessary alt text to provide accessibility.
Pretty Hate Machine
Twitter has served an outsized role in my personal and professional lives. On the app I’ve made great new friendships and ruined old ones; created professional networks and burned bridges; helped people through difficult personal and professional times and offended others; been misunderstood and misquoted while myself misunderstanding and misquoting; blocked people and had people block me; found new limits for the highest heights of elation and the deepest depths of despair.
On Twitter I watched one friend livetweet their first child’s birth and another livetweet the bombing of his home. I watched people find their tribes and people falling into the gravity wells of hateful conspiracy theories. I watched new technologies emerge that will make the world a better place and technologies emerge that are destroying the very fabric of our society.
To say I’ve had a fraught relationship with the bird app is an understatement. When asked to describe Twitter, the first phrase that comes to mind for me is “Pretty Hate Machine,” but “Petty Hate Machine” might be equally apt. Open Twitter on any day and you’re two clicks away from whatever rage bait the “Explore” algorithm is currently selling. Political conspiracy theories, medical conspiracy theories, climate conspiracy theories, celebrity conspiracy theories, social media conspiracy theories, whatever flavor of rage you want to fill up on, the blue bird is fully stocked and eager to deliver.
On the news of Musk’s intent to buy Twitter back in the spring of 2022, right-wing pundits and their loyal followers celebrated the “end of censorship” and “return of free speech,” and in the two days since the Sinking In, the platform has become a testing ground for online extremists, trolls, and bots wanting to see how far they can take things before whatever moderation tools and staff are still in place step in:
Remember: when free speech absolutist and Silicon Valley techno libertarians talk about “the extreme left” they are talking about anyone who thinks you should be able to be online without being subjected to constant harassment and death threats because of who you are.
The vast majority of content moderation is there to prevent platforms from overflowing with spam. The rest is there to prevent platforms from being used to share criminal harassment, assault, terrorism, and CSAM content.
The right-wingers who claim they are being “#shaddowbanned” or “censored” have no reality to back them up. Studies show political bans fall evenly on the left and the right. The main diff is people on the right build their enormous platforms on the story of being censored.
People should be free to speak their minds on social media. People should also be protected from having those freedoms removed by hateful mobs. Organized online extremists have made sport of driving women, LGBTQIA2+, PoCs, and other historically harmed people off platforms.
If Twitter has any serious aspirations of becoming a “common digital town square” like Elon said, it has to be managed like a town square. If you show up at a town square screaming rape and death threats at the other people there, you will be removed, and likely arrested.
There is no civil discourse without moderation. That’s why debates have moderators. The people who claim they want to end “censorship” on social media are really saying they don’t want to be held accountable for what they say and do on social media.
In spite of what Musk and the techno-utopians of Silicon Valley want to believe, Twitter and its ilk are not “common digital town squares.” Twitter is a firehose, a deluge, an all-encompassing flood of every aspect of the human condition, pouring into your eyes the moment you open them. And like Alex DeLarge strapped to a chair with our eyes pried open, we stare down the torrent of hope and misery and joy and pain and love and hate and everything in between hoping to be cured of our own boredom, or disconnection, or unmet promises, or hope, or whatever the algorithm tells us ails us.
To Kill a Bluebird
When Musk says “highly relevant ads are actual content!” he simultaneously reduces the term “content” to its most basic meaning (under which spam must also be defined as “content”) and says the quiet part out loud: The only content that matters is the content that makes Elon money.
Musk is out $44 billion. He needs to make that back. Cutting 75% of staff won’t make a dent (though deep cuts are inevitable). The only meaningful revenue stream Twitter has at the moment is advertising. For advertisers to want to be on the platform, content moderation is necessary. Thus his other promise in the aforementioned inaccessible-text-in-pictures tweet directed at advertisers: “Twitter obviously cannot become a free-for-all hellscape, where anything can be said with no consequences!”
The free speech absolutists on the far right are unlikely to see their unmoderated dream app; not because Musk doesn’t want it, but because the only thing that matters to Elon is Elon making his money back. Instead I predict we’ll see a Twitter leaning harder than ever into Surveillance Capitalism, a doomed subscription model (leaks from internal meetings claim Musk “wants subs to be 50% of revenue at some point”), and creator-based advertising spec work, aka “the Creator Economy.”
Considering Twitter was already struggling to catch up with the new social media giant TikTok before he had an itch to scratch and randomly said he’d buy the platform, Musk and Twitter now have to weigh the need for an active user base against the need for quick and large revenues.
As I write this, my Explore page shows terms including “CEO of Twitter,” “free speech,” “mastodon,” and “delete” trending. On the app, as in the real world over which the app presides, the takeover of one of the biggest global communication platforms by an ultra-rich oligarch whose modus operandi seems to be playing troll to the masses to make a profit is the rage-inducing trend du jour. Journalists, scientists, and creators are setting up new accounts on other apps including TikTok, figuring out how to migrate their followers to the federated Twitter alternative Mastodon, and screaming their Medium and Substack and WordPress links into the void hoping the world will continue to hear them should they be kicked out of the bluebird’s nest.
So is this the end for Twitter? Should we all delete our accounts and move our oversharing elsewhere? As I’ve explained before in relation to the ever resurgent #DeleteFacebook trend, until we’ve built suitable alternatives, being able to step away from these commercial apps turned critical infrastructure is a sign of extreme privilege.
For better or worse, Twitter is the place people turn to for news and information in a crisis. TikTok is too video-heavy for quick communication. Facebook is too … Facebook. When protesters flood the streets in Iran or Berlin or Hong Kong or Minneapolis, Twitter is their platform of choice for rapid dissemination of information. When a hurricane, or earthquake, or war, or insurrection or coup strikes, Twitter is the first place for immediate breaking news from citizen and professional journalists. When researchers want to know how disinformation spreads and transforms the populace from people who are in it together to people who would rather let you die than have to wear a mask, they turn to Twitter’s robust APIs and data discovery tools.
Take it from journalist and author Sarah Kendzior: “Twitter is a hellsite that also houses a vital time-stamped chronology of state corruption. It shows who knew what and when, and gives some insight into why. Chronology is an enemy of autocracy. Altering Twitter is altering history, and that’s the appeal to autocratic minds.”
I am not leaving Twitter (yet), but I am preparing for a future where Twitter no longer plays a meaningful (if destructive) role in my life, making sure all my eggs are not in the bluebird’s nest if you will (and yes, I’ve taken this whole bluebird metaphor thing way too far at this point. I’m tired, ok?)
I joined Twitter in May of 2008 to explore its APIs as a possible example for a web development book I was writing. The tweets from those early days are as mundane as they are prescient of what my relationship to the Bird App would become. I’ll leave my first Tweet as my last word for now:
“Are you Extremely Online?” the job posting read. “If so, we want you!”
There’s a unique feeling of selfish gratification in seeing in-group language make its way into the wider public and knowing you know what it means, what it really means, while most people will just furrow their brows and think “heh?!?” before being bitten by the next attention vampire.
I can’t stand that I feel this way; like having the inside view, the prior knowledge, the scoop on a TikTok trend or social media hype or online fad makes me somehow significant or superior; like being so Extremely Online I know both what being “Extremely Online” means and what it really means is a virtue and not a curse; like transitioning from merely Extremely Online to Violently Online is a natural and necessary next step for me, if it hasn’t already happened.
I want to introduce a new term to our vocabulary about how we are on the internet and how the internet shapes us:
Violently Online – a phrase referring to someone whose close engagement with online services and Internet culture results in hurt or harm to themselves. People said to be violently online often believe that the pain caused to them by their online activity is a necessary part of their lives.
The term and definition take inspiration from the Rosenberg quote above and the definition of “Extremely Online”, described on Wikipedia as “a phrase referring to someone closely engaged with Internet culture. People said to be extremely online often believe that online posts are very important.”
“Violently Online” refers specifically to behaviour patterns resulting in harm done to ourselves by being online, and stands in sharp contrast to the online violence some people use to inflict harm on others.
The 2021 book “No One Is Talking About This” by Patricia Lockwood is an in-depth study of what it is to be Extremely Online. In it, the internal dialog of the protagonist ruminates on their chronic use of and dependence on “The Portal” (a fictional stand-in for the internet) and how someone who lives out their life on The Portal experiences an alternate, hyper-accelerated reality compared to everyone else.
This book, and the many articles, essays, documentaries, TV shows, podcasts, Twitter threads, newsletters, TikTok videos [list truncated for sanity] covering the same topic, describe a vampiric disease we’ve all been afflicted by and that some of us have succumbed to. The gravity well of the screen, flashing and buzzing with notifications. The dopamine hit of someone else acknowledging your existence with a like, a share, a response! The flick of the thumb to lift out of the infinite scroll a piece of carefully crafted content that will finally satiate your burning hunger for something undefined. If only you keep scrolling, that feeling of doom will surely go away.
Being Violently Online means being in thrall to the vampire; not merely aware of, or constantly using, or even Extremely Online, but being controlled or strongly influenced by our online activity, to the point of subservience, to the point of reducing ourselves to our online interactions.
Being Violently Online means experiencing the harms of your online interactions, knowing how they harm you, and still flicking your thumb across the burning glass as the world disappears and all that remains is the promise of an elusive piece of content to finally prove to you, unequivocally, that yes, you exist.
The next phase of the web is already here, and it’s defined by AI-generated content.
I wrote this article using only my mind, my hands, and a solid helping of spellcheck. No machine learning models, NLPs, or so-called AI contributed to what you’re reading. As you read on you’ll realize why this clarification is necessary.
Ghost in the Typewriter
Reading an article, watching a video on TikTok or YouTube, listening to a podcast while you’re out running, you feel you have a reasonable expectation that the content you’re consuming is created by a human being. You’d be wrong.
The above video is from 2018. Consider the vertically accelerating rate of technological evolution we’re undergoing right now, and I’ll leave you to imagine how much bigger and more advanced this phenomenon is today and how much further it’s going to go in the next few years.
The Next Phase of the Web
There’s a good chance you’ve heard the term “web3” used recently, and there’s a good chance it’s been accompanied by some form of marketing statement like “web3 is the next version of the web” or “the next phase of the internet” or “the thing that will replace Facebook and Google” or something similar.
If you have not (actually even if you have) here’s a quick primer on what this “web3” thing is:
From my perspective, as someone who spent the past several years embedding myself in the community and its cultures, “web3” is a marketing term for all things built on a decentralized trustless blockchain and used to promote a future where everything on the web is financialized through cryptocurrencies and NFTs. It has nothing to do with the web platform and everything to do with the crypto community leveraging the public awareness and concerns around what’s known as “Web 2.0” to promote their libertarian anti-regulation cryptocurrency agenda. If you want a less opinionated and more descriptive explanation of the term, I invite you to check out my LinkedIn Learning course titled “What is Web3?” or you can check out Molly White’s excellent blog “web3 is going just great.”
The “web3” and “Metaverse” conversations are a distraction from what’s actually happening on the web – what is defining the next phase of the web:
Where we are right now, with the line being blurred between human-generated and AI-generated content, is at the very beginning of this next phase where the magical abilities of yet-to-be-fully-understood technologies allow us to do things we previously couldn’t even dream of.
The fawning articles about amazing AI-generated art are the public-facing part of an emotional contagion campaign designed to condition and eventually habituate us to a new reality where machines create our content and we passively consume it.
The AI-fication of online content isn’t something on the horizon, a possible future; it’s our current lived reality. The line has already been crossed. We’re well into the next phase whether we want to accept it or not. AI is already generating and curating our news, our fake news, our information, our disinformation, our entertainment, and our online radicalization. Creators are rushing to take advantage of every promise offered by AI companies in their relentless pursuit of easy profits through fame-based marketing. Your reasonable expectation today must be that the content you consume is wholly or in part generated by AI unless it explicitly states otherwise (remember my disclosure at the top of the article). And we’re only getting started.
The Immediate Future of AI Content
Right now, your interaction with AI-generated content is largely invisible to you and mainly comes in two forms: AI-curated content (everything surfaced or “recommended” to you through social media, news apps, online streaming services, and AI assistants like Google Home, Siri, Alexa, and Cortana is brought to you by some form of AI) and AI-assisted content (AI, ML, and NLPs used to either create, add to, edit, modify, market, or otherwise contribute to the generation of content).
In the near future, I predict we’ll see the emergence of a new type of tool targeted at the average information consumer: AI services providing synthesis of online content as customized coherent storytelling in the form of articles, podcast-like audio, and eventually video.
In the near future AI assistants and apps will take a plain language prompt like “tell me what’s happening with the situation in Palestine, or Ukraine, or the US” and compile in seconds a thoroughly sourced article, audio narration, or video – written in your language, preferred complexity, reading level, and length – stringing together reporting and historical reference material from various online sources to produce what will appear to you as proper journalism.
These new apps and services are the natural evolution of the curated content streams we already have through news apps and social media. The difference is they will no longer only provide us the original sources of information: they will also curate, synthesize, contextualize, and reframe content from many sources into one coherent story. And this will be done on a user-by-user basis, meaning that if you and your partner or family member or close friend submit the exact same query, you’ll get entirely different outputs based on your preferences and all the other information these AIs have gathered on you.
Think the heavily curated landing pages of TikTok and YouTube, except all the content is custom generated for you and you alone.
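To make the shape of such a service concrete, here is a minimal sketch in TypeScript. Everything in it is hypothetical: UserProfile, GenerativeModel, fetchSources, and the rest are names I invented for illustration, not references to any real product or API. The sketch only shows the pipeline described above: gather sources for a query, fold the user’s profile into the instructions, and have a generative model write one story for one user.

```typescript
// Hypothetical sketch only: every interface and function name here is
// invented for illustration and does not reference any real product or API.

interface UserProfile {
  language: string;                // e.g. "en" or "no"
  readingLevel: "simple" | "standard" | "expert";
  preferredLength: "brief" | "standard" | "longread";
  inferredInterests: string[];     // everything the service has gathered about you
}

interface SourceDocument {
  url: string;
  publisher: string;
  publishedAt: Date;
  text: string;
}

// Stand-in for whatever generative model such a service would run.
interface GenerativeModel {
  synthesize(instructions: string, sources: SourceDocument[]): Promise<string>;
}

async function personalizedStory(
  prompt: string,                  // "tell me what's happening in Ukraine"
  user: UserProfile,
  fetchSources: (query: string) => Promise<SourceDocument[]>,
  model: GenerativeModel
): Promise<string> {
  // 1. Gather reporting and historical reference material from many sources.
  const sources = await fetchSources(prompt);

  // 2. Fold the user profile into the instructions. This is the step that
  //    guarantees two people asking the exact same question get two
  //    entirely different stories.
  const instructions = [
    `Write in ${user.language} at a ${user.readingLevel} reading level.`,
    `Target length: ${user.preferredLength}.`,
    `Frame the story around topics the reader already engages with: ${user.inferredInterests.join(", ")}.`,
    "Cite the provided sources so the result reads as journalism.",
  ].join("\n");

  // 3. One coherent, fully synthetic story, for this user and this user alone.
  return model.synthesize(`${instructions}\n\nQuery: ${prompt}`, sources);
}
```

Note that nothing in this sketch requires technology we don’t already have: the profile data is already being collected, the sources are already indexed, and the synthesis step is exactly what today’s generative models are built to do.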
The appeal will be too much to resist; the inherent dangers of falling into radicalizing personalized information echo chambers impossible to avoid.
The road that got us to today was built using machine learning models and AIs whose power we harnessed for one purpose and one purpose alone: To generate revenue through advertising.
The next generation of ML, AI, and NLPs will be deployed on this same ideological platform: To give us what we want – self-affirming, bias-validating, feel-something content that juices the rage of our radicalization and sells the extract to the highest bidder.
The motivation of these so-called “artificial intelligences” is to fulfill their assigned task: to perform better than their previous iteration. Entirely artificial. The motivation of the people deploying these AIs on the world is to use them to make a profit at any cost to society. Entirely capitalistic. Our motivation in using them is therefore the first and last bastion against being turned into advertising-consuming bots.
The Third Phase of the web is here, and it has nothing to do with Bored Ape NFTs or DAOs to build housing or the Next Big Cryptocurrency to go halfway to the moon before crashing back to earth, making early investors rich off the losses of everyone else. The Third Phase of the web – call it Web 3.0 or The Next Web or just the web – is the machine-generated web, tuned to keep us engaged and scrolling as our information and interactions over the next few years break the bonds of rectangular glass screens to surround us in augmented reality.
Now is the time to have a conversation about what we do with it and where we go next. I welcome your input.
Header photo: Various variations on various themes by the AI image-generating system DALL·E 2.
Look at any of the millions of posts sharing personal abortion stories and pro-choice support on LinkedIn over the weekend and you’ll likely find a comment similar to this one:
“Why use LinkedIn for this type of political post? Engagement at all costs.”
I could not disagree more. Work is political. Reproductive rights have a significant impact on work, and our work impacts our own and other people’s access to reproductive services. Having conversations about abortion rights in a work environment is not only appropriate; it is necessary. Abortion is healthcare, and healthcare is a human right. When our coworkers or our clients are subjected to a removal of their human rights, that’s something we need to talk about; not in terms of whether we should talk about it, but what we are doing about it.
Work, down to the most basic principles of having the right to work, is itself political. You don’t have to go far back in history to find a time when people were denied the right to paid labor based on their gender, their sexual orientation, their religious beliefs, their country of birth, or their skin colour.
Workers’ rights to fair wages, paid medical leave, paid vacation time, reasonable hours, and a safe work environment – all these and more are hard-fought political issues, many of which are still challenged in courts and seats of government to this day. The right of women and people who may become pregnant to work, to paid parental leave, to not lose their job over reproductive decisions, and to not be passed over for promotions due to their reproductive choices – these are all hard-fought political issues for which we are still fighting.