AI Training and the Slow Poison of Opt-Out

Asking users to opt out of AI training is a deceptive pattern. Governments and regulators must step in to enforce opt-in as the mandated international standard. In my opinion.

In May 2024, European users of Instagram and Facebook got a new system message informing them that all their public posts would be used for training AI starting June 26th. To exclude their content from this program, each user (and each business account) would have to actively opt out – a process that requires knowing where to go and what to do. Additionally, even if you do opt out, and even if you don’t have a Facebook account at all, Meta grants itself generous rights to use any content it can get its hands on for AI training. From their How Meta uses information for generative AI models and features page:

“Even if you don’t use our Products and services or have an account, we may still process information about you to develop and improve AI at Meta. For example, this could happen if you appear anywhere in an image shared on our Products or services by someone who does use them or if someone mentions information about you in posts or captions that they share on our Products and services.”

Bottom Trawling the Internet

Meta is not alone in this. The established standard for acquiring AI training data has been to scrape the internet for any publicly available data and use it as each AI company sees fit. And as with bottom trawling, the consequences for privacy, copyright, and the livelihoods of many creators are severe.

Historically, AI scraping has been done by default, without warning or even acknowledgement, often as part of general web scraping to support search indexes. As awareness of this practice has grown, some companies, like Automattic (Tumblr, etc.) and now Meta, offer opt-out features so users can exclude their content from AI scraping, though this often comes with direct consequences for visibility and functionality. My cynical hunch is that the platform companies are aware of the public pushback around these practices and are now covering themselves legally. My hope is that platforms offering an explicit opt-out option means they have realized the wholesale scraping of the web is ethically problematic and are at least trying to do something about it.

Here’s the thing: The opt-out is part of the problem!

Power and the Principle of Least Privilege

A few years ago I attended a conference where each attendee was given a choice to attach a black or red lanyard to their badges. Black meant the event had permission to take photos and videos of the attendee, red meant they did not. If you didn’t choose (or like me didn’t listen when it was explained) they gave you a red lanyard.

This is a real-world implementation of the Principle of Least Privilege: Photographers were only allowed to create images of people who gave explicit permission; the attendees who opted in.

At a different conference that same year I saw the reverse of this approach: Scattered around the venue were posters reading as follows:

“The [Conference] reserves the right to photograph any attendee for use in promotional materials. If you do not wish to be in the pictures, please notify the roaming photographers.”

Here, the attendees were opted in by default, and it was up to each attendee to actively opt out at each interaction with a photographer. Needless to say, this is not feasible, and as a result everyone at the conference either relented to having their pictures taken or left.

I think most will agree the first conference acted ethically towards the attendees, the second did not. In fact, the second conference experienced a major backlash after the event, and the following year they handed out “NO PHOTO” stickers for attendees to put on their badges if they so desired.

There are two important takeaways here:

First, when it’s a real-world situation, most people immediately see the ethical missteps of the second conference. And second, even so, most attendees stayed at the conference knowing they might be photographed against their will.

The conference created a power dynamic where people who didn’t want to be photographed were left with bad options: Constantly be on guard for photographers to tell them they did not want their picture taken, or leave the conference they paid and probably travelled to attend. It’s unethical, but it’s not explicitly illegal, and in the end it means they get more promo shots to use. So be it if some attendees are uncomfortable.

AI scraping and the current opt-out strategy fall squarely into the same category as the second conference. While the obvious ethical choice is to let people opt in to AI scraping, an opt-out option provides just enough cover to not get sued while ensuring broad access to content, because most users won’t go through the trouble of opting out – especially if you make the feature hard to find and hard to use.

My Content, My Choice

Platforms have long argued they can do what they will with user content. In fact, using user content to meet business needs is the economic basis for most platforms, and this is the bargain we’ve collectively agreed to.

Building on this premise, platforms and AI companies now want to extend this principle to AI training, claiming both that they have a right to use the data without explicit permission because it’s public, and that not being able to use it without explicit permission would make it impossible for them to operate at all.

I think it’s high time we questioned both these stances:

Letting platforms do what they wish with our content was always a Devil’s bargain, and we’re now acutely aware of how bad a deal it really was. The negative effects of surveillance capitalism, filter bubbles, and ad-driven online radicalization engines (née “recommendation algorithms”) are plain to see and play a significant part in the erosion of everything from privacy to democracy.

The claim that an entire business category can’t be competitive unless it has free access to raw materials is one we’ve heard before, and again we know the consequences. Bottom trawling and overfishing have depleted our oceans, pollution chokes our air and waters, and the exploitation of cheap labour in the global south keeps billions of people in chronic poverty. To say these are false equivalences is to ignore the reality of what we’re talking about. While the actual bits and bytes collected during an AI scrape are not a finite resource, the creative energy that went into creating them is. And the purpose of scraping data from any source is to train a machine to mimic and otherwise use that data in place of a human mind.

Opt-out is a slow poison because it puts choice just far enough away that it becomes out of reach for most people. It makes a choice on our behalf and then forces us to negate it. It’s exactly the opposite of how it should be.

The Choice is Ours

We are at the very beginning of a new era of technology, and we’re still figuring it all out. This means right now we have the power to make decisions, and the responsibility of making the right decisions.

This is the moment for us to learn from our mistakes with surveillance capitalism and take bold steps to build a more just and equitable world for everyone who interacts with technology.

One of the first, and most straightforward steps we can take right now is to make a simple regulation for all tech companies dealing with user data:

Users must opt-in to any change in how their data is handled.

And to protect users:

Choosing not to opt in must not impact the user experience of existing features.

This puts the onus on the AI companies to get consent when collecting data to train their models, and gives users agency to choose what if any AI training they want their data included in.

If I wanted to make a name for myself in the political realm, this is where I’d start: With a self-evident regulation protecting the rights of every person to own their own work.

We shall see.

Originally published in my newsletter.


Ten Questions for Matt Mullenweg Re: Data Ownership and AI

Dear Matt. 404 Media tells me you’re in the process of selling access to the data I’ve published on WordPress.com and Tumblr to the AI companies OpenAI and Midjourney, and that I have to actively opt out if I don’t want my data included.

I have ten questions for you regarding this that I think most of your users would also like to hear your answers to:

The Ten Questions

  1. Why do users have to opt out of sharing their data with AI companies? This assumes most users agree to share their content with AI services. What data and/or reasoning backs up this assumption?
  2. Who gets the revenues from this data sharing, and how much are those revenues? Specifically, are creators being compensated for their data being sold to a third party?
  3. Who decides on behalf of abandoned sites or sites whose creators are no longer with us? Not everyone has the capability of opting out. Who speaks on their behalf and protects their interests?
  4. How were the affected users consulted on this? And what was their feedback?
  5. What professionals were consulted, and what did they say? Did you consult with legal? Did you consult with your ethics officer (I’m assuming you have one)? Who else was involved in this decision?
  6. How does selling this data line up with your open source principle of users owning their own content? Who do you believe owns and has the right to profit from user content hosted on your platforms?
  7. Is the data being sold only from free sites, or does it also include sites the user pays you to host on your platform? If the latter, how do you justify “double-dipping” in the revenue stream?
  8. Why do you believe this is the right decision to make on behalf of your users? And how do you respond to those who say it is not?
  9. How did you pick these commercial AI companies over open source alternatives? Reporting indicates you are in talks with OpenAI and Midjourney. Neither is open source.
  10. Why should we trust you with our data going forward? See The Five Questions of Tony Benn.

Please respond with a post of your own and link it back here.

Best regards,



AI Coding Assistants Made Me Go Back to School

The introduction of graphing calculators didn’t remove the need to understand math; they removed the need to do rote math and elevated the need to know what to do with it.

AI coding assistants like GitHub Copilot make the highly specialized and until now in-demand skill of code writing cheap and accessible, shifting the skill demand from knowing how to write code to knowing what the code is for and how to evaluate its quality.

In other words, your AI coding assistant is only as good as your knowledge of software design and development.

To use AI coding assistants effectively, it’s imperative that you know not only what you want done, but how to do it and how not to do it.

AI coding assistants can quickly conjure up semantically correct code snippets based on their training data, but they have no knowledge or understanding of how to actually build software, let alone what the code is for.

They are spicy autocomplete for code. Nothing more, nothing less.

With an AI coding assistant at your fingertips, a coding task that previously would take hours to complete can be done in minutes. This has enormous implications for you as a software developer and for anyone getting into the industry today:

It’s no longer enough to learn to code or know how to code well. You need to know how to validate and optimize the code, how to ensure the code is up to the latest standards and best practices, and so on. But more importantly, you need to learn how to design the software you’re building and then provide the necessary instructions for the AI coding assistant to do the rote work of writing the code for you. It’s a massive step up in skill and scope, and it means many people in the industry need to retrain to avoid becoming obsolete.

I myself am not immune to this. Which is why I, a 20-year industry veteran and senior tech educator, have gone back to school to take basic Data Science classes at the University of British Columbia.

And it’s already paying off:

This past week I was tasked to build an AI-powered app for my colleagues. But before I could even get to the AI part I had to do a metric tonne of data processing. The project involved taking data from several different sources, conforming it to a consistent format, connecting the various data together to reflect their relationships, paring it all down leaving only the pieces I needed, converting it to tidy data, and then generating embeddings from the data.
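The pipeline described above can be sketched in pandas. Everything in this snippet – the source names, columns, and join key – is hypothetical, a minimal illustration of the conform/connect/pare-down/tidy steps rather than the actual project code:

```python
import pandas as pd

# Two hypothetical sources with inconsistent schemas
docs = pd.DataFrame({
    "doc_id": [1, 2],
    "Title ": ["Intro", "Guide"],        # note the stray whitespace in this header
    "body_text": ["hello world", "foo bar"],
})
meta = pd.DataFrame({
    "id": [1, 2],
    "author": ["ana", "ben"],
    "internal_notes": ["skip", "skip"],  # a column we don't need
})

# 1. Conform to a consistent format: normalize the column names
docs.columns = docs.columns.str.strip().str.lower().str.replace(" ", "_")
meta = meta.rename(columns={"id": "doc_id"})

# 2. Connect the sources to reflect their relationship
combined = docs.merge(meta, on="doc_id", how="inner")

# 3. Pare down to only the pieces we need
combined = combined[["doc_id", "title", "body_text", "author"]]

# 4. Reshape into tidy data: one row per (document, field) observation
tidy = combined.melt(id_vars="doc_id", var_name="field", value_name="value")

print(tidy)
```

Real-world versions of each step are messier, but the shape of the work – normalize, join, select, reshape – is the same.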

Six months ago I would not have known where to start or even that I needed to do any of this work. And more importantly for our current context, I would not know how to instruct my AI coding assistant to do these things, because I didn’t have the necessary language and understanding to formulate the correct prompts.

However, thanks to my newly acquired and rapidly developing skills I was able to work with my AI coding assistant to process the data and get to a point where the actual AI work could begin.

That “work with my AI coding assistant” was about 80% telling it what I wanted done, watching it get things fundamentally wrong or go off the rails, and iterating over multiple ideas to narrow down the scope until the assistant finally generated some meaningful code – then running the new code, moving on to the next step, and starting the process over again.

Which sounds like a colossal waste of time but was actually enormously helpful. The AI coding assistant saved me hours of work by eliminating the need to look up specific code syntax and functions and to hand-check edge cases. It also enabled me to rapidly experiment and iterate over different approaches, and to debug the code as I implemented it.

The process often went like this:

Me: I have [some code/data structure]. I want this [desired output]. Give me only the code.
AI: [Code]
Me: This outputs [incorrect output]
AI: Oh, sorry. My bad. Try this instead [new code]
Me: That works. Now I want to add [new thing]
AI: Here's the same code, with [new thing] added.
Me: This outputs [incorrect output]
AI: Oh, sorry. My bad. Try this instead [new code]

For this to be an effective process, two conditions need to be met:

  1. I need to know what I’m doing
  2. I need to know how to work with the AI system

Condition 1 is met by me upskilling to expand my understanding not only of coding languages (this project required advanced Python for data science) but also higher-level design and thinking about what I’m actually trying to do.

Condition 2 is met by me spending a lot of time working with AI systems to find their fences, cliffs, tracks to nowhere, and inherent biases. For example: If you use ChatGPT or a custom GPT to write large chunks of code and you keep iterating over the same code, the system will start outputting incorrect code. This happens because with every new prompt the system re-ingests its own old (often incorrect) code and starts repeating the bad patterns. To get around this problem, after a few cycles of iteration, start a new chat with the latest version of your code. Now the AI “memory” won’t get muddled by past mistakes.

From all of this, take away these three things:

  1. AI coding assistants are here to stay. Learn to use them or be replaced by someone who does.
  2. Invest in upskilling and re-skilling. Your job is now less about coding than managing someone (an AI coding assistant) who does the coding work at a … somewhat passable level.
  3. Learning to code is no longer enough: You also have to learn why to code and how to check the work of your AI coding assistant. That’s a whole new level of skills and expertise.

This is the future, already here in front of us. Join me in upskilling to push humanity and software development forward!


Is OpenAI imposing a Token Tax on Non-English Languages?

Is OpenAI imposing a token tax on non-English languages? The token count for international languages is significantly higher than for English. This is a serious accessibility and equity issue for the global majority who use languages other than English.

I did an experiment using OpenAI’s Tokenizer: Get the token count for the same text in different languages, adjusted for character count. Here are the results:

  • English: 105 tokens
  • Spanish: 137 tokens
  • French: 138 tokens
  • German: 138 tokens
  • Dutch: 144 tokens
  • Norwegian: 157 tokens
  • Hungarian: 164 tokens
  • Arabic: 286 tokens
For ChatGPT users this means international users will hit the token window limit faster, resulting in a higher risk of “hallucinations,” reduced response quality in longer conversations, and reduced ability to process larger volumes of text.

For OpenAI API users this also means significantly higher cost for every prompt and response as they are charged per token.

For English, the tokenizer counts most words as one token. For non-English languages, most words are counted as two or more tokens. This is likely because English dominates the training data GPT was built on, and results in an effective token tax on non-English languages, especially smaller and more complex languages.

As AI becomes an ever more present component of our lives and work, this token tax poses a significant equity problem disadvantaging the global majority and anyone not using English as their primary language.

My Opinion

The children on the ground

I remember my mom crying to the evening news. When I asked her why, she answered “the children.” The screen she was watching told the story of a young child, unaccompanied and unidentified, carrying an even younger sibling across a desert to escape a war. I was a child myself back then, and all I understood was how wrong this was – so wrong it made my mother cry.

Since then I’ve seen the image repeated again and again. The scenery changes, as do the children, but the story remains the same: Across deserts and fields, rivers and oceans, roads and railroad tracks, children walk towards the unknown because anywhere and anything is better than the atrocities they’ve witnessed and the violence their bodies have endured in what was once their home.

I remember terror in the eyes of a Romanian boy when the Norwegian army showed up to drop off sleeping bags for an upcoming island excursion at our summer camp. I think it was 1990. A delegation of four kids from Bucharest had joined 45 other 11-year-olds from around the world for a month of cross-cultural activities in Kristiansand, Norway. As the army truck pulled up, the Romanian kids shrunk into the background. Later I learned they thought they’d be taken away. “Why?” I asked. “Because some of their family members were taken by the military,” a camp leader explained. “And they never came back.”

I remember confusion when Yugoslavian refugee kids started arriving at our school. They came alone or in pairs, at random times, always without assistance. I was in 7th, or maybe 8th grade. They were airlifted from war and ethnic cleansing in their homeland to bucolic lethargy on a peninsula outside the frozen capital of Norway and sent to school knowing nothing of the local language or what had become of their families. The pull-down maps in our classroom were old. They showed a Germany split in two, a Yugoslavia still intact. A teacher told us to welcome our new friends and invite them to play. We tried. We didn’t know how. They lived in an asylum centre. They wore hand-me-down clothes from our own families. They were angry, and confused, and depressed. They were just kids, but they’d already endured events even adults can’t handle. Yet somehow they were expected to adapt: plug in and normalize. Some were there one day and then gone the next – sent off to other asylum centres in the country or, if they were lucky, reunited with their families in a proper new home. Others were sent back to their old homes, and a fate unknown.

I remember friends at university, home from UN peacekeeping missions in Lebanon, Somalia, and the former Yugoslavia, haunted by what they’d seen. All of them talked about the children. “So many of them were born into conflict,” one explained. “Their entire lives lived under the boot of war. They suffered from malnutrition, injuries, and treatable illnesses. They had no access to education or healthcare or even food. They would hang out outside our base looking for handouts. Food, money, toys, whatever we would give them. And then they died, from a stray bullet or a rocket or an infected cut or some easily curable disease. And if they somehow survived, they were picked up by the war machine and turned into weapons.”

I remember a child lying face down in the surf off the coast of Greece. “It’s terrible what is happening over there,” someone said in the mall food court in Vancouver, Canada, and their lunch companion responded “Come on! Nobody forced them! They chose this, and now they are paying for it. You don’t see me on a refugee boat crossing the Mediterranean!” That same week a high school friend texted me from another beach in Greece. She’d gone there on vacation and ended up staying on for weeks helping with the relief effort. “I have to leave,” she said. “I can’t watch another child die.”

Faced with the atrocities of the world our minds bring the curtains down on our empathy.

Not my kin. Not my war. Nothing I can do.

But they are our children; maybe not from flesh and blood or culture and creed or nationality, but from humanity. They did not choose to live through war; they were plunged into it by forces they have no control over and decisions they have no say in.

We owe it to ourselves to pull those curtains back up. There, as the saying goes, but for the grace of luck and good fortune, go we all. 

In my 45 years I have never felt war on my body. That is an extraordinary privilege not afforded to hundreds of millions of people around the world. And while I can’t resolve the conflicts of the world, I can lend a hand to those on the ground who are doing the work of making life livable for the people who have been displaced and turned refugees by conflict and war.

According to the Norwegian Refugee Council (NRC), at the end of 2022 there were 108 million people around the world displaced from their homes by conflict, violence or persecution – the highest figure ever recorded. By the end of 2023, that number will be significantly higher.

Today, I see my son’s face in every child fleeing from and victimized by conflict and war. Now, I understand why my mother wept. Every child hurt by conflict is our child hurt by conflict. Every child hurt by conflict is one too many. We can, and we must, do better: for the children, for their families, for ourselves. When a child is forced to flee a conflict, or is harmed or even killed by it, we have failed in our most basic duty as human beings and as a society: To care for those who can’t care for themselves.

To help children displaced by conflict and war, consider supporting one of these international relief organizations or a child-focused relief organization of your choice:


Do Humans Dream of Electric Minds? How language influences our thinking about AI

“One of our persisting challenges is excessive hallucinations.”

I’ll cut right to the quick: AI systems are nothing like human beings, but our language makes us think they are. That’s a problem, for us and for the AI systems we’re building.

When ChatGPT was introduced to the public in November 2022, people were baffled to discover that when you asked the system a question, it would often return text presenting information which appeared true at first blush but on further scrutiny was only partially true or even entirely fabricated. A barrage of news articles and social media commentary followed about how the AI systems were “lying” or intentionally “deceiving” us, suggesting these passive computer systems were acting out of some form of malice.

AI experts explained this is a well-known phenomenon called “hallucinations.” The term quickly took root in the public consciousness and provided a platform from which our common understanding of these technologies would grow:

“If a machine can hallucinate like me, it must have a mind like mine.”

A relatable metaphor can be a useful way of explaining something complex by referring to something similar and less complex. Metaphors are found throughout our everyday language: “You are an angel for doing this”, “she was on fire today,” “I am toast,” these are all nonsensical statements with real-world meanings easily understood by people with sufficient language skills and shared cultural and societal experience. 

Metaphor can also be a useful tool when explaining complex concepts without requiring the listener to understand the full complexity. When our son complains about his hands hurting after he’s been coloring for a long time and I tell him it’s because they are “tired,” he understands he needs to give them a “rest” without needing to understand the physiological causes of muscle fatigue. When a TV show is abruptly interrupted by an error message on the TV and I tell him it’s because our TV can’t talk to the streaming service, he understands this is a communication problem without needing to understand the intricacies of the HTTP protocol, DNS servers, or packet loss.

So when seemingly all-knowing AI systems inexplicably fabricate information, it’s easier to explain what happened through the metaphor of hallucination than it is to explain the inner workings of computer systems even the people who build them do not have a complete understanding of.

Throughout the history of the science of artificial intelligence, we’ve used metaphorical language rooted in human cognition and behaviour to explain how these systems operate. The term “artificial intelligence” is a metaphor describing systems whose capabilities go beyond traditional computer systems and are “smart” the way humans are smart. Saying AI systems “learn” about the world through “training” uses education metaphors to make simple the enormously complex machine learning algorithms and processes that go into building their models. Saying AIs have “knowledge,” “reasoning” capabilities, and the ability to “follow instructions” uses metaphor to explain their often surprising power. We use the metaphor of human communication and interaction when we tell people to “have conversations” with the systems and refer to them as individuals with human traits like attitudes and emotions.

When we use these metaphors to describe AI, people get enough of an understanding of what’s going on to be able to speak about these systems and see how they can fit into their lives and work without having to understand their technical underpinnings.

The problem is by using anthropomorphic language – metaphors referencing human traits – we construct an image in our minds of these systems being variants of ourselves: machines that are intelligent like us humans, that learn about the world through training like us humans, that hold knowledge and reason like us, and follow instructions like us, and have conversations like us. And when some of those systems use our own very human language as both input and output, our metaphors get validation in the real world and we start thinking of the machines and their software as living conscious agents even when we know they are not.

No wonder then when an AI outputs information that looks true but turns out to be a fabrication we continue the pattern and describe the machine as a liar. 

We could have, and probably should have, chosen to use more technical language for these machines, but in doing so we’d have missed out on the magic and the marketing. “This is the courtyard and Juliet is a human on the balcony” pales to the evocation of “This is the east and Juliet is the sun!” because the language we use colours and shapes our understanding of the world. So now that we’ve chosen to use human metaphors to describe systems of non-linear computing algorithms that process information and build network models, perform advanced retrieval from data graphs and calculate responses based on neural networks, take input and produce output in the form of tokens, and output statistically correct but sometimes fabricated token sequences that reproduce human language, we must always be on guard against the hallucinations our language conjures within us.

I fear in our attempt to make AI more understandable we have committed an unintentional act of self-deception. The metaphor of humanity rides too close to our dreams of machines built in our image, and our language makes us confuse those dreams with reality.

In the Age of AI, our biggest challenge may be overcoming our own excessive hallucinations.

Cross-posted to LinkedIn.


Transformer: Year One of the ChatGPT Era

“We’ve trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.” – OpenAI blog post “Introducing ChatGPT,” November 30, 2022

With these words, a transformation – from the time before to our time with the machines and systems we’ve chosen to call “Artificial Intelligence.”

ChatGPT, launched a year ago today as a “research preview” of our future, was our dreams and our nightmares made real: A computer we could talk to that seemed to talk back. A mirror and a marvel, at times indistinguishable from magic, brimming with peril and potential.

Born from the transformational transformer, the Generative Pre-Trained Transformer outfitted with a chat interface would prove to be the tickler our imaginations needed after three years of pandemic trauma and looming economic uncertainty.

Can I get this thing to do my work, my homework, my chores, can I outsource my thinking, make it my therapy? Can it be my friend, my companion, my lover, my master?

The fenced path toward our carefully designed future suddenly opened into an infinite opportunity space with us standing at its edge, in awe, uncertain and excited and afraid to take our first steps, take any steps, to go at all because where do you go when you can go anywhere?

Listen carefully in those days to the whispers in the halls of power and you’d find those who thought it had come too soon; this tool that broke the academic confines of AI and put it in the hands of everyone. They had not yet the time to forge our path towards their vision. No fences were ready to keep us in line. Our future with AI was suddenly here, unannounced and unbounded, with imaginary grandmothers telling children’s stories of napalm and search engines catfishing journalists and a democratization and lowering of barriers to almost every field of exploration and creativity and discovery.

“Just a perfect day, you made me forget myself. I thought I was someone else, someone good.” – Lou Reed, Perfect Day, Transformer

One year in and we’ve learned things; about ChatGPT; about the people who built it; about creativity and generation; about people and machines; about work and life; and about us, ourselves, the humans in the mix.

One year ago my colleague asked me to help the world move forward with this. “Show the world how to get this right, how we can get it right together” he said, or something like it. And I said to him “I’m glad it’s us.” And I am. Glad it’s us. Not only in the narrow sense of him and me and our friends who we have the honour and privilege of working with, but in the broadest sense of all. I’m glad we, the global community who share this moment in time, get to explore this opportunity space together, unbounded by prior plans and pre-determined directions.

With AI we have an opportunity to think anew about the things that occupy our lives: How we work, how we play, how we create, how we commune. We get to think again, or maybe for the first time, about our time and how we use it.

We get to ask questions, about the role of technology and the role of humanity, about what we value and how value is measured, about who gets a say as we build our future with AI, and who decides who gets a say. We get to ask questions about power and privilege and access and limitations and what is good and right for us, our communities, and our world.

The decisions we make and the steps we take into the possibility space of AI determine where we go next and where those who come after us get to go in their futures. The time before is gone. This time is something new, and we get to define it.

This transformer in our midst, this up-ender of every process of creation and alteration and organization and interaction, is something so new, so unknown we don’t yet know what it is for. And the way we use it – and its myriad of sibling cats, well and truly out of the bag and on their way into the world today – is the way we used the iPhone when it first arrived in our hands: as a phone with a flashlight app. The AIs brought forward and into our hands by ChatGPT are both tools for us to use and materials for us to work with. With new tools and materials in our practice we get to reflect on our old ways and find new ways – hopefully better ways – to do the old things and the new.

As I stand here, one year into the era of ChatGPT, what I see before me in the near future and several years from now is a period of ceaseless transformation. This year was the preparation; the slow organizing of exploratory teams, the uneven distribution of resources, the first furtive steps on untouched ground. What comes next is the journey into unexplored opportunity space. What we find there and how we use it will be up to us. And if we do it right, if we care for one another and help each other and build paths everyone can follow, it might – no – it will be amazing.

Hope is a catalyst. Build the future for everyone and for yourself.


Your No-Hype Guide to Everything OpenAI Announced at their DevDay

OpenAI – the creators of ChatGPT and current designers of our collective futures with AI – announced a metric tonne of new updates, features, and products at their inaugural DevDay this Monday. I was there, and here is my no-hype, no-nonsense, pragmatic guide to what was released and what it means for you.

GPT-4 Turbo: Upgraded model with upgraded speed, upgraded “knowledge,” and lower pricing

All software has version releases. So does GPT – the underlying foundation model for ChatGPT and OpenAI’s other language-based AI services. The latest version – GPT-4 Turbo – boasts:

  • faster speed. GPT-4 was notoriously slow compared to GPT-3.5 and GPT-3.5 Turbo. GPT-4 Turbo is reportedly significantly faster, meaning you don’t have to wait as long for responses. This is especially noticeable with their text-to-speech features.
  • bigger context window. GPT-4 Turbo has a 128,000-token context window vs GPT-4’s 8,000 – 32,000 tokens. This means you can now provide around 300 pages of text for the system to reference during your session without it losing track of what you’re talking about. LLM systems are language transformers, and the more context you provide, the better they are able to perform tasks. This has big implications which I’ll address in the sections on GPTs and the Assistant API below.
  • updated knowledge cutoff. GPT-3, 3.5, 3.5 Turbo, and 4 were all trained on data collected before September 2021. This meant if you asked them about something that happened at a later date, they would not be able to answer. GPT-4 Turbo’s knowledge cutoff is April 2023, and in the DevDay keynote OpenAI CEO Sam Altman said they will “try to never let it get that out of date again.”
  • lower cost. GPT-4 Turbo is 3x cheaper than GPT-4 for prompts, and 2x cheaper for completions. This is significant for developers who are building things with OpenAI’s API because every token costs money. This is a transparent play to get more developers to work with the platform. As for ChatGPT, the pricing stays the same, so the majority of users won’t see any pricing impact.
  • Other things: Multimodal by default in ChatGPT (Dall-E 3, web lookup, and code interpreter trigger automatically), invite-only fine-tuning for GPT-4, and increased rate limits for the API.
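The context-window and pricing claims above can be sanity-checked with a little arithmetic. A quick sketch, where the per-million-token prices are the figures announced at DevDay and the words-per-token and words-per-page ratios are rough rules of thumb, not exact values:

```python
# Back-of-the-envelope check of the context-window and pricing claims.
# The ratios below are common heuristics for English text, not exact values.

WORDS_PER_TOKEN = 0.75   # rough rule of thumb: 1 token is about 3/4 of a word
WORDS_PER_PAGE = 300     # a typical manuscript page

def tokens_to_pages(tokens: int) -> float:
    """Convert a token budget to an approximate page count."""
    return tokens * WORDS_PER_TOKEN / WORDS_PER_PAGE

print(tokens_to_pages(128_000))  # 320.0 – i.e. "around 300 pages"

# Announced prices in USD per million tokens:
GPT4_PROMPT_PER_M, GPT4_COMPLETION_PER_M = 30, 60
TURBO_PROMPT_PER_M, TURBO_COMPLETION_PER_M = 10, 30

print(GPT4_PROMPT_PER_M / TURBO_PROMPT_PER_M)          # 3.0 – prompts 3x cheaper
print(GPT4_COMPLETION_PER_M / TURBO_COMPLETION_PER_M)  # 2.0 – completions 2x cheaper
```

In other words, the 128,000-token window works out to roughly 320 manuscript pages, which is where the “around 300 pages” figure comes from.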

What this means for us

GPT-4 Turbo is the next version of GPT, and based on past release history we can expect either a GPT-4.5 Turbo or a GPT-5 in the relatively near future. The model provides incremental but obvious improvements, and a clear pattern emerges: base models improve performance; Turbo models improve speed, extend context windows, and lower costs. The real-world implications of this new model are significant:

  • ChatGPT will appear “smarter” and more “knowledgeable,” meaning people will be more inclined to think of these systems as “intelligent” and neutral arbiters of the truth. This continues to be a serious societal problem and will be amplified every time the models get upgraded.
  • Using GPT models for practical things got a lot easier. Context window limits have been a major issue for use cases including knowledge retrieval from large documents, summary writing, and more. The enormous context window of GPT-4 Turbo means students can use ChatGPT to summarize academic articles and entire textbooks, writers can use it to review entire chapters and even books, and data professionals can use it to parse much larger data sets.
  • More people will be using ChatGPT and GPT-based systems for more advanced things, leaning more on the mythologized “reasoning” within these systems to make decisions, and the systems will produce completions good enough to pass a cursory review, leading people to think they are doing good work. Education is necessary to help people understand why this is not the case.
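Even 128,000 tokens won’t hold every data set, so the standard workaround of chunking a document before sending it to the model remains relevant. A minimal sketch of that technique, using the rough one-token-per-four-characters heuristic (a real implementation would count tokens exactly with the model’s tokenizer):

```python
# Minimal sketch: split a large document into chunks that fit a model's
# context window, using the rough "1 token ~ 4 characters" heuristic.
# A real implementation would count tokens exactly (e.g. with a tokenizer).

def estimate_tokens(text: str) -> int:
    """Very rough token estimate for English text."""
    return max(1, len(text) // 4)

def chunk_document(paragraphs: list[str], budget_tokens: int) -> list[str]:
    """Greedily pack paragraphs into chunks that stay under the token budget."""
    chunks, current, used = [], [], 0
    for para in paragraphs:
        cost = estimate_tokens(para)
        if current and used + cost > budget_tokens:
            chunks.append("\n\n".join(current))
            current, used = [], 0
        current.append(para)
        used += cost
    if current:
        chunks.append("\n\n".join(current))
    return chunks

paras = ["word " * 200] * 50                       # stand-in for a long document
chunks = chunk_document(paras, budget_tokens=2_000)
print(len(chunks))                                 # the document now fits in a few calls
```

The bigger the window, the fewer chunks you need; with GPT-4 Turbo, many documents that previously required this kind of splitting now fit in a single call.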

GPTs: The first step towards GPT Agents and the sidelining of plugins

ChatGPT users can now create so-called GPTs – effectively tailored ChatGPT versions with custom instructions, expanded “knowledge”, and specialized actions. These GPTs are built from the ChatGPT interface and programmed using natural language meaning you don’t have to be a programmer to build them. This democratizes the creation of custom GPT agents and gives people new AI capabilities.

  • Each GPT has its own custom instructions – a large system prompt where you describe what the GPT is for, what it should do, and how it should behave.
  • You can upload “knowledge” to a GPT in the form of documents and other data and the GPT will refer to this knowledge in its completions. For example, you can upload a textbook as a PDF and tell the GPT to act like a teaching assistant and it can help you learn the content of the textbook, quiz you on important topics, provide summaries, etc.
  • Actions allow you to connect GPTs to external services and customize their interactions. For example you can connect a GPT to a weather API and instruct it on how to pull real-time data from that API for accurate reporting.
  • You can create private GPTs with any content you want.
  • You can share GPTs (when you do they go through a copyright check to make sure you’re not sharing content you don’t own the rights to).
  • Enterprise users can create enterprise-only GPTs to share within their orgs.
  • There will be a future GPT marketplace where you can buy and sell GPTs with profit sharing.
  • Currently GPTs are in beta, available only to ChatGPT Plus users, and being rolled out slowly. Unclear whether they will become available to non-paying users.
  • Some mentions were made about how actions could be associated with ChatGPT plugins, but reading between the lines the message is quite clear: Plugins are being silently sidelined in favour of GPTs.

What this means for us

GPTs will become the new primary way people use ChatGPT because they eliminate the need to state the purpose of your interaction with each chat. GPTs will also dramatically accelerate advanced use of ChatGPT because they bring down some significant barriers to entry:

  • The massive 128,000 token limit allows you to upload entire books as “knowledge” in a GPT meaning every student can and will create a GPT for every textbook they own and use it to supercharge their learning.
  • Sharing of GPTs means as people create new capabilities with ChatGPT they’re able to give those capabilities to others. This will be especially important for things like helpdesk, documentation search, and internal enterprise operations.
  • The plugins ecosystem is fading into irrelevancy, both because GPTs take over their role and because the release of GPTs undercut hundreds of well-funded startups and projects built around the plugins they created. For example, every “talk to your PDF” type plugin is now meaningless because GPTs do this by default.
  • OpenAI will have a nightmare task on their hands as they try to moderate the tsunami on top of an avalanche of GPTs people make and try to sell in their marketplace. Moderation will be key, and it will be enormously costly.

Assistants: The programmer’s path to agents

Along with GPTs (which belong in ChatGPT), OpenAI released the Assistant API, which provides the same family of functionality for programmers who build tools utilizing GPT services. With the Assistant API comes a bunch of features that make the work of every developer a lot easier:

  • Threaded conversations so you don’t have to keep track of every prompt/response pair in your own database
  • Invoke multiple functions at once with function calling
  • “Knowledge” retrieval from documents (low-key low-investment RAG for smaller documents)
  • A stateful API – because this is 2023, not 1996
  • API access to the code interpreter (and a future path towards custom code interpreters)
  • Future promises including a multimodal API, async operations, and support for web sockets and web hooks.
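To see why the threading and statefulness matter, here is a minimal sketch of the bookkeeping developers had to do themselves before this release: storing every message per conversation and resending the full history with each API call. The class and method names are illustrative, not part of any SDK:

```python
# Sketch of the bookkeeping a stateful, threaded API replaces: before it,
# developers stored every prompt/response pair themselves and shipped the
# whole history with each request. Names here are illustrative only.

class ConversationStore:
    """Keeps per-thread message history, the way apps had to before the
    API tracked conversation state server-side."""

    def __init__(self):
        self._threads: dict[str, list[dict]] = {}

    def append(self, thread_id: str, role: str, content: str) -> None:
        """Record one message in a conversation thread."""
        self._threads.setdefault(thread_id, []).append(
            {"role": role, "content": content}
        )

    def history(self, thread_id: str) -> list[dict]:
        """The full list that had to be resent with every API call."""
        return list(self._threads.get(thread_id, []))

store = ConversationStore()
store.append("support-42", "user", "My order never arrived.")
store.append("support-42", "assistant", "Sorry to hear that! What is the order number?")
store.append("support-42", "user", "Order #1234.")

print(len(store.history("support-42")))  # 3 messages resent on the next call
```

With threads handled server-side, all of this plumbing (and the database behind it) disappears from the application code.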

What this means for us

If you’ve built any application on top of the OpenAI API, chances are you now have to rebuild it. Many of the new features released (threading, multi-function calling, retrieval, statefulness) replace custom features developers were forced to build due to the lack of core support in the API. This will be enormously expensive and damaging to many projects, but it is necessary to move the entire space forward. The lack of these features in the original API was a deficiency, and their introduction is long overdue. One thing I didn’t see was any mention of proper authentication. The current key-based auth in the OpenAI API is sub-optimal at best and leaves developers having to rig their own security around their apps, which is… not great.

  • Building anything with the API is now way easier.
  • This is an aggressive play to onboard more developers, and OpenAI is clearly taking slow developer adoption seriously.
  • The importance of parallel function calling cannot be overstated – this is the path to a lot of advanced functionality.
  • Building extensions to OpenAI’s features remains risky as the API and underlying services evolve, so make sure you have room to rapidly iterate and change.
  • I expect we’ll see a continuation of this rapid evolution of API features for a long time, so stay nimble.
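The shape of parallel function calling can be sketched as a simple dispatcher: the model returns a batch of requested calls, and the application runs them all and pairs each result with its call id. The payload shapes and function names below are illustrative of the pattern, not OpenAI’s exact schema:

```python
import json

# Sketch of the parallel function-calling pattern: the model requests a
# batch of tool calls in one turn; the app dispatches all of them and
# returns the results. The payloads and functions here are illustrative.

def get_weather(city: str) -> str:
    return f"Sunny in {city}"   # stand-in for a real weather API

def get_time(city: str) -> str:
    return f"12:00 in {city}"   # stand-in for a real time lookup

TOOLS = {"get_weather": get_weather, "get_time": get_time}

def dispatch(tool_calls: list[dict]) -> list[dict]:
    """Run every requested function and pair each result with its call id."""
    results = []
    for call in tool_calls:
        fn = TOOLS[call["name"]]
        args = json.loads(call["arguments"])  # arguments arrive as a JSON string
        results.append({"id": call["id"], "output": fn(**args)})
    return results

# A single model turn requesting two functions at once:
calls = [
    {"id": "call_1", "name": "get_weather", "arguments": '{"city": "Oslo"}'},
    {"id": "call_2", "name": "get_time", "arguments": '{"city": "Oslo"}'},
]
print(dispatch(calls))
```

Before parallel calling, each of these would have cost a separate model round trip; batching them is what unlocks the more agent-like behaviour discussed above.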

In summary

OpenAI is rapidly evolving from a startup with an insanely popular experimental service to a full-fledged platform company with professional products on offer. The rapid evolution of their products shows no sign of slowing down, and I expect by next year’s DevDay what was released this week will appear quaint and old.

If you’re working with AI in any way, get used to the constant change and uncertainty, because they are going to keep accelerating for a while.

My Opinion

AI is a Loom: The End and the New Beginning of Web Dev

Web dev as we know it is deprecated. We just haven’t downloaded the latest version yet. What comes next is a metamorphosis, a paradigm shift, a revolution changing how we work, how the web works, and everything we know.

In March 2023, OpenAI President Greg Brockman drew a rough sketch of a website on a livestream, uploaded it to GPT-4, and got fully functional HTML, CSS, and JavaScript in return. That feature – dubbed “multimodal GPT” – is shipping to all ChatGPT users in the coming months. Soon you’ll be able to open an app, provide some instructions, and have a fully designed and populated website with real content, an ecommerce store, and a social media marketing campaign built in minutes with the help of AI.

“I’m not saying coding is a dying craft; I’m saying the craft of actually writing code on a keyboard to be input into a coding environment for further processing will become a narrow specialty. Most people who write code today will not be writing code in the near future. Many of them don’t need to and shouldn’t have to write code today! Writing code instructing computers to do things when we have software that writes better code for us makes no sense, and this whole notion of everyone learning how to code feels more and more like telling high school students their future employment hinges on their ability to master T9 texting.”

Me, in an email dated November 2021

Web development stands on the precipice of an AI-driven metamorphosis. In front of us, the demarcation line between two paradigms: The old world of human-written code aided by machines, and the new world of AI-generated code aided by humans. 

For the web, AI is the Jacquard loom. 

For most developers, this means transitioning from writing and debugging code to directing AI what to build and checking its work – from hand-coding the fabric of the web to using that fabric as material for building new experiences.

The implications are enormous, not just for our work but for the web’s future. As AI becomes part of our practice, our role shifts from writing standards-based, accessible, progressively-enhanced code to ensuring AIs use the latest, most advanced standards to build the future. If we don’t embrace this new role, progress will stall as AI defaults to established patterns and ignores new tools and best practices.

Here’s what I see:

Very soon the public will access AI services that create websites in minutes from prompts, sketches and assets. Wix teased this, and competitors aren’t far behind.

I’d be shocked if Canva and Figma don’t unveil full “draw it, AI builds it” services by year’s end. Soon there will be ChatGPT plugins that build websites for you from scratch. This is inevitable.

When I say this out loud, the immediate response is usually some version of “AI can’t write good code” or “AI doesn’t understand users” or “AI makes constant mistakes.” All true, and all irrelevant. This isn’t about AI writing code or autocorrecting our code. AI will instead take the well-documented and well-established frameworks, templates, build processes, and automation we’ve created to make our work easier, and weave them together into the websites and apps we ask for.

For walled gardens like Wix, this is straightforward: their back-ends, systems, and design languages allow AI to rapidly wire sites to user specifications. And that’s just the start. We’ll soon see new semi-agentive tools supporting various stacks, so you (with the help of an AI) can select frameworks, design systems, ecommerce platforms, etc. based on project needs without writing or knowing how to write code.

Look at what the people over at Builder are doing, then add an agentive AI on top and you start getting the idea.

What People Want, What Automation Provides

Two massive waves of progress are converging:

Developers have spent a decade building automation tools, frameworks, and systems to improve dev experience. You can now spin up a Next.js site in GitHub Codespaces in minutes without writing a single line of code. Component-based frameworks provide code consistency and add LEGO-like block assembly to web development. Design systems, component libraries, style guides, and tokens enable rapid prototyping and consistent UX/UI. Linting, optimization, accessibility, testing, CI/CD are largely automated. Bridging layout and code is reality. Often, we just connect these pieces. AI serves well as an automated and intelligent loom weaving these pieces together into workable materials.

On the user side, people want friction-free, no-maintenance, always-on experiences. Faced with the choice between the DIY bazaar of the open web and the shiny mall of app-based walled gardens, they pick the moving sidewalk of least resistance. And they are willing to pay for that convenience; with money and by giving up their data and privacy to surveillance capitalism. Where publishing on the web used to mean standing up a WordPress site (or paying someone else to do it), today bloggers, creators, influencers, and small businesses opt for TikTok, Instagram, YouTube, Medium, Substack, Shopify, and Linktree. 

The web we lost is a bygone web a larger and larger portion of the public never experienced, and concepts like self-hosting seem archaic and inefficient to the masses. Counterintuitively, AI may help bridge this gap and reignite interest in carving out your own space on the web by lowering the barrier to entry to simply describing what you want and watching it manifest.

What is pushed down as these waves converge and elevate the capabilities of the web-using public is the need for traditional developers. When the options are either an AI site from Wix built from a prompt in minutes or a complex and expensive custom build that takes months to complete, there’s no choice for most people and businesses. When the Jacquard machine automated weaving, hand-woven textiles transitioned from an essential commodity to a luxury art form, and the expertise of manual weaving morphed from a commodity skill into an artistic pursuit. Weavers still exist, and bespoke fabrics are still made, but the vast majority of textile products are made by machines guided by humans who spend their time designing the products instead of making the materials. That’s what comes next for the web.

AI Creates Opportunity Space

This may sound like AI replacing humans. It’s not. Instead it’s a fundamental shift and refocusing of the role of the developer: From writing code to auditing AI-written code. From knowing how to wire together different frameworks to architecting the system that serves up the website. From fighting with CSS to fine-tuning AI-powered user experiences. 

The people currently working as coders will take a step up the ladder to focus on higher-order capabilities, using their expertise in languages and frameworks to help AIs produce the best output instead of doing the grunt work of writing the code. 

Web dev as we know it is dead. What comes next is a metamorphosis, a paradigm shift, a revolution changing everything we know.

Our new human focus as we move into this future together is to ease the persistent tensions found in the intersection between technology and humanity. AI can’t conduct UX research, design novel experiences, or innovate standards and best practices. That was always and will remain our territory. As AI takes over the work of weaving the fabric of the web, we do the work of making new things with those materials while improving their quality and inventing new ones.

In the short term, we’ll become AI managers – customizing, configuring, ensuring user flows and information architectures make sense, monitoring the generated code to ensure the latest standards are in use, and counteracting the inherent bias of AI to repeat prevalent patterns even when they are outdated. We’ll shift from writing code to deciding what the code should accomplish. To do that, we must all become experts at the bleeding edge of code, and invest our time in innovating new standards, patterns, and frameworks for the AIs to use. It’s a whole different job needing a whole new version of the skills we’ve always had.

This transformation is happening now. For consumers and SMBs, it will be lightning fast. For institutions and large enterprises it will be slower, hindered by legacy systems, institutional inertia, and resistance to change. But it’s coming. 

For web workers, it is no longer enough to know the core languages and established best practices. UX, interaction design, accessibility, and innovation are our new bread and butter, built on a strong foundation of modern web standards and bleeding-edge HTML, CSS, and JavaScript.

The future of the web belongs to those who strategically apply AI to meet user needs. With proper guidance, AI can supercharge our work, provided we put ethics, accessibility, user experience, and innovation front and center.

We build the future with every decision we make. How we decide to work with AI decides what future we get to live in.

Cross-posted to LinkedIn and


“Ice Cream So Good” and the Behavioural Conditioning of Creators

If you’ve been on TikTok or Instagram over the past few months, chances are you’ve come across creators exclaiming “yes, yes, yes, mmm, ice cream so good” while moving in repetitive patterns akin to video game characters. There’s also a good chance you’ve thought to yourself “This is ridiculous! I would never do something like that” even though you and I and everyone else perform the same type of alchemic incantations to please the algorithmic gods of the attention economy on a daily basis.

Every time we use a hashtag or think about the SEO of a piece of content or create a post to match a trend or ask our viewers to “hit that bell and remember to like and subscribe,” we are acting on the behavioural conditioning social media and other platforms expose us to, changing our behaviour to get our meagre slice of the attention pie (and maybe some money to boot.) Look no further than YouTube where for every type of content there is an established style and creators mimic each other so closely it’s becoming hard to tell them apart.

The only substantive difference between optimizing your article title for SEO and exclaiming “ice cream so good” when someone sends you a sticker on TikTok live is the latter act comes with a guarantee of financial return.

“Yes, yes, yes, gang gang, ice cream so good”

Dubbed “NPC streaming,” the latest trend on TikTok is being met with equal parts astonishment, concern, and mimicry. The core idea is simple: TikTok has a feature where creators can host live events. During those live events, viewers can buy tokens in the form of stickers, animations, and filters they can send to the creator in real time. The creator in turn gets a tiny percentage of the profits from the sticker or animation or filter being used.

In other words, the more viewers a creator gets, and the more incentive they give those viewers to send them stickers and animations and filters, the more money the creator (and the platform) gets. Crafty creators have figured out the easiest way to get people to send them these digital tokens is by responding directly to them. Thus if you send an ice cream sticker, PinkyDoll will smile and say “mmmm, ice cream so good.”

Creating live content encouraging users to send stickers is nothing new. A few years ago I saw a live stream of a man who pretended to try to have a serious conversation about something while getting more and more outraged as people applied ridiculous filters to his face. The recent invention of NPC streaming characters is the refined distillate of this insight:

Forget about content – the easiest way for creators to earn money is by letting people control them directly through payment.

Based on recent reporting, the most successful NPC Streamers can earn thousands of dollars per day doing this work. TikTok takes a reported 50% of their profits, so this trend is enormously lucrative for the platform even when the creators themselves don’t earn all that much.

Please Please Me Like I Please You

In a recent article titled “Operant Conditioning in Generative AI Image Creation,” UX pioneer Jakob Nielsen makes the following observation:

“Generative AI for images torments users with alternating emotions of euphoria and anguish as it metes out sublime or disastrous pictures with wanton unpredictability. This makes users feel like the rat in an operant conditioning experiment, entrapping them in a ceaseless pursuit of rewards amidst sporadic punishments.”

Replace “Generative AI for images” with “monetization schemes on social media platforms” and the observation rings just as true:

From SEO to NPC Streaming, the opaque and ever-changing algorithms trickling out a tiny share of the enormous profits social media platforms make off their creators are giant (hopefully) accidental operant conditioning experiments demonstrating just how far we humans are willing to go in our pursuit of a promised reward.

Social media monetization is exploitationware (aka “gamification”) in its purest form: Creators are placed in an environment where if they stroke the algorithm the exact right way at the exact right time, there may or may not be a payout at the end. Like a rigged slot machine, most creators get close enough to see the big win, but never quite close enough to grab it. Like a casino the platforms promote a select few creators who actually hit the jackpot, making everyone else feel like if they just try one more time, they might win as well. And like every subject in an effective operant conditioning system, we alter and conform our behaviour to the conditions of the system in a never ending chase to get that dopamine fix of cracking the code and getting our reward.

In the book “The Willpower Instinct,” author Kelly McGonigal describes how this exploit of our reward system works:

“When dopamine hijacks your attention, the mind becomes fixated on obtaining or repeating whatever triggered it. (…) Evolution doesn’t give a damn about happiness itself, but will use the promise of happiness to keep us struggling to stay alive. And so the promise of happiness–not the direct experience of happiness–is the brain’s strategy to keep you hunting, gathering, working, and wooing. (…) When we add the instant gratification of modern technology to this primitive motivation system, we end up with dopamine-delivery devices that are damn near impossible to put down.”

That’s creator platform monetization: A dopamine-delivery system encouraging creators to seek happiness in cracking the code, gaming the system, and chasing the promise of happiness in the form of a paycheck.

TV-shaped eyes

Growing up in the 1980s there was much talk among the adults about their kids developing “TV-shaped eyes” from watching too many cartoons. Never mind that in Norway in the 1980s there was only one channel, and it aired one hour of children’s programming per day, at 6pm, right before the evening news.

The underlying concern was prescient though: Our media consumption not only consumes our time and attention; it alters our behaviour in significant ways. Social media platforms have taken this to the ultimate extreme through their incentive-based monetization systems, and we are all paying the price for it.

SEO is about gaming the ever-changing search engine algorithms to get higher ranking. NPC streaming is about gaming the TikTok monetization system to get as much money out of it as possible. If it was easy, if the platforms shared their profits automatically with every creator, the dopamine incentive of the game would go away and we would stop posting and shareholder profits would tank. So instead we get the attention economy and its latest most pure incarnation: The NPC Streamer.

Breaking the cage

The engine driving the NPC Streaming trend (and every other trend on creator platforms) is monetization, and the monetization models they use are fundamentally inequitable to both creators and passive users. Rather than paying creators their fair share of platform profits, platforms use the gamification of payments as behavioural conditioning to get creators to make content that encourages other users to consume more content and pay money into the system. What we need is something else, something more in the shape of a monetization system that pays creators for the quality of their content and the value and utility people derive from it.

What got us here won’t get us anything but fake ice cream. I welcome your ideas about how we break this cage and build a better online future for us all.

Cross-posted to LinkedIn.


The Zeroth Law of AI Implementation

“An AI may not be used to harm humanity, or, by not being used, allow humanity to come to harm.”

As Artificial Intelligence systems (AI) like #ChatGPT enter into our lives and our work, we need some basic guidelines for how we implement them going forward. Here’s a place to start:

The Zeroth Law of AI Implementation:

An AI may not be used to harm humanity, or, by not being used, allow humanity to come to harm. Implement AI in such a way that you treat humanity, whether in your own person or in the person of any other, never merely as a means to an end, but always at the same time as an end, and treat AI always as a means to an end and never as an end in itself.

The sufficiently esoteric sci-fi and philosophy reader will note these are rewrites and recontextualizations of Isaac Asimov’s Zeroth Law of Robotics and the second formulation of Kant’s Categorical Imperative.

The Breakdown

My proposed Zeroth Law of AI Implementation aims to ground us in a shared purpose and vision for how we build our future with AI. It sets forth four basic principles:

  1. Do No Harm* with AI.
  2. Harm can be caused by having a tool and refusing to use it or otherwise limiting its use. For example, harm can be caused by limiting access or capability based on factors including socio-economic status, geography, disability, etc.
  3. Humans are always ends in themselves, and must never be considered only means to an end (see “The Age of Surveillance Capitalism“).
  4. AIs are always means to (human) ends, and must never be ends in themselves.

* We need a clear definition of what “harm humanity” means, including 1) who gets to determine what constitutes harm, 2) who can be harmed, and 3) who adjudicates whether harm has been caused.

The Reason

The goal of technology is to build futures where humans have the capabilities to be and do what they have reason to value. AI technology presents an infinite possibility space for us to make that happen. AI technology also has the potential to limit our own capabilities and cause real harm.

These statements seem self-evident, yet when technology is developed and implemented, we often forget its core purpose as we are blinded by the capabilities of the technology (it becomes an end in itself), by the power it affords us (those with access gain powers not afforded to others), and by the financial opportunities it affords us (using technology to turn humans into means for financial ends).

Grounding ourselves in a common principle like this proposed Zeroth Law of AI Implementation reminds us of the core purpose of technological advancement. It gives us a reference point we can use to evaluate and challenge our decisions, and a shared foundational platform we can build further principles on.


Right now, at the start of our future with AI, we have a unique opportunity to talk about where we want to go next and how we want to get there. That conversation starts with talking about core principles. The Zeroth Law of AI Implementation is my contribution to this conversation. I’d love to hear yours!

Cross-posted to LinkedIn.

Book Reviews

Book Review: How To Be Perfect by Michael Schur

4 1/2 of 5

While the book doesn’t teach you how to be perfect, you’ll be a better person for reading it.

If ever I teach an intro to moral philosophy class, this book will be prerequisite reading. Sold as a fun book about ethics from the creator of the TV show “The Good Place,” this is actually a solid introduction to the academic subject of ethics, sprinkled with humour and real-life anecdotes to make it relatable.

“How To Be Perfect” is a semi-biographical story about a TV writer who goes on a journey through moral philosophy to try to figure out how to be a better person. And maybe more importantly how to teach his young children how to be the best they can be. Not to spoil anything, but at the end of the book there’s an entire section where the author talks to his kids about how to be good people, and it is wonderful. 

The book introduces a variety of branches of moral philosophy with questions like “Should I lie and tell my friend I like her ugly shirt?” and “Do I have to return my shopping cart to the shopping cart rack thingy?” and “Should I punch my friend in the face for no reason?” And this is where the book truly shines: It succeeds at framing real moral problems in a comedic yet relatable way and introducing ethics to people in a way that actually makes practical sense to them.

Something we all need more of.

I suggested “How To Be Perfect” to my design ethics book club as a light read for the holidays. Two chapters in I dreaded the comments I’d get from my friends. “Light read? I bet Kant would have some opinions on passing off a textbook as an enjoyable holiday treat!” Then I continued reading and realized I’d sold my friends and the book short.

“How To Be Perfect” is an imperfect but damn fine effort at making the exceptionally challenging and often mind-numbingly turgid topic of ethics and moral philosophy fun and engaging. If you’re interested in ethics at all, and you’ve wondered where to start or worried it would be either too boring or too depressing, I recommend this book. In fact I recommend this book, period. And I’m not just saying that because I am a philosopher by education and deeply fascinated by ethics.

This book sets out to do something moral philosophy sorely needs: Make ethics make sense, in a human and relatable way. Moral philosophy has a bad tendency of being at the same time overbearingly moralistic (“here’s how you’re doing everything wrong in your life, and here are some impossible standards you must follow to right yourself!”), philosophically partisan (“my form of ethics, in my specific interpretation, is the only real ethics. All other ethics are wrong!”), and fundamentally unrelatable (“Act only according to that maxim whereby you can, at the same time, will that it should become a universal law.”) Michael Schur tries (and mostly succeeds) at balancing on a knife’s edge: staying true to the academic foundations of moral philosophy while also framing the many theories covered in real-world scenarios, funny anecdotes, personal experiences, and a heavy helping of yelling through a bullhorn at the ivory towers of academic philosophy.

This last point is probably best exemplified in the chapter on charity, where Schur points out how moral philosophers of different traditions will contort themselves into a Gordian knot over the moral failings of massively wealthy people using charitable giving as a self-congratulatory popularity contest, while in the real world the money they raise actually does some good.

Schur also does something extraordinary in the book: He tries (and I sincerely hope he succeeds!) to introduce a new term both to philosophy and to our common language: “Moral Exhaustion.” Let me quote from the book:

“even if we scale the triple-peaked mountain of Daily Stress, Serious Problems, and Circumstance, and (running on 5 percent battery power) try our very best to do the harder/better thing, we often fail miserably despite our best intentions. It. Is. Exhausting.”

Michael Schur, How To Be Perfect

I think moral exhaustion is a great description of the malaise we are all feeling in our lives and our work today, and I’m now using the term freely in my everyday language thanks to this book.

One major problem with moral philosophy (aka ethics) – and I say this as someone who studied moral philosophy for years at university – is its detachment from the real world and its separation into distinct traditions. You are either a Utilitarian or a Deontologist, a Virtue ethicist or a Contractualist, and whatever position you hold, you must defend your tradition against the others. (I am oversimplifying here, but this is a real struggle. Call it trauma from years of being an analytical philosopher taught by a faculty almost entirely composed of Kantians.) Throughout the book, Schur attempts to line up these and other moral philosophy traditions and theories and thread a needle straight through them to show that rather than treat ethics as One Theory to Rule Them All, we are best served by an Everything, Everywhere, All At Once approach to our decision making.

As an introduction to ethics and moral philosophy, “How To Be Perfect” does a good job introducing the main branches of western philosophy (Virtue, Duty, and Consequentialist ethics), newer traditions like Contractualism, and even non-western traditions including Ubuntu and Buddhist ideas. This breadth stems from the impressive research Schur did while writing the TV comedy show “The Good Place,” which in reality is a covert psy-op to secretly educate people about ethics by making ethics fun.

Side note: Watching “The Good Place” I would typically at least once in every episode jump up and yell “ARE YOU KIDDING ME?!?!?!? They are doing a WHOLE EPISODE on [insert obscure moral philosophy thing]???!?!?!” To which my wife of endless patience would say “Sit down and watch the show.” Point being that show was astounding and if you haven’t watched it, I cannot recommend it enough. Because it is hilarious. And well written. And exceptionally acted. And also, it contextualizes ethics in a way that just makes sense.

Another side note: I recommend getting the audiobook version of this book. It is narrated by the author and the entire leading cast of “The Good Place,” with snarky footnotes from the book’s academic advisor Todd May and even occasional cameos.

How is “How To Be Perfect” not perfect? In brutal honesty I’ll say it reads like what it is: An introduction to moral philosophy written by someone who is at an introductory level in moral philosophy. Schur finds fascination in the typical places: The vileness and eye-watering absurdity of Ayn Rand’s Objectivism, the spectacle of Jeremy Bentham’s posthumous existence as a cadaver on display at a random university (content warning on that link), the turgidness of Immanuel Kant’s writings, etc. We’ve all been there.

In the same vein, in my opinion he makes two significant blunders – one historical and one a lack of foresight:

He writes off Heidegger’s works due to their impenetrability and his much-discussed association with Nazism, ignoring the enormous impact Heidegger had on moral and other philosophy. As one of the members of my book club said, “I wish he (Schur) would go beyond just hints and snarky remarks to actually explain why he sidesteps Heidegger. I felt like he was making excuses for not reading the work.”

Schur also spends a fair bit of time towards the end of the book celebrating the works of Peter Singer and the effective altruism movement he helped inspire. Anyone paying attention to the collapse of crypto and the bizarre politics driving many Silicon Valley founders will know Singer’s ideals have become a breeding ground for … let’s call them problematic ideas from white men of enormous wealth and power about how we should structure and organize our society today to protect the people of tomorrow. I can’t help but think that had “How To Be Perfect” been written after 2022, that entire section of the book would have been very different. So in honesty my critique on this point is a perfect example of an anachronism.

Let me be perfectly clear here: I consider these issues minor to the point of being irrelevant. This book is not an academic textbook, it’s a deeply personal book about morals and ethics that tries to do right by the subject matter and the reader and succeeds more than any similar book I’ve ever read.

Final thoughts

If you’re still with me at this point, you’re definitely the type of person who will enjoy this book, so go out and get it in whatever format you prefer. If on the other hand you are looking for a book to give to your friend who refuses to return their shopping cart to the shopping cart shed thingy, or to subtly tell your family member that it’s not OK to tell people their shirt is ugly even if it is, chances are it’ll be a nice decoration on a shelf and will eventually end up in a donation box. “How To Be Perfect” is not light reading for an airplane ride, in spite of how it’s marketed. It is so much more, and because of this it demands much more from the reader. Just like real life demands so much more from us all. And that’s why this book is worth reading.

Cross-posted to LinkedIn.