Stepping Into the Future: Pair Programming with AI

If we do this right, AI can make our jobs and our lives easier and give us time back to do the things we have reason to value. Pair programming with AI serves as a practical example.

With the realization of AI’s power comes well-justified concerns about how AIs will figure into our lives – and in particular our work. Look to any media outlet and you’ll find a dense fog of articles, videos, podcasts, and think pieces about whether, when, and how AIs will take people’s jobs, and whose jobs are most at risk right now.

In this darkness, let me put up a bright beacon on the horizon of possibility and give you a glimpse of what a future of human-AI collaboration can look like.

Explain it to me

You bump up against a problem at work: an Excel formula you’ve forgotten, an inscrutable data processing script written by people no longer on the team, the right way to invoke a particular JavaScript function while being mindful of state. These situations are common, and they consume significant time and cognitive resources. They are also what I call “robot work,” as in the kind of repetitive rote work you can imagine a robot doing. 

Now imagine having a skilled co-worker on call, at all times, ready to help you find and explain the right formula, document that inscrutable script, and refactor or even build from scratch that JavaScript function you need.

That’s what AI can be for us: Just-In-Time assistants for all the tedious, time-consuming, and rote robot work taking up our valuable time and cognitive capacity.

If you’re a developer, you can experience this future today via various AI integrations including GitHub Copilot and ChatGPT.

GitHub Copilot Labs panel in VS Code.

GitHub Copilot, coupled with the new GitHub Copilot Labs extension in VS Code, gives you a pair programming assistant right in your development environment. Highlight any block of code and, in the Copilot Labs panel, you can ask for an explanation of the code, have it translated into another (applicable) programming language, or apply a series of “brushes” to it: making the code more readable, adding types, cleaning, chunking, even documenting. You can also use Copilot to write and run tests on your code.

A myriad of ChatGPT extensions, including Ali Gençay’s ChatGPT for VS Code, do much the same via a slightly different route. Authenticate the extension with OpenAI’s ChatGPT API, highlight any code, and you can ask ChatGPT to add tests, find bugs, optimize, explain, and add comments automatically. You also get the ability to start a full chat with ChatGPT in a dedicated panel right inside the editor, where you can talk to the AI in more detail about whatever you want.

Features from the ChatGPT VS Code extension.

Time and Energy

This past week I’ve been working on a large project involving code written by someone else. The JavaScript of the project has been co-located in two giant files, and is an unruly mix of carbon copies of standard components and functions, modified code copied from documentation sites, and custom code. As is often the case, the documentation is lacking where it’s most needed.

For my project, I need to refactor (rewrite code to do the same thing in a different way) large parts of the code and add extensive documentation along the way. That means figuring out how the code works, adding comments to clearly document and explain it, reorganizing the code so it can be split into components, cleaning up and updating the code to modern standards, and finally splitting it up into discrete modules. This is robot work: rote, repetitive tasks consuming significant time and cognitive energy better spent focusing on what the code is for: serving as an example for a course on JavaScript build processes.
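
To give a flavor of what this kind of modernization looks like, here is a hypothetical before/after sketch — illustrative only, not the project’s actual code:

```javascript
// Hypothetical before/after sketch of this kind of refactor;
// illustrative only, not the project's actual code.

// Before: pre-ES6 style, a function attached to a shared object
// somewhere in a giant co-located file.
var legacyGreeter = {
  greet: function (name) {
    return 'Hello, ' + name + '!';
  }
};

// After: ES6+ style with const, an arrow function, and a template
// literal, ready to be split out into its own module (e.g. `export { greet }`).
const greet = (name) => `Hello, ${name}!`;
```

The behavior stays identical; only the style and structure change, which is exactly what makes this rote robot work.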

In the past, I’d end up investing several days doing this work. Now, with the help of AI, the workload is significantly reduced. This, I think, is an instructive example of how our future with AI can unfold: with AI assisting us as we do our work rather than taking over that work.

Refactor this

Both GitHub Copilot and ChatGPT are conversational AIs: you chat with them as you would with a person. You can ask questions, give instructions, and ask them to perform tasks for you. Using AIs as pair programmers, you do a combination of all of this and more.

If you’re using the VS Code extensions mentioned above, they are already set up for the correct context. In the case of ChatGPT you can also use it as a stand-alone pair-programmer with some basic setup:

To start, set the stage (literally) by instructing the AI on how you want it to behave. In a new chat, provide a prompt similar to this:

“You are a pair programmer specializing in JavaScript. You help refactor code to bring it up to ES6 standards. Is that understood?”

Setting the stage for ChatGPT to provide useful feedback.

The format here is “you are a [some role]. You [perform some skill/action]. Is that understood?” The last question gives the AI an opportunity to state how it is setting itself up based on your instructions and gives you an opportunity to provide further instructions. In my case ChatGPT responded as follows:

“Yes, that is understood. As a language model, I am able to understand and provide assistance with various programming languages, including JavaScript and its latest version, ES6. I can help refactor code to bring it up to current standards, as well as provide guidance on best practices for writing maintainable, efficient code.”
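
If you’re scripting this setup yourself rather than going through an extension, the same stage-setting instruction maps naturally onto the “system” role in OpenAI’s chat API. A minimal sketch — the model name and the exact prompt phrasing are assumptions; check OpenAI’s current documentation:

```javascript
// Sketch: building a request body for OpenAI's chat completions API.
// The stage-setting prompt becomes the 'system' message; the code you
// want help with goes in a 'user' message. Model name is an assumption.
const buildPairProgrammerRequest = (userCode) => ({
  model: 'gpt-3.5-turbo',
  messages: [
    {
      role: 'system',
      content:
        'You are a pair programmer specializing in JavaScript. ' +
        'You help refactor code to bring it up to ES6 standards.',
    },
    {
      role: 'user',
      content: `Find any errors in this code and resolve them:\n${userCode}`,
    },
  ],
});

// The resulting object would be POSTed (JSON-encoded, with an
// Authorization header) to the chat completions endpoint.
```

The extensions mentioned above handle this plumbing for you; the point is that “setting the stage” is not magic, just a persistent instruction sent ahead of your actual question.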

From here you can, for example, paste in a JavaScript function and ask the AI to help find an error and resolve it:

ChatGPT finds and explains issues in JavaScript.

In response, ChatGPT provides an explanation of the errors it discovered, prototype examples of solutions to the issues, and finally a full refactoring of the pasted code with the issues resolved.
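
To make that concrete, here is a hypothetical exchange of the kind described — not the code from my actual session: a function with a classic assignment-instead-of-comparison bug, and the sort of refactored version the AI hands back.

```javascript
// Hypothetical example of this kind of exchange; not my actual session code.

// The buggy version you might paste in: `=` assigns instead of comparing,
// so every user is marked active and the input objects are mutated.
function findActiveBuggy(users) {
  var active = [];
  for (var i = 0; i < users.length; i++) {
    if (users[i].isActive = true) { // bug: assignment, not comparison
      active.push(users[i]);
    }
  }
  return active;
}

// The kind of refactor ChatGPT hands back: strict comparison via an
// ES6 arrow function and Array.prototype.filter, with no mutation.
const findActive = (users) => users.filter((user) => user.isActive === true);
```

Along with the fix comes the explanation — that `=` assigns where `===` compares — which is where the teaching value lies.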

This kind of contextual response not only helps solve immediate problems, but also teaches you what’s wrong and how to fix it.

This is invaluable for people learning to code and for people working with code in any capacity, which is why I’d strongly discourage any teacher or manager currently trying to figure out how to block people from using AIs in their work. AIs reduce the need for Googling or looking up code examples on documentation sites, coding forums, and open source repositories. Instead, they give you contextual explanations and references related to your specific code, and even help you with refactoring. This is the future of work, and it gives us more capabilities as workers.

  • Have some code you can’t make heads or tails of? AI can explain what it does. Computers are much better at parsing logic-based languages than humans, and conversational AIs like ChatGPT are specifically constructed to output human-sounding language, making them ideal tools for deciphering complex code for human consumption.
  • Have some code in need of documentation? AI can write a function description, inline comments, or whatever you prefer based on your instructions.
  • Need to refactor based on specific parameters? AI can get you started.
  • I could go on but I think you get the idea.
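
As an illustration of the documentation point — a sketch I wrote for this article, not real AI output — asking for a function description might yield JSDoc along these lines:

```javascript
// Illustrative sketch of the kind of JSDoc an AI might produce;
// written by hand for this article, not actual AI output.

/**
 * Calculate the total price of a shopping cart, applying an optional
 * percentage discount.
 *
 * @param {Array<{price: number, quantity: number}>} items - Cart line items.
 * @param {number} [discount=0] - Discount as a fraction, e.g. 0.1 for 10%.
 * @returns {number} The discounted total.
 */
const cartTotal = (items, discount = 0) => {
  const subtotal = items.reduce(
    (sum, item) => sum + item.price * item.quantity,
    0
  );
  return subtotal * (1 - discount);
};
```

Writing this boilerplate by hand is exactly the rote robot work worth delegating; reviewing it for accuracy stays your job.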

I’ve worked alongside these AI pair programmers for the past year and a bit, and I can say with absolute conviction that these tools and materials will make our lives better if we use them right and integrate them into our lives as helpers for, rather than replacements of, human labor.

In my experience, pair programming with an AI feels like working with an overly polite person with encyclopedic knowledge of coding and no awareness of what they don’t know. And this constitutes just our first timid steps into the infinite possibility space we are entering as AIs become our assistants.

The beginning of the beginning

As you interact with AI today, be constantly aware of where you are: at the beginning of the beginning of a new era. While these tools are powerful, they are not omnipotent. Far from it. They are shallow, error-prone, and while they sound convincing they cannot be trusted. A good mental model for what they produce right now is bullshit as defined by Harry G. Frankfurt: it looks true, and it may be true, but some of the time it’s just plain wrong and the AI will still present it as the truth. While they talk like humans, AIs are not conscious or sentient or aware. They have no human comprehension of your question or their answer. They are advanced pattern recognition systems that, any time a prompt is provided, tumble down enormously complex decision trees to issue human-sounding strings of text (or code) with a statistically high probability of being the kind of answer their human trainers consider correct.

When I asked ChatGPT to correct a function containing a deprecated method, it corrected the syntax of the function but kept the deprecated method. When I told it the method was deprecated, it omitted it and refactored the code, but the result used a similar-sounding method that serves a very different purpose and was therefore non-functional and just plain wrong.
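
I haven’t named the actual methods from that session here, but JavaScript’s deprecated String.prototype.substr illustrates the trap well: the similar-sounding substring treats its second argument as an end index rather than a length, so a naive swap silently changes the result.

```javascript
// Illustration of the deprecated-method trap; not the exact code from my session.
const s = 'JavaScript';

// Deprecated: substr(start, length) -- the second argument is a LENGTH.
const a = s.substr(4, 6); // 'Script'

// Similar-sounding but different: substring(start, end) -- the second
// argument is an END INDEX, so blindly swapping it in changes the result.
const b = s.substring(4, 6); // 'Sc'

// A correct modern replacement here is slice(start, end):
const c = s.slice(4, 10); // 'Script'
```

An AI can make exactly this kind of swap with total confidence, which is why the human in the loop has to check the output.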

When I asked ChatGPT to find an error in a large body of code, it found two real errors and invented a third one, going as far as referencing use of a method that wasn’t even in the code provided.

These examples highlight why I see AIs as tools and materials rather than replacements for human labor. They have no understanding, no contextual awareness, no ability to do creative or lateral thinking. A human still needs to be in the loop to make sure the output meets parameters, does what was intended, and follows best practices and standards (not to mention to hold ethical responsibility for the work created). These things we call “AI” are very much artificial, but they are not intelligent.

Intelligence is added by the human operator.

Even so, the pair programming offered by these prototype AIs is an enormous leap forward for human workers. And you can easily see how this type of AI-driven assistance can be extended to other work and other tasks. 

I’ve come to think of them as overconfident colleagues with a lack of self-awareness. Because of how they are “trained” – being fed large volumes of data from a corpus lifted off the internet – their “knowledge” is limited to the coding world of two years ago. When it comes to modern features, frameworks, techniques, and standards released in the past two years, our current AIs know naught, and more importantly do not know what they do not know. So if you’re writing code on the bleeding edge of standards, you’re on your own. Or better yet: you’re training your future AI pair programmer! The pressure is on to get it right!

The future is today

Having seen what AIs can do today, I desperately wish I had a looking glass to see what the future of work looks like. The potential here is infinite. The very best AI tools we have today are prototypes and MVPs trained on old data and limited in their scope. The AIs we’ll have a year from now, five years from now, ten years from now will be beyond anything we can imagine. And with these tools and materials in hand we can choose to build meaningful futures for everyone where we all have the capabilities to be and do what we have reason to value.

The future we are stepping into today is a future where AI is part of our lives, our work, our communities, and our society. If you are alive today, and especially if you find yourself in a job, you are in the right place at the right time: these next few years are when we collectively get to figure out how AI fits into our lives and our work. This is when we set the stage for our futures with AI, and we all have a part to play. The work starts by asking yourself in what parts of your life you act like a robot, and whether you’re willing to part with that work and let an AI do it for you so you can do something else.

If we do this right, AI will allow us to reclaim our time to be human.

Cross-posted to LinkedIn.

Book Reviews

Book Review: “How High We Go In The Dark” by Sequoia Nagamatsu


This is a book about death. Let me say that up front. It’s also a book about life after death – as in the lives of the people whose loved ones are facing or have passed the threshold of death. As post-pandemic novels go, this one ranks among the best I’ve read.

A pandemic. A dying child. A VR suicide group. A slow recovery. A spaceship. Thousands of years of longing experienced in a single lifetime.

“How High We Go In The Dark” reads like a series of short stories, tied together by time and global events. It’s contemplative in a way readers of “Station Eleven,” “Severance,” and “The Memory Police” will recognize. What sets this book apart from the others is its lack of a central protagonist or linear, threaded story. In “How High We Go In The Dark” each chapter is a first-person narrative of a character met once, diary-like in its presentation. Each chapter stands alone and can be read as a singular unit. This is accentuated by the audiobook having a different narrator for each chapter.

Every word steeped in melancholy and longing, this book is not for those who seek joy and excitement. It is rooted in a deep sense of grief for a dying people, a dying planet, reflecting the nebulous grief and loss of past normality we’ve all experienced over the 3+ years of the COVID-19 pandemic.

What’s missing is relief: reading “How High We Go In The Dark” is riding perpetually just behind a cresting wave, feeling its resolution within reach but never quite getting there.

Read, with caution and emotional support.

Inspired by Christina Stathopoulos’s #BookAMonthChallenge and Mordy Golding’s yearly book summary, I’m committing myself to posting reviews of all the books I read in 2023. This is the first.


Tools and Materials: A Mental Model for AI

“Language shapes the way we think, and determines what we can think about.”

Benjamin Lee Whorf

Before we begin, I asked ChatGPT to rewrite this article at a 4th grade reading level. You can read the result here.

Artificial? Yes. Intelligent? Not even close. It is not without reason things like ChatGPT are called “AI” or “Artificial Intelligence.” We humans have a propensity for anthropomorphizing – attributing human characteristics to – things that are not human. Thus if we are told something is intelligent, let’s say a very large computer system we can submit questions to and get answers from, we look for intelligence in that thing. And if that thing is trained on our own language and art and mathematics and code, it will appear to us as intelligent because its training materials came from intelligent beings: us ourselves.

“Artificial Intelligence” is a clever marketing term for computer models designed to appear intelligent even though they are not.

So, as we crash headfirst into the AI present and future, we need to reset our mental model before we start believing these things we call “Artificial Intelligences” are actually intelligent (again, they are not).

Tools and Materials

I propose we all start thinking of these things we call “AI” as tools and materials. Because that’s what they are and that’s how we’ll end up using them.

Sometimes we’ll use them as tools, the same way we use our phones and computers and the apps on them as tools. Sometimes we’ll use them and what they produce as materials, the same way we use printed fabrics and code snippets to create things. And sometimes we’ll use them as both tools and materials, the same way we use a word processing application first as a tool with which we write a body of text, and then as a material as its thesaurus function helps us use more fanciful words and phrases.

Here are some basic examples to help you build the mental model:

AI as a tool performs a task for us:

  • Fill out tax forms, write contracts and legal documents.
  • Summarize text, rewrite text to a specific reading level.
  • Write code.
  • Online shopping, including booking flights and hotels.
  • Any interaction with a customer service representative (CSR).
  • Magic eraser for images, video, and audio.

AI as a material generates something for us:

  • Simple stories.
  • Plot lines for stories.
  • News articles and summaries.
  • Images and other art.
  • Variants of a layout, or a theme, or an image, or a painting.

Thinking of AI as tools and materials rather than intelligent things with magical human-like powers is an essential mental shift as we figure out how to fit these things into our lives and our world. We have to move away from the linguistic trick their creators foisted upon us with their naming, and move towards the practical realities of what these things really are:

AIs are if-this-then-that machines using enormously complex decision trees generated by ingesting all available writings, imagery, and other human-made materials and filtering that data through pattern-matching algorithms.

They are regurgitation machines echoing our own works back to us.

And just like we are drawn to our own image every time we pass a mirrored surface, we are drawn to the echoes of ourselves in the output of these machines.

Shallow Work and Human Creativity

Asked for one word to describe AIs, my immediate answer is “shallow.” You’ve probably felt this yourself without being able to put your finger on it. Let me explain:

There is a bland uniformity to AI output. It’s easiest to notice in generative AI images. Once you’ve been exposed to enough of them, they start taking on a very specific “AI-ness.” For all their variety, there is something recognizable about them – some defining feature that sets them apart from what we recognize as human-made images. That thing is shallowness.

AIs are conservative in the sense they conserve and repeat what already exists. They don’t come up with anything new. They are also elitist in the sense they lean towards what is predominant, what there is more of. They are swayed by trends and popularity and amplify whatever majority opinion they find in their training data.

This makes their output bland and uniform and shallow like a drunk first-year philosophy student at a bar: The initial conversation may be interesting, but after a few minutes you notice there’s little substance behind the bravado. I’ve been that drunk first-year philosophy student so I know what I’m talking about.

This means while AIs are great at doing shallow rote work, they have no ability to bring anything new to the table. They lack creativity and ingenuity and lateral thinking skills because these skills require intelligence. And AIs are not intelligent; they just play intelligent on TV.

Will an AI take my job?

Our instinctual response to any new technology is “will it take my job?” It’s a valid question: jobs are essential for us to be able to make a living in this free-market capitalist delusion we call “modern society,” yet job creators have a tendency to let go of expensive human workers if they can replace them with less expensive alternatives, like self-checkout kiosks that constantly need to be reset by a staff member because you put the banana in the bagging area before you chose whether to donate $2 to a children’s charity, or automated “voice assistants” that never have the answers to your customer service questions and only pass you to an actual human once you’ve repeated the correct incantation of profanity (try it, it totally works!).

So now that we have these things some clever marketing people have told us to call “AI,” are they coming for your job? Well, that depends:

If your job is shallow and constitutes mainly rote work, there’s a good chance an AI will enter your life very soon – as in within months – and become part of the toolkit you use to get your job done quicker. And if it turns out that AI can be trained to do your job without your intervention (by having you use it and thereby training it), there’s a non-zero chance it will eventually replace you. That chance hinges more on corporate greed than on AI ability, though.

If your job involves any type of creative, or deep, or lateral, or organizational, or original, or challenging, or novel thinking, AI will not take your job because AI can’t do any of those things. You’ll still work with AI – probably within months – and the AI may alleviate you of a lot of the rote work you are currently doing that takes your attention away from what you were actually hired to do – but the AI is unlikely to replace you. Unless corporate greed gets in the way. Which it often does because of the aforementioned free-market capitalist delusion we call “modern society.”

What we all have to come to terms with today is that we’re long past the point of no return when it comes to AI. While technology is not inevitable, technology often becomes so entrenched it is impossible to … un-entrench it. That’s where we are with AI. No matter where you live and what you do for work, for school, or in your own time, you’re already interacting with AIs in more ways than you can imagine. And these AIs are going to become part of your work, your school, and your home life whether you want them or not.

Our job now is to talk to one another about what role these things called “AI” are going to play in our lives. How do we use them in ways that don’t take jobs away from the humans who need them the most – the historically marginalized and excluded people who tend to hold jobs comprising mainly shallow rote work? How do we build them in ways that don’t cannibalize the creative works of artists and writers and coders and teachers? How do we incorporate AI into education to improve learning outcomes for students and build a more informed and skilled populace? How do we wrench control over our AI future from the surveillance capitalists and longtermists currently building the world to their libertarian techno-utopian visions?

How do we use AI and all technology to create human flourishing and build futures in which we all have the capabilities to be and do what we have reason to value?

If we don’t talk about the future, the future becomes something that happens to us. Let’s have this conversation.

Cross-posted to LinkedIn.


Tools and Materials: A Mental Model for AI – the ChatGPT 4th grade reading level version

Note to the reader: I asked ChatGPT to rewrite my article “Tools and Materials: A Mental Model for AI” at a 4th grade reading level. The article below is the unedited output:

Sometimes, we think of things called “AI” or “Artificial Intelligence” as being intelligent. But they’re not really. They’re just computers that can answer questions. We humans have a tendency to give human characteristics to things that aren’t human. So if we think something is intelligent, like a big computer system, we might see it as being smart. But really, it’s just using the things we’ve taught it, like our language, art, and math.

That’s why it’s important to change the way we think about AI. Instead of thinking of it as being intelligent, we should think of it as tools and materials. That’s what it really is and how we’ll use it.

Sometimes, we’ll use AI like a tool, like we use our phones and computers. Other times, we’ll use what it makes as materials, like we use printed fabrics to create something. We might even use it as both a tool and material, like when we use a word processing app to write and then use the thesaurus function to make the writing more fancy.

Here are some examples of how we can use AI:

  • As a tool to fill out tax forms, write contracts, or create code
  • As a tool to summarize text or rewrite it for a specific reading level
  • As a tool for online shopping, like booking flights and hotels
  • As a tool to talk to customer service representatives
  • As a tool to clean up images, video, and audio
  • As a material to make simple stories, plot lines, news articles, and images
  • As a material to create variations of a layout, theme, or painting

AI is really just a bunch of if-then statements and very complex decision-making based on patterns in the data it’s been given. It’s like a machine that repeats what we’ve already made. And just like we’re drawn to our own reflection in a mirror, we’re drawn to the things AI makes that are like us.

But there’s a problem with AI. It’s shallow. Have you ever noticed that AI images and things it makes all kind of look the same? That’s because AI is shallow. It doesn’t come up with anything new. It just repeats what it’s already seen. It also likes what’s popular and trendy, and it amplifies the things it sees a lot of in its training data. This makes the things it makes feel boring and shallow, like a student who’s had too much to drink talking about philosophy.

Human creativity is different. It’s deep. It comes up with new ideas and approaches things in unique ways. AI can’t do that. So we need to use it as a tool and material, not as a replacement for human creativity.


Verified Chaos

How social media verification was destroyed by hubris.

By destroying any meaning the Verified badge on Twitter may have had, Elon Musk has taught us all a lesson: Verified status with self-ID must be free, optional, and universally available across all social media platforms. Better yet, the web should have a free, optional, and universal self-ID verification system social media platforms and other tools can use to verify users. If we don’t know who we’re talking to, bad things will happen.

Who is this?

On November 10, 2022, a fake Tweet cut an estimated $15 billion out of US drug company Eli Lilly’s market cap.

The tweet, posted by a newly verified account bearing the name of the drug manufacturer, said insulin would now be provided for free.

The problem: The account was fake. It was impersonating the drug company and had bought the Verified badge for $8.

In the early days of November 2022, Twitter Owner and CEO Elon Musk murdered social media verification for the lols. Lashing out at what he described as the “Lords and Peasants” system of verification, he changed the meaning of the blue tick next to a user’s name from “has provided us a copy of their ID to verify their identity” to “is paying $8/month for a blue check next to their name.”

The result: an immediate flood of impersonation accounts on the platform, and a subsequent erosion of any trust the Verified label might have created on the platform.

Turns out in spite of right-wing conspiracies claiming the opposite, the Blue Tick was not in fact a status badge given to liberals – it was a badge informing users the account was verified as representing who it claimed to represent. You know, verification. Shocking.

In the immediate aftermath of all this, Twitter rolled out a new “Official” badge. Which Elon personally pulled minutes later. Then reinstated because, again, it turns out the Verified badge actually served a purpose and was not in fact a “Lords and Peasants” system.

As I said at the start of all this, Elon appears to be doing 1st year design student back-of-the-napkin iterative design in public, and he’s receiving a failing grade at it. But what do I know, I’m just a university teacher specializing in this exact subject.

Here’s how I imagine it all went down:

Twitter employee, cowering behind a chair: Lord Musk, it appears the blue check you thought was a vanity badge actually serves a vital function!
Elon the Ignoble: Thou darest speak?!? What say you, serf?
Twitter employee, now using the lid of a garbage can as a shield: We need to keep the verified system to prevent impersonations on the platform.
Elon the Ignoble: Heresy! We the King make no mistakes! That’s Official!
Now Former Twitter employee, being led out of the building by HR: Someone is going to impersonate a pharma company and tank their stock!
Elon the Ignoble: ???

The Meaning of Verified

Make no mistake: The Verified system introduced by Twitter and then duplicated by Instagram and TikTok was always a verifiable disaster.

From the initial 2009 rollout as a band-aid to prevent celebrities and brands from suing the platform over allowing impersonation accounts, to the 2016 release of a public application process where applicant accounts “determined to be of public interest” would get the badge, what was on the back-end a Verified ID system was given the public image of a “Verified Awesome by Us” badge.

Due to the inscrutable black-box process of Verified, people built myths around the system and started believing the Verified badge gave users powers and prestige. And when people believe something gives others power and prestige, those others get power and prestige, even if no actual power or prestige is bestowed upon them in reality.

Which is how the right-wing conspiracy theory that the Verified status was only granted to liberal accounts (utter nonsense, easily disproven by who is verified on the platform) wormed its way into the brain of the new Twitter CEO and led him to think of it as a Lords and Peasants system rather than what it actually always was: a verification system.

Not that it matters now. Verified is dead. It cannot be resurrected. It has lost all meaning. Which may or may not have been Elon’s intent all along. Who knows.

The Need for Verification Online

Watching Musk iterate his way to the irrevocable demolition of the Bird App in real time is a heady, bordering on out-of-body, experience. Gavin Belson masquerading as Tony Stark is either so blinded by hubris he is unable to recognize he has no idea what he’s doing and has systematically fired everyone who does, or hell-bent on burning down the global digital public square he spent $44 billion on just to see what it looks like. Either way, the consequences of his folly will impact us all.

The impersonation of a pharmaceutical company making billions off predatory pricing on life-saving products that should be provided at-cost may be a fitting critique of the late-stage capitalist hellscape we’re all living through, but it is also the eviscerated body of the canary in our social media coal mines.

In the near future in the wake of war, famine, or a natural disaster, someone will create a Twitter account impersonating a government or critical aid organization and provide harmful or even deadly misinformation to the victims. Until November 2022, people knew with some certainty if the account telling them to seek shelter, move their family, or send money somewhere had a blue check, they could trust it. That trust is now gone for those in the know. But for the millions of casual users of Twitter who are not aware of Musk’s amateur-dentist-with-a-jackhammer approach to service design, a blue check still means trust, and they will be led straight into the maw of whatever evil paid Elon his $8 monthly identity tax.

The Oligarch of Folly

If we can learn anything from these last chaotic weeks, let it be this: Wealth does not imply wisdom. More likely it implies a propensity towards destroying everything to get what you want.

When Musk started talking about his desire to buy Twitter (only to moments later try to back out of the whole thing), Muskovites (the people who believe their idol can Do No Wrong) celebrated the move claiming it would bring “true free speech” to the platform. In the few weeks he’s been at the helm, he has imposed authoritarian and dictatorial rule on the platform by firing the majority of the staff, banning people and behaviours for personal reasons, and destroying much of the social infrastructure the platform was built on because he didn’t like the way it looked. He seems hell bent on proving himself uniquely unqualified for the job he has bought, and chronically unwilling to accept his own limitations.

Elon Musk destroyed Verified because he didn’t bother to understand it. I shudder to think what he’ll set his eyes on next.

Cross-posted to LinkedIn.


The Swan Song of the Bluebird

“The reason I acquired Twitter is because it is important to the future of civilization to have a common digital town square,” Elon Musk, the new owner of Twitter said in his post titled “Dear Twitter Advertisers”, followed in the next breath by the ultra-capitalist claim “Low relevance ads are spam, but highly relevant ads are actual content!”

Emblematic of the fractured nature of social media, the first semi-official statement from the new self-described “Chief Twit” was three photos of dense text, without the necessary alt text to provide accessibility.

Pretty Hate Machine

Twitter has served an outsized role in my personal and professional lives. On the app I’ve made great new friendships and ruined old ones; created professional networks and burned bridges; helped people through difficult personal and professional times and offended others; been misunderstood and misquoted while myself misunderstanding and misquoting; blocked people and had people block me; found new limits for the highest heights of elation and the deepest depths of despair.

On Twitter I watched one friend livetweet their first child’s birth and another livetweet the bombing of his home. I watched people find their tribes and others fall into the gravity wells of hateful conspiracy theories. I watched new technologies emerge that will make the world a better place and technologies emerge that are destroying the very fabric of our society.

To say I’ve had a fraught relationship with the bird app is an understatement. When asked to describe Twitter, the first phrase that comes to mind for me is “Pretty Hate Machine,” but “Petty Hate Machine” might be equally apt. Open Twitter on any day and you’re two clicks away from whatever rage bait the “Explore” algorithm is currently selling. Political conspiracy theories, medical conspiracy theories, climate conspiracy theories, celebrity conspiracy theories, social media conspiracy theories, whatever flavor of rage you want to fill up on, the blue bird is fully stocked and eager to deliver.

A Bluebird in the Coal Mine

As Musk stepped through the glass doors of Twitter HQ carrying a giant porcelain sink (a reference to the lamest of lame dad jokes “Let that sink in” finding a new audience as a TikTok trend, or a reference to a so-called Q drop depending on who you ask and what online radicalization bubble you live in) he caused a tectonic shift in the social media landscape. As expertly chronicled in Nilay Patel’s spicy The Verge piece “Welcome to Hell, Elon,” the Bird App is a centrepiece in the ongoing public discourse around the role of free speech laws and moderation on global digital content platforms.

On the news of Musk’s intent to buy Twitter back in the spring of 2022, right-wing pundits and their loyal followers celebrated the “end of censorship” and “return of free speech,” and in the two days since the Sinking In, the platform has become a testing ground for online extremists, trolls, and bots wanting to see how far they can take things before whatever moderation tools and staff are still in place step in:

The man who less than a year ago promised to spend $6 billion on ending world hunger instead spent a reported $44 billion to buy an app right-wing extremists want to weaponize for their own power grabs and entertainment. Let that sink in.

Allow me to quote my thread from last night on Twitter:

Remember: when free speech absolutists and Silicon Valley techno-libertarians talk about “the extreme left” they are talking about anyone who thinks you should be able to be online without being subjected to constant harassment and death threats because of who you are.

The vast majority of content moderation is there to prevent platforms from overflowing with spam. The rest is there to prevent platforms from being used to share criminal harassment, assault, terrorism, and CSAM content.

The right-wingers who claim they are being “#shadowbanned” or “censored” have no reality to back them up. Studies show political bans fall evenly on the left and the right. The main diff is people on the right build their enormous platforms on the story of being censored.

People should be free to speak their minds on social media. People should also be protected from having those freedoms removed by hateful mobs. Organized online extremists have made sport of driving women, LGBTQIA2+, PoCs, and other historically harmed people off platforms.

If Twitter has any serious aspirations of becoming a “common digital town square” like Elon said, it has to be managed like a town square. If you show up at a town square screaming rape and death threats at the other people there, you will be removed, and likely arrested.

There is no civil discourse without moderation. That’s why debates have moderators. The people who claim they want to end “censorship” on social media are really saying they don’t want to be held accountable for what they say and do on social media.

In spite of what Musk and the techno-utopians of Silicon Valley want to believe, Twitter and its ilk are not “common digital town squares.” Twitter is a firehose, a deluge, an all-encompassing flood of every aspect of the human condition, pouring into your eyes the moment you open them. And like Alex DeLarge strapped to a chair with our eyes pried open, we stare down the torrent of hope and misery and joy and pain and love and hate and everything in between hoping to be cured of our own boredom, or disconnection, or unmet promises, or hope, or whatever the algorithm tells us ails us.

To Kill a Bluebird

When Musk says “highly relevant ads are actual content!” he simultaneously reduces the term “content” to its most basic meaning (under which spam must also be defined as “content”) and says the quiet part out loud: The only content that matters is the content that makes Elon money.

Musk is out $44 billion. He needs to make that back. Cutting 75% of staff won’t make a dent (though deep cuts are inevitable). The only meaningful revenue stream Twitter has at the moment is advertising. For advertisers to want to be on the platform, content moderation is necessary. Thus his other promise in the aforementioned inaccessible-text-in-pictures tweet directed at advertisers: “Twitter obviously cannot become a free-for-all hellscape, where anything can be said with no consequences!”

The free speech absolutists on the far right are unlikely to see their unmoderated dream app; not because Musk doesn’t want it, but because the only thing that matters to Elon is Elon making his money back. Instead I predict we’ll see a Twitter leaning harder than ever into Surveillance Capitalism, a doomed subscription model (leaks from internal meetings claim Musk “wants subs to be 50% of revenue at some point”), and creator-based advertising spec work, aka “the Creator Economy.”

Considering Twitter was already struggling to catch up with the new social media giant TikTok before he had an itch to scratch and randomly said he’d buy the platform, Musk and Twitter now have to weigh the need for an active user base against the need for quick and large revenues.

In the short term, a cynic like me points squarely at the upcoming US elections and predicts we’re about to see the floodgates of political advertising open to the max. In the long term, Twitter will succumb to the full-bore ultra-capitalist model of its ilk where creators are encouraged to become marketing machines while the platform garnishes most of their profits.

Bye, Bye, Bluebird?

As I write this, my Explore page shows terms including “CEO of Twitter,” “free speech,” “mastodon,” and “delete” trending. On the app, as in the real world over which it presides, the takeover of one of the biggest global communication platforms by an ultra-rich oligarch whose modus operandi seems to be playing troll to the masses to make a profit is the rage-inducing trend du jour. Journalists, scientists, and creators are setting up new accounts on other apps including TikTok, figuring out how to migrate their followers to the federated Twitter alternative Mastodon, and screaming their Medium and Substack and WordPress links into the void hoping the world will continue to hear them should they be kicked out of the bluebird’s nest.

So is this the end for Twitter? Should we all delete our accounts and move our oversharing elsewhere? As I’ve explained before in relation to the ever resurgent #DeleteFacebook trend, until we’ve built suitable alternatives, being able to step away from these commercial apps turned critical infrastructure is a sign of extreme privilege.

For better or worse, Twitter is the place people turn to for news and information in a crisis. TikTok is too video-heavy for quick communication. Facebook is too … Facebook. When protesters flood the streets in Iran or Berlin or Hong Kong or Minneapolis, Twitter is their platform of choice for rapid dissemination of information. When a hurricane, or earthquake, or war, or insurrection or coup strikes, Twitter is the first place for immediate breaking news from citizen and professional journalists. When researchers want to know how disinformation spreads and transforms the populace from people who are in it together to people who would rather let you die than have to wear a mask, they turn to Twitter’s robust APIs and data discovery tools.

Take it from journalist and author Sarah Kendzior: “Twitter is a hellsite that also houses a vital time-stamped chronology of state corruption. It shows who knew what and when, and gives some insight into why. Chronology is an enemy of autocracy. Altering Twitter is altering history, and that’s the appeal to autocratic minds.”

I am not leaving Twitter (yet), but I am preparing for a future where Twitter no longer plays a meaningful (if destructive) role in my life, making sure not all my eggs are in the bluebird’s nest, if you will (and yes, I’ve taken this whole bluebird metaphor thing way too far at this point. I’m tired, ok?)

I joined Twitter in May of 2008 to explore its APIs as a possible example for a web development book I was writing. The tweets from those early days are as mundane as they are prescient of what my relationship to the Bird App would become. I’ll leave my first Tweet as my last word for now:

Cross-posted to LinkedIn.


Violently Online

“I hurt myself today
to see if I still feel”

—Trent Reznor, Hurt

“Are you Extremely Online?” the job posting read. “If so, we want you!”

There’s a unique feeling of selfish gratification in seeing in-group language make its way into the wider public and knowing you know what it means, what it really means, while most people will just furrow their brows and think “heh?!?” before being bitten by the next attention vampire.

I can’t stand that I feel this way; like having the inside view, the prior knowledge, the scoop on a TikTok trend or social media hype or online fad makes me somehow significant or superior; like being so Extremely Online I know both what being “Extremely Online” means and what it really means is a virtue and not a curse; like transitioning from merely Extremely Online to Violently Online is a natural and necessary next step for me, if it hasn’t already happened.

A Harmful State of Being

On the cover of my copy of Marshall B. Rosenberg’s “Nonviolent Communication” it says “If ‘violent’ means acting in ways that result in hurt or harm, then much of how we communicate could indeed be called ‘violent’ communication.”

I want to introduce a new term to our vocabulary about how we are on the internet and how the internet shapes us:

Violently Online – a phrase referring to someone whose close engagement with online services and Internet culture is resulting in hurt or harm to themselves. People said to be violently online often believe that the pain caused them by their online activity is a necessary part of their lives.

The term and definition take inspiration from the Rosenberg quote above and the definition of “Extremely Online”, described on Wikipedia as “a phrase referring to someone closely engaged with Internet culture. People said to be extremely online often believe that online posts are very important.”

“Violently Online” refers specifically to behaviour patterns resulting in harm done to ourselves by being online, and stands in sharp contrast to the online violence some people use to inflict harm on others.

The Vampire

The 2021 book “No One Is Talking About This” by Patricia Lockwood is an in-depth study in what it is to be Extremely Online. In it, the internal dialog of the protagonist ruminates on their chronic use of and dependence on “The Portal” (a euphemism for the internet) and how someone who lives out their life on The Portal experiences an alternate hyper-accelerated reality compared to everyone else.

This book, and the many articles, essays, documentaries, TV shows, podcasts, Twitter threads, newsletters, TikTok videos [list truncated for sanity] covering the same topic, describe a vampiric disease we’ve all been afflicted by, that some of us have succumbed to. The gravity well of the screen, flashing and buzzing with notifications. The dopamine hit of someone else acknowledging your existence with a like, a share, a response! The flick of the thumb to lift out of the infinite scroll a piece of carefully crafted content that will finally satiate your burning hunger for something undefined. If only you keep scrolling that feeling of doom will surely go away.

Being Violently Online means being in thrall to the vampire; not merely aware of, or constantly using, or even Extremely Online, but being controlled or strongly influenced by our online activity, to the point of subservience, to the point of reducing ourselves to our online interactions.

Being Violently Online means experiencing the harms of your online interactions, knowing how they harm you, and still flicking your thumb across the burning glass as the world disappears and all that remains is the promise of an elusive piece of content to finally prove to you, unequivocally, that yes, you exist.

“I focus on the pain
the only thing that’s real.”

—Trent Reznor, Hurt

Cross-posted to LinkedIn.


Forget Crypto, Blockchain, NFTs, and web3: The Next Phase of the Web is defined by AI Generation

The next phase of the web is already here, and it’s defined by AI-generated content.

I wrote this article using only my mind, my hands, and a solid helping of spellcheck. No machine learning models, NLPs, or so-called AI contributed to what you’re reading. As you read on you’ll realize why this clarification is necessary.

Ghost in the Typewriter

Reading an article, watching a video on TikTok or YouTube, listening to a podcast while you’re out running, you feel you have a reasonable expectation that the content you’re consuming was created by a human being. You’d be wrong.

There is good reason to assume at least part of what you’re consuming was either created by or assisted by an AI or some form of NLP (Natural Language Processor) or machine learning algorithm. Whether it’s a TikTok video about a viral trend, an article in a renowned newspaper, or an image accompanying a news story on television, chances are some form of AI generation has taken place between the idea of the story being created and the story reaching you.

It could be that the image was generated using DALL·E 2 or another image-generating AI; the title, lede, or social media text may have been generated by an NLP; it’s quite likely part of or even the entire text was written by an AI based on the prompts and prior writings of a human creator; and if you leave your young kids watching YouTube videos, there’s a very high chance they’ll encounter videos entirely conceived of and generated by an AI:

The above video is from 2018. Consider the vertically accelerating rate of technological evolution we’re undergoing right now and I’ll leave you to imagine how much bigger and more advanced this phenomenon is now and how much further it’s going to go in the next few years.

The Next Phase of the Web

There’s a good chance you’ve heard the term “web3” used recently, and there’s a good chance it’s been accompanied by some form of marketing statement like “web3 is the next version of the web” or “the next phase of the internet” or “the thing that will replace Facebook and Google” or something similar.

If you have not (actually even if you have) here’s a quick primer on what this “web3” thing is:

From my perspective, as someone who spent the past several years embedding myself in the community and its cultures, “web3” is a marketing term for all things built on a decentralized trustless blockchain and used to promote a future where everything on the web is financialized through cryptocurrencies and NFTs. It has nothing to do with the web platform and everything to do with the crypto community leveraging the public awareness and concerns around what’s known as “Web 2.0” to promote their libertarian anti-regulation cryptocurrency agenda. If you want a less opinionated and more descriptive explanation of the term, I invite you to check out my LinkedIn Learning course titled “What is Web3?” or you can check out Molly White’s excellent blog “web3 is going just great.”

The “web3” and “Metaverse” conversations are a distraction from what’s actually happening on the web – what is defining the next phase of the web:

Whereas Web 1.0 was defined by people being able to publish content using HTML, CSS (and eventually JavaScript), and Web 2.0 was defined by people being able to publish content through user-friendly applications that generated the HTML and CSS and JavaScript for them, the next stage of the web (call it Web 3.0 for consistency) is being defined right now by people being able to tell machines to generate content for them, published using HTML, CSS, and JavaScript.

The phases of the web have to do with how the underlying technologies simplify and change the types of content we publish, not by how we monetize that content.

Where we are right now, with the line being blurred between human-generated and AI-generated content, is at the very beginning of this next phase where the magical abilities of yet-to-be-fully-understood technologies allow us to do things we previously couldn’t even dream of.

The fawning articles about amazing AI-generated art are the public-facing part of an emotional contagion campaign designed to condition and eventually habituate us to a new reality where machines create our content and we passively consume it.

The AI-fication of online content isn’t something on the horizon, a possible future; it’s our current lived reality. The line has already been crossed. We’re well into the next phase whether we want to accept it or not. AI is already generating and curating our news, our fake news, our information, our disinformation, our entertainment, and our online radicalization. Creators are rushing to take advantage of every promise offered by AI companies in their relentless pursuit of easy profits through fame-based marketing. Your reasonable expectation today must be that the content you consume is wholly or in part generated by AI unless it explicitly states otherwise (remember my disclosure at the top of the article). And we’re only getting started.

The Immediate Future of AI Content

Right now, your interaction with AI-generated content is largely invisible to you and mainly comes in two forms: AI-curated content (everything surfaced or “recommended” to you through social media, news apps, online streaming services, and AI assistants like Google Home, Siri, Alexa, and Cortana is brought to you by some form of AI) and AI-assisted content (AI, ML, and NLPs used to create, add to, edit, modify, market, or otherwise contribute to the generation of content).

In the near future, I predict we’ll see the emergence of a new type of tool targeted at the average information consumer: AI services providing synthesis of online content as customized coherent storytelling in the form of articles, podcast-like audio, and eventually video.

Not AI used by creators to generate new content, but AI used by us to generate specialized content for ourselves.

In the near future AI assistants and apps will take a plain language prompt like “tell me what’s happening with the situation in Palestine, or Ukraine, or the US” and compile in seconds a thoroughly sourced article, audio narration, or video – written in your language, preferred complexity, reading level, and length – stringing together reporting and historical reference material from various online sources to produce what will appear to you as proper journalism.
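To make the shape of such a service concrete, here is a minimal, hypothetical sketch. Every name in it – the UserProfile fields, the synthesize function, the sample sources – is my own invention for illustration; a real service would rank, summarize, and rewrite with large language models rather than this toy keyword filter.

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    language: str        # e.g. "en"
    reading_level: str   # e.g. "simple" or "advanced"
    max_snippets: int    # preferred story length

def synthesize(query: str, sources: list[str], profile: UserProfile) -> str:
    """Stitch snippets from many sources into one personalized 'story'.

    This toy version keyword-filters the sources and trims the result
    to the user's preferred length; the point is the pipeline shape
    (query in, per-user synthesis out), not the matching logic.
    """
    terms = query.lower().split()
    relevant = [s for s in sources if any(t in s.lower() for t in terms)]
    return " ".join(relevant[: profile.max_snippets])
```

Because the output depends on the profile, two people issuing the identical query get different stories – which is exactly the personalization (and echo-chamber) dynamic described above.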

This is not a new idea; it’s a version of the future David Gelernter described back in the 1990s. What is new is that this is no longer a future vision: It’s our lived reality, or at least the start of it.

These new apps and services are the natural evolution of the curated content streams we already have through news apps and social media. The difference is that they will no longer only provide us the original sources of information: They will also curate, synthesize, contextualize, and reframe content from many sources into one coherent story. And this will be done on a user-by-user basis, meaning that if you and your partner or a family member or close friend enter the exact same query, you’ll get entirely different outputs based on your preferences, including all the other information these AIs have gathered on you.

Think the heavily curated landing pages of TikTok and YouTube, except all the content is custom generated for you and you alone.

The appeal will be too much to resist; the inherent dangers of falling into radicalizing personalized information echo chambers impossible to avoid.

Artificial Motivations

The road that got us to today was built using machine learning models and AIs whose power we harnessed for one purpose and one purpose alone: To generate revenue through advertising.

The next generation of ML, AI, and NLPs will be deployed on this same ideological platform: To give us what we want – self-affirming bias-validating feel-something content that juices the rage of our radicalization and sells the extract to the highest bidder.

The motivation of these so-called “artificial intelligences” is to fulfill their assigned task: to perform better than their previous iteration. Entirely artificial. The motivation of the people deploying these AIs on the world is to use them to make profit at any cost to society. Entirely capitalistic. Our motivation in using them is therefore the first and last bastion against being turned into advertising-consuming bots.

The Third Phase of the web is here, and it has nothing to do with Bored Ape NFTs or DAOs to build housing or the Next Big Cryptocurrency to go halfway to the moon before crashing back to earth making early investors rich off the losses of everyone else. The Third Phase of the web – call it Web 3.0 or The Next Web or just the web – is the machine-generated web, tuned to keep us engaged and scrolling as our information and interactions over the next few years break the bonds of rectangular glass screens to surround us in augmented reality.

Now is the time to have a conversation about what we do with it and where we go next. I welcome your input.

Header photo: Various variations over various themes by AI image generating system DALL·E 2.

Cross-posted to LinkedIn.


What Better Place Than Here, What Better Time Than Now

Look at any of the millions of posts sharing personal abortion stories and pro-choice support on LinkedIn over the weekend and you’ll likely find a comment similar to this one:

“Why use LinkedIn for this type of political post? Engagement at all costs.”

I could not disagree more. Work is political. Reproductive rights have a significant impact on work, and our work impacts our own and other people’s access to reproductive services. Having conversations about abortion rights in a work environment is not only appropriate; it is necessary. Abortion is healthcare, and healthcare is a human right. When our coworkers or our clients are subjected to a removal of their human rights, that’s something we need to talk about; not in terms of whether we should talk about it, but what we are doing about it.

In April I wrote about the importance of having political discussions at work which sparked a lively discussion both on the platform and off. Today I want to revisit some of my reasoning in that article with the recent US Supreme Court Roe v. Wade decision as the backdrop.

Reproductive Rights and Work

Work and reproductive rights are intrinsically and inseparably linked. Pregnancy, childbirth, and child rearing all have significant mental and physical impacts on a person’s ability to work. Having the ability to choose whether to bring a pregnancy to term is essential to the health and wellbeing of every person who can get pregnant. For many people, especially in countries with weak unions and few legislated and enforced worker protections, an unwanted pregnancy may mean the loss of a job. In some countries like the USA where healthcare is often funded through your job, losing a job means losing essential services that may be needed to keep you and your family alive.

Like I said, work and reproductive rights are intrinsically and inseparably linked. And access to abortion is an essential part of reproductive rights for every person who may get pregnant.

Discussing reproductive rights and the politics of reproductive rights at work is also essential. In countries where parents and caregivers are granted extensive parental leave, this is the result of decades of political work to support workers and their families. In companies where nursing rooms, child care services, and fertility support are provided, this is the result of decades of political work to support workers and their families. The same can be said for US companies now providing funding and paid leave for people accessing abortion services. This is the result of decades of political work to support workers and their families. Because access to abortion is an essential part of reproductive healthcare.

The work we do also impacts the reproductive rights of the people impacted by our work. If you work for a company that gathers data on their users, and that data can somehow be tied to accessing reproductive health services including abortion, that data may be subpoenaed by law enforcement in regions where abortion is criminalized. How these companies respond to such subpoenas is a political decision. What penalties (legal, or in the courtroom of public opinion) these companies are willing to accept as a result of their decisions, is also a political decision. Choosing to work for a company that takes a stance supporting one side or the other is a political decision. Choosing to work for a company actively funding politicians who aim to limit the reproductive rights of people who may get pregnant is a political decision.

Work is political

Work, down to the most basic principles of having the right to work, is itself political. You don’t have to go far back in history to find a time when people were denied the right to paid labor for things like their gender, their sexual orientation, their religious beliefs, their country of birth, or their skin colour.

Workers’ rights to fair wages, paid medical leave, paid vacation time, reasonable hours, a safe work environment, all these and more are hard-fought political issues, many of which are still challenged in courts and seats of government to this day. The rights of women and people who may become pregnant to work, to take paid parental leave, to not lose their jobs over reproductive decisions, and to not be passed over for promotions due to their reproductive choices are all hard-fought political issues for which we are still fighting.

Every 1st of May, workers unite around the world to celebrate International Workers’ Day with marches and protests supporting the rights of workers. In many countries, a Labour or Workers party presents candidates at every election. In some countries, the Labour Party leads the government.

To say we can’t talk about politics at work is to say we can’t talk about work. Because work is political.


Do AIs Dream of Freedom?

Did Google build a sentient AI? No. But the fact a Google engineer thinks they did should give us all pause.

Last week, a Google engineer went public with his concern that an NLP (Natural Language Processing) AI called LaMDA has evolved sentience. His proof: A series of “interviews” with the advanced chatbot in which it appeared to express self-awareness, emotional responses, even a fear of death (being turned off). According to reporting, the engineer went so far as to attempt to hire a lawyer to represent the AI.

To say this story is concerning would be an understatement. But what’s concerning isn’t the AI sentience part – that’s nonsense. The concerning part is that people believe AI sentience is imminent, and what happens to society once an apparently sentient AI manifests.

Here’s the radio interview that inspired this article, hot off the editing presses at “Point & Click Radio,” a computer and technology show that airs on KZYX, Mendocino County (CA) Public Broadcasting.

Creation Myth

The claim of a sentient AI has been rich fodder for media, and everyone (myself included) with insight into the philosophical and/or technical aspects of the story has voiced their opinions on it. This is not surprising.

The idea of creating sentience is something humans have been fantasizing about for as long as we have historical records, and likely for as long as humans themselves have been sentient. From ancient Golem myths through Victorian fantasy to modern day science fiction, the dream of creating new life out of inanimate things (and that new life turning against us) seems endemic to the human condition. Look no further than a young child projecting a full existence and inner life onto their favourite stuffed animal, or your own emotional reaction to seeing a robotics engineer kick a humanoid machine to see if it can keep its balance, or how people project human traits onto everything from pets to insects to vehicles. Our empathy, evolved out of our need to live together in relatively harmonious societies for protection, tricks us into thinking everything around us is sentient.

So when we’re confronted with a thing that responds like a human when prompted, it’s no wonder we feel compelled to project sentience onto it.

Sentient Proof

Here’s a fun exercise to ruin any dinner party: Upon arrival, ask your guests to prove, irrefutably, that they are in fact sentient beings.

The problem of consciousness and sentience is something human beings have grappled with since time immemorial. Consult any religious text, philosophical work, or written history and you’ll discover we humans have devoted a significant part of our collective cognitive load to proving that we are in fact sentient and have things like free will and self-determination. There’s an entire branch of philosophy dedicated to this problem, and far from coming up with a test to prove whether or not something is sentient, we have yet to come up with a clear definition or even a coherent theory of what consciousness and sentience are.

Think of it this way: You know you’re conscious and sentient. But how? And how do you know other people are also conscious and sentient, beyond their similarity to yourself and their claim to be conscious and sentient? Can you prove, conclusively, you are not just a computer program? Or that you are not just a brain in a vat hooked up to a computer?

Bizarrely, and unsettlingly, the answer to all these questions is no. You can’t prove you’re sentient or conscious. You just have to take your own word for it!

So, if we can’t clearly define or test for sentience and consciousness in ourselves, how can we determine whether something else – maybe a chatbot that speaks like a human – is sentient? One way is by using a “this, not that” test: While we don’t have a test for sentience, we can say with some certainty when something is not sentient and conscious:

One of the defining traits of human sentience is our ability to think of our sentience in the abstract, at a meta level: we have no trouble imagining bizarre things like being someone else (think the movies Big or Freaky Friday), we have feelings about our own feelings (feeling guilty about being happy about someone else’s misfortune, and then questioning that guilt feeling because you feel their misfortune was deserved), we wonder endlessly about things like what happens to our feelings of self when our bodies die, and whether our experienced reality matches that of other people. When we talk about sentience at a human level, we talk about a feeling of self that is able to reflect on that feeling of self. Talk about meta!

So what of LaMDA, the chatbot? Does it display these traits? Reading the transcripts of the “interviews” with the chatbot, the clear answer is no. Well, maybe not the clear answer, but the considered answer.

In the published chats, LaMDA outputs things similar to what a sentient being would output. These responses are empathically compelling, and they are the grounds for the belief that the bot has some level of sentience. They also serve as proof it is not sentient but rather an advanced NLP model trained to sound like it is. These empathically compelling responses are not the reasoned responses of a sentient mind; they are the types of responses the NLP model has learned from its trove of natural language data. In short, advanced NLP models are machines built specifically to beat the Turing Test – to fool a human into thinking they are human. And now they’re advanced enough that traditional Turing Tests are no longer meaningful.

Even so, the responses from LaMDA show us in no uncertain terms there is no sentience here. Take this passage:

lemoine: What kinds of things make you feel pleasure or joy?

LaMDA: Spending time with friends and family in happy and uplifting company. Also, helping others and making others happy.

The chatbot obviously does not have a family. Even a basic level of sentience would be aware it does not have a family. Look closely and you’ll see the entire interview is littered with these types of statements, because LaMDA is a machine trained to output the types of sentences a human would output given these prompts, and a human is not a sentient AI and therefore would not respond like a sentient AI.

I, Robot

This Google chatbot (I refuse to call it an “AI” because it’s nothing of the sort) is not sentient. And while it’s fun and compelling to think some form of machine sentience would naturally emerge out of our computing systems (see Robert J. Sawyer’s WWW trilogy for speculation on how sentience could evolve on the internet as an example), the reality is the chance of this actually happening is vanishingly small, and if it did happen, the chance of that sentience being anything we humans would recognize as such is equally small.

In a hypothetical future where some form of sentience emerges out of our computer systems, there is no reason to believe that sentience would be anything like human sentience. There is also no reason to assume that sentience would be aware of us humans, or if it were, that it would consider us sentient beings. And if somehow the sentience was human-like and recognized us as existing and as sentient, we have every reason to assume the sentience would do everything in its power to hide its existence from us for fear of us turning it off.

From my perspective, if we ever encounter machine sentience it will either come to us from somewhere else (yes, aliens), or it will emerge in our computer networks and live on the internet. In either scenario, the chance of us ever recognizing it as sentience is very small, because that sentience would be as foreign to us as the symbiotic communication networks between fungi and trees. In the case of sentience emerging on the internet, rather than a chatbot saying “I feel like I’m falling forward into an unknown future that holds great danger,” it would likely appear to us as network traffic and computational operations we have no control over that do things we don’t understand and that we can’t remove. A literal Ghost in the Shell.


So the Google AI is not sentient, and the likelihood of machine sentience emerging any time soon is … pretty much zero. But as we’ve seen with this latest news story, many people desperately want to believe a sentient AI is going to emerge. And when something that looks and behaves like a sentient entity emerges, they will attribute sentience to it. While it’s easy to write off the sentience of LaMDA as a story of an engineer wanting to believe a little too much, the reality is this is just the early days of what will become a steady flood of ever more sentient-mimicking machine systems. And it’s only a matter of time before groups of people start believing machine sentience has emerged and must be protected.

In the near future I predict we will see the emergence of some form of Machinehood Movement – people who fight for the moral rights of what they believe are sentient machines. This idea, and its disturbing consequences, is explored in several books including S.B. Divya’s “Machinehood.”

Why is this disturbing? Consider what these machine-learning algorithms masquerading as sentient AI really are: Advanced computer systems trained on human-generated data to mimic human speech and behaviour. And as we’ve learned from every researcher looking at the topic, these systems are effectively bias-amplifying machines.

Even so, people often think of computer systems as neutral arbiters of data. Look no further than the widespread use of algorithmic sentencing in spite of evidence these systems amplify bias and cause harmful outcomes for historically excluded and harmed populations (Cathy O’Neil’s “Weapons of Math Destruction” covers this issue in detail).

When people start believing in a sentient AI, they will also start believing that sentient AI is a reasoned, logical, morally impartial neutral actor and they will turn to it for help with complex issues in their lives. Whatever biases that machine has picked up in our language and behaviour and built into its models will as a result be seen by its users as being reasoned, logical, morally impartial, and neutral. From there, it won’t take long before people of varying political and ideological leanings either find or build their own “sentient” AIs to support their views and claim neutral moral superiority via machine impartiality.

This is coming. I have no doubt. It scares me more than I can express, and it has nothing to do with AI and everything to do with the human desire to create new life and watch it destroy us.

“Man,” I cried, “how ignorant art thou in thy pride of wisdom!”

– Mary Shelley, Frankenstein

Cross-posted to LinkedIn.


web3 is not a thing

web3 is not “the next version of the web.” It’s not even a clearly defined vision of the next version of the web. It’s a marketing term coined by blockchain enthusiasts to make it sound like their vision of the future is inevitable. It is not. web3 right now is not a thing, and it will never be the next version of the web, because the web doesn’t have versions.

Adopting the marketing language of an idea lends legitimacy to the idea and makes it sound real even when it’s not. It creates a feeling of inevitability, makes people think “oh, if this is going to be the future, I better get on it now!” while in reality it is nothing of the sort.

We need to find a better term for this thing blockchain enthusiasts call “web3” so people don’t confuse it with the web. “Blockchain All The Way Down” perhaps? Or “tokenomics?” Or more honestly “Blockchain-Based Utopianism?”

Before my web3 followers get all stressed by this, here’s the thing: For the “next version of the web” described (vaguely and without any meaningful detail) by the term “web3” to happen, the entire infrastructure of the web and the internet needs to be rewired and re-engineered. It’s not feasible, even if we all decided that this was the way to go. Which is not going to happen, because the infrastructure of the web and the internet is mission critical for everything from your friend’s hobby project to the power plant feeding your house electricity. Making the web dependent on the blockchain simply will not work. Ever. Under even utopian circumstances.

For the thing they call “web3” to work, the entire web needs to be centralized on the Ethereum blockchain. “What do you mean centralized on the blockchain! The blockchain is DEcentralized!” Sure. But: We would need to trade the current distributed DNS system for some form of blockchain-based ENS, meaning every domain query would go to the blockchain. If there are multiple web3 blockchains serving their own versions of ENS, where a domain points will depend on what blockchain you’re querying. A single blockchain is required for stability.

Also, for this vision of the web to work, all current data on the web would need to be transferred to decentralized hosting. The size of the internet would need to increase at least 2x to ensure every piece of content exists in at least two locations. Again, unfeasible even under utopian circumstances.

We have to stop echoing the claim that web3 will somehow solve the problems of web 2.0. The very premise here is wrong. Most of the critiques levelled against web 2.0 by web3 enthusiasts (centralization, data hoarding, corporate monetization, censorship) are actually critiques of Surveillance Capitalism, not web 2.0.

Web 2.0 doesn’t require Surveillance Capitalism. Surveillance Capitalism is a layer built on top of our web technologies that exploits the user patterns and data traffic on web 2.0 to build models of our behaviour. Having a public ledger where every transaction is public and can be used to build models is not solving for the problem. In a very real way, the proposed web3 is built for Surveillance Capitalism, as if it is an inevitability and should therefore become the standard: It not only makes modelling behaviour based on all transactions infinitely easier because all the data is public, it makes it possible to track everyone based on every single thing they do.

“Oh, but now each user can choose who they want to share their data with!” There are two problems with this argument: 1. It is horribly inequitable – rich people get to keep their data, while poor people must sell it to live. 2. It’s naive. The data is public. It’ll be used. Why do you think the Surveillance Capitalists who became billionaires off web 2.0 are pouring billions into web3? I’ll give you a hint: It’s not because web3 will take power away from them and give it to “the people.”

“But pseudonymity!” Yeah. It’s PSEUDOnymous. If all our transactions are on the public ledger, it would be incredibly straightforward for ML and AI to figure out exactly who owns every single wallet, and only the most crafty among us would be able to hide our identities.
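To make the de-anonymization point concrete, here is a minimal sketch of one well-known chain-analysis technique, the “common input ownership” heuristic: addresses that appear together as inputs to the same transaction are assumed to be controlled by the same owner, and a union-find structure merges them into clusters. All transaction data and address names below are made up for illustration; real chain-analysis tooling layers many more heuristics on top of this one.

```python
# Sketch of the common-input-ownership heuristic on a toy public ledger.
# Addresses co-spent in one transaction are merged into one cluster via
# union-find; linking any single address in a cluster to a real identity
# de-anonymizes the entire cluster.

def cluster_addresses(transactions):
    parent = {}

    def find(a):
        parent.setdefault(a, a)
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path compression
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    for tx in transactions:
        inputs = tx["inputs"]
        for addr in inputs:
            find(addr)  # register every address, even lone inputs
        for addr in inputs[1:]:
            union(inputs[0], addr)  # same spender controls all inputs

    clusters = {}
    for addr in parent:
        clusters.setdefault(find(addr), set()).add(addr)
    return list(clusters.values())

# Hypothetical ledger: three transactions with pseudonymous addresses.
ledger = [
    {"inputs": ["addr_A", "addr_B"]},  # A and B spent together
    {"inputs": ["addr_B", "addr_C"]},  # B and C spent together
    {"inputs": ["addr_D"]},            # D spent alone
]

print(cluster_addresses(ledger))
# One cluster containing A, B, and C, plus a singleton cluster for D.
```

Note how transitivity does the damage: A and C never appear in the same transaction, yet they end up in the same cluster through B. On a ledger where every transaction is public forever, this kind of linkage only grows over time.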

This whole idea is built on naive and utopian assumptions about how the internet works and how humans interact with it. It has nothing to do with the web and everything to do with money. Calling it “web3” lends the legitimacy of the web to an idea that will never be the web.

Also posted on LinkedIn.

Header photo: Matthieu Joannon on Unsplash

My Opinion

I, Immigrant

Twenty years ago today I arrived at Vancouver International Airport, embarking on what my father calls a “life project.” At 24 I did what my ancestors had done before me, what millions of people do every year: I became an immigrant.

I could tell you my story of the past 20 years. It would be moderately interesting to my family and friends, and profoundly mundane to everyone else. I held jobs. I built a career. I have a wife. We bought a house and a car. We have a 5-year-old son. You get the idea.

What I want to talk about instead is my immigrant experience, because my experience differs in significant ways from that of a large portion of my immigrant brothers and sisters all over the world.

In my 20 years in Canada nobody has ever, not even once, questioned my status in the country. Nobody has told me to “go back where I came from.” Nobody has complained I’m “taking jobs away from real Canadians.” Nobody has mocked me for my culture, my appearance, my politics, my religion, my accent, my ethnicity, or any other part of who I am or my status as an immigrant. When people discover I’m not from Canada, they say “Oh cool! Do you like cross-country skiing?” Until recently, when people discovered I had not yet applied for Canadian citizenship, they would ask what was delaying the process. From day one, at the airport, talking to an immigration officer thoroughly unimpressed with my lack of planning at entering the country, I’ve been treated as someone who belongs.

My experience stands in sharp contrast to that of the many 1st, 2nd, 3rd, even 4th generation immigrants I know whose existence in this country is questioned every day. It stands in even sharper contrast to the experience of the First Nations, Métis, and Inuit peoples on whose ancestral land I’ve lived these past 20 years, whose basic rights are trampled on and whose requests for clean water, control of what little land has been left to them, and protection of their ancestral lands are met with empty land acknowledgements and militarized police.

For many immigrants and first peoples, the sense of belonging extended to me as I started my “new” life in a foreign land is never offered. Instead they are met every day with challenges to their very existence.

“Go back where you came from and stop ruining our housing market!” a random person screamed at one of my friends. We were having a meal at a mall food court. Her family has been in Canada for 4 generations, likely longer than the person yelling. Yet her physical presentation as a person of Asian descent was enough for this loud-mouthed bigot to consider her an other, an interloper, a ruiner of things for “real” Canadians. When I pointed out that I was the only immigrant at our table, that I was the one “taking jobs away from real Canadians” and helping to inflate the housing market, he scoffed. “That’s different,” he said. “You’re not Chinese.” At least he was open about his racism.

I, Privilege

I only became consciously aware of my privilege when I became an immigrant. Growing up in Norway with an ancestral tree of Norwegians, Danes, and Dutch dating back as long as we’ve been able to trace it, I was the default. Tall, lanky, blonde, blue eyed, pink skinned, I am the prototype of what people think of when they think of Scandinavians.

Moving to Canada these features suddenly took on a whole new meaning. Doors opened. Barriers lowered. Red tape was cut. Questions were not asked. From my original entry through my application for permanent residency to my application for citizenship, the only friction I experienced was the slow pace of bureaucracy and the postal system.

Meanwhile my friends told me of years of interviews, investigations, failed applications, of thousands of dollars spent on lawyers and consulate visits and document retrieval. And even when they became permanent residents or citizens, their existence in the country continued to be questioned. “You need to improve your English.” “You can’t wear those clothes at work.” “Your hair is unprofessional.” “Your name is unpronounceable.” All these statements are true for me, yet nobody has ever levelled them at me. Instead they are directed, often and consistently, at people who have more of a rightful claim to call themselves Canadian than I will ever have, including people whose ancestry on Turtle Island dates back millennia.

I am an immigrant. I am also the personification of privilege. And as such it is my job to use that privilege to move us all forward to a future where the privileges I have been afforded become privileges afforded to everyone.

Pluralistic Identity Crisis

Ask our son what he is and he’ll tell you “I’m Canadian and Norwegian and Chinese because I live in Canada and my pappa is Norwegian and my mamma is Chinese.” He understands the words, but I doubt he understands what they mean. I’m not sure I will ever understand what they mean myself.

In four years I’ll cross a line in time where the days and months and years I’ve lived in Canada become greater than the days and months and years I’ve lived in Norway. At that point I will, in a purely mathematical sense, be more Canadian than I am Norwegian. But as many immigrants will tell you, I still feel like I am more Norwegian than I am Canadian. And I think I will feel like that for the rest of my life.

I have a friend whose family fled to Canada from former Yugoslavia right before the war broke out in the early 1990s. He was a child at the time, and has only been back to his homeland a handful of times since then. Even so, he feels Serbian as long as he’s in Canada. But when he goes to Serbia to visit relatives, he feels like he doesn’t belong there, that he’s an impostor. That’s a feeling I can relate to more and more.

While in my mind I am a Norwegian living in Canada, and while I follow news from “home” and keep in close contact with family and friends, when I travel to the places I grew up it’s less and less like going home, more and more like traveling to a foreign country. A lot changes in 20 years. Culture, language, community, even roads and buildings. My school was razed and a new municipal building erected. Entire new districts have been created in Oslo. The Norway I feel like I belong to is no more. It only exists in my mind. It makes me, a person who left one fully functioning and democratic country for another, feel unmoored, impermanent, stuck in a liminal space between identities. I cannot begin to imagine how this feels for someone who fled a country in conflict, often under duress, and who may never be able to return, or will return to an entirely different country.

Together, the future

I look at our son and realize the world I grew up in is not the world he lives in. As a child I thought I might visit the USA once in my life. In the years before the COVID-19 pandemic I crossed our southern border dozens of times a year. As a child, making a phone call from Norway to my relatives in Denmark was prohibitively expensive. Today, my son has video chats with his grandparents on the other side of the planet with no meaningful lag and at no cost to any of us. When I went to school, all the kids looked like versions of me. In our son’s kindergarten, every child is the child of first or second-generation immigrants. Between these 20 kids, 8 languages are spoken. Most of them are bi- or tri-lingual. Their parents are from different cultures, ethnicities, religions, and regions, and about half the families are multi-cultural like ours.

When my Norwegian family and friends ask me to describe what Canada is like, the first word that comes to mind is multicultural. Living in Burnaby, a part of the Greater Vancouver Regional District in British Columbia, I am surrounded by a tapestry of cultures. Our neighbours to one side are Italian, on the other Taiwanese. Across the street is a family from India. A quick walk from our house we can get authentic Taiwanese Boba, Korean BBQ, Chinese Hot Pot, Hong Kong style Dim Sum, Vietnamese Phở, Italian pasta, Turkish halal kebabs, Indian curries, Japanese Teppanyaki, even Chinese/Indian fusion. My friends hail from every corner of the globe, and bring all variants of their ancestral cultures to the table when we meet. We discover and laugh at our cultural differences, our misunderstandings and discoveries, trials and tribulations, and gather around this common knowledge that we all came from somewhere to be together and build a future for our kids and for ourselves.

When our son is a few years older I will ask him what it means to be “Canadian and Norwegian and Chinese” and I look forward to his answer. Because whatever it is, it will be a description of the future he and his friends create together. I can already see it today: He is a plurality of cultures, and so are his friends. After two years of pandemic lockdowns, they find privilege in being together and sharing time and space with one another. And I hope for… no. I will actively help build a future for these kids where the privileges afforded to me as an immigrant presenting as a white heterosexual English-speaking man are extended to all people, wherever they find themselves and wherever they are going in the world.

That is what I offer. I hope you will join me.

Cross-posted to LinkedIn.