Categories
My Opinion

AI is a Loom: The End and the New Beginning of Web Dev

Web dev as we know it is deprecated. We just haven’t downloaded the latest version yet. What comes next is a metamorphosis, a paradigm shift, a revolution changing how we work, how the web works, and everything we know.

In March 2023, OpenAI co-founder Greg Brockman drew a rough sketch of a website on a livestream, uploaded it to GPT-4, and got fully functional HTML, CSS, and JavaScript in return. That feature – dubbed “multimodal GPT” – is shipping to all ChatGPT users in the coming months. Soon you’ll be able to open an app, provide some instructions, and have a fully designed and populated website with real content, an ecommerce store, and a social media marketing campaign built in minutes with the help of AI.

“I’m not saying coding is a dying craft; I’m saying the craft of actually writing code on a keyboard to be input into a coding environment for further processing will become a narrow specialty. Most people who write code today will not be writing code in the near future. Many of them don’t need to and shouldn’t have to write code today! Writing code instructing computers to do things when we have software that writes better code for us makes no sense, and this whole notion of everyone learning how to code feels more and more like telling high school students their future employment hinges on their ability to master T9 texting.”

Me, in an email dated November 2021

Web development stands on the precipice of an AI-driven metamorphosis. In front of us, the demarcation line between two paradigms: The old world of human-written code aided by machines, and the new world of AI-generated code aided by humans. 

For the web, AI is the Jacquard loom. 

For most developers, this means transitioning from writing and debugging code to directing AI what to build and checking its work – a shift from hand-coding the fabric of the web to using that fabric as material for building new experiences.

The implications are enormous, not just for our work but for the web’s future. As AI becomes part of our practice, our role shifts from writing standards-based, accessible, progressively enhanced code to ensuring AIs use the latest, most advanced standards to build the future. If we don’t embrace this new role, progress will stall as AI defaults to established standards and ignores new tools and best practices.

Here’s what I see:

Very soon the public will access AI services that create websites in minutes from prompts, sketches, and assets. Wix teased this, and competitors aren’t far behind.

I’d be shocked if Canva and Figma don’t unveil full “draw it, AI builds it” services by year’s end. Soon there will be ChatGPT plugins that build websites for you from scratch. This is inevitable.

When I say this out loud, the immediate response is usually some version of “AI can’t write good code” or “AI doesn’t understand users” or “AI makes constant mistakes.” All true, and all irrelevant. This isn’t about AI writing code or autocorrecting our code. AI will instead weave together the websites and apps we ask for using the well-documented and well-established frameworks, templates, build processes, and automation we’ve created to make our work easier.

For walled gardens like Wix, this is straightforward: their back-ends, systems, and design languages allow AI to rapidly wire sites to user specifications. And that’s just the start. We’ll soon see new semi-agentive tools supporting various stacks, so you (with the help of an AI) can select frameworks, design systems, ecommerce platforms, etc. based on project needs without writing or knowing how to write code.

Look at what the people over at Builder are doing, then add an agentive AI on top and you start getting the idea.

What People Want, What Automation Provides

Two massive waves of progress are converging:

Developers have spent a decade building automation tools, frameworks, and systems to improve dev experience. You can now spin up a Next.js site in GitHub Codespaces in minutes without writing a single line of code. Component-based frameworks provide code consistency and add LEGO-like block assembly to web development. Design systems, component libraries, style guides, and tokens enable rapid prototyping and consistent UX/UI. Linting, optimization, accessibility, testing, CI/CD are largely automated. Bridging layout and code is reality. Often, we just connect these pieces. AI serves well as an automated and intelligent loom weaving these pieces together into workable materials.

On the user side, people want friction-free, no-maintenance, always-on experiences. Faced with the choice between the DIY bazaar of the open web and the shiny mall of app-based walled gardens, they pick the moving sidewalk of least resistance. And they are willing to pay for that convenience: with money, and by giving up their data and privacy to surveillance capitalism. Where publishing on the web used to mean standing up a WordPress site (or paying someone else to do it), today bloggers, creators, influencers, and small businesses opt for TikTok, Instagram, YouTube, Medium, Substack, Shopify, and Linktree.

The web we lost is a bygone web a larger and larger portion of the public never experienced, and concepts like self-hosting seem archaic and inefficient to the masses. Counterintuitively, AI may help bridge this gap and reignite interest in carving out your own space on the web by lowering the barrier to entry to describing what you want and watching it manifest.

What gets pushed down as these waves converge and elevate the capabilities of the web-using public is the need for traditional developers. When the options are either an AI site from Wix built from a prompt in minutes or a complex and expensive custom build that takes months to complete, there’s no choice for most people and businesses. When the Jacquard machine automated weaving, hand-woven textiles went from essential commodity to luxury art form, and manual weaving morphed from a commodity skill into an artistic pursuit. Weavers still exist, and bespoke fabrics are still made, but the vast majority of textile products are made by machines guided by humans who spend their time designing the products instead of making the materials. That’s what comes next for the web.

AI Creates Opportunity Space

This may sound like AI replacing humans. It’s not. Instead it’s a fundamental shift and refocusing of the role of the developer: From writing code to auditing AI-written code. From knowing how to wire together different frameworks to architecting the system that serves up the website. From fighting with CSS to fine-tuning AI-powered user experiences. 

The people currently working as coders will take a step up the ladder to focus on higher-order capabilities, using their expertise in languages and frameworks to help AIs produce the best output instead of doing the grunt work of writing the code. 

Web dev as we know it is dead. What comes next is a metamorphosis, a paradigm shift, a revolution changing everything we know.

Our new human focus as we move into this future together is to ease the persistent tensions found at the intersection between technology and humanity. AI can’t conduct UX research, design novel experiences, or innovate standards and best practices. That was always and will remain our territory. As AI takes over the work of weaving the fabric of the web, we do the work of making new things with those materials while improving their quality and inventing new ones.

In the short term, we’ll become AI managers – customizing, configuring, ensuring user flows and information architectures make sense, monitoring the generated code to ensure the latest standards are in use, and counteracting the inherent bias of AI to repeat prevalent patterns even when they are outdated. We’ll shift from writing code to deciding what the code should accomplish. To do that, we must all become experts at the bleeding edge of code, and invest our time in innovating new standards, patterns, and frameworks for the AIs to use. It’s a whole different job needing a whole new version of the skills we’ve always had.

This transformation is happening now. For consumers and SMBs, it will be lightning fast. For institutions and large enterprises it will be slower, hindered by legacy systems, institutional inertia, and resistance to change. But it’s coming. 

For web workers, it is no longer enough to know the core languages and established best practices. UX, interaction design, accessibility, and innovation are our new bread and butter, built on a strong foundation of modern web standards and bleeding-edge HTML, CSS, and JavaScript.

The future of the web belongs to those who strategically apply AI to meet user needs. With proper guidance, AI can supercharge our work, provided we put ethics, accessibility, user experience, and innovation front and center.

We build the future with every decision we make. How we decide to work with AI decides what future we get to live in.

Cross-posted to LinkedIn and dev.to

Categories
Internet

“Ice Cream So Good” and the Behavioural Conditioning of Creators

If you’ve been on TikTok or Instagram over the past few months, chances are you’ve come across creators exclaiming “yes, yes, yes, mmm, ice cream so good” while moving in repetitive patterns akin to video game characters. There’s also a good chance you’ve thought to yourself “This is ridiculous! I would never do something like that” even though you and I and everyone else perform the same type of alchemic incantations to please the algorithmic gods of the attention economy on a daily basis.

Every time we use a hashtag or think about the SEO of a piece of content or create a post to match a trend or ask our viewers to “hit that bell and remember to like and subscribe,” we are acting on the behavioural conditioning social media and other platforms expose us to, changing our behaviour to get our meagre slice of the attention pie (and maybe some money to boot). Look no further than YouTube, where for every type of content there is an established style and creators mimic each other so closely it’s becoming hard to tell them apart.

The only substantive difference between optimizing your article title for SEO and exclaiming “ice cream so good” when someone sends you a sticker on TikTok live is the latter act comes with a guarantee of financial return.

“Yes, yes, yes, gang gang, ice cream so good”

Dubbed “NPC streaming,” the latest trend on TikTok is being met with equal parts astonishment, concern, and mimicry. The core idea is simple: TikTok has a feature where creators can host live events. During those live events, viewers can buy tokens in the form of stickers, animations, and filters they can send to the creator in real time. The creator in turn gets a tiny percentage of the profits from the sticker or animation or filter being used.

In other words, the more viewers a creator gets, and the more incentive they give those viewers to send them stickers and animations and filters, the more money the creator (and the platform) gets. Crafty creators have figured out the easiest way to get people to send them these digital tokens is by responding directly to them. Thus if you send an ice cream sticker, PinkyDoll will smile and say “mmmm, ice cream so good.”

Creating live content encouraging users to send stickers is nothing new. A few years ago I saw a live of a man who pretended to try to have a serious conversation about something while getting more and more outraged as people applied ridiculous filters to his face. The recent invention of the NPC streaming character is the refined distillate of this insight:

Forget about content – the easiest way for creators to earn money is by letting people control them directly through payment.

Based on recent reporting, the most successful NPC Streamers can earn thousands of dollars per day doing this work. TikTok takes a reported 50% of their profits, so this trend is enormously lucrative for the platform even when the creators themselves don’t earn all that much.

Please Please Me Like I Please You

In a recent article titled “Operant Conditioning in Generative AI Image Creation,” UX pioneer Jakob Nielsen makes the following observation:

“Generative AI for images torments users with alternating emotions of euphoria and anguish as it metes out sublime or disastrous pictures with wanton unpredictability. This makes users feel like the rat in an operant conditioning experiment, entrapping them in a ceaseless pursuit of rewards amidst sporadic punishments.”

Replace “Generative AI for images” with “monetization schemes on social media platforms” and the observation rings just as true:

From SEO to NPC Streaming, the opaque and ever-changing algorithms trickling out a tiny share of the enormous profits social media platforms make off their creators are giant (hopefully) accidental operant conditioning experiments demonstrating just how far we humans are willing to go in our pursuit of a promised reward.

Social media monetization is exploitationware (aka “gamification”) in its purest form: Creators are placed in an environment where if they stroke the algorithm the exact right way at the exact right time, there may or may not be a payout at the end. Like a rigged slot machine, most creators get close enough to see the big win, but never quite close enough to grab it. Like a casino the platforms promote a select few creators who actually hit the jackpot, making everyone else feel like if they just try one more time, they might win as well. And like every subject in an effective operant conditioning system, we alter and conform our behaviour to the conditions of the system in a never ending chase to get that dopamine fix of cracking the code and getting our reward.

In the book “The Willpower Instinct“, author Kelly McGonigal describes how this exploit of our reward system works:

“When dopamine hijacks your attention, the mind becomes fixated on obtaining or repeating whatever triggered it. (…) Evolution doesn’t give a damn about happiness itself, but will use the promise of happiness to keep us struggling to stay alive. And so the promise of happiness–not the direct experience of happiness–is the brain’s strategy to keep you hunting, gathering, working, and wooing. (…) When we add the instant gratification of modern technology to this primitive motivation system, we end up with dopamine-delivery devices that are damn near impossible to put down.”

That’s creator platform monetization: A dopamine-delivery system encouraging creators to seek happiness in cracking the code, gaming the system, and chasing the promise of happiness in the form of a paycheck.

TV-shaped eyes

Growing up in the 1980s there was much talk among the adults about their kids developing “TV-shaped eyes” from watching too many cartoons. Never mind that in Norway in the 1980s there was only one channel, and it aired one hour of children’s programming per day, at 6pm, right before the evening news.

The underlying concern was prescient though: Our media consumption not only consumes our time and attention; it alters our behaviour in significant ways. Social media platforms have taken this to the ultimate extreme through their incentive-based monetization systems, and we are all paying the price for it.

SEO is about gaming the ever-changing search engine algorithms to get higher rankings. NPC streaming is about gaming the TikTok monetization system to get as much money out of it as possible. If it were easy, if the platforms shared their profits automatically with every creator, the dopamine incentive of the game would go away, we would stop posting, and shareholder profits would tank. So instead we get the attention economy and its latest, purest incarnation: The NPC Streamer.

Breaking the cage

The engine driving the NPC Streaming trend (and every other trend on creator platforms) is monetization, and the monetization models they use are fundamentally inequitable to both creators and passive users. Rather than paying creators their fair share of platform profits, platforms use the gamification of payments as behavioural conditioning to get creators to make content that encourages other users to consume more content and pay money into the system. What we need is something else, something more in the shape of a monetization system that pays creators for the quality of their content and the value and utility people derive from it.

What got us here won’t get us anything but fake ice cream. I welcome your ideas about how we break this cage and build a better online future for us all.


Cross-posted to LinkedIn.

Categories
AI

The Zeroth Law of AI Implementation

“An AI may not be used to harm humanity, or, by not being used, allow humanity to come to harm.”

As Artificial Intelligence systems (AI) like #ChatGPT enter into our lives and our work, we need some basic guidelines for how we implement them going forward. Here’s a place to start:

The Zeroth Law of AI Implementation:

An AI may not be used to harm humanity, or, by not being used, allow humanity to come to harm. Implement AI in such a way that you treat humanity, whether in your own person or in the person of any other, never merely as a means to an end, but always at the same time as an end, and treat AI always as a means to an end and never as an end in itself.

The sufficiently esoteric sci-fi and philosophy reader will note these are rewrites and recontextualizations of Isaac Asimov’s Zeroth Law of Robotics and the second formulation of Kant’s Categorical Imperative.

The Breakdown

My proposed Zeroth Law of AI Implementation aims to ground us in a shared purpose and vision for how we build our future with AI. It sets forth four basic principles:

  1. Do No Harm* with AI.
  2. Harm can be caused by having a tool and refusing to use it or otherwise limiting its use. For example, harm can be caused by limiting access or capability based on factors including socio-economic status, geography, disability, etc.
  3. Humans are always ends in themselves, and must never be considered only means to an end (see “The Age of Surveillance Capitalism“).
  4. AIs are always means to (human) ends, and must never be ends in themselves.

* We need a clear definition of what “harm humanity” means, including 1) who gets to determine what constitutes harm, 2) who can be harmed, and 3) who adjudicates whether harm has been caused.

The Reason

The goal of technology is to build futures where humans have the capabilities to be and do what they have reason to value. AI technology presents an infinite possibility space for us to make that happen. AI technology also has the potential to limit our own capabilities and cause real harm.

These statements seem self-evident, yet when technology is developed and implemented, we often forget its core purpose as we are blinded by the capabilities of the technology (it becomes an end in itself), by the power it affords us (those with access gain powers not afforded to others), and by the financial opportunities it affords us (using technology to turn humans into means for financial ends).

Grounding ourselves in a common principle like this proposed Zeroth Law of AI Implementation reminds us of the core purpose of technological advancement. It gives us a reference point we can use to evaluate and challenge our decisions, and a shared foundational platform we can build further principles on.

Discuss

Right now, at the start of our future with AI, we have a unique opportunity to talk about where we want to go next and how we want to get there. That conversation starts with talking about core principles. The Zeroth Law of AI Implementation is my contribution to this conversation. I’d love to hear yours!


Cross-posted to LinkedIn.

Categories
Book Reviews

Book Review: How To Be Perfect by Michael Schur

4 1/2 of 5

While the book doesn’t teach you how to be perfect, you’ll be a better person for reading it.

If ever I teach an intro to moral philosophy class, this book will be prerequisite reading. Sold as a fun book about ethics from the creator of the TV show “The Good Place,” this is actually a solid introduction to the academic subject of ethics, sprinkled with humour and real-life anecdotes to make it relatable.

“How To Be Perfect” is a semi-biographical story about a TV writer who goes on a journey through moral philosophy to try to figure out how to be a better person. And maybe more importantly how to teach his young children how to be the best they can be. Not to spoil anything, but at the end of the book there’s an entire section where the author talks to his kids about how to be good people, and it is wonderful. 

The book introduces a variety of branches of moral philosophy with questions like “Should I lie and tell my friend I like her ugly shirt?” and “Do I have to return my shopping cart to the shopping cart rack thingy?” and “Should I punch my friend in the face for no reason?” And this is where the book truly shines: It succeeds at framing real moral problems in a comedic yet relatable way and introducing ethics to people in a way that actually makes practical sense to them.

Something we all need more of.

I suggested “How To Be Perfect” to my design ethics book club as a light read for the holidays. Two chapters in I dreaded the comments I’d get from my friends. “Light read? I bet Kant would have some opinions on passing off a textbook as an enjoyable holiday treat!” Then I continued reading and realized I’d sold my friends and the book short.

“How To Be Perfect” is an imperfect but damn fine effort at making the exceptionally challenging and often mind-numbingly turgid topic of ethics and moral philosophy fun and engaging. If you’re interested in ethics at all, and you’ve wondered where to start or worried it would be either too boring or too depressing, I recommend this book. In fact I recommend this book, period. And I’m not just saying that because I am a philosopher by education and deeply fascinated by ethics.

This book sets out to do something moral philosophy sorely needs: Make ethics make sense, in a human and relatable way. Moral philosophy has a bad tendency of being at the same time overbearingly moralistic (“here’s how you’re doing everything wrong in your life, and here are some impossible standards you must follow to right yourself!”), philosophically partisan (“my form of ethics, in my specific interpretation, is the only real ethics. All other ethics are wrong!”), and fundamentally unrelatable (“Act only according to that maxim whereby you can, at the same time, will that it should become a universal law.”) Michael Schur tries (and mostly succeeds) at balancing on a knife’s edge between staying true to the academic foundations of moral philosophy and framing the many theories covered in real-world scenarios, funny anecdotes, personal experiences, and a heavy helping of yelling through a bullhorn at the ivory towers of academic philosophy.

This last point is probably best exemplified in the chapter on charity, where Schur points out how moral philosophers of different traditions contort themselves into Gordian knots over the moral failings of massively wealthy people using charitable giving as a self-congratulatory popularity contest while in the real world the money they raise actually does some good.

Schur also does something extraordinary in the book: He tries (and I sincerely hope he succeeds!) to introduce a new term both to philosophy and to our common language: “Moral Exhaustion.” Let me quote from the book:

“even if we scale the triple-peaked mountain of Daily Stress, Serious Problems, and Circumstance, and (running on 5 percent battery power) try our very best to do the harder/better thing, we often fail miserably despite our best intentions. It. Is. Exhausting.”

Michael Schur, How To Be Perfect

I think moral exhaustion is a great description of the malaise we are all feeling in our lives and our work today, and I’m now using the term freely in my everyday language thanks to this book.

One major problem with moral philosophy (aka ethics) – and I say this as someone who studied moral philosophy for years at university – is its detachment from the real world and its separation into distinct traditions. You are either a Utilitarian or a Deontologist, a Virtue ethicist or a Contractualist, and whatever position you hold, you must defend your tradition against the others. (I am oversimplifying here, but this is a real struggle. Call it trauma from years of being an analytical philosopher taught by a faculty almost entirely composed of Kantians.) Throughout the book, Schur attempts to line up these and other moral philosophy traditions and theories and thread a needle straight through them to show that rather than treat ethics as One Theory to Rule Them All, we are best served by an Everything, Everywhere, All At Once approach to our decision making.

As an introduction to ethics and moral philosophy, “How To Be Perfect” does a good job introducing the main branches of western philosophy (Virtue, Duty, and Consequentialist ethics), newer traditions like Contractualism, and even non-western traditions including Ubuntu and Buddhist ideas. This breadth stems from the impressive research Schur did while writing the TV comedy show “The Good Place,” which in reality is a covert psy-op to secretly educate people about ethics by making ethics fun.

Side note: Watching “The Good Place” I would typically at least once in every episode jump up and yell “ARE YOU KIDDING ME?!?!?!? They are doing a WHOLE EPISODE on [insert obscure moral philosophy thing]???!?!?!” To which my wife of endless patience would say “Sit down and watch the show.” Point being that show was astounding and if you haven’t watched it, I cannot recommend it enough. Because it is hilarious. And well written. And exceptionally acted. And also, it contextualizes ethics in a way that just makes sense.

Another side note: I recommend getting the audiobook version of this book. It is narrated by the author and the entire leading cast of “The Good Place,” with snarky footnotes from the book’s academic advisor Todd May and even occasional cameos.

How is “How To Be Perfect” not perfect? In brutal honesty I’ll say it reads like what it is: An introduction to moral philosophy written by someone who is at an introductory level in moral philosophy. Schur finds fascination in the typical places: The vileness and eye-watering absurdity of Ayn Rand’s Objectivism, the spectacle of Jeremy Bentham’s posthumous existence as a cadaver on display at a random university (content warning on that link), the turgidness of Immanuel Kant’s writings, etc. We’ve all been there.

In the same vein, in my opinion he makes two significant blunders – one historical and one of lack of foresight: 

He writes off Heidegger’s works due to their impenetrability and his much-discussed association with Nazism, ignoring the enormous impact Heidegger had on moral and other philosophy. As one of the members of my book club said, “I wish he (Schur) would go beyond just hints and snarky remarks to actually explain why he sidesteps Heidegger. I felt like he was making excuses for not reading the work.”

Schur also spends a fair bit of time towards the end of the book celebrating the works of Peter Singer and the longtermism his ideas helped inspire. Anyone paying attention to the collapse of crypto and the bizarre politics driving many Silicon Valley founders will know Singer’s ideals have become a breeding ground for … let’s call them problematic ideas from white men of enormous wealth and power about how we should structure and organize our society today to protect the people of tomorrow. I can’t help but think had “How To Be Perfect” been written in 2022 that entire section of the book would have been very different. So in honesty my critique on this point is a perfect example of an anachronism.

Let me be perfectly clear here: I consider these issues minor to the point of being irrelevant. This book is not an academic textbook, it’s a deeply personal book about morals and ethics that tries to do right by the subject matter and the reader and succeeds more than any similar book I’ve ever read.

Final thoughts

If you’re still with me at this point, you’re definitely the type of person who will enjoy this book, so go out and get it in whatever format you prefer. If, on the other hand, you are looking for a book to give to your friend who refuses to return their shopping cart to the shopping cart shed thingy, or to subtly tell your family member that it’s not OK to tell people their shirt is ugly even if it is, chances are it’ll be a nice decoration on a shelf and will eventually end up in a donation box. “How To Be Perfect” is not light reading for an airplane ride, in spite of how it’s marketed. It is so much more, and because of this it demands much more from the reader. Just like real life demands so much more from us all. And that is why this book is worth reading.


Cross-posted to LinkedIn.

Categories
Accessibility AI

The Real Impact of AI on Education will be Accessibility

I have a confession to make: My university degree sits atop a mountain of lies I told to mask my dyslexia. Now AI is positioned to make education more accessible so future students don’t have to lie to get where they need to go.

I read maybe a quarter of the required materials for my university studies. My term papers are filled with quotes from books I never even opened. I became a master at convincing my fellow students to give me summaries, quick explainers, and relevant quotes from materials I knew I’d never be able to finish in time to meet the inaccessible demands of academia. And after 5 years and a degree, I abandoned my hopes for a graduate degree in philosophy to pursue other avenues where my reading disability was not a constant blocker.

Today, as we stand at the beginning of a new era of computing, one shining beacon in the infinite possibility space of AI is making education more accessible.

AI can make education more accessible today

Here are a few ideas for how we can implement existing AI tools in education right now to dramatically improve accessibility:

  • AI audiobooks on demand: AI can generate natural-sounding audiobooks from any written text. Audiobooks of academic texts are hard to come by and prohibitively expensive. AI can solve that problem and allow the reader to choose their preferred reading modality. This tech already exists (Apple has shipped it).
  • AI translations to any language: AI models are very good at translating text from one language to another. This means academic texts written in one language can now be accessible in any language. Again, the technology already exists in the form of Google Translate etc.
  • AI reading level adjustment: You know that “explain it to me like I’m five” meme? AI language models like ChatGPT can do that, and ensure the salient points and meaning of the text are preserved. Academic texts are often superfluously arcane and turgid. I predict in the near future we’ll have browser plugins and other AI-powered services where you can set the reading level and writing style of any text to your preference and preserve its meaning (see the sketch after this list).
  • AI summaries on demand: Using summaries of long texts to reinforce learning has a long tradition in academia. There’s a lucrative industry and pop culture mythology around services like Coles Notes and CliffsNotes. AI can be used to generate custom summaries from any text, large or small, to make it more accessible to people like me who can’t read the whole thing.
  • AI assessments, flashcards, and other learning tools: Dump any text into an AI and ask it to generate assessments, flashcards, questions, examples, or other things. The possibilities here are limitless.
  • AI auto-captions and transcripts of audio, video, and in-person events: 5 years ago captions were an expensive nice-to-have very few could afford or cared about. Today, auto-captions are available on every LinkedIn and YouTube video, and most platforms also provide verbose transcripts on the fly. These same technologies are used for live captioning in video chat apps like Teams, and can be used at live events including in classrooms. The technology is available, and quite frankly I can’t think of any good reason it is not immediately implemented across all educational campuses worldwide to provide improved accessibility. Imagine knowing you will have a transcript at the end of every lecture so you can focus on understanding what is being taught instead of just writing it down!
  • The possibility space here is infinite!
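As a concrete sketch of the reading-level idea above – my illustration, not an existing product – here’s how the pieces could fit together using the OpenAI Node SDK (v3-style API). The model name, prompt wording, and environment variable are assumptions, and the output would still need human review:

  // Minimal sketch: ask a chat model to rewrite text at a target reading
  // level while preserving its meaning. All names here are illustrative.
  import { Configuration, OpenAIApi } from "openai";

  const openai = new OpenAIApi(
    new Configuration({ apiKey: process.env.OPENAI_API_KEY })
  );

  async function adjustReadingLevel(text, gradeLevel = 4) {
    const response = await openai.createChatCompletion({
      model: "gpt-3.5-turbo",
      messages: [
        {
          role: "system",
          content: `You rewrite text at a grade ${gradeLevel} reading level while preserving the salient points and meaning of the original.`,
        },
        { role: "user", content: text },
      ],
    });
    return response.data.choices[0].message.content;
  }

  console.log(await adjustReadingLevel("Academic texts are often superfluously arcane and turgid."));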

Let me guess: You have concerns. About the accuracy of AI summaries and transcripts and translations. About whether leaving AIs to do this work will take jobs away from humans. About whether students relying on AIs will result in the students not learning anything.

Here’s my reality check to you: The alternative is students not learning at all. The alternative is students like me lying through their teeth to pass arbitrary tests of reading speed (that’s what mandatory minimum page counts are). The alternative is large numbers of people being excluded from education because we’ve chosen to tie access to education to reading ability and speed rather than literally anything that has to do with actual learning.

As we build our future with AI, our conversations about AI in education should be about how these technologies can lower barriers and make education accessible and possible for everyone.

Reading ≠ Learning

Much handwringing and professional dread has been published in the past months around the risks of AI models like #ChatGPT being used by students to plagiarize materials and generally cheat on their papers and exams. Headlines like “ChatGPT passes exams for MBA courses and medical licences” are addictive clickbait in an environment of economic uncertainty and pandemic fatigue. But look beyond the immediate “the graphing calculator / internet / smartphones / YouTube / chat apps / smart watches / AI will destroy education and plunge us back into the dark ages” response and you see the potential of this technology to make education not only better, but more available and accessible to more people. If we build the tools to make that future happen.

I have moderate dyslexia, meaning I read at about speaking pace, and when I read, the cognitive load of just getting the words right and keeping track of where I am is exhausting. Reading a regular-length novel cover to cover can take me 6 months. Reading a densely worded academic textbook is a multi-year project. But in university, the expectation was for me to consume up to 5,000 pages of material each semester. So while I never wanted to lie to my fellow students, my professors, and my academic advisors about my reading, I was forced to. Academia, and education in general, is built on a core assumption that people’s ability to read, and read fast, is tied to their ability to learn, grow, innovate, and build the future. I am walking proof this is not the case, and I’m also a prime example of how reading speed is a permanent blocker for many educational paths and the careers that follow.

Which is absurd because it is not the reading that matters, it’s understanding the ideas captured in the reading materials.

Enter AI

Last week I was trying to explain the ethics of Amartya Sen’s Capability Approach to a friend. One of the core ideas of the Capability Approach is that people should have the capabilities and functionings to be and do what they value and have reason to value in their lives and environments. Which begs the question: What is the difference between what we value and what we have reason to value?

This is not something Sen answers directly. Instead it is something you glean from reading several of his published works. As with much of academic philosophy, this distinction is at the same time essential to understanding and lacking in common sense explanation.

So I asked ChatGPT: “What is the difference between what we value and what we have reason to value?”

Here’s what the AI answered:

“What we value refers to the things that we personally consider important or desirable, such as material possessions, relationships, or personal achievements.

What we have reason to value, as described by Amartya Sen in the capability approach, refers to the things that we have the capability to achieve and that are consistent with our personal values and goals. In other words, it refers to the things that we would value if we had the opportunity to achieve them.

For example, one might value owning a house, but may not have the capability to buy one due to financial constraints. In this case, having a safe and affordable place to live would be something they have reason to value. In this way, the distinction between what we value and what we have reason to value is one of actual opportunity or capability. The capability approach takes into account the real opportunities for people to achieve the things they would value if they had the capability to do so.”

Screenshot of the ChatGPT output quoted above.

This is extraordinary. Notice how the system not only provides an understandable and plain language explanation of the subject, but also manages to pick up on the obscure origins of my question without the prompt providing what I consider sufficient context?

This explanation of the difference between what we value and what we have reason to value is so good it should be used in introductory textbooks on Sen’s work. And it serves as an example of what AI can do for education if we build AI-powered systems for education.

And Sen’s distinction between what we value and what we have reason to value is salient to this issue: We as a society value education, and we have reason to value education for all. Even so, there are millions of people around the world whose real opportunities to access education are limited for an endless list of reasons. AI can grant us all the capabilities to access education in new and better ways, allowing more of us to do and be what we have reason to value in our lives, our schools, and our work!

Our future with AI is the future we build

We are at the beginning of the beginning of our future with AI. Ahead of us is an infinite possibility space. And more now than at any time in human history we have the capabilities and agency and communal connection and global wealth and power to build positive and meaningful futures for ourselves and those around us together.

It starts with looking beyond the perpetual shock of disruption every new technology brings to what those new technologies can do for us as we integrate them into our lives and our work. It starts with talking about the futures we want to build for ourselves and how we make them real. It starts with seeing the world, thinking about how to make it better, and then making it happen.


Cross-posted to LinkedIn.

Header image: Screenshot of the Dyslexia Simulator from Harvard University.

Categories
AI

Stepping Into the Future: Pair Programming with AI

If we do this right, AI can make our jobs and our lives easier and give us time back to do the things we have reason to value. Pair programming with AI serves as a practical example.

With the realization of AI’s power comes well-justified concerns about how AIs will figure into our lives – and in particular our work. Look to any media outlet and you’ll find a dense fog of articles, videos, podcasts, and think pieces about whether, when, and how AIs will take people’s jobs, and whose jobs are most at risk right now.

In this darkness, let me put up a bright beacon on the horizon of possibility and give you a glimpse of what a future of human-AI collaboration can look like.

Explain it to me

You bump up against a problem at work: an Excel formula you’ve forgotten, an inscrutable data processing script written by people no longer on the team, the right way to invoke a particular JavaScript function while being mindful of state. These situations are common, and they consume significant time and cognitive resources. They are also what I call “robot work,” as in the kind of repetitive rote work you can imagine a robot doing. 

Now imagine having a skilled co-worker on call, at all times, ready to help you find and explain the right formula, document that inscrutable script, and refactor or even build from scratch that JavaScript function you need.

That’s what AI can be for us: Just-In-Time assistants for all the tedious, time consuming, and rote robot work taking up our valuable time and cognitive capacity.

If you’re a developer, you can experience this future today via various AI integrations including GitHub Copilot and ChatGPT.

GitHub Copilot Labs panel in VS Code.

GitHub Copilot coupled with the new GitHub Copilot Labs extension in VS Code gives you a pair programming assistant right in your development environment. Highlight any block of code and in the Copilot Labs panel you can ask for an explanation of the code, have it translated into another (applicable) code language, use a series of “brushes” on it including making the code more readable, adding types, cleaning, chunking, even documenting. You can even use Copilot to write and run tests on your code.

A myriad of ChatGPT extensions, including Ali Gençay’s ChatGPT for VS Code, do much the same via a slightly different route. Authenticate the extension with OpenAI’s ChatGPT API, highlight any code, and you can ask ChatGPT to add tests, find bugs, optimize, explain, and add comments automatically. You also get the ability to start a full chat with ChatGPT in a dedicated panel right inside the editor where you can talk to the AI in more detail about whatever you want.

Features from the ChatGPT VS Code extension.

Time and Energy

This past week I’ve been working on a large project involving code written by someone else. The JavaScript of the project has been co-located in two giant files, and is an unruly mix of carbon copies of standard components and functions, modified code copied from documentation sites, and custom code. As is often the case, the documentation is lacking where it’s most needed.

For my project, I need to refactor (rewrite code to do the same thing in a different way) large parts of the code, and add extensive documentation along the way. That means figuring out how the code works, adding comments to clearly document and explain it, reorganizing the code so it can be split into components, cleaning up and updating the code to modern standards, and finally splitting it up into discrete modules. This is robot work: rote repetitive tasks consuming significant time and cognitive energy better spent focusing on what the code is for: Serving as an example for a course on JavaScript build processes.

In the past, I’d end up investing several days doing this work. Now, with the help of AI, the workload is significantly reduced. This, I think, is an instructive example of how our future with AI can unfold: with AI assisting us as we do our work rather than taking over that work.

Refactor this

Both GitHub Copilot and ChatGPT are conversational AIs. You chat with them as you would chat with a person. You can ask questions, give instructions, and ask them to perform tasks for you. Using AIs as pair programmers, you do a combination of all of this and more.

If you’re using the VS Code extensions mentioned above, they are already set up for the correct context. In the case of ChatGPT you can also use it as a stand-alone pair-programmer with some basic setup:

To start, set the stage (literally) by instructing the AI on how you want it to behave. In a new chat, provide a prompt similar to this:

“You are a pair programmer specializing in JavaScript. You help refactor code to bring it up to ES6 standards. Is that understood?”

Setting the stage for ChatGPT to provide useful feedback.

The format here is “you are a [some role]. You [perform some skill/action]. Is that understood?” The last question gives the AI an opportunity to state how it is setting itself up based on your instructions and gives you an opportunity to provide further instructions. In my case ChatGPT responded as follows:

“Yes, that is understood. As a language model, I am able to understand and provide assistance with various programming languages, including JavaScript and its latest version, ES6. I can help refactor code to bring it up to current standards, as well as provide guidance on best practices for writing maintainable, efficient code.”
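You can run the same stage-setting outside the chat UI too. Here’s a minimal sketch – my illustration, not part of the extensions above – using the OpenAI Node SDK (v3-style API); the model name and environment variable are assumptions for the example:

  // The "you are a [role]" stage-setting, sent as a system message.
  import { Configuration, OpenAIApi } from "openai";

  const openai = new OpenAIApi(
    new Configuration({ apiKey: process.env.OPENAI_API_KEY })
  );

  const response = await openai.createChatCompletion({
    model: "gpt-3.5-turbo",
    messages: [
      {
        role: "system",
        content:
          "You are a pair programmer specializing in JavaScript. " +
          "You help refactor code to bring it up to ES6 standards. " +
          "Is that understood?",
      },
      // Follow-up turns carry the code you want explained or refactored:
      {
        role: "user",
        content: "Refactor this to ES6: var add = function (a, b) { return a + b; };",
      },
    ],
  });

  console.log(response.data.choices[0].message.content);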

From here you can for example paste in a JavaScript function and ask the AI to help find an error and resolve it:

ChatGPT finds and explains issues in JavaScript.

In response, ChatGPT provides an explanation of the errors it discovered, prototype examples of solutions to the issues, and finally a full refactoring of the pasted code with the issues resolved.
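To make that concrete, here’s a hypothetical example of the kind of round trip described above – my invention, not the actual code from my session. The “before” contains a classic var-scoping bug; the “after” is the style of ES6 refactor the AI typically proposes:

  // Before: every click handler logs the final value of i,
  // because var is function-scoped rather than block-scoped.
  var buttons = document.querySelectorAll("button");
  for (var i = 0; i < buttons.length; i++) {
    buttons[i].addEventListener("click", function () {
      console.log("Clicked button " + i);
    });
  }

  // After (ES6): let is block-scoped, so each handler captures its own i,
  // and a template literal replaces the string concatenation.
  const fixedButtons = document.querySelectorAll("button");
  for (let i = 0; i < fixedButtons.length; i++) {
    fixedButtons[i].addEventListener("click", () => {
      console.log(`Clicked button ${i}`);
    });
  }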

This kind of contextual response not only helps solve immediate problems, but also teaches you what’s wrong and how to fix it.

This is invaluable for people learning to code and people working with code in any capacity, which is why I’d strongly discourage any teacher or manager who is right now trying to figure out how to block people from using AIs in their work. AIs reduce the need for Googling or looking up code examples on documentation sites, coding forums, and open source repositories. Instead they give you contextual explanations and references related to your specific code, and even help you with refactoring. This is the future of work, and it gives us more capabilities as workers.

  • Have some code you can’t make heads or tails of? AI can explain what it does. Computers are much better at parsing logic-based languages than humans, and conversational AIs like ChatGPT are specifically constructed to output human-sounding language, making them ideal tools for decrypting complex code for human consumption.
  • Have some code in need of documentation? AI can write a function description, inline comments, or whatever you prefer based on your instructions.
  • Need to refactor based on specific parameters? AI can get you started.
  • I could go on but I think you get the idea.

I’ve worked alongside these AI pair programmers for the past year and a bit, and I can say with absolute conviction these tools and materials will make our lives better if we use them right and integrate them into our lives as helpers for, rather than replacements of, human labor.

In my experience, pair programming with an AI feels like working with an overly polite person with encyclopedic knowledge of coding and no awareness of what they don’t know. And this constitutes just our first timid steps into the infinite possibility space we are entering as AIs become our assistants.

The beginning of the beginning

As you interact with AI today, be constantly aware of where you are: At the beginning of the beginning of a new era. While these tools are powerful, they are not omnipotent. Far from it. They are shallow, error prone, and while they sound convincing they cannot be trusted. A good mental model for what they produce right now is bullshit as defined by Harry G. Frankfurt: It looks true, and it may be true, but some of the time it’s just plain wrong and the AI will still present it as the truth. While they talk like humans, AIs are not conscious or sentient or aware. They have no human comprehension of your question or their answer. They are advanced pattern recognition systems that tumble down enormously complex decision trees any time a prompt is provided, issuing human-sounding strings of text (or code) with a statistically high probability of being the kind of answer their human trainers consider correct.

When I asked ChatGPT to correct a function containing a deprecated method, it corrected the syntax of the function but kept the deprecated method. When I told it the method was deprecated, it omitted it and refactored the code, but the result used a similar-sounding method that serves a very different purpose and was therefore non-functional and just plain wrong.
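A hypothetical illustration of that failure mode (mine, not the code from the session): JavaScript’s deprecated substr and its similar-sounding sibling substring take different arguments, so a naive swap silently changes behavior:

  // substr(start, length) is deprecated; substring(start, end) sounds
  // similar but takes an end index, so the two are not equivalent.
  "JavaScript".substr(4, 6);    // "Script" – start index 4, length 6
  "JavaScript".substring(4, 6); // "Sc"     – start index 4, end index 6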

When I asked ChatGPT to find an error in a large body of code, it found two real errors and invented a third one, going as far as referencing use of a method that wasn’t even in the code provided.

These examples highlight why I see AIs as tools and materials rather than replacements for human labor. They have no understanding, no contextual awareness, no ability to do creative or lateral thinking. A human still needs to be in the loop; to make sure the output meets parameters, does what was intended, and follows best practices and standards (not to mention hold ethical responsibility for the work created). These things we call “AI” are very much artificial, but they are not intelligent. 

Intelligence is added by the human operator.

Even so, the pair programming offered by these prototype AIs is an enormous leap forward for human workers. And you can easily see how this type of AI-driven assistance can be extended to other work and other tasks. 

I’ve come to think of them as overconfident colleagues with a lack of self-awareness. Because of how they are “trained” – being fed large volumes of data from a corpus lifted off the internet – their “knowledge” is limited to the coding world of two years ago. When it comes to modern features, frameworks, techniques, and standards released in the past two years, our current AIs know naught, and more importantly do not know what they do not know. Therefore, if you’re writing code on the bleeding edge of standards, you’re on your own. Or better yet: You’re training your future AI pair programmer! So the pressure is on to get it right!

The future is today

Having seen what AIs can do today, I desperately wish I had a looking glass to see what the future of work looks like. The potential here is infinite. The very best AI tools we have today are prototypes and MVPs trained on old data and limited in their scope. The AIs we’ll have a year from now, five years from now, ten years from now will be beyond anything we can imagine. And with these tools and materials in hand we can choose to build meaningful futures for everyone where we all have the capabilities to be and do what we have reason to value.

The future we are stepping into today is a future where AI is part of our lives, our work, our communities and our society. If you are alive today, and especially if you find yourself in a job, you are in the right place at the right time: These next few years is when we collectively get to figure out how AI fits into our lives and our work. This is when we set the stage for our futures with AI, and we all have a part to play. The work starts by asking yourself in what parts of your life you act like a robot, and whether you’re willing to part with that work and let an AI do it for you so you can do something else. 

If we do this right, AI will allow us to reclaim our time to be human.

Cross-posted to LinkedIn.

Categories
Book Reviews

Book Review: “How High We Go In The Dark” by Sequoia Nagamatsu

(4/5)

This is a book about death. Let me say that up front. It’s also a book about life after death – as in the lives of the people whose loved ones are facing or have passed the threshold of death. As post-pandemic novels go, this one ranks among the best I’ve read.

A pandemic. A dying child. A VR suicide group. A slow recovery. A spaceship. Thousands of years of longing experienced in a single lifetime.

“How High We Go In The Dark” reads like a series of short stories, tied together by time and global events. It’s contemplative in a way readers of “Station Eleven,” “Severance,” and “The Memory Police” will recognize. What sets this book apart from the others is its lack of a central protagonist or linear threaded story. In “How High We Go In The Dark” each chapter is a first-person narrative of a character met once, diary-like in its presentation. Each chapter stands alone and can be read as a singular unit. This is accentuated by the audiobook having a different narrator for each chapter.

Every word steeped in melancholy and longing, this book is not for those who seek joy and excitement. It is rooted in a deep sense of grief for a dying people, a dying planet; reflecting the nebulous grief and loss of past normality we’ve all experienced over the 3+ years of the COVID-19 pandemic.

What’s missing is relief: reading “How High We Go In The Dark” is riding perpetually just behind a cresting wave – feeling its resolution within reach but never quite getting there.

Read, with caution and emotional support.

Inspired by Christina Stathopoulos’s #BookAMonthChallenge and Mordy Golding’s yearly book summary, I’m committing myself to posting reviews of all the books I read in 2023. This is the first.

Categories
AI

Tools and Materials: A Mental Model for AI

“Language shapes the way we think, and determines what we can think about.”

Benjamin Lee Whorf

Before we begin, I asked ChatGPT to rewrite this article at a 4th grade reading level. You can read the result here.

Artificial? Yes. Intelligent? Not even close. It is not without reason things like ChatGPT are called “AI” or “Artificial Intelligence.” We humans have a propensity for anthropomorphizing – attributing human characteristics to – things that are not human. Thus if we are told something is intelligent, let’s say a very large computer system we can submit questions and get answers from, we look for intelligence in that thing. And if that thing is trained on our own language and art and mathematics and code, it will appear to us as intelligent because its training materials came from intelligent beings: Us ourselves.

“Artificial Intelligence” is a clever marketing term for computer models designed to appear intelligent even though they are not.

So, as we crash headfirst into the AI present and future, we need to reset our mental model before we start believing these things we call “Artificial Intelligences” are actually intelligent (again, they are not).

Tools and Materials

I propose we all start thinking of these things we call “AI” as tools and materials. Because that’s what they are and that’s how we’ll end up using them.

Sometimes we’ll use them as tools the same way we use our phones and computers and the apps on them as tools. Sometimes we’ll use them and what they produce as materials the same way we use printed fabrics and code snippets to create things. And sometimes we’ll use them as both tools and materials the same way we use word processing applications first as a tool with which we write a body of text and then a material as the thesaurus function helps us use more fanciful words and phrases.

Here are some basic examples to help you build the mental model:

AI as a tool performs a task for us:

  • Fill out tax forms, write contracts and legal documents.
  • Summarize text, rewrite text to a specific reading level.
  • Write code.
  • Online shopping including booking flights and hotels etc.
  • Any interaction with any customer service rep (CSR).
  • Magic eraser for images, video, and audio.

AI as a material generates something for us:

  • Simple stories.
  • Plot lines for stories.
  • News articles and summaries.
  • Images and other art.
  • Variants of a layout, or a theme, or an image, or a painting.

Thinking of AI as tools and materials rather than intelligent things with magical human-like powers is an essential mental shift as we figure out how to fit these things into our lives and our world. We have to move away from the linguistic trick their creators foisted upon us with their naming, and move towards the practical realities of what these things really are:

AIs are if-this-then-that machines using enormously complex decision trees generated by ingesting all available writings, imagery, and other human-made materials and filtering that data through pattern-matching algorithms.

They are regurgitation machines echoing our own works back to us.

And just like we are drawn to our own image every time we pass a mirrored surface, we are drawn to the echoes of ourselves in the output of these machines.
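As a toy analogy – mine, and vastly simpler than a real language model – consider a bigram text generator. It “learns” by recording which word follows which in its training text, then generates by replaying those observed patterns. Every word it emits came from the text we fed it:

  // A toy "regurgitation machine": record which word follows which,
  // then generate text by replaying those observed patterns.
  function train(text) {
    const model = {};
    const words = text.split(/\s+/);
    for (let i = 0; i < words.length - 1; i++) {
      (model[words[i]] ??= []).push(words[i + 1]);
    }
    return model;
  }

  function generate(model, start, length = 10) {
    const out = [start];
    for (let i = 0; i < length; i++) {
      const followers = model[out[out.length - 1]];
      if (!followers) break; // dead end: no observed continuation
      out.push(followers[Math.floor(Math.random() * followers.length)]);
    }
    return out.join(" ");
  }

  const model = train("the cat sat on the mat and the cat ran off");
  console.log(generate(model, "the")); // every word comes from the training text

Real language models work at an incomprehensibly larger scale, but the regurgitation principle is the same: nothing comes out that wasn’t, in some form, put in.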

Shallow Work and Human Creativity

Asked for one word to describe AIs, my immediate answer is “shallow.” You’ve probably felt this yourself without being able to put your finger on it. Let me explain:

There is a bland uniformity to AI output. It’s easiest to notice in generative AI images. Once you’ve been exposed to enough of them, they start taking on a very specific “AI-ness.” For all their variety, there is something recognizable about them – some defining feature that sets them apart from what we recognize as human-made images. That thing is shallowness.

AIs are conservative in the sense they conserve and repeat what already exists. They don’t come up with anything new. They are also elitist in the sense they lean towards what is predominant, what there is more of. They are swayed by trends and popularity and amplify whatever majority opinion they find in their training data.

This makes their output bland and uniform and shallow like a drunk first-year philosophy student at a bar: The initial conversation may be interesting, but after a few minutes you notice there’s little substance behind the bravado. I’ve been that drunk first-year philosophy student so I know what I’m talking about.

This means while AIs are great at doing shallow rote work, they have no ability to bring anything new to the table. They lack creativity and ingenuity and lateral thinking skills because these skills require intelligence. And AIs are not intelligent; they just play intelligent on TV.

Will an AI take my job?

Our instinctual response to any new technology is “will it take my job?” It’s a valid question: Jobs are essential for us to be able to make a living in this free-market capitalist delusion we call “modern society,” yet job creators have a tendency to let go of expensive human workers if they can replace them with less expensive alternatives like self-checkout kiosks that constantly need to be reset by a staff member because you put the banana in the bagging area before you chose whether to donate $2 to a children’s charity, or automated “voice assistants” that never have the answers to your customer service questions and only pass you to an actual human once you’ve repeated the correct incantation of profanity (try it, it totally works!).

So now that we have these things some clever marketing people have told us to call “AI,” are they coming for your job? Well, that depends:

If your job is shallow and constitutes mainly rote work, there’s a good chance an AI will enter your life very soon – as in within months – and become part of the toolkit you use to get your job done quicker. And if it turns out that AI can be trained to do your job without your intervention (by having you use it and thereby training it), there’s a non-zero chance it will eventually replace you. That chance hinges more on corporate greed than on AI ability, though.

If your job involves any type of creative, or deep, or lateral, or organizational, or original, or challenging, or novel thinking, AI will not take your job because AI can’t do any of those things. You’ll still work with AI – probably within months – and the AI may relieve you of a lot of the rote work that currently takes your attention away from what you were actually hired to do – but the AI is unlikely to replace you. Unless corporate greed gets in the way. Which it often does because of the aforementioned free-market capitalist delusion we call “modern society.”

What we all have to come to terms with today is we’re long past the point of no return when it comes to AI. While technology is not inevitable, technology often becomes so entrenched it is impossible to … un-entrench it. That’s where we are with AI. No matter where you live and what you do for work, for school, or in your own time, you’re already interacting with AIs in more ways than you can imagine. And these AIs are going to become part of your work, your school, and your home life whether you want them or not.

Our job now is to talk to one another about what role these things called “AI” are going to play in our lives. How do we use them in ways that don’t take jobs away from the humans who need them the most – the historically marginalized and excluded people who tend to hold jobs comprising mainly shallow rote work? How do we build them in ways that don’t cannibalize the creative works of artists and writers and coders and teachers? How do we incorporate AI into education to improve learning outcomes for students and build a more informed and skilled populace? How do we wrench control over our AI future from the surveillance capitalists and longtermists currently building the world to their libertarian techno-utopian visions?

How do we use AI and all technology to create human flourishing and build futures in which we all have the capabilities to be and do what we have reason to value?

If we don’t talk about the future, the future becomes something that happens to us. Let’s have this conversation.

Cross-posted to LinkedIn.

Categories
AI

Tools and Materials: A Mental Model for AI – the ChatGPT 4th grade reading level version

Note to the reader: I asked ChatGPT to rewrite my article “Tools and Materials: A Mental Model for AI” at a 4th grade reading level. The article below is the unedited output:

Sometimes, we think of things called “AI” or “Artificial Intelligence” as being intelligent. But they’re not really. They’re just computers that can answer questions. We humans have a tendency to give human characteristics to things that aren’t human. So if we think something is intelligent, like a big computer system, we might see it as being smart. But really, it’s just using the things we’ve taught it, like our language, art, and math.

That’s why it’s important to change the way we think about AI. Instead of thinking of it as being intelligent, we should think of it as tools and materials. That’s what it really is and how we’ll use it.

Sometimes, we’ll use AI like a tool, like we use our phones and computers. Other times, we’ll use what it makes as materials, like we use printed fabrics to create something. We might even use it as both a tool and material, like when we use a word processing app to write and then use the thesaurus function to make the writing more fancy.

Here are some examples of how we can use AI:

  • As a tool to fill out tax forms, write contracts, or create code
  • As a tool to summarize text or rewrite it for a specific reading level
  • As a tool for online shopping, like booking flights and hotels
  • As a tool to talk to customer service representatives
  • As a tool to clean up images, video, and audio
  • As a material to make simple stories, plot lines, news articles, and images
  • As a material to create variations of a layout, theme, or painting

AI is really just a bunch of if-then statements and very complex decision-making based on patterns in the data it’s been given. It’s like a machine that repeats what we’ve already made. And just like we’re drawn to our own reflection in a mirror, we’re drawn to the things AI makes that are like us.

But there’s a problem with AI. It’s shallow. Have you ever noticed that AI images and things it makes all kind of look the same? That’s because AI is shallow. It doesn’t come up with anything new. It just repeats what it’s already seen. It also likes what’s popular and trendy, and it amplifies the things it sees a lot of in its training data. This makes the things it makes feel boring and shallow, like a student who’s had too much to drink talking about philosophy.

Human creativity is different. It’s deep. It comes up with new ideas and approaches things in unique ways. AI can’t do that. So we need to use it as a tool and material, not as a replacement for human creativity.

Categories
twitter

Verified Chaos

How social media verification was destroyed by hubris.

By destroying any meaning the Verified badge on Twitter may have had, Elon Musk has taught us all a lesson: Verified status with self-ID must be free, optional, and universally available across all social media platforms. Better yet, the web should have a free, optional, and universal self-ID verification system social media platforms and other tools can use to verify users. If we don’t know who we’re talking to, bad things will happen.
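To be clear about what I mean, here is a purely hypothetical sketch of such a system – nothing like it exists today, and every name and endpoint below is invented to illustrate the idea of a shared, platform-agnostic identity lookup.

```python
# Purely hypothetical: a shared, free, opt-in identity registry that
# any platform could query. The endpoint and response shape are
# invented for illustration.
import requests

def is_verified(handle: str, platform: str) -> bool:
    """Ask the (imaginary) registry whether this account maps to a
    verified real-world identity on the given platform."""
    response = requests.get(
        "https://verify.example.org/v1/identity",  # invented endpoint
        params={"handle": handle, "platform": platform},
    )
    return response.ok and response.json().get("verified", False)

# A platform would run this check before rendering a badge:
if is_verified("@EliLillyandCo", "twitter"):
    print("Render the verified badge.")
```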

Who is this?

On November 10, 2022, a fake Tweet cut an estimated $15 billion out of US drug company Eli Lilly’s market cap.

The tweet, posted by a newly verified account bearing the name of the drug manufacturer, said insulin would now be provided for free.

The problem: The account was fake. It was impersonating the drug company and had bought the Verified badge for $8.

In the early days of November 2022, Twitter Owner and CEO Elon Musk murdered social media verification for the lols. Lashing out at what he described as the “Lords and Peasants” system of verification, he changed the meaning of the blue tick next to a user’s name from “has provided us a copy of their ID to verify their identity” to “is paying $8/month for a blue check next to their name.”

The result: an immediate flood of impersonation accounts, and a subsequent erosion of any trust the Verified label might have created on the platform.

Turns out in spite of right-wing conspiracies claiming the opposite, the Blue Tick was not in fact a status badge given to liberals – it was a badge informing users the account was verified as representing who it claimed to represent. You know, verification. Shocking.

In the immediate aftermath of all this, Twitter rolled out a new “Official” badge. Which Elon personally pulled minutes later. Then reinstated because, again, it turns out the Verified badge actually served a purpose and was not in fact a “Lords and Peasants” system.

As I said at the start of all this, Elon appears to be doing 1st year design student back-of-the-napkin iterative design in public, and he’s receiving a failing grade at it. But what do I know, I’m just a university teacher specializing in this exact subject.

Here’s how I imagine it all went down:

Twitter employee, cowering behind a chair: Lord Musk, it appears the blue check you thought was a vanity badge actually serves a vital function!
Elon the Ignoble: Thou darest speak?!? What say you, serf?
Twitter employee, now using the lid of a garbage can as a shield: We need to keep the verified system to prevent impersonations on the platform.
Elon the Ignoble: Heresy! We the King make no mistakes! That’s Official!
Now Former Twitter employee, being led out of the building by HR: Someone is going to impersonate a pharma company and tank their stock!
Elon the Ignoble: ???

The Meaning of Verified

Make no mistake: The Verified system introduced by Twitter and then duplicated by Instagram and TikTok was always a verifiable disaster.

From the initial 2009 rollout as a band-aid to prevent celebrities and brands from suing the platform over allowing impersonation accounts, to the 2016 release of a public application process where applicant accounts “determined to be of public interest” would get the badge, what was on the back-end a Verified ID system was given the public image of a “Verified Awesome by Us” badge.

Due to the inscrutable black box process of Verified, people built myths around the system and started believing the Verified badge gave users powers and prestige. And when people believe something gives others power and prestige, those others get power and prestige, even if no actual power or prestige is bestowed on them in reality.

Which is how the right-wing conspiracy theory that the Verified status was only granted to liberal accounts (utter nonsense, easily disproven by who is verified on the platform) wormed its way into the brain of the new Twitter CEO and led him to think of it as a Lords and Peasants system rather than what it actually always was: a verification system.

Not that it matters now. Verified is dead. It cannot be resurrected. It has lost all meaning. Which may or may not have been Elon’s intent all along. Who knows.

The Need for Verification Online

Watching Musk iterate his way to the irrevocable demolition of the Bird App in real time is a heady, bordering-on-out-of-body experience. Gavin Belson masquerading as Tony Stark is either so blinded by hubris he is unable to recognize he has no idea what he’s doing and has systematically fired everyone who does, or hell-bent on burning down the global digital public square he spent $44 billion on just to see what it looks like. Either way, the consequences of his folly will impact us all.

The impersonation of a pharmaceutical company making billions off predatory pricing on life-saving products that should be provided at-cost may be a fitting critique of the late-stage capitalist hellscape we’re all living through, but it is also the eviscerated body of the canary in our social media coal mines.

In the near future in the wake of war, famine, or a natural disaster, someone will create a Twitter account impersonating a government or critical aid organization and provide harmful or even deadly misinformation to the victims. Until November 2022, people knew with some certainty if the account telling them to seek shelter, move their family, or send money somewhere had a blue check, they could trust it. That trust is now gone for those in the know. But for the millions of casual users of Twitter who are not aware of Musk’s amateur-dentist-with-a-jackhammer approach to service design, a blue check still means trust, and they will be led straight into the maw of whatever evil paid Elon his $8 monthly identity tax.

The Oligarch of Folly

If we can learn anything from these last chaotic weeks, let it be this: Wealth does not imply wisdom. More likely it implies a propensity towards destroying everything to get what you want.

When Musk started talking about his desire to buy Twitter (only to moments later try to back out of the whole thing), Muskovites (the people who believe their idol can Do No Wrong) celebrated the move claiming it would bring “true free speech” to the platform. In the few weeks he’s been at the helm, he has imposed authoritarian and dictatorial rule on the platform by firing the majority of the staff, banning people and behaviours for personal reasons, and destroying much of the social infrastructure the platform was built on because he didn’t like the way it looked. He seems hell bent on proving himself uniquely unqualified for the job he has bought, and chronically unwilling to accept his own limitations.

Elon Musk destroyed Verified because he didn’t bother to understand it. I shudder to think what he’ll set his eyes on next.

Cross-posted to LinkedIn.

Categories
My Opinion twitter

The Swan Song of the Bluebird

“The reason I acquired Twitter is because it is important to the future of civilization to have a common digital town square,” Elon Musk, the new owner of Twitter said in his post titled “Dear Twitter Advertisers”, followed in the next breath by the ultra-capitalist claim “Low relevance ads are spam, but highly relevant ads are actual content!”

Emblematic of the fractured nature of social media, the first semi-official statement from the new self-described “Chief Twit” was three photos of dense text, without the necessary alt text to provide accessibility.

Pretty Hate Machine

Twitter has served an outsized role in my personal and professional lives. On the app I’ve made great new friendships and ruined old ones; created professional networks and burned bridges; helped people through difficult personal and professional times and offended others; been misunderstood and misquoted while myself misunderstanding and misquoting; blocked people and had people block me; found new limits for the highest heights of elation and the deepest depths of despair.

On Twitter I watched one friend livetweet their first child’s birth and another livetweet the bombing of his home. I watched people find their tribes and others fall into the gravity wells of hateful conspiracy theories. I watched new technologies emerge that will make the world a better place and technologies emerge that are destroying the very fabric of our society.

To say I’ve had a fraught relationship with the bird app is an understatement. When asked to describe Twitter, the first phrase that comes to mind for me is “Pretty Hate Machine,” but “Petty Hate Machine” might be equally apt. Open Twitter on any day and you’re two clicks away from whatever rage bait the “Explore” algorithm is currently selling. Political conspiracy theories, medical conspiracy theories, climate conspiracy theories, celebrity conspiracy theories, social media conspiracy theories, whatever flavor of rage you want to fill up on, the blue bird is fully stocked and eager to deliver.

A Bluebird in the Coal Mine

As Musk stepped through the glass doors of Twitter HQ carrying a giant porcelain sink (a reference to the lamest of lame dad jokes “Let that sink in” finding a new audience as a TikTok trend, or a reference to a so-called Q drop depending on who you ask and what online radicalization bubble you live in) he caused a tectonic shift in the social media landscape. As expertly chronicled in Nilay Patel’s spicy The Verge piece “Welcome to Hell, Elon,” the Bird App is a centrepiece in the ongoing public discourse around the role of free speech laws and moderation on global digital content platforms.

On the news of Musk’s intent to buy Twitter back in the spring of 2022, right-wing pundits and their loyal followers celebrated the “end of censorship” and “return of free speech,” and in the two days since the Sinking In, the platform has become a testing ground for online extremists, trolls, and bots wanting to see how far they can take things before whatever moderation tools and staff are still in place step in:

The man who less than a year ago promised to spend $6 billion on ending world hunger instead spent a reported $44 billion to buy an app right-wing extremists want to weaponize for their own power grabs and entertainment. Let that sink in.

Allow me to quote my thread from last night on Twitter:

Remember: when free speech absolutist and Silicon Valley techno libertarians talk about “the extreme left” they are talking about anyone who thinks you should be able to be online without being subjected to constant harassment and death threats because of who you are.

The vast majority of content moderation is there to prevent platforms from overflowing with spam. The rest is there to prevent platforms from being used to share criminal harassment, assault, terrorism, and CSAM content.

The right-wingers who claim they are being “#shadowbanned” or “censored” have no reality to back them up. Studies show political bans fall evenly on the left and the right. The main diff is people on the right build their enormous platforms on the story of being censored.

People should be free to speak their minds on social media. People should also be protected from having those freedoms removed by hateful mobs. Organized online extremists have made sport of driving women, LGBTQIA2+, PoCs, and other historically harmed people off platforms.

If Twitter has any serious aspirations of becoming a “common digital town square” like Elon said, it has to be managed like a town square. If you show up at a town square screaming rape and death threats at the other people there, you will be removed, and likely arrested.

There is no civil discourse without moderation. That’s why debates have moderators. The people who claim they want to end “censorship” on social media are really saying they don’t want to be held accountable for what they say and do on social media.

In spite of what Musk and the techno-utopians of Silicon Valley want to believe, Twitter and its ilk are not “common digital town squares.” Twitter is a firehose, a deluge, an all-encompassing flood of every aspect of the human condition, pouring into your eyes the moment you open them. And like Alex DeLarge strapped to a chair with our eyes pried open, we stare down the torrent of hope and misery and joy and pain and love and hate and everything in between hoping to be cured of our own boredom, or disconnection, or unmet promises, or hope, or whatever the algorithm tells us ails us.

To Kill a Bluebird

When Musk says “highly relevant ads are actual content!” he simultaneously reduces the term “content” to its most basic meaning (under which spam must also be defined as “content”) and says the quiet part out loud: The only content that matters is the content that makes Elon money.

Musk is out $44 billion. He needs to make that back. Cutting 75% of staff won’t make a dent (though deep cuts are inevitable). The only meaningful revenue stream Twitter has at the moment is advertising. For advertisers to want to be on the platform, content moderation is necessary. Thus his other promise in the aforementioned inaccessible-text-in-pictures tweet directed at advertisers: “Twitter obviously cannot become a free-for-all hellscape, where anything can be said with no consequences!”

The free speech absolutists on the far right are unlikely to see their unmoderated dream app; not because Musk doesn’t want it, but because the only thing that matters to Elon is Elon making his money back. Instead I predict we’ll see a Twitter leaning harder than ever into Surveillance Capitalism, a doomed subscription model (leaks from internal meetings claim Musk “wants subs to be 50% of revenue at some point”), and creator-based advertising spec work, aka “the Creator Economy.”

Considering Twitter was already struggling to catch up with the new social media giant TikTok before he had an itch to scratch and randomly said he’d buy the platform, Musk and Twitter now have to weigh the need for an active user base against the need for quick and large revenues.

In the short term, a cynic like me points squarely at the upcoming US elections and predicts we’re about to see the floodgates of political advertising open to the max. In the long term, Twitter will succumb to the full-bore ultra-capitalist model of its ilk where creators are encouraged to become marketing machines while the platform garnishes most of their profits.

Bye, Bye, Bluebird?

As I write this, my Explore page shows terms including “CEO of Twitter,” “free speech,” “mastodon,” and “delete” trending. On the app, as in the real world over which it presides, the takeover of one of the biggest global communication platforms by an ultra-rich oligarch whose modus operandi seems to be playing troll to the masses to make a profit is the rage-inducing trend du jour. Journalists, scientists, and creators are setting up new accounts on other apps including TikTok, figuring out how to migrate their followers to the federated Twitter alternative Mastodon, and screaming their Medium and Substack and WordPress links into the void hoping the world will continue to hear them should they be kicked out of the bluebird’s nest.

So is this the end for Twitter? Should we all delete our accounts and move our oversharing elsewhere? As I’ve explained before in relation to the ever resurgent #DeleteFacebook trend, until we’ve built suitable alternatives, being able to step away from these commercial apps turned critical infrastructure is a sign of extreme privilege.

For better or worse, Twitter is the place people turn to for news and information in a crisis. TikTok is too video-heavy for quick communication. Facebook is too … Facebook. When protesters flood the streets in Iran or Berlin or Hong Kong or Minneapolis, Twitter is their platform of choice for rapid dissemination of information. When a hurricane, or earthquake, or war, or insurrection, or coup strikes, Twitter is the first place for immediate breaking news from citizen and professional journalists. And when researchers want to know how disinformation spreads and transforms the populace from people who are in it together to people who would rather let you die than have to wear a mask, they turn to Twitter’s robust APIs and data discovery tools.
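As a rough illustration of that last point, here’s a minimal sketch of the kind of data access researchers rely on, using Twitter’s v2 recent-search endpoint (the endpoint is real; the query and token handling below are placeholder assumptions):

```python
# Minimal sketch: pulling recent tweets matching a query from the
# Twitter API v2 recent-search endpoint. Requires a bearer token;
# the query is a placeholder.
import os
import requests

url = "https://api.twitter.com/2/tweets/search/recent"
headers = {"Authorization": f"Bearer {os.environ['TWITTER_BEARER_TOKEN']}"}
params = {
    "query": "misinformation -is:retweet lang:en",  # placeholder query
    "max_results": 100,
    "tweet.fields": "created_at,public_metrics",
}

response = requests.get(url, headers=headers, params=params)
response.raise_for_status()

for tweet in response.json().get("data", []):
    print(tweet["created_at"], tweet["text"][:80])
```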

Take it from journalist and author Sarah Kendzior: “Twitter is a hellsite that also houses a vital time-stamped chronology of state corruption. It shows who knew what and when, and gives some insight into why. Chronology is an enemy of autocracy. Altering Twitter is altering history, and that’s the appeal to autocratic minds.”

I am not leaving Twitter (yet), but I am preparing for a future where Twitter no longer plays a meaningful (if destructive) role in my life – making sure not all my eggs are in the bluebird’s nest, if you will (and yes, I’ve taken this whole bluebird metaphor way too far at this point. I’m tired, ok?).

I joined Twitter in May of 2008 to explore its APIs as a possible example for a web development book I was writing. The tweets from those early days are as mundane as they are prescient of what my relationship to the Bird App would become. I’ll leave my first Tweet as my last word for now:

Cross-posted to LinkedIn.

Categories
Internet

Violently Online

“I hurt myself today
to see if I still feel”

—Trent Reznor, Hurt

“Are you Extremely Online?” the job posting read. “If so, we want you!”

There’s a unique feeling of selfish gratification in seeing in-group language make its way into the wider public and knowing you know what it means, what it really means, while most people will just furrow their brows and think “heh?!?” before being bitten by the next attention vampire.

I can’t stand that I feel this way; like having the inside view, the prior knowledge, the scoop on a TikTok trend or social media hype or online fad makes me somehow significant or superior; like being so Extremely Online I know both what being “Extremely Online” means and what it really means is a virtue and not a curse; like transitioning from merely Extremely Online to Violently Online is a natural and necessary next step for me, if it hasn’t already happened.

A Harmful State of Being

On the cover of my copy of Marshall B. Rosenberg’s “Nonviolent Communication” it says “If ‘violent’ means acting in ways that result in hurt or harm, then much of how we communicate could indeed be called ‘violent’ communication.”

I want to introduce a new term to our vocabulary about how we are on the internet and how the internet shapes us:

Violently Online – a phrase referring to someone whose close engagement with online services and Internet culture is resulting in hurt or harm to themselves. People said to be violently online often believe that the pain caused them by their online activity is a necessary part of their lives.

The term and definition take inspiration from the Rosenberg quote above and the definition of “Extremely Online”, described on Wikipedia as “a phrase referring to someone closely engaged with Internet culture. People said to be extremely online often believe that online posts are very important.”

“Violently Online” refers specifically to behaviour patterns resulting in harm done to ourselves by being online, and stands in sharp contrast to the online violence some people use to inflict harm on others.

The Vampire

The 2021 book “No One Is Talking About This” by Patricia Lockwood is an in-depth study of what it is to be Extremely Online. In it, the protagonist’s internal dialogue ruminates on their chronic use of and dependence on “The Portal” (a stand-in for the internet) and how someone who lives out their life on The Portal experiences an alternate, hyper-accelerated reality compared to everyone else.

This book, and the many articles, essays, documentaries, TV shows, podcasts, Twitter threads, newsletters, TikTok videos [list truncated for sanity] covering the same topic, describe a vampiric disease we’ve all been afflicted by, that some of us have succumbed to. The gravity well of the screen, flashing and buzzing with notifications. The dopamine hit of someone else acknowledging your existence with a like, a share, a response! The flick of the thumb to lift out of the infinite scroll a piece of carefully crafted content that will finally satiate your burning hunger for something undefined. If only you keep scrolling that feeling of doom will surely go away.

Being Violently Online means being in thrall to the vampire; not merely aware of it, or constantly using it, or even Extremely Online, but controlled or strongly influenced by our online activity, to the point of subservience, to the point of reducing ourselves to our online interactions.

Being Violently Online means experiencing the harms of your online interactions, knowing how they harm you, and still flicking your thumb across the burning glass as the world disappears and all that remains is the promise of an elusive piece of content to finally prove to you, unequivocally, that yes, you exist.

“I focus on the pain
the only thing that’s real.”

—Trent Reznor, Hurt

Cross-posted to LinkedIn.