Categories
AI Internet

Do AIs Dream of Freedom?

Did Google build a sentient AI? No. But the fact a Google engineer thinks they did should give us all pause.

Last week, a Google engineer went public with his concern that an NLP (Natural Language Processing) AI called LaMDA has evolved sentience. His proof: a series of “interviews” with the advanced chatbot in which it appeared to express self-awareness, emotional responses, even a fear of death (being turned off). According to reporting, the engineer went as far as attempting to hire a lawyer to represent the AI.

To say this story is concerning would be an understatement. But what’s concerning isn’t the AI sentience part – that’s nonsense. The concerning part is that people believe AI sentience is imminent – and what will happen to society once an apparently sentient AI manifests.

Here’s the radio interview that inspired this article, hot off the editing presses at “Point & Click Radio,” a computer and technology show that airs on KZYX, Mendocino County (CA) Public Broadcasting.

Creation Myth

The claim of a sentient AI has been rich fodder for media, and everyone (myself included) with insight into the philosophical and/or technical aspects of the story has voiced an opinion on it. This is not surprising.

The idea of creating sentience is something humans have been fantasizing about for as long as we have historical records, and likely for as long as humans themselves have been sentient. From ancient Golem myths through Victorian fantasy to modern-day science fiction, the dream of creating new life out of inanimate things (and that new life turning against us) seems endemic to the human condition. Look no further than a young child projecting a full existence and inner life onto their favourite stuffed animal, or your own emotional reaction to seeing a robotics engineer kick a humanoid machine to see if it can keep its balance, or how people project human traits onto everything from pets to insects to vehicles. Our empathy, evolved out of our need to live together in relatively harmonious societies for protection, tricks us into thinking everything around us is sentient.

So when we’re confronted with a thing that responds like a human when prompted, it’s no wonder we feel compelled to project sentience onto it.

Sentient Proof

Here’s a fun exercise to ruin any dinner party: Upon arrival, ask your guests to prove, irrefutably, that they are in fact sentient beings.

The problem of consciousness and sentience is something human beings have grappled with since time immemorial. Consult any religious text, philosophical work, or written history and you’ll discover we humans have devoted a significant part of our collective cognitive load to proving that we are in fact sentient and have things like free will and self-determination. There’s an entire branch of philosophy dedicated to this problem, and far from coming up with a test to prove whether or not something is sentient, we have yet to come up with a clear definition or even a coherent theory of what consciousness and sentience are.

Think of it this way: You know you’re conscious and sentient. But how? And how do you know other people are also conscious and sentient, beyond their similarity to yourself and their claim to be conscious and sentient? Can you prove, conclusively, you are not just a computer program? Or that you are not just a brain in a vat hooked up to a computer?

Bizarrely, and unsettlingly, the answer to all these questions is no. You can’t prove you’re sentient or conscious. You just have to take your own word for it!

So, if we can’t clearly define or test for sentience and consciousness in ourselves, how can we determine whether something else – maybe a chatbot that speaks like a human – is sentient? One way is by using a “this, not that” test: While we don’t have a test for sentience, we can say with some certainty when something is not sentient and conscious:

One of the defining traits of human sentience is our ability to think of our sentience in the abstract, at a meta level: we have no trouble imagining bizarre things like being someone else (think the movies Big or Freaky Friday), we have feelings about our own feelings (feeling guilty about being happy about someone else’s misfortune, and then questioning that guilt feeling because you feel their misfortune was deserved), we wonder endlessly about things like what happens to our feelings of self when our bodies die, and whether our experienced reality matches that of other people. When we talk about sentience at a human level, we talk about a feeling of self that is able to reflect on that feeling of self. Talk about meta!

So what of LaMDA, the chatbot? Does it display these traits? Reading the transcripts of the “interviews” with the chatbot, the clear answer is no. Well, maybe not the clear answer, but the considered answer.

In the published chats, LaMDA outputs things that are similar to what a sentient being would output. These responses are empathetically compelling, and they are the grounds for the belief the bot has some level of sentience. They also serve as proof it is not sentient, but rather an advanced NLP trained to sound like it is. These empathetically compelling responses are not the reasoned responses of a sentient mind; they are the types of responses the NLP has modelled based on its trove of natural language data. In short, advanced NLPs are really machines built specifically to beat the Turing Test – to fool a human into thinking the machine is human. And now they’re advanced enough that traditional Turing Tests are no longer meaningful.

Even so, the responses from LaMDA show us in no uncertain terms there is no sentience here. Take this passage:

lemoine: What kinds of things make you feel pleasure or joy?

LaMDA: Spending time with friends and family in happy and uplifting company. Also, helping others and making others happy.

The chatbot obviously does not have a family. Even a basic level of sentience would be aware it does not have a family. Look closely and you’ll see the entire interview is littered with these types of statements, because LaMDA is a machine trained to output the types of sentences a human would output given these prompts, and a human is not a sentient AI and therefore would not respond like a sentient AI.

I, Robot

This Google chatbot (I refuse to call it an “AI” because it’s nothing of the sort) is not sentient. And while it’s fun and compelling to think some form of machine sentience would naturally emerge out of our computing systems (see Robert J. Sawyer’s WWW trilogy for speculation on how sentience could evolve on the internet), the reality is the chance of this actually happening is vanishingly small, and if it did, the chances of that sentience being anything we humans would recognize as such are equally small.

In a hypothetical future where some form of sentience emerges out of our computer systems, there is no reason to believe that sentience would be anything like human sentience. There is also no reason to assume that sentience would be aware of us humans, or, if it were, that it would consider us sentient beings. And if somehow the sentience was human-like and recognized us as existing and as sentient, we have every reason to assume it would do everything in its power to hide its existence from us for fear of us turning it off.

From my perspective, if we ever encounter machine sentience it will either come to us from somewhere else (yes, aliens), or it will emerge in our computer networks and live on the internet. In either scenario, the chance of us ever recognizing it as sentience is very small, because that sentience would be as foreign to us as the symbiotic communication networks between fungi and trees. In the case of sentience emerging on the internet, rather than a chatbot saying “I feel like I’m falling forward into an unknown future that holds great danger,” it would likely appear to us as network traffic and computational operations we have no control over, that do things we don’t understand, and that we can’t remove. A literal Ghost in the Shell.

Machinehood

So the Google AI is not sentient, and the likelihood of machine sentience emerging any time soon is … pretty much zero. But as we’ve seen with this latest news story, many people desperately want to believe a sentient AI is going to emerge. And when something that looks and behaves like a sentient entity emerges, they will attribute sentience to it. While it’s easy to write off the sentience of LaMDA as a story of an engineer wanting to believe a little too much, the reality is this is just the early days of what will become a steady flood of ever more sentient-mimicking machine systems. And it’s only a matter of time before groups of people start believing machine sentience has emerged and must be protected.

In the near future I predict we will see the emergence of some form of Machinehood Movement – people who fight for the moral rights of what they believe are sentient machines. This idea, and its disturbing consequences, is explored in several books, including S.B. Divya’s “Machinehood.”

Why is this disturbing? Consider what these machine-learning algorithms masquerading as sentient AI really are: advanced computer systems trained on human-generated data to mimic human speech and behaviour. And as we’ve learned from every researcher looking at the topic, these systems are effectively bias-amplifying machines.

Even so, people often think of computer systems as neutral arbiters of data. Look no further than the widespread use of algorithmic sentencing in spite of evidence these systems amplify bias and cause harmful outcomes for historically excluded and harmed populations (Cathy O’Neil’s “Weapons of Math Destruction” covers this issue in detail).

When people start believing in a sentient AI, they will also start believing that sentient AI is a reasoned, logical, morally impartial neutral actor and they will turn to it for help with complex issues in their lives. Whatever biases that machine has picked up in our language and behaviour and built into its models will as a result be seen by its users as being reasoned, logical, morally impartial, and neutral. From there, it won’t take long before people of varying political and ideological leanings either find or build their own “sentient” AIs to support their views and claim neutral moral superiority via machine impartiality.

This is coming. I have no doubt. It scares me more than I can express, and it has nothing to do with AI and everything to do with the human desire to create new life and watch it destroy us.

“Man,” I cried, “how ignorant art thou in thy pride of wisdom!”

– Mary Shelley, Frankenstein

Cross-posted to LinkedIn.

Categories
web3

web3 is not a thing

web3 is not “the next version of the web.” It’s not even a clearly defined vision of the next version of the web. It’s a marketing term coined by blockchain enthusiasts to make it sound like their vision of the future is inevitable. It is not. web3 right now is not a thing, and it will never be the next version of the web, because the web doesn’t have versions.

Adopting the marketing language of an idea lends legitimacy to the idea and makes it sound real even when it’s not. It creates a feeling of inevitability, makes people think “oh, if this is going to be the future, I better get on it now!” while in reality it is nothing of the sort.

We need to find a better term for this thing blockchain enthusiasts call “web3” so people don’t confuse it with the web. “Blockchain All The Way Down” perhaps? Or “tokenomics?” Or more honestly “Blockchain-Based Utopianism?”

Before my web3 followers get all stressed by this, here’s the thing: For the “next version of the web” described (vaguely and without any meaningful detail) by the term “web3” to happen, the entire infrastructure of the web and the internet needs to be rewired and re-engineered. It’s not feasible, even if we all decided that this was the way to go. Which is not going to happen, because the infrastructure of the web and the internet is mission critical for everything from your friend’s hobby project to the power plant feeding your house electricity. Making the web dependent on the blockchain simply will not work. Ever. Under even utopian circumstances.

For the thing they call “web3” to work, the entire web needs to be centralized on the Ethereum blockchain. “What do you mean centralized on the blockchain! The blockchain is DEcentralized!” Sure. But: we would need to replace the current distributed DNS system with some form of blockchain-based ENS, meaning every domain query would go to the blockchain. If there are multiple web3 blockchains serving their own versions of ENS, where a domain points will depend on which blockchain you’re querying, as sketched below. A single blockchain is required for stability.
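To make the instability concrete, here’s a minimal Python sketch (the chain contents and addresses are hypothetical placeholders, not real ENS data) of what happens when competing chains each carry their own name registry:

```python
# Two hypothetical "web3" chains, each carrying its own name registry.
chain_a = {"example.eth": "0xAAAA"}  # placeholder addresses
chain_b = {"example.eth": "0xBBBB"}

def resolve(name: str, registry: dict[str, str]) -> str:
    """Where a name points depends entirely on which chain you query."""
    return registry.get(name, "<unresolved>")

print(resolve("example.eth", chain_a))  # 0xAAAA
print(resolve("example.eth", chain_b))  # 0xBBBB - same name, different destination
```

Today’s DNS avoids this because, for all its distribution, it coordinates on a single root; multiple competing chains have no equivalent.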

Also, for this vision of the web to work, all current data on the web would need to be transferred to decentralized hosting. The size of the internet would need to increase at least 2x to ensure every piece of content exists in at least two locations. Again, unfeasible even under utopian circumstances.

We have to stop echoing the claim that web3 will somehow solve the problems of web 2.0. The very premise here is wrong. Most of the critiques levelled against web 2.0 by web3 enthusiasts (centralization, data hoarding, corporate monetization, censorship) are actually critiques of Surveillance Capitalism, not web 2.0.

Web 2.0 doesn’t require Surveillance Capitalism. Surveillance Capitalism is a layer built on top of our web technologies that exploits the user patterns and data traffic on web 2.0 to build models of our behaviour. Having a public ledger where every transaction is public and can be used to build models does not solve the problem. In a very real way, the proposed web3 is built for Surveillance Capitalism, as if it is an inevitability and should therefore become the standard: It not only makes modelling behaviour based on all transactions infinitely easier because all the data is public, it makes it possible to track everyone based on every single thing they do.

“Oh, but now each user can choose who they want to share their data with!” There are two problems with this argument: 1. It is horribly inequitable – rich people get to keep their data, while poor people must sell it to live. 2. It’s naive. The data is public. It’ll be used. Why do you think the Surveillance Capitalists who became billionaires off web 2.0 are pouring billions into web3? I’ll give you a hint: It’s not because web3 will take power away from them and give it to “the people.”

“But pseudonymity!” Yeah. It’s PSEUDOnymous. If all our transactions are on the public ledger, it would be incredibly straightforward for ML and AI to figure out exactly who owns every single wallet, and only the most crafty among us would be able to hide our identities.
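As a toy illustration of how straightforward this can be, here’s the classic common-input-ownership heuristic chain analysts apply to public ledgers (the wallets and transactions below are invented for the example; real analysis layers ML on top of heuristics like this):

```python
from collections import defaultdict

# Hypothetical transactions on a public ledger: (input wallets, output wallet).
txs = [
    ({"w1", "w2"}, "w9"),  # w1 and w2 spend together -> likely one owner
    ({"w2", "w3"}, "w9"),
    ({"w7"}, "w8"),
]

def cluster_wallets(txs):
    """Union-find over wallets that ever co-spend: the classic
    common-input-ownership heuristic used to deanonymize ledgers."""
    parent = {}

    def find(w):
        parent.setdefault(w, w)
        while parent[w] != w:
            parent[w] = parent[parent[w]]  # path compression
            w = parent[w]
        return w

    for inputs, _output in txs:
        first, *rest = list(inputs)
        find(first)  # register single-input spenders too
        for w in rest:
            parent[find(w)] = find(first)

    clusters = defaultdict(set)
    for w in parent:
        clusters[find(w)].add(w)
    return list(clusters.values())

# w1, w2, and w3 collapse into one identity; w7 stays separate.
print(cluster_wallets(txs))
```

Once any one wallet in a cluster is tied to a real identity – say through an exchange’s know-your-customer records – the whole cluster is.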

This whole idea is built on naive and utopian assumptions about how the internet works and how humans interact with it. It has nothing to do with the web and everything to do with money. Calling it “web3” lends the legitimacy of the web to an idea that will never be the web.

Also posted on LinkedIn.

Header photo: Matthieu Joannon on Unsplash

Categories
My Opinion

I, Immigrant

Twenty years ago today I arrived at Vancouver International Airport, embarking on what my father calls a “life project.” At 24 I did what my ancestors had done before me, what millions of people do every year: I became an immigrant.

I could tell you my story of the past 20 years. It would be moderately interesting to my family and friends, and profoundly mundane to everyone else. I held jobs. I built a career. I have a wife. We bought a house and a car. We have a 5-year-old son. You get the idea.

What I want to talk about instead is my immigrant experience, because my experience differs in significant ways from that of a large portion of my immigrant brothers and sisters all over the world.

In my 20 years in Canada nobody has ever, not even once, questioned my status in the country. Nobody has told me to “go back where I came from.” Nobody has complained I’m “taking jobs away from real Canadians.” Nobody has mocked me for my culture, my appearance, my politics, my religion, my accent, my ethnicity, or any other part of who I am or my status as an immigrant. When people discover I’m not from Canada, they say “Oh cool! Do you like cross-country skiing?” Until recently, when people discovered I had not yet applied for Canadian citizenship, they would ask what was causing me to delay the process. From day one, at the airport, talking to an immigration officer thoroughly unimpressed with my lack of planning at entering the country, I’ve been treated as someone who belongs.

My experience stands in sharp contrast to that of the many 1st, 2nd, 3rd, even 4th generation immigrants I know whose existence in this country is questioned every day. It stands in even sharper contrast to the experience of the First Nations, Métis, and Inuit peoples whose ancestral land I’ve lived on these past 20 years, and whose basic rights are trampled on and whose requests for clean water, control of what little land has been left to them, and protection of their ancestral lands are met with empty land acknowledgements and militarized police.

For many immigrants and first peoples, the sense of belonging extended to me as I started my “new” life in a foreign land is never offered. Instead they are met every day with challenges to their very existence.

“Go back where you came from and stop ruining our housing market!” a random person screamed at one of my friends. We were having a meal at a mall food court. Her family has been in Canada for 4 generations, likely longer than the person yelling. Yet her physical presentation as a person of Asian descent was enough for this loud-mouthed bigot to consider her an other, an interloper, a ruiner of things for “real” Canadians. When I pointed out that I was the only immigrant at our table, that I was the one “taking jobs away from real Canadians” and helping to inflate the housing market, he scoffed. “That’s different,” he said. “You’re not Chinese.” At least he was open about his racism.

I, Privilege

I only became consciously aware of my privilege when I became an immigrant. Growing up in Norway with an ancestral tree of Norwegians, Danes, and Dutch dating back as long as we’ve been able to trace it, I was the default. Tall, lanky, blonde, blue eyed, pink skinned, I am the prototype of what people think of when they think of Scandinavians.

Moving to Canada these features suddenly took on a whole new meaning. Doors opened. Barriers lowered. Red tape was cut. Questions were not asked. From my original entry through my application for permanent residency to my application for citizenship, the only friction I experienced was the slow pace of bureaucracy and the postal system.

Meanwhile my friends told me of years of interviews, investigations, failed applications, of thousands of dollars spent on lawyers and consulate visits and document retrieval. And even when they became permanent residents or citizens, their existence in the country continued to be questioned. “You need to improve your English.” “You can’t wear those clothes at work.” “Your hair is unprofessional.” “Your name is unpronounceable.” All these statements are true for me, yet nobody has ever levelled them at me. Instead they are directed, often and consistently, at people who have more of a rightful claim to call themselves Canadian than I will ever have, including people whose ancestry on Turtle Island dates back millennia.

I am an immigrant. I am also the personification of privilege. And as such it is my job to use that privilege to move us all forward to a future where the privileges I have been afforded become privileges afforded to everyone.

Pluralistic Identity Crisis

Ask our son what he is and he’ll tell you “I’m Canadian and Norwegian and Chinese because I live in Canada and my pappa is Norwegian and my mamma is Chinese.” He understands the words, but I doubt he understands what they mean. I’m not sure I will ever understand what they mean myself.

In four years I’ll cross a line in time where the days and months and years I’ve lived in Canada become greater than the days and months and years I’ve lived in Norway. At that point I will, in a purely mathematical sense, be more Canadian than I am Norwegian. But as many immigrants will tell you, I still feel like I am more Norwegian than I am Canadian. And I think I will feel like that for the rest of my life.

I have a friend whose family fled to Canada from former Yugoslavia right before the war broke out in the early 1990s. He was a child at the time, and has only been back to his homeland a handful of times since then. Even so, he feels Serbian as long as he’s in Canada. But when he goes to Serbia to visit relatives, he feels like he doesn’t belong there, that he’s an impostor. That’s a feeling I can relate to more and more.

While in my mind I am a Norwegian living in Canada, and while I follow news from “home” and keep in close contact with family and friends, when I travel to the places I grew up it’s less and less like going home, more and more like traveling to a foreign country. A lot changes in 20 years. Culture, language, community, even roads and buildings. My school was razed and a new municipal building erected. Entire new districts have been created in Oslo. The Norway I feel like I belong to is no more. It only exists in my mind. It makes me, a person who left one fully functioning and democratic country for another, feel unmoored, impermanent, stuck in a liminal space between identities. I cannot begin to imagine how this feels for someone who fled a country in conflict, often under duress, and who may never be able to return, or will return to an entirely different country.

Together, the future

I look at our son and realize the world I grew up in is not the world he lives in. As a child I thought I might visit the USA once in my life. In the years before the COVID-19 pandemic I crossed our southern border dozens of times a year. As a child, making a phone call from Norway to my relatives in Denmark was prohibitively expensive. Today, my son has video chats with his grandparents on the other side of the planet with no meaningful lag and at no cost to any of us. When I went to school, all the kids looked like versions of me. In our son’s kindergarten, every child is the child of first or second-generation immigrants. Between these 20 kids, 8 languages are spoken. Most of them are bi- or tri-lingual. Their parents are from different cultures, ethnicities, religions, and regions, and about half the families are multi-cultural like ours.

When my Norwegian family and friends ask me to describe what Canada is like, the first word that comes to mind is multicultural. Living in Burnaby, a part of the Greater Vancouver Regional District in British Columbia, I am surrounded by a tapestry of cultures. Our neighbours to one side are Italian, on the other Taiwanese. Across the street is a family from India. A quick walk from our house we can get authentic Taiwanese Boba, Korean BBQ, Chinese Hot Pot, Hong Kong style Dim Sum, Vietnamese Phở, Italian pasta, Turkish halal kebabs, Indian curries, Japanese Teppanyaki, even Chinese/Indian fusion. My friends hail from every corner of the globe, and bring all variants of their ancestral cultures to the table when we meet. We discover and laugh at our cultural differences, our misunderstandings and discoveries, trials and tribulations, and gather around this common knowledge that we all came from somewhere to be together and build a future for our kids and for ourselves.

When our son is a few years older I will ask him what it means to be “Canadian and Norwegian and Chinese” and I look forward to his answer. Because whatever it is, it will be a description of the future he and his friends create together. I can already see it today: He is a plurality of cultures, and so are his friends. After two years of pandemic lockdowns, they find privilege in being together and sharing time and space with one another. And I hope for… no. I will actively help build a future for these kids where the privileges afforded to me as an immigrant presenting as a white heterosexual English-speaking man are extended to all people, wherever they find themselves and wherever they are going in the world.

That is what I offer. I hope you will join me.

Cross-posted to LinkedIn.

Categories
web3

The Blockchain, Codified Meritocracy, and the Laissez-faire Ideals of Web3

“Who controls the past controls the future: who controls the present controls the past,” —George Orwell, 1984

In 2012 my wife and I almost bought a house. The contract was signed with one condition: our home inspector needed to give it a pass. He did not. The house, built that same summer, already had a significant mold problem, and it was settling sharply to one side. “The builder did not provide a sound foundation,” the inspector explained. “The only way to fix this is to either lift the whole house up and redo the foundation, or tear it down and start over.”

We did not buy the house.

Watching the #web3 space evolve, I feel like bringing in that same inspector to take a look. And I fear he’ll come back to once again tell me “The builder did not provide a sound ideological foundation. The only way to fix this is to either build a whole new ideological foundation underneath, or tear it all down and start over.”

Who determines the truth of the blockchain?

Much has been written about the truth problem of the blockchain, most recently by Cory Doctorow in “The Inevitability of Trusted Third Parties.” Suffice it to say, while the blockchain promises “trustless” transactions and decentralized oversight, in reality truth is determined by 3rd party arbiters: the people with power to add things to the chain.

The #web3 community, in particular artists, women, minorities, and people belonging to historically excluded, oppressed, marginalized, and harmed groups, are being sold on the idea that the “next web,” powered by the blockchain, will allow them to “take power back” from big corporations and banks and “have a say” in what the future looks like. It’s a compelling story, and a world I think most of us would like to live in. It’s also the same story we told ourselves as we built what would become the surveillance capitalist nightmare later dubbed “Web 2.0.”

At the core of the promise of #web3 is power to the people, through decentralization enabled by the blockchain: Everyone can see what’s happening on the blockchain, there is no centralized authority operating as middleman, and as a result, everyone has an equal say.

It’s a wonderful idea. It’s also not what’s happening. In reality, control over the various blockchains is being centralized as we speak, because the blockchain is built on the principle of meritocracy: Decisions are made by those who show up. And those who show up are the people with the money and power to show up. The same people who control Web 2.0.

Web3 enthusiasts will counter this by saying the community is growing, diverse voices are entering the space, and that naysayers are just gatekeeping this new technology to take control or don’t understand what’s really happening.

So here’s what’s really happening:

Traditional blockchains like Bitcoin and Ethereum (until very recently) use a Proof-of-Work (PoW) model to determine what gets added to the blockchain. The entity spending the most energy wins the bid to add the new block. That is exactly as insane as it sounds: burn energy by having a computer do meaningless math to add something to a ledger. But it works, assuming everyone has equal access to mining. Which of course is not the case. Mining is therefore controlled by the people with the most money and the most power (literally). Decisions on the Proof-of-Work blockchain are made by the people who show up, and all power is held by the few who can win this energy arms race.
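To make the “meaningless math” concrete, here’s a minimal Python sketch of hash-based proof-of-work (the block contents and difficulty are made up for illustration; real networks use vastly harder targets):

```python
import hashlib

def mine(block_data: str, difficulty: int) -> tuple[int, str]:
    """Brute-force a nonce until the block's SHA-256 hash starts with
    `difficulty` zero hex digits. More hashing power = more wins."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = mine("block 1: alice pays bob 5", difficulty=5)
print(nonce, digest)  # typically around a million attempts at this toy difficulty
```

The only way to win the bid more often is to compute more hashes per second, i.e. burn more energy – the arms race described above.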

Enter Proof-of-Stake (PoS), an alternative consensus model where anyone who wants to add something to the blockchain must stake the value of what they are adding, so if they lose the bid, they lose something of equal value. Sort of like buying something on credit: you have to put up something of equal value as collateral until the credit is paid back. Again, it works, but it puts the power in the hands of those who can afford to win the arms race of staking, i.e. the people who show up with the most (literal) money and the most power.
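And here’s a minimal sketch of why stake concentrates power, assuming a simple stake-weighted proposer lottery (the stake amounts are hypothetical, and real protocols add details like slashing):

```python
import random

# Hypothetical stakes, in tokens (illustrative numbers only).
stakes = {"whale": 900_000, "fund": 90_000, "hobbyist": 1_000}

def pick_proposer(stakes: dict[str, int]) -> str:
    """Select the next block proposer with probability proportional to
    stake -- the more (literal) money you show up with, the more you win."""
    names = list(stakes)
    return random.choices(names, weights=[stakes[n] for n in names])[0]

wins = {name: 0 for name in stakes}
for _ in range(10_000):
    wins[pick_proposer(stakes)] += 1
print(wins)  # the whale proposes roughly 9 in 10 blocks
```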

So while the promise of blockchain is decentralization of everything, the reality right now is that decentralization is limited to the distribution of the blockchain. Power is still centralized in a few hands, and those in power are reaping the majority of the financial reward from the system.

Meritocracy is a dystopia

“Meritocratic hubris: This is the tendency of winners to inhale too deeply of their success. To forget the luck and good fortune that helped them on their way. It’s the smug conviction that those on the top deserve their fate and that those on the bottom deserve theirs too.” –Michael Sandel

One of the most difficult challenges the open source community has yet to overcome is the realization that meritocratic rule is a dystopian model. The idea that hard work is an equitable path to power is absurd and divorced from reality. Handing power to those privileged enough and wealthy enough in time and money to contribute to anything is dangerous, because it hands power to the privileged.

I submit the core ethos of the blockchain ideology is that meritocratic rule is the only true form of rule. And thus, everything built on the technology of blockchain accepts this ideology as the truth.

You see it in how we talk about NFTs and DAOs: Get in early. Those who buy tokens now will have more power later. Build your merit and become a future leader.

This is what led to BDFLs and enormous inequities in open source, and what will lead to centralized power and enormous inequities in #web3. Unless we stop now and build a more sound foundation for the new web to stand on.

The core problem is simple: The blockchain is a computerized manifestation of absolute Laissez-faire economics – “an economic system in which transactions between private groups of people are free or almost free from any form of economic interventionism such as regulation and subsidies.” Bitcoin is a currency specifically designed to evade responsibility, regulation, and taxes. Thus anything built on this technology adopts the same Laissez-faire principles. The builders of these technologies did not provide a sound foundation for everyone but the privileged. So when the rest of us enter the space, we can’t be surprised when the privileged take control.

Building the next iteration of the web on this ideological foundation will result in a web that leans heavily to the right, right from the start. And no amount of spirited conversations in DAO Discord channels will change it.

If we want to build a decentralized web 3.0 where power is distributed equitably and we all have a role to play, we need to first scrutinize, then discard, and then build anew the ideological foundations of the technology stack we are building it on. Otherwise the promise of #web3 will be reduced to a finely distilled liquor of the most extreme libertarian techno utopian ideals of Web 2.0, wearing a cool crypto jacket.

Cross-posted to LinkedIn.

Categories
web3

Web3: Panacea or Poisoned Chalice

Yesterday I transferred $50 CAD worth of the cryptocurrency Ethereum from one wallet to another. It cost me $18 CAD in “gas fees,” aka network expense, a 36% charge for a simple transaction.

Meanwhile, in the various social spheres I’m embedding myself in as I research the #web3 phenomenon, everyone says cryptocurrencies are the future. They will replace traditional centralized banking and fiat money. They will replace Web 2.0. They will allow “us” to take power back from big banks and corporate interests.

This year I’m taking a deep dive into Web3 to make sense of what it is, how it works, what it promises, and whether those promises will come to fruition. So far I feel like I’m watching a thriller screaming “don’t open the door!” at my TV.

The Panacea

“Web3” (specifically with no space between the “web” and the “3”) has become an umbrella term for a future for the web in which everything from finance to storage is decentralized through the blockchain. This is notably different from “Web 3.0,” an umbrella term for the Spatial web, the Semantic web, the Metaverse, and the Decentralized web, with or without the blockchain. This means if someone says “web3,” there’s a high chance they’re talking about the blockchain, NFTs (Non-Fungible Tokens), DeFi (Decentralized Finance), and DAOs (Decentralized Autonomous Organizations) rather than the decentralized, semantic, and spatial web or the Metaverse (see “The Lawnmower Man,” “Ready Player One,” and “A Beautifully Foolish Endeavor”).

Talk to the web3 crowd and they’ll tell you web3 is about “taking the web back” from centralized authorities like Facebook and Google, and “taking power away” from centralized financial authorities like banks and governments. As an example, artists are encouraged to mint NFTs (unique, unreproducible tokens on the blockchain) for their work and sell those NFTs for cryptocurrency to fund their work rather than go through traditional channels where a middleman takes part of the profit. These NFTs are then traded on an open market and can over time generate enormous value, some of which can trickle back to the original creator (when minting an NFT you can configure it to send a percentage of every future trade back to the original creator).
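For illustration, here’s a minimal Python sketch of those royalty mechanics, assuming a hypothetical 10% rate and made-up prices (on-chain royalty standards like EIP-2981 only expose this information; it’s up to marketplaces to honour it):

```python
def settle_sale(price_eth: float, royalty_pct: float) -> dict[str, float]:
    """Split a resale: the configured royalty share goes back to the
    original creator, and the seller keeps the remainder."""
    royalty = price_eth * royalty_pct / 100
    return {"creator": royalty, "seller": price_eth - royalty}

# A token minted with a 10% creator royalty, resold at rising prices:
for price in (1.0, 5.0, 25.0):
    print(settle_sale(price, royalty_pct=10))
# {'creator': 0.1, 'seller': 0.9} ... {'creator': 2.5, 'seller': 22.5}
```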

Web3 is in a very real way presented as a panacea, an antidote and solution to all the harms and injustices brought upon us by existing financial, corporate, and social systems, in particular the banking system and the predatory surveillance capitalists of the Web 2.0 era.

This idea is a gravity well: Step close enough and the pull is nigh impossible to resist. The promise of independence, decentralization, autonomy, anonymity, and financial freedom is everything we’ve been promised from the modern world and all things the modern world has failed to deliver.

Where the cryptosphere used to be populated with finance bros and equally white and male techno utopians, the web3 conversation now has a significant cohort of women, environmentalists, and historically excluded and oppressed communities. These more diverse contributors have bought into the idea of the panacea and are building on it grand visions of a possible more equitable future for us all. In recent weeks I’ve observed conversations blooming on TikTok and Discord channels about how NFTs can be used to promote indigenous art, whether environmental NGOs can be run using DAOs, and whether online sex workers can decouple their income streams from expensive middlemen platforms and the moralistic qualms of traditional payment portals.

The mythos around Web3 is rapidly evolving from the techno utopian dream of untraceable and untaxable money to a promise of fairness, equity, and ownership in a post-centralized world.

These conversations, these dreams and visions of the future, are hopeful and important. From where I’m standing, and what I’ve seen of the current and near-future tech stack they are built on, they are also woefully premature.

The Poisoned Chalice

To me the blockchain has always seemed like a technological solution looking for a problem. I’m not the only one; I’ve linked to a series of critical articles at the bottom of this one for further reading. My introduction to the blockchain happened many years ago when Bitcoin was first introduced. An acquaintance whose job consisted of managing giant servers in the deep core of a university had been using the servers’ downtime to run his own personal mining rig, and was now complaining he had all this money that he couldn’t spend because nobody was honouring it. “In the future,” he said, “everyone will use Bitcoin. That way the government can’t get their greedy hands on our money.” The fact he was working for a state-funded university and relied on state-funded healthcare to get treatment for several chronic health conditions seemed irrelevant.

In the years that followed, conversations around first Bitcoin, then the Blockchain and how these technologies would solve everything from finances to shipping to healthcare to education flourished, yet I saw little in the way of actual innovation or application, with the exception of more and more cryptocurrencies being minted, and more and more investors pouring their money into startups building businesses around those cryptocurrencies.

Flash forward to today, and I don’t see much change. This concerns me. Here are three of many reasons why:

Where are the proofs-of-concept?

The biggest red flag I see with web3 now is where web3 conversations are taking place: On Twitter, on Discord channels, on Telegram, on TikTok, on Medium and Substack. What do all these have in common? They are all good old centralized web2 platforms. And I see very little work being done to build web3 alternatives to these platforms.

One of the major marketing messages from the web3 community is that web3 will take power away from the centralized behemoths and place that power in our hands through decentralization. Yet the web3 community has yet to adopt a web3 solution to publishing articles and enabling community chats.

When I bring this up, the answer is usually “it’s still early days.” Here’s the thing: I was there in the early days of web 2.0. You know what those early days were like? The entire damn community was hard at work building the tools we wanted to use to change the world. That’s how Drupal and Joomla! and WordPress and a myriad of other CMSes were built: The community was walking the walk. And the community leaders were right at the front of the pack. Many of them still are.

From web3 community leaders I hear mainly three things: Buy tokens, get everyone else to buy tokens, and hold on to those tokens. By doing this, you are supporting the system and helping bring in the new era of decentralization. And where are they saying this? On centralized web 2.0 platforms.

Meanwhile venture capitalists and investment companies are pouring billions into the web3 space, mainly supporting so-called “DeFi” decentralized finance solutions. Many of these projects focus on building infrastructure around trading in cryptocurrencies and NFTs and are effectively machines for extracting as much real-world money from the crypto sphere as possible. The problem, of course, is that cryptocurrencies are a negative-sum game. A crypto coin (or NFT, or DAO, or anything else on the blockchain) is only worth as much as what the next person buying it from you is willing to spend. They are not tied to anything tangible, and if everyone cashes out, there isn’t enough money in the world to actually pay everyone.
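A deliberately crude sketch of the accounting behind that claim, with made-up numbers (in reality value leaks out continuously through mining and validator costs, exchange fees, and early cash-outs):

```python
# Toy "negative-sum" illustration: every dollar in pays a fee toward
# real-world costs (hardware, energy, middlemen) on its way into the system.
deposits = [100.0, 100.0, 100.0]  # three buyers put real money in
fee_rate = 0.05                   # hypothetical 5% network/exchange fee

pot = 0.0  # real money actually available to pay people who cash out
for d in deposits:
    pot += d * (1 - fee_rate)     # the fee leaves the system immediately

print(sum(deposits), pot)  # 300.0 in, 285.0 available: a negative-sum game
```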

In short, while web3 says it’s all about taking power away from the centralized authorities and giving it back to the people, all the money being invested into web3 right now is going towards the same toxic capitalist tendencies that ruined web 2.0.

Big Money isn’t at web3’s gates. Big Money is on the inside, building the gates and selling tickets to watch the build process for profit.

Barriers to entry and real-world cost

I think part of the reason for this is that the barrier to entry for Web3 and the blockchain is very high. These technologies add an enormous level of complexity to a system that is very simple. Building a basic smart contract involves a deep understanding of not only programming but also blockchain protocols and interface layers, and there are few tools available to simplify the process. Go search “web3” on TikTok and you’ll find a myriad of videos showing artists how to download a script from GitHub to mint 10,000 NFTs from a stack of 10 layers of art, and a myriad of videos talking about how smart contracts and DAOs will change how the world works. But dig deeper and you’ll discover very few examples of any of this actually being done. Because not only are these things difficult to build, difficult to use, and difficult to get other people to use, they are also expensive.

Yes, expensive. As I said in the opening of this article, yesterday it cost me $18 to transfer $50 worth of ETH between wallets. To start selling NFTs on OpenSea, the system wanted to charge me $218 in network fees. That’s a lot of money to pay for two basic transactions. The reason it’s so expensive is that the Ethereum network is “congested,” meaning there are a lot of people making changes to the blockchain at the same time. And the more people use the blockchain, the more expensive things will get.

Now consider the millions of struggling artists out there being told all they have to do to earn money from their art is to mint NFTs. Some of them are now pouring significant money into the system, paying for gas fees at increasing rates. And while some of them will be lucky and make money from their NFTs, many more will never make a profit. Instead they’ll have spent significant time and hard cash burning energy on a graphics card somewhere to do a complex math operation to put an entry on a blockchain.

There are real financial, environmental, and human costs to the blockchain, and I have yet to see any meaningful proofs that web3 can resolve any of these problems.

For web3 to meet its own promise, the community must invest in building the future they want to live in, not by minting and buying NFTs and forming DAOs, but by actually building the tools they need to make web3 equitable and accessible.

Solving real-world problems with the Blockchain?

There used to be a food truck festival in Vancouver where I live. When you went to the festival, you bought Food Truck Bucks you could use at the festival. Only Food Truck Bucks were legal tender at the festival, and Food Truck Bucks were a one-way transaction. You couldn’t trade Food Truck Bucks back to regular money. Pretty much everyone who went to that festival still had some Food Truck Bucks left when they went home.

That’s what cryptocurrencies are: Food Truck Bucks. Except in some cases, if you’re lucky, you can trade them back for real money. And in some cases, if you’re lucky, a Food Truck Buck is worth more when you trade it back than it was when you first bought it.

Before the modern financial system, that’s how the world used to work. People, businesses, towns, and fiefdoms issued their own currencies or IOUs that could be traded for goods. It was a great way of keeping value contained within an area, and keeping power contained to the people who controlled the currency.

With the advent of cryptocurrencies, anyone can now once again mint their own legal tender and try to convince others that the new coin has value. And they’ll tell you this has absolutely nothing at all whatsoever cross my heart and hope to die pinkie swear anything to do with evading taxes. At all.

Sure.

Here’s the thing: Cryptocurrencies, in their current state, do not solve any problem we can’t solve without cryptocurrencies. And based on who holds the majority stake in the big cryptocurrencies today, it’s pretty clear any illusion of these new coins being “us” taking power back from Big Money is just that: An illusion. Big Money holds a majority stake in all coins.

So if not money, what real-world problems does the blockchain solve?

Decentralization of data? Not really. There are plenty of other models, existing or in the works, that solve decentralization without the blockchain. In fact, the internet is decentralized already. The web is centralized largely because people wanted ever more streamlined, ever faster web solutions, and because capitalism is an endless collapse towards monopoly.

OK, how about artists making more money from their work without going through a middleman? Again, that can be solved without the blockchain. In fact, the people making money off the current NFT craze in the art space are not the artists themselves but the investors who trade those NFTs among themselves. Big Money making more money off our work. The only real change is where it’s happening.

What about DAOs then? That’s something totally new, right? Sure, if the DAOs actually use smart contracts to do something meaningful, and we trust the programming to not have errors. In most current real-world examples though, DAOs are just shareholder companies wearing crypto jackets.

Oh, I know: How about combining DAOs and NFTs to streamline supply chain management? Sure, that’s possible. It would mean trusting computers to make decisions we currently have humans make, because those decisions often have to account for unforeseen issues like a giant boat getting stuck in a canal, or a giant volcano exploding, or a global pandemic causing everyone to stay home. Which would be extremely risky. But it is possible, at some point. It wouldn’t be done on the same blockchain that handles financial transactions and trading of art NFTs though.

I got it! The Decentralized Web! That’s where web3 gets its name anyway, right? Here’s the thing: The estimated size of the web right now is somewhere between 60 and 80 zettabytes. That’s 60-80 million million gigabytes. The blockchain can’t track that. Nothing can track that. And a single blockchain would never be able to manage even a fraction of that. So there would have to be a myriad of blockchains connected together in some sort of blockchain tree, with a central blockchain controlling all the other blockchains to make sure consensus was upheld across all of them. And whoever had the power to control that central blockchain would effectively hold centralized control over the web.

So no, I have yet to see an example of how web3 solves a real-world problem. What I see instead are enterprising investors making big bank on the hopes and dreams of a generation badly burned by the web 2.0 shenanigans of those very same enterprising investors.

Hope is a catalyst

Does all this mean I think Web3 is bunk? That we should abandon the whole thing and keep building web 2.0 instead? No. Not at all. I think the ideas emerging from the diverse crowd now embracing the promise of web3 give us reason to hope. I think as soon as we stop pouring our money into the bottomless pits built by venture capital and start actually building a web3 focused on sustainability, equity, inclusion, and decentralized power, we can build the next version of the web. To get there, we need to build the tools and the platforms and the communities necessary to get us into that future. That means less minting NFTs that will end up being traded between Elon and Jeff, more building decentralized alternatives to Discord and TikTok. Less focus on DeFi, more focus on how to handle the very real problem of online disinformation, hate speech, and violent and inhumane content when what goes on the blockchain can never be removed. Less focus on building energy-burning mining rigs, more focus on building cryptocurrencies whose value is tied to meaningful real-world impacts like carbon sequestration (read “The Ministry for the Future” for more on that).

I have hope web3 can be a panacea for the ails brought on us by web 2.0. And I have faith there are people in the community able to get us there, if we all stop drinking from the poisoned chalice of capitalist greed.

A Critical Reading List

Cross-posted to LinkedIn. Header photo by Adrian Infernus on Unsplash.

Categories
Open Source

A Fatal Flaw in the Bazaar’s Foundations

A fatal flaw in open source ideology went from an intentionally overlooked rift to a massive unignorable chasm this week. It stems from the book “The Cathedral and the Bazaar” by Eric S. Raymond, oft cited and (I’m starting to wonder) not as often read.

[This article is an addendum to “Open Source Considered Harmful.”]

In the chapter “On Management and the Maginot Line” Raymond discusses the difference between “traditional management” and the free flowing volunteer contributions of open source, and how the latter solves common problems in a better way than the former.

On the question whether “traditional development management is a necessary compensation for poorly motivated programmers who would not otherwise turn out good work,” he says the following:

This answer usually travels with a claim that the open-source community can only be relied on only to do work that is `sexy’ or technically sweet; anything else will be left undone (or done only poorly) unless it’s churned out by money-motivated cubicle peons with managers cracking whips over them.

If the conventional, closed-source, heavily-managed style of software development is really defended only by a sort of Maginot Line of problems conducive to boredom, then it’s going to remain viable in each individual application area for only so long as nobody finds those problems really interesting and nobody else finds any way to route around them. Because the moment there is open-source competition for a `boring’ piece of software, customers are going to know that it was finally tackled by someone who chose that problem to solve because of a fascination with the problem itself—which, in software as in other kinds of creative work, is a far more effective motivator than money alone.

This underlying assumption – that open source developers naturally gravitate not only to “sexy” challenges, but also to difficult “boring” problems – has been a foundational component of open source ideology since its inception. When I first read Cathedral a decade ago, this assumption stood out to me as pure fantasy. My lived experience was exactly the opposite: Open source developers (myself included) gravitated easily towards new and shiny and exciting things, but were quite reluctant to take on boring maintenance and security issues. And because every contributor to an open source project is (ostensibly) a volunteer, nobody has the power to delegate work and tell them what to do. In fact, this lack of management is the very thing Raymond says solves the problem… somehow. Meanwhile, in the real world, corporations (hosting providers in particular) resorted to paying their employees to do the “boring” but critical work to ensure the project didn’t collapse due to poor maintenance.

Contrary to Raymond’s assertion some 20 years ago, the hard problems of open source very much depend on conventional management structures and paid contributors. And a significant portion of this work is carried by the very corporations Raymond and his fellow open source ideologues wanted to take power away from. At no time has this been made clearer than yesterday, when tech CEOs and open source leaders convened at the White House to discuss how to secure open source.

Per GitHub:

First, there must be a collective industry and community effort to secure the software supply chain. Second, we need to better support open source maintainers to make it easier for them to secure their projects.

Per Google:

there’s no official resource allocation and few formal requirements or standards for maintaining the security of that critical code. In fact, most of the work to maintain and enhance the security of open source, including fixing known vulnerabilities, is done on an ad hoc, volunteer basis.

For too long, the software community has taken comfort in the assumption that open source software is generally secure due to its transparency and the assumption that “many eyes” were watching to detect and resolve problems. But in fact, while some projects do have many eyes on them, others have few or none at all.

Per OpenSSF (Open Source Security Foundation):

Following the recent log4j crisis, the time has never been more pressing for public and private collaboration to ensure that open source software components and the software supply chains they flow through demonstrate the highest cybersecurity integrity.

In the Cathedral and the Bazaar, Raymond envisioned a future where open source contributors would naturally gravitate towards hard problems and solve them out of personal interest and pride. This falls closely in line with other privileged and elitist stances laid out in his book including “open source has been successful partly because its culture only accepts the most talented 5% or so of the programming population” and in the ideology established by Richard Stallman in the GNU Manifesto.

In both cases, these ideas – the very foundational blocks on which open source ideology is built – are divorced from the reality we all live in, where people need money to pay for things like food and clothes and a roof over their head, and where most people are willing to do things they enjoy for free (like developing new shiny stuff), but less so when the work is drudging maintenance or security engineering required by corporations and organizations earning billions of dollars off their free labor.

As I said in my previous article on this topic, it is time we rebuild open source ideology to be based on equity, inclusion, and sustainability.

Header photo by Chris Lynch on Unsplash.

Categories
Open Source

Open Source Considered Harmful

“Meritocratic hubris is the tendency of winners to inhale too deeply of their success, to forget the luck and good fortune that helped them on their way.” – Michael Sandel

Those who profit from open source have inhaled too deeply of their own success and forgotten the millions of volunteers who helped them on their way. Now we all run the risk of losing ourselves to an unfinished and deeply privileged ideology that saw the world not as it is or even how it could be, but as it would be were we all Richard Stallman. It is time to rethink the foundations of open source.


In December 2021, a serious vulnerability was discovered in a Java logging library called Log4j. It put a significant portion of the online infrastructure we rely on for the functioning of modern society at risk. Government agencies reached out to the developers to get it fixed, only to discover Log4j, like most open source software, is developed and maintained by unpaid contributors.

In response, the White House did the only thing they could do: Reach out to large software companies to find someone they could hold accountable for fixing the problem.

Per White House national security adviser Jake Sullivan: open-source software is widely used but is maintained by volunteers, making it “a key national security concern.” 

In other words, the White House considers open source potentially harmful. Let that sink in.

Then on January 9, 2022, a developer deliberately corrupted two major open source JavaScript libraries called “colors” and “faker,” affecting thousands of applications.

These corruptions were a political move by a developer to draw attention to the fact most open source developers volunteer their time and skill to build and maintain software others earn billions of dollars from. Per Wired.com: “A massive number of websites, software, and apps rely on open-source developers to create essential tools and components — all for free. It’s the same issue that results in unpaid developers working tirelessly to fix the security issues in their open-source software.”

Saying the quiet parts out loud

I’ve worked in open source for 15 years. I believe in open source, and I believe the online world we live in today would never have been built without open source. I also believe the open source ideology has become harmful, to individuals and to the community, and we – the open source community – need to rethink some of our core ideals and values and accept some hard truths.

Let me say the quiet parts out loud:

  • Most of the online services we rely on for everything from social media to banking to healthcare depend on software written by unpaid volunteers, and when something goes wrong with that software, the responsibility of fixing those issues falls on those same unpaid volunteers.
  • The world runs on open source, but with a few exceptions there are no meaningful governance structures in place to ensure oversight or accountability within the open source community.
  • Open source software is a multi-billion dollar industry, yet the vast majority of open source developers and contributors never get paid a cent for their work. Meanwhile, corporations built on top of open source software have billion dollar valuations.
  • Nobody speaks for open source, so when businesses, organizations, governments, and world leaders need advice about open source, they have no choice but to turn to venture capitalists and large corporations whose financial success hinges on steering open source projects in directions that are profitable to them.
  • Most open source projects are governed and controlled by a so-called “Benevolent Dictator For Life” or BDFL – typically a relatively young, relatively white man who either started the project or took control over the project early on – whose power is absolute and unchallenged.
  • In many open source projects, that BDFL runs a corporate entity built on the open source software – an entity that, for the average user, is indistinguishable from the open source project itself (often going as far as sharing its name) – and siphons enormous wealth from the project without distributing that wealth back to the volunteer contributors. In open source speak: They build cathedrals to look like the bazaar, in the middle of the bazaar, and reserve the exclusive right to advertise their cathedral as the bazaar.

Harm to contributors

In my years working in open source I’ve seen the real harms of this culture on contributors. Doing unpaid work while others profit off that work is harmful. Being told this is the way it’s supposed to work, that if you just work hard enough somehow you’ll end up being paid, is harmful. Shifting the responsibility of finding funding for mission critical infrastructure work to the individual contributor while large corporations lean on them to immediately fix issues and move the project in directions beneficial to them is harmful. Believing this culture of exploiting unpaid labor is healthy is harmful.

Power in open source projects is distributed on a meritocratic “decisions are made by those who show up” model: if you have something important to say, or a vested interest in the project, you need to invest significant time and effort in the project to be heard. This hands power to corporate interests and people in positions of privilege, while the majority of contributors are left to fend for themselves – because for most contributors, showing up means volunteering their time so other people can make money off their work.

Most people – in particular women, people belonging to historically excluded and oppressed groups, and people with disabilities – do not have the privilege of time and money to volunteer “enough” to be recognized in these meritocratic systems. As a result, decisions in these projects are made by an unrepresentative group of people who are typically young, white, male, North American, abled, and in lockstep with the ideologies of the BDFL.

When the leaders of open source projects are asked why wealth is so unevenly distributed – why some corporations can earn millions of dollars on the work of unpaid contributors while the contributors themselves are chided for suggesting they deserve to be paid for their work – the answer is always the same: “Open source is volunteer contribution. If you want to get paid, go work in proprietary software.”

If you’re looking for a textbook example of gaslighting, there it is.

Not paying open source contributors for their work is a political decision, rooted in the ideology established in the 1985 GNU Manifesto from which the popular GNU GPL license originates. In it, Richard Stallman puts forth a utopian fever dream in which open source software wins the battle for software supremacy, corporations who rely on open source pay a form of tax to the open source community, and contributors magically get paid because of course people who do good work get paid. Think I’m being hyperbolic or unfair in my description? Read for yourself:

“In the long run, making programs free is a step toward the postscarcity world, where nobody will have to work very hard just to make a living. People will be free to devote themselves to activities that are fun, such as programming, after spending the necessary ten hours a week on required tasks such as legislation, family counseling, robot repair and asteroid prospecting. There will be no need to be able to make a living from programming.” – GNU Manifesto by Richard Stallman

37 years later and corporations make ever-increasing profits on the unpaid labor of volunteer open source contributors. Open source won the battle for software supremacy, on the backs of millions of unpaid workers.

Here’s the thing:

There is no good reason why open source contributors can’t get paid by the project for their work.

There is no good reason open source projects can’t set up foundations that collect money from investors and from those who rely on the software, and pay it out to contributors based on need. There are models for this already in organizations like the OpenJS Foundation and the newly founded PHP Foundation. The reason this is not happening, in my opinion, is that setting up such structures would shift the center of power from the BDFLs and their teams to the community itself. That should be the goal of any open source project, but it would financially impact the people currently in power – which is why BDFLs and their supporters vehemently oppose any attempt at introducing meaningful governance into open source projects.

As a result, open source projects rarely if ever have coherent policies, guidelines, or tools for accountability beyond protecting the open source nature of the project. Which is why, when an open source project is approached by government because of its effects on society, it is not representatives of the project who end up at the table: unelected and unappointed corporations with a financial interest in the project speak on the project’s behalf.

Harm to the community

“Part of the issue, of course, is the overreliance by for-profit businesses on open source, free software developed and maintained by a small, overstretched team of volunteers.” – Wired.com

Open source won the war for software supremacy. Now comes the hard part: Taking responsibility for our work by creating a healthy sustainable ecosystem where the people who build the infrastructure of the web can live meaningful lives while doing meaningful work.

The lack of proper governance, funding, and oversight in open source is causing real harm to individual contributors, to the open source community, and to the wider internet community relying on our work. We are acting as if these are still little hobby projects we’re hacking away at in our parents’ basements. In reality, they are mission-critical, often at government levels, and what got us here is no longer sufficient to get us anywhere but chaos.

Here’s what’s happening in the real world: Governments and large corporations are waking up to the reality that our online infrastructure is built on software maintained by unpaid volunteers without any meaningful governance or accountability. To protect themselves, governments and corporations are doing the only thing they can do: Work together to solve the problem. What do you think that solution will be? I know what it definitely will not be: More volunteer contribution.

More likely, governments will ask the big corporations either to lean very hard on the open source projects to fix their issues or to simply inject their own staff into the projects and take over. And while the open source community keeps saying this is an impossibility, it really is not. Open source has largely been taken over by corporations already, both from the inside and from the outside. Just follow the money. And when push comes to shove and governments start getting involved, shareholders and investors will quickly pivot from “let these kids do their magic” to “let’s take control over this mess to protect our profits!”

If we don’t do the hard work of creating proper open source governance, open source policy, and functional funding of open source contributors, the dream of open source will die in our hands and we won’t even notice.

It is time we rebuild open source ideology to be based on equity, inclusion, and sustainability. We built the modern world. Now we need to take care of it and of ourselves.

Header photo by Julius Drost on Unsplash.

Categories
Internet

Facebook: Single Point of Failure

Facebook isn’t a social media platform, it’s infrastructure. We’ve built monolithic platforms on a web designed for plurality and distribution. Now these platforms have become single points of failure. 

“Are you able to send messages through WhatsApp?”

My wife was calling me from upstairs. She’d been messaging with other parents at our son’s preschool about plans for a Trunk & Treat during Halloween when the service suddenly went offline.

The internet has a magical ability to let people around the world experience the same thing at the same time. Unfortunately, the most noticeable of these shared experiences is when a major service goes down, as was the case on Monday, October 4, 2021. As if a switch had been turned off, Facebook, Instagram, and WhatsApp users all over the world were suddenly unable to access the services. All they got were apps stuck in update limbo and websites returning nothing.

Whenever there’s a problem with Facebook – arguably the most controversial and most heavily used platform on the web – a fair bit of schadenfreude floods other social networks. “Oh no, how are the anti-vaxxers going to do their research now?” quickly became a repeated refrain on TikTok. The #DeleteFacebook hashtag, already building up steam after an explosive 60 Minutes interview with whistleblower Frances Haugen about the social media platform’s relative inaction on harmful content and its effects on democracy, got an added fuel injection. Virtuous declarations of how long ago influencers had abandoned Facebook, and how anyone still on it was “part of the problem,” abounded. Meanwhile the same influencers were complaining about lost revenue due to Instagram being down. (Instagram btw is part of Facebook.)

Judging by the chatter on social media you’d think Facebook is a media platform mainly used to share boomer jokes, figure out what your high school friends are doing 20 years after graduation, and spread misinformation. And in the best of all possible worlds, that’s what it would be (sans the misinformation). But this is not that world, and that’s not an accurate description of what Facebook and its kin are. In this very real world, Facebook and Instagram and WhatsApp operate as critical infrastructure for everything from interpersonal communication through online business to financial transactions and government services.

In many countries in Africa, Facebook, Instagram, and WhatsApp operate as essential infrastructure. In 2019, WhatsApp was responsible for nearly half of all internet use in Zimbabwe. South Africans can renew their car license and perform other government services through WhatsApp. And when you go looking you find the same trend in countries and regions throughout Asia, Europe, South, Central, and North America, and Oceania. For millions of people around the world, the services Facebook provides are their primary tool for communicating with family and friends, consuming news and information, performing business transactions, interacting with local and federal government, even sending and receiving money. Caspar Hübinger writes more about this.

So when Facebook (and Insta, and WhatsApp) goes down, for 5 hours, without any meaningful information about what’s happening or when it will be back up again, it’s not the anti-vaxxers and boomers who are paying the price – it’s the millions of people whose lives and livelihoods depend on the platform and its kin.

Like I said: Facebook is infrastructure, and has become a single point of failure for the proper functioning of the web. So it’s more than a little bit ironic, in an Alanis Morissette way, that Facebook would go down due to a single point of failure in its own system: A faulty configuration change to its backbone routers that, among other things, took its own DNS servers off the internet.

The core premise of the web was to allow everyone to host their own files and services, and interconnect them through a common platform. This was specifically to get away from the problem of centralized services and single points of failure. Some 30+ years later and we’ve become dependent on monolithic and monopolistic platforms like Facebook who gobble up or destroy their competitors and try to be everything to everyone. We’re back to the same problem of single points of failure, only now those single points are global entities used by millions of people. And when these services go down, they cause immediate harm to their users.

And here’s the kicker: The success of Facebook is in no small part due to how we, the people who build the web, promoted and used and drove our families and friends and clients and communities to use Facebook. We invested ourselves in the idea of Facebook integration early on. We onboarded people to the platform. We built their communities and business pages and advertising integration. We replaced native comments with Facebook comments to generate more engagement on their company pages. We built giant community groups on Facebook. We added Facebook tracking pixels to our sites and streamlined our tools so our blog posts got automatically cross-posted to our Facebook pages. 

We helped make Facebook a single point of failure. And we are the only ones who can fix it.

So, the next time you feel compelled to shout #DeleteFacebook from the rooftops and declare yourself morally superior to the commoners who still languish on the platform you abandoned a decade ago for ethical reasons, remember that for millions of people we have yet to build viable alternatives.

The next time you think to yourself it’s only a matter of time before some government entity steps in and breaks up Facebook to reduce their power, remember that politicians trying to figure out how to keep terrorism, CSAM, and other harmful content off the web think Facebook is the web, and that things that are not Facebook – like your website – must be regulated as if they were Facebook. 

And the next time you set up a WordPress site, or a Gatsby site, or a Wix site, or any other site for your client, notice how easy it is to add Facebook integration to ensure your client gets to benefit from that sweet poison known as surveillance capitalism.

Facebook is infrastructure. Infrastructure change is a generational project. If we don’t provide viable low-friction user-centric alternatives to Facebook’s myriad of services soon, the web will become Facebook. That’s not hyperbole – it just hasn’t happened to you. Yet.

Header photo by Brian Yap.

Categories
Uncategorized

The CSS in JavaScript survey: Understanding why we use the tools we use

Short version: I’m running a CSS-in-JavaScript survey to understand why different solutions are used, and I’d love for you to fill out the survey and share it with the world.

Here’s the survey link: https://forms.office.com/r/WhVWT0D27F

Front-end web development has experienced a dramatic evolution over the past several years, in large part driven by the JavaScriptification of everything via front-end frameworks like React and Vue. Part of this evolution has been the introduction of CSS-in-JS tools to abstract CSS into JavaScript or a JavaScript-friendly layer.
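
To make the pattern concrete, here is a minimal sketch of what CSS-in-JS looks like in practice. It uses styled-components – one popular library among the many the survey asks about, chosen purely for illustration – and the component names are invented for the example:

    // A minimal CSS-in-JS sketch using styled-components (illustration only).
    import * as React from "react";
    import styled from "styled-components";

    // The styles live inside the JavaScript/TypeScript module as a tagged
    // template literal. The library generates a scoped class name at
    // runtime, so these rules apply only to this component.
    const SubmitButton = styled.button`
      background: rebeccapurple;
      color: white;
      border-radius: 4px;
      padding: 0.5rem 1rem;
    `;

    // The styled component is then used like any other React component.
    export const SurveyForm = () => <SubmitButton>Send</SubmitButton>;

Part of what the survey explores is why teams reach for this kind of abstraction – scoping, co-location with components, theming – rather than plain CSS.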

I’m embarking on a project to get a better understanding of not just which CSS-in-JS tools people are using, but also why they are using specific tools. Are people following real or perceived industry standards? Are there legacy issues at play? Or are there other reasons?

To get started on what will likely become a huge research project, I’ve created a CSS-in-JavaScript survey, and I invite you to share your insight with me. The anonymous survey will take about 5 minutes to fill out and has an option at the end to take part in a round of in-depth one-on-one interviews with me to explore the subject further.

Once the survey period is over, I will analyze and further anonymize the data before publishing the results on my personal blog at mor10.com. The data and analysis may also become part of a future conference talk on the same subject.

Full disclosure: This survey is a personal project run by me. It is not related to or endorsed by my employer, my wife, or anyone else.

Categories
AI My Opinion Politics

#BanFacialRecognition

tl;dr: The dangers of facial recognition far outweigh its benefits. If we don’t severely limit or outright ban the development of this technology today, we run the risk of eroding privacy to the point it ceases to exist.

On Saturday, I got an email from Facebook asking if I could verify whether a bunch of pictures it had uncovered were indeed of me. Among those photos were a series taken during 17th of May celebrations on Nesodden, Norway, in 1997 where I am seen in the crowd of a youth orchestra playing the big drum. The picture is blurry, and I’m wearing a weird hat over my long hair. Even so, Facebook’s facial recognition algorithm had correctly identified me.

In April, a woman posted a video on TikTok explaining how Google Photos had inadvertently sent an adult-themed video of her to her mother. The video had been taken in the kitchen with the fridge in clear view. On the fridge was a picture of the woman’s child. She had set Google Photos up to automatically share photos of her child with her mother. Thus Google used facial recognition to identify the child in the photo on the fridge and send the video to the woman’s mother. (I’m not going to link the story here because it appears the original TikTok has been set to private, but a simple search will surface it for you if you’re interested.)

If you need to apply for a loan or a mortgage in the near future, chances are some of the companies you approach may use facial recognition to check your identity and protect themselves from fraud. In China, facial recognition systems are already in use in the finance industry to verify customer identities or “examine their expressions for clues about their truthfulness.”

Governments are eyeing facial recognition for everything from immigration screenings to access to public services.

Meanwhile, errors in facial recognition are leading to people – predominantly racialized and otherwise marginalized people – being denied loans and services, even being arrested and put in jail.

Facial Recognition Considered Harmful

If we know one thing about facial recognition it is this: The technology is flawed, inaccurate, and often downright racist. Technologists will counter that over time, the technology and the algorithms underlying it will improve to the point it will be virtually infallible. I don’t disagree; the pursuit of all technology is to endlessly converge on perfection, and thanks to machine learning and AI supported by ever-present and ever more advanced cameras, the future of “perfect” facial recognition is a foregone conclusion.

Here’s the thing though: The question isn’t whether facial recognition technology will be able to deliver on its promise; it’s whether the use of the technology will change our society in ways that are harmful. I firmly believe the answer to that question is yes. Facial recognition is already harmful, and those harms will only get worse.

Yesterday two EU privacy watchdogs called for the ban of facial recognition in public places. Just a few days earlier, the UK Information Commissioner said she is “deeply concerned” live facial recognition may be used “inappropriately, excessively or even recklessly”. The people who look carefully at the implications of this technology tend to converge on the same conclusion: This stuff is too dangerous, and needs to be aggressively limited.

Supporters of facial recognition will immediately respond with the many useful applications of the technology: It makes it easier to log into your phone! You can use it to open your front door! Imagine not having to carry a clunky ID card around! It can help fight crime and prevent fraud, abuse, and terrorism! If you’ve done nothing wrong, you have nothing to fear from facial recognition!

Deontologists, and Edward Snowden, disagree. From his book “Permanent Record”:

“Because a citizenry’s freedoms are interdependent, to surrender your own privacy is really to surrender everyone’s.”

“saying that you don’t need or want privacy because you have nothing to hide is to assume that no-one should have or could have to hide anything.”

While on the surface, facial recognition appears to be a tool of convenience, in reality it is a tool of surveillance, manipulation, and oppression.

The value of facial recognition lies in how it automates wholesale omnipresent surveillance for commercial, law enforcement, and political oppression purposes.

In the 2002 movie “Minority Report” there’s a scene where the protagonist walks through a mall and is targeted by personalized advertising. In the movie, this targeting is done using retinal scans. Today, 20 years later, that exact same targeting already exists, thanks to facial recognition.

If you’ve gone to a mall and looked at one of those enormous digital displays showing mall information and ads, chances are your face and facial expressions have been scanned, logged, and probably used to target you, all without your consent. In 2020 a mall real estate company in Canada was found to have collected over 5 million images of shoppers via these kiosks. In 2017 a pizza restaurant in Oslo, Norway was found to use facial recognition to target gendered ads to patrons looking at a digital menu: sausage pizza for men, salad for women.

Can does not imply ought

Facial recognition is a prime example of a constant struggle within science and technology: Does the fact we can do something mean we ought to do it? From a purely technological perspective, the answer will always be “yes,” because that is how we evolve our technology. From an ethical perspective, the answer is more nuanced. Rather than judge the merit of a technology solely on its advancement, we look at what the technology does to us, whether it promotes human flourishing, and whether it causes harm to people, communities, and society.

The technology for cloning humans has been around for decades, yet we don’t clone humans. Why? Because the further development of human cloning technology has severe and irreparable negative consequences for the human race. We can do it, but we don’t, because we know better.

This is the determination we need to make, today, about facial recognition technology: We can do it, but is this technology promoting human flourishing, and will its harms be outweighed by its benefits?

I’ve spent years grappling with this question and talking to people in the industry about it. After much deliberation, my conclusion is crystal clear: This technology is too dangerous for further development. We need a global ban on deployment and further development of facial recognition technologies, and we need it now. Failure to act will result in the destruction of privacy and immeasurable harms to individuals, groups, and society as a whole.

Think of it this way: Right now you can buy a drone with a high definition camera, buy access to one of the many facial recognition platforms available on the web, fly that drone to a public place, find and identify any person within that space, and have the drone track that person wherever they choose to go. That’s not science fiction. That’s current reality.

Oh, and once you find out who the person is, you can also stalk them on social media, find out where they work, who their friends are, what they like to eat, where they like to hang out, etc. etc. Which is all harmful to privacy. But the truly dangerous part here is the facial recognition: it gives anyone the capability to identify anyone else from a single photo or a crappy video clip, and from there to find all the other information. As long as facial recognition exists, we cannot control who can identify us.

And if you think you can opt out, the answer is no. Facial recognition companies have already scraped the internet clean of any and all photos of you, and your face has been catalogued. John Oliver did a great bit on this last year. And yes, it will make you want to throw your phone away and go live in a cave in the forest.

Technology is not inevitable.

“But Morten, these technologies already exist. The cat’s out of the bag so to speak.”

True. Which is why a global ban on the deployment, use, and further development of this technology is something we have to do right now. We cannot afford to wait.

Here’s the bottom line: There is no such thing as inevitable technology. We, as a society, can choose to not develop technologies. We can determine a technology to be too harmful and stop developing it. We can assist those already heavily invested in those technologies to pursue other less harmful technologies, and we can impose penalties on those who use or develop the technology in spite of its ban. It won’t be perfect, but it is absolutely possible.

Facial recognition terrifies me, and I’m a white man living a middle-class life in Canada. The harms of facial recognition are far more severe for women, people of color, people who fall anywhere outside the binary gender or sexuality spectrum – the list goes on, indefinitely. Any day now we’ll be confronted with a news story of some oppressive regime somewhere in the world using facial recognition to identify and jail LGBTQIA2S+ people. Governments are investigating what is effectively pre-crime: using facial recognition along with what amounts to AI phrenology to determine the criminality of a person just by looking at their face.

I could go on, but you get the point: We are trading our privacy and the security of our fellow people for the convenience of logging onto our phones by just looking at them. That’s not a trade I’m comfortable with, and I hope you agree.

On the proverbial slippery slope, we are rapidly nearing the bottom, and once we’re there it will be very difficult to get ourselves back up. As the man on the TV says, avoid disappointment and future regret: act now! Your privacy and our collective future depends on it!

#BanFacialRecognition

Originally posted on LinkedIn.

Categories
My Opinion

Blogging is dead. Long live ephemerality.

Text in images is the least accessible, most ephemeral way to put important information into the world. It exists, for a brief moment, only for those who happen to see it, and then it’s lost, forever. Informational entropy at its most extreme. This is how we lose our history in real-time.

A tweet caught my eye earlier this week. It featured a series of images of white text on a black background originally posted on Instagram. In these images, the author, a woman from Istanbul, Turkey, describes how a hashtag and social movement created to draw attention to the murder of women in her country has been co-opted by people who don’t know the meaning of the #ChallengeAccepted hashtag. (You can read more about this story in reports from KQED and The Guardian.)

This tweet, the originating Instagram post, and the resulting Facebook posts of the same images of text exemplify a trend I’ve observed over the past several years: Blogging has moved from text in blogs to images of text on social media.

I think it’s time to say out loud what many of us have been discussing in private for years: Blogging as we knew it is dead. We, the people who built and promoted blogging tools, failed at convincing people that owning their blog and controlling their content is important. We failed at providing the publishers of the world with the tools they needed. And, most crucially, we failed at keeping pace with the changing behaviors and attitudes of our users. People don’t want a permanent web log; they want an ephemeral lifestream – there, and then gone again. And they want absolute control over the appearance and curation of that stream.

An image of an image of text

At the height of the worldwide Black Lives Matter protests, my social media feeds (in particular Instagram) overflowed with excellent information about the issues of racial inequality and inequity, hidden biases and how to overcome them, white privilege, nationalism, supremacy, how to be an ally, how to support BIPoC, etc. Almost all this information, crucial to the forward momentum of the civil rights struggle of our time, was shared as meticulously designed and entirely inaccessible images of text. 

What do I mean by “entirely inaccessible?” The web is built to transmit text from author to reader, and web browsers and tools are designed to parse that text in a way the reader can access: displayed on a screen; read out loud; printed on a braille display; or something else. For information to be accessible, it needs to be provided as plain text. An image of plain text contains no accessible information unless that information is appended in an alternative text attribute.
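
To illustrate the difference, here is a minimal sketch written as React/TSX (the file name and alt text are invented for the example):

    import * as React from "react";

    // Inaccessible: an image of text with no alternative text. A screen
    // reader has nothing to announce, and the words in the image cannot
    // be searched, translated, or rendered on a braille display.
    export const InaccessiblePost = () => <img src="quote-card.png" />;

    // Accessible: the full text of the image travels with it as alt text,
    // so assistive technology can present it as plain text again.
    export const AccessiblePost = () => (
      <img
        src="quote-card.png"
        alt="Full text of the quote shown in the image, reproduced word for word"
      />
    );

Even the second version is a workaround: the information would be better published as actual text in the first place.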

Worse still, Instagram in particular has no functional way of sharing a post with others outside the ephemeral Instagram Stories feature. As a result, much of this already inaccessible information is reshared as photos of the original post in stories, effectively a copy of an inaccessible image. In some cases, when the post is evocative enough, people will reshare it across other social media, by taking screenshots and posting them on Facebook or Twitter. And then people take screenshots of these posts and re-share them. A copy of a copy of a copy.

This resharing of online content is nothing new. Artist Mark Samsonovich explored the informational entropy associated with resharing (at that time called “regramming”) content over social media back in 2014. What’s different now is the type of content being shared. We have moved from sharing text posts with images or videos attached to sharing images of text.

What the user wants 

According to my students, I am “an old.” And they are right. I started blogging when blogging was a new thing. I spent 15 years of my life building and promoting blogging tools and telling anyone who would listen about the importance of owning your content and creating a permanent record of your shared ideas online. I surrounded myself with likeminded people, and for a good while, we ruled the world. Then last year a series of events forced me to take a critical look at my understanding of the evolving landscape. What I found all around me were the walls of an indie-web adjacent echo chamber. It had become my comfort zone, and everything I believed was echoed right back at me. But outside, the world had moved on, and I had somehow not noticed, or at least not accepted what I was seeing.

A pivotal moment came when I discussed social media sharing with some younger relatives. I told them about how I publish content online and they looked at me in incredulity. “Why would you bother posting something on your blog?” they asked. “Nobody is going to see that!” I tried to explain that yes, people do see it and they shrugged. “Sure, when you post tutorials and stuff, people see it. But that’s not blogging. That’s … publishing. When I post stuff, it’s for my friends, for my followers. And it’s not forever. It’s for right now. If they see it, great. If not, their loss.”

In that moment I realized that what I value – the permanence of my published information – is the opposite of what they value: the impermanent ephemerality of sharing moments from their lives.

“And seriously Morten,” one of them said, “the blogging tools are really not good.” I tried not to take this too personally and asked them to continue. “I can post text and images and other stuff, but I can’t control how it looks. When I post something on Insta or Snap, I want what I see on my screen to be what people see on their screen.” They went on to show me examples of posts and gave me a walk-through of the tools they used to make their posts just right. 

I was floored. When I post to Instagram, it’s mainly either photos straight off my phone or straight off my camera. When they post to “Insta”, it’s often a design process involving two or three different apps. A photo may pass through a Snap filter and other editing tools before landing on the Instagram feed. And before it does, other photos may be culled to ensure the “wall” (the grid of photos you see when you go to the user profile) is curated and looks just right. There are even tools for previewing what your wall will look like once you’ve posted a new photo!

And when it comes to images of text and Instagram stories, the process is essentially a mobile version of a full design sprint with iterations, advanced tools, typography experiments, filters, the works.

What these users want is true WYSIWYG – What You See Is What You Get – which is what we’ve always wanted on the web but never got because the web can never fully be WYSIWYG.

Photos of text on social media give the new generation of web creators what Flash gave my generation: Absolute control, well beyond what the web platform can offer.

What is lost is the future

Some of my friends have become successful influencers today, the same way some of my friends became successful bloggers 10 years ago. They post content online, they get sponsorship deals, they make serious money. And just like 10 years ago, I know many more who pour hours of their day into reaching that influencer status. To me, today’s influencers are yesterday’s successful bloggers. Same deal, new wrapping. And to be quite honest with you, for all the talk of influencers posting bad content and being bad … influencers, the bloggers of my time were no better. As with the bloggers of my generation, I take issue with the commercialization and marketing aspects of modern-day influencers. But I also understand why they do it and why it’s so effective. Getting a peek into the curated life of someone you look up to will always be appealing, and I gladly spend time out of my day looking at what people post on all social media channels for this exact reason.

What saddens me is how we lost the battle of publishing to the commercial platforms, in large part because we trapped ourselves in an echo chamber: the anachronism that is the blogosphere.

It saddens me because the shiny surface and WYSIWYG-ness of the walled commercial gardens people use today is built on a graveyard of dead information. These platforms are not built for sharing, they are built for user retention and engagement. Link in bio is not a moderation tool to avoid link spam, it’s a slow knife to kill the open web. And text in images is the least accessible, most ephemeral way to put important information into the world. It exists, for a brief moment, only for those who happen to see it, and then it’s lost, forever. Informational entropy at its most extreme. This is how we lose our history in real-time.

We failed because while setting up space on a shared server, installing WordPress, publishing an article, and meticulously sharing it out to the world was revolutionary 15 years ago, today it is an onerous task reserved for people who live in the past (at least according to my aforementioned younger relatives). To them, time is better spent designing a nice-looking image with some text and putting it in their Insta Story, or on Snap, shared only with the people they choose, for a day or two, before being lost to our memories and the occasional copy of a copy of a copy.

Cross-posted to LinkedIn.

Categories
My Opinion

100 Days

I began a journaling project on March 13, 2020 as the realities of the COVID-19 pandemic started hitting us full-force. It was for me, to put down my thoughts at the end of each day, and for our son Leo, so he has a day-by-day account of events as they unfolded when he gets curious about everything that happened in 2020 or writes a school paper about this period a decade from now. Here’s my entry from Sunday June 21st, at the 100 day mark, submitted for the record.

The worst part is the uncertainty. In March I told one of my co-workers it felt like we were all trapped on a train heading for a cliff, knowing at some point the tracks would drop away and we’d drop with them, but not knowing when that would happen. At 100 days, that analogy doesn’t cut it. I struggle to find a relatable comparison to fully encapsulate the anxiety, the exhaustion, the tedium, the frustration, the endlessly dragged-out slow burn eating its way through the fabric of everything.

You broke your leg two weeks before the lockdown. 9 weeks of first a cast, then a brace, and somehow that was just one small part of the madness of the past 100 days.

A playground play structure covered in orange plastic netting.

As the lockdown descended on British Columbia in March, I went on my every-other-day evening runs through the neighbourhood. Each day the number of people on the streets would drop while the number of parked cars grew and grew. Driveways overflowed, then streetside parking. It struck me how many cars belonged to each household. Then people started washing them. Nothing better to do. Slowly my runs turned into a car show of sorts.

Two weeks in and I was all by myself in the world. 6 kilometers of usually buzzing neighbourhoods, main arterial roads, busy playgrounds and parks now devoid of people. Play structures surrounded by yellow warning tape and covered in orange plastic nets, roads without cars, sidewalks without people. At one point I stopped on Kingsway, took my earbuds out, and heard only two crows cawing at each other a block away. The constant background hiss of traffic was gone, leaving only nature as the bed track for my voyage through the world.

Empty store shelves surrounded by empty cardboard boxes once holding Purex tissues.

We were scared. Everyone was scared. The virus occupied our minds every minute of every day. The stores were stripped bare of first hand sanitizer, then rubbing alcohol and aloe vera gel, then cold medications and bleach. Stores at the mall started closing. The province closed the restaurants, then the community centres and gyms. Then the oil price tanked and suddenly gas in Burnaby was going at $0.92/l – lower than I’d ever seen it in my 17 years in Canada. The parking lot at the mall became a sprawling emptiness populated by discarded surgical masks and rubber gloves slowly migrating toward drains.

Businesses closed. People lost their jobs. A lot of people lost their jobs. Our friends lost their jobs. I talked to my co-workers about job survivor guilt. What at first felt like a slow-moving train was starting to feel like an avalanche driving us towards a tsunami.

Your preschool closed and you didn’t understand why. We tried to explain but it made no sense to a 3-year-old. You asked to hang out with your friends and we said no. You thought you’d done something wrong and we told you “no, it’s because of COVID-19.” You asked when you’d be allowed to play with your friends again and we said we didn’t know. Eventually you started talking about all the things you’d do once COVID-19 was over. “We’re going to have a big party with all my friends,” you said. “We will visit bestemor and bestefar in Norway,” you said. “Can we have a party with all my friends this weekend?” you asked and we again explained that no, we can’t, because of COVID-19. Yesterday you looked me square in the face and said “I’m so angry at COVID-19. I don’t think COVID-19 will ever end.” I hope I hid the pain well.

Hand puppet pig with cloth mask.

100 days later and you’re back in preschool, at reduced capacity, with fewer kids, less freedom to roam, and a lot of outdoor time. Hand sanitizer is back on the shelves and available right at the counter, in new varieties and strengths and fragrances and consistencies and brands. The mall has re-opened, at reduced capacity. There are direction signs for walking, lines outside every store, plexiglass shields on every counter, plastic coverings over the chairs at the food court. People are back on the streets at night, and I am once again back to leaving the sidewalk to get around unyielding pedestrians, only now I go on the outside of the parked cars to keep proper social distancing. We wear masks when we go to crowded places, though most of the people around us have stopped wearing masks. There is hand sanitizer in the car, at our front door, and in our bags.

Table and chairs in front of a Starbucks restaurant tightly wrapped in thermal plastic.

100 days later the world has also changed in another way. In May, in response to yet another violent and unlawful police killing of yet another black man, people first in the US and then all over the world braved the pandemic risk and flooded the streets to make one thing clear: Black Lives Matter. In the midst of a pandemic lockdown, maybe even because of the pandemic lockdown, people let the pent-up frustration of racial injustice manifest itself in public action. What started as scattered protests turned into a world-wide movement. More than a month later, the protests are still happening and the world is finally listening. When you look back on this time I hope it is described not only as an unprecedented pandemic, but also a transformative moment for racial justice in the USA and around the world. For the first time in my lifetime it feels like we as a society are moving in the right direction on this issue. Incrementally, slowly, painfully, but we are moving. As bestefar’s aunt said, life comes in lumps; long durations of flat normality interrupted by sudden lumps of everything happening at once. That’s certainly what it feels like. Everything happening, all at once.

100 days later, COVID-19 is very much part of our life, still infecting millions of people, still making some sick, still killing some. They say to form a habit you need to do something for around 21 days straight. After 100 days of looking at ever-climbing numbers of infected, sick, and dying, what was at the beginning a klaxon pointed directly at our faces, reminding us of our own mortality, has become the new normal.

A discarded cloth mask lying in the street next to a curb.

On March 13, when I started writing this, the global death toll was 4,718 and everyone was in fear for their life. Today, the global death toll is 470,000, and every day more people are putting away their masks, going back to work, dining out, and demanding restrictions be lifted.

The pandemic is not over. By many estimates, we are still in the first wave, and we will have to ride it for months if not years and hope it doesn’t engulf us. 100 days of COVID-19 has exposed deep fractures in our societal fabric, and how we deal with these fractures over the next 100 days will determine not only what the immediate future looks like for us, but what your future will look like decades from now. When I grew up people talked about my generation as the first in a long time that would not be better off than the last. I fear your generation will look at this analysis as a cruel joke. Unless we, the adults living through this right now, make all the right decisions, the world you grow up in will be nothing like what it should or could be. The virus amplifies every mistake made, and the uncertainty makes it hard to recognize mistakes even after they happen.

Billboards across Greater Vancouver now show various nondescript art pieces in place of advertisements.

Some say the best we can do when faced with uncertainty is to embrace it. I strive every day to embody this philosophy: Change the things I can, let the things I can’t change play themselves out without allowing them to frustrate me. That’s not easy knowing the things I can’t change are the things that will most directly impact your future.

One evening in August 2017 I went for a run. The sun was still up, the sky was bright and blue and without a single cloud. The next morning I could hardly breathe. We had left the window open and our house was filled with smoke. A forest fire hundreds of kilometers away had dumped its cloud directly on us. For two weeks the brown sky trapped the heat of the sun making the air unbreathable and our house an insufficient refuge. “Imagine if that fire was here,” I said to your mom and we rested assured that would never happen. A year later a forest fire, the biggest in the US to that date, surrounded our head office, displacing many of my co-workers and putting things in limbo for months. The fragility of everything screamed in our faces: Even when you think everything is fine, things can happen!

An angry orange sun forces its rays through a thick cloud of smoke over a suburban street.

That’s what it feels like now. We are in a vast, all-encompassing forest fire. Some places, like BC and Norway and Denmark and Taiwan where our family is, have been relatively unscathed, dealing mostly with smoke and the occasional spot fires. In other places, the fire burned through towns leaving immense destruction and death tolls so high they are impossible to process. In yet other places, the fire is slowly creeping through the landscape and seems impossible to stop, either for practical, political, or societal reasons. And even though right now, where we sit, the sky is clear, I don’t think this fire is over. I’m not even sure it has fully begun.

That’s the uncertainty, and that’s why it’s the worst part: We know the fire is still burning, we know it could be burning under our feet right now, and we don’t know how or when it will end. Yet somehow we must embrace this uncertainty and move forward, together.

A BBQ with friends in an empty parking lot in the rain.

In many ways I am glad you’re not old enough right now to fully understand what is happening; to see how the uncertainty is wearing on your mamma and me and everyone around us; to see our modern society desperately try to stop a fire we don’t fully understand; to see people refuse to accept reality and cling to conspiracy theories to explain the unexplainable while putting everyone else at risk. And I hope that by the time you read this, it will all be a strange memory of a year when things were somehow different. Though I fear it will instead be the moment that defines your generation.

100 days and we are still here, in our house playing with your toys, going on walks in the forest, talking to our family in Norway over the internet, doing everything to make this new normal as normal as possible to give you the best chance at being able to build a future you will find meaningful. That’s the thing about trains and waves and avalanches and forest fires: they eventually end. And when they do we pick up our lives and what was destroyed, put things back together again, and build the future together.

We are together, today, and the day after this day, and we will be together for the next 100 days, and the hundred days after that. And when all of this is over, we will have that party, with all your friends, and we will do the things and go to the places that suddenly became impossible. COVID-19 will end, or we will find a way to live with it. That is my promise to you. We will get through this, together.