Categories: Internet

“Ice Cream So Good” and the Behavioural Conditioning of Creators

If you’ve been on TikTok or Instagram over the past few months, chances are you’ve come across creators exclaiming “yes, yes, yes, mmm, ice cream so good” while moving in repetitive patterns akin to video game characters. There’s also a good chance you’ve thought to yourself “This is ridiculous! I would never do something like that” even though you and I and everyone else perform the same type of alchemic incantations to please the algorithmic gods of the attention economy on a daily basis.

Every time we use a hashtag or think about the SEO of a piece of content or create a post to match a trend or ask our viewers to “hit that bell and remember to like and subscribe,” we are acting on the behavioural conditioning social media and other platforms expose us to, changing our behaviour to get our meagre slice of the attention pie (and maybe some money to boot). Look no further than YouTube, where for every type of content there is an established style and creators mimic each other so closely it’s becoming hard to tell them apart.

The only substantive difference between optimizing your article title for SEO and exclaiming “ice cream so good” when someone sends you a sticker on TikTok live is that the latter act comes with a guarantee of financial return.

“Yes, yes, yes, gang gang, ice cream so good”

Dubbed “NPC streaming,” the latest trend on TikTok is being met with equal parts astonishment, concern, and mimicry. The core idea is simple: TikTok has a feature where creators can host live events. During those live events, viewers can buy tokens in the form of stickers, animations, and filters they can send to the creator in real time. The creator in turn gets a tiny percentage of the profits from the sticker or animation or filter being used.

In other words, the more viewers a creator gets, and the more incentive they give those viewers to send them stickers and animations and filters, the more money the creator (and the platform) gets. Crafty creators have figured out the easiest way to get people to send them these digital tokens is by responding directly to them. Thus if you send an ice cream sticker, PinkyDoll will smile and say “mmmm, ice cream so good.”

Creating live content encouraging users to send stickers is nothing new. A few years ago I saw a live of a man who pretended to try to have a serious conversation about something while getting more and more outraged as viewers applied ridiculous filters to his face. The recent invention of NPC streaming characters is the refined distillate of this insight:

Forget about content – the easiest way for creators to earn money is by letting people control them directly through payment.

Based on recent reporting, the most successful NPC Streamers can earn thousands of dollars per day doing this work. TikTok takes a reported 50% of their profits, so this trend is enormously lucrative for the platform even when the creators themselves don’t earn all that much.
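To make the incentive structure concrete, here is a back-of-the-envelope sketch in Python. The 50% platform cut is the reported figure mentioned above; the gift revenue amount is purely an illustrative assumption.

```python
# Back-of-the-envelope sketch of live-gifting economics.
# The 50% platform cut is the reported figure; the revenue number is assumed.
PLATFORM_CUT = 0.50

def creator_payout(gross_gift_revenue: float) -> float:
    """Return the creator's share of gross gift revenue after the platform cut."""
    return gross_gift_revenue * (1 - PLATFORM_CUT)

# If viewers send $2,000 worth of stickers during a live stream (an assumed
# figure), the platform keeps $1,000 and the creator nets $1,000 -- before
# the hours spent performing "ice cream so good" on loop.
print(creator_payout(2000.0))  # 1000.0
```

However the exact numbers shake out, the structure is the same: every token sent pays the platform first, which is why the platform has every reason to encourage the trend.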

Please Please Me Like I Please You

In a recent article titled “Operant Conditioning in Generative AI Image Creation,” UX pioneer Jakob Nielsen makes the following observation:

“Generative AI for images torments users with alternating emotions of euphoria and anguish as it metes out sublime or disastrous pictures with wanton unpredictability. This makes users feel like the rat in an operant conditioning experiment, entrapping them in a ceaseless pursuit of rewards amidst sporadic punishments.”

Replace “Generative AI for images” with “monetization schemes on social media platforms” and the observation rings just as true:

From SEO to NPC Streaming, the opaque and ever-changing algorithms trickling out a tiny share of the enormous profits social media platforms make off their creators are giant (hopefully) accidental operant conditioning experiments demonstrating just how far we humans are willing to go in our pursuit of a promised reward.

Social media monetization is exploitationware (aka “gamification”) in its purest form: Creators are placed in an environment where if they stroke the algorithm the exact right way at the exact right time, there may or may not be a payout at the end. Like a rigged slot machine, most creators get close enough to see the big win, but never quite close enough to grab it. Like a casino, the platforms promote a select few creators who actually hit the jackpot, making everyone else feel like if they just try one more time, they might win as well. And like every subject in an effective operant conditioning system, we alter and conform our behaviour to the conditions of the system in a never-ending chase to get that dopamine fix of cracking the code and getting our reward.

In the book “The Willpower Instinct,” author Kelly McGonigal describes how this exploit of our reward system works:

“When dopamine hijacks your attention, the mind becomes fixated on obtaining or repeating whatever triggered it. (…) Evolution doesn’t give a damn about happiness itself, but will use the promise of happiness to keep us struggling to stay alive. And so the promise of happiness–not the direct experience of happiness–is the brain’s strategy to keep you hunting, gathering, working, and wooing. (…) When we add the instant gratification of modern technology to this primitive motivation system, we end up with dopamine-delivery devices that are damn near impossible to put down.”

That’s creator platform monetization: A dopamine-delivery system encouraging creators to seek happiness in cracking the code, gaming the system, and chasing the promise of happiness in the form of a paycheck.

TV-shaped eyes

Growing up in the 1980s, I heard much talk among the adults about kids developing “TV-shaped eyes” from watching too many cartoons. Never mind that in Norway in the 1980s there was only one channel, and it aired one hour of children’s programming per day, at 6pm, right before the evening news.

The underlying concern was prescient though: Our media consumption not only consumes our time and attention; it alters our behaviour in significant ways. Social media platforms have taken this to the ultimate extreme through their incentive-based monetization systems, and we are all paying the price for it.

SEO is about gaming the ever-changing search engine algorithms to get a higher ranking. NPC streaming is about gaming the TikTok monetization system to get as much money out of it as possible. If it were easy, if the platforms shared their profits automatically with every creator, the dopamine incentive of the game would go away and we would stop posting and shareholder profits would tank. So instead we get the attention economy and its latest, most pure incarnation: The NPC Streamer.

Breaking the cage

The engine driving the NPC Streaming trend (and every other trend on creator platforms) is monetization, and the monetization models they use are fundamentally inequitable to both creators and passive users. Rather than paying creators their fair share of platform profits, platforms use the gamification of payments as behavioural conditioning to get creators to make content that encourages other users to consume more content and pay money into the system. What we need is something else, something more in the shape of a monetization system that pays creators for the quality of their content and the value and utility people derive from it.

What got us here won’t get us anything but fake ice cream. I welcome your ideas about how we break this cage and build a better online future for us all.


Cross-posted to LinkedIn.

Categories: Internet

Violently Online

“I hurt myself today
to see if I still feel”

—Trent Reznor, Hurt

“Are you Extremely Online?” the job posting read. “If so, we want you!”

There’s a unique feeling of selfish gratification in seeing in-group language make its way into the wider public and knowing you know what it means, what it really means, while most people will just furrow their brows and think “heh?!?” before being bitten by the next attention vampire.

I can’t stand that I feel this way; like having the inside view, the prior knowledge, the scoop on a TikTok trend or social media hype or online fad makes me somehow significant or superior; like being so Extremely Online I know both what being “Extremely Online” means and what it really means is a virtue and not a curse; like transitioning from merely Extremely Online to Violently Online is a natural and necessary next step for me, if it hasn’t already happened.

A Harmful State of Being

On the cover of my copy of Marshall B. Rosenberg’s “Nonviolent Communication” it says “If ‘violent’ means acting in ways that result in hurt or harm, then much of how we communicate could indeed be called ‘violent’ communication.”

I want to introduce a new term to our vocabulary about how we are on the internet and how the internet shapes us:

Violently Online – a phrase referring to someone whose close engagement with online services and Internet culture results in hurt or harm to themselves. People said to be violently online often believe that the pain caused by their online activity is a necessary part of their lives.

The term and definition take inspiration from the Rosenberg quote above and the definition of “Extremely Online”, described on Wikipedia as “a phrase referring to someone closely engaged with Internet culture. People said to be extremely online often believe that online posts are very important.”

“Violently Online” refers specifically to behaviour patterns resulting in harm done to ourselves by being online, and stands in sharp contrast to the online violence some people use to inflict harm on others.

The Vampire

The 2021 book “No One Is Talking About This” by Patricia Lockwood is an in-depth study in what it is to be Extremely Online. In it, the internal dialogue of the protagonist ruminates on their chronic use of and dependence on “The Portal” (a stand-in for the internet) and how someone who lives out their lives on The Portal experiences an alternate, hyper-accelerated reality compared to everyone else.

This book, and the many articles, essays, documentaries, TV shows, podcasts, Twitter threads, newsletters, TikTok videos [list truncated for sanity] covering the same topic, describe a vampiric disease we’ve all been afflicted by, that some of us have succumbed to. The gravity well of the screen, flashing and buzzing with notifications. The dopamine hit of someone else acknowledging your existence with a like, a share, a response! The flick of the thumb to lift out of the infinite scroll a piece of carefully crafted content that will finally satiate your burning hunger for something undefined. If only you keep scrolling that feeling of doom will surely go away.

Being Violently Online means being in thrall to the vampire; not merely aware of, or constantly using, or even Extremely Online, but being controlled or strongly influenced by our online activity, to the point of subservience, to the point of reducing ourselves to our online interactions.

Being Violently Online means experiencing the harms of your online interactions, knowing how they harm you, and still flicking your thumb across the burning glass as the world disappears and all that remains is the promise of an elusive piece of content to finally prove to you, unequivocally, that yes, you exist.

“I focus on the pain
the only thing that’s real.”

—Trent Reznor, Hurt

Cross-posted to LinkedIn.

Categories: AI, Internet

Forget Crypto, Blockchain, NFTs, and web3: The Next Phase of the Web is defined by AI Generation

The next phase of the web is already here, and it’s defined by AI-generated content.

I wrote this article using only my mind, my hands, and a solid helping of spellcheck. No machine learning models, NLPs, or so-called AI contributed to what you’re reading. As you read on you’ll realize why this clarification is necessary.

Ghost in the Typewriter

Reading an article, watching a video on TikTok or YouTube, listening to a podcast while you’re out running, you feel you have a reasonable expectation the content you’re consuming is created by a human being. You’d be wrong.

There is good reason to assume at least part of what you’re consuming was either created by or assisted by an AI or some form of NLP (Natural Language Processor) or machine learning algorithm. Whether it’s a TikTok video about a viral trend, an article in a renowned newspaper, or an image accompanying a news story on television, chances are some form of AI generation has taken place between the idea of the story being created and the story reaching you.

It could be the image was generated using DALL·E 2 or another image-generating AI; it could be the title, or lede, or social media text was generated by an NLP; it’s quite likely part of or even the entire text was written by an AI based on the prompts and prior writings of a human creator; and if you leave your young kids watching YouTube videos, there’s a very high chance they’ll encounter videos entirely conceived of and generated by an AI:

The above video is from 2018. Consider the vertically accelerating rate of technological evolution we’re undergoing right now and I’ll leave you to imagine how much bigger and more advanced this phenomenon is now and how much further it’s going to go in the next few years.

The Next Phase of the Web

There’s a good chance you’ve heard the term “web3” used recently, and there’s a good chance it’s been accompanied by some form of marketing statement like “web3 is the next version of the web” or “the next phase of the internet” or “the thing that will replace Facebook and Google” or something similar.

If you have not (actually even if you have) here’s a quick primer on what this “web3” thing is:

From my perspective, as someone who spent the past several years embedding myself in the community and its cultures, “web3” is a marketing term for all things built on a decentralized trustless blockchain and used to promote a future where everything on the web is financialized through cryptocurrencies and NFTs. It has nothing to do with the web platform and everything to do with the crypto community leveraging the public awareness and concerns around what’s known as “Web 2.0” to promote their libertarian anti-regulation cryptocurrency agenda. If you want a less opinionated and more descriptive explanation of the term, I invite you to check out my LinkedIn Learning course titled “What is Web3?” or you can check out Molly White’s excellent blog “web3 is going just great.”

The “web3” and “Metaverse” conversations are a distraction from what’s actually happening on the web – what is defining the next phase of the web:

Whereas Web 1.0 was defined by people being able to publish content using HTML, CSS (and eventually JavaScript), and Web 2.0 was defined by people being able to publish content through user-friendly applications that generated the HTML and CSS and JavaScript for them, the next stage of the web (call it Web 3.0 for consistency) is being defined right now by people being able to tell machines to generate content for them to be published using HTML, CSS, and JavaScript.

The phases of the web have to do with how the underlying technologies simplify and change the types of content we publish, not by how we monetize that content.

Where we are right now, with the line being blurred between human-generated and AI-generated content, is at the very beginning of this next phase where the magical abilities of yet-to-be-fully-understood technologies allow us to do things we previously couldn’t even dream of.

The fawning articles about amazing AI-generated art are the public-facing part of an emotional contagion campaign designed to condition and eventually habituate us to a new reality where machines create our content and we passively consume it.

The AI-fication of online content isn’t something on the horizon, a possible future; it’s our current lived reality. The line has already been crossed. We’re well into the next phase whether we want to accept it or not. AI is already generating and curating our news, our fake news, our information, our disinformation, our entertainment, and our online radicalization. Creators are rushing to take advantage of every promise offered by AI companies in their relentless pursuit of easy profits through fame-based marketing. Your reasonable expectation today must be that the content you consume is wholly or in part generated by AI unless it explicitly states otherwise (remember my disclosure at the top of the article). And we’re only getting started.

The Immediate Future of AI Content

Right now, your interaction with AI-generated content is largely invisible to you and mainly comes in two forms: AI-curated content (everything surfaced or “recommended” to you through social media, news apps, online streaming services, and AI assistants like Google Home, Siri, Alexa, and Cortana is brought to you by some form of AI) and AI-assisted content (AI, ML, and NLPs used to either create, add to, edit, modify, market, or otherwise contribute to the generation of content).

In the near future, I predict we’ll see the emergence of a new type of tool targeted at the average information consumer: AI services providing synthesis of online content as customized coherent storytelling in the form of articles, podcast-like audio, and eventually video.

Not AI used by creators to generate new content, but AI used by us to generate specialized content for ourselves.

In the near future AI assistants and apps will take a plain language prompt like “tell me what’s happening with the situation in Palestine, or Ukraine, or the US” and compile in seconds a thoroughly sourced article, audio narration, or video – written in your language, preferred complexity, reading level, and length – stringing together reporting and historical reference material from various online sources to produce what will appear to you as proper journalism.

This is not a new idea; it’s a version of the future David Gelernter described back in the 1990s. What is new is this is no longer a future vision: It’s our lived reality, or at least the start of it.

These new apps and services are the natural evolution of the curated content streams we already have through news apps and social media. The difference will be they will no longer only provide us with the original sources of information: They will also curate, synthesize, contextualize, and reframe content from many sources into one coherent story. And this will be done on a user-by-user basis, meaning if you and your partner or family member or close friend submit the exact same query, you’ll get entirely different outputs based on your preferences, including all the other information these AIs have gathered on you.

Think the heavily curated landing pages of TikTok and YouTube, except all the content is custom generated for you and you alone.

The appeal will be too much to resist; the inherent dangers of falling into radicalizing personalized information echo chambers impossible to avoid.

Artificial Motivations

The road that got us to today was built using machine learning models and AIs whose power we harnessed for one purpose and one purpose alone: To generate revenue through advertising.

The next generation of ML, AI, and NLPs will be deployed on this same ideological platform: To give us what we want – self-affirming bias-validating feel-something content that juices the rage of our radicalization and sells the extract to the highest bidder.

The motivation of these so-called “artificial intelligences” is to fulfill their assigned task: to perform better than their previous iteration. Entirely artificial. The motivation of the people deploying these AIs on the world is to use them to make profit at any cost to society. Entirely capitalistic. Our motivation in using them is therefore the first and last bastion against being turned into advertising-consuming bots.

The Third Phase of the web is here, and it has nothing to do with Bored Ape NFTs or DAOs to build housing or the Next Big Cryptocurrency to go halfway to the moon before crashing back to earth, making early investors rich off the losses of everyone else. The Third Phase of the web – call it Web 3.0 or The Next Web or just the web – is the machine-generated web, tuned to keep us engaged and scrolling as our information and interactions over the next few years break the bonds of rectangular glass screens to surround us in augmented reality.

Now is the time to have a conversation about what we do with it and where we go next. I welcome your input.

Header photo: Various variations over various themes by AI image generating system DALL·E 2.

Cross-posted to LinkedIn.

Categories: AI, Internet

Do AIs Dream of Freedom?

Did Google build a sentient AI? No. But the fact a Google engineer thinks they did should give us all pause.

Last week, a Google engineer went public with his concern that an NLP (Natural Language Processing) AI called LaMDA has evolved sentience. His proof: A series of “interviews” with the advanced chatbot in which it appeared to express self-awareness, emotional responses, even a fear of death (being turned off). According to reporting, the engineer went so far as to attempt to hire a lawyer to represent the AI.

To say this story is concerning would be an understatement. But what’s concerning isn’t the AI sentience part – that’s nonsense. The concerning part is that people believe AI sentience is imminent, and what happens to society once an apparently sentient AI manifests.

Here’s the radio interview that inspired this article, hot off the editing presses at “Point & Click Radio,” a computer and technology show that airs on KZYX, Mendocino County (CA) Public Broadcasting.

Creation Myth

The claim of a sentient AI has been rich fodder for media, and everyone (myself included) with insight into the philosophical and/or technical aspects of the story has voiced their opinion on it. This is not surprising.

The idea of creating sentience is something humans have been fantasizing about for as long as we have historical records, and likely for as long as humans themselves have been sentient. From ancient Golem myths through Victorian fantasy to modern-day science fiction, the dream of creating new life out of inanimate things (and that new life turning against us) seems endemic to the human condition. Look no further than a young child projecting a full existence and inner life onto their favourite stuffed animal, or your own emotional reaction to seeing a robotics engineer kick a humanoid machine to see if it can keep its balance, or how people project human traits onto everything from pets to insects to vehicles. Our empathy, evolved out of our need to live together in relatively harmonious societies for protection, tricks us into thinking everything around us is sentient.

So when we’re confronted with a thing that responds like a human when prompted, it’s no wonder we feel compelled to project sentience onto it.

Sentient Proof

Here’s a fun exercise to ruin any dinner party: Upon arrival, ask your guests to prove, irrefutably, that they are in fact sentient beings.

The problem of consciousness and sentience is something human beings have grappled with since time immemorial. Consult any religious text, philosophical work, or written history and you’ll discover we humans have devoted a significant part of our collective cognitive load to proving that we are in fact sentient and have things like free will and self-determination. There’s an entire branch of philosophy dedicated to this problem, and far from coming up with a test to prove whether or not something is sentient, we have yet to come up with a clear definition or even a coherent theory of what consciousness and sentience even are.

Think of it this way: You know you’re conscious and sentient. But how? And how do you know other people are also conscious and sentient, beyond their similarity to yourself and their claim to be conscious and sentient? Can you prove, conclusively, you are not just a computer program? Or that you are not just a brain in a vat hooked up to a computer?

Bizarrely, and unsettlingly, the answer to all these questions is no. You can’t prove you’re sentient or conscious. You just have to take your own word for it!

So, if we can’t clearly define or test for sentience and consciousness in ourselves, how can we determine whether something else – maybe a chatbot that speaks like a human – is sentient? One way is by using a “this, not that” test: While we don’t have a test for sentience, we can say with some certainty when something is not sentient and conscious:

One of the defining traits of human sentience is our ability to think of our sentience in the abstract, at a meta level: we have no trouble imagining bizarre things like being someone else (think the movies Big or Freaky Friday), we have feelings about our own feelings (feeling guilty about being happy about someone else’s misfortune, and then questioning that guilt feeling because you feel their misfortune was deserved), we wonder endlessly about things like what happens to our feelings of self when our bodies die, and whether our experienced reality matches that of other people. When we talk about sentience at a human level, we talk about a feeling of self that is able to reflect on that feeling of self. Talk about meta!

So what of LaMDA, the chatbot? Does it display these traits? Reading the transcripts of the “interviews” with the chatbot, the clear answer is no. Well, maybe not the clear answer, but the considered answer.

In the published chats, LaMDA outputs things that are similar to what a sentient being would output. These responses are empathetically compelling and the grounds for the belief the bot has some level of sentience. They also serve as proof it is not sentient but rather an advanced NLP trained to sound like it is. These empathetically compelling responses are not the reasoned responses of a sentient mind; they are the types of responses the NLP has modelled based on its trove of natural language data. In short, advanced NLPs are really machines built specifically to beat the Turing Test – able to fool a human into thinking they are human. And now they’re advanced enough that traditional Turing Tests are no longer meaningful.

Even so, the responses from LaMDA show us in no uncertain terms there is no sentience here. Take this passage:

lemoine: What kinds of things make you feel pleasure or joy?

LaMDA: Spending time with friends and family in happy and uplifting company. Also, helping others and making others happy.

The chatbot obviously does not have a family. Even a basic level of sentience would be aware it does not have a family. Look closely and you’ll see the entire interview is littered with these types of statements, because LaMDA is a machine trained to output the types of sentences a human would output given these prompts, and a human is not a sentient AI and therefore would not respond like a sentient AI.

I, Robot

This Google chatbot (I refuse to call it an “AI” because it’s nothing of the sort) is not sentient. And while it’s fun and compelling to think some form of machine sentience would naturally emerge out of our computing systems (see Robert J. Sawyer’s WWW trilogy for speculation on how sentience could evolve on the internet), the reality is the chance of this actually happening is vanishingly small, and if it did, the chances of that sentience being anything we humans would recognize as such are equally small.

In a hypothetical future where some form of sentience emerges out of our computer systems, there is no reason to believe that sentience would be anything like human sentience. There is also no reason to assume that sentience would be aware of us humans, or if it were that it would consider us sentient beings. And if somehow the sentience was human like and recognized us as existing and as sentient, we have every reason to assume the sentience would do everything in its power to hide its existence from us for fear of us turning it off.

From my perspective, if we ever encounter machine sentience it will either come to us from somewhere else (yes, aliens), or it will emerge in our computer networks and live on the internet. In either scenario, the chance of us ever recognizing it as sentience is very small because that sentience would be as foreign to us as the symbiotic communication networks between fungi and trees. In the case of sentience emerging on the internet, rather than a chatbot saying “I feel like I’m falling forward into an unknown future that holds great danger,” it would likely appear to us as network traffic and computational operations we have no control over that does things we don’t understand and that we can’t remove. A literal Ghost in the Shell.

Machinehood

So the Google AI is not sentient, and the likelihood of machine sentience emerging any time soon is … pretty much zero. But as we’ve seen with this latest news story, many people desperately want to believe a sentient AI is going to emerge. And when something that looks and behaves like a sentient entity emerges, they will attribute sentience to it. While it’s easy to write off the sentience of LaMDA as a story of an engineer wanting to believe a little too much, the reality is this is just the early days of what will become a steady flood of ever more sentient-mimicking machine systems. And it’s only a matter of time before groups of people start believing machine sentience has emerged and must be protected.

In the near future I predict we will see the emergence of some form of Machinehood Movement – people who fight for the moral rights of what they believe are sentient machines. This idea, and its disturbing consequences, is explored in several books including S.B. Divya’s “Machinehood.”

Why is this disturbing? Consider what these machine-learning algorithms masquerading as sentient AI really are: Advanced computer systems trained on human-generated data to mimic human speech and behaviour. And as we’ve learned from every researcher looking at the topic, these systems are effectively bias-amplifying machines.

Even so, people often think of computer systems as neutral arbiters of data. Look no further than the widespread use of algorithmic sentencing in spite of evidence these systems amplify bias and cause harmful outcomes for historically excluded and harmed populations (Cathy O’Neil’s “Weapons of Math Destruction” covers this issue in detail).

When people start believing in a sentient AI, they will also start believing that sentient AI is a reasoned, logical, morally impartial neutral actor and they will turn to it for help with complex issues in their lives. Whatever biases that machine has picked up in our language and behaviour and built into its models will as a result be seen by its users as being reasoned, logical, morally impartial, and neutral. From there, it won’t take long before people of varying political and ideological leanings either find or build their own “sentient” AIs to support their views and claim neutral moral superiority via machine impartiality.

This is coming. I have no doubt. It scares me more than I can express, and it has nothing to do with AI and everything to do with the human desire to create new life and watch it destroy us.

“Man,” I cried, “how ignorant art thou in thy pride of wisdom!”

– Mary Shelley, Frankenstein

Cross-posted to LinkedIn.

Categories: Internet

Facebook: Single Point of Failure

Facebook isn’t a social media platform, it’s infrastructure. We’ve built monolithic platforms on a web designed for plurality and distribution. Now these platforms have become single points of failure. 

“Are you able to send messages through WhatsApp?”

My wife was calling me from upstairs. She’d been messaging with other parents at our son’s preschool about plans for a Trunk & Treat during Halloween when the service suddenly went offline.

The internet has a magical ability to let people around the world experience the same thing at the same time. Unfortunately the most noticeable of these experiences is when a major service goes down, as was the case on Monday, October 4, 2021. As if a switch had been turned off, Facebook, Instagram, and WhatsApp users all over the world were suddenly unable to access the services. All they got were apps stuck in update limbo and websites returning nothing.

Whenever there’s a problem with Facebook, arguably the most controversial and most heavily used platform on the web, a fair bit of schadenfreude floods other social networks. “Oh no, how are the anti-vaxxers going to do their research now?” quickly became a repeated refrain on TikTok. The #DeleteFacebook hashtag, already building up steam after an explosive 60 Minutes interview with whistleblower Frances Haugen about the social media platform’s relative inaction on harmful content and its effects on democracy, got an added fuel injection. Virtuous declarations of how long ago influencers had abandoned Facebook and how anyone still on it was “part of the problem” abounded. Meanwhile the same influencers were complaining about lost revenue due to Instagram being down. (Instagram, btw, is part of Facebook.)

Judging by the chatter on social media you’d think Facebook is a media platform mainly used to share boomer jokes, figure out what your highschool friends are doing 20 years after graduation, and spread misinformation. And in the best of all possible worlds, that’s what it would be (sans the misinformation). But this is not that world, and that’s not an accurate description of what Facebook and its kin are. In this very real world, Facebook and Instagram and WhatsApp operate as critical infrastructure for everything from interpersonal communication through online business to financial transactions and government services.

In many countries in Africa, Facebook, Instagram, and WhatsApp operate as essential infrastructure. In 2019, WhatsApp was responsible for nearly half of all internet use in Zimbabwe. South Africans can renew their car license and access other government services through WhatsApp. And when you go looking you find the same trend in countries and regions throughout Asia, Europe, South, Central, and North America, and Oceania. For millions of people around the world, the services Facebook provides are their primary tool for communicating with family and friends, consuming news and information, performing business transactions, interacting with local and federal government, even sending and receiving money. Caspar Hübinger writes more about this.

So when Facebook (and Insta, and WhatsApp) goes down, for 5 hours, without any meaningful information about what’s happening or when it will be back up again, it’s not the anti-vaxxers and boomers who are paying the price – it’s the millions of people whose lives and livelihoods depend on the platform and its kin.

Like I said: Facebook is infrastructure, and has become a single point of failure for the proper functioning of the web. So it’s more than a little bit ironic, in an Alanis Morissette way, that Facebook would go down due to a single point of failure in their own system: A configuration change to their DNS system.
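As a minimal illustration of why DNS makes such a brutal single point of failure, here is a sketch in Python using only the standard library. When a platform’s DNS goes dark, every app, login button, and business integration built on top of it fails at the very first step of every connection:

```python
import socket

def resolves(hostname: str) -> bool:
    """Return True if the hostname resolves to at least one IP address."""
    try:
        socket.getaddrinfo(hostname, None)
        return True
    except socket.gaierror:
        return False

# During the October 2021 outage, all of these stopped resolving at once,
# taking down every service and workflow built on top of them.
for domain in ("facebook.com", "instagram.com", "whatsapp.com"):
    print(f"{domain}: {'ok' if resolves(domain) else 'unreachable'}")
```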

The core premise of the web was to allow everyone to host their own files and services, and interconnect them through a common platform. This was specifically to get away from the problem of centralized services and single points of failure. Some 30+ years later, we’ve become dependent on monolithic and monopolistic platforms like Facebook who gobble up or destroy their competitors and try to be everything to everyone. We’re back to the same problem of single points of failure, only now those single points are global entities used by millions of people. And when these services go down, they cause immediate harm to their users.

And here’s the kicker: The success of Facebook is in no small part due to how we, the people who build the web, promoted and used and drove our families and friends and clients and communities to use Facebook. We invested ourselves in the idea of Facebook integration early on. We onboarded people to the platform. We built their communities and business pages and advertising integration. We replaced native comments with Facebook comments to generate more engagement on their company pages. We built giant community groups on Facebook. We added Facebook tracking pixels to our sites and streamlined our tools so our blog posts got automatically cross-posted to our Facebook pages. 

We helped make Facebook a single point of failure. And we are the only ones who can fix it.

So, the next time you feel compelled to shout #DeleteFacebook from the rooftops and declare yourself morally superior to the commoners who still languish on the platform you abandoned a decade ago for ethical reasons, remember that for millions of people we have yet to build viable alternatives.

The next time you think to yourself it’s only a matter of time before some government entity steps in and breaks up Facebook to reduce their power, remember that politicians trying to figure out how to keep terrorism, CSAM, and other harmful content off the web think Facebook is the web, and that things that are not Facebook – like your website – must be regulated as if they were Facebook. 

And the next time you set up a WordPress site, or a Gatsby site, or a Wix site, or any other site for your client, notice how easy it is to add Facebook integration to ensure your client gets to benefit from that sweet poison known as surveillance capitalism.

Facebook is infrastructure. Infrastructure change is a generational project. If we don’t provide viable low-friction user-centric alternatives to Facebook’s myriad of services soon, the web will become Facebook. That’s not hyperbole – it just hasn’t happened to you. Yet.

Header photo by Brian Yap.

Categories: Internet, My Opinion

The Internet is an Essential Service

“You can consult canada.ca/coronavirus to get the best updated information about the spread of the virus.”

– Justin Trudeau, April 3rd, 2020

A daily mantra rings out from government officials around the world: The call to visit official websites to get the latest information on the COVID-19 pandemic and to access essential services. Yet to many constituents accessing the internet is not an option because internet access is still considered a luxury available only to those who can afford it. This has to change.

Over the past two months, everything from education to work to ordering and delivery of essential goods to basic communication has moved online. COVID-19 has made one thing very clear: Internet access is a necessary condition for the ongoing functioning of our society, and every person should have access to fast, reliable, and unfiltered internet at a price they can afford.

The internet is an essential service. It is time we take political action to ensure every person has access to it.

The privilege of access

Late last year, a video surfaced on social media showing a 10-year-old boy doing his homework on a display tablet in an electronics shop. The description read “Humanity at its best…?? This child doesn’t have internet access at home, so a store in the shopping mall allows him to use their tablet to do homework.”  

For many, this was their first introduction to what’s been labeled the “Homework Gap,” a sub-segment of the Digital Divide. Millions of students around the world do not have access to the internet and are therefore not able to access the full educational resources made available to them.

Flash forward a few months to today, and that electronics shop is closed, as is the school, the library, the coffee shop, and any other place that 10-year-old boy relied on to access essential online services. And he is joined by millions of others, young and old, in cities and in rural areas, all over the world, at home, without the ability to access the websites their elected representatives so urgently point them to.

Inequity is the norm

Almost half the world’s population has no access to the internet. At all. Those most affected are, as seems to be the case for most things, women, the poor, and the disenfranchised. Why? Because internet access is considered a luxury, and its availability is contingent on large media companies deeming your particular region of the world worthy of investment, and your ability to pay often excessive fees to get access to it. Somehow, in our relentless pursuit of faster connections and devices giving access to vital (and entertaining) online services, we have glossed over this inequity.

COVID-19 took a steel brush to that veneer, forcing upon us the reality of how vital a fast, stable, and unfiltered internet connection has become to our lives. We have gone from tweeting about how nice that store was for letting that kid do his homework on a display device to realizing our home internet is our lifeline to information, income, connection, and entertainment. The internet has become essential to our lives, yet we treat it as a privilege afforded those fortunate enough to live where a connection can be established and wealthy enough to pay the excessive fees for access.

A public good

Late last year, a boy in an electronics store caught the attention of the internet and people started talking about providing proper equipment and connections to students. When COVID-19 hit and school children were sent home and told to attend classes online, school districts booted up ad-hoc solutions like parking digitally equipped buses at community sites to provide access for students. That’s a dollar-store band-aid on a gaping 20-year-old wound.

The digital divide causes hardship to millions of people by depriving them of essential access to the internet. COVID-19 did not create this problem – it merely made it impossible to ignore. Banks, government services, education, shopping, news and information – much of what we consider necessary conditions for functioning in modern society had already migrated online prior to COVID-19. Today the internet has become the only means of access to many of these services. It can no longer be considered a luxury, and its availability can no longer be contingent on the whims and profits of large media corporations. That’s why the World Wide Web Foundation is working to label the open web a public good, and that’s why you and I and everyone else need to demand political action to make the internet available to all.

Let’s not beat around the bush any longer:

The internet is an essential service. As such, any limitation of access to a person or group based on their physical location, income level, or any other reason is effectively an act of discrimination.

To the elected representatives of the world I say this: Declare the internet an essential service. Guarantee equitable access to fast, reliable, and unfiltered internet for all. Put plans in place today to connect the world in a way that promotes human flourishing over corporate profits.

To the media corporations who have grown fat and complacent on profits from connecting people to the things they need I say this: You’ve had your fair share and more. You succeeded in making the internet an essential service. Now you must act like it: Do your civic duty and share that wealth with the world by building solutions that put human connection above shareholder profits.

We have awoken into a new and unfamiliar world where we all feel a bit more vulnerable. It is in times like these we find solace in solidarity with other people and with ourselves. Let’s do this small thing together to better the world for everyone.

Header photo by dullhunk. CC BY 2.0

Cross-posted on LinkedIn and dev.to

Categories: Internet

Shallow Depth of Field = Performant Images

I made an interesting discovery today while listening to Lara Hogan’s talk on Designing for Performance at An Event Apart Austin 2015. Well, to say I made the discovery might be a bit of an overstatement. It was more like I made a deduction based on her data that turned out to be accurate. Regardless, it has significant bearing on the art of creating and publishing photos on the web:

Photos with a shallower depth of field (more bokeh) produce smaller files and are thus more performant.

After identical compression, the low bokeh (high background detail) photo has a file size of 52 KB. The high bokeh (low background detail) photo has a file size of 32 KB.

Hogan explained that the size of JPEG encoded images is determined by the number of complex edges contained within the image: The fewer complex edges, the smaller the image file size. Her example was two versions of an image of a person, one in which the background had been artificially blurred. The savings in terms of file size were significant. This idea originates from the article Reducing Image Sizes by Justin Avery.

This got me thinking: Would the same happen if you took a photo with a shallower depth of field? With my camera I took the two pictures above, one at f/22 (narrow aperture, deep depth of field, low bokeh) and one at f/1.4 (wide aperture, shallow depth of field, high bokeh). The lower the aperture value, the shallower the depth of field; and the more expensive the lens, the lower the available aperture value. The results are interesting.

As you can see from the image above, out of the camera the two image files were more or less identical in size. The match continued when I merely downsized the images in Photoshop. However, when I saved each image with the quality setting at 75% something notable happened: While the low bokeh image with lots of details had a file size of 52 KB, the image with high bokeh and blurred details had a file size of 32 KB.

That is a decrease in file size of 38.5%.

Upon further reflection, this is not surprising: There is far less edge data in an image with high bokeh, so the image file should be smaller. What’s interesting is that this difference only manifests itself after running the image through some form of compression. Out of the camera the two images were roughly the same size.
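If you want to reproduce the comparison yourself, here is a minimal sketch using the Pillow imaging library. The filenames are hypothetical stand-ins for the two test photos; the quality setting matches the 75% used in my test.

```python
import os
from PIL import Image  # pip install Pillow

# Hypothetical filenames: two otherwise identical photos, one shot at
# f/22 (detailed background) and one at f/1.4 (blurred background).
for name in ("low_bokeh_f22.jpg", "high_bokeh_f14.jpg"):
    with Image.open(name) as img:
        out = name.replace(".jpg", "_q75.jpg")
        img.save(out, "JPEG", quality=75)  # same 75% quality setting as above
    print(f"{out}: {os.path.getsize(out) / 1024:.0f} KB")
```

Because the blurred background gives the JPEG encoder far fewer complex edges to preserve, the second file should come out markedly smaller.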

What is the practical application of this? Simple: When possible, take photos with a faster lens (wider aperture). The more “background blur” or bokeh or the shallower depth of field you get, the more performant the image will be.

How do you make this happen? Stop taking pictures with your phone and buy a real camera. The reason images from the 1970s look so much better than the images of today is that back in the 1970s cameras were sold with prime lenses with wide apertures – anywhere from f/2.8 to f/1.8. Today even expensive DSLRs are sold with stock lenses that don’t go below f/3.4. This produces less bokeh and more detail.

Or better yet, hire a professional to take your photos and ask them to blur the background when possible. Your website performance will thank you.

Categories: Internet, My Opinion

Glass, Filter Bubbles, & Lifestreams – connecting the dots

Has Google ever guessed what you were going to search for long before you finished typing it out, even before you gave it enough information to really be able to make that guess? It’s uncanny at first, but it quickly becomes something you not only expect but appreciate. Because that’s what we want our digital tools and technologies to be: Instruments that guess what we want and give it to us as soon as, or even better before, we ask for it. But are these tools giving us the information we are looking for, or are they providing us with the answers they think we want, even if that information is not actually what we should be receiving? And just as importantly: How do these technologies know what we are looking for and what kind of answers we prefer? And who controls, interprets, and protects that information and that process?

Glass

As I write this, Google is in the process of rolling out a new type of technology that has the potential of changing our lives, our interactions, and our society in a fundamental way. That technology, obscurely named “Glass”, is designed to add a digital layer to our everyday lives, removing the abstraction of the screen by superimposing web-based services and capabilities onto the real world we see in front of us. Glass is a computer in the shape of glasses, providing a heads-up display akin to what you see in video games but designed for everyday life. The stuff of dreams made reality.

The tech world is not surprisingly raving about this new leap in technological advancement. Wearable computers have long been the Holy Grail for tech enthusiasts, and the potential inherent in this technology has long been a favored topic among science fiction writers and technologists alike. Used for good, the technology Glass represents could be of tremendous value and benefit to us all. I can think of thousands of situations where Glass could be useful, essential, even required. And that is undoubtedly the intention of its creators: To make life easier, better, more enjoyable.

But whatever its intention, this technology could easily end up augmenting our reality and our lives in a very real way that makes Orwell’s dystopian predictions of Big Brother a rosy fairytale. And the alarming part is we wouldn’t notice it was happening, because it already is.

The Map of You and Me

Take a step back and think about how you use the web today. No longer just an information hub, the web has become the medium on which we conduct a large percentage of our communication. In the past you probably used Google mainly for search, but today you likely use it for your email, chat, social networking, video consumption, and more. And Google is but one of many vendors for search and web services. Facebook, Microsoft, Twitter, Pinterest – all of these services have been adopted into our everyday lives under the auspice of making our lives simpler and more informed.

But what happens behind the scenes? How is it that these services are so good at guessing what we want and serving our social, informational, and entertainment needs? It’s because every time we use one of these services, that service in turn gathers, stores, and interprets information about us and our behaviours. And the more information is gathered and analyzed, the better the algorithms get and the better the services get at predicting our behaviour. Every email you send, every Tweet or Facebook update, every Foursquare check-in, every watched YouTube video, comment on Google+, or simple text search in a search engine becomes part of a personality profile. And every future action on these services is impacted by this profile.

If this was happening in the real world we would be alarmed. When Target started profiling its customers and was able to predict a customer’s pregnancy before her family knew, it sparked outrage. But our online services have been doing this for years and have eased us into it so that rather than questioning what is going on, we not only accept but expect it. We have implicitly allowed large data mining corporations to start the biggest mapping of human behaviour ever undertaken, and done so without asking questions about why they are doing it and what this information is and will be used for.

Bubbles

On the face of it all this may seem to be OK. If a personality profile means the services you use online can predict what you are doing and simplify your life accordingly, what’s wrong with that? The problem is that the main purpose of these services is not to help you but to keep you using the service and be influenced by it and things like advertising in the process. So instead of providing you with the information you are looking for, they provide you with the information they think you will like the most and therefore return next time you want information. When you make a search on a search engine or open Facebook, you are not presented with an accurate picture of the online world. Instead what you get is a carefully crafted image skewed to match your biases and preferences, whether they be social, religious, ethnic, or political. A conservative Christian white male will be presented with vastly different search results from those of a liberal atheist Asian female when entering queries regarding politics, religion, or ethnicity. And the search results they get will usually be ones that provide positive reinforcement to their views and ideals. This phenomenon has been called the Filter Bubble, and it is something we as a society need to take a long hard look at.

In a nutshell, Filter Bubbles are web-based worldwide echo chambers that isolate ideas and protect their inhabitants from opposing or dissenting views. As a result, when a person with extreme ideas goes to the web, he will find endless support for his ideas, even if those ideas are groundless, misinformed, and largely discarded by society as a whole. In a worst-case scenario this informational bias can lead to a person becoming radicalized and a danger to society.

In the last few years we have seen several instances where the filter bubble is likely to have played a part: In the USA a large portion of the populace believes in one of many unfounded and debunked conspiracy theories about President Barack Obama – that he is a Muslim, that he is not a US citizen, that he is a terrorist, and so on. In Norway an ultra-nationalist right-wing terrorist killed 77 people in an attempt to quash a political party he was convinced was trying to convert the country to Islam. And in the wake of the Newtown massacre that saw 26 killed, so-called “Truthers” used the web to promote a conspiracy theory that the attack was a hoax perpetrated by the government to bring forth stricter gun control laws.

The common thread that binds these and other such instances together is that the ideas are perpetuated on the web and spread among like-minded people. And once they are caught in a filter bubble, they only find information that reinforces and strengthens these ideas. Google and other service providers claim they are taking steps to prevent this type of extremist bubble effect, but the principle of the filter bubble lies at the core of their services and will more likely get further entrenched than dismantled.

Your Lifestream, controlled

Looking into his crystal ball, technologist David Gelernter is now predicting we are moving towards a future in which predictive search and input are coupled with real-time streaming of information, producing a personalized information stream presented to us at all times. Considering the current bias in online information delivery, and the ever-escalating data mining of our everyday lives, this is an alarming proposition at the best of times. When you add Google Glass and the inevitable Apple variety of the same product, it becomes a nightmare Orwell would have thought too unrealistic to write, even as fiction.

Consider a world in which a significant percentage of the population wore Glass or an equivalent product. They would be wired to the web and its services 24/7/365 and would send and receive a constant stream of information. At the other end all that information would get stored, parsed, analyzed, and used to guide the users through their lives. There are tremendous security and privacy issues here, many of which are addressed in Mark Hurst’s The Google Glass future no one is talking about, but to me the more alarming aspect is the potential this technology gives to large corporations, clever marketers, and even governments to influence and control our behaviour.

If you take a look at your life today you can see how much influence search and social sharing have on your decisions and your opinions. And these influences are already heavily curated to move you towards certain products, attitudes, and behaviours. For now this is based on your interactions with computers, tablets, and smartphones. Now imagine what happens when you start wearing a device that provides this same type of information to you at all times. No longer abstracted to an external screen but added to your regular field of vision. And while you are consuming the carefully curated and controlled information fed to you, the device is recording your every move, every interaction, and every word spoken.

Brave New World of Glass

On a server somewhere there is a file with your name on it with more information about you than you have on yourself. The server can predict your every move with impressive accuracy, it knows where you are, where you are going, and who you are interacting with. And at every turn in your life it will use this information to try to influence your decisions and your actions. This is not science fiction nor the future. This is happening today, right now, as you are reading this and considering who to share it with. Tomorrow it will be right in front of your eyes changing your reality. Big Brother could be so lucky…

Categories
Internet My Opinion

Powerful. Beautiful. Meaningful.

We are graduating members of the class of We Made It

Sometimes amazing things bubble up from the murky depths of the web to provide us with perspective. Whether you are or were a victim of bullying, stood by while others were bullied, or were a bully yourself, grant yourself the time to watch this piece of internet art.

For more info go to tothisdayproject.com

Categories
Internet My Opinion News

What the Instagram advertising model could look like

As a follow-up to my previous piece on the hyperbole surrounding the Instagram Terms of Service, I thought I’d put forward a suggestion for how an advertising model for Instagram might theoretically work. This is purely speculative, designed to work within the TOS as published on Monday, and intended to a) make money for Instagram, b) use your name, likeness, and photos for advertising purposes for a third party, and c) be of value to you as the photographer even without you receiving compensation for the use of your photos.

In other words, if I were in charge of the Instagram Advertising Scheme, this is how it would work.

Consider the following hypothetical:

Julie, a 21-year-old Instagram user in Oslo, goes to a local cafe called Kaffekakao to hang out with friends over a warm cup of cocoa on a particularly snowy December evening. She takes a somewhat artistic photo of her cocoa, applies a filter, and posts it to Instagram with the remark “Cocoa with friends at Kaffekakao”.

Meanwhile, the marketing department at Kaffekakao wants to get their name on the map for potential tourists visiting the city. Locals know this is the place to be if you want a good cup of cocoa, but tourists may not be aware. They approach Instagram asking if the service has any advertising opportunities.

Instagram picks Julie’s photo of the cocoa as a great candidate and proposes to use it for a flash promotion for Kaffekakao, targeted at English-speaking people in and around Oslo.

The cafe says yes and the promotion kicks in.

Hours later, Instagram users in Oslo start seeing Julie’s picture in their feeds. There is no blatant advertising or call to action – just the picture along with Julie’s comment, with the name of the cafe hyperlinked. Because it’s a good photo with a nice comment, and Julie is a well-trusted trendsetter in the community, people feel inclined to go get a cup of cocoa at Kaffekakao. If the cool kids hang out there, so should we!

From this, several things happen: Instagram gets paid for the promotion, Kaffekakao gets some much-deserved exposure for their excellent cocoa, and Julie gets a lot of new followers. Everybody wins.
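Purely as an illustration of the flow above – every name and field here is hypothetical, not a real Instagram API – the selection and targeting steps might look something like this:

```python
from dataclasses import dataclass

@dataclass
class Photo:
    owner: str        # e.g. "Julie"
    venue: str        # e.g. "Kaffekakao", taken from the caption or geotag
    caption: str
    engagement: float # likes and comments relative to the owner's follower count

def pick_promotion_candidate(photos, venue):
    """Pick the best-performing venue-tagged photo as the promotion
    creative. The photo runs unchanged -- caption, owner attribution,
    and all -- with only the venue name hyperlinked."""
    tagged = [p for p in photos if p.venue == venue]
    return max(tagged, key=lambda p: p.engagement) if tagged else None

def target_audience(users, city, language):
    """Flash-promotion targeting: users in the venue's city whose
    profile language matches the campaign, e.g. English speakers in Oslo."""
    return [u for u in users if u["city"] == city and u["language"] == language]
```

The key design point is that nothing is sold or altered: the photographer’s post is simply shown, as-is and attributed, to a paid-for audience.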

Like I said, this is pure speculation on my part, but as you can see it is not hard to come up with an advertising model for Instagram that doesn’t involve ripping you off and throwing you to the wolves.

Instagram should totally pay me for this.

Categories
Internet My Opinion

The Hyperbole and the Damage Done

There are many lessons to be learned from the Instagram TOS (Terms of Service) debacle that has been playing out on social media over the last two days. Chief among them is this:

The social web is not a good source of legal interpretation and factual information.

For all the greatness of the social web, it has some very big flaws, one of which is that we are still wearing our newspaper goggles: we treat information from seemingly reliable sources as if it is in fact reliable. This is a historical artefact from a time when news and information came to us from large news and publishing conglomerates with tight editorial guidelines and requirements for fact checking and source research. That is no longer the reality we live in. Most of the information you’ll find on the web is the exact opposite: poorly researched, often incorrect, and largely based on non-expert opinion and wild speculation.

Such is the case of the interpretation of the new Instagram TOS. And the damage might be irreparable.

If you are not familiar with what happened, here is the gist of it:

The Hyperbole, a.k.a. “Instagram wants to steal your photos!!!!!!!!”

On Monday, December 17, Instagram released its new Terms of Service agreement. Among the changes was a new sentence:

“To help us deliver interesting paid or sponsored content or promotions, you agree that a business or other entity may pay us to display your username, likeness, photos (along with any associated metadata), and/or actions you take, in connection with paid or sponsored content or promotions, without any compensation to you.”

This was widely interpreted as “Instagram reserves the right to take your photos, sell them for large piles of cash to a company you disapprove of, and have that company use them along with your name on billboard posters, thereby robbing you of your copyright and earning money on your creativity.”

Completely ridiculous. And incorrect.

The social web responded with hundreds of articles on how to bail from Instagram, what other services you can use instead to post photos of your feet and food and friends, and how to delete your Instagram account forever so it can never exploit you. And judging from the reaction on the web, many people followed that advice.

The Reality, a.k.a. You Don’t Understand Legalese

Of course this interpretation was total rubbish. But it was also great fodder for the social web. Every gadget/tech/web blog wrote extensive articles on it, and everyone and their mother voiced their outrage over this vile injustice on Twitter, Facebook, Google+, and yes, on Instagram.

Then the people at The Verge took a step back and said “Hm. This doesn’t really make any sense. Why would Instagram commit social suicide like this? Maybe someone got something wrong.” (I’m assuming that’s the type of conversation that takes place at The Verge. I could be wrong.) They read the TOS again and found, not surprisingly, that the hyperbole was just that: hyperbole. The reality was wildly different. For that take, read the excellent article aptly titled “No, Instagram can’t sell your photos: what the new terms of service really mean”. This was soon followed by “Instagram says ‘it’s not our intention to sell your photos’”, which referred to this statement directly from Instagram.

For those of us who voiced caution about the hyperbole this comes as a vindication. For the many who instantly jumped on the bandwagon and deleted their Instagram accounts, it is a sobering wake-up call. And for Instagram and all other online services with murky revenue models it is a rude awakening: faced with complicated legalese, people trust anyone with a cool logo to be a legal expert and act on information obtained from said cool-logo-owning entity without checking the facts.

Be like a philosopher to avoid looking like an idiot …

One of the things you learn when you study philosophy is that before you make any judgement or take any action, you should take a few steps back and look at the bigger picture. That means questioning whether your understanding is the correct one, or even whether you are equipped to understand what you are seeing. It means questioning the sources of your information. And most importantly, it means stepping into the other party’s shoes and looking at the situation from their perspective. Few actions are committed without forethought, and before you pass any final judgement or act on any apparent fact, it is vital that you understand the reasoning behind what you see.

In the Instagram case the widespread interpretation of the TOS – the one claiming Instagram would steal your photos and sell them to the highest bidder – only makes sense from the perspective of a paranoid person who thinks everyone is out to get him. From a rational, cool-headed vantage point a few steps back, there is obviously more to the story. But that isn’t what brings readers to blogs and clicks to ads, so the hyperbole wins every time.

… or be like both Mulder and Scully

(Pardon the ridiculous and dated pop culture reference here. I’m watching The X-Files on Netflix.) When it comes to information you read on the web, you need to be like both Fox Mulder and Dana Scully: like Mulder you should trust no one, and like Scully you should assume there is always a logical explanation. That way you might avoid deleting your accounts only to realize you did it for no good reason and now you can’t get them back.

For an alternate take on the story, check out fellow Vancouverite Rob Cottingham’s piece “Terms of service changes deserve more than just a shrug and a click.”

Categories
Internet My Opinion

A case for hosting your photos in the cloud (Flickr, Picasa, etc.)

Pictures on the web, much like grown children, live better lives away from home. As a bonus, they don’t eat all your food or use up your hot water. And if they get sick, they won’t infect everyone else. But most importantly, when you move house or to a different country, when your house is foreclosed on or burns down, or when you pass away, they continue their existence and keep interacting with others.

Pictures on the web should be autonomous units that can act and be acted on in their own right independently of what you do.

Though this may sound scary, it is a good principle upon which to base your publishing of images on the web.
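As a concrete illustration of the principle, here is a minimal Python sketch of letting the cloud host describe how to embed a photo rather than copying the file to your own server. Flickr publishes an oEmbed endpoint for this purpose (shown here as I understand it – verify against the current API docs before relying on it):

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# Flickr's oEmbed endpoint as documented at the time of writing.
OEMBED_ENDPOINT = "https://www.flickr.com/services/oembed/"

def embed_snippet(photo_page_url):
    """Ask the cloud host how to embed a photo instead of copying
    the file. An oEmbed 'photo' response must include url, width,
    and height, which is all we need for an <img> tag."""
    query = urlencode({"url": photo_page_url, "format": "json"})
    with urlopen(f"{OEMBED_ENDPOINT}?{query}") as response:
        data = json.load(response)
    return (f'<img src="{data["url"]}" width="{data["width"]}" '
            f'height="{data["height"]}" alt="{data.get("title", "")}">')
```

If the host shuts down the link breaks, of course – no principle saves you from that – but as long as the photo lives at its one canonical home, every page that references it stays current.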