tl;dr: The dangers of facial recognition far outweigh its benefits. If we don’t severely limit or outright ban the development of this technology today, we run the risk of eroding privacy to the point it ceases to exist.

On Saturday, I got an email from Facebook asking if I could verify whether a bunch of pictures it had uncovered were indeed of me. Among them was a series taken during 17th of May celebrations on Nesodden, Norway, in 1997, in which I can be seen in the crowd of a youth orchestra playing the big drum. The picture is blurry, and I’m wearing a weird hat over my long hair. Even so, Facebook’s facial recognition algorithm had correctly identified me.

In April, a woman posted a video on TikTok explaining how Google Photos had inadvertently sent an adult-themed video of her to her mother. The video had been taken in the kitchen with the fridge in clear view. On the fridge was a picture of the woman’s child. She had set Google Photos up to automatically share photos of her child with her mother. Thus Google used facial recognition to identify the child in the photo on the fridge and send the video to the woman’s mother. (I’m not going to link the story here because it appears the original TikTok has been set to private, but a simple search will surface it for you if you’re interested.)

If you need to apply for a loan or a mortgage in the near future, chances are some of the companies you approach may use facial recognition to check your identity and protect themselves from fraud. In China, facial recognition systems are already in use in the finance industry to verify customer identities or “examine their expressions for clues about their truthfulness.”

Governments are eyeing facial recognition for everything from immigration screenings to access to public services.

Meanwhile, errors in facial recognition are leading to people, predominantly those who are racialized or otherwise marginalized, being denied loans and services, and even being arrested and jailed.

Facial Recognition Considered Harmful

If we know one thing about facial recognition, it is this: The technology is flawed, inaccurate, and often downright racist. Technologists will counter that, over time, the technology and the algorithms underlying it will improve to the point of being virtually infallible. I don’t disagree; the pursuit of all technology is to endlessly converge on perfection, and thanks to machine learning and AI supported by ever-present and ever more advanced cameras, the future of “perfect” facial recognition is a foregone conclusion.

Here’s the thing though: The question isn’t whether facial recognition technology will be able to deliver on its promise; it’s whether the use of the technology will change our society in ways that are harmful. I firmly believe the answer to that question is yes. Facial recognition is already harmful, and those harms will only get worse.

Yesterday two EU privacy watchdogs called for the ban of facial recognition in public places. Just a few days earlier, the UK Information Commissioner said she is “deeply concerned” live facial recognition may be used “inappropriately, excessively or even recklessly”. The people who look carefully at the implications of this technology tend to converge on the same conclusion: This stuff is too dangerous, and needs to be aggressively limited.

Supporters of facial recognition will immediately respond with the many useful applications of the technology: It makes it easier to log into your phone! You can use it to open your front door! Imagine not having to carry a clunky ID card around! It can help fight crime, fraud, abuse, and terrorism! If you’ve done nothing wrong, you have nothing to fear from facial recognition!

Deontologists, and Edward Snowden, disagree. From his book “Permanent Record”:

“Because a citizenry’s freedoms are interdependent, to surrender your own privacy is really to surrender everyone’s.”

“…saying that you don’t need or want privacy because you have nothing to hide is to assume that no-one should have or could have to hide anything.”

While on the surface, facial recognition appears to be a tool of convenience, in reality it is a tool of surveillance, manipulation, and oppression.

The value of facial recognition lies in how it automates wholesale, omnipresent surveillance for commercial exploitation, law enforcement, and political oppression.

In the 2002 movie “Minority Report,” there’s a scene where the protagonist walks through a mall and is targeted by personalized advertising. In the movie, this targeting is done using retinal scans. Today, nearly 20 years later, that exact same targeting already exists, thanks to facial recognition.

If you’ve gone to a mall and looked at one of those enormous digital displays showing mall information and ads, chances are your face and facial expressions have been scanned, logged, and probably used to target you, all without your consent. In 2020, a mall real estate company in Canada was found to have collected over 5 million images of shoppers via these kiosks. In 2017, a pizza restaurant in Oslo, Norway, was found to be using facial recognition to target gendered ads to patrons looking at a digital menu: sausage pizza for men, salad for women.

Can does not imply ought

Facial recognition is a prime example of a constant struggle within science and technology: Does the fact that we can do something mean we ought to do it? From a purely technological perspective, the answer will always be “yes,” because that’s how we evolve our technology. From an ethical perspective, the answer is more nuanced. Rather than judge the merit of a technology solely on its advancement, we look at what the technology does to us: whether it promotes human flourishing, and whether it causes harm to people, communities, and society.

The technology for cloning humans has been around for decades, yet we don’t clone humans. Why? Because the further development of human cloning technology has severe and irreparable negative consequences for the human race. We can do it, but we don’t, because we know better.

This is the determination we need to make, today, about facial recognition technology: We can do it, but does this technology promote human flourishing, and do its benefits outweigh its harms?

I’ve spent years grappling with this question and talking to people in the industry about it. After much deliberation, my conclusion is crystal clear: This technology is too dangerous for further development. We need a global ban on deployment and further development of facial recognition technologies, and we need it now. Failure to act will result in the destruction of privacy and immeasurable harms to individuals, groups, and society as a whole.

Think of it this way: Right now you can buy a drone with a high definition camera, buy access to one of the many facial recognition platforms available on the web, fly that drone to a public place, find and identify any person within that space, and have the drone track that person wherever they choose to go. That’s not science fiction. That’s current reality.

Oh, and once you find out who the person is, you can also stalk them on social media, find out where they work, who their friends are, what they like to eat, where they like to hang out, and so on. All of that is harmful to privacy, but the truly dangerous part here is the facial recognition: it gives anyone the capability to identify anyone else from a single photo or a crappy video clip, and from there to find all the other information. As long as facial recognition exists, we cannot control who can identify us.

And if you think you can opt out, the answer is no. Facial recognition companies have already scraped the internet for any and all photos of you, and your face has been catalogued. John Oliver did a great bit on this last year. And yes, it will make you want to throw your phone away and go live in a cave in the forest.

Technology is not inevitable

“But Morten, these technologies already exist. The cat’s out of the bag so to speak.”

True. Which is why a global ban on the deployment, use, and further development of this technology is something we have to do right now. We cannot afford to wait.

Here’s the bottom line: There is no such thing as inevitable technology. We, as a society, can choose not to develop technologies. We can determine a technology to be too harmful and stop developing it. We can assist those already heavily invested in these technologies in pursuing other, less harmful ones, and we can impose penalties on those who use or develop the technology in spite of its ban. It won’t be perfect, but it is absolutely possible.

Facial recognition terrifies me, and I’m a white man living a middle-class life in Canada. The harms of facial recognition are far more severe for women, people of color, people who fall outside binary gender and sexuality norms; the list goes on, indefinitely. Any day now we’ll be confronted with a news story of some oppressive regime somewhere in the world using facial recognition to identify and jail LGBTQIA2S+ people. Governments are investigating what is effectively pre-crime: using facial recognition technology along with what amounts to AI phrenology to determine the criminality of a person just by looking at their face.

I could go on, but you get the point: We are trading our privacy and the security of our fellow people for the convenience of logging onto our phones by just looking at them. That’s not a trade I’m comfortable with, and I hope you agree.

On the proverbial slippery slope, we are rapidly nearing the bottom, and once we’re there it will be very difficult to get ourselves back up. As the man on the TV says, avoid disappointment and future regret: act now! Your privacy and our collective future depends on it!


Originally posted on LinkedIn.

By Morten Rand-Hendriksen

Morten Rand-Hendriksen is a Senior Staff Instructor at LinkedIn Learning, specializing in AI, bleeding-edge web technologies, and the intersection between technology and humanity. He also occasionally teaches at Emily Carr University of Art and Design. He is a popular conference and workshop speaker on all things tech ethics, AI, web technologies, and open source.