📅 December 02 - December 08 | The Perilous Potential of Tech

There are times when I start writing these articles, and I suddenly think to myself, oh shit. I think I have bitten off more than I can chew. This is one of those moments, and against my better judgment, I have decided to try to get through it. Bear with me.

I have so many thoughts floating around that empty space balanced above my neck, and so many fleeting side conversations, some in-depth, others that come to me during the day. They exist only in that moment and are forever lost if not captured. So I have been trying to write down as many of these thoughts and ideas as I can, in any way I can, even if they turn into nothing. I'm sure I could be more diligent, but I have to work with what I've got, and that ain't much.

Feel free to skip this one. If you do read it through, I'd be interested in your thoughts. You know how to contact me.


I have a whiteboard in my home office on the wall behind my monitor. The monitor is fairly large and its height is adjusted for comfort, so it slightly obscures the bottom of the whiteboard, but enough remains visible for me to keep a couple of thoughts in view throughout my day, or whenever I come into and out of the room (the board is close to the door).

Why am I telling you this?

Well, I have been meaning to think more deeply about, and eventually write about, a couple of things I wrote on that board that are still there. I'm not there yet, and I think it will take a lot more time to flesh them out fully, but I thought it would be useful to start that process despite its unfinished state.

Let's start with what is written on the board. Despite my fear of embarrassment, I'll write it exactly as it is:

Dehumanising tech

Takes away the barrier that humans have to temper their reactions

Violent reactions against tech will increase because it harms no one

To progress, I think the next step is to break down those three phrases, explain what I mean, and perhaps define some terms.

Dehumanising Tech

What do I mean by this title or statement?

In essence, I'm thinking about how we are being divided into data points, automated scripts, and other digital assets that can be used, abused, monetised, and ultimately dehumanised.


Those digital assets are typically controlled by a tiny fraction of the world's population, who give you a few crumbs to make you believe you have control over that digital expression of your self when, in fact, you don't. You are no longer the owner of you.

And what about Artificial Intelligence, the topic of the moment? Is AI going to change the world (for good)? I've long argued against using the term AI to talk about LLMs, so I won't do that here. But I thought it would be interesting to discuss one story about the beginnings of AI.

Around the advent of powerful computers, several scientists set out to replicate the human brain using complex calculating machines, believing those machines would one day become the brain's equal (once the technology got powerful enough). This failed spectacularly then, and it is still failing to this day.

To parody these hopeless attempts at building computers that think like humans, a computer scientist named Joseph Weizenbaum built what he claimed was a computer psychotherapist. The patient would "talk" to the machine by typing in their problems. The machine was named ELIZA, and it mimicked the approach of a real practising psychotherapist called Carl Rogers, who simply repeated back to the patient what they had just said, in what became known as the Rogerian approach:

The Rogerian method of argument involves each side restating the other’s position to the satisfaction of the other, among other principles.

Source: https://en.wikipedia.org/wiki/Carl_Rogers

This created an illusion of intelligence that fooled most people who interacted with the machine at the time. That reaction was not what Weizenbaum had bargained for, and he went on to write about it in greater detail after this somewhat serendipitous experiment concluded.
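To make the trick concrete, here is a minimal sketch of the kind of reflection ELIZA relied on. To be clear, this is my own illustrative Python, not Weizenbaum's original program (which was written in MAD-SLIP and used far more elaborate pattern matching); the word lists and templates are invented for the example:

```python
import random

# Pronoun swaps that turn the patient's statement back on them.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "yours": "mine", "myself": "yourself",
}

# A few canned, therapist-sounding templates; {0} is the reflected statement.
TEMPLATES = [
    "Why do you say that {0}?",
    "How does it feel that {0}?",
    "Tell me more about why {0}.",
]

def reflect(statement: str) -> str:
    """Swap first- and second-person words so the statement points back at the speaker."""
    words = statement.lower().rstrip(".!?").split()
    return " ".join(REFLECTIONS.get(word, word) for word in words)

def respond(statement: str) -> str:
    """Wrap the reflected statement in a randomly chosen template."""
    return random.choice(TEMPLATES).format(reflect(statement))

print(respond("I am unhappy with my job"))
# One possible output: "Why do you say that you are unhappy with your job?"
```

That is essentially the whole trick: no understanding, no model of the patient, just string substitution. And yet it was enough to fool people.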

What he didn't realise is that, in performing this trick, he'd created a building block others would use to dehumanise computer-human interaction and cultivate a state of illusion that the user ultimately preferred (compared to interactions with other humans). Those building blocks went on to be exploited on a massive scale through rage-baiting, ad tech, nudge theory, genocide, and the potential to destabilise nation-states. Breaking the human down into a series of numbers, bits and bytes, and database entries rendered the product ripe for control, subjugation, and manipulation.

Your sleep-tracking data is just part of this. The data is useless to you, even though you think it isn't: unless you are well-educated in sleep science, objectively speaking, you can do nothing useful with it. Or, to put it differently, with or without that data, you can only make best guesses about what may or may not be affecting your sleep. The same goes for exercise, diet-tracking, and any number of other areas where you can deconstruct yourself into two-dimensional data. But on a mass scale, that data can be extremely valuable to others, with no regard for your interests.

Removing Barriers

If there is one human story that remains consistent, it is that we believe we are superior to other species. Our cars, computers, and democracies "prove" that humans have evolved beyond other species and that we are, therefore, at the top of the pecking order.


This belief allows us to tell ourselves stories that justify any action we take toward another species. It trickles down into our understanding of difference, specifically race. We tell ourselves stories of how one race is superior to another, not because we believe humans are all equal, but quite the opposite. We believe that some are more equal than others, and the way to achieve this is to strip away pieces of the others' humanity, making them not quite as human as us. When we remove fundamental human rights from others, we are systematically stating that they don't deserve the same rights because they are not as human as we are. Often with catastrophic consequences:

The Belgians were influenced by anthropometry, then a fashionable science, and obsessed with the classification and differentiation of 'races'. They decided that the Tutsis' facial traits showed they were of Hamitic or Nilotic origin, and were descended from a cattle-herding people who had come to central Africa in search of pasture and imposed themselves on the local Hutus (Bantu farmers) and Twa (a pygmy people who were the original occupants of the land).

Source: https://mondediplo.com/2021/06/11rwanda

This dehumanisation justifies actions taken against the "other" because, without it, we would essentially be doing those things to ourselves. This is the removal of barriers: the dehumanisation supplies the justification.

Technology's reduction of the human into constituent parts, data points in a database, does the same. It eventually lowers the barriers to the point of justifying any and all actions we take towards the other on the opposite side of the monitor. For many, the perceived anonymity behind the screen removes enough of that barrier on its own.

On the Internet, nobody knows you're a dog (1993, The New Yorker).

And, as I have noted above, this mass simultaneous hallucination creates siloed realities that bear little resemblance to the real world and open up possibilities for misuse and abuse.

Violence and Technology

As tech dehumanises us, the inevitable consequence is a reaction against it. We're seeing this; its seeds have been growing slowly for decades.

Source: Office Space (Film)

The Luddites recognised this, too.1 TL;DR: they were not against technology; they were against technology dehumanising them, reducing them to irrelevant cogs in the machinery of society, and thus devaluing them.

Last week, we saw a stark and brutal consequence of dehumanising tech with the cold-blooded execution of the UnitedHealthcare CEO in NYC. More importantly, from a societal point of view, we saw the Internet's reaction to it, with thousands and thousands of people actively celebrating the murder. Hundreds of jokes and memes pointed to the same thing: the dehumanisation of his insured justified the dehumanisation of him, and ultimately justified the retaliation. What they said is (paraphrasing), "If it is OK for the CEO to hide behind a screen using data to decide the fate of another human being, justifying it through the use of Machine Learning and Bayesian predictions, then it is OK for that life to be ended, because, like all of us, he is no longer human. We are all just binary representations of humans."

This is a perilous path to pursue and is likely just the start of a cycle of digital violence becoming real-world violence.


No reading list this time

I've been rambling enough already, so I've moved a list of articles I wanted to recommend to next week.


I feel very vulnerable thinking out loud like this in the open, but I wanted to get something down that I can work from. Don't shout at your screen if you feel I've got something wrong. Contact me. Let's chat. I am, after all, human. Have a great week.


  1. Read Blood in the Machine by Brian Merchant: www.hachettebookgroup.com/titles/br…

Matthew Cowen @matthewcowen