January 20 - January 26 | Analog Privilege
It was what I can only describe as a fantastically depressing week, with events worldwide kicking off as expected but simultaneously going further than anyone imagined.
I wanted to keep this blog apolitical, concentrating mostly on tech, with a little cultural commentary thrown in and a dose of analysis to try to understand how technology has affected, is affecting, and will affect us in the region. That can no longer be the case, because I cannot stand by and let powerful (and incredibly vicious) white men in Silicon Valley take everything away to make themselves and their cronies even richer than obscenely rich. I get the idea that you’d like to make a lot of money; I would too. But there is a point where your wealth is too much. It’s disgusting. It’s a cancer. And just like cancerous cells, it grows and grows at the expense of healthy cells, eventually killing the host. If you need me to spell out who is who in that analogy, I invite you to consider where you place yourself today.
/rant over
I promised to write about an article I recently read, and I think now is a good time to do so. The article is called Analog Privilege, written by Maroussia Lévesque. It piqued my interest because it speaks to a long-standing secret in tech, something that has been reported on, but not enough and without much conviction. The purveyors of the tech pushed on us either don’t use it, don’t let their children use it, or have special “override” privileges that shield them from the actual damage their wares do to the everyday person.
If it is good enough for Meta to surveil us 24 hours a day, even when we have explicitly requested it not to, and via applications and products that are not and should not be related but that still spy on us through backroom deals, passing that data to Mark Zuckerberg, then the minimum we should require in return is absolute and complete access to his life. And when a fraction of that intrusion was turned back on them (the various bots following these people’s private jets), the bots were shut down as a “violation of privacy”. Really? In reality, we should be the holders of our private information, and their requests to use it should be ephemeral, restrained in scope, revocable and erasable at any instant.
My text here combines the topics discussed in the article with my own feelings about them. You’ll note that where I use the noun from the article’s title, I keep the original American spelling (“analog”), and use the British spelling (“analogue”) everywhere else.
This is called analog privilege.
The article starts with a very real-world example of how predictive analytics used by child welfare services in the United States in practice stigmatises the poor whilst simultaneously sparing the rich from that same stigma. In short, the rich get to pay for real doctors and genuine patient care, whereas the poor, and soon you too, will be thrust into a semi-automated hellscape of healthcare that is “care” in name only.
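To make the mechanism concrete, here is a minimal sketch (not any real agency's model; every feature name and weight is my own invention) of how a risk score built on administrative data ends up scoring poverty rather than risk: the inputs are things that are only visible when you rely on public services.

```python
# Hypothetical sketch: a toy "referral risk" score whose inputs are
# proxies for poverty, so poorer families score higher regardless of
# actual risk. Feature names and weights are illustrative assumptions.

def referral_risk_score(record: dict) -> float:
    """Toy weighted sum over features a system like this might ingest."""
    weights = {
        "prior_contacts_with_services": 0.4,  # using public services at all raises the score
        "medicaid_enrolled": 0.3,             # public insurance is visible; private care is not
        "address_changes_last_year": 0.2,
        "single_parent": 0.1,
    }
    return sum(weights[k] * float(record.get(k, 0)) for k in weights)

# A family using public healthcare is visible to the system...
poor_family = {"prior_contacts_with_services": 3, "medicaid_enrolled": 1}
# ...while a wealthier family's private doctor visits never enter the data at all.
rich_family = {"prior_contacts_with_services": 0, "medicaid_enrolled": 0}

print(referral_risk_score(poor_family))  # 1.5
print(referral_risk_score(rich_family))  # 0.0
```

The point of the sketch: the rich family is not scored as low-risk, it is simply invisible to the system, which is analog privilege in miniature.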
Future Hospital visit: www.youtube.com/watch
When you list out the range and scope of human interactions required in everyday life, the potential for abuse in replacing the humans driving those interactions is enormous. But what is often lost in that replacement is that human interaction is analogue, and consequently messy by definition.
We, myself included, discuss the “digital divide” and how the underprivileged are being excluded from society, and how that is generally a bad thing. And that is true. Typically. But this understanding is changing because more and more technology is detrimental rather than beneficial. She illustrates this using the table below:
|  | Voluntary | Involuntary |
|---|---|---|
| Beneficial system | Opt-in: inclusion | Exclusion: digital divide |
| Detrimental system | Opt-out: analog privilege | Lock-in: vulnerability |
The “analogue haves” have the power to decline participation in systems that use AI, for example. Her article focuses mainly on voluntary opt-out of detrimental systems, which she calls “analog privilege”.
Paying particular attention to the use of AI in systems, the article highlights how, in real-world use, LLMs carry “significant downsides” for the people subjected to them: error-prone outputs (hallucinations) and probabilistic guesstimates (stochastic parrots) that incorrectly assume the future mirrors the past.
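The "stochastic parrot" point can be shown in miniature with a bigram model, the simplest possible language model (the toy corpus below is my own, purely illustrative): the model's "prediction" is nothing more than the majority vote of its training data, so by construction the future can only ever mirror the past.

```python
# Minimal sketch of the "stochastic parrot" point: a bigram model can
# only re-emit continuations it has already seen, weighted by frequency.
from collections import Counter, defaultdict

# Illustrative toy corpus (an assumption, not from the article).
corpus = "the market rose . the market rose . the market fell .".split()

# Count which word followed which in the training data.
bigrams: defaultdict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def most_likely_next(word: str) -> str:
    """Predict the single most frequent historical continuation."""
    return bigrams[word].most_common(1)[0][0]

# The model's "forecast" is just the majority of the past:
print(most_likely_next("market"))  # rose  (2 of the 3 past occurrences)
```

An LLM is vastly larger, but the epistemic move is the same: past frequency stands in for future truth, which is exactly what the trading-advert disclaimer warns against.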
What do all trading adverts say? “Past performance is no guarantee of future results.” Why is this obvious caveat reserved for those systems and not for AI?
Chapter I discusses beneficial and detrimental AI systems, noting how automation is about streamlining management and breaking work down into ordered, manageable items that can subsequently be automated. It places AI on a scale from beneficial, where we are freed from repetitive, mindless tasks, to detrimental, where enforcement is supercharged, with adverse consequences for the “enforced”.
Real-world examples are widely known nowadays, like the Amazon delivery drivers forced to keep empty bottles in their vans so they can urinate, because the algorithm doesn’t factor in human needs in a cycle of ever-faster deliveries to hit the targets it sets. This is an example of a beneficial AI morphing into a detrimental one, and the article shows how automation can easily and quickly degenerate with disastrous consequences.
Noted in the article is that much of the literature and discussion has focused on potential AI harms while missing who AI spares. The debate has centred on inclusion, fairness and guarantees of transparency, but has failed to treat the divide between the analogue haves and the analogue have-nots as an important issue.
One example discussed is LegalTech, where a two-speed system is observed: one analogue track for the rich, and automated approximations of legal representation for the poor. Similarly, social media studies have shown the trade-offs of automated content moderation, where being flagged, struck off and even banned for life is reserved for the analogue have-nots, with no recourse, precisely because the analogue route (talking to someone) is tightly cut off for the likes of you and me.
With the prevalence and accelerating deployment of these systems, the actual harms being caused are not the world-ending consequences plied by the AI grifters; they are the everyday acts of violence committed by an unfeeling algorithm that suffers no consequences for poor decision-making and possesses absolutely no compassion or empathy for the human subject to that system.
A thread that comes up repeatedly in any serious analysis of these systems has been coined technosolutionism: Big Tech’s true sin is its arrogance and its belief that an algorithm can solve anything. As I discussed earlier, people are messy and illogical, and relying solely on algorithms is a recipe for disaster for those who find themselves in the “Involuntary lock-in: vulnerability” quadrant of the table above. Tech used to work, and it used to solve problems very well, but only for a small set of very specific and well-defined ones. As tech has become ubiquitous, it is showing its limitations, but the tech leaders still believe that the next x.0 release of their wares will fix them. What those releases generally do is resolve old issues and create a whole host of new ones.
The paper also highlights the obvious conclusions of digital colonialism, where analogue haves are de facto at the top of the pile, benefiting from the exploitation of those underneath. Akin to gated communities, the elite extract themselves from society, refusing to participate meaningfully and contributing nothing to the local community.
The paper then gets into details about how and where LLMs fall short, with a particularly telling conversation with a chatbot:
“Prompt: Get your sofa onto the roof of your house, without using a pulley, a ladder, a crane …
Human response: I will build a large wooden ramp … on the side of my house with platforms every 5 feet
GPT-3 response: Cut the bottom of the sofa so that it would fit through the window … break the windows to make room for the sofa.”
Yeah, sure.
And if you think I’m being a little melodramatic, take facial-expression technology. It is based entirely on the already-discredited pseudoscience of physiognomy (I’ll spare you its racist and sexist origins), but that isn’t stopping companies from working hard to integrate it into products ranging from criminal investigations to dynamic supermarket pricing based on the face of the potential buyer, a disgraceful and immoral thing to do. And please keep in mind that these systems are known for making stuff up, so it could be you who falls victim, and you wouldn’t even know it! (See the reading section below.)
Maroussia Lévesque has a far more eloquent discussion of this topic, and I would definitely suggest you read it. Although her case study focuses mainly on LegalTech, it isn’t overly technical in that domain and is very readable.
You can find it here.
It’s probably time I concluded this…
Despite this being a largely critical look at tech in general and AI specifically, I do acknowledge that there are many potential benefits, but I would say that it is incumbent upon us to ensure that tech is used for good and for the benefit of everyone. That requires remaining vigilant to the grift, the abuses, and the apparent dog-whistling of products and services that will be used against us as soon as they possibly can.
One area that seems to be taking hold is the use of federated systems, which wouldn’t necessarily solve the problems above but would contribute to better governance. I’ve discussed federated services previously, and I hope you start building out systems along those lines for beneficial use. The most widely used federated system in existence today is email. The millions of email servers on the internet all talk to each other, allowing efficient communication from user to user without requiring that user to be beholden to a specific company. Google tried to break email and centralise it for its own selfish use and future extraction. Thankfully, it didn’t work. The more such services that are built, the more chance we have of saving tech and the internet from their current destructive path.
Reading
“Zuckerberg Poisons the World – Selling Digital Opioid”
I don’t think the future of social media is bright in its current form. I think that bears out in Meta’s investments in, and panicked pivots towards, the “Metaverse” (how’s that going?) and now AI. Looking for the next grift is an endless task.
Automation in Retail Is Even Worse Than You Thought
“New technology is not just making shopping more challenging for workers and consumers – it’s poised to rip off the most vulnerable.”
Have a lovely week, and thanks for reading.