Matthew Cowen
  • šŸ“… January 06 - January 12 | Are bigger models better for AI?

    Good day everyone, and may I wish you a happy 2025.

    I’m back with the first proper article of 2025 after a well-deserved break from several activities. I kept working over the holiday period, but I took a few days off to be with family who had returned to the island for a short stay. I was very happy to take that time off.

You’ll have noticed that I snuck in a quick note during that period. I had the idea to lay down some thoughts I’ve been having about the state of the (tech) community in the West Indies. It was kindly re-published on the platform I was talking about (CIVIC; I’ve since discussed it with the forum admin, who revealed the name in his post, so I’m fine with naming it here now. I thank Yacine Khelladi for that). It didn’t generate as much discussion as I was hoping, which sadly proves my point! One company did reach out, and I’ll be responding this week.

    Anyway, this one is about a paper I read in November —yeah, I live an exciting life— about an aspect of the AI hype cycle that is often misunderstood or forgotten. And, as a bonus, invokes Betteridge’s law.

AI hype has done at least three things aside from putting the marketing term in the general public’s lexicon. One, it has hijacked discussions on more serious matters by installing, like a brain worm, in the minds of decision-makers the conviction that AI will (should?) fix all their problems. ā€œGot healthcare problems in your country? AI’ll fix it. Just give us more money.ā€ Two, it has drowned out debate about whether or not more compute is going to solve the very real problems current generative AI has. The third, and most important for me, is that it has wholly obfuscated sensible discussion of what AI actually is. Almost anything that calculates is AI now, according to some. This is stupidly untrue, and the differences between generative AI (GenAI), Machine Learning (ML), deep learning (DL), and many other types of AI have been completely lost.

We in the Caribbean have been largely shielded from this locally, affected only through our consumption of products and services from the US and other countries. However, that will change massively in the Caribbean throughout 2025. Everything is going to be AI.

I’m here to ask you to watch out for this intellectual sleight of hand and to question what is actually meant by ā€œAIā€ when a company is trying to push its wares on you. When your local electricity company says you need to replace your meter with one that has AI, what does that actually mean? In that instance, AI has simply replaced the previous marketing buzzword of the moment in the instrumentation field: Smart. Smart Meters, Smart Controls, etc. It was never ā€œsmartā€ and won’t be ā€œAIā€ either.


    The paper reviewed here is from Gaƫl Varoquaux, Alexandra Sasha Luccioni, and Meredith Whittaker. It is called Hype, Sustainability, and the Price of the Bigger-is-Better Paradigm in AI. You can find a pdf here: Hype, Sustainability, and the Price of the Bigger-is-Better Paradigm in AI.

It discusses the attention and growing investment in Large Language Models (LLMs) and the narrative that bigger is better, that is, that more compute and more data are the key to better AI. The paper challenges this view by asking a couple of simple questions: What is this assumption based on? And what are its collateral consequences?

The paper looks back at one of the most influential studies on AI scale and performance, the AlexNet paper,1 which concluded:

ā€œAll of our experiments suggest that our results can be improved simply by waiting for faster GPUs and bigger datasets to become available.ā€

This set the tone for later studies that, mindful of this result, reached similar conclusions, until it hardened into an assumption, much as Moore’s Law anchored itself in discussions of processor performance from the moment it was first articulated. Like Moore’s Law, the reality is much more nuanced and complex. For example, modern processors are really many processors on a silicon die, more akin to a ā€œsystem on a chipā€ than a single processor. It could be argued that individual processors hit a limit quite some time ago and that a workaround, and a redefinition of what a ā€œprocessorā€ is, was needed to keep the dream alive.

So it is with LLMs. They follow a law of diminishing returns rather than a ā€œdoubling of transistors (and thus performance) every 18 monthsā€. After a certain point, saturation is reached on many of the tasks performed by LLMs and other AI models. Several studies have shown this, as described in the paper.
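One stylised way to picture this, borrowed from the general scaling-law literature rather than from the paper itself, is a power law that decays towards an irreducible floor, so each doubling of model size N buys a smaller improvement than the last:

```latex
% Stylised picture of diminishing returns (an illustration, not the
% paper's own formula): loss falls as a power law in model size N,
% towards an irreducible floor L_inf.
\[
  \mathrm{Loss}(N) \;\approx\; L_{\infty} + \frac{A}{N^{\alpha}}, \qquad \alpha > 0
\]
% The marginal gain from growing the model shrinks towards zero:
\[
  \frac{\mathrm{d}\,\mathrm{Loss}}{\mathrm{d}N} \;=\; -\,\frac{\alpha A}{N^{\alpha + 1}} \;\longrightarrow\; 0 \quad \text{as } N \to \infty
\]
```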

    This is precisely why we’re starting to see models that can be run on personal computers, with one of the most influential companies in the AI hardware space, Nvidia, announcing its own AI workstations, marketed to individuals and institutions that have hit a wall with cloud-based AI solutions that are both costly and often less secure.

Not only has performance plateaued in many circumstances, but the demands on resources have increased exponentially for little gain. Yet it appears that many applications do not need scale to be efficient and useful. In a medical imaging test on ā€œorgan segmentationā€ (an ML application), performance plateaued once models reached around 1 GB in size, despite the medical images themselves often being much larger, and performance tanked as the models got bigger and bigger. Other applications seem to corroborate this, too. In computer vision, performance rises quickly and then falls away as models pass an ā€œoptimalā€ size. This has also been shown to be the case for LLMs: one test showed LLM performance starting to decline at around 100 GB in size.

Seemingly, a conclusion is forming among a number of researchers that smaller, more focused models are better, for accuracy and performance reasons, but also because of another factor that I hinted at in the introduction.

Some assume that feeding the machine with more will automatically render its results ā€œbetterā€. We have seen in a couple of examples that that assumption is simply not true in many circumstances. But that assumption also ignores another simple constraint: resources are not infinite.

Compute is constrained by physics and by the capacity to manufacture reliably and sustainably. Energy is constrained by multiple factors, such as production, delivery, and cost, to name a few. Code itself is constrained by developers’ capability and productivity, and by the very real issue of time. Have a read of The Mythical Man-Month, the source of what is otherwise known as Brooks’s Law.

    Its central theme is that adding manpower to a software project that is behind schedule delays it even longer.

Another aspect often ignored or glossed over is the cost associated with the alleged performance gains. The compute required to create and deploy AI models grows faster than the cost of compute falls. So, as these models become accessible to ever more of the population, there is wishful thinking that efficiency improvements will solve the problem. However, the paper points out an economic effect called the Jevons Paradox.

It is a well-known phenomenon in economics that when the efficiency of a general-use technology increases, the falling costs lead to an increase in demand, resulting in an overall increase in resource usage.
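To make the rebound effect concrete, here is a toy calculation with entirely made-up numbers; the per-inference energy figure and the demand response are assumptions for illustration, not figures from the paper:

```python
# Toy illustration of the Jevons Paradox with made-up numbers:
# an efficiency gain halves the energy per AI inference, but the
# resulting price drop triples daily demand, so total usage rises.

energy_per_inference_before = 2.0        # Wh per inference (assumed)
inferences_per_day_before = 1_000_000    # daily demand (assumed)

energy_per_inference_after = energy_per_inference_before / 2   # 2x more efficient
inferences_per_day_after = inferences_per_day_before * 3       # demand rebound (assumed)

total_before = energy_per_inference_before * inferences_per_day_before
total_after = energy_per_inference_after * inferences_per_day_after

print(f"Before: {total_before / 1000:,.0f} kWh per day")   # 2,000 kWh per day
print(f"After:  {total_after / 1000:,.0f} kWh per day")    # 3,000 kWh per day
```

Halve the cost per use, more than double the uses, and the total bill still goes up.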

The paper discusses the environmental effects of CO2 emissions, concluding that the cost of a single AI inference is growing faster than compute is improving. Given that companies are scrambling to add ā€œAIā€ to everything, something that was painfully on display at this year’s CES in Las Vegas last week, this could increase the carbon footprint of AI use by an order of magnitude and thus contribute even further to the collapse of the climate. To understand this, it is important to understand that inference is AI’s biggest compute cost centre. Google ā€œattributes 60% of its AI-related energy use to inferenceā€. Other studies have shown that, with a few million daily users on OpenAI, the energy use of inference outweighed that of training within a few weeks. It is no wonder most big tech firms have gone quiet on their sustainability targets, with companies like Microsoft announcing that they would ā€œmissā€ them. ā€œMissā€ is an understatement!

The paper also discusses quality, which I’ll let you read for yourselves; it, too, is pretty much universally ignored by the marketing materials and by the salesmen hyping up the ā€œbenefitsā€ of AI, who have their own interests at heart, not ours. Suffice it to say that bigger models produce more errors, at the cost of compute, energy, CO2, etc.

Lastly, the paper discusses another angle often lost in the discussion about AI. Scale, i.e., bigger-is-better, is really a means to build a moat around these businesses. GPT-type models are, for the most part, pretty simple, and there are enough examples and open-source projects that it is straightforward to build one yourself, provided you possess the right technical skills. In other words, you cannot patent them and protect them using IP laws. So, how do you make it harder for anyone else to enter the game? Have you seen the cost of setting up and running a data centre? That should tell you all you need to know. To give you an idea, Nvidia’s H100 GPU costs about $40,000, and you need a lot of them: Meta is estimated to have spent $18 billion on GPUs in 2024 alone, which at that price is on the order of 450,000 GPUs. This essentially eliminates all but the biggest of budgets, thus protecting the established players, not to mention the circular investment deals like Microsoft and OpenAI.

The broader issue is that scientific investigation and innovation can then only be done with the blessing of these companies, giving them unilateral control to approve, deny, or stop research if a project doesn’t align with their views. Given the recent far-right shift in the politics of Meta, I’m sure this will end well. Nvidia’s aforementioned AI workstation is a tiny step in a direction away from this, but it is not enough.

    As Caribbean residents, we are perpetually exposed to the ravages of the climate. I seriously question whether these tools are useful enough to warrant their ubiquitous usage in the region. I’ll leave that up to you and your conscience to determine.

Go and read the paper; it is just shy of 10 pages long, and the language is not so technical that non-techs can’t understand it.


    Reading

    Here is a quick summary of articles I’ve read recently. These are not endorsements of their content. I sometimes vehemently disagree with their premise but feel it is important to read as much of a variety of views as possible.

    Mark Zuckerberg’s commitment to free speech is as deep as Exxon’s commitment to clean energy

    Nuff said.

    IGF 2024 in Riyadh: AI, WSIS+20 and the Global South

    A roundup from CircleID on the IGF in Riyadh.

    ā€œThe Caribbean is a microcosm of Big Tech’s digital colonialism. Small and medium-sized emerging countries are profitable to exploitā€

    Consulting firm Strand Consult discusses what we all know already. Where are they from again?


    Thanks for reading. Please share with anyone you think might like to read. Have a great week.


1. ↩︎

    → 8:35 PM, Jan 13
  • šŸ—žļø A little therapy and an announcement

    Good morning all, from a sunny Martinique this morning.

    I’ve been going over some old writing as a sort of therapeutic exercise or as an exercise in self-harm. I haven’t worked out which it is yet!

    Anyway, I came across something I wrote two years ago, almost exactly to the day. It is a half-baked manifesto for building community in the Caribbean around tech and society.

    My participation in a fairly moribund forum on tech in the Caribbean motivated it. I will not name it, as it serves no purpose other than to highlight what actually happens when a forum like this exists. The admin does a stellar job and tries to keep it going, regularly posting links and articles about tech in the Caribbean. I and several other stakeholders met to discuss how it could continue, flourish, and develop. The meeting was sparsely attended, and unfortunately, no follow-up ensued.

It frustrated me because I saw a Caribbean with a lot of potential and abundant talent. I see it because I’m embedded in it. Anyone not in the sector or region, however, cannot and doesn’t, which contributes to the false impressions of the region and stifles the potential before it gets off the ground (internally and externally). Many institutions and businesses in the Caribbean don’t put enough faith in their fellow companies, consultants and innovators. How, then, do we expect companies outside the Caribbean to put faith in Caribbean people?

This was a sort of ā€œget some ideas down on paperā€ exercise rather than a thought-out manifesto, and it addressed a specific case. I wanted to post it to the forum in question, but didn’t in the end. Perhaps I should have!

    But I still stand by what I wrote.

    As always, I’m open to discussion/criticism/correction.


    (Reproduced here, as is. Un-edited)

    Good day to everyone.

    From the outset, my apologies in advance for the brusque language. I don’t wish to offend.

    I’m a reasonably recent lurker on this forum. Some of you may have heard of me, some not. I’ll spare you the CV —please connect on LinkedIn. I’ll be only too glad— but I hope I can contribute to the discussion.

    Firstly, as noted above, I am as guilty as anyone. I don’t contribute enough.

    We have to address the elephant in the room. This forum is essentially a roundup of news and a few discussions that peter out fairly quickly. Why?

    Most discussion is offline, direct, WhatsApp, and P2P —therefore hidden and lost to the community. We must foster, encourage and change how we all communicate on topics related to tech and the ecosystem in and around the region.

    An in-person event is all well and good (and I think it should be done). But it’s useless if there’s no follow-up, no follow on, no ongoing discussion, idea-sharing, etc. However, don’t forget that getting the meatware (humans) around the Caribbean is a big and expensive problem. I recently spoke to someone who had come to the FWI from Jamaica - Two days there. Two days back. šŸ¤¦ā€ā™‚ļø

    Regarding: online discussions, if I look at the online groups I participate in, discords, slacks, and forums, there is healthy traffic and real (deep) debate on topics. If anyone asks a question, there are several responses. If anyone doesn’t understand something, many people are willing to help out. Plus, the other ā€œoff-topicā€ discussions about food, games, etc. etc. … pretty much a real community.

    We all seem so frightened of losing business by talking out in the open and trying to work together on projects. I’ve seen it so many times firsthand. I’ve tried to get people working together, only to fail because of a lack of trust. So as a result, we don’t work together.

    I’m not saying it never happens, but there needs to be more of it.

    I would like to see a Caribbean working hand-in-hand with healthy ā€œcoopetitionā€. Take a look at the economic model in Prato, Italy. The Prato model decentralises and specialises the processes (recycling) involved across many small companies, each expert in a specific domain. We have too many ā€œgeneralistsā€ and not enough specialists working together. We have too many individual consultants (me included). We have too many MSMEs.

95% of businesses in LATAM are MSMEs (accounting for 65% of all employment with a GDP contribution of … 25%. JFC, this has to change).

    Just look at the way RFPs go. To outside companies or consulting firms with a PO Box in Trinidad. FFS! We can’t objectively compete as it stands.

    We have to build it. No one will do it for us.

    It would be awfully remiss of me to post this without humbly offering some suggestions:

    • A better discussion platform. The current one is no longer fit for purpose, I believe. There are many options - Discord, Slack, Teams, Mastodon Instance, etc. I’ll make no judgement.
    • More discussion will slowly build trust for a wider population. Perhaps even creating genuine partnerships/agreements to work together on larger and larger projects.
    • A social media manager that can promote and animate topics and discussions to the broader region and diaspora. (See my last point too).
    • Building on the above, a podcast? A newsletter? What can we do to build interest and contribution?
    • An annual (in-person?) event - targeted to the members’ needs (survey-based feedback would help clarify those) with a virtual attendance component for those of us in transport purgatory.
    • A monthly/quarterly virtual meeting designed to keep/build links - on specific topics of the moment.
    • Incentives - what brought you here? What do you get out of it? What can it bring for you and your business? People will only contribute if they get something out of it. That is not the same thing for each person/business. Some it’ll be recognition, others access to trade.
    • We shouldn’t exclude other sectors. We need them in the way they need us. Tech is no longer ā€œcomputers and ICTā€. It touches EVERYTHING. Attracting those that are not from our background is key to us having a better understanding of the opportunities. Let’s not copy the Tech Bros of Silicon Valley. Their arrogance and belief that code can solve the world’s ills is bullshit and always has been. We’re better than that and have more skin in the game.

    Feel free to critique. With pleasure. I’m open to discussion.

    I don’t profess to know everything or have all the answers.

    I wanted to share my feelings, impressions, and thoughts. Nothing more, nothing less.

    To the community, I’ll make myself available to help out where and how I can.

    Have a great day, and I look forward to building alongside you all.

    Happy New Year, 2023


    Updates on my online life

    I regularly participate in discussions on two well-known podcasts, the ICT-Pulse Podcast (English) and Innovation, AgilitƩ & Excellence (French).

    I do a piss-poor job self-promoting this work, so I thought I would do something to rectify that this year and also promote their work and thank them for the effort they put in regularly to bring interesting topics to us all.

To start, here are the most recent episodes I featured on:

    ICTP 330: ONLINE ADVERTISING, LEARNING AND COLLABORATION AND DIGICEL’S LATEST DIGITAL TRANSFORMATION REPORT

ƉPISODE 187: PROMESSES ET RƉALITƉS DE L’IA AVEC MATTHEW COWEN (Episode 187: Promises and Realities of AI with Matthew Cowen)

    NB: If you don’t speak French, there’s a really simple trick to get the text in English. Use a Whisper model, like MacWhisper, to output a transcript. Then, use your favourite translator (even LLMs will do a reasonable job), and you’ll have the discussion.
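If you prefer to script it, here is a minimal sketch using the open-source openai-whisper Python package rather than MacWhisper; the file name episode.mp3 is just a placeholder for wherever you saved the audio, and Whisper’s built-in translate task gives you a rough English version directly:

```python
# Minimal sketch: transcribe a French podcast episode and get an English
# version using the open-source openai-whisper package
# (pip install openai-whisper). "episode.mp3" is a placeholder filename.
import whisper

model = whisper.load_model("base")

# French transcript
transcript = model.transcribe("episode.mp3", language="fr")
print(transcript["text"])

# Whisper can also translate the speech straight into English
translation = model.transcribe("episode.mp3", task="translate")
print(translation["text"])
```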


    Announcing the Caribbean Digital Compass

    This year will see the launch of a project I’ve been trying to get off the ground for a while. I’ll be working closely with Michele Marius (ICT Pulse); we have already started putting things in place. It will start slowly but surely and evolve based on your feedback.

    It’s called the Caribbean Digital Compass. It will specialise in regularly discussing two to three topics (released as and when written, i.e., no specific timetable). It will give you the background to understand the topic, provide context for the Caribbean, and provide opinion and analysis where relevant.

    We plan to write frequently but concisely, keeping the word count reasonable. We also want to record some podcasts that discuss the thinking and research behind the articles, but I suspect that will come later.

    Anyway, I hope you come along with us for this journey, and you can sign up here at caribbeandigitalcompass.com/subscribe/

    The site is currently a bit rudimentary but will evolve over the coming months. For now, all articles will be published there, and subscribers will receive an email as soon as an article is published.

Just one more heads-up. We intend to make this a paid product in the future, but we’ll see how it goes; I wanted to set your expectations. If we do go paid, there will still be a free component, so you can continue to support us. The cost to subscribe will be kept small, as it is intended to be affordable for as much of the business population of the Caribbean as possible, who, as we know, have very different earnings from island to island!


    Happy New Year, 2025

    → 3:53 PM, Jan 6
  • šŸ“… December 16 - December 22 | A short(-ish) review and thoughts for next year

If you know me well, you know that this is one of my least favourite times of the year. It’s complicated, so I’ll spare you the details. But it does seem to be a tradition for blogs and newsletters to do a retrospective of the soon-to-be-over year, possibly something inherited from the traditional media industry. And whilst I’m not really a fan of this tradition —you can call me a miserable git for detesting the Spotify-like ā€œwrappedā€ emails that have been invading and polluting my inbox and general use of apps. Please stop!— I thought it might be useful for me to look back over some of the stuff I’ve been doing here this year. You don’t have to read this; please just click Mark as Read if it’s not your thing. I won’t judge.

    I might take a break next week, so if you don’t see a new email next week, you’ll know why.

    In the meantime, have a good holiday.


I started the new year with a new site (this one), with the intention of bringing all my writing and online presence into one place. The Substack ā€œNazi Barā€ problem precipitated the move: I needed to get off the bigger platforms and onto something more independent and open, with values rooted in the open web.

I landed on micro.blog, a small company providing a simplified and highly customisable platform for blogging. It includes newsletter capabilities (albeit rudimentary) and embraces federated technologies so authors can publish to other platforms. It has its own community and provides a social-like experience on a smaller and more manageable scale than standard social media (the Facebooks and so on).

    Since then, I have posted exactly 60 articles, averaging one article a week in newsletter form and a couple of other posts about specific topics. I have been able to keep up a stable rhythm of writing and posting. I generally write starting on Sunday evening and finishing the article on Monday. Editing and proofreading are done on Monday evening so that I can post in the evening or early morning. I have been pretty consistent all year, although lately, I’ve been posting on Tuesday morning.

    This post will be the 61st of the year, pushing the total number of words to over 50,000. Depending on who you ask, that is about the length of a book. It is on the lower end of the word count but book-length nonetheless.

In July, I started working with a small IT company on a part-time basis, and I was worried that it would make it difficult for me to keep writing these articles. In the end, it worked out rather well, as it gave me a structure conducive to reading, researching, and writing a lot. I’m grateful to them for my employment. I’ve been continuing small contracts with other businesses, too, so the variety has been interesting and has helped me keep my feet on the ground regarding what small businesses are actually living through, rather than the lofty ideals the media and tech companies often project. I can tell you those companies have literally no idea what is happening or what to do, and are just happy to keep the money flowing through subscriptions, up-selling and cross-selling. The day a big firm really understands SMEs and does something for them, they’ll clean up.

    This has been, loosely, the theme of this year’s writing: the disconnect and the disconnection of tech and society. A new word entered the dictionary, enshittification, to describe how businesses like Meta, Google, and plenty of others I have written about here have all turned a corner and are now actively making our online lives as hostile an environment as possible to maximise their profits.

    I won’t re-litigate that here again; you can read a few of the posts I wrote here:

    https://matthewcowen.org/archive/

    I think the most significant topic I have concentrated on this year is Internet Governance.

I stood in the ARIN Advisory Council election, and despite not getting elected, I didn’t disgrace myself. I look forward to being on the next ballot and continuing my involvement with ARIN. I hope we can do something with the French West Indies this year; I’d had initial talks, but time ran out. I’ll keep you posted.

    I will continue to concentrate on this topic, and I’ll be looking to see if there are any employment opportunities in this space for someone like me who has many years of experience with the Internet and associated technologies and likes researching and writing about it. Please reach out if you know of anything.

I wrote a little about LLMs and had to admit defeat on the terminology. To re-iterate, LLMs (Large Language Models) such as ChatGPT are not Artificial Intelligence; statistical next-word predictors are all they are. Impressive sometimes, yes, but absolutely useless most of the time. Unfortunately, that’s the conclusion many are coming to now, as the rubber has hit the road. I’ll remind you of my experience in the Microsoft Copilot training, where the trainer used Copilot with a prompt that didn’t produce what was expected and then took a bunch of time to coax it into working, arguably spending more time than just doing the damn thing manually!

This year, I’ve tried to take a more pragmatic approach to discussing tech, Digital Transformation and the topics that seem to attract more smoke-and-mirrors discussion than real-world, practical analysis. I don’t think I’ve done an amazing job at that yet, but I think I’m definitely better at it than I was, and probably better than a number of prominent tech journalists. To clarify, I’m not suggesting I’m a better journalist; I’m suggesting that I’m perhaps a little better at critical thinking about tech.

    I hope to continue writing like this. I enjoy it, even if it is painful and cringe-inducing at times. I hope you’ve enjoyed reading it.

    My ambitions for 2025 are to be more structured in my approach to articles. Next year, I plan to explore a topic in more detail, so maybe I should write a series of articles. Let me have a few glasses of Rhum over the holidays, and I’ll come back refreshed and ready to continue.

    Thank you for reading, and I’ll be back in the new year.


    Reading

    My reading has only increased this year. I suspect next year will be no different. I have a ton on the backlog to read, and more is coming out daily.

    Artificial Intelligence Isn’t Actually That Amazing - The reality can’t match the hype.

    I think the title says it all. I’ll leave you to discover the short article. Just bear in mind that this is from an economist.

    Dark Patterns in Cookie Banners: CNIL issues formal notice to website publishers

You may or may not like the cookie consent forms (which, by the way, exist because of the ad tech industry’s perverse interpretations of the requirements of EU regulation). Still, they try to move some control to you, the reader, rather than the ad tech industry. Well, that industry is unhappy, not content with the already vast troves of minutiae it holds about you and your life. They’re also sneaky bastards, trying to trick you into clicking where you don’t want to.

    The ā€œSovereign Democratic Infrastructureā€ Hyperscalers Trick. Why We Shouldn’t Fall for It, and What We Should Do Instead

    This is a very considered discussion of technology, sovereignty, data centres, and control. I’m not sure I agree with every point made, but I’m glad I read it.

    European Union Member States Speak Up for Encryption

    Noticeably absent… France, Spain, Belgium, Italy and others that should know better.


    Enjoy your break if you have one, and have a great week.

    → 8:08 PM, Dec 23
  • šŸ“… December 09 - December 15 | Conscientious Objector

    It’s a reasonably short one today. I’m progressing on what I promised to write up but haven’t been able to finish as yet. Soon come.


    As I write this (Monday afternoon), the first official day of the Internet Governance Forum (IGF) has wrapped up. I say this as I am neither attending in-person nor following live virtually. This might seem a little odd for someone who is building more connections and work related to the field of Internet Governance. It is very easy to explain why. The place where the forum is being held this year is a significant factor in my conscientious objection to attendance. The forum is currently running in Riyadh, Saudi Arabia.

As someone who supports and defends human rights (admittedly at my small level of impact), I cannot condone or approve of a forum of this stature and importance being held in a country that has time and time again shown its disdain for the fundamental human rights of people, and has gone as far as the extrajudicial murder of a high-profile dissident and journalist.

So what am I saying? Should any regime that violates human rights (and, by extension, many digital rights) be completely shunned, ignored and not negotiated with? Not at all. I’m all for working and collaborating with regimes where the common good of the world and its people is at the centre of that work. What I do think is that we should not fawn over the opulent surroundings and preferential treatment we receive compared to the country’s own citizens, and that we should demand human rights concessions before any discussion of hosting such an important conference.

    After a cursory glance at the conference schedule, I found this event:

[Image: a session listing from the IGF schedule. Source: IGF website]

    At first, I thought it was a joke. It is not.

    I will take my time to look at many sessions once the conference is over, and I can pick and choose the rhythm to digest some of the information.

I was not the only one to register and not attend; hopefully, the intended message was received. Many groups of people cannot travel to Riyadh because of the danger it would put them in as a result of their sexual orientation, religion, or political or journalistic positions. This is undoubtedly not the way to hold what is ostensibly an ā€œopenā€ forum that ā€œwelcomes allā€.

AccessNow went further and held a round-table discussion entitled ā€œInternet Governance Forum (IGF) 2024: when digital rights collide with digital oppressionā€. I attended and very much enjoyed the conversation. You can watch the recording at this YouTube link; I politely ask you to watch it if you have the time.

    What Saudi Arabia is doing fits into my previous discussion about Internet fragmentation. States are starting to try to overstep their current self-imposed limits on Internet meddling.

    I said in that article:

    I think we’re heading into a critical phase of governance and the overall battle for control of the Internet.

    …

    The Internet has too big of an influence on populations for governments and ultra-rich private firms with diminishing scruples to ignore. The ā€œSplinternetā€ is already here, albeit limited in application. The worry is that states that elect authoritarians and increasingly extreme right-wing governments will default to control of the Internet, just as they controlled media in the past.

    Saudi Arabia has complete control of the media and is extending that control to the Internet, imposing its oppressive will on its citizens. And whilst I think that governments should get involved in governing the Internet in their countries, some lines shouldn’t be crossed. Unfortunately, these lines are being redrawn and re-redrawn in many countries, including those that traditionally wouldn’t do such a thing (see the UK, USA, and many others).

    We can talk all we like about ā€œthe Internet we want,ā€ but it will not materialise unless we fight for it.

As I have previously discussed, federated systems, for social networking, document storage and sharing, and other organisational tools, are part of the solution. See ActivityPub and other decentralised protocols.

As I wrote that article, I realised that I should give a little explanation of what a federated system is. And yes, it is not that easy to explain to a layperson, but we all use at least one service that runs in a federated manner. It is robust, decentralised and open for anyone to use. It is email. The basic email protocol, SMTP, is an example of a federated system that allows all parties worldwide to communicate quickly using agreed standards, whilst retaining local control over some aspects of delivery. It is relatively simple to spin up a mail host and participate in mail delivery and reception (not trivial, but certainly possible). I’m simplifying a little, but you get the principle.
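To make the federation point concrete, here is a deliberately simplified sketch of what happens when one mail host delivers to another; the domain and addresses are placeholders, and in practice most networks block direct delivery on port 25, so treat it as an illustration of the protocol rather than a working mail client:

```python
# Simplified sketch of email federation: any domain can publish an MX
# record and receive mail from any other domain, with no central operator.
# Addresses and domains below are placeholders.
import smtplib
import dns.resolver  # pip install dnspython

recipient = "someone@example.org"
domain = recipient.split("@")[1]

# 1. Ask the global DNS which server accepts mail for that domain.
mx_records = sorted(dns.resolver.resolve(domain, "MX"),
                    key=lambda r: r.preference)
mail_host = str(mx_records[0].exchange).rstrip(".")

# 2. Hand the message over using the agreed standard: SMTP.
message = "Subject: Hello\r\n\r\nFederation in action."
with smtplib.SMTP(mail_host, 25, timeout=10) as server:
    server.sendmail("me@my-own-domain.example", recipient, message)
```

No central authority is asked for permission at any step: DNS tells you where to deliver, and SMTP defines how. That is the essence of a federated system.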

    That’s all I’ll say on the topic for the moment.


    Reading

    Last week, I promised some reading links:

    What people in the global majority need from networks

    Wreckage/salvage provides a breakdown of research on the Internet, society, federation, and humanity in the digital world. This article discusses the uses of Big Tech platforms and why populations that are being exploited continue to use them. It is a sobering read that helped me understand my privilege when choosing the tools I use every day.

    Domain Trends Signal Shifting Tides in 2024

This article in CircleID discusses findings from a significant domain name provider on the registration and use of domain names in 2024. I’ve seen significant growth for a Caribbean ccTLD, .gy (Guyana). This came from an analysis of Bluesky domain handles: it turns out that .gy is the second-most-used ccTLD on the site. After some research, I discovered it was due to a bridging system called Brid.gy that allows Bluesky and Mastodon to interact. I hope Guyana is getting paid enough for its use. šŸ˜‰

    French Piracy Blocking Order Goes Global, DNS Service Quad9 Vows to Fight

France just can’t keep its hands off the Internet at the moment. Whether you think piracy is good or bad, DNS manipulation is a dangerous path to follow. When you sanction it as a state, you’re sanctioning authoritarianism, and if it can be used in anger, as it was here, what is to say it can’t be used for more nefarious ends? At a minimum, you should not use your ISP’s DNS servers, and you should consider using open resolvers such as Quad9 or others.
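As a quick illustration of what switching resolvers means, here is a small sketch (using the dnspython package) that resolves the same name through your system’s default resolver and through Quad9; the domain is a placeholder:

```python
# Small sketch: resolve the same name through the system's default
# resolver and through Quad9 (9.9.9.9) and compare the answers.
# A manipulated resolver may return nothing or a different address.
import dns.resolver  # pip install dnspython

name = "example.com"  # placeholder

default_resolver = dns.resolver.Resolver()      # uses your system's DNS settings
quad9 = dns.resolver.Resolver(configure=False)  # ignore system settings
quad9.nameservers = ["9.9.9.9"]

for label, resolver in (("default", default_resolver), ("Quad9", quad9)):
    try:
        answers = resolver.resolve(name, "A")
        print(label, [a.address for a in answers])
    except Exception as exc:
        print(label, "lookup failed:", exc)
```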

    Documentary: Hypernormalisation

    For a change, I’m recommending a 2016 video documentary. Broadcast on the BBC, it is one of many excellent documentaries from Adam Curtis. I’ll let you discover it and its relevance to today’s times. (Click the title link to open the YouTube video).

    Bonus: An interview in 2018 about capitalism. It is not what you think.

    All of Adam Curtis’ documentaries are available on YouTube (sanctioned by Adam Curtis and the BBC, so you don’t need to feel guilty watching them).


    Back to the grind. Have a great week.

    → 7:22 AM, Dec 17
  • šŸ“… December 02 - December 08 | The Perilous Potential of Tech

    There are times when I start writing these articles, and I suddenly think to myself, oh shit. I think I have bitten off more than I can chew. This is one of those moments, and against my better judgment, I have decided to try to get through it. Bear with me.

I have so many thoughts floating around in that empty space balanced above my neck, and so many fleeting side conversations, some more in-depth than others, that come to me during the day. They exist only in that moment and are forever lost if not captured. So, I have been trying to write down many of these thoughts and ideas in any way I can, even if they turn into nothing. I’m sure I could be more diligent, but I have to work with what I’ve got, and that ain’t much.

    Feel free to skip this one. If you do read it through, I’d be interested in your thoughts. You know how to contact me.


I have a whiteboard in my home office on the wall behind my monitor. The monitor is fairly large and its height is adjusted for comfort, so it slightly obscures the bottom of the whiteboard. But there is enough space visible for me to keep a couple of thoughts in view throughout my day, or whenever I come into and out of the room (the board is close to the door).

    Why am I telling you this?

    Well, I have been meaning to think deeper and eventually write about a couple of things I wrote on that board that are still there. I’m not there yet, and I think it will take a lot more time to flesh them out fully, but I thought it would be useful to start that process despite its unfinished state.

    Let’s start with what is written on the board. Despite my fear of embarrassment, I’ll write exactly as it is:

    Dehumanising tech

    Takes away the barrier that humans have to temper their reactions

    Violent reactions against tech will increase because it harms no one

    To progress, I think the next step is to break down those three phrases, explain what I mean, and perhaps define some terms.

    Dehumanising Tech

    What do I mean by this title or statement?

    In essence, I’m thinking about how we are being divided into data points, automated scripts, and other digital assets that can be used, abused, monetised, and ultimately dehumanised.

[Image. Source: wallpapers.com]

These digital assets are typically controlled by a tiny fraction of the world’s population, who give you a few crumbs to make you believe you have control over that digital expression of your self when, in fact, you don’t. You are no longer the owner of you.

    And what about Artificial Intelligence, the topic of the moment? Is AI going to change the world (for good)? I’ve long argued against using the term AI to talk about LLMs, so I won’t do that here. But I thought it would be interesting to discuss one story about the beginnings of AI.

Around the advent of powerful computers, several scientists set out to replicate the human brain using complex calculating machines that they believed would, at some point (when the technology got powerful enough), become its equal. This failed spectacularly then, and it is still failing to this day.

To parody these hopeless attempts at building computers that think like humans, a computer scientist named Joseph Weizenbaum built what he claimed was a computer psychotherapist. The patient would ā€œtalkā€ to the machine by typing in their problems. The machine was named ELIZA, and it imitated the technique of a real practising psychotherapist called Carl Rogers, who simply repeated back to the patient what they had just said, in what became known as the Rogerian rhetorical approach:

    The Rogerian method of argument involves each side restating the other’s position to the satisfaction of the other, among other principles.

    Source: https://en.wikipedia.org/wiki/Carl_Rogers

This created an illusion of something intelligent in front of the patient and would fool most people who interacted with the machine at the time. This reaction was not what Joseph Weizenbaum had bargained for, and he went on to write about it in greater detail after this somewhat serendipitous experiment concluded.
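To give a sense of just how thin the machinery behind the illusion is, here is a toy, hand-rolled sketch of ELIZA-style reflection (not Weizenbaum’s actual program): it swaps a few pronouns and echoes the patient’s statement back as a question.

```python
# Toy ELIZA-style reflection, illustrating the trick rather than
# reproducing Weizenbaum's actual program: swap a few pronouns and
# echo the statement back as a question.
import re

REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my",
}

def reflect(statement: str) -> str:
    words = [REFLECTIONS.get(w.lower(), w) for w in re.findall(r"[\w']+", statement)]
    return "Why do you say that " + " ".join(words) + "?"

print(reflect("I am unhappy with my job"))
# -> Why do you say that you are unhappy with your job?
```

A handful of substitutions and an echo is enough to make many people feel heard.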

What he didn’t realise is that, in pulling off this trick, he’d created a building block for others to use to dehumanise computer-human interaction and develop a state of illusion that the user ultimately preferred (compared to interactions with other humans). Those building blocks went on to be exploited on a massive scale through rage-baiting, ad tech, nudge theory, genocide and the potential to destabilise nation-states. Breaking the human down into a series of numbers, bits and bytes, and database entries rendered the product ripe for control, subjugation, and manipulation.

Your sleep-tracking data is just part of this. The data is useless, even though you think it isn’t: unless you are well-educated in sleep science, objectively speaking, you can do nothing useful with it. Or, to put it differently, with or without that data, you can make best guesses about what may or may not be affecting your sleep. The same goes for exercise, diet-tracking and any number of other areas you can use to deconstruct yourself into two-dimensional data. But, on a mass scale, that data can be extremely valuable to others, with no consideration of your interests.

    Removing Barriers

    If there is one story of humans that remains consistent, it is that we believe we are superior to other species. Our cars, computers, and democracies ā€œproveā€ that humans have evolved past what other species have and that we are, therefore, at the top of the pecking order.

[Image. Source: unknown]

    It allows us to tell stories to ourselves that justify any action we take toward another species. This trickles down into our understanding of difference, specifically race. We tell ourselves stories of how one race is superior to another, not because we believe humans are all equal, but quite the opposite. We believe that some are more equal than others, and the way to achieve this is to remove pieces of the others’ humanity, making them not quite as human as us. When we remove fundamental human rights from others, we’re systematically stating that they don’t deserve the same rights because they are not as human as us. Often with catastrophic consequences:

    The Belgians were influenced by anthropometry, then a fashionable science, and obsessed with the classification and differentiation of ā€˜races’. They decided that the Tutsis’ facial traits showed they were of Hamitic or Nilotic origin, and were descended from a cattle-herding people who had come to central Africa in search of pasture and imposed themselves on the local Hutus (Bantu farmers) and Twa (a pygmy people who were the original occupants of the land)

    Source: https://mondediplo.com/2021/06/11rwanda

    This dehumanisation of the other justifies actions taken against the ā€œotherā€ because, without it, we are essentially doing it to ourselves. This is a removal of barriers because the dehumanisation effect justifies the actions.

    Technology’s reduction of the human and its constituent parts into data points in a database does the same. It eventually reduces the barriers to an extent that justifies any and all actions we take towards the other on the opposite side of the monitor. For many, the perceived anonymity behind the screen removes enough of that barrier.

    On the Internet, nobody knows you’re a dog (1993, The New Yorker).

    And, as I have noted above, the mass simultaneous hallucination of the world creates siloed realities that bear little resemblance to the real world and provide potentialities for misuse and abuse.

    Violence and Technology

    As tech dehumanises us, the inevitable consequence is a reaction against it. We’re seeing this; its seeds have been growing slowly for decades.

[Image. Source: Office Space (film)]

The Luddites recognised this, too.1 TL;DR: they were not against technology. They were against technology dehumanising them and reducing them to irrelevant pieces in the cogs of society, thus devaluing them.

Last week, we saw a stark and brutal consequence of dehumanising tech with the cold-blooded execution of the UnitedHealthcare CEO in NYC. More importantly, from a societal point of view, we saw the Internet’s reaction to it, with thousands and thousands of people actively celebrating the murder. Hundreds of jokes and memes pointed to the same thing: the dehumanisation of his insured justified the dehumanisation of him and ultimately justified the retaliation. What they said was (paraphrasing), ā€œIf it is OK for the CEO to hide behind a screen, using data to decide the fate of another human being and justifying it through Machine Learning and Bayesian predictions, then it is OK for that life to be ended, because, like all of our lives, his is no longer human. We are all just binary representations of humans.ā€

    This is a perilous path to pursue and is likely just the start of a cycle of digital violence becoming real-world violence.


    No reading list this time

    I’ve been rambling enough already, so I’ve moved a list of articles I wanted to recommend to next week.


I feel very vulnerable discussing things like this in the open, but I wanted to get something down that I can work from. Don’t shout at your screen if you feel I’ve got something wrong. Contact me. Let’s chat. I am, after all, human. Have a great week.


1. Read Blood in the Machine, Brian Merchant - www.hachettebookgroup.com/titles/br… ↩︎

    → 8:44 AM, Dec 10