  • 14 January 2026 - Artificial Intelligence and what I really think

    I’ve been remarkably quiet about AI on this newsletter. I think it is time to change that.

    Upfront, this is a long one. Enjoy.


    I conducted a search on all the writing I have done on this newsletter since its inception (not other places I write, only here) and I have mentioned AI 17 times. I first mentioned it in 2019, when I naively discussed the generalisation of AI for automation, essentially arguing that manufacturing robots and automated lines would be boosted by AI to make things more efficient. That was a full three years before the public launch of ChatGPT. Naively, because I fell into the trap that so many people fall into today: not defining which type of AI I was talking about. Today, the term is used in a dangerously reductive form, suggesting to many that all AI means chatbot-style Large Language Models (LLMs). I was, of course, talking specifically about Machine Learning (ML) at the time, but you’ll have to take my word for it. I was aware of GPT models from around 2017/2018, when they were being used by malware actors to ‘mimic’ text in emails, avoiding anti-spam detection and targeting specific users with classic social engineering. But they were nowhere near as sophisticated back then.

    Since then, I have discussed AI in terms of its use on so-called Big Data, taking care to cover the downsides and risks of using data in too cavalier a fashion. I saw demonstrations during a conference where ML was successfully used to help organise masses of invoices imported from PDF copies. Clearly a great demo, but how many businesses are faced with that situation? Typically, businesses need to digest data on an ongoing basis (backups serve the purpose of restoration in the event of catastrophe), and the time savings are small in the moment, although they add up over a year or so. Obviously, the pandemic turbocharged the idea of using AI. I discussed that a little, and during a conference presented to a wide audience some of the trends I had been noticing that would affect the Caribbean as a whole.

    In September 2020, I rightly observed that NVIDIA was about to go from being a well-known PC graphics card company to being the backbone of everything AI, thanks to a lucky accident of mathematics: the matrix arithmetic at the heart of neural networks is exactly the kind of highly parallel work that graphics chips are built for. Side note: I was talking about ARM a lot too, and saying that it was headed for greater things in computing hardware; five years later, the benefits of the architecture are apparent. Subsequently, I mentioned only in passing how AI (again, poorly defined on my part) was going to become more and more mainstream, but it was last on a list of things like Cloud Computing, Digital Transformation, Regulation, Security, and Mis/Dis-information! If I had to explain why, I would suggest that, even back then, whilst I saw some use for it, I was much more sceptical about its transformative capacity than most.

    In Computational, I discussed how the robots were going to take over, but not in the way you think. Then in February, I relayed some feelings and responses that I had been seeing first-hand from training sessions I had been giving for local businesses interested in the technology. Even then, I was circumspect about its universal usefulness and about the risks of using it. I wrote:

    It is clear to me that I have been surprised by the interest from such a broad range of managers and business leaders for a product that is so technical and so linked to ICT. The OpenAI hype machine has galvanised the public into believing that these tools can make them one hundred or more times as efficient for 100 times less money than they are spending at the moment (on personnel). This, of course, is not true at all, and I find I have to temper expectations and canalise those runaway thoughts they often have about generative AI and how it will make every person redundant.

    I don’t think discussing accuracy, efficiency or other measures of “intelligence” is helpful at this stage, as these systems are changing rapidly. To give you an example, I have had to modify the training materials no less than ten times in the last six months. I would suggest a wait-and-see approach before integrating them into fundamental or central processes in your businesses that would provoke significant consequences in the case of error or failure. I would also suggest you integrate human-based verification and validation to the output generated to ensure you don’t fall foul of mis and dis-information, obviously wrong answers, and poor analysis that these LLMs can produce. That doesn’t mean that I don’t support the use of them. Please do. However, please don’t rely on them too much, as you may be sorely disappointed and dissatisfied with the results.

    This is just to lay the groundwork for how I currently feel about LLMs and AI in general.


    Not all AI is the same

    Generally, I think it is important to be clear about what it is you’re talking about. If two or more people are ostensibly talking about the same subject, then it is imperative that all parties are using a common working definition. This is often not the case when discussing AI, and this failing is actively exploited by firms and individuals for their own gain. For example, many people like to discuss “freedom of speech”, particularly in an online context. But what most ignore is that the definition is not the same across the entire world. The European Convention on Human Rights, and the European Court of Human Rights that interprets it, frame the right as “freedom of expression” rather than “freedom of speech”. Notice the difference. Speech is too limited in scope, in that an act is not counted. With freedom of expression, an act (symbolic speech) is included. When you ‘like’ something on the Internet, you’re not speaking, but your action is a form of expression that you have made public. Of course, we can get into all sorts of debates about what the ‘like’ button actually means and the “correct” meaning of freedom, etc., but that’s not the object of this discussion.

    When talking about AI, what people seem to be talking about is LLM chatbots. This reductive use of the term is dangerous, and it lets the firms that want to control our collective experience of the Internet dictate the terms and use of the definition. It is dangerous because someone uninitiated in the technology might see a fully automated, reliably accurate system and assume it runs on something like ChatGPT, perhaps encouraging them to invest in such tools for their own processes, only to find that, for some odd reason, their system doesn’t work so well. Have you spotted what’s wrong? Of course: an LLM is a statistical next-word generator, and a Machine Learning (ML) algorithm is something entirely different, with behaviours and outputs that cannot be compared with an LLM’s. This example should also make you think about the ultimate use of the outcomes. What if, for example, a dataset was being used to determine whether a person receives a life-saving benefit or not, ultimately determining whether that person will live or die? Is it acceptable to misuse the term AI then?
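
    To make the distinction concrete, here is a deliberately toy sketch in Python (with made-up data and no resemblance to any production system): a “next-word generator” that simply samples from observed word frequencies, next to a crude ML-style classifier that learns a decision rule from labelled examples. The point is only that they are different kinds of machine, judged in different ways.

        import random
        from collections import Counter, defaultdict

        # A toy "next-word generator": sample the next word from bigram counts.
        corpus = "the cat sat on the mat the cat ate the fish".split()
        bigrams = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            bigrams[prev][nxt] += 1

        def next_word(prev: str) -> str:
            words, weights = zip(*bigrams[prev].items())
            # Statistically plausible continuation, nothing more.
            return random.choices(words, weights=weights)[0]

        # A toy "ML classifier": learn a threshold from labelled examples.
        # (amount, flagged_for_review) pairs -- entirely invented data.
        training = [(120, 0), (80, 0), (950, 1), (1100, 1), (60, 0), (870, 1)]

        def fit_threshold(data):
            # Midpoint between the largest unflagged and smallest flagged amount.
            zeros = [x for x, y in data if y == 0]
            ones = [x for x, y in data if y == 1]
            return (max(zeros) + min(ones)) / 2

        threshold = fit_threshold(training)
        print("next word after 'the':", next_word("the"))
        print("flag a 500 EUR invoice?", 500 > threshold)

    One produces plausible text with no notion of being right; the other produces a decision you can test against reality. Conflating the two under “AI” is exactly the confusion being exploited.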


    Where do I stand on LLMs?

    Off the bat, I should be clear about my feelings on the technology. Separating everything, and looking purely at the technological prowess of LLMs, it is clear to me that there are some advances that are significant and that certain use cases in processing natural language, for example, may prove useful.

    This statement, you should notice, is not a full-on endorsement, nor a definitive, clear-cut position. Why am I sitting on the fence (or, as the French have it, with my backside between two chairs)?

    For me, it comes down to several issues that, on balance, feel deeply problematic. Unless you’ve been disconnected from the Internet and hidden yourself from international news, it should come as no surprise that LLM technology has been used for some of the most revolting and criminal behaviour we have seen, and is slowly being integrated into military applications, such as aiding a full-on genocide in Gaza. Is it the fault of the LLMs themselves? Well, in part, absolutely. And, more specifically, it is the fault of the engineers and the managers who develop these systems as much as it is the fault of the end users who willingly partake in the production of these outputs.

    I feel like I shouldn’t need to go into this in detail, but every so often I wonder. Briefly, an LLM is the product of the material it has ingested and calculated over. Given that the ingress data is as biased, misrepresentative of the world, and rotten as it is, it should be no surprise that the egress is much the same. The fact that this is frequently denied, misdiagnosed, misinterpreted and misrepresented should tell you enough about what to think of the companies developing these systems. You, alone, should decide where you fit on that scale.

    I, myself, am deeply troubled by the cavalier attitude to the use and misuse of data that is, in some cases, open (i.e. free to access, use, and reuse), but in numerous instances is clearly protected material that has been misappropriated and has no place, without consent, in the training datasets used by the LLM-makers.


    But I think the most egregious ‘fault’ of the LLM-makers is their consistent false narratives about efficacy, productivity and probity of what are, provably, just next-word-prediction-calculators. Am I saying there is no use, and they are a waste of time? Please re-read the first paragraph in this section.

    The way LLM-makers aggressively push their services with demonstrably false claims is nothing short of disgraceful. They have won over the public with two strategies that, I think, they chanced upon rather than plotted, “The Brain”-style. The first is the age-old trick of the door-to-door salesman: sell you crap that at first glance looks fucking amazeballs, but that, in subsequent use, reveals you’ve been had. The first time you type a request into a chatbot and it blurts out a bunch of text about a subject you’re interested in, it seems quite impressive, and, to be fair, the prose is not terribly written (for that, see my writing 😃). But on closer inspection, particularly if you are an expert in the subject, you start to notice areas of supposition, lack of detail, poor factual datapoints and, often, invented information. I recently read a quote that went something like this:

    It’s weird. When I ask ChatGPT to tell me something about a topic I don’t know much about, it is really good and shows a level of understanding that is impressive. But when I ask it something about a subject I am an expert in, I find so many mistakes that it is not always that useful.

    If you thought “Dunning-Kruger” while reading that quote, you pass Go and collect €200.

    I’ll give you an example, one that I use in my training. We ask the LLM of choice to explain why there are so many traffic problems in Martinique. The answers look convincing and are plausible at first glance. And, not unlike a stopped clock being correct twice a day, they even hit upon the ‘real’ reasons from time to time. On inspection, however, they frequently offer up stereotypical (and just shy of prejudiced) explanations: from blaming tourists for creating the traffic jams (laughably implausible), to offering up that the islands are mountainous (true) with poor roads in those parts (true) and that this contributes. For clarification, there are no regular traffic jams in the mountains. But, as mentioned, the real explanations do turn up, such as rush hour, a concentration of economic activity, poor public transport, and others. Yes! It got it right. Oh! What? That’s true for pretty much any place in the world where the conditions align, did you say? Like every town in the world?


    The other way the LLM-makers try to exercise control over the narrative, leading you to believe in their immense and superhuman powers, is through the simple technique of anthropomorphisation.1 As humans, we’re hardwired to see and feel human-like qualities in things. Just as we see a face in the front end of a car, we feel an intense pull to believe that what is written in front of us is human-like. Coupled with our own Achilles heel of confirmation bias (a phenomenon we’re all vulnerable to), we can easily fall into the trap of trusting the outputs.2 ELIZA was the first real-world example of this, and today’s LLMs are no different.

    But there is a difference today. The LLM-makers are actively exploiting that human weakness. They are deliberately using anthropomorphising terminology when talking about these statistical next-word calculators. They use terms like “thinking”, “he/she”, “feels” and “understands”, among others. These are conjurer’s tricks to head off disbelief in the act being performed. They are also very dangerous. The sleight of hand being performed, at its heart, is to displace responsibility from the humans creating and operating these systems onto the computer program, as if a computer had rights and, more importantly, responsibilities like humans do. The desired outcome is to ensure that they are never held accountable for horrific consequences, like the suicide of a teenager who acted on advice and encouragement from ChatGPT.3 See also the deepfake child pornography site formerly known as Twitter.

    It is not new either. This from 1976:

    If a researcher … calls the main loop of his program “UNDERSTAND,” he is (until proven innocent) merely begging the question. He may mislead a lot of people, most prominently himself. … What he should do instead is refer to this main loop as “G0034,” and see if he can convince himself or anyone else that G0034 implements some part of understanding. … Many instructive examples of wishful mnemonics by AI researchers come to mind once you see the point.

    My source: https://www.techpolicy.press/we-need-to-talk-about-how-we-talk-about-ai/


    Then we arrive at a highly disputed topic concerning the development and operation of LLMs: the power and cooling necessary to run the huge GPU farms destined for training, and for replying to your frivolous little chat asking for a recipe for tonight’s supper.

    Currently, the power use of the LLM-makers is presented either as something that is accelerating the destruction of the planet, through the burning of ever more fossil fuels to power datacenters, or as something that is, in the grand scheme of things, inconsequential. I don’t have clear answers here, but there have been a number of reports and studies showing that these systems do indeed use a lot more energy than previously estimated. I know there are plenty of reports bouncing around that claim the LLMs only use marginal amounts of electricity and cooling, and that we shouldn’t worry our pretty little heads about it and should carry on talking to our digital mistresses and making unsavoury content on the previously mentioned deepfake child pornography site.

    In 1964, the Surgeon General of the United States of America released the first report on smoking and health. It concluded that smoking was:

    • A cause of lung cancer and laryngeal cancer in men
    • A probable cause of lung cancer in women
    • The most important cause of chronic bronchitis

    But this came after the tobacco industry had released its own statement in 1954, entitled “A Frank Statement to Cigarette Smokers”, in which it proceeded to tell people that smoking was not detrimental to health. The tobacco companies all ‘knew’ this was a lie, but they kept peddling it until the lie was untenable and enough evidence had accumulated that they finally caved, sometime around 1999. (Read that date again.)

    I believe what we are witnessing with the LLM-makers is the same dynamic being played out, albeit at 2020s speeds. I believe the LLM-makers are fully aware, and are deliberately holding off on granting full and unbridled access for analysis, because the picture will be shown to be worse than most people think. I believe they are doing so because they have started to enjoy the smell of their own farts about a statistical next-word guesser ‘finding an innovative solution to the world’s energy issues’. It is delusional, and in saner times would have most of us recommending psychiatric treatment for these people.


    Possibly the biggest issue of all, however, is that the amount of money required to keep the lights on currently, and required in the near future, cannot be sustained, to the point at which a bubble (multiple bubbles?) will burst faster than one on a silly TV game show.

    This essay is already very long, so I’ll spare you the excruciating detail; there are plenty of others who have covered it, and done a much better job than I could.4 Suffice it to say that the money required to build a datacenter, plus the costs of training the models, added to the costs of running the applications for each query, far outweighs the revenues, to the point that tens of billions of dollars are being lost across the industry per quarter and there is absolutely no path to profit for the foreseeable future.

    Add to that the development of free and open models from the competition, which effectively reduce the ‘value’ of an LLM to zero, and the situation gets much harder for the hyperscalers. Finance bros and tech bros like to talk about “moats”, ignorant of the fact that moats were never the great defences they are cracked up to be. In other words, why would you pay if you could get access to something similar for free? Out of jingoistic tendencies, probably, if we’re honest, but that won’t be a big enough population to turn a healthy profit with today’s systems. The confluence of the previously discussed inefficacy and the mild productivity gains in certain circumstances should lead us to conclude that something has to correct or reset. Not the boosters, though. Oh, no. If anything, they say it “proves” that they need more money, not less. Like a homeopathy peddler convinced that even less of the active ingredient makes the medicine stronger. (I use the term medicine lightly here, as I’m sure you’re aware.)


    What have I done, and what am I doing going forward?

    Well, those are two different questions of course, but if I were to summarise, it’s a combination of limited use, mindful language when discussing these systems, and taking the care and time to train people on the regulatory and policy implications, as well as the negative aspects of LLM use, to help provide a fair and balanced understanding of a toolset that has, for now, become embedded in today’s computer use.

    As far as use goes, I very rarely use them, and I never use the outputs directly in any work I produce. I have used, and will use, them now and again for ideation, or to get the motor kickstarted; I sometimes find them helpful for combating procrastination and other difficulties. Sometimes, longer-form outputs can be helpful for structure, ideas and general pointers. I recently used Claude to help me resolve a couple of formatting issues in the .css on a Hugo-based system. It was helpful, but not the panacea the vibe-coding fraternity would have you believe: it led me in circles at times, but gave me enough of a pointer to find the real help that actually resolved the issue. Note: I use grammar-checkers that are increasingly “AI-powered”, so it is difficult to totally avoid LLM use when writing.

    When I’m training, I take time to explain how LLMs work in simple terms, then quickly move on to the uses and abuses of LLMs, the back-end excesses, the atrocious conditions of the moderators, the risks of data privacy breaches and the risks of being had by the machine, and I take time to stress that any use of these systems unequivocally requires responsibility and expertise. If you abdicate your responsibility to an LLM, you will regret it. As experts in a given field or subject, you should maintain that expertise and only use the systems for assistance. If you are not an expert in a topic, and you rely too heavily on the output of an LLM, you are playing with fire, as plenty of legal personnel have found to their detriment. There have been plenty of fabricated legal texts submitted through the use of LLMs, in Turkey, Austria, Brazil, the USA, the Czech Republic, Argentina, …need I list them all?5 I confront the racialised and racist outputs generated, and give advice on how to spot them, as well as explaining how that happens (hint: 💩 in = 💩 out).

    I don’t know what the LLMs will eventually become, but I do know that the current mindset of the devs and operators, including the tech bros behind them, is not the right path, and it is something that requires correction, soon.

    One last point I’d like to include here, and something that merits more discussion in the future, is that a cult belief has recently been born out of the fear of a bubble bursting: that it doesn’t matter if there’s a crash, because it’ll leave all this lovely datacenter infrastructure available for future innovation, just like the fiber deployments in the USA did after the dot-com crash. (Not true). TBC.


    Thanks for reading.

    Have a most excellent week.


    1. The Free Dictionary: To ascribe human characteristics to things not human. ↩︎

    2. Confirmation bias - Wikipedia ↩︎

    3. We can argue specifics, if you want. I’m open to discussion ↩︎

    4. Read Ed Zitron, for example. A cursory search will find many more. ↩︎

    5. Taken from the OECD AI Incidents and Hazards Monitor ↩︎

    → 14 January 2026, 08:30
  • 📅 It’s bullshit, and it always was

    It’s bullshit and it always was. I don’t think I need to go into too much detail here, but I think it is important for me to take note of where I stand on some of the things that are happening in the world of tech. I’m not sure that I’m ready to share certain parts of that discussion just yet, but I do think that I’ll address them in good time.

    When I started to think about writing recently, I remembered that I started this blog, and the ideas behind it in the pre-blog days, a long time ago. I’ve been an on-and-off blogger for over 20 years. Back then, I had a profound sense of optimism about tech and how it would transform society. Oh boy, was I naive! I really thought that tech would be used in ways that would make things better for most people, if not all. There was a certain inevitability to it. And sure enough, little by little, things improved for a few and then, like my favourite Hemingway quote, it suddenly got awful for everyone except the few. And it all happened so quickly. I think that we’re all a little punch-drunk from what’s happening in tech, almost certainly not seeing it for what it is, and we seem to be blindly falling into outcomes that are not just inconvenient, but downright dangerous for society. That’s a lot of what this note is about.

    You can’t code your way out of social problems

    I think I should pause here and tell you that I have not become “anti-tech”. I still think that technology can play a significant role in the betterment of the world. What has changed since my youthful positivity about technology is a profound understanding that technology alone is not enough to fix things. Code also creates new problems that need to be solved. But the biggest issue is the people behind the tech companies and what they’re doing in the world.

    It used to be said that “build it, and they will come” (I still hear it regularly). But it’s bullshit, and it always was. There is a belief that those who built tech-focused companies and made millions or billions are somehow ‘superior’ to the rest of us. It’s bullshit, and it always was. There is a belief that tech is inevitable and that we must get on board or be left behind. It’s bullshit, and it always was. I’ll discuss that another time. For now, this note is about one of those people I mentioned above, and how he’s trying to impose his will on a small island developing state, namely Saint Kitts and Nevis.


    ARIN AC elections results

    Update on the ARIN elections. I didn’t get voted on to the Advisory Council this time around, and I didn’t disgrace myself with the number of votes I obtained. I’ll put myself forward again in the future, as places become available.


    A grotesque vision for Nevis

    Earlier, I read that an early Bitcoin “investor”, Olivier Janssens, a beneficiary of Saint Kitts and Nevis’ Citizenship by Investment (CBI) scheme, has been buying up land and now proposes to set up a “state-within-a-state” called Destiny. He has denied this, of course, but evidence about his beliefs would suggest otherwise.

    The CBI, as you may know, is a way for relatively rich people to “buy” a second citizenship from a smaller nation for a fee, although the list of countries offering visa-free travel to its holders is rapidly diminishing with recent changes in Europe and the USA. It is difficult to imagine why citizenship of Saint Kitts and Nevis is so important to him.

    Typically, people buy their ‘citizenship’ to obtain visa-free entry into Europe or the USA. Presumably because a) it’s such a massive burden to apply for a visa that several hundred thousand dollars is worth it to avoid the paperwork, or b) their past is murky and they would likely be rejected if they applied directly, or c) they think of themselves as above mere mortals like you and me, with an inalienable right to do whatever they want, using cash as their fast-track ticket, however immorally that wealth was attained.1 So why the link with Bitcoin?

    It seems that Janssens is an early Bitcoin investor (clearly he ‘made’ a lot of money) and a co-founder of Freedom Investment Group, which seems to have abruptly shut down in November. Possibly unrelated, but an interesting data point nonetheless. It’s difficult to get a handle on what the organisation is (there are plenty of real estate agents with the same name), but the Libertarian vibes are all over it. He was also a board member of the Bitcoin Foundation, which had its tax-exempt status revoked in 2022 for “… not filing a Form 990-series return or notice for three consecutive years”, and he seems to have been trying to take over the foundation with a $100K “bounty” to “replace the Bitcoin Foundation”.2

    What is more relevant, in relation to the Caribbean, is that he is also a Libertarian (in the modern American sense), an anarcho-capitalist, though he seems to have trouble articulating that clearly (see the YouTube video above). What he is, however, is actively anti-democratic. He is also part of the cultish Network State movement. If you don’t know what that is, then have a read of Gil Duran’s work at The Nerd Reich. These people want a world in which democracy is destroyed so they can do as they please, like modern-day Louis XVIs. They imagine a future beyond the nation-state, where society is divided into corporate-run territories and techno-fiefdoms.

    Don’t be fooled by the seductive language either. They openly espouse real violence in public theatres, with some calling for public hangings. Others openly call for extra-judicial murders of people in boats in your local waters (including some local politicians). Whether you believe those on the targeted boats are drug smugglers or not, don’t they have a fundamental human right to fair justice? To be fair to Janssens, I haven’t seen anything that shows he has called for violence personally, but being part of the Network State, or otherwise sympathising with its ideology, is very troubling at the very least.

    But the real reason for the investments and the project is evident: it is a systematic plan, already in motion, to build a perverse vision of “freedom” for their ilk. What amounts to an antidemocratic state-within-a-state3 is suspiciously similar to a typical colonialist takeover of a nation, one that will undoubtedly exact violence upon the local economy while taking zero responsibility for any negative effects. The blurb for the project is seductive (if a little grotesque architecturally), but don’t be fooled: it is a project that is not for you. It’s for those who want to extract from, and protect themselves against, the very society they’re doing so much to burn down. Is this just a modern (digital) expression of the same colonial past? Time will tell.

    I’ve talked in the past, as have many others, about Digital Colonialism: how international platforms have too much power over our online experiences, over when and what we see on the Internet, which clearly affects how we view the world. It distorts democracy, education, health, language, and even knowledge as we know it. This project is the physical embodiment of that same dynamic, and others are already in motion in other parts of the region.4

    These people are not your friends, the friends of your government, or the friends of your country. For example, Janssens has, in the recent past, openly called government evil! Evil? Sure, governments can be pretty stupid at times, but evil? Where do you think that talk logically ends up? To be absolutely clear, this is not a personal attack on Janssens; it is an attack on the politics he and others in the cult hold, and on their contempt for the rest of the world. It is an alarm call against the very real threat this ideology poses to the Caribbean, where the power dynamic is often not in our favour. Some of you may not know (or may have forgotten) about Allen Stanford. How about Jim Jones? This is not the first time something like this has happened, and it won’t be the last if we don’t do something about it.

    On the wider implications of this, it is becoming increasingly clear to a broader audience that “tech” is becoming synonymous with authoritarianism and with extreme right-wing, anti-democratic principles. Types like Janssens are finally showing themselves for who they really are, having worked in the shadows for so long, enabled by decades of lacklustre attention from popular media. The seeds of this behaviour were sown a very long time ago, much longer ago than many of us realised. The signs were there back then; a few picked up on it early, many of us didn’t. If there is one text I think you should read to get a better historical understanding and framing of what was going on (and still is), it is Cyberselfish by Paulina Borsook. (The book is sadly out of print and getting a copy is like finding rocking-horse shit, but there is a campaign to get the book back into print, which I hope succeeds.) If you can’t find a copy, then the essay The Californian Ideology, by Richard Barbrook and Andy Cameron, is a great read.5 It might help you contextualise what is really happening.

    Digital Colonialism

    Speaking of Digital Colonialism, I recommend reading I’m Kenyan. I Don’t Write Like ChatGPT. ChatGPT Writes Like Me. It is very interesting.

    Note on CBI schemes

    Am I against CBI schemes? Sort of. But I don’t know enough currently to fully make up my mind. I see some benefits for small nations, but there seem to be some very real downsides that could be disastrous for locals. In the example above, how is this materially different from colonialism and the plantation economy, short of rebirthing slavery? (Although I wouldn’t put that past people like this.) CBI schemes seem to me to be well-intentioned investment vehicles that are ripe for exploitation. There’s a whole industry set up to target wealthy people seeking dual-nationality status, some of it offering questionable advice (going as far as to “debunk myths” in ways that are clearly agenda-ridden), some with questionable politics behind the flashy websites, touching on individual sovereignty (that’s a big fucking red flag), as well as other outright grifts.

    They seem to me to be a little like the cruise-ship trap, whose real-world worth to the local economy some are only just starting to openly question. I have talked about this for several years with peers and with people in the local tourism industry, as it is clear that cruise ships have made zero material difference in Martinique (anecdotally corroborating the reasons expressed by Dr Delisle Worrell in the article mentioned). I would also add that many small islands have had to invest a lot of money to get their infrastructure ‘compliant’ with the demands of the cruise companies, seeing little to nothing in return. The parallels are there if you scrutinise a little. There is more to discuss about this …


    Of note

    I permanently deleted the account that I had created on WhatsApp, so if you’ve sent me a message and I haven’t replied, now you have an explanation. Why? Do you pay attention to Meta, what it does and what it stands for? I couldn’t, in all good conscience, keep endorsing the platform through use and association. So I deleted it for good. I had set the account up for research purposes and clearly marked it as such, letting people know not to get in touch with me there, going as far as to state in the bio that I would ignore any contact. Despite this, I received plenty of personal messages and unsolicited commercial communications. Rude!

    I think that it comes down to the fact that many would like a presence on the Internet, and microblogging and social media platforms are a quick and dirty way to achieve that. They cost nothing, as long as you’re OK to sell your soul in the process.

    I would recommend that everyone get that presence on a simple blogging system, syndicate to social media (but only if you must), and then encourage friends, colleagues, and others to follow through simple, practical and privacy-respecting tools like RSS.6 (There’s a small sketch of what reading an RSS feed looks like just after the list below.) But having a website can cost decent money! Well, not exactly. It can if you use things like WordPress (don’t, given that Automattic’s CEO is a deeply problematic character; note that he also owns DayOne and Gravatar). So I’ve compiled a list of options that are cheap and cheerful, or even free in some cases:

    Bear Blog (https://bearblog.dev/)

    Micro.blog (https://micro.blog/)

    omg.lol (https://home.omg.lol/)
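
    If you’ve never looked at what’s under the hood, the sketch below, in Python with only the standard library, shows roughly what “following by RSS” amounts to: fetch a feed and list the latest post titles and links. The feed URL is a placeholder, and it assumes a plain RSS 2.0 feed (Atom feeds are structured differently).

        import urllib.request
        import xml.etree.ElementTree as ET

        FEED_URL = "https://example.com/feed.xml"  # placeholder; point it at any RSS 2.0 feed

        with urllib.request.urlopen(FEED_URL) as response:
            tree = ET.parse(response)

        # RSS 2.0 layout: <rss><channel><item><title/><link/></item>...</channel></rss>
        for item in tree.getroot().findall("./channel/item"):
            title = item.findtext("title", default="(untitled)")
            link = item.findtext("link", default="")
            print(f"{title}\n  {link}")

    Every feed reader is, at heart, doing something like this on a schedule: no account, no tracking, no algorithm deciding what you see.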


    On a personal note, I am currently looking for projects and work. I have over 30 years in ICT in various technical roles. I write market research reports (example here), develop policy (IT/AI), and do technical consulting. I am also a certified trainer (from identifying needs through to developing structured training). I regularly work in both English and French.

    Please get in touch (reply to this mail or email matthew at dgtlfutures.com) if you or anyone you know could use my services.

    Thanks for reading and being a supporter. Have a pleasant week.


    1. Read David Golumbia’s excellent text on it - The Politics of Bitcoin. It’s dirt cheap and not that long ↩︎

    2. https://apps.irs.gov/app/eos/ ↩︎

    3. Janssens has publicly denied this, but similar projects from his peers would seem to show otherwise. See the Network School Dashboard. ↩︎

    4. The Network State Dashboard: https://ns.com/dashboard ↩︎

    5. The following Wikipedia entry has further information and details the links to what I have been discussing: https://en.wikipedia.org/wiki/The_Californian_Ideology ↩︎

    6. Syndication in this instance means cross-posting, something most of the services I highlight support. The idea is that you write your post, just like on Instagram, but it lives on a site you own (subject to terms and conditions), and it then automatically appears on your social media accounts. ↩︎

    → 23 December 2025, 05:00
  • Normal service to resume, and an ask 🙏

    I know, I know, I’ve taken an extended break from writing here; what started as an intended short break in spring turned into a six-month absence.

    I haven’t been idle; quite the opposite, but I haven’t had the opportunity to devote enough time to the subjects I wanted and still want to write about. It is a big effort each time, and a few things in my personal life kind of got in the way. I’m hoping to pick up the writing bug shortly, but sadly not today.

    I have an altogether ulterior motive for this quick email…

    As you know, if you’ve been reading my work for the last few years or so, I have been more and more involved in aspects of Internet Governance, specifically with ARIN, the American Registry for Internet Numbers. I’ve been on the Fellowship program twice. I volunteered to help assess their newly released training platform before it was available to the public, as well as make contributions to the selection committee for future fellows. Last year, I was on the ballot for a place on the Advisory Council (AC), and although I didn’t get elected, I mentioned that I would put my name in the hat again this year, which is precisely what I have done.

    That brings me to the purpose of this email, and I apologise upfront, as I am not comfortable requesting things of people and abusing the trust you put in me when subscribing to this email newsletter.

    I’m asking for your help. The ARIN election process provides a space for nominees to receive statements of support. It helps a lot to have many people post such a statement, and I’d be grateful if you could take the time (only a few minutes) to provide a statement of support for my candidacy for the ARIN AC.

    ARIN have made it really simple (see screenshot), and you don’t need to write Shakespearean prose. Authentic, brief expressions of support are all that is necessary.

    If you can spare a few minutes, please write a statement of support to help me get elected on the ARIN AC, here:

    https://arin-elections.net/elections/advisory-council/cowen-matthew/

    Thanks in advance, and I look forward to serving on the AC and also continuing to write here over the coming months.

    PS. I have a lot of ideas and topics, but nothing fully formed as yet.

    → 22 September 2025, 18:16
  • 📅 March 26 - April 06 | It’s not DNS …

    If you have been reading my writing for the last few years or so, you’ll know that I have been trying to better understand the workings and innards of the Internet and share that understanding as much as I can in a coherent manner. I fail sometimes, and I get it right other times. That’s okay, and that’s the only way to learn something deeply.

    Ever since I first connected to a university terminal and hopped several links to get to the Internet properly, I have been connected and in some way involved with the Internet.

    I saw the deep potential back then and naively believed that bringing it to everyone would do some good. How stupid of me and the many others who built the Internet. We owe you an apology.


    But that is not what this note is about. This is a note about something I wanted to discuss quickly, something I have previously highlighted and discussed in fairly vague terms. Given my goal to learn and share more, I thought it would be good to do just that today.

    The Internet is built on infrastructure, and despite what Internet lore tries to tell you, it has its vulnerabilities. Some technical. Some political and some societal. This is about a technical issue being exploited by societal and political postures.

    I’m an engineer at heart and by trade, so I automatically latch on to technical aspects, perhaps without fully understanding other facets. I went from pure electrical engineering, where I built industrial-scale electrical distribution panels (I have fond memories of that job, and I still think about the installations that use the panels I built), to industrial air conditioning, then on to programming Building Management Systems, and ultimately on to network infrastructure and the consulting that has resulted from that. And as I recently discussed on the ICT Pulse podcast, and here too, tech is no longer a siloed vertical tool; it has crept into every aspect of life. And this is the driving factor of modern discussions about the Internet.

    Internet governance is no longer, and cannot be, a discussion about protocols, RFCs, and the like. The Internet affects billions of lives, and seemingly innocuous rules and regulations can have unintended consequences for individuals and societies alike.


    I should stop waffling and tell you that this is about DNS. The Domain Name System.

    DNS is the backbone of the Internet, and as the popular (in technical circles) meme goes:

    It’s not DNS
    There’s no way it’s DNS
    It was DNS

    Attributed to: SS Broski

    And it is not just me saying that:

    The Domain Name Server (DNS) is the Achilles heel of the Web. The important thing is that it’s managed responsibly. — Tim Berners-Lee

    ICANN, the Internet Corporation for Assigned Names and Numbers, is the governing body responsible for overseeing the management of DNS, but the actual day-to-day management is handled by registrars and providers (OVH, Hover, etc.), ISPs, and the RIRs (Regional Internet Registries), which deal with IP address governance.1 Despite its scope covering both names (DNS) and numbers (IP addresses), it has spent a considerable amount of time on policy development for DNS, notably the internationalisation of domain names (essentially allowing multilingual domain names rather than solely anglicised ones). It has also pushed for standards to secure DNS, and implementations of secure DNS (DNSSEC) are now gaining ground. For example, my local router is capable of DNSSEC, as is the outgoing filtering application on my Mac (Little Snitch). ICANN sets out contractual obligations for the DNS providers that mostly focus on the technical aspects of management, security, availability, etc. However, these do not fully cover issues of a squishier nature, like human rights, privacy, freedom of expression, etc.

    This is a clear example of where the Internet has suddenly become confronted with realities of human existence, having been very much isolated from these issues in the past.

    We’re seeing more and more targeted efforts by governments and organisations that, without any other recourse, attack the very fundamentals of the Internet through this relatively straightforward path. For example, CANAL+, a billionaire-owned and particularly right-wing-leaning group, recently argued successfully in court to have a number of DNS entries blocked on the resolvers of a select few providers in France (Orange, SFR, etc.). The case was about pirated live streaming of football matches, so there’s some merit to the petition. But what if it were about restricting a democratic right to protest against said company? It would be unlikely to win on that basis alone, but the precedent of DNS blocking has now been set, and it is simply a matter of testing where the line lies, something that billionaires can do and do regularly.
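
    To make that concrete, here is a minimal sketch of the mechanism at play, in Python, assuming the third-party dnspython package is installed and using example.com and two public resolvers purely as illustrative placeholders. The point is that a resolver-level block lives in whichever resolver you ask, not in the site itself, which is why a name can vanish from one provider’s DNS servers while remaining perfectly resolvable elsewhere.

        import dns.exception
        import dns.resolver

        def lookup(name: str, nameserver: str) -> list[str]:
            resolver = dns.resolver.Resolver(configure=False)  # ignore the OS resolver settings
            resolver.nameservers = [nameserver]                # ask this one resolver only
            try:
                answer = resolver.resolve(name, "A")           # IPv4 address records
                return [record.to_text() for record in answer]
            except dns.exception.DNSException:
                return []                                      # blocked, non-existent, or unreachable

        if __name__ == "__main__":
            for ns in ("1.1.1.1", "9.9.9.9"):                  # two public resolvers, for comparison
                print(ns, "->", lookup("example.com", ns) or "no answer")

    Court-ordered blocks of this kind typically apply only to the resolvers a given provider operates, which is both why they are easy to impose and why they are, for now, easy to route around.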

    Following the 7.7-magnitude earthquake in Myanmar, and what looks likely to be a major disaster for human life in the country, it was still not possible to use the Internet in a way that could coordinate aid and get news out to those who need it most. The Myanmar firewall was commissioned in 2024 and restricts a whole host of Internet services, giving the military junta unprecedented control over citizens through techniques like DNS censorship, among others.

    This all leads me to think that the next big battleground for democracy in the US, Europe and Great Britain will play out in large part through DNS and the associated services. As I have explained, if you cripple DNS, you can cripple everything.


    Reading

    A couple of articles that I have been reading:

    “Meta has stolen books”: authors to protest in London against AI trained using ‘shadow library’

    The title says it all. And it is just as bad as you think it is. There’s an odd thought experiment going on in the minds of some of the Internet pioneers: something about data being free, and the free flow of data creating a better world. I’d say, take a look around you and tell me if the world is currently a better place from where you’re standing. Some of you will say yes. But that will be because you’re standing in an extremely privileged position. More on this in the future.

    The Tech Fantasy That Powers A.I. Is Running on Fumes

    Some of the major media organisations are starting to understand the real impact of what absolutely shouldn’t be called AI. Other factors, such as the phenomenally wasteful nature of the big models, are also starting to be called into question. Even Microsoft has just cancelled a number of major datacenter building projects, presumably because they see the numbers, and they’re not quite as “number go up” as they led us all to believe.

    Trump administration’s blockchain plan for USAID is a real head-scratcher

    The systematic dismantling of USAID, an organisation I have fond memories of working with, is such a shame. Sure, it wasn’t a perfect organisation, but the people I worked with were there for genuine reasons. This, unsurprisingly from DT, is just baffling.


    Wishing you a great week.


    1. Disclaimer: I am a twice ARIN Fellow ↩︎

    → 7 April 2025, 20:36
  • 📅 March 17 - March 23 | Agency, the splinternet, and the digital plantations

    I was invited to participate as a regular contributor to two podcasts last week. I recorded for the ICT-Pulse Podcast (English) and Innovation, Agilité & Excellence (French). I’ll post the links as soon as they’re published.

    One topic that came up, admittedly raised by me, is an idea and reflection I’ve been having lately about digital platforms, more specifically social media platforms.

    The other topic, the focus of discussion on the ICT Pulse podcast, was digital agency versus digital sovereignty. It’s a fairly new idea and one that isn’t well formed. I’m not quite sure where I stand either, but I felt it was a discussion worth having to help clarify, if not resolve, some thoughts about Internet governance and the future of the Internet.

    I know some of the topics I’ve been writing about here have been pretty dark recently; I’m sorry. But I felt I needed to get them down somewhere, if not for you, then for me, to at least state my position on the shenanigans.

    At the time of writing this introduction, I think this one should be a bit more pedestrian. We’ll see. I hope you enjoy it anyway, and please feel free to reach out.


    Digital agency

    To get the discussion started, I think it is necessary to have a quick rundown of digital sovereignty. It is something that governments around the world have been discussing, and even pursuing, for a while now, some longer than others. What it boils down to is essentially regulating and controlling the Internet to ensure that a country remains in control of what happens on it within its own borders. It often manifests itself in ways that are imperceptible to many, or in totally draconian ways, as we see in states like Russia, North Korea and China, generally in the name of autonomy and self-reliance.

    Digital agency, however, is an idea closer to the ideals of the Internet’s beginnings: an open universe for all to use equally. But we have seen those ideals come up against the cold, hard realities of political difference, power brokering and all sorts of levers being applied to countries, businesses, institutions, and people. The Internet today is not an open universe, and it is certainly not free and equitable for all. The sheer number of people without meaningful access to the Internet shows that it is still very much a privileged few who benefit the most from it.

    Agency, in psychology, is having the ability to use strategies and actions that bring us what we want. Without it, we are powerless to decide our destiny.

    The ideals put forward by digital agency try to address this by attempting to wrest control back through three principles: multi-stakeholderism, realising the potential of technology, and promoting collaboration. Of the three, I can only see one, possibly two, that are realistic and achievable. TLDR; the first one. (I’ve briefly discussed it here before, so I won’t go over old ground.)

    Where I think these ideals will fall down is with the other two. Although, if I’m honest, I do think that at some point we might be able to better realise the potential of technology. Not in its current guise, however, nor under the stewardship of the goons currently in power over tech and our tech lives.

    The third, collaboration, is for me the Achilles heel of the entire endeavour. History has shown time and time again, and it will keep repeating itself ad infinitum, that we are incapable of collaborating on a global scale. It will start off with good intentions, but at some point politics, power struggles, egos, and all the things that make us human will enter the room and suck out any opportunity for meaningful collaboration on projects like this.

    I think it is a reality that we just have to get used to, and it is an inevitable consequence of technology becoming horizontally embedded in every aspect of life. Tech is essentially political now, and it will take an enormous shift for it not to be.

    I’m sure I’ll be developing this topic in the future. For now, I’m keeping up on the discussion and developments.

    The digital (social media) plantation economy

    This is a thought that came to me a while back and that I didn’t really expand upon, despite subtly touching on it over the last few months. I’m drawing a parallel between the mechanisms and organisational systems, not the people involved, past or present. Bear that in mind while you’re reading this.

    This is a story about digital platforms and how they extract everything they can for themselves, without regard for anyone or anything. And you should never forget that today’s digital platforms exist solely to sit in between the supplier and the demander and extract, extract, extract. Their only service is the arbitration of supply and demand, skimming off a profit and harming everything else.

    The big question for me is whether mediation is necessary in the first place. In many instances, it is absolutely not necessary and only hinders the exchange, resulting in worse outcomes for both the supplier and the demander. This is often seen in the way many artists eventually go solo and start selling their goods directly to their customers, at a lower cost to demanders and with better profits for suppliers.

    Of course, this isn’t a path available to everyone, and it frequently relies on the fact that said artists are already renowned and can therefore ‘afford’ to do so. This could be construed as an argument for platforms, in that they provide a valuable service getting customers in front of sellers. And for many, that’s precisely what they used to achieve; as a service, it had value for people, and the service providers were rightly paid for it. But something has changed over the last few years, and it is not to the benefit of suppliers or demanders.

    Digital platforms have steadily eroded the value they provide to their customers (suppliers and demanders), extracting the last drops of any value and diverting them to their balance sheets. They have done this by accumulating power over the arbitration process, whereby they control both the supplier’s price and costs, and which demanders they promote to and the eventual cost to those demanders. They have rigged the game to the point where they are the only winners. The tool to achieve this is two-fold: adtech and a plantation economy.

    I’ve discussed today’s adtech industry already, and I will no doubt expand upon that in the future. TLDR; it should all be burnt to the ground and rebuilt to be fairer, private, voluntary, and more equitable for its participants. (See Reading section: Facebook to stop targeting ads at UK woman after legal fight)

    The plantation economy is an economy built on scale and scalability. Already that should sound familiar, thinking about the previous paragraphs. It is an economy built as a platform to drive production to an absolute extreme without totally breaking it (although it frequently does), to the detriment of those subjected to it. Sounding more familiar? It should. And it is an economy that profits one organisation or person without any regard for the welfare of those who did the actual work of adding value to the products or services. It is basically theft.

    Social media is a digital plantation where you work for little or no remuneration, within a system where there is generally one owner who has total power to do whatever, whenever, however he or she feels. One major difference between the old and new plantations is that you are ostensibly free to leave at any time, but because of the network effects of these platforms, exercising that right is rarely practical, or even possible. The trick, then, is for the owners to dupe you into staying on the platform and, more importantly, into spending as much of your time, for as much of your lifetime as they can get, so they can monetise your participation. ‘Monetise’, in this context, is Silicon Valley doublespeak for wealth extraction and theft.

    This is why the Internet needs to be re-wilded, and corporate social media needs to decay and wither away to make way for a better model in which we all benefit.

    Distributed, multi-stakeholder and federated services are the start of this, but they too will be subject to forces that will pollute them and entice them away from what they should be. But it is a start, and the more of us who try to build something on good foundations, the more chance we have of achieving an Internet for good. It won’t be perfect, I’m not that naive, and it’ll still be inhabited by criminals and scammers, liars, thieves, etc. But it’ll be more like the real world, and not the digital dystopia the Internet is fast becoming.

    The one thing I know is that humans generally follow the pattern of the Hemingway quote: slowly, then all at once. The cracks will appear and the online mobilisation will start slowly; then suddenly, without warning, the empires will be burnt to the ground.

    I’m already seeing a massive wave of interest on the federated networks in support of ditching US services, and “Big Tech” as a whole. Several governments have even voted to ban the use of them and will shortly start replacing those services with European alternatives.

    And this is where agency is important, and why I am still undecided in my feelings about the digital agency movement and the so-called splinternet. They cross over, and they are contradictory in some places and totally rational in others.

    To be continued…


    Reading

    A few things that caught my interest recently.

    Careless People - NYT No. 1 Bestseller

    Last week I mentioned a book that looked like it would be an instant bestseller, not because it is worth it (it might be, I haven’t finished reading it yet), but because its subject tried to have it shut down, thus invoking the Streisand Effect. Well, it is now top o’ the charts.

    Facebook to stop targeting ads at UK woman after legal fight

    In a first of its kind, a U.K. woman has won a legal fight with Meta, preventing the company from targeting her with advertising. Meta, characteristically, was disingenuous and specious in its response: first by lying that it couldn’t fulfil the request, which it can miraculously do now, and secondly by trying a straw man argument suggesting that the sanction meant the company had to give away its services for free. This is a lie and they know it. The ruling is specifically and only about targeted ads. Meta is still free to serve non-targeted, privacy-respecting ads to O’Carroll. It’s just that they can’t rip off both sides with these types of ads, so they will try everything to avoid it.

    We should all follow suit, and I am actively looking into how to go about this.

    The USAID report I co-wrote is still offline, until now

    That report is still offline, so I thought I’d make it available until I get a takedown notice. If you want it, it is here.

    Think your democratic government won’t block the Internet? Think again.

    “Italy is using its Piracy Shield law to go after Google, with a court ordering the Internet giant to immediately begin poisoning its public DNS servers. This is just the latest phase of a campaign that has also targeted Italian ISPs and other international firms like Cloudflare.”


    Have a great week.

    → 25 March 2025, 06:55