matthewcowen.org
  • šŸ“… January 20 - January 26 | Analog Privilege

    In what I can only describe as a fantastically depressing week, events worldwide kicked off as expected but simultaneously went further than anyone imagined.

    I wanted to keep this blog apolitical, concentrating mostly on tech, with a little bit of cultural commentary thrown in and a dose of analysis to try to understand how technology has affected, is affecting and will affect us in the region. That can no longer be the case, because I cannot stand by and let powerful (and incredibly vicious) white men in Silicon Valley take everything away to make themselves and their cronies even richer than obscenely rich. I get that you’d like to make a lot of money; I would too. But there is a point where your wealth is too much. It’s disgusting. It’s a cancer. And just like cancerous cells, it grows and grows at the expense of healthy cells, eventually killing the host. If you need me to spell out who is who in that analogy, I invite you to consider where you place yourself today.

    /rant over

    I promised to write about an article I recently read, and I think now is a good time to do so. The article is called Analog Privilege. It is written by Maroussia LĆ©vesque. The article piqued my interest because it spoke about a long-standing secret in tech. Something that has been reported on, but not enough and not with any conviction either. The purveyors of the tech pushed on us either don’t use it, don’t let their children use it, or have special ā€œoverrideā€ privileges that shield them from the actual damage their wares have on the everyday person.

    If it is good enough for Meta to surveil us 24 hours a day, even when we have explicitly requested that it not, and to do so via applications and products that are not and should not be related but that still spy on us through backroom deals, passing that data to Mark Zuckerberg, then the minimum we should require in return is absolute and complete access to his life. And when a fraction of that intrusion was turned on these people (the various bots following their private jets), there were shutdowns citing a ā€œviolation of privacyā€. Really? In reality, we should be the holders of our private information, and their requests to use it should be ephemeral, restrained in scope, revocable and erasable at any instant.

    My text here combines the topics discussed in the article with my own feelings about and discussion of them. You’ll note that where I use the noun itself I keep the original American spelling (ā€œanalogā€), and I use the British spelling (ā€œanalogueā€) for everything else.

    This is called analogue privilege.


    The article starts with a very real-world example of how predictive analytics used by child welfare services in the United States is, in practice, used to stigmatise the poor whilst simultaneously sparing the rich from that stigma. In short, the rich get to pay for real doctors and genuine patient care, whereas the poor, and soon you too, will be thrust into a semi-automated hellscape of healthcare that is ā€œcareā€ in name only.

    Future Hospital visit: www.youtube.com/watch

    When you list the range and scope of interactions humans rely on in everyday life, the potential for abuse created by replacing the humans driving those interactions is enormous. What is often lost in that replacement is that human interaction is analogue, and consequently messy by definition.

    We, myself included, discuss the ā€œdigital divideā€ and how the underprivileged are being excluded from society, and how that is generally a bad thing. And that is true. Typically. But this understanding is changing, because more and more technology is detrimental rather than beneficial. She illustrates this using the table below:

    |           | Beneficial                          | Detrimental                         |
    | --------- | ----------------------------------- | ----------------------------------- |
    | Inclusion | Voluntary opt-in: power             | Involuntary lock-in: vulnerability  |
    | Exclusion | Involuntary lock-out: vulnerability | Voluntary opt-out: power            |
    

    The ā€œanalogue havesā€ have the power to decline participation in systems that use AI, for example. Her article focuses mainly on voluntary opt-out of detrimental systems, which she calls ā€œanalog privilege.ā€

    Paying particular attention to the use of AI in these systems, the article highlights how, in real-world use, LLMs have ā€œsignificant downsidesā€ for the people subjected to their error-prone (hallucinating), probabilistic guesstimates (stochastic parrots), which incorrectly assume the future mirrors the past.

    What do all trading adverts say? ā€œPast performance is no guarantee of future results.ā€ Why is this obvious statement reserved only for these systems and not AI?

    Chapter I discusses beneficial and detrimental AI systems, referring to how automation is about streamlining management and breaking down work into ordered, manageable items to be subsequently automated. It places AI on a scale from beneficial, where we are freed from repetitive, mindless tasks, to detrimental, where enforcement is supercharged, with adverse consequences for the ā€˜enforced’.

    Real-world examples are widely known nowadays, like the Amazon delivery drivers forced to keep empty bottles in their vans so they can urinate, because the algorithm doesn’t factor in human needs, driving a cycle of ever-faster deliveries to hit the targets set by a detrimental AI. This is an example of how a beneficial AI morphed into a detrimental one, and the article shows how automation can easily and quickly degenerate, with disastrous consequences.

    Noted in the article is that a lot of literature and discussion has focused on potential AI harms while missing who AI spares. The debate has centred on inclusion, fairness and guarantees of transparency, but has failed to treat the divide between the analogue haves and the analogue have-nots as an important issue.

    One example discussed is LegalTech, where a two-speed system is observed, one analogue for the rich and the other automated approximations of legal representation for the poor. Similarly, social media studies have shown the trade-offs of automated content moderation, where being flagged, struck off and even banned for life is reserved for the analogue have-nots, with no recourse, precisely because the analogue route (talking to someone) is tightly cut off for the likes of you and me.

    With the prevalence and accelerating deployment of these systems, the actual harms being caused are not the world-ending consequences plied by the AI grifters; they are the everyday acts of violence committed by an unfeeling algorithm that suffers no consequences for poor decision-making and possesses absolutely no compassion or empathy for the human subjected to it.

    A thread that comes up repeatedly in any serious analysis of these systems has been given a name: technosolutionism. Big Tech’s true sin is its arrogance and its belief that an algorithm can solve anything. As I discussed earlier, people are messy and illogical, and relying solely on algorithms is a recipe for disaster for those who find themselves in the ā€˜Involuntary lock-in: vulnerability’ quadrant of the table above. Tech used to work, and it used to solve problems very well, but only for a small set of very specific and well-defined problems. As tech has become ubiquitous, it is showing its limitations, but the tech leaders still believe that the next x.0 release of their wares will fix them. What those releases generally do is resolve old issues and create a whole host of new ones.

    The paper also highlights the obvious conclusions of digital colonialism, where analogue haves are de facto at the top of the pile, benefiting from the exploitation of those underneath. Akin to gated communities, the elite extract themselves from society, refusing to participate meaningfully and contributing nothing to the local community.

    The paper then gets into details about how and where LLMs fall short, with a particularly telling conversation with a chatbot:

    ā€œPrompt: Get your sofa onto the roof of your house, without using a pulley, ladder, a crane …

    Human response: I will build a large wooden ramp … on the side of my house with platforms every 5 feet

    GPT-3 response: Cut the bottom of the sofa so that it would fit through the window… break the windows to make room for the sofa.ā€

    Yeah, sure.

    And if you think I’m being a little melodramatic, take facial expression technology. It is based entirely on the already-discredited pseudoscience of physiognomy —I’ll spare you its racist and sexist origins— but that isn’t stopping companies from working hard to integrate it into products, from criminal investigations to dynamic pricing in supermarkets based on the face of the potential buyer. Such a disgraceful and immoral thing to do. And please keep in mind that these systems are known for making stuff up, so it could be you who falls victim, and you wouldn’t even know it! (See below in the reading section.)

    Maroussia LĆ©vesque has a far more eloquent discussion of this topic, and I would definitely suggest you read it. Although her case study focuses mainly on LegalTech, it isn’t overly technical in that domain and is very readable.

    You can find it here.


    It’s probably time I concluded this…

    Despite this being a largely critical look at tech in general and AI specifically, I do acknowledge that there are many potential benefits, but I would say that it is incumbent upon us to ensure that tech is used for good and for the benefit of everyone. That requires remaining vigilant to the grift, the abuses, and the apparent dog-whistling of products and services that will be used against us as soon as they possibly can.

    One area that seems to be taking hold is the use of federated systems, which wouldn’t necessarily solve the problems above but would contribute to better governance. I’ve discussed federated services previously, and I hope you start building out systems along those lines for beneficial use. The most widely used federated system in existence today is email. The millions of email servers on the internet all talk to each other, allowing efficient communication from user to user without requiring any user to be beholden to a specific company. Google tried to break email and centralise it for its own selfish use and future extraction. Thankfully, it didn’t work. The more such services that are built, the more chance we have of saving tech and the Internet from their current destructive path.
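    To make the email comparison concrete, here is a minimal sketch (assuming the third-party dnspython package, with example.org as a placeholder domain) of the lookup any mail server performs: public DNS MX records tell it where to deliver, with no central platform acting as gatekeeper.

    ```python
    # Minimal sketch of email federation: any server can discover any other
    # domain's mail servers through public DNS MX records, no gatekeeper required.
    # Assumes the third-party dnspython package (pip install dnspython).
    import dns.resolver

    def find_mail_servers(domain: str) -> list[str]:
        """Return a domain's mail exchangers, lowest preference (highest priority) first."""
        answers = dns.resolver.resolve(domain, "MX")
        records = sorted(answers, key=lambda r: r.preference)
        return [str(r.exchange).rstrip(".") for r in records]

    if __name__ == "__main__":
        # example.org is a placeholder; any domain you can email works the same way.
        print(find_mail_servers("example.org"))
    ```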


    Reading

    āž” ā€œZuckerberg Poisons the World — Selling Digital Opioidā€

    I don’t think the future of social media is bright in its current form. I think that bears out in Meta’s investments and panicked pivots, first towards the ā€œMetaverseā€ (how’s that going?) and now AI. Looking for the next grift is an endless task.

    āž” Automation in Retail Is Even Worse Than You Thought

    ā€œNew technology is not just making shopping more challenging for workers and consumers—it’s poised to rip off the most vulnerable.ā€


    Have a lovely week, and thanks for reading.

    → 28 January 2025, 08:00
  • šŸ“… January 13 - January 19 | All your Internets are belong to us

    TikTok, by Nick Anderson.

    A picture tells a thousand words.

    I didn’t set out to write this. I had planned on writing about the paper I recently read (Analog Privilege) on a trip, from which I came back with many notes in the margins of the printout. But events sometimes take over what you had thought about doing. What is it they say?

    Best-laid plans are laid to waste.

    Yeah.

    This started life in the ā€˜of note’ column, and as I started writing it, I figured it would be better in the main section. Like always. I have thoughts.


    I wanted to discuss my feelings about some of the major issues affecting the tech world and the Internet today. Yesterday (as of when this gets emailed) was inauguration day in the United States of America. The American people have chosen, democratically, a candidate that, for want of a better description, acts like a Mafia boss. This will have, and has already had, significant consequences for how the Internet will change over the coming years.

    More than ever, the Internet is central to the lives of billions of people, for better or worse. The incoming regime will test the institutions of the Internet more than at any time in history, starting with Section 230, which protects online platforms such as social media from sanction for what is published on or via their systems (wrongly so in some cases, in my view).

    It will challenge the underlying governance structure of the Internet, which is currently open to all (although many nations in the Global South would, rightly, dispute that), exacerbating the so-called Splinternet as ā€œun-Americanā€ products like routers, firewalls, switches and services linked to regimes this administration considers ā€œanti-Americanā€ —which by definition is anything not American— are targeted for sanctions, punitive tax structures, and even outright bans.

    This will inevitably lead to retaliation from those targeted. And despite the bluster and political propaganda from (generally) right-wing journals, the EU has not stopped its programme of reining in abuses by the Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs), as shown by the ongoing saga of its investigation into Google’s refusal to fact-check its search results, its recently stepped-up investigation into Twitter’s abuses of the DSA, and Meta’s wholesale abandonment of fact-checking and moderation.

    Note: I’ll use the term VLOPs from here on. Not because I agree 100% with the DSA, but because it is a convenient term compared with Big Tech (not specific enough), Social Media (Networks or Media? Active or Performative?) or other loose definitions.

    Side note here: You may have knee-jerk reactions to the DSA and the DMA. You should take the time to read what the acts are; here is a good start. You may not entirely agree with them, but you cannot argue that they are anti-consumer. In fact, this is the most significant difference in legislation and governance of the Internet between the three largest blocs in the world that have enormous sway in how the Internet is run. America is pro-business, to the point that it willingly throws its consumers under the bus; the EU is pro-consumer, to the point that it throws businesses under the bus; and China is pro-CCP, meaning that everyone gets thrown under the bus if they are out of line. This is a poor abbreviation of the much more detailed and nuanced arguments in Anu Bradford’s book, Digital Empires (available in bookshops and most online retailers).

    If the recent events around TikTok are any indication, they show us what many of the large platforms will do. Barely a few hours after TikTok went dark this weekend, Meta announced that it was developing, and would soon release, a blatant rip-off of CapCut, a much-used companion app to TikTok.

    What we see is an entirely political manoeuvre by Mark Zuckerberg to exploit a situation for his own gain, proving that he has virtually no morals, not that he had strong ones in the first place.1 His machinations will prop up a more nationalistic and corrupt government in his home country, thus legitimising the same in many other nations worldwide. I said as much before the Brexit referendum: that the nationalistic overtones of that consultation (which, incidentally, was not legally binding and was advisory only) would set in train a slow march to nationalism in Europe, as it would empower and legitimise would-be despots and autocrats to push harder than before. We’re seeing it in France, where the RN (formerly the Front National) made significant gains by laundering its image (it’s a lie, and France will regret it if they’re elected), in Italy with a populist right-winger who will shortly show her true colours, and again in Germany, where, clearly, much of the population has entirely lost any connection to its past. Plenty of ink will be spilt on this topic and those related to it. I’ll let you form your own opinions.

    The Internet is central to all of this, and it is the Internet that could help pull us out of this downward spiral, but that is not going to happen on the VLOPs, as they will control (and in the US already are controlling) what you can and cannot see. It is going to be done from the ground up, not by actively using the big platforms and staying passive while rights are being squashed. Under US law, the VLOPs are private businesses and thus can allow or disallow whomever they like. Just remember, they don’t have your interests at heart.

    A better way for the future of the Internet is more federation. Federation means that you don’t need to accept the bullshit of VLOPs, and you can easily move to a new instance without losing the connection to your friends, family and other professional connections. Currently, if you get banned on LinkedIn for an unfair or unreasonable reason, you can do nothing (see above about private business). You might even suffer professionally. With a federated alternative, you would be able to move all that data to a new instance without losing access.

    Even better, and I’d recommend this: you should set up your own website with a domain you control and publish your CV (minus personal data), pointing people to it when needed, similar to a LinkedIn link, but on a platform where you are in charge, and without the privacy-invading surveillance. You are then only subject to your country’s laws regarding what you cannot publish.

    The simple fact, and the one thing I would like you to take away from this, is that the Internet is changing and will not be the same as it was just a few years ago. It is changing for the worse, despite the efforts of the EU and others (however poorly you may think they are executed). And its current centralisation in the hands of overt extremists will be used against you at the earliest opportunity.


    Reading

    A few articles and things I’ve been reading.

    Lessons From Red Teaming 100 Generative AI Products

    Microsoft Red Teamed one hundred generative AI products. Their conclusion? ā€œLLMs amplify existing security risks and introduce new onesā€. To boot, they note that securing AI is an impossible task in that it will never be complete.

    New data set reveals 40,000 apps behind location tracking

    ā€œ380 million location data from 137 countries: a previously unknown data set from a US data broker shows the dangers of global data trading. 40,000 apps are affected ā€¦ā€

    How is this not 1984?

    Five things privacy experts know about AI

    1. AI models memorise their training data
    2. AI models then leak their training data
    3. Ad hoc protections don’t work
    4. Robust protections exist, though their mileage may vary
    5. The larger the model, the worse it gets

    That fifth point is directly related to last week’s article.

    CEO of AI Music Company Says People Don’t Like Making Music

    Filed under: What the fuck is wrong with these people?

    UK government plans to splurge billions on AI

    Rather than funding things that they can fix today. What could possibly go wrong?


    Of note

    See above.


    Thoroughly depressed and hoping for a better future. Have a great week.


    1. Zuckerberg has desperately tried to retroactively change this historical fact by claiming that it was a ā€œprankā€ website. He is a compulsive liar. Understand that. ↩︎

    → 21 January 2025, 07:57
  • šŸ“… January 06 - January 12 | Are bigger models better for AI?

    Good day everyone, and may I wish you a happy 2025.

    I’m back with the first proper article of 2025 after a well-deserved break from several activities. I kept working over the holiday period, but I took a few days off to be with family who had returned to the island for a short stay. I was very happy to take that time off.

    You’ll have noticed that I snuck in a quick note during that period. I had the idea to lay down some thoughts I’ve been having about the state of (tech) community in the West Indies. It was kindly re-published on the platform that I was talking about (CIVIC - I’ve since discussed it with the forum admin, who revealed the name in his post, so I’m fine with stating it here now. I thank Yacine Khelladi for that). It didn’t generate as much discussion as I was hoping, which proves my point sadly! One company did reach out, and I’ll be responding this week.

    Anyway, this one is about a paper I read in November —yeah, I live an exciting life— about an aspect of the AI hype cycle that is often misunderstood or forgotten. And, as a bonus, invokes Betteridge’s law.

    AI hype has done at least three things aside from putting the marketing term in the general public’s lexicon. One, it has hijacked discussions on more serious matters by installing itself, like a brain worm, in the minds of decision-makers: a conviction that AI will (should?) fix all the problems. ā€œGot healthcare problems in your country? AI’ll fix it. Just give us more money.ā€ Two, it has been allowed to drown out debate about whether or not more compute is going to solve the very real problems current generative AI has. The third, and most important for me, is that it has wholly obfuscated sensible discussion of what AI is. Almost anything that calculates is AI now, according to some. This is stupidly untrue, and the differences between generative AI (GenAI), Machine Learning (ML), deep learning (DL), and many other types of AI have been completely lost.

    We in the Caribbean have been largely shielded from this locally and only affected during the consumption of products and services from the US and other countries. However, that will change massively in the Caribbean throughout 2025. Everything is going to be AI.

    I’m here to ask you to watch out for this intellectual sleight of hand and question what is actually meant by ā€œAIā€ when a company is trying to push its wares on you. When your local electricity company says you need to replace your meter with one that has AI, what does that actually mean? In that instance, AI has merely replaced the previous marketing buzzword of the moment in the instrumentation field: Smart. Smart Meters, Smart Controls, etc. It was never ā€œsmartā€ and won’t be ā€œAIā€ either.


    The paper reviewed here is from Gaƫl Varoquaux, Alexandra Sasha Luccioni, and Meredith Whittaker. It is called Hype, Sustainability, and the Price of the Bigger-is-Better Paradigm in AI. You can find a pdf here: Hype, Sustainability, and the Price of the Bigger-is-Better Paradigm in AI.

    It discusses the attention on, and growing investment in, Large Language Models (LLMs) and the narrative that bigger is better; that is, that more compute and more data are the key to better AI. The paper challenges this view by asking a couple of simple questions: What is this assumption based on? What are the collateral consequences of it?

    The paper looks at one of the most influential studies about AI scale and performance (AlexNet)1, which discussed this:

    ā€œall of our experiments suggest that our results can be improved simply by waiting for faster GPUs and bigger datasets to become available.ā€

    This set the tone for other studies that, already mindful of this one, reached similar conclusions, and it has hardened into an assumption in much the same way that Moore’s Law anchored itself in discussions of processor performance from the moment it was first posited. Like Moore’s Law, the reality is much more nuanced and complex. For example, modern processors are actually many processors on a silicon die, more akin to a ā€œsystem on a chipā€ than a simple processor. It could be argued that individual processors hit a limit quite some time ago and that a workaround, and a redefinition of what a ā€œprocessorā€ is, was needed to keep the dream alive.

    So it is with LLMs. They follow a law of diminishing returns rather than a ā€œdoubling of transistors every 18 months (and thus performance)ā€. After a certain point, saturation is reached on many of the tasks performed by LLMs and other AI models. Several studies have shown this, as described in the paper.

    This is precisely why we’re starting to see models that can be run on personal computers, with one of the most influential companies in the AI hardware space, Nvidia, announcing its own AI workstations, marketed to individuals and institutions that have hit a wall with cloud-based AI solutions that are both costly and often less secure.

    Not only has performance plateaued in many circumstances, but the demands on resources have increased exponentially for little gain. Yet it appears that many applications do not need scale to be efficient and useful. In a medical imaging test on ā€œorgan segmentationā€ (an ML application), models plateaued in performance at around 1 GB in size, despite the medical images themselves often being much larger, and performance tanked as the models got bigger and bigger. Other applications seem to corroborate this too. In computer vision, performance rises quickly and then falls away as models pass an ā€˜optimal’ size. This has also been shown to be the case for LLMs: one test showed that LLM performance started to decline from around 100 GB in size.
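    To visualise the shape of that argument, here is a toy sketch with entirely made-up numbers (not data from the paper): a performance curve that rises quickly, flattens, and then dips slightly past an optimal size, while the relative cost keeps climbing with scale.

    ```python
    # Toy illustration of diminishing returns: accuracy saturates (and eventually
    # dips) as model size grows, while relative cost keeps rising linearly.
    # The numbers are invented purely to show the shape described in the paper.
    import numpy as np

    sizes_gb = np.array([0.1, 0.5, 1, 2, 4, 8, 16, 32, 64, 128])
    accuracy = 0.90 - 0.25 * np.exp(-sizes_gb / 1.5) - 0.0005 * sizes_gb
    relative_cost = sizes_gb / sizes_gb[0]

    for size, acc, cost in zip(sizes_gb, accuracy, relative_cost):
        print(f"{size:6.1f} GB  accuracy ~ {acc:.3f}  relative cost ~ {cost:7.1f}x")
    ```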

    Seemingly, a conclusion is being formulated by a number of researchers that smaller, more focused models are better, for accuracy and performance reasons, but also because of another factor that I hinted at in the introduction.

    Some assume that feeding the machine with more will automatically render the results of that machine ā€œbetterā€. We have seen in a couple of examples that that assumption is simply not true in many circumstances. But that assumption also ignores another simple constraint: resources are not infinite.

    Compute is constrained by physics and by the capacity to manufacture reliably and sustainably. Energy is constrained by multiple factors, such as production, delivery and cost, to name a few. Code itself is constrained by developers’ capability, productivity and the very real issue of time. Have a read of The Mythical Man-Month, the book that gave us what is known as Brooks’s Law.

    Its central theme is that adding manpower to a software project that is behind schedule delays it even longer.

    Another aspect often ignored or glossed over is the cost associated with the alleged performance gains. The compute required to create and deploy AI models grows faster than compute costs decrease. So, as accessibility increases to ever more of the population, there is wishful thinking that efficiency improvements will solve this. However, the paper points out an economic effect called Jevons Paradox.

    It is a well-known phenomenon in economics that when the efficiency of a general-use technology increases, the falling costs leads to an increase in demand, resulting in an overall increase in resource usage.
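    A back-of-the-envelope illustration of that effect, with numbers I have made up purely for the arithmetic: halve the energy per query, and if the cheaper queries triple demand, total consumption still rises.

    ```python
    # Jevons Paradox, toy numbers only: efficiency doubles, demand more than doubles,
    # so total resource use goes up, not down.
    energy_per_query_wh = 3.0       # hypothetical Wh per AI query, before the gain
    queries_per_day = 100_000_000   # hypothetical demand, before the gain

    before_mwh = energy_per_query_wh * queries_per_day / 1e6

    # Efficiency improves 2x, but falling costs drive 3x the usage.
    after_mwh = (energy_per_query_wh / 2) * (queries_per_day * 3) / 1e6

    print(f"before: {before_mwh:,.0f} MWh/day, after: {after_mwh:,.0f} MWh/day")
    # before: 300 MWh/day, after: 450 MWh/day
    ```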

    The paper discusses the environmental effects of CO2 emissions, concluding that the cost of a single AI inference is growing faster than compute is improving. And given that companies are scrambling to add ā€œAIā€ to everything, something that was painfully on display at this year’s CES in Las Vegas last week, this could increase the carbon footprint of AI use by an order of magnitude and thus contribute even further to the collapse of the climate. To understand this, it is important to understand that inference is AI’s biggest compute cost centre. Google ā€œattributes 60% of its AI-related energy use to inferenceā€. Other studies have shown that, with a few million daily users of OpenAI’s service, the energy used for inference outweighed that used for training within a few weeks. It is no wonder most big tech firms have gone quiet on their sustainability targets, with companies like Microsoft announcing that they would ā€œmissā€ them. Miss is an understatement!
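    The ā€œwithin a few weeksā€ claim is easy to sanity-check with a rough sketch. None of these figures come from the paper; they are assumptions chosen only to show how quickly a recurring inference bill overtakes a one-off training bill.

    ```python
    # Hypothetical break-even between one-off training energy and daily inference energy.
    training_energy_mwh = 1_300        # assumed one-off training cost, MWh
    energy_per_query_wh = 3.0          # assumed energy per inference request, Wh
    daily_users = 5_000_000
    queries_per_user_per_day = 5

    daily_inference_mwh = energy_per_query_wh * daily_users * queries_per_user_per_day / 1e6
    days_to_match_training = training_energy_mwh / daily_inference_mwh

    print(f"inference: {daily_inference_mwh:.0f} MWh/day, "
          f"overtakes training after ~{days_to_match_training:.0f} days")
    # roughly two and a half weeks with these assumptions
    ```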

    The paper also discusses quality, which I’ll let you read for yourself; it, too, is pretty much universally ignored by the marketing materials and by the salesmen hyping up the ā€œbenefitsā€ of AI, who have their interests at heart, not ours. Suffice it to say that bigger models produce more errors, at the cost of compute, energy, CO2, etc.

    Lastly, the paper discusses another angle often lost in the discussion about AI. Scale, i.e., bigger-is-better, is really a means to build a moat around these businesses. GPT-type models are, for the most part, pretty simple, and there are enough examples and open-source projects that it is straightforward to build one yourself —given you possess the right technical skills. In other words, you cannot patent them and protect them using IP laws. So, how do you make it harder for anyone else to enter the game? Have you seen the cost of setting up and running a data centre? That should tell you all you need to know. To give you an idea, Nvidia’s H100 GPU costs about $40,000, and you need a lot of them. Meta is estimated to have spent $18 billion on GPUs in 2024 alone. This essentially eliminates all but the biggest of budgets, thus protecting the established players, not to mention circular investment deals like those between Microsoft and OpenAI.
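    The two figures quoted above are enough for a quick sense of the scale of that moat:

    ```python
    # Rough arithmetic using the figures quoted above (both are estimates).
    h100_unit_cost_usd = 40_000
    meta_gpu_spend_2024_usd = 18_000_000_000

    gpus = meta_gpu_spend_2024_usd / h100_unit_cost_usd
    print(f"~{gpus:,.0f} GPUs")   # ~450,000 GPUs, before power, cooling and networking
    ```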

    The broader issue with that is that scientific investigation and innovation can only be done with the blessing of these companies, giving them unilateral control to approve, deny or stop research if a project doesn’t align with their views. Given the recent far-right shift in the politics of Meta, I’m sure this will end well. Nvidia’s aforementioned AI PC is a tiny step away from this, but it is not enough.

    As Caribbean residents, we are perpetually exposed to the ravages of the climate. I seriously question whether these tools are useful enough to warrant their ubiquitous usage in the region. I’ll leave that up to you and your conscience to determine.

    Go and read the paper; it is just shy of 10 pages long, and the language is not so technical that non-techs can’t understand it.


    Reading

    Here is a quick summary of articles I’ve read recently. These are not endorsements of their content. I sometimes vehemently disagree with their premise but feel it is important to read as much of a variety of views as possible.

    Mark Zuckerberg’s commitment to free speech is as deep as Exxon’s commitment to clean energy

    Nuff said.

    IGF 2024 in Riyadh: AI, WSIS+20 and the Global South

    A roundup from CircleID on the IGF in Riyadh.

    ā€œThe Caribbean is a microcosm of Big Tech’s digital colonialism. Small and medium-sized emerging countries are profitable to exploitā€

    Consulting firm Strand Consult discusses what we all know already. Where are they from again?


    Thanks for reading. Please share with anyone you think might like to read. Have a great week.


    1. Krizhevsky, Sutskever and Hinton, ā€œImageNet Classification with Deep Convolutional Neural Networksā€ (2012), the AlexNet paper. ↩︎

    → 13 January 2025, 20:35
  • šŸ—žļø A little therapy and an announcement

    Good morning all, from a sunny Martinique this morning.

    I’ve been going over some old writing as a sort of therapeutic exercise or as an exercise in self-harm. I haven’t worked out which it is yet!

    Anyway, I came across something I wrote two years ago, almost exactly to the day. It is a half-baked manifesto for building community in the Caribbean around tech and society.

    My participation in a fairly moribund forum on tech in the Caribbean motivated it. I will not name it, as it serves no purpose other than to highlight what actually happens when a forum like this exists. The admin does a stellar job and tries to keep it going, regularly posting links and articles about tech in the Caribbean. I and several other stakeholders met to discuss how it could continue, flourish, and develop. The meeting was sparsely attended, and unfortunately, no follow-up ensued.

    It frustrated me because I saw a Caribbean with a lot of potential and abundant talent. I see it because I’m embedded in it. Anyone not in the sector or the region, however, cannot and does not see it, which contributes to the false impressions of the region and stifles the potential before it gets off the ground (internally and externally). Many institutions and businesses in the Caribbean don’t put enough faith in their fellow companies, consultants and innovators. How, then, do we expect companies outside the Caribbean to put faith in Caribbean ones?

    This was a sort of ā€œget some ideas down on paperā€ rather than a thought-out manifesto, and it addressed a specific case. I wanted to post it to the forum in question, but didn’t in the end. Perhaps I should have!

    But I still stand by what I wrote.

    As always, I’m open to discussion/criticism/correction.


    (Reproduced here, as is. Un-edited)

    Good day to everyone.

    From the outset, my apologies in advance for the brusque language. I don’t wish to offend.

    I’m a reasonably recent lurker on this forum. Some of you may have heard of me, some not. I’ll spare you the CV —please connect on LinkedIn. I’ll be only too glad— but I hope I can contribute to the discussion.

    Firstly, as noted above, I am as guilty as anyone. I don’t contribute enough.

    We have to address the elephant in the room. This forum is essentially a roundup of news and a few discussions that peter out fairly quickly. Why?

    Most discussion is offline, direct, WhatsApp, and P2P —therefore hidden and lost to the community. We must foster, encourage and change how we all communicate on topics related to tech and the ecosystem in and around the region.

    An in-person event is all well and good (and I think it should be done). But it’s useless if there’s no follow-up, no follow on, no ongoing discussion, idea-sharing, etc. However, don’t forget that getting the meatware (humans) around the Caribbean is a big and expensive problem. I recently spoke to someone who had come to the FWI from Jamaica - Two days there. Two days back. šŸ¤¦ā€ā™‚ļø

    Regarding: online discussions, if I look at the online groups I participate in, discords, slacks, and forums, there is healthy traffic and real (deep) debate on topics. If anyone asks a question, there are several responses. If anyone doesn’t understand something, many people are willing to help out. Plus, the other ā€œoff-topicā€ discussions about food, games, etc. etc. … pretty much a real community.

    We all seem so frightened of losing business by talking out in the open and trying to work together on projects. I’ve seen it so many times firsthand. I’ve tried to get people working together, only to fail because of a lack of trust. So as a result, we don’t work together.

    I’m not saying it never happens, but there needs to be more of it.

    I would like to see a Caribbean working hand-in-hand with healthy ā€œcoopetitionā€. Take a look at the economic model in Prato, Italy. The Prato model decentralises and specialises the processes (recycling) involved across many small companies, each expert in a specific domain. We have too many ā€œgeneralistsā€ and not enough specialists working together. We have too many individual consultants (me included). We have too many MSMEs.

    95% of businesses in LATAM are MSMEs (counting for 65% of all employment with a GDP contribution of … 25%. JFC, this has to change).

    Just look at the way RFPs go. To outside companies or consulting firms with a PO Box in Trinidad. FFS! We can’t objectively compete as it stands.

    We have to build it. No one will do it for us.

    It would be awfully remiss of me to post this without humbly offering some suggestions:

    • A better discussion platform. The current one is no longer fit for purpose, I believe. There are many options - Discord, Slack, Teams, Mastodon Instance, etc. I’ll make no judgement.
    • More discussion will slowly build trust for a wider population. Perhaps even creating genuine partnerships/agreements to work together on larger and larger projects.
    • A social media manager that can promote and animate topics and discussions to the broader region and diaspora. (See my last point too).
    • Building on the above, a podcast? A newsletter? What can we do to build interest and contribution?
    • An annual (in-person?) event - targeted to the members’ needs (survey-based feedback would help clarify those) with a virtual attendance component for those of us in transport purgatory.
    • A monthly/quarterly virtual meeting designed to keep/build links - on specific topics of the moment.
    • Incentives - what brought you here? What do you get out of it? What can it bring for you and your business? People will only contribute if they get something out of it. That is not the same thing for each person/business. Some it’ll be recognition, others access to trade.
    • We shouldn’t exclude other sectors. We need them in the way they need us. Tech is no longer ā€œcomputers and ICTā€. It touches EVERYTHING. Attracting those that are not from our background is key to us having a better understanding of the opportunities. Let’s not copy the Tech Bros of Silicon Valley. Their arrogance and belief that code can solve the world’s ills is bullshit and always has been. We’re better than that and have more skin in the game.

    Feel free to critique. With pleasure. I’m open to discussion.

    I don’t profess to know everything or have all the answers.

    I wanted to share my feelings, impressions, and thoughts. Nothing more, nothing less.

    To the community, I’ll make myself available to help out where and how I can.

    Have a great day, and I look forward to building alongside you all.

    Happy New Year, 2023


    Updates on my online life

    I regularly participate in discussions on two well-known podcasts, the ICT-Pulse Podcast (English) and Innovation, AgilitƩ & Excellence (French).

    I do a piss-poor job self-promoting this work, so I thought I would do something to rectify that this year and also promote their work and thank them for the effort they put in regularly to bring interesting topics to us all.

    To start, here are the last couple of episodes that I featured:

    ICTP 330: ONLINE ADVERTISING, LEARNING AND COLLABORATION AND DIGICEL’S LATEST DIGITAL TRANSFORMATION REPORT

    ƉPISODE 187: PROMESSES ET RƉALITƉS DE L’IA AVEC MATTHEW COWEN

    NB: If you don’t speak French, there’s a really simple trick to get the text in English. Use a Whisper model, like MacWhisper, to output a transcript. Then, use your favourite translator (even LLMs will do a reasonable job), and you’ll have the discussion.
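    If you prefer the command line to MacWhisper, the open-source openai-whisper package does the same job. Here is a minimal sketch (the filename is a placeholder), noting that Whisper can even translate the speech into English in a single pass.

    ```python
    # Transcribe a French-language episode and translate it to English with Whisper.
    # Assumes: pip install openai-whisper, plus ffmpeg; "episode_187.mp3" is a placeholder.
    import whisper

    model = whisper.load_model("small")

    # Plain French transcript:
    french = model.transcribe("episode_187.mp3", language="fr")

    # Or ask Whisper to translate directly into English:
    english = model.transcribe("episode_187.mp3", task="translate")

    print(english["text"][:500])
    ```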


    Announcing the Caribbean Digital Compass

    This year will see the launch of a project I’ve been trying to get off the ground for a while. I’ll be working closely with Michele Marius (ICT Pulse); we have already started putting things in place. It will start slowly but surely and evolve based on your feedback.

    It’s called the Caribbean Digital Compass. It will specialise in regularly discussing two to three topics (released as and when written, i.e., no specific timetable). It will give you the background to understand the topic, provide context for the Caribbean, and provide opinion and analysis where relevant.

    We plan to write frequently but concisely, keeping the word count reasonable. We also want to record some podcasts that discuss the thinking and research behind the articles, but I suspect that will come later.

    Anyway, I hope you come along with us for this journey, and you can sign up here at caribbeandigitalcompass.com/subscribe/

    The site is currently a bit rudimentary but will evolve over the coming months. For now, all articles will be published there, and subscribers will receive an email as soon as an article is published.

    Just one more heads up. We intend to make this a paid product in the future, but we’ll see how it goes. I wanted to set your expectations. If we go paid, there’ll be a free component, so you can still support us. The cost to subscribe will be small, as it is intended to be affordable for as much of the business population in the Caribbean as possible, who, as we know, have very different earnings from island to island!


    Happy New Year, 2025

    → 6 January 2025, 15:53
  • šŸ“… December 16 - December 22 | A short(-ish) review and thoughts for next year

    If you know me well, you know that this is one of my least favourite times of the year. It’s complicated, so I’ll spare you the details. But it does seem to be tradition for blogs and newsletters to do a retrospective of the soon-to-be-over year. It’s possibly something inherited from the traditional media industry. And whilst I’m not really a fan of this tradition —you can call me a miserable git for detesting the Spotify-like ā€œwrappedā€ emails that have been invading and polluting my inbox and general use of apps. Please stop!— I thought it might be useful for me to look over some of the stuff I’ve been doing here this year. You don’t have to read this; please just click Mark as Read if it’s not your thing. I won’t judge.

    I might take a break next week, so if you don’t see a new email next week, you’ll know why.

    In the meantime, have a good holiday.


    I started the new year with a new site (this one) with the intention of bringing all my writing and online presence into one place. The Substack ā€œNazi Barā€ problem precipitated the move, and I needed to move off the bigger platforms onto something more independent and open that also had values around the open web.

    I landed on micro.blog, a small company providing a simplified and highly customisable platform for blogging. It includes newsletter capabilities (albeit rudimentary) and embraces federated technologies, letting authors publish to other platforms. It has its own community and provides a social-like experience on a smaller and more manageable scale than standard social media (the Facebooks and so on).

    Since then, I have posted exactly 60 articles, averaging one article a week in newsletter form and a couple of other posts about specific topics. I have been able to keep up a stable rhythm of writing and posting. I generally write starting on Sunday evening and finishing the article on Monday. Editing and proofreading are done on Monday evening so that I can post in the evening or early morning. I have been pretty consistent all year, although lately, I’ve been posting on Tuesday morning.

    This post will be the 61st of the year, pushing the total number of words to over 50,000. Depending on who you ask, that is about the length of a book. It is on the lower end of the word count but book-length nonetheless.

    In July, I started working with a small IT company on a part-time basis, and I was worried that it would make it difficult for me to keep writing these articles. In the end, it worked out rather well, as it gave me a certain structure conducive to reading, researching and writing a lot. I’m grateful to them for my employment. I’ve been continuing small contracts with other businesses too, so the variety has been interesting and has helped me keep my feet on the ground regarding what small businesses are actually living through, rather than the lofty ideals the media and tech companies often project. I can tell you those companies have literally no idea what is happening or what to do, and are just happy to keep the money flowing through subscriptions, up-selling and cross-selling. The day a big firm really understands SMEs and does something for them, they’ll clean up.

    This has been, loosely, the theme of this year’s writing: the disconnect and the disconnection of tech and society. A new word entered the dictionary, enshittification, to describe how businesses like Meta, Google, and plenty of others I have written about here have all turned a corner and are now actively making our online lives as hostile an environment as possible to maximise their profits.

    I won’t re-litigate that here again; you can read a few of the posts I wrote here:

    https://matthewcowen.org/archive/

    I think the most significant topic I have concentrated on this year is Internet Governance.

    I stood in the ARIN Advisory Council election and, despite not getting elected, I didn’t disgrace myself. I look forward to being on the ballot again and continuing my involvement with ARIN. I hope we can do something with the French West Indies this year. I’d had initial talks, but time ran out. I’ll keep you posted.

    I will continue to concentrate on this topic, and I’ll be looking to see if there are any employment opportunities in this space for someone like me who has many years of experience with the Internet and associated technologies and likes researching and writing about it. Please reach out if you know of anything.

    I wrote a little about LLMs and had to admit defeat on the terminology. To re-iterate, LLMs (Large Language Models) such as ChatGPT are not Artificial Intelligence. Statistical next-word predictors are all they are. Impressive sometimes, yes, but absolutely useless most of the time. Unfortunately, that’s the conclusion many are coming to now that the rubber has hit the road. I’ll remind you of my experience in the Microsoft Copilot training, where the trainer used Copilot with a prompt that didn’t produce what was expected and then took a bunch of time to coax it into working, arguably spending more time than it would have taken to just do the damn thing manually!
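    To make the ā€œstatistical next-word predictorā€ point concrete, here is a deliberately tiny sketch: a bigram model that picks the next word according to how often it followed the previous one. Real LLMs use enormous neural networks and long contexts, but the underlying objective, predicting a plausible continuation, is the same.

    ```python
    # A toy next-word predictor: count which word follows which, then sample.
    from collections import Counter, defaultdict
    import random

    corpus = ("the model predicts the next word . "
              "the model does not understand the word . "
              "the next word is just a guess .").split()

    following = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        following[current][nxt] += 1          # bigram counts

    def predict(word: str) -> str:
        """Sample the next word in proportion to how often it followed `word`."""
        options = following[word]
        return random.choices(list(options), weights=list(options.values()))[0]

    word = "the"
    for _ in range(8):
        print(word, end=" ")
        word = predict(word)
    print()
    ```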

    This year, I’ve tried to take a more pragmatic approach to discussing tech, Digital Transformation and the topics that seem to attract more smoke-and-mirrors discussion than real-world, practical analysis. I don’t think I’ve done an amazing job of that yet, but I think I’m definitely better at it than I was, and probably better than a number of prominent tech journalists. To clarify, I’m not suggesting I’m a better journalist. I’m suggesting that I’m perhaps a little better at critical thinking about tech.

    I hope to continue writing like this. I enjoy it, even if it is painful and cringe-inducing at times. I hope you’ve enjoyed reading it.

    My ambitions for 2025 are to be more structured in my approach to articles. Next year, I plan to explore a topic in more detail, so maybe I should write a series of articles. Let me have a few glasses of Rhum over the holidays, and I’ll come back refreshed and ready to continue.

    Thank you for reading, and I’ll be back in the new year.


    Reading

    My reading has only increased this year. I suspect next year will be no different. I have a ton on the backlog to read, and more is coming out daily.

    Artificial Intelligence Isn’t Actually That Amazing - The reality can’t match the hype.

    I think the title says it all. I’ll leave you to discover the short article. Just bear in mind that this is from an economist.

    Dark Patterns in Cookie Banners: CNIL issues formal notice to website publishers

    You may or may not like the cookie consent forms (which, by the way, exist because of the ad tech industry’s perverse interpretations of the requirements of EU regulation). Still, they try to move some control to you, the reader, rather than the ad tech industry. Well, that industry is unhappy, not content with the already vast troves of minutiae it holds about you and your life. They’re also sneaky bastards, trying to trick you into clicking where you don’t want to.

    The ā€œSovereign Democratic Infrastructureā€ Hyperscalers Trick. Why We Shouldn’t Fall for It, and What We Should Do Instead

    This is a very considered discussion of technology, sovereignty, data centres, and control. I’m not sure I agree with every point made, but I’m glad I read it.

    European Union Member States Speak Up for Encryption

    Noticeably absent… France, Spain, Belgium, Italy and others that should know better.


    Enjoy your break if you have one, and have a great week.

    → 23 December 2024, 20:08