Matthew Cowen
  • Computational

    In this essay, I explore the meaning and consequences of Computational Computing. It’s a deep reflection on the state of tech currently and a work in progress. Some of it might not be coherent or have a conclusion, but it is all how my mind is thinking about the world today.

    I have to say, overall, I’m more optimistic than I am pessimistic. Not that you could tell from this piece. 😁

    What do you think? Hit reply or post in the comments.



    We’re entering the era of computational everything.

    For many of us in the IT industry or those of us who are closely related to tech, it comes as little surprise. But what is surprising is just how much “computational” is enhancing and augmenting our capacity as humans.

    Steve Jobs famously said that the computer was a bicycle for the mind. He cited a study in Scientific American that ranked species by locomotion efficiency (distance travelled per calorie consumed): the condor came out on top and humans well below average, but a human on a bicycle outclassed the condor entirely. This is exactly what computational-everything is like.

    Much of what we do today, and have been doing over the last couple of decades or so with computers, has largely relied on skill. Your raw talent or your tenacity has generally determined whether you became highly skilled or an expert in your chosen domain: Malcolm Gladwell's 10,000-hour principle.

    But computational doesn’t mean computing. Let’s have a look at what I mean by computational.

    Computational beginnings

    Before computers became the ubiquitous tools we all interact with today, much work was completed manually. Simple tools developed over hundreds, if not thousands, of years enhanced calculations and reduced errors. From the abacus to the slide rule, these basic augmentation systems had a profound impact on humans' ability to complete ever more complex tasks.

    Moving forward, the first computers in the modern sense of the definition, like ENIAC (Electronic Numerical Integrator and Computer) or the CNC (Complex Number Calculator)1, paved the way for people and businesses to exploit fast, powerful algorithms, producing results more quickly and with fewer errors than teams of humans. Business ledgers rapidly took advantage of the new possibilities and have essentially stayed the same since. Accounting applications today, despite being much faster, are ostensibly no different from the early versions, save a few bells and whistles.

    Modern computers, or more precisely modern CPUs, have evolved to such an extent that an entirely new way of computing is now possible. Hence the term Computational.

    And once again, ARM is at the forefront of this revolution. Apple's in-house M1 processor, based on ARM designs, recently revealed and now used across its line of Mac computers as well as the new iPad Pro, features a 16-core Neural Engine: the dedicated hardware for Machine Learning. The computational advantages, the bicycle for the mind if you like, are startling, allowing developers and scientists to imagine uses beyond what used to be possible.

    We're starting to see applications gain some of that capability, in mundane productivity software like Microsoft Word and in writing augmentation tools like Grammarly. Translation tools are only just starting to show their promise, and tools like DeepL produce text so good that only the pickiest native speakers can spot errors… in multiple languages too!

    The bicycle for writing, the bicycle for… just about anything we do on computers, is upon us.

    Computational photography

    A few years ago, this new era of computational photography kicked off with the introduction of applications that could suggest improvements (and more) for your photos: apps designed to turn your meh photos into something presentable.

    Portrait Mode and Studio Lighting, computational photography aids, let us take photos of subjects in ways that were previously prohibitively expensive. And computational photography has only just started to scratch the surface of what will be possible. Take framing, for example. Framing is an integral photographic skill; learning how to frame an image is often the difference between a good photo and a great one.

    New devices and apps will soon tell you how to frame the image. The phone will analyse in real time the image hitting the sensor, calculating in a split second that the tree on the left would look better a little further from the edge, or that the subject's face would look better turned slightly to the right and aligned on the right-hand thirds line, prompting you to ask the subject to turn.
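    As a back-of-envelope illustration of the arithmetic involved, here is a minimal Python sketch that nudges a detected subject onto the nearest rule-of-thirds intersection. The function name and coordinates are hypothetical; a real app would get the subject position from on-device ML face detection:

    ```python
    def thirds_offset(width, height, subject_x, subject_y):
        """Return the (dx, dy) nudge that moves a subject onto the
        nearest rule-of-thirds intersection of a width x height frame."""
        xs = (width / 3, 2 * width / 3)    # vertical thirds lines
        ys = (height / 3, 2 * height / 3)  # horizontal thirds lines
        # Pick the closest thirds line on each axis.
        target_x = min(xs, key=lambda x: abs(x - subject_x))
        target_y = min(ys, key=lambda y: abs(y - subject_y))
        return target_x - subject_x, target_y - subject_y

    # A face detected dead-centre in a 3000x2000 frame:
    dx, dy = thirds_offset(3000, 2000, 1500, 1000)
    # dx = -500.0, dy ≈ -333.3: nudge the framing left and up
    ```

    The real-time version of this runs continuously against the sensor feed, which is exactly the kind of workload a Neural Engine makes cheap.
    
    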

    That's not all. Apps will post-process previously taken photos with the eye of a seasoned expert, not only extracting the optimal image quality from easily adjusted metrics such as exposure and temperature, but also using Machine Learning to digitally reconstruct otherwise lost parts of the scene (Pixelmator already does this to some extent). They will reframe the photo and suggest, or automatically apply, a degree of retouching that only the most trained and experienced could manage in hours of work, in a fraction of a second. The edit steps will all be saved, so that any of them can later be removed or modified manually.

    Computational audio

    Just this week, Apple introduced Spatial Audio to its music streaming service Apple Music. Spatial audio enhances the separation between voices and instruments to provide a more “inside the music” feel to your listening. For now, these tracks are limited to a few thousand in the catalogue, with more on the way soon. But in the not too distant future, computational audio will be capable of real-time post-processing to enhance the audio you feed it.

    Suddenly, old tracks, or tracks whose master tapes were recorded in mono, will gain stereo or spatial audio simply on request.

    Currently, Spatial Audio is a bit hit-and-miss, as it requires a good mix from a good producer. Not all are that good. Soon, computational audio for music tracks will auto-analyse and re-mix music to sound better with little human intervention.

    Other audio devices, hearing aids or headphones, will all gain capabilities that augment the experience. Purists may baulk at this, possibly with reason, but it is an inevitability that will be widely welcomed by those who are just looking for a decent sound. Those with hearing issues will rejoice.

    The computational audio applications don't stop at playback; recording may be fundamentally affected too. Microsoft and Apple both ship microphone processing that automagically enhances voice audio for online meetings. The pandemic may have accelerated this, but it has also provided the perfect testbed for even tighter integration of CPU and microphone, making for a better and simpler recording experience.

    Anyone who has run a podcast will tell you how the microphone setup is important for the sound quality of the recording and hence the quality of the show.

    Apple's WWDC keynote featured just such an application with Voice Isolation mode: a leaf blower running just behind the video caller was completely silenced by computational audio. It is a "coming soon" feature that can only get better as processors and programming experience improve. Conversely, there is also a Wide Spectrum mode for when you want all the ambient sound included.

    Computational everything

    And this is where this essay is leading. Too much time, effort and ink spilt on Artificial Intelligence and how the robots are going to take over the world does us no good. In some sense, yes, they are, of course. But that doesn't automatically mean the world will be ruled and driven entirely by them. What it means is rather more subtle, possibly more like the framework I outlined above. With computational everything, much of our lives can be enhanced and augmented: the bicycle for…, if you wish.

    Autonomous cars are still a long way off, but computational driving is already here and will get better. Lane Assist, Park Assist and the various cruise-control augmentations are nothing if not computational driving.

    Even the most mundane of products can be augmented by liberal CPU usage. Your washing machine, for example, could be designed to reduce the water and detergent required simply by applying ML to the load in the drum. By analysing (chemically) the incoming water, the type of detergent, and the clothes and their makeup, cotton or otherwise, this bicycle for washing machines would contribute substantially to more efficient use of scarce resources and perhaps even better treatment of the rejected "dirty" water.

    Without writing a listicle, I'm sure you can see a plethora of bicycle-for-the-… applications yourself.

    Computational evil

    It would be remiss of me not to discuss the negative side of this coming wave. As in all technology, not only does it enhance those with good intentions, but also those with bad intentions. And those with evil intentions get to ride on the bicycle too.

    Augmented, or enhanced, everything means exactly that. Enhanced crime. Enhanced Racism. Enhanced Sexism, etc. You cannot have the good without the bad.

    The Internet has shown us this.

    I've been an Internet fanboy for as long as I can remember, or at least since I learnt there was a big network of computers that we could all use, connecting us closer together.

    It was fun in the 90s. I had to connect from the university network, JANET, to NIST (the National Institute of Standards and Technology), then out to the big bad world of what was still essentially ARPANET.

    The World Wide Web was still experimental at CERN; I hadn't even heard of it at that point. We used the Internet like animals: terminal commands, long waits, FTP, Gopher, WAIS and Usenet, none of the graphically oriented interfaces we see today. I fell in love with the Internet, and when I stumbled upon an early copy of Wired magazine in 1993, imported into my native UK, I fully bought into the idea that the Internet was nothing but good for the world.

    It would connect us, it would open our eyes to other things, would educate us, and it would even feed us. It would completely revolutionise the way the world works, for the better. It was the bicycle for the social experience.

    I had no idea at the time that the very connectedness of the world would actually drive us more and more apart.

    We, as humans, can comfortably ingest, process and analyse only a few cognitively demanding elements simultaneously. At school, you would have only a handful of friends, and only one or two you could call a best friend. If, like me, you were in a large secondary school of around a thousand pupils, it was overwhelming to be in a general assembly with the whole school in one room. The morning procession of pupils arriving at school, break time with the crowd piling out of the building to run around the playground: all these people are too many for one human to get to know, either intimately or even on a cursory level.

    The Internet completely explodes that model, and we are confronted with tens of thousands, if not millions, of possible interactions whilst connected. Twitter, Facebook, Clubhouse, and their indifference to their own capacity to overwhelm us, are creating a different type of human culture that is, in my view, detrimental to the world. Polarisation, populism, the immediacy of need: these are direct results of Computational Socialising, and not conditions propitious to the healthy development of the world.

    I wish I had understood this when I was first charmed by the Internet. Perhaps I could have contributed something to protect us from its inevitable negative consequences. Funnily enough, it was all there for us to see even then. Read Neuromancer, and you'll understand what I'm talking about.

    It’s a dystopian view of the Computational Everything world I’m discussing in this essay.


    I hope you liked the topic. Let me know.

    Photo by Adi Goldstein on Unsplash

    1

    Interestingly, the CNC was also the platform for the first-ever live demonstration of remote computing. In 1940, a demonstration at Dartmouth connected to the CNC from a terminal over phone lines. Sound familiar?

    → 3:28 PM, Jun 10
  • Thoughts on Clubhouse. Innovative Disruption in the Caribbean 

    I got on Clubhouse so you don't have to

    I recently got an invitation to Clubhouse from a friend. Thanks, you know who you are 🙏 I have some ...er, thoughts. Read on for those.

    I have some good news and some bad news. The bad news first as is customary. There’s no podcast version of this essay yet. It’s actually a very time-consuming process, and I’ve had a lot on my plate recently. Furthermore, the stats show that it isn’t as popular as some had led me to believe. I’m thinking of changing up the format and doing a more “traditional” podcast that complements these essays, but I haven’t quite finished thinking that through. I’d love a regular partner to podcast with. Let me know if you’re interested.

    Now on to the good news … well, for me anyway. I got my first COVID-19 vaccine today and looking forward to the next dose. It’s the least one can do to protect the vulnerable around me. I hope you get yours soon too.

    On to this week’s essay.



    Thoughts on Clubhouse

    I got on to Clubhouse, so you don't have to. It is actually a fascinating idea, but one I can't quite fully embrace, for numerous reasons, some of which I'll dive into here.

    But first, whenever I jump into a room on Clubhouse, there is literally nothing that couldn’t be better served in podcast format. The ability to stop/start when you like, the offline capability and the accompanying show notes that often point you to supporting materials, are all clearly missing from Clubhouse. This has been confirmed by the fact that many presenters are actually recording and publishing their “talks” through podcasts and YouTube post-room.

    In fact, that's the primary reason I got on the platform: to understand its relation to podcasts and to see whether it would disrupt them, as so many have been predicting. I'm happy to say that no, Clubhouse will not put a nail in the coffin of podcasts any time soon. If anything, it is more likely to become the model on which conference panel discussions get digitalised (and subsequently marginalised in value). There's scope for the democratisation and digitalisation of many of the panels hosted around the world, and COVID-19 has accelerated the acceptance of that reality. As a panel host or guest speaker invested in that market, i.e., if it's your main job, I'd be worried about where well-paid work will come from in the next couple of years.

    In fact, I'd go as far as to say that if the platform becomes very popular, it could decouple live panel discussions from conferences and even kill off local in-person discussion forums. And like any platform operating at internet scale, the problem quickly becomes discovery. How do you find out about the interesting and informative conversations? How do you avoid being placed in a social bubble (remember, you're linked to your contacts)? What part does moderation play in all this?

    But what is Clubhouse? A good way to think about it is as a small-town phone-in radio show populated by procrastinators, narcissists and grifters. Its sudden popularity has made it the latest target for dollar-store wisdom mongers, snake-oil merchants and outright fraudsters. That is not to say there aren't interesting and enlightening discussions taking place on the platform; of course there are, just as we've seen on TED. But boy, there's a lot of absolute crap out there too! If you do join, just beware of the VC BS and the pathetic rich-splaining like "Ooh look at me, I'm a millionaire" or "Get More Clients in 2021". I think I've said enough.

    (Screenshot: the Clubhouse app)

    From an analytical point of view, I can see it as an ancillary service in digital conferencing —something that despite trying, we still haven’t cracked meaningfully, particularly the conference-goer interaction space. You’ve all been there, when the filthy mic gets passed around the hall in the Q&A session. You’ve probably all spent time in a Zoom-like conference wanting to get to talk to the panel/presenter and couldn’t because the tools don’t allow for that yet. Using Clubhouse as a digital alternative might possibly be very compelling.

    The big question, of course, is how Clubhouse is going to monetise. I'll put that to bed immediately, because there is only one proven way to monetise consumer tools on the Internet: ads. Only businesses pay for quality (ahem) software. Consumers wilfully (or ignorantly) allow their footsteps in cyberspace to be spied upon so they can be monetised later, using some flaky and downright fraudulent claims about accuracy and ROI. And so it will pass. Clubhouse will become Clubhouse + ads. The funding rounds led by a16z practically guarantee this: they have bet big and will want big returns or nothing.

    There is also a technical and practical dilemma for Clubhouse: how can it inject adverts without the speakers announcing "This room is sponsored by…", something I'm not even sure is possible under the T&Cs? (Note to self: check the terms for advertising clauses.) If it is audio, i.e., the primary reason you get on the platform, then having your favourite show interrupted by an advert for a website builder, or better yet the next "hot" Clubhouse room, is so user-hostile that I can only imagine adverts inserted as you enter or leave a room. Interstitial ads are so aggressive and frictional that many of us might reduce our use of the app. The other option is visual ads, either static or video-based. Again, this is a tricky prospect, as many people open the app, join a room and turn the screen off, listening on headphones, the phone's speaker or by AirPlaying it to a speaker. I mean, where's the moat? How is this different from a live-feed podcast?

    As it stands today, Clubhouse is just a feature waiting to be copied by the big boys in the classroom. Twitter and Facebook have started doing just that. They're unlikely to stop until they kill off the disruptor before it gets a foothold, or are told to stop by legislation. It'll most likely be achieved through two strategies: leveraging their already hard-won networks and graphs, and out-featuring the product in exchange for little more than personal data.

    I wrote this passage a few weeks ago as I was taking notes using Clubhouse:

    Just as an aside, a note about building the network. Clubhouse requires, yes requires, you to upload your entire contacts list if you want to invite someone to the party. You get two invites once you're successfully integrated. If you store contact details of any European citizen (regardless of where you live), you are de facto breaking GDPR law unless you have permission from the person being invited. I make no judgement; I inform. Think about that for a minute. I currently have 1346 contact cards on my Mac (some are old or defunct), but Clubhouse wants 1300+ of them just so I can send two invites. I suspect around 800 or more of those contacts are EU citizens; I would therefore be breaking the law over 800 times.

    That paragraph is meaningless today, as the app has been updated to allow invitations to be sent to individual phone numbers, thus avoiding the wrath of the EU for now. Who knows if they'll go after those who have already broken the law. 🤷‍♂️ For them, Clubhouse now provides a means to delete the contacts you uploaded. That looks a bit like shredding the papers before the inspectors arrive, to me. As far as I'm aware, the French authorities have opened an investigation to determine whether Clubhouse did indeed breach the law.

    I doubt much will come of it, though. But it is a sign of the very different climate in which tech startups are now trying to get off the ground, something of which they will no doubt be acutely aware.

    Podcast: Innovative disruption discussion on ICT-Pulse

    You may well recall that I have been a guest on the ICT-Pulse podcast a couple of times in the past; well, a while back, Michele Marius released an episode we recorded together with Tunde Mottley, where we talked about the state of innovation in the Caribbean, and what we perceived as confusion between Smart and Digital. I’ll write about the second topic in the future, but I thought the first topic would be good to expand upon here. The format of the podcast is to propose a question and then discuss it unscripted. My proposed question was:

    Does our size, complexity, lack of investment and poor data preclude the Caribbean from truly innovative disruption in business?

    This is a multi-layered question, one that can't be answered in a single paragraph, and to be honest, depending on how many words I end up writing past this point, I might break it up into smaller essays. 😰

    What got me thinking about this in the first place is the complete absence of popular services that are taken for granted in parts of the US and Europe. The Ubers, JustEats and Deliveroos of the world are all absent here. Even well-implanted franchises like McDonald's and KFC don't offer the same services as their counterparts in other regions. Why is that?

    When you take the Caribbean basin as a whole, some 44 million people, it equates to around ⅔ of the population of the UK or France. Those are big markets with plenty of scope for business development. A market of 44 million people should be comparable in GDP terms too, but according to my research, that is far from the case.

    Among several structural issues, the Caribbean is a disparate, scattered geography that works against large-scale development. Those 44 million people are spread over 31 countries separated by sea, language, currency and political persuasion, breaking the Caribbean into small chunks that cannot, on their own, command the scale of funding, development and potential sales needed to attract investors.

    Our banks and investors have no appetite for the types of funding required to produce the kinds of startups that, once built, could be exported across the globe, because they refuse to bet money the way the likes of Andreessen Horowitz does. They want an immediate return on their cash, and guarantees that are nonexistent in the startup world. That is not how the new world of internet-native business is built.

    Benedict Evans revealed some startling figures in 2016 on how VC returns break down. Around half of investments returned less than the initial stake, i.e., made a loss. Perhaps more surprisingly, approximately 6% of investments returned 10x or more, and those made up 60% of all returns.

    Without oversimplifying, and to be fair it is more complicated than these aggregate figures suggest, it appears that investors must accept significant losses and keep going until they find the breakout projects that return well above expectations, paying for all the other bets. We have no organisations or investors with the stomach for that here.
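    To make that arithmetic concrete, here is a hypothetical 100-bet fund sketched in Python. The multiples are invented for illustration, chosen only so the shape matches the figures above (half the bets lose money, and the ~6% of breakouts, which in practice average well beyond 10x, supply roughly 60% of all returns):

    ```python
    bets, stake = 100, 1_000_000  # a hypothetical $100M fund, equal $1M bets

    losers = 50 * 0.5 * stake      # ~half return less than invested (say 0.5x)
    middle = 44 * 1.3 * stake      # the broad middle returns a modest ~1.3x
    breakouts = 6 * 20 * stake     # ~6% are breakouts, averaging well above 10x

    total = losers + middle + breakouts
    print(f"Fund multiple: {total / (bets * stake):.2f}x")
    print(f"Breakouts' share of all returns: {breakouts / total:.0%}")
    ```

    Strip out the six breakout bets and the remaining 94 return less than the capital put into them, which is exactly why investors without the stomach for repeated losses never reach the winners.
    
    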

    Look at companies that are flying high, such as DoorDash (not to be confused with Deliveroo, which is having a bad time on the London market). It is the leading to-the-door food delivery service in the US, yet in its recent IPO prospectus, DoorDash reported a loss of 149 million US$ on revenue of 1.9 billion US$, after losing hundreds of millions of dollars over the previous few years. Clearly, there's potential in a market that can afford these kinds of losses over multiple years until internet-sized scale tips things in their favour. With a population of 300 million and, essentially, one jurisdiction, the incentives are clear and easy to spot.

    Where do our innovators and entrepreneurs stand in the local markets? Multi-jurisdictions, multi-currencies, multiple languages and cultural differences would not seem propitious for development in the digital economy.

    With the 44 million people of the Caribbean still living across a marked digital divide, and a digital economy that is only just getting going, can we build lasting value-added services? The internet is only part of the solution. Until the other pieces are in place, there will be slow growth and no breakthrough innovation. I wish I had a few billion to spend! I know we don't lack ideas or the technical chops to wow the world; my research over the last few years has shown me that. And the Caribbean has done it before1. It can do it again.

    ––––––––––––––––––––––––––––––––

    1 Search for “History of Reggae Music”

    → 5:06 PM, Apr 1
  • A missed opportunity for Apple and Apple Music 🤷‍♂️

    Despite being criticised, and rightly so for some products, much hi-fi equipment is far from snake oil. It tends to follow the law of diminishing returns, for sure, but at the starting end of that curve, spending just a little more yields large returns in sound quality. Build quality and robustness also generally improve as you increase your hi-fi budget.

    Sadly, that world is full of promises and downright fraudulent claims, particularly in the cable market. But on the whole, a decent small-batch hi-fi manufacturer providing reasonably priced components will prove a wise strategy to get the best out of recorded music for you.

    And that world is becoming more affordable as sources, components and reproduction all move to digital. In the middle-to-high end, brands like Naim Audio and Linn are providing digital systems of the all-in-one design. Some models lack speakers, which are likely to capture a large chunk of the budget, but other models are true all-in-one systems conceived for the digital age, capable of producing remarkable sound for their size and price. The music industry, though, has had a harder time convincing users of the benefits of higher-definition audio.

    Some of that is because some people simply cannot hear the difference, others pretend they can, and scientific experiments have all but proven that the benefits of high-definition audio sources are marginal at best. The average human hearing range sits well inside the bandwidth of even standard digital audio, so it is difficult to prove any benefit to listeners.
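    The numbers behind that claim are easy to check. By the Nyquist theorem, a digital recording can only reproduce frequencies up to half its sample rate, and typical adult hearing tops out around 20 kHz:

    ```python
    HEARING_LIMIT_HZ = 20_000  # rough upper bound of adult human hearing

    # CD quality vs. common "hi-res" sample rates
    for rate_hz in (44_100, 96_000, 192_000):
        nyquist_hz = rate_hz / 2  # highest reproducible frequency
        surplus_khz = (nyquist_hz - HEARING_LIMIT_HZ) / 1000
        print(f"{rate_hz / 1000:g} kHz sampling reproduces up to "
              f"{nyquist_hz / 1000:g} kHz ({surplus_khz:g} kHz beyond audible range)")
    ```

    Even CD quality already covers the audible band; the extra bandwidth of hi-res formats lies entirely above it, which is why listening tests struggle to show a difference.
    
    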

    That hasn’t stopped online streaming services like Tidal and Spotify from offering those products to their users. In fact, Tidal’s business model was predicated on the promise that it had the best sounding streams on the planet.

    To play these streams, as well as locally ripped or produced high-definition sources, more and more products adapted to the trend are appearing on the market. One such product is the Buchardt Audio A500: a 4000€ speaker-plus-hub package (worldwide delivery included) that negates the need for any other component in your system. You plug the speakers in, link them to the hub, and you can be streaming in less than ten minutes. The product goes much further than that, but a full review isn't the remit of this blog; take a look at someone like DarkAudio for that.

    But yes, this 4000€ product is out of the range of most listeners, either by wealth or by value perceived. And this is where I think Apple has a fantastic opportunity on two fronts to make the ultimate “everyday man’s” dream hi-fi system.

    We've seen and heard what two HomePods in paired mode can produce, and it is mightily impressive. Even two HomePod minis sound superb for $200 when paired! But the original HomePod was initially a flawed design: it was Apple-only (through Apple Music) and could only stream over AirPlay (Apple's proprietary streaming protocol). The speakers worked very well but only suited those heavily invested in the Apple ecosystem.

    Apple subsequently made it possible for its Apple TV set-top box to use them as the default output device, but this only worked sporadically and relied on good wi-fi and internet. Most people use a variety of TV boxes and TV sets, and in those circumstances the go-to solution has been an AV amplifier and speakers, sometimes eight of them (7+1)!

    Where I think Apple could meaningfully contribute to this market, a market it proclaims to be significant to it, is, on the one hand, by adding a high-definition streaming tier to the Apple Music subscription. An extra $5 or so a month would be picked up by a sizeable market, I believe (whether they can hear the difference or not!). Let's call it Apple Music+.

    The second prong of the strategy would be a device in the vein of the Buchardt hub: a small set-top box with Apple TV built in, plus line inputs (both RCA and minijack), USB and HDMI. The device would take the input, wired or wireless over AirPlay, and output quality stereo sound to two linked speakers using the same communication as the existing HomePods. With a little more work, it might even be possible to add additional speakers to the mix, providing the immersive surround sound film buffs tend to favour.

    The package could be priced at $500-$700 and would, I'd guess, sell like hot cakes.

    Think about a small, easy to set up, great-sounding all-in-one package that could replace the hi-fi, the AV amp and god-awful ugly speakers.

    I’d go for that.

    31 March 2021 — French West Indies

    → 8:16 AM, Apr 1
  • Market Trends 2021 and beyond

    Good day from Martinique.

    I've been on hiatus for a few weeks as I wind down a multi-month project researching the state of ICT in the Eastern Caribbean; I hope you don't mind. Rather than the 2,000-ish words I write here, I've written and re-written over 50,000 words in the last month or so. Looking at my Grammarly statistics (yes, I use it happily), I have written over half a million words over the past year, despite being on a reduced schedule. I'm back, malgré tout (despite everything)(1), and I'm hoping to share some of the insights and information I've been researching.

    On COVID-19: the good news here in Martinique is that infection rates are stable, and there is much to look forward to as vaccination campaigns start throughout our region soon. If I have a bad word to say, it's about France's strategy, which frankly has been woefully inadequate and too timid for a national and global tragedy of this scale. Things are picking up now, so hopefully we'll have more good news in a few months. If there is any explanation for the French government's attitude, it is rooted in technology, as almost everything is these days.

    This is a discussion for another day, but France is one of the most vaccine-sceptical nations globally, so much so that the government was forced to introduce legislation barring children from school if their vaccinations were not up to date. Social media, particularly Facebook, has played a significant role in amplifying misinformation. As a result, more and more parents chose not to vaccinate their children, weakening herd immunity as the absolute number of vaccinated people dropped. I'd like to dig into the regulation and control of "Internet Power" in the future, and I'm gathering my thoughts and researching so as to bring you an informed point of view, hopefully.

    Back to COVID-19: around the Caribbean the story is a mixed bag, but on the whole the region has suffered far less from the virus in terms of infections and deaths, yet paradoxically far worse than others economically, as substantially reduced tourism affects our region.

    I thought I’d start the 2021 season with a quick outline of some of the trends I see in the global market and how they may or may not affect us here. Let’s get started.


    Some of the trends I’m seeing

    Cloud Computing

    Despite the best efforts of various despots and nationalists worldwide, we are living in an increasingly global marketplace where local trends are being driven by outside sources, with profound effects on how businesses operate locally. The increasing use of Cloud Computing, for example, is a trend that started outside the region and has now shoehorned itself into local politics and business strategy. COVID-19 has accelerated that push, but it has also exposed many weaknesses in our digital infrastructure that will profoundly affect our countries over the coming years. Subjects like Digital Health and Digital Healthcare are now starting to be taken more seriously than at any time in the past, and looming regulation of “Big Tech” is also on the list of discussion topics for governments and businesses throughout the Caribbean.

    Cloud Computing is a catch-all term used to describe tools and services managed and controlled by companies in remote data centers dotted around the globe. For example, in the region, Digicel has built a data center network in Jamaica and Trinidad and Tobago to host Infrastructure and Software services for its customers throughout the region. The two most prominent players in the market are Amazon with AWS (Amazon Web Services) and Microsoft with its Azure offerings.

    Cloud computing remains a massive opportunity for the region, with IDC predicting more than US$2 trillion of global cloud sales by 2023, reasonably evenly split across the types of service offerings for the Microsoft Partners surveyed (IaaS 9%, Managed Services 20%, Consulting 16%, SaaS 19%, and Support 17%). (Source: IDC Software Partner Survey, January 2020.) The Caribbean will be no exception in the coming years.

    Digital Transformation and Work from Home

    You know I’m not a fan of this overused and abused term, but it still resonates for many businesses just starting to open their eyes to the prospect of integrating digital technologies deeply into their operations. And it is precisely because the term has become ubiquitous that there is now a serious drive around the world for companies to embark upon their transformative projects. COVID-19 has probably done more for the cause than any big-budget marketing campaign from the likes of Microsoft and Google.

    Lockdowns and new working practices are now beyond the point of being stop-gap solutions to stem a hemorrhage of income. Companies have been forced to experiment with new ways of working together, initially entirely remotely —which didn’t go down well for some— and now in a more hybrid mode. Indicators are starting to appear to suggest that we are in the midst of a sea change of working practices that legislation will likely adapt to. Vacant office space is at an all-time high in some cities, and immigration/emigration figures between different states in the USA show that a recalibration of resource distribution is taking place.

    The COVID-19 pandemic has forced a significant shift in remote working, collaboration, and the assumption that people need to be in a single space to be productive. For many years we have talked about this possibility, but very few organisations were able or willing to make the human and financial investments necessary to enable this new way of working. COVID-19 has come along and wholly blown all previous notions out of the water, making all but the most resistant organisations think deeply about how they can change their working practices to take advantage of a situation that is not likely to be resolved in the next few months.

    Digital Transformation will be the backbone, or the operating system if you will, of this adoption. Companies that adopt digital throughout the value chain will be those that adapt to the new reality.

    Security

    Security is becoming a defining differentiator as solutions grow increasingly complex and the old-world security perimeters break down with the move of more services to the cloud. Security is no longer limited to firewalls, passwords, and antivirus. Technologies like two-factor authentication (2FA), Virtual Private Networks (VPN), encryption, and Single Sign-On (SSO) services are increasingly in demand, and are an example of where expertise is not readily available in the region. These services cannot be isolated from their implications for business, marketing, and operations, as sophisticated attacks are no longer mounted by teenagers motivated by pride and disruptive cyber-graffiti exploits. In 2018, the island of Sint Maarten (Dutch side) suffered an incident that took government services offline for its population of 40,000 or so, and according to the Caribbean Council, Saint Vincent and the Grenadines was the target of ISIS-originated hacks on government websites, although details of the impact are scarce. Knowledge of the whole security stack and its integration with the business value chain is therefore imperative to develop valued services and advice, such as risk management, BI, and Data Analytics.

    COVID-19 has provided an almost unlimited opportunity for individual, organised, and state actors to target users over COVID-19 fears. Just days after the UK Medicines and Healthcare products Regulatory Agency (MHRA) approved Pfizer’s COVID-19 vaccine for emergency use, a sophisticated hacking and phishing campaign was mounted in an attempt to steal information about the vaccine logistics process.

    Regulation

    Besides the global pandemic of COVID-19, if 2020 has taught us anything, it is that regulation of a largely unregulated sector of the economy is about to start in earnest. Initially, it is likely to affect mostly the multinationals such as Google, Facebook, etc. However, make no mistake: much of the legislation implemented at national and local levels will affect businesses down to the smallest suppliers of technology. The GDPR of 2016, implemented in 2018, ostensibly protects European citizens from personal data transfers and data mining abuses. It affected every company on the planet that needed to collect and store the personal data of European citizens. Online marketplaces and social media sites were the legislation’s apparent targets, but any business that dealt internationally was required to hire and train a Data Controller and Data Protection Officer responsible for ensuring compliance. Liabilities and penalties for non-compliance were harsh, the most publicised being a fine of €20 million or 4% of annual worldwide turnover, whichever is greater, for a serious breach.

    GDPR is but one example; others will become hot topics in the coming years, such as COPPA (Children’s Online Privacy Protection Act), Do Not Track legislation, ePrivacy regulations, the Digital Markets Act, and the Digital Services Act. In this climate, businesses will be required to keep up with current and upcoming legislation continually, and to implement training, auditing, and compliance adjustments through training and consulting services from specialists.

    Many industries are subject to specific, technical regulation, such as Pharma, Oil & Gas, Finance, Utilities, and Cars. Tech and ICT are about to join that list, with specific regulations targeting specific issues. It is essential to understand that rules are not usually implemented as broad-brush solutions; regulation is highly targeted to treat a particular problem as defined by the various regulatory authorities. “Banks” have never been regulated as such; only specific products and services in the banking industry are regulated: Deposits, Credit Cards, Pensions, Trading, Mortgages, Futures and Options, by way of example.

    Regulation may also lead to an increase in digital sovereignty, with the above example of GDPR showing how this may come to be. The Great Firewall of China is another, extreme, example, and the fact that China and India now account for more internet users than the rest of the world combined shows how the balance of power over the Internet is moving from US-centric to something more international. One thing to bear in mind is that regulation is designed to protect a specific point of view. The US-centric perspective is more about keeping prices low —which explains why free-to-use products have mainly been let off until now— whereas the European position centres on healthy competition and consumer protection. This divergence will play out over the coming years and influence every stratum of business.

    Digital Health

    In an era of ubiquitous access to internet-connected devices from almost anywhere, one pre-pandemic concern was a question humanity has asked for centuries: is the next generation spending too much time with technology? Much debate has centred on the amount of time people spend with technologies connected to the Internet. Screen time was such a hot topic that many software providers stepped up with solutions to monitor and control the time people, particularly children, could spend on these devices.

    It is in this direction that new insights about screen time will evolve, and it will be a debate about quality, not quantity. It will be about how we can implement “good” screen time and then monitor and control it. It will be about preventing “bad” screen time with quantifiable justification and suggested preventative solutions. This will likely affect the education sector hugely by providing tools better adapted to this new paradigm. We are only just at the end of the beginning of a change in digital health.

    Disinformation

    Disinformation and conspiracy theories have been part of human nature for millennia. However, recent technical advances like social networking and recommendation algorithms have fuelled the spread of, and belief in, disinformation to an extent never witnessed before, with consequences for society and possibly even democracy. Today, businesses focusing their marketing and revenue-making activities online should be aware that they could be subject to organic and organised campaigns to discredit their work, profession, or any other attribute that is the current target for attack. An extreme example took place in the United States, the United Kingdom, and Italy, when nurses suddenly went from national heroes to national conspirators over misinformation about Coronavirus vaccines.

    Disinformation is no longer the “graffiti of the Internet”; it is being used politically and weaponised in cyber-attacks throughout the world—one to watch for.

    Automation and Artificial Intelligence

    Over the last few years, we have seen an explosion in the number and prevalence of automated systems, from website chatbots that provide first-line support services to deeply integrated automation-development platforms such as Zapier and Microsoft Power Automate. Many predict an increase in spending on automation over the next three years, with services companies well-placed to take advantage of this opportunity by providing help in implementing these systems.

    Two main types of automation are emerging as development targets: Robotic Process Automation (RPA) and Business Process Automation (BPA). RPA can be applied to many general and industry-specific tasks in Procurement, Marketing, HR, Retail, Telecommunications and Banking, to name a few. BPA is best suited to processing unstructured data sets such as voice, images, and natural language, and often relies on Artificial Intelligence to deliver line-of-business assistance, for example real-time translation over video conferencing. GPT-3 is one such language model that has produced human-like text for human interactions through chatbots and the like. Currently, only developed markets such as the US, Europe and Asia-Pacific are investing heavily in these capabilities, but the Latin American market is predicted to grow five-fold by 2025, according to learnbonds.com.


    I don’t know what the future will hold, of course, but looking at trends in the wider world can at least give us a heads-up that can help us better understand and adapt. Here’s to a better 2021.

    Thanks for being a supporter, have a great day.


    If you enjoyed The Future is Digital I’d really appreciate it if you would share it to those in your network.

    Share

    If this email was forwarded to you, I’d love to see you on board. You can sign up here:

    Subscribe now

    Visit the website to read all the archives.

    ––––––––––––––––––––––––––––––––

    1 Despite everything

    → 3:45 PM, Jan 27
  • The origins of Rosetta(2) probably lie in a little-known technology from 1996 called FX!32

    Unless you’re a hermit or not in any way linked to the tech industry, you’ll be aware that Apple has released its in-house designed processors to replace the current Intel-supplied ones used in the low-end line of Apple’s computers: the MacBook Air, the MacBook Pro 13” and the Mac Mini.

    Since receiving them, people have been benchmarking these processors, with nothing short of absolutely stunning results. They really are that good, it seems. Everything from switching resolutions —which is, by the way, instantaneous, with no blanking or delay— to running Apple M1-optimised tasks at over three times the speed of even the fastest of the Intel family for some functions.

    But I’ve been most interested in this transition to RISC1 from CISC2, or to put it differently, from Intel to Apple Arm-based processors, for one reason: Rosetta.3 Apple officially calls it Rosetta, but we all know it as Rosetta 2, because its original outing was in 2006. Back then, Apple was embarking on its first major transition, from the PowerPC line of processors to Intel’s x86 line. Rosetta, at that time, provided the bridge between older PowerPC applications and the newer operating system running entirely on the Intel instruction set.4 Rosetta was emulation software that took PowerPC-based commands and turned them into equivalent Intel-based commands, allowing the application to run, albeit slowly. Running under emulation carries a non-negligible overhead: at the most elementary level, the processor has to do at least twice the work of an application running natively.

    Rosetta 2 does things a little differently, and as a result, substantially reduces the time required to run the translated applications. The word ‘translate’ is the key to understanding Rosetta 2.
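    To see why translation beats emulation, here is a deliberately tiny model in Python. It is entirely illustrative: the opcodes and names are invented, and real binary translators work on machine code, not Python. The point is that an emulator pays the instruction-dispatch cost on every single run, while a translator pays it once, producing a “native” function that can be reused.

```python
# Toy "guest" program: a list of (opcode, operand) pairs computing (2 + 3) * 4.
GUEST_PROGRAM = [("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PUSH", 4), ("MUL", None)]

def emulate(program):
    """Interpret every guest instruction on every run (per-instruction overhead)."""
    stack = []
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

def translate(program):
    """Translate the guest program once into a native Python function.
    The dispatch cost is paid a single time, up front."""
    lines = ["def _native():", "    stack = []"]
    for op, arg in program:
        if op == "PUSH":
            lines.append(f"    stack.append({arg})")
        elif op == "ADD":
            lines.append("    b, a = stack.pop(), stack.pop(); stack.append(a + b)")
        elif op == "MUL":
            lines.append("    b, a = stack.pop(), stack.pop(); stack.append(a * b)")
    lines.append("    return stack.pop()")
    namespace = {}
    exec("\n".join(lines), namespace)  # compile the translated source once
    return namespace["_native"]

native = translate(GUEST_PROGRAM)                # one-off translation cost
assert emulate(GUEST_PROGRAM) == native() == 20  # same result, cheaper repeat runs
```

    Every subsequent call to `native()` skips the per-instruction dispatch loop entirely, which is the essence of the speed advantage of translation over emulation.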

    Back in 1996, during the precipitous decline of digital, a major computer company from Maynard, Massachusetts, digital had designed, built and implemented a RISC-based architecture processor called Alpha. The move to RISC was seen as the way forward and was projected to be the future of processor design, rightly so, if what we’re seeing today from Apple is anything to go by.

    At the time, there was a belief that RISC-based microprocessors were likely to replace x86-based microprocessors, due to a more efficient and simplified implementation that could reach higher clock frequencies.

    (FX!32 - Wikipedia. [en.wikipedia.org/wiki/FX!3...](https://en.wikipedia.org/wiki/FX!32))

    There was, however, one snag: application compatibility with the growing x86 application base that had taken hold at the time through PCs running various flavours of Windows. One interesting version, NT (New Technology), had been commercialised for a few years and was quickly outdoing the established Unix workstation operating systems, like digital’s own AXP.

    To remove this sticking point, Raymond J. Hookway and Mark A. Herdeg led a team of engineers in developing a much better solution to the CISC ➔ RISC problem than simple emulation. Released in 1996 and discussed in detail in this 1997 Digital Technical Journal article, DIGITAL FX!32 provided the means for binaries to be “translated” from x86 to Alpha. FX!32 took native x86 binaries and created Alpha DLLs, or Dynamic Link Libraries, ensuring that these ran in place of the original x86 binaries.

    FX!32 allowed two things to happen. First, it let non-native x86 code run on Alpha processors with a much smaller speed penalty than emulation. Version 1.0 reportedly ran at 40-50% of the speed of native Alpha code, way faster than emulator software, which typically ran at a tenth (or less) of native speed. Subsequent versions and other optimisations allowed the code to run at over 70% of native Alpha speed. Given that the Alpha was the fastest processor on the market at the time, this allowed complex applications like Microsoft Office to run at very usable speeds on Alpha workstations running NT 3.51.

    Secondly, the work done to translate the binary was not lost and re-expended every time the application was run, as it is in emulation. FX!32 optimised the binaries in the background and stored the translated libraries on disk, which made any translation delay on subsequent runs virtually unnoticeable. The background translation ran without user interaction and allowed the processor to choose the best possible optimisations in terms of computational resources, enabling the user to start the application and get to work after a short delay. Modules not yet used in the application were optimised in the background, so that on their first use they were fast and responsive.
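    In miniature, that cache-and-reuse scheme might look like the sketch below. It is purely illustrative Python under my own assumptions: all names are invented, the “translation” is faked with a string tag, and a real translator emits host machine code rather than text.

```python
import hashlib
import os
import tempfile

# Stand-in for the translator's on-disk database of translated images.
CACHE_DIR = tempfile.mkdtemp(prefix="fx32_cache_")

def translate_guest(guest_bytes: bytes) -> str:
    """Pretend 'translation': the expensive step that produces native code.
    Here we just tag the bytes to keep the sketch self-contained."""
    return "NATIVE:" + guest_bytes.decode()

def run_translated(guest_bytes: bytes) -> str:
    """Translate on first run, then reuse the stored result on later runs."""
    key = hashlib.sha256(guest_bytes).hexdigest()  # identify this exact binary
    path = os.path.join(CACHE_DIR, key)
    if os.path.exists(path):          # later runs: cache hit, no re-translation
        with open(path) as f:
            return f.read()
    native = translate_guest(guest_bytes)   # first run: pay the translation cost
    with open(path, "w") as f:              # persist the result for next time
        f.write(native)
    return native

first = run_translated(b"x86-binary")   # translates and caches
second = run_translated(b"x86-binary")  # served straight from the cache
assert first == second == "NATIVE:x86-binary"
```

    Keying the cache on a hash of the binary means a changed or updated application is retranslated automatically, while an unchanged one never pays the translation cost twice.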

    The primary goals of the project were to provide 1) transparent execution of x86 applications and 2) to achieve approximately the same performance as a high-end x86 platform. FX!32 achieved both these goals.

    That brings us to Apple’s Rosetta 2 technology. Wikipedia’s entry for Rosetta 2 is two sentences:

    Rosetta 2 is included starting with macOS Big Sur to aid in the Mac transition to Apple Silicon from Intel processors. In addition to the just-in-time (JIT) translation support available in Rosetta, Rosetta 2 includes support for translating an application at installation time.

    Technical information is scarce, as Apple typically shields these types of technical documents. The page dedicated to Rosetta on developer.apple.com is scant in technical detail too. But I suspect the origins of the technology lie in FX!32, updated to run x86 64bit instructions. The difference between now and then, is that Apple’s M1 is so fast that even the 20-30% speed hit allows these computers to run Intel code faster than Intel itself can (on the line of processors Apple is replacing).

    Just. Stunning.

    20 November 2020 — French West Indies

    1. Reduced Instruction Set Computer ↩
    2. Complex Instruction Set Computer ↩
    3. Taken from the Rosetta Stone, which enabled historians and scientists to understand three scripts, as the stone contained the same text in Greek, Demotic and hieroglyphic ↩
    4. The instruction set determines how the processor executes the code it is fed. Both RISC and CISC have their advantages and disadvantages ↩
    → 5:53 PM, Nov 20