matthewcowen.org
  • Web 3.0 or Web 1.0 in sheep’s clothing?

    (Photo by Rahul Pugazhendi on Unsplash)

    After my recent hiatus from writing, I’m feeling good about writing a few essays this year on some of the emerging topics that have captured the imagination of the technorati. I’ll also be diving into some other issues that, whilst not specifically about tech, have such a tech element to them they can be considered tech subjects.

    If there is one overriding theme, it is that tech, or digital if you prefer, is sticking its nose into almost everything. As tech enthusiasts, we’ve been used to being sidelined or kept in the corner where we could express our undying love for our tech and not affect anything around us by accident. That is no longer the case.

    Too many articles discuss how tech’s disruption has caused more unintended ill than benefits.

    Does this mean that all tech is terrible? No, no. Of course not. But what it does mean is that we as tech people need to educate ourselves on nearly everything else to comprehend the world around us we are affecting, and perhaps have some insight into bringing on board those that can help us in their specialist subjects. I’ll get into that some other time, but suffice to say that the future will be equally tricky and fascinating.


    Upcoming Podcast

    On several occasions, I’ve been a guest on Michele Marius’s podcast, the ICT Pulse Podcast. We recently started recording what we hope will be an interesting conversation around a topic that feels far off for many of us in the Caribbean but is perhaps closer than we realise.

    I’ll let Michele announce and release these before giving the game away.

    I wanted to say how much I enjoy this format — an open discussion on a specific subject in approximately one hour.

    Look out for it on your podcast app of choice soon, and do give us some feedback or pointers into things we may have missed or poorly explained. I’d love you to contribute.


    Web 3.0

    There has been a lot of interest of late in this thing called Web 3.0. It is hard to describe accurately, precisely because no single definition exists: Web 3.0 is whatever suits your world view of the tech environment. I wanted to jot down some initial thoughts I’ve had and submit them for discussion. I don’t pretend to be an expert in all of this; I’m trying to figure it out too. I’m just doing it out in the open here, from the perspective of a longtime tech expert.

    Of course, there are some common elements in all definitions, but that doesn’t tell the whole story.

    Most people agree that Web 3.0 is a collection of new technologies that will provide a fundamentally different experience on the internet from the world of Web 2.0. Some of those technologies you’ll have heard of or even have dabbled in, like Cryptocurrencies, NFTs and VR/AR.

    “But Juanita never comes to The Black Sun anymore. Partly, she's pissed at Da5id and the other hackers who never appreciated her work. But she has also decided that the whole thing is bogus. That no matter how good it is, the Metaverse is distorting the way people talk to each other, and she wants no such distortion in her relationships.” — Neal Stephenson, Snow Crash

    I’ve purposefully avoided the early-’90s noun coined by Neal Stephenson in his novel Snow Crash, Metaverse, because I have a feeling that, for the moment, it is just one massive land grab on the internet to get it “out there” that you are the most critical player in this field. Again, I’m not judging it as a concept. Even I declared having seen a part of the future, in my essay The New Reality, when Marshmello performed in Fortnite to 27 million fans. Still, it is a little cynical in my view for companies to pretend it is the next big coming purely for their own benefit.

    I’ll leave you with one last thought. I wrote about Fortnite hosting a live mega-concert for Marshmello, with something like 10.7 million in-game concert-goers, not counting the countless Twitch streams (an estimated 27 million people all told). I wondered then, as I do now, how this technology could be used for more “serious business” purposes. I was a regular attendee of Microsoft conferences over the last 13 or 14 years, conferences that attract up to 25,000 attendees in one place (I have a badge and a letter to prove it 🏅). So how could Microsoft replace these conferences with an entirely digital experience? One thought that comes to mind is precisely the blueprint trialled by Fortnite.

    However, let’s start with the technology.

    Web 3.0 is based on three technologies slowly gaining ground in capability and notoriety: Blockchain (and hence cryptocurrencies and NFTs), Augmented Reality and Virtual Reality. I’ll start with the first of these in this essay.

    Despite different use cases and outcomes, I tend to group Cryptocurrencies and NFTs in essentially the same basket. I know they’re not the same but bear with me. These technologies are based on Blockchain tech, and honestly, I genuinely think that Blockchain is interesting tech with some compelling use cases:

    the underlying technology of these currencies is actually quite interesting and has place for use in Digital Transformation, hence why I’d like to talk about it in this week’s issue. That technology is, of course, Blockchain, or as it was originally known as, Block Chain. — Blockchain ≭ Cryptocurrency

    Blockchain’s current problem is that it is both slow and energy-inefficient. Scaling Blockchain to applications that require hundreds of thousands of transactions per minute (banking, anyone?) is currently a pipe dream, despite advances being made regularly. A little like the Linux desktop, it’s always ‘coming soon’. From an energy standpoint, it seems a little immoral to me that such extraordinarily greedy systems, electrically speaking, operate without the slightest regard for the environment.
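To put that scaling gap in perspective, here is the back-of-the-envelope arithmetic. The workload figure is a hypothetical taken from the "hundreds of thousands of transactions per minute" mentioned above, and the Bitcoin number is the widely quoted rough ballpark of about 7 transactions per second for its base layer; this is an illustration of scale, not a benchmark:

```python
# Hypothetical banking workload: hundreds of thousands of transactions
# per minute; we take 100,000 as the worked example.
required_per_minute = 100_000
required_per_second = required_per_minute / 60

# Widely quoted ballpark for Bitcoin's base-layer throughput, used
# purely for scale; real figures vary by chain and configuration.
bitcoin_ballpark_tps = 7

print(round(required_per_second))                         # ~1667 tx/s needed
print(round(required_per_second / bitcoin_ballpark_tps))  # ~238x short
```

Even granting generous assumptions, the shortfall is two orders of magnitude, which is why "banking-scale Blockchain" remains a promise rather than a product.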

    Take a look at what happened in Kosovo and in China,1 2 where the governments banned crypto mining because of the strain on the electrical grid and the illegal electrical supply diversions employed by the less scrupulous. There are attempts at cleaning up the image of this technology; for example, an initiative to carbon-offset NFTs is a thing3 but is unlikely to have broad appeal or materially affect the energy requirements of Blockchains.

    The other elephant in the room with cryptocurrencies is the obvious parallel to pyramid or Ponzi schemes. Several articles in reputable media outlets like the FT show how much of the “value” of cryptocurrencies and NFTs is speculation: speculation that requires new entrants into the market to prop up the value higher up the chain. The clear Achilles heel is that if those at the bottom of the stack —i.e., those who lose their investments— stop pumping money to the higher levels, the whole thing will most likely come crashing down.

    We should additionally consider the two central tenets of the “reason” for Blockchain: decentralisation and immutability.

    Starting with the latter, immutability is the unique property of an entry in the distributed ledger (we’ll get to that in a minute) that, once written, is permanent. That is, it can’t be altered. Let’s look at that a little closer.
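Immutability comes from each block committing to the hash of its predecessor, so edits anywhere break every link after them. A minimal sketch of the mechanism (a toy chain for illustration, not any real Blockchain's actual block format):

```python
import hashlib
import json

def block_hash(index, data, prev_hash):
    """Hash a block's contents together with the previous block's hash."""
    payload = json.dumps({"index": index, "data": data, "prev": prev_hash},
                         sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(entries):
    """Build a toy chain where each block commits to the one before it."""
    chain = []
    prev = "0" * 64  # genesis placeholder
    for i, data in enumerate(entries):
        h = block_hash(i, data, prev)
        chain.append({"index": i, "data": data, "prev": prev, "hash": h})
        prev = h
    return chain

def verify(chain):
    """Recompute every hash; any edited block invalidates the links after it."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or \
           block_hash(block["index"], block["data"], prev) != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = build_chain(["alice pays bob 5", "bob pays carol 2"])
print(verify(chain))                      # True
chain[0]["data"] = "alice pays bob 500"   # tamper with an early entry
print(verify(chain))                      # False
```

Note that nothing here physically prevents the edit; it only makes the edit detectable. "Permanent" in practice depends on the network refusing to accept a chain that fails verification, which is exactly what a hard fork overrides.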

    The most popular Blockchain of the moment is Ethereum. Countless projects have been built on Ethereum to mint and distribute stablecoins, CBDCs, NFTs and unbacked cryptocurrencies. Late last year, a bug was discovered in Polygon, a scaling project of Ethereum. The bug was severe enough to put all assets at risk of being taken away. Stolen is the technical term. Twenty-four billion dollars were at stake.4 Polygon paid bug bounties amounting to nearly $3.5 million5, but not before $2 million was lost. This is not a one-off, either. Several “hard fork” instances have had to be enacted to save the Blockchain from total pillage, thus resetting the original assets to $0.

    And good luck in getting redress if it’s your asset that was stolen.

    This brings us to the second tenet, decentralisation.

    The growing backlash against “Big Tech” has fuelled a view that stripping any one entity of absolute power is the answer to abusive centralisation. Meta (who are you kidding, Facebook?), Alphabet (née Google), Apple and Microsoft, amongst others, are all accused of abusing their power and essentially rent-seeking6 from their users.

    The idea behind decentralising the Blockchain is to prevent it from being domiciled on any one platform or property. The apparent advantages are resilience and independence. By definition, a distributed system is more resilient, as failures in any single node are unlikely to disrupt the whole. Similarly, by decentralising the system, if one participating actor becomes abusive or otherwise falls out of favour, it’s easy to turn off the tap and squeeze them out of the system.

    But what most people don’t realise is that this distributed nature of Blockchain is mostly a myth, and at best, being grossly overstated.

    The issue lies in the APIs necessary to build on Blockchains such as the aforementioned Ethereum. Only a tiny number of APIs are in use currently, with prevalent ones like Alchemy being used by many. These APIs oversee the reads and writes to the ledger. Other APIs used to pull information from the Blockchain are developed and run by a minuscule population. In other words, a lot of trust is being placed in a tiny group for pretty much all Blockchain interaction.
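To make that concrete: in practice, reading from Ethereum almost always means POSTing to a provider's JSON-RPC endpoint rather than your own node. A sketch using the standard `eth_blockNumber` method from the Ethereum JSON-RPC interface; the endpoint URL below is a placeholder, not a real provider, and in reality you would plug in a key from a service such as Alchemy:

```python
import json
import urllib.request

# Placeholder endpoint -- a real deployment would use a provider URL
# and API key, or a self-hosted node.
RPC_URL = "https://example-provider.invalid/v2/YOUR-API-KEY"

def rpc_payload(method, params=None, request_id=1):
    """Build a standard Ethereum JSON-RPC request body."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": method,
        "params": params or [],
        "id": request_id,
    })

def latest_block_number(url=RPC_URL):
    """Ask the node for the most recent block number (returned hex-encoded)."""
    req = urllib.request.Request(
        url,
        data=rpc_payload("eth_blockNumber").encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        result = json.loads(resp.read())["result"]
    return int(result, 16)
```

The point is not the code but the shape of it: whoever operates the URL in `RPC_URL` sits between you and the supposedly decentralised ledger for every read and write.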

    Additionally, if you want to buy or sell assets, like NFTs, again, only a handful of platforms exist currently. You may have heard of OpenSea, for example. They account for around 95% of the NFT market.

    Then ask yourself where all this stuff runs. On the cloud, of course. But whose cloud? Amazon’s and Microsoft’s, for the most part.

    There are instances of whole markets becoming inoperable —i.e., no trading, no consuming— during AWS cloud outages.7 Thankfully they are not that frequent, but when they do occur, the inconvenience is enormous.

    Then remember where the actual data for the NFT is stored. Generally on someone’s shared drive on a cloud drive service (at best) or a home-built server somewhere, god forbid.

    This doesn’t sound as decentralised as perhaps you first thought.

    To be clear, I’m playing Devil’s Advocate here deliberately to inform you and to provoke discussion. I’m crypto-neutral for the time being. I can see huge benefits, particularly to the unbanked or the vulnerable that are slowly being excluded from taking part in society. I can see the “value” of digital assets in the same way a 16th Century oil painting by some obscure bloke can be worth millions. Why not?

    But without proper oversight, redress in the case of fraud or theft, and backing from governments, I can only see risk and the potential of giving my blessing to the propping up of devastating illegal activity. It’s no surprise drug dealers are turning to crypto. I can’t morally take that position, and I refuse to prop up Ponzi schemes.


    In the following essay, I’ll get into the other aspects of Web 3.0, namely AR, VR and a few additional thoughts. Let me know if you have anything to contribute. It would be my pleasure.


    The Future is Digital Newsletter is intended for anyone interested in digital tech and its effects. Feel free to contribute. A little share from time to time would also be most appreciated.

    Thank you for reading The Future is Digital. This post is public so feel free to share it.

    Share

    If this email was forwarded to you, get on board! You can sign up here:

    Subscribe now

    Visit the website to read all the archives. Have a great day.

    1 Kosovo Moves to Ban Crypto Mining in Face of Energy Crisis
    2 China Crypto Bans: A Complete History
    3 Now you can offset your NFT footprint too (paywall)
    4 'Critical' Polygon bug put $24 billion in tokens at risk until recent hard fork
    5 Bug bounty program
    6 Rent-seeking
    7 Decentralized dYdX Went Down Due to Reliance on Centralised Cloud Services

    → 16 January 2022, 11:11
  • Market Trends 2021 and beyond

    Good day from Martinique.

    I’ve been on a hiatus for a few weeks as I wind down a multi-month project researching and investigating the state of ICT in the Eastern Caribbean; I hope you don’t mind. Rather than the 2,000-ish words I write here, I’ve written and re-written over 50,000 words in the last month or so. Looking at the statistics from Grammarly —yes, I use it happily— I have written over half a million words over the past year, despite being on a reduced schedule. I’m back malgré tout(1), and I’m hoping to share some of the insights and information I’ve been researching.

    On COVID-19, the good news here in Martinique is that infection rates are stable, and there is much hope to look forward to with the start of vaccination campaigns soon throughout our region. If I have a bad word to say, it’s about France’s strategy, which frankly was woefully inadequate and too timid for dealing with a national and global tragedy like the one we’re seeing. Things are picking up now, so hopefully we’ll have more good news in a few months. If I have any explanation for the French government’s attitude, it is rooted in technology, as almost everything is these days.

    This is a discussion for another day, but France is one of the most vaccine-sceptical nations globally; so much so that the government was forced to introduce legislation barring school children from attending school if their vaccinations weren’t up to date. Social media, particularly Facebook, played a significant role in amplifying misinformation. As a result, more and more parents chose not to vaccinate their children, weakening herd immunity as the absolute number of vaccinated people dropped. I’d like to dig into the regulation and control of “Internet Power” in the future, and I’m gathering my thoughts and researching, to bring you an informed point of view, hopefully.

    Back to COVID-19: around the Caribbean, the story is a mixed bag, but on the whole, the region has suffered far less from the virus in terms of infections and deaths, yet paradoxically far worse than others economically, as substantially reduced tourism takes its toll on our region.

    I thought I’d start the 2021 season with a quick outline of some of the trends I see in the global market and how they may or may not affect us here. Let’s get started.


    Some of the trends I’m seeing

    Cloud Computing

    Despite the best efforts of various despots and nationalists worldwide, we are living in an increasingly global marketplace where local trends are being driven from outside sources, having profound effects on how a business operates locally. The increasing use of Cloud Computing, for example, is a trend that started outside the region and has now shoehorned itself into local politics and business strategy. COVID-19 has accelerated that push, but it has also exposed many weaknesses in our digital infrastructure that will profoundly affect our countries over the coming years. Subjects like Digital Health and Digital Healthcare are now starting to be taken more seriously than at any time in the past and looming regulations of “Big Tech” are also on the list of topic discussions for governments and businesses throughout the Caribbean.

    Cloud Computing is a catch-all term used to describe the tools and services offered by companies managed and controlled by remote data centers dotted around the globe. For example, in the region, Digicel has built a data center network in Jamaica and Trinidad and Tobago to host Infrastructure and Software services throughout the region to its customers. The two most prominent players in the market are Amazon with AWS (Amazon Web Services) and Microsoft with its Azure offerings.

    Cloud computing, as an opportunity, is still massive for the region, with IDC predicting more than US$2 trillion of global cloud sales by 2023, with a reasonably even split over the types of service offerings (IaaS 9%, Managed Services 20%, Consulting 16%, SaaS 19%, and Support 17%) for the Microsoft Partners surveyed. (Source: IDC Software Partner Survey, January 2020). The Caribbean will be no exception in the coming years.

    Digital Transformation and Work from Home

    You know I’m not a fan of this overused and abused term, but it still resonates with many businesses just starting to open their eyes to the prospect of integrating digital technologies deeply into their operations. And it is precisely because the term has become ubiquitous that there is now a serious drive around the world for companies to embark upon their transformative projects. COVID-19 has probably done more for the cause than any big-budget marketing campaign from the likes of Microsoft and Google.

    Lockdowns and new working practices are now beyond the point of being stop-gap solutions to stem a hemorrhage of income. Companies have been forced to experiment with new ways of working together, initially entirely remotely —which didn’t go down well for some— and now in a more hybrid mode. Indicators are starting to appear to suggest that we are in the midst of a sea change of working practices that legislation will likely adapt to. Vacant office space is at an all-time high in some cities, and immigration/emigration figures between different states in the USA show that a recalibration of resource distribution is taking place.

    The COVID-19 pandemic has forced a significant shift in remote working, collaboration and the perceived need to be in a single space to produce. For many years, we have talked about this possibility, but very few organisations have been able or willing to make the human and financial investments necessary to enable this new way of working. COVID-19 has come along and wholly blown all previous notions out of the water, making all but the most resistant organisations think deeply about how they can change their working practices to take advantage of a situation that is not likely to be resolved in the next few months.

    Digital Transformation will be the backbone, or the operating system if you will, of this adoption. Companies that adopt digital throughout the value chain will be those that adapt to the new.

    Security

    Security is becoming a defining differentiator for solutions that grow increasingly complex as the old-world security perimeters break down and we move more services to the cloud. Security is no longer limited to firewalls, passwords, and antivirus. Technologies like two-factor authentication (2FA), Virtual Private Networks (VPN), encryption, and Single Sign-On (SSO) services are increasingly in demand, and they are an example of expertise that is not readily available in the region. These services cannot be isolated from their implications for business, marketing, and operations, as sophisticated attacks are no longer driven by teenagers seeking bragging rights with disruptive cyber-graffiti exploits. In 2018, the island of Sint Maarten (Dutch side) suffered an incident that took government services offline for its population of 40,000 or so, and according to the Caribbean Council, Saint Vincent and the Grenadines was the target of ISIS-originated hacks on government websites, although details are scarce on the impact. Therefore, knowledge of the whole security stack and its integration with the business value chain is imperative to developing valued services and advice, such as risk management, BI, and data analytics.

    COVID-19 has provided an almost unlimited opportunity for individual, organised, and state actors to target users over COVID-19 fears. Just days after the UK Medicines and Healthcare products Regulatory Agency (MHRA) approved Pfizer’s COVID-19 vaccine for emergency use, a sophisticated hacking and phishing campaign was mounted in an attempt to steal information about the logistics process.

    Regulation

    Besides the global pandemic of COVID-19, if 2020 has taught us anything, it is that regulation of a largely unregulated sector of the economy is about to start in earnest. Initially, it is likely to mostly affect multinationals such as Google and Facebook. However, make no mistake: much of the legislation implemented at national and local levels will affect businesses down to the small suppliers of technology. The GDPR of 2016, implemented in 2018, ostensibly protects European citizens from personal data transfer and data mining abuses. It affected every company on the planet that needed to collect and store the personal data of European citizens. Online marketplaces and social media sites were the legislation’s apparent targets, but any business that dealt internationally was required to appoint and train a Data Controller and a Data Protection Officer responsible for ensuring compliance. Liabilities and penalties for non-compliance were harsh, the most publicised being a fine of up to €20 million or 4% of annual worldwide turnover, whichever is greater, for a serious breach.
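The GDPR penalty ceiling is usually read as "€20 million or 4% of worldwide annual turnover, whichever is greater", which works out as a simple maximum. A quick sketch of that arithmetic (illustrative figures only):

```python
def max_gdpr_fine(annual_turnover_eur):
    """Upper bound for a serious GDPR breach: EUR 20 million or 4% of
    worldwide annual turnover, whichever is greater."""
    return max(20_000_000, 0.04 * annual_turnover_eur)

# A firm turning over EUR 100 million: 4% is only EUR 4 million, so the
# flat EUR 20 million ceiling applies.
print(max_gdpr_fine(100_000_000))    # 20000000

# A firm turning over EUR 1 billion: 4% is EUR 40 million, which
# exceeds the flat floor.
print(max_gdpr_fine(1_000_000_000))  # 40000000.0
```

The crossover sits at €500 million of turnover; below that, the flat €20 million figure is the binding ceiling.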

    GDPR is but one example, with others becoming hot topics in the coming years: COPPA (Children’s Online Privacy Protection Act), Do Not Track legislation, ePrivacy regulations, the Digital Markets Act, and the Digital Services Act, for example. In this climate, businesses will be required to keep abreast of current and upcoming legislation and continually implement training, auditing, and compliance adjustments, drawing on training and consulting services from specialists.

    Many industries are subject to specific technical regulation, such as pharma, oil & gas, finance, utilities, and cars. Tech and ICT are about to join that list, with specific regulations addressing specific issues. It is essential to understand that rules are not usually implemented as broad-brush solutions; regulation is highly targeted to treat a particular problem as defined by the various regulatory authorities. “Banks” have never been regulated as such; only specific products and services in the banking industry are regulated: deposits, credit cards, pensions, trading, mortgages, and futures and options, by way of example.

    Regulation may also lead to an increase in digital sovereignty, with the above example of GDPR showing how this may come to be. The Great Firewall of China is another, extreme, example, and the fact that China and India now account for more internet users than the rest of the world combined shows how the balance of power over the Internet is moving from US-centric to something more international. One thing to bear in mind is that regulation is designed to protect a specific point of view. The US-centric perspective is more about keeping prices lower —which explains why free-to-use products have mainly been let off until now— whereas the European-centric position is about healthy competition and consumer protection. This divergence will play out over the coming years and influence every stratum of business.

    Digital Health

    In an era of ubiquitous access to internet-connected devices from almost anywhere, one pre-pandemic concern was a question humanity has asked in one form or another for centuries: is the next generation spending too much time with technology? Much debate was had over the amount of time people were spending with technologies connected to the Internet. Screen time was such a hot topic that many software providers stepped up with solutions to monitor and control the time people, particularly children, could spend on these devices.

    It is in this direction that new insights about screen time will evolve, and it will be a debate about quality, not quantity. It will be about how we can implement “good” screen time and then monitor and control it. It will be about preventing “bad” screen time with quantifiable justification and suggested preventative solutions. This will likely affect the education sector hugely by providing tools better adapted to this new paradigm. We are only just at the end of the beginning of a change in digital health.

    Disinformation

    Disinformation and conspiracy theories have been part of human nature for millennia. However, recent technical advances like social networking and recommendation algorithms have fuelled the spread of, and belief in, such disinformation to an extent never witnessed, with consequences for society and possibly even democracy. Today, businesses focusing their marketing and revenue-making activities online should be aware that they could be subject to organic and organised campaigns to discredit their work, profession, or any other attribute that is the current target for attack. An example of this extreme took place in the United States, the United Kingdom, and Italy, where nurses suddenly went from national heroes to national conspirators over misinformation about coronavirus vaccines.

    Disinformation is no longer the “graffiti of the Internet”; it is being used politically and weaponised in cyber-attacks throughout the world—one to watch for.

    Automation and Artificial Intelligence

    Over the last few years, we have seen an explosion in the number and prevalence of automated systems, from website chatbots that provide first-line support services to deeply integrated automation-development platforms such as Zapier and Microsoft Power Automate. Many predict an increase in spending on automation over the next three years, with services companies well-placed to take advantage of this opportunity by providing help in implementing these systems.

    Two main types of automation are emerging as development targets: Robotic Process Automation (RPA) and Business Process Automation (BPA). RPA can be applied to many general and industry-specific tasks in areas such as procurement, marketing, HR, retail, telecommunications and banking, to name a few. BPA is best suited to the processing of unstructured data sets such as voice, images, and natural language, and often relies on Artificial Intelligence to deliver line-of-business help, for example real-time translation over video conferencing. GPT-3 is one such language model that has produced human-like text for interactions through chatbots and the like. Currently, only developed markets such as the US, Europe and Asia-Pacific are investing heavily in these capabilities, but the Latin American market is predicted to grow five-fold by 2025, according to learnbonds.com.


    I don’t know what the future will hold, of course, but looking at trends in the wider world can at least give us a heads-up that can help us better understand and adapt. Here’s to a better 2021.

    Thanks for being a supporter, have a great day.


    If you enjoyed The Future is Digital I’d really appreciate it if you would share it to those in your network.

    Share

    If this email was forwarded to you, I’d love to see you on board. You can sign up here:

    Subscribe now

    Visit the website to read all the archives.

    ––––––––––––––––––––––––––––––––

    1 Despite everything

    → 27 January 2021, 15:45
  • The state of Digital Transformation in 2020 and some follow-up on the disruption of Intel

    My apologies for the long lapse between essays. If I’m honest, the world has been just too much to afford me the mental space to think, write, edit and record them. I’ve taken on more responsibilities and simultaneously fallen down the rabbit hole of the US presidential elections. And, despite not being directly affected by the ousting of the “Great Orange Liar”, I can’t help but be touched on a personal level. Seeing a once-great nation disintegrate in front of my eyes bothers me greatly. That’s all I’ll say for the moment.

    On a more positive note, as we draw ever-closer to deploying what might be a solution to our number one problem of the novel coronavirus, I thought I’d take a look at what the pandemic has exposed and what digital transformation means today.

    One last note on housekeeping: the keen-eyed will notice that I have moved this newsletter to my company’s domain (dgtlfutures.com). Everything else stays the same, and all the existing links still work. It is now easier to find, at newsletter.dgtlfutures.com.


    The state of Digital Transformation in 2020

    When the pandemic appeared to be a real threat to the countries in which it took hold, many, including myself, estimated that it might be the impetus required to get companies to start, deploy and finish their digital transformations. The reality couldn’t be further from that expectation, and it comes back to some difficulties I’ve been discussing for years.

    Let’s take a look at where we are in digital transformation today. I’ll go on to show what is missing, what is needed and how we get there.

    The first wave - The implementation of computerised back-end systems

    The likes of IBM with their AS/400 and DEC with their VAX minis and mainframes were the masters of this. Not because it was hard, but because it was easy and it was a licence to print money. Ever since the first VisiCalc spreadsheet showed the CEO that he or she didn’t need to calculate the columns and rows manually (or have one of the minions do it), it was inevitable that computerised systems would penetrate deeper and deeper into businesses. Those that could afford the outrageous costs of the time were given an advantage not seen since the invention of the wheel (the innovation that disrupted the movement of atoms from one place to another).

    Quickly, IBM and their competitors spun up massive sales teams that crisscrossed the globe demonstrating and selling stock control systems, basic accounting ledgers and simple statistical reporting programs. On the back of this, software-focused companies ramped up work producing ever more complex designs that piggy-backed on the already-deployed hardware. Being that the business model of the IBMs and DECs of the era was to sell high-margin hardware and lucrative support contracts for that hardware, they were pleased to let the software houses integrate their software as it made the hardware even more desirable.

    This symbiotic relationship even gave rise to the “Killer App” moniker we use today.

    The businesses that needed to make ever-quicker decisions wrote practically open cheques to the manufacturers, as they were confident that the returns on the investments would outdo the spend. And they were right to believe this, as the entire industry structured itself to become a self-fulfilling prophecy.

    A long time ago, I interviewed for a support role in one of the world’s largest banks, in their London office. I was applying for a position as a support engineer that would be dispatched to the trading floor. I was genuinely intrigued by the floor and asked if it would be possible to see the environment in which I would be working. After a short pause, the IT manager agreed to the unusual request and led me down the stairs to the big oak doors that displayed an ominous sign on a big brass plate. “Do Not Feed The Animals” it read. I chuckled and braced myself for the spectacle that was a Trading Floor in the early 90s. I’ll tell that story another day. But what I most remember was that each trader had two complete AS/400 systems under their desks. This, a room with perhaps 60 to 100 people in it.

    That would be a multi-million pound budget by today's rates. But this was standard issue, as the opportunity cost was so high for traders that were just that slightly slower than their competition during trading hours. Their killer app was the trading platform that operated on super-slim timing to augment the trader’s abilities.

    The second wave - The paperless office

    After the fury of this first wave of digital transformation, it was clear to businesses that they needed to go further, and hunting season was declared on paper—a hunting season that has not, by any stretch of the imagination, finished yet even in 2020.

    But that is beside the point.

    End-user facing documents and reports were the next low-hanging fruit, and businesses proceeded to digitise these objects as and when they could based on the technologies at their disposal. Very few companies bought software with the sole intention of moving paper reports to digital versions of themselves. It was just the icing on the cake for most. So now, timesheets, TP reports (see Office Space) and countless other document types were converted.

    Operators would export data from the ERP systems like SAP, import them into Excel and produce the charts that ended up in Word and PowerPoint documents. But even at this stage, paper wasn’t entirely eliminated, as often these reports (and I see this still today) were printed out in colour and distributed manually or by mail (the physical internal and external postal systems) for analysis and treatment. At some point, people realised that this could be made more efficient. With the advent of internal email systems gaining popularity, sending the PPT over email was the method of choice that enabled faster and better “collaboration”.

    I used the quotes because, by today’s standards, nothing could be further from the truth of what collaboration is. The back-and-forth of individually saved and edited documents on the network led to an exponential growth in data storage needs for both the email systems and the personal data storage systems.

    When working on sophisticated archiving systems, I discovered that it was not uncommon to find approximately 100 copies of the same document on the network. That one email sent to 20 colleagues, each copy saved to their “personal space”, produces 41 copies in a single step: the original, 20 mailbox copies and 20 saved copies.
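    That arithmetic is easy to sketch. Here is a toy model of the duplication; the assumption that every recipient saves a copy (the `save_rate`) is mine, for illustration only:

    ```python
    # Toy model of document duplication on a shared network: one original,
    # one copy per recipient's mailbox, and one more per recipient who
    # saves it to their personal network space.
    def copies_after_email(recipients: int, save_rate: float = 1.0) -> int:
        original = 1
        mailbox_copies = recipients
        saved_copies = int(recipients * save_rate)
        return original + mailbox_copies + saved_copies

    print(copies_after_email(20))                 # 1 + 20 + 20 = 41
    print(copies_after_email(20, save_rate=0.5))  # 1 + 20 + 10 = 31
    ```

    Forward the email a couple of times and the count compounds, which is how a single document ends up existing a hundred times over on the storage systems.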

    The third wave - The age of collaboration

    When the limitations of this model became apparent, and the software had caught up, we started to build out specific collaboration software to address them.

    As a side note, it must be said that IBM was (not for the first time) way ahead of this curve. Whilst the likes of Novell and Microsoft were supplying the pipes to connect businesses with unstable and simplistic networking hardware, IBM bought and extended a company that had built a virtually limitless collaboration system that was too much, too early. Lotus Domino was the first “proper” collaboration tool that let businesses easily deploy just-what-was-needed software to decision-makers. It included storage, sharing, app-building and elementary database capabilities that were far ahead of the curve at the time. Its complexity and, frankly, hostile user interface were part of its downfall, but it was an essential step in building the interface between computing and business.

    Fast-forward to pre-pandemic and the state of collaboration today, and we see that companies that had implemented the new generation of basic collaboration systems could provide some semblance of business continuity, whilst those who hadn’t yet taken those steps scrambled to implement tools, shoehorning them into day-to-day operations with varying degrees of success, it should be noted. The pandemic has forced many companies to evaluate whether the tools work. They do work, that is for sure, and work surprisingly well, as they are built on years of research and experience testing. But forcing them on to unprepared staff will only expose the frailty of your operations if you don’t do the hard work of real digital transformation.

    But back to the pandemic. Businesses that have been forced to close their offices and shops to the public have turned to their most pressing problem: managing the customer relationship. How do I sell to someone who would previously wander around my shop for 15 minutes before picking an item and purchasing it?

    Facebook and WhatsApp, for example, have provided a means to interact with the client, and possibly drive some sales. I’d argue that those sales were probably not lost in the first place, but let’s not split hairs. However, they don’t address the fundamental problem of a wholesale shift in the customer journey from discovery to purchase and beyond. Plasters on a broken leg might make you feel a little better, but they don’t repair the break! So, as we progress through the pandemic, most are preoccupied with the customer-facing elements, ignoring the opportunity to implement real change that would set them up for the afterworld.

    In trying to schematise the idea, I’ve settled on three blocks: the back-end, the operations and the customer-facing parts.

    [Image: schematic of the three blocks: back-end, operations and customer-facing]

    Source: Matthew Cowen

    The back-end has been deployed for many years and is efficient at what it does. What it doesn’t do, however, is the problem we have today. Legacy AS/400 and DEC systems are still prevalent all around the world, and those legacy systems are notoriously difficult to interface with, notoriously poor at real-time, and notoriously poor at providing reusable data for business intelligence, or BI.

    But the customer-facing elements are the new centre of focus. It doesn’t take much work to find hundreds and hundreds of businesses that will increase your visibility online and help to market and promote your wares, and that’s just in our region. If you think about Internet assumptions, each one of these businesses competes with the millions around the world providing the same thing. It’s the reality of the Internet. But let’s not get hung up on that, and focus instead on the value-added service they’re providing in a world where it is increasingly challenging to be found. This value-add is limited in that it doesn’t interface well with the real issue for businesses tackling digital transformation: the operations and the back-end. You can sell online, great, but how does that affect the whole value chain and all the interlacing parts in your business?

    The fourth wave (we’re not here yet) - Reimagining the value chain

    The elephant in the room is that big block in the middle: Operations. Real digital transformation comes from looking at the whole can of worms that makes up your daily operations. The simple tasks, right up to the complex multi-layer, multi-purpose and multi-approval workflows.

    I’m not diminishing the value of the fire-fighting going on today in any way (it is a case of survival in many instances), but the real work should start now. Businesses need to evaluate in detail every single process that makes up their very existence. In assessing them, they need to determine three things: 1) the reason a process exists: is it there simply because that’s the way it has always been done? 2) the worth of the process, or to put it another way: what value does that process bring to the business? And 3) the justification for the process: not the reason, nor the individual value, but an evaluation of how it fits into the whole. Is the whole greater than the sum of its parts?

    Redesigning and redeploying those processes is necessary, and it is the only path to digital transformation that will bear fruit in the future. It will undoubtedly call your back-end into question and will almost certainly change your front-end. And as hard as it will be, it will likely be the difference between your business’s prosperity and its ultimate demise.


    Intel’s Disruption Followup

    I’ve been studying and writing about the disruption taking place on Intel’s x86 line for several years. I’ve written substantially about it here (here and here) in this newsletter and in unpublished form. I explained why it is hard to spot disruption, even if it is happening in front of us. It becomes doubly hard when we are focused on our businesses staying alive, as so many of us are in this current pandemic:

    From Intel’s Pohoiki Beach and Disruption Theory:

    The problem with theories like these is much the same problem we have when we discuss human or animal evolution. We find it hard to understand the future direction of the evolutionary process in real-time, mostly because it happens so slowly and over many generations. From a retrospective position, we can see what happened, and we can often even make an informed guess as to why it happened. Reliable prediction, it seems, is just unreachable. With modern digital technologies and the pace at which they evolve, we might be able to see enough into the future to discern and predict outcomes for companies in this new world.

    I described the history behind Intel’s disruption:

    Intel is a well-run, long-established microprocessor design and fabrication company, with a phenomenal marketing arm and deep links to the most important companies in the computing industry. Founded in 1968, a year before AMD, it has run head to head with AMD and beaten it in nearly every battle, on just about every level that means anything: marketing, price, availability and design, to name a few.

    The new entrant in the microprocessor market is known as ARM, or as it was previously known, Advanced RISC Machines, and before that Acorn RISC Machine — giving you an idea as to its origins powering Acorn Archimedes personal computers. Founded in Cambridge, UK, in 1990 (22 years after Intel), the processor design was a complete revolution and rethink of classic processor design, with the clue in the company’s original name: RISC.

    RISC means Reduced Instruction Set Computer. The instruction set of a processor is fundamental to how the processor behaves and, more importantly, to how it is directed to do things — what is more commonly known as being programmed (modern terms such as coding mean essentially the same thing). Different microprocessors can share the same instruction set, allowing programmers to write the same code, or allowing the compilers — software designed to turn human-readable code into native machine language that is virtually impossible for a human to understand — to translate it into the same instructions.

    Compared to ARM, Intel microprocessors are CISC, Complex Instruction Set Computers. Without going into microprocessor design, an instruction is a type of command run by the processor to achieve a desired outcome, like a multiplication, division or comparison. Complex instructions can be of variable length and take more than one processor cycle to execute (processor cycles govern how fast the microprocessor can operate), but they are more efficient in memory (RAM) usage. RISC instructions are simpler and standardised and, critically, take only one processor cycle to execute. The trade-offs are that they manage memory (RAM) less efficiently and require the compilers to do more work when translating code into machine language, i.e., potentially slower development times whilst waiting for compilation to finish.

    The memory trade-off was only an issue until recently, when memory became effectively abundant and cheap, allowing hardware designers to incorporate huge amounts of RAM in their designs.
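    To make the trade-off concrete, here is a deliberately simplified toy model. Real instruction sets are far more complicated than this, and the instruction and cycle counts below are invented purely to illustrate the contrast described above, not taken from any actual processor:

    ```python
    # Toy illustration of the CISC/RISC trade-off. A CISC-style machine
    # might offer a single, variable-length multiply instruction that takes
    # several cycles; a RISC-style machine expresses the same work as
    # several fixed-length instructions, each completing in one cycle.
    def cisc_multiply_cost() -> tuple[int, int]:
        instructions = 1   # one complex instruction: denser code in memory
        cycles = 4         # but it occupies the processor for several cycles
        return instructions, cycles

    def risc_multiply_cost() -> tuple[int, int]:
        instructions = 4   # compiler emits more instructions (load, multiply
                           # steps, store): larger code, more compiler work
        cycles = instructions * 1  # each instruction takes exactly one cycle
        return instructions, cycles

    print(cisc_multiply_cost())  # (1, 4)
    print(risc_multiply_cost())  # (4, 4)
    ```

    The point of the sketch is that neither approach is free: CISC spends cycles per instruction and saves memory, RISC spends instructions (and compiler effort) and keeps each cycle simple and predictable, which is what made it so attractive once memory stopped being the constraint.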

    At the beginning of my writing, I’d tried to frame it in terms of Clayton Christensen’s Disruption Theory, an observation that was, at that time, not frequently put forward. A recent blog post on Medium by one of Christensen’s students vindicated what I’d been writing about for a long time. Have a read if you’re interested in the theory. He does a great job of articulating it.

    But it got me thinking about how one could spot disruption or, more importantly, how one could use the framework to try to provoke disruption. In this essay, I’m delving into that first point.


    How do you spot disruption?

    Going back to the basics, and ignoring all the incorrect uses of the word —no, disruption is not just doing the same thing but cheaper— I thought I’d try to give you the tools to see disruption happening in your markets.

    The big red flag to look for is a product or service that does some of what your existing product or service does, but does it better on a few simple metrics: price, efficiency and friction.

    Cheaper is not disruption

    A less-expensive product alone is not a sign of disruption. What is, however, is a product that evolves steadily, providing more and more features and competing on more and more parts of your product or service, all whilst its price is significantly lower than yours. Then again, it might just be a cheaper product to produce and sell. You have to look at it from the customer’s point of view. Is it fulfilling the job to be done? Is it good enough that it makes your buyers question the need to pay your prices? Can your customers successfully integrate and use the competing product, despite its shortcomings compared to yours, and still find value?

    Efficiency is key

    If the competing product is more efficient, be that in the sales cycle, the use cycle or the complete lifecycle of the product, buyers will, of course, investigate and evaluate that product. Can the product do essentially the same job as the existing product on the market? Does it do this faster and better, with more predictable outcomes? Efficiency might afford your competitor lower costs too.

    Friction is proportional to sales

    Friction is often overlooked as an important force influencing buying habits. The more you reduce friction in the purchase and reception of the product, the more likely the buyer is to choose your product over a competing one. If the new entrant has substantially reduced this friction compared to the friction required to purchase your product, this should be another indicator that things might be difficult for you from this point forward.

    Loyal customers may soften the impact at first, but even the most faithful will switch if the product fulfils several of the criteria I have highlighted above.

    But that alone is not enough. You need historical data, or you need to develop a way to project into the future and make assumptions about where you think the product or service is going, or where it could go. Just look at the image that explains disruptive innovation below. Comparing the “Entrant’s disruptive trajectory” line with the “Mainstream” line shows how, eventually, a disruptive innovation will surpass the mainstream product or service when “performance” is evaluated on the metrics I’ve highlighted above (to name a few).

    [Image: HBR disruptive innovation chart, showing the entrant’s disruptive trajectory eventually crossing the incumbent’s sustaining trajectory]

    Source: HBR

    If you are the incumbent and you wish to stay profitable and dominant, you have but two choices. Embark on a “sustaining trajectory” or innovate your own disruptive innovation. Just be aware, a sustaining trajectory has its upper limits!
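    The crossover in that chart can be sketched with a back-of-envelope model. Every number below is invented for illustration: an entrant that starts well below the incumbent on the measured performance metric, but improves faster each year.

    ```python
    # Back-of-envelope model of the two trajectories: the incumbent improves
    # slowly on its sustaining path, the entrant improves quickly from a
    # lower base. Returns the number of years until the entrant catches up.
    def crossover_year(incumbent: float = 100.0, entrant: float = 40.0,
                       incumbent_growth: float = 0.05,
                       entrant_growth: float = 0.20) -> int:
        year = 0
        while entrant < incumbent:
            incumbent *= 1 + incumbent_growth  # sustaining trajectory
            entrant *= 1 + entrant_growth      # disruptive trajectory
            year += 1
        return year

    print(crossover_year())  # with these assumed rates: 7 years
    ```

    The instructive part is how unthreatening the early years look: for most of that window the entrant is still visibly "worse", which is exactly why incumbents so often dismiss it until the lines cross.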


    The Future is Digital Newsletter is intended for anyone interested in digital technologies and how they affect their business. I’d really appreciate it if you would share it with those in your network.

    Share The Future is Digital

    If this email was forwarded to you, I’d love to see you on board. You can sign up here:

    Subscribe now

    Visit the website to read all the archives.

    Thanks for being a supporter, have a great day.

    → 17 November 2020, 15:43
  • Digital Commerce. Blockchain (again)

    If you listened to the podcast version, you’d note I added some music. I broke out my skills in Garageband to make a quick accompanying jingle to spice up the podcast. Let me know what you think. 🎵

    On to this week’s topics.

    I’m astonished I didn’t get roasted for completely dissing Blockchain as a useless technology a couple of weeks ago. I thought I’d talk about an example where I’m actually quite bullish about the technology. But first I wanted to expand upon a thought I had following a conversation I’d had with someone, discussing why digital commerce is so different from brick and mortar commerce. Follow on for my thoughts.


    The unique challenges of digital commerce for physical goods

    The difficulty for online retailers selling physical goods in the digital economy is that the value of digital products is significantly reduced and, in some cases, virtually zero. That inherently puts pressure on the value of physical goods that are commoditised. Fortunately, luxury goods are seeing less pressure on their perceived value, but that is more a function of time than of real value.

    Luxury goods houses and retailers are seeing these changes and are starting to act. Apple today has changed its retail sales processes to resemble more of a luxury brand one-to-one service rather than a Walmart get-it-off-the-shelf-yourself operation.

    From Apple’s press release:

    When iPhone 12, iPhone 12 Pro, and iPad Air are available Friday, October 23, customers can get their all-new products directly from Apple through tailored purchase experiences offered online, by phone, or in store. From a chat session with a Specialist that starts online and finishes with contactless delivery, to visiting select Apple Store locations for a one-on-one session with an Apple Specialist, customers can find the best way to get the products they’re looking for.

    [Image: Apple infographic on the new ways to shop for iPad Air, iPhone 12 Pro and iPhone 12]

    Source: Apple

    So, the question is, how do businesses make money now that the products they sell are virtually worthless (economically speaking)?

    Answers to some of that lie in the services surrounding the product. Be that sales (see above), installation/delivery, support, subscription, ongoing help and many other possibilities, the key lies in giving the customer an ‘experience’ rather than a sale. Deriving value comes from developing and innovating on several levels to create a whole greater than the sum of its parts.

    If we look at the above example from Apple, you’ll note they’re selling a phone. But that phone is so much more than a simple “… widescreen iPod with touch controls, a revolutionary mobile phone and a breakthrough internet communicator”. The iPhone has replaced, by some estimates, over 50 products: mobile phone, point-and-shoot camera, torch, calendar, SatNav and personal assistant, to name a few. But it is a product in a sea of other products designed to resemble each other closely. There is little material difference between the flagship Samsung and Google devices that compete directly with the iPhone. And smartphones themselves are becoming commoditised, as evidenced by the smaller and smaller gains made in hardware design and the technologies deployed. The differentiator is the software and what that software can enable the hardware to do for the overall end-user experience.

    Two excellent examples are computational photography and health analytics.

    In computational photography, we are nearing the phase whereby even specialist cameras of the past are being innovated out of existence. It is only a matter of time before the computational aspect will outperform pure optical limitations of smartphone camera modules. In health, simple movement sensors initially enabled step tracking, instantly killing a growing market segment, and eventually enabled detailed sleep tracking that has (itself) been out-innovated by smartwatches. It is no longer science fiction to imagine the doctor’s office on your wrist.

    The overall experience of owning these products and their potential beyond the initial use-case is what I mean when I say ‘customer experience’. Apple has gone a step further and developed a Covid-friendly, white-glove shopping experience previously reserved for the rich and famous. You book a one-to-one either in-store or directly on the Apple Retail site, and you are led through your purchase to contactless delivery or pick-up. It is a personal shopping service for the rest of us.

    Who doesn’t want to be made to feel special when buying something?


    Blockchain. Again

    I’m surprised I didn’t get more of a roasting from my somewhat sceptical articles about Blockchain here and here:

    According to a detailed academic-style “peer-reviewed” study by the Centre for Evidence-Based Blockchain and reported in the FT:

    “… outside of cryptoland, where blockchain does actually have a purpose insofar as it allows people to pay each other in strings of 1s and 0s without an intermediary, we have never seen any evidence that it actually does anything, or makes anything better. Often, it seems to make things a whole lot worse.”

    Worse, the report repeatedly highlights that the technology is a solution looking for a problem: the antithesis of the Jobs to be Done theory that helps us better design and provide solutions. With over 55% of projects showing no evidence of useful outcomes, and over 45% showing only “unfiltered evidence” (i.e., next to worthless), it would appear that Blockchain is a belief system rather than a technological solution.

    And …

    …it is a huge energy consumer and hence by definition is inefficient. That, sadly, is not its only efficiency problem. Blockchain is actually extremely limited in its speed and quantity of transactions and scales poorly. So much so that in 2016 several banks exploring the possibility of using the technology in the personal and business banking sector abandoned the work as blockchain was just too slow.

    Quite the downer if I’m honest. But whilst many projects show no use for Blockchain, some show promise. One such example comes from Dominica’s Economic Growth team at the Climate Resilience Execution Agency for Dominica (CREAD).

    They are currently developing a parametric insurance product that uses blockchain technology to help small businesses, and those typically underserved by traditional insurance products, manage their risk of natural disasters in an innovative way. It’s called BHP, or Blockchain Hurricane Protection. From the article on LinkedIn:

    BHP aims to extend coverage to those excluded from traditional indemnity insurance, and provide Dominicans with a flexible and affordable tool for managing climate risk. Total damage from Hurricane Maria which struck the island as a category 5 storm on September 18, 2017 was US$1.3 billion, representing 226% of GDP. Uninsured losses were US$1.1 billion, or 86% of total damages. Damages to the housing sector totalled US$520 million. MSMEs suffered US$73 million in damages, and agriculture also suffered US$73 million in damage. In the years since Hurricane Maria, premiums for traditional indemnity policies have increased by more than 80%.

    This was the background to Dominica’s drive for innovation to better protect itself after multiple incidents that substantially affected citizens and businesses over the last decade.

    So, what is parametric insurance, and why blockchain?

    From Wikipedia:

    Parametric insurance is a type of insurance that does not indemnify the pure loss, but ex ante agrees to make a payment upon the occurrence of a triggering event. The triggering event is often a catastrophic natural event which may ordinarily precipitate a loss or a series of losses. But parametric insurance principles are also applied to agricultural crop insurance and other normal risks not of the nature of disaster, if the outcome of the risk is correlated to a parameter or an index of parameters.

    What’s great about this project is that it is intelligently using technology in the right places to fulfil the “Job to be Done”. Again, from that LinkedIn article:

    Once customers have downloaded the mobile wallet to their smartphone, they simply indicate the level of coverage that they would like to purchase, tag the location where they want the policy to apply, enter some basic information, and pay the premium. The policy is then issued and stored in the blockchain. In the event of a triggering event that meets the criteria of the policy, the payout is generated automatically and delivered to the customer's mobile wallet within three days of the triggering event.

    This product reduces friction at the critical stages of an insurance lifecycle; the signup and the payout.
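    The mechanism is worth spelling out: a parametric payout is a pure function of a measured parameter and the policy terms, with no loss-assessment step in between. As a rough sketch, and stressing that the tiers, thresholds and function names below are my own assumptions and not BHP’s actual policy terms:

    ```python
    # Illustrative parametric payout: the payout depends only on the measured
    # parameter (here, wind speed at the tagged location) and the coverage
    # purchased. No adjuster visit, no loss assessment.
    def parametric_payout(measured_wind_kmh: float,
                          policy_coverage: float) -> float:
        # Hypothetical trigger tiers, loosely keyed to hurricane categories.
        tiers = [
            (252, 1.00),  # category 5 winds: full payout
            (209, 0.50),  # category 3-4: half
            (119, 0.25),  # category 1-2: a quarter
        ]
        for threshold_kmh, share in tiers:
            if measured_wind_kmh >= threshold_kmh:
                return policy_coverage * share
        return 0.0  # parameter not met: no triggering event, no payout

    print(parametric_payout(260, 10_000))  # category 5 event -> 10000.0
    print(parametric_payout(90, 10_000))   # below trigger    -> 0.0
    ```

    Because the payout is mechanically determined by the trigger, it can be computed and released automatically the moment the parameter is recorded, which is precisely the friction the article says the blockchain-stored policy removes from the payout stage.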

    You’d be right in asking why a “normal” insurance product couldn’t do the same. And there’s no reason traditional insurance can’t reduce friction in the signup and management of the product. Payout is where the difficulty lies. Frequently, insurance companies need to wait for a “natural disaster” to be declared or, for a smaller incident, for assessors and inspectors to audit and report back to the insurer before the payout process can start, and that process can itself be lengthy and time-consuming.

    In this product, they are disrupting traditional insurance at a particular level — this is not general insurance — and that disruption, like all disruptions, is to the benefit of customers in the way of simplification and increased speed in onboarding and payouts. Not to mention the pricing that makes it more accessible and hence more likely to be adopted, benefitting all in the process.

    But the interesting aspect from a tech point of view is the use of blockchain. In this instance, it is playing to blockchain’s strengths and not trying to overcome its weaknesses (see above). And that’s the intelligent way to use it.

    BHP is a product that doesn’t need to scale to hundreds if not thousands of transactions per millisecond like the traditional banking systems deployed around the world. For one, Dominica (thankfully) doesn’t suffer a significant natural disaster every day, and secondly, its population is currently only around 73,000 people. Today’s blockchains are more than capable of sustaining the likely transaction requirements of this implementation. And even if BHP becomes available to a broader audience throughout the Caribbean, it is still unlikely to overwhelm the system, as processor designs and energy-efficiency gains are around the corner.

    A cursory glance shows that the major chip designers and builders are all exploring the possibilities of tailoring their products for blockchain applications to overcome the shortcomings of the technology.

    https://www.amd.com/en/technologies/blockchain

    https://www.intel.com/content/www/us/en/security/blockchain-overview.html

    Interesting times.


    You can find the archives of all my essays here:

    The Future is Digital Archives


    The Future is Digital Newsletter is intended to inform and entertain on the topic of tech and the digital world. Share it with someone you know who would like it.

    Share The Future is Digital

    If this email was forwarded to you, I’d love to see you on board, here’s where you can sign up spam-free:

    Subscribe now

    Thanks for being a supporter, have a great day.

    → 21 October 2020, 20:26
  • Nvidia, AI and the Big Picture

    There are currently many moving parts to the tech industry, and as tech becomes more and more pervasive in society, it is getting roped into discussions and judged by standards that never applied in the past. Debates range from whether big tech has built unmatched and unrivalled monopolies, and whether those monopolies are legal, to whether big tech will be responsible for the downfall of democracy and ultimately the next world war. I can’t give you a credible answer, but I can say that it is mentally draining to follow the tech industry. Not because there’s nothing to read and write about, but precisely the opposite. The fire-hose of news in this industry has relentlessly increased, driving an information flow-rate that is impossible to manage.

    Too much choice is ultimately a bad thing.

    I’m currently researching the ICT industry in the Eastern Caribbean, and the same data points appear continually. Small businesses are trying to survive by offering the same services competing with the same compatriots on the same value propositions. It is not only a zero-sum game, but it is also so misaligned to what is possible if we consider internet assumptions.

    I’ll write more about this topic in the future as I clarify my thoughts and the research reveals further insights.

    I thought I’d write a follow-up to the last newsletter, as pretty much right after I’d recorded and published it, news broke about the sale of that issue’s subject, ARM. Read on for my thoughts on this.


    ARM’s History

    When I wrote about the disruption of part of Intel’s processor design and build process, in the issues The Tale of Intel’s Disruption and Blockchain is Useless and Intel’s Pohoiki Beach and Disruption Theory, and delved further into Disruption Theory in Disruption Theory. Is Wizzee a Disruptor?, I was trying to give you an overview of Disruption Theory and how it may apply to your own business. I recommend you read those articles for better context for this essay.

    Getting back to that news. I was aware of the potential sale of one of the most important actors in that field, ARM Holdings. What I didn’t expect was such a quick sale, and a sale to a company that, logic would suggest, is not best suited to the type of business it was buying.

    Let’s back up here just a little and recap on the timeline and where I think this is going.

    ARM Limited, as it is now known, was initially incorporated in 1990 under the name Advanced RISC Machines. Funnily enough, even that wasn’t its original name. It was born as Acorn RISC Machine, from the Acorn Archimedes computer that was powered by the new microprocessor design. The change was apparently made at the request of Apple, which objected to a competitor’s name appearing in the name of a processor it was jointly responsible for designing and using in the ill-fated (but arguably necessary) Apple Newton. Advanced RISC Machines became ARM Holdings for the Initial Public Offering (IPO) that took place in 1998.

    In 2016, SoftBank, a Japanese telecoms company with an appetite for venture capital, purchased ARM for approximately 32 billion USD. The transaction guaranteed that operations would continue as they were: a UK headquarters, with offices in Silicon Valley and Tokyo. It allowed ARM to be close to the world’s disrupters and designers (Silicon Valley) and the world’s builders (Taiwan and China). ARM capitalised on this, and the catalogue of products that currently use ARM chip designs is unfathomable. Just about every device that requires a processor of some kind, and that isn’t a computer, contains an ARM chip. And that’s before we even talk about the just-starting revolution of the Internet of Things, or IoT.

    Image: nvidia

    ARM has just been sold to Nvidia for an announced price of 40 billion USD, an 8 billion USD premium over its purchase price, or a 25% profit over four years. SoftBank will retain a 10% stake in ARM too. This money will go some way towards stemming the haemorrhage SoftBank recently suffered when it indicated that it might lose up to 80 billion USD from failed investments. WeWork (cough, cough).

    As a recap, ARM makes no processors itself. And in some cases, it doesn’t even design the subsequent generations of some of its processor designs. ARM licenses its intellectual property, or IP, to anyone, following a long tradition of British tech design houses that have sprung up over the last couple of decades, like Imagination Technologies and ARC International. Depending on the licence terms, companies are more or less free to use the designs as they see fit. ARM presents itself as the Switzerland of processor designs, i.e., neutral. ARM reports that its designs are in around 180 billion processors in use to date.

    Qualcomm uses them for the processors that run a majority of Android phones, and most famously, Apple has a lifetime licence (from the days when it was one of the original designers) and uses that asset to design and implement the most advanced mobile processors currently on the planet. But even Apple doesn’t build those processors; it farms that work out to the specialist I mentioned in the last newsletter, Taiwan Semiconductor Manufacturing Company, TSMC. You can’t get a more explicit name reflecting a company’s primary purpose than that! Which leads me to where I think this is going.

    ARM-ing the Future

    The big question is why a graphics card builder like Nvidia would splash out on a chip designer.

    Part of the answer lies in the fact that Nvidia is itself an ARM licensee, and presumably that annual fixed cost disappears from the books now that it owns the company it used to buy a licence from. It’s an upfront investment that pays off over several years, and if ARM’s value continues to accumulate, the investment might be justified relatively quickly (from an accounting point of view).
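    As a back-of-the-envelope sketch of that accounting argument: ARM’s actual licence fees are confidential, so the figures below are purely hypothetical, but they show how the break-even period on buying out a recurring fee would be worked out.

    ```python
    # Hypothetical break-even on buying out a recurring licence fee.
    # ARM's real fees are confidential; these numbers are illustrative only.

    acquisition_premium = 8_000_000_000   # USD paid over SoftBank's 2016 price
    annual_licence_fee = 200_000_000      # hypothetical annual fee now saved

    # Years to recoup the premium from the licence saving alone
    break_even_years = acquisition_premium / annual_licence_fee
    print(break_even_years)  # 40.0
    ```

    On hypothetical numbers like these, the licence saving alone would take decades to recoup, which is why the strategic reasons matter more.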

    But I think it goes beyond that. I hinted earlier that ARM processors are just about everywhere and are being integrated into more and more devices in the form of smart tech. The fridge, the toaster and the espresso machine are all candidates for the coming home-smarts revolution. And the already processor-enabled world of home appliances, like washing machines, will be enhanced by the technological possibilities available to their builders.

    The TAM, or Total Addressable Market, for ARM’s IP is almost infinite. The ubiquity of Wi-Fi and the incoming 5G avalanche only reinforce the inclusion of ARM-type processors in devices: even the comms technology itself, the routers, switches and amplifiers, uses ARM processor designs. ARM is set to become the de facto processor of things that are not traditional PCs.

    Besides Apple, Microsoft is using more and more of the technology in its designs. The new Surface Duo is an ARM-based foldable phone/tablet hybrid with impressive screen technology, all running on a customised ARM design. The Surface Pro X, a new generation of the popular Microsoft Surface PC line, is also ARM-based, running a version of Windows compiled for ARM.

    Beyond Computers

    But again, it goes beyond this, to what will inevitably be as pervasive a technology as oil-powered personal transport has become. I’m talking about AI or Artificial Intelligence.

    From simple statistical models to more advanced language models like GPT-3, AI is set to be included in everything from your everyday phone to the entertainment system of the future. Think Blade Runner 2049. Where does Nvidia step in, then?

    From the Nvidia Deep Learning AI website:

    I AM A VISIONARY, A HEALER, A CREATOR, AND SO MUCH MORE. i am ai.

    POWERING CHANGE WITH AI AND DEEP LEARNING

    AI doesn’t stand still. It’s a living, changing entity that powers change throughout every industry across the globe. As it evolves, so do we all. From the visionaries, healers, and navigators to the creators, protectors, and teachers. It’s what drives us today. And what comes next.

    Many data scientists and technology teams around the world realised that they needed powerful processors to perform a highly reduced and specific set of calculations, which only specialised and extremely expensive supercomputers could handle. Supercomputer makers like Cray and IBM sold their systems to large research institutes and universities at high profit margins, on the back of their unique ability to perform massively parallel calculations rapidly, an important factor for workloads of that type.

    At the other end of the computing spectrum, users wanted better-quality graphics for video gaming and image manipulation. Nvidia started to design, build and sell specialised video chips to OEMs (Original Equipment Manufacturers) like Dell, for integration on their motherboards, a win-win situation: better graphics made computers more desirable as gaming or design machines. These designs evolved over the years and are now sold as separate cards for builders to include in their offerings.

    In a quirk of circumstance, the type of processing required to produce detailed and fluid graphics for games was also ideal for the type of calculations required for AI. At a fraction of the cost, scientists and researchers could equip ordinary PCs with a bank of graphics cards that could compete with multi-million-dollar supercomputers. The GPUs themselves are Nvidia’s own designs, but Nvidia is an ARM licensee for its system-on-chip products, which pair ARM cores with its graphics technology. As a pioneer in AI, Nvidia has developed a deep understanding of the field, as the website above indicates. That pivot let Nvidia surpass Intel a few weeks ago as the world’s most valuable chipmaker.
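    That quirk comes down to a shared primitive: both rendering a frame and running a neural network are dominated by large, data-parallel matrix operations. A toy sketch in Python/NumPy (illustrative only; real GPU code would use CUDA or a framework such as PyTorch):

    ```python
    import numpy as np

    # Graphics: transform a batch of vertices by a single 4x4 matrix.
    # AI: push a batch of input vectors through a dense neural-network layer.
    # Both boil down to the same operation: one big matrix multiplication,
    # which GPUs execute across thousands of cores at once.

    rng = np.random.default_rng(0)

    # "Graphics" workload: 10,000 vertices, each a 4-component vector
    vertices = rng.standard_normal((10_000, 4))
    transform = rng.standard_normal((4, 4))
    transformed = vertices @ transform               # shape (10000, 4)

    # "AI" workload: the same 10,000 vectors through a 4-in, 8-out layer
    weights = rng.standard_normal((4, 8))
    activations = np.maximum(vertices @ weights, 0)  # ReLU, shape (10000, 8)

    print(transformed.shape, activations.shape)
    ```

    The only real difference between the two workloads is which matrix you multiply by, which is why hardware built for one turned out to be ideal for the other.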

    With AI becoming ubiquitous, like it or not, the purchase of ARM, which Nvidia sees as a cornerstone of its technological chops, will no doubt allow Nvidia to extend its lead in the AI world.

    Challenges remain: ARM licenses its technology to Nvidia’s direct competitors, like AMD (Advanced Micro Devices), and just how Nvidia will navigate those forces is unclear. If you think about it, Nvidia has to develop and execute a way to operate and profit from two very different business models. Selling IP is nothing like selling processors, and that job has just become much more complicated given the inherently competing interests of the IP’s buyers and its new owner.

    For now, Nvidia has stated that the business will continue to run as-is, with Nvidia itself being just one of the many ARM customers that “buy” the technology. Only time will tell if they manage to pull this off, but I’m currently positive on the long-term prospects of the deal.

    Intel’s Disruption Train

    As you know, I firmly believe we are witnessing a considerable disruption of an incumbent in the tech industry. It doesn’t happen as often as the tech press would like you to think, but it is an exciting spectacle to observe from a purely theoretical point of view. There will be many articles, and possibly books, written on the subject, if we still have books by then.

    With the Nvidia acquisition of ARM, Intel’s woes in the mobile processor, AI and data centre fields have just got worse. Above, I purposely stuck to the more consumer-facing implications. But it is in the data centre that the next battle for processor superiority will soon commence in earnest. We’ve already seen the inroads Google and Amazon are making with the design of ARM-based servers that slot into racks alongside thousands of other servers, all furnishing your email, photo management and countless other modern-day necessities.

    The risk for Intel is that Nvidia successfully manages to integrate the ARM designs into its products much more tightly, producing even more effective cloud server designs for specialist applications like AI and machine learning. Nvidia could, of course, do this without buying ARM, but the acquisition may give it a head start that puts the nail in the coffin of a significant business for Intel.

    I have no crystal ball, and I may be horribly wrong, but I think the next few years will be critical for Intel’s survival in its current form. As I’ve noted in other essays, this is not lost on Intel, and it is making business decisions that keep its margins up and fix the short term. I would like to see Intel be bold and try to out-disrupt itself. This deal gives Intel an opportunity to profit from the industry’s scepticism about Nvidia’s long-term intentions and stun us with something truly new. It is time for Intel to look to the research labs for the next thing and give it a shot. That’s easier said than done, however.


    The Future is Digital Newsletter is intended for anyone interested in digital technologies and how they affect their business. I’d really appreciate a share to those in your network.

    Share The Future is Digital

    If this email was forwarded to you, I’d love to see you on board. You can sign up here:

    Subscribe now

    Thanks for being a supporter, have a great day.

    → 25 September 2020, 15:44