Computational

In this essay, I explore the meaning and consequences of Computational Everything. It’s a deep reflection on the current state of tech and a work in progress. Some of it might not be coherent or reach a conclusion, but it is all how my mind is thinking about the world today.

I have to say, overall, I’m more optimistic than I am pessimistic. Not that you could tell from this piece. 😁

What do you think? Hit reply or post in the comments.



We’re entering the era of computational everything.

For many of us in the IT industry, or those of us who work closely with tech, it comes as little surprise. What is surprising is just how much “computational” is enhancing and augmenting our capacity as humans.

Steve Jobs famously said that computers were bicycles for the mind. He cited a study in Scientific American that ranked species by the efficiency of their locomotion (distance travelled versus calories consumed). The condor came out on top, and humans ranked well down the list, but a human on a bicycle outclassed the condor entirely. Computational everything is exactly like that.

Much of what we do today, and have been doing over the last couple of decades or so with computers, has largely relied on skill. Your raw talent or your tenacity has generally determined whether you became highly skilled or an expert in your chosen domain: Malcolm Gladwell’s 10,000-hours principle.

But computational doesn’t mean computing. Let’s have a look at what I mean by computational.

Computational beginnings

Before computers became the ubiquitous tools we all interact with today, much work was completed manually. Simple tools, developed over hundreds if not thousands of years, enhanced calculations and reduced errors. From the abacus to the slide rule, these basic augmentation systems had a profound impact on humans’ ability to complete ever more complex tasks.

Moving forward, the first computers in the modern sense of the word, like ENIAC (Electronic Numerical Integrator and Computer) or the CNC (Complex Number Calculator)1, paved the way for people and businesses to exploit rapid, reliable calculation and produce results faster and with fewer errors than teams of humans. Business ledgers rapidly took advantage of the new possibilities, and they have essentially stayed the same since. Accounting applications today, despite being much faster, are ostensibly no different from the early versions, save a few bells and whistles.

Modern computers, or more precisely modern CPUs, have evolved to such an extent that an entirely new way of computing is now possible. Hence my choice of the term Computational.

And once again, ARM is at the forefront of this revolution in computational implementations. Apple’s in-house M1 processors, based on ARM designs, recently revealed and now used in its line of Mac computers as well as the new iPad Pro, feature a 16-core Neural Engine, the hardware required for Machine Learning. The computational advantages, the bicycle for the mind if you like, are startling, allowing developers and scientists to imagine uses beyond what used to be possible.

We’re starting to see some of that capability arrive in mundane productivity applications like Microsoft Word, and in writing augmentation applications like Grammarly. Translation tools are only just starting to show their promise, and tools like DeepL produce text so good that only the pickiest native speakers can spot errors… in multiple languages, too!

The bicycle for writing, the bicycle for… just about anything we do on computers, is upon us.

Computational photography

A few years ago, this new era of computational photography kicked off with the introduction of applications that could suggest improvements and more for your photos. Apps designed to turn your meh photos into something presentable.

Portrait Mode, Studio Lighting: these computational photography aids let us take photos of subjects in ways that were previously prohibitively difficult or expensive. Computational photography has only just started to scratch the surface of what is going to be possible. Take framing, for example. Framing is an integral photographic skill; learning how to frame an image is often the difference between a good photo and a great one.

New devices and apps will soon tell you how to frame the image. The phone, fine-tuned to analyse in real time the image hitting the sensor, will calculate in a split second that the tree on the left would look better a little further from the edge, or that the subject’s face would look better turned slightly to the right and aligned on the right-hand thirds line, prompting you to ask the subject to turn a little.
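To make that concrete, here is a minimal sketch of the kind of framing hint such an app might compute, assuming some detector has already located the subject in the frame. The function, names, and thresholds are all hypothetical, purely for illustration:

```python
# A toy framing assistant: given the centre of a detected subject and the
# frame width, suggest how far to shift it towards the nearest "thirds" line.
# Real computational photography pipelines are far more sophisticated
# (saliency maps, horizon detection, ML-ranked crops).

def framing_hint(subject_x: float, frame_width: float) -> str:
    """Suggest a horizontal adjustment towards the rule-of-thirds lines."""
    thirds = [frame_width / 3, 2 * frame_width / 3]   # the two vertical thirds lines
    nearest = min(thirds, key=lambda line: abs(line - subject_x))
    offset = nearest - subject_x                      # positive means "move right"
    if abs(offset) < frame_width * 0.02:              # within 2% of a line: good enough
        return "Nicely framed"
    direction = "right" if offset > 0 else "left"
    return f"Move the subject {abs(offset):.0f}px to the {direction}"

# Example: a face detected at x=400 in a 1920px-wide frame
print(framing_hint(400, 1920))  # suggests shifting towards the 640px thirds line
```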

That’s not all. These apps will post-process previously taken photos with the eye of a seasoned expert, not only extracting the optimal image quality from exposure, temperature, and a few other easily adjusted metrics, but also using Machine Learning to digitally reconstruct otherwise lost parts of the scenery (Pixelmator already does this to some extent). They will reframe the photo, and suggest or automatically apply, in a fraction of a second, a degree of touch-up that only the most trained and experienced could achieve in hours of work. The edit notes will all be saved, so that any one or more of them can be removed or modified manually.
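That last point describes what photo editors call non-destructive editing: the original pixels are never overwritten, and each adjustment is stored as a removable note. A minimal sketch of the idea, with invented names throughout:

```python
from dataclasses import dataclass, field

@dataclass
class Edit:
    """One reversible edit note, e.g. ("exposure", +0.3)."""
    name: str
    amount: float

@dataclass
class Photo:
    """A photo that keeps its original data plus a removable stack of edits."""
    original: bytes
    edits: list[Edit] = field(default_factory=list)

    def apply(self, name: str, amount: float) -> None:
        self.edits.append(Edit(name, amount))

    def remove(self, name: str) -> None:
        # Because edits are notes, any single one can be dropped later.
        self.edits = [e for e in self.edits if e.name != name]

photo = Photo(original=b"...raw sensor data...")
photo.apply("exposure", 0.3)
photo.apply("temperature", -150)
photo.remove("exposure")  # undo just one suggestion, keep the rest
```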

Computational audio

Just this week, Apple introduced Spatial Audio to its music streaming service Apple Music. Spatial audio enhances the separation between voices and instruments to provide a more “inside the music” feel to your listening. For now, these tracks are limited to a few thousand in the catalogue, with more on the way soon. But in the not too distant future, computational audio will be capable of real-time post-processing to enhance the audio you feed it.

Suddenly, old tracks, or tracks recorded to mono master tapes, will gain stereo or spatial audio simply on request.

Currently, Spatial Audio is a bit hit-and-miss, as it requires a good mix from a good producer, and not all are that good. Soon, computational audio will auto-analyse and remix music tracks to sound better with little human intervention.

Other forms of audio, from hearing aids to headphones, will all gain capabilities that augment the experience. Purists may baulk at this, possibly with reason, but it is an inevitability that will be widely welcomed by those who are just looking for a decent sound. Those with hearing issues will rejoice.

The computational audio applications don’t stop at playback; recording may be fundamentally affected too. Microsoft and Apple both ship devices with microphones that automagically enhance voice audio for online meetings. The pandemic possibly accelerated this, but it also provided the perfect testbed for even more integration of CPU and microphone to deliver a better and simpler recording experience.

Anyone who has run a podcast will tell you how important the microphone setup is to the sound quality of the recording, and hence the quality of the show.

Apple’s WWDC keynote featured just such an application with Voice Isolation mode. It showed a leaf blower running just behind the video conferencer being completely silenced by the computational audio application. It’s a “coming soon” feature that can only get better as processors and programming experience improve. Conversely, there is also a Wide Spectrum mode for when you want all the sounds included.
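Apple hasn’t said how Voice Isolation works under the hood, but the simplest relative of that family of techniques is a noise gate: measure each short frame’s loudness and attenuate the quiet ones. Here is a deliberately crude sketch (a real implementation would use trained ML models, not a fixed threshold):

```python
import numpy as np

def noise_gate(samples: np.ndarray, rate: int, threshold_db: float = -40.0) -> np.ndarray:
    """Crudely attenuate frames quieter than the threshold (toy voice isolation).

    A real system like Voice Isolation uses trained models, not a fixed gate.
    """
    frame = rate // 100                      # 10 ms frames
    out = samples.astype(np.float64).copy()
    for start in range(0, len(out), frame):
        chunk = out[start:start + frame]
        rms = np.sqrt(np.mean(chunk ** 2)) + 1e-12
        level_db = 20 * np.log10(rms)        # frame loudness in dBFS, for [-1, 1] audio
        if level_db < threshold_db:          # quieter than threshold: treat as noise
            out[start:start + frame] *= 0.1  # attenuate by 20 dB rather than hard-mute
    return out

# Example: one second of quiet hiss with a brief loud "voice" burst in the middle
rate = 16_000
audio = 0.005 * np.random.randn(rate)        # background noise
audio[7_000:9_000] += 0.5 * np.sin(2 * np.pi * 220 * np.arange(2_000) / rate)
cleaned = noise_gate(audio, rate)            # hiss attenuated, burst kept
```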

Computational everything

And this is where this essay has been leading. Too much time, effort, and too many words are spilt on Artificial Intelligence and how the robots are going to take over the world, and it does us no good. In some sense, yes, they are, of course. But that doesn’t automatically mean the world will be ruled and driven entirely by them. What it means is rather more subtle, possibly more like the framework I outlined above. With computational everything, much of our lives can be enhanced and augmented: the bicycle for…, if you wish.

Autonomous cars are still a long way off, but computational driving is already here and will get better. Lane Assist, Park Assist and the various cruise-control augmentations are nothing if not computational driving.

Even the most mundane of products can be augmented by liberal CPU usage. Your washing machine, for example, could be designed to reduce the water and detergent required simply by applying ML to the load in the drum. By analysing the incoming water (chemically), the type of detergent, and the clothes and their makeup, cotton or otherwise, this bicycle for washing machines would substantially contribute to more efficient use of scarce resources and perhaps even better control of the rejected “dirty” water.
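As a toy illustration of the kind of decision such a machine might make: every sensor input, coefficient, and number below is invented for the example, and a real machine would presumably learn them from data rather than hard-code them.

```python
def detergent_dose_ml(load_kg: float, water_hardness: float, soil_level: float) -> float:
    """Toy detergent-dosing rule: harder water and dirtier loads need more.

    Inputs would come from the machine's sensors; all coefficients are invented.
    """
    base = 15.0 * load_kg                           # ~15 ml per kg of laundry
    hardness_factor = 1.0 + 0.3 * water_hardness    # 0.0 (soft) .. 1.0 (very hard)
    soil_factor = 1.0 + 0.5 * soil_level            # 0.0 (light) .. 1.0 (heavy soiling)
    return base * hardness_factor * soil_factor

print(detergent_dose_ml(load_kg=4.0, water_hardness=0.6, soil_level=0.2))  # ~77.9 ml
```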

Without writing a listicle, I’m sure you can see a plethora of bicycle-for-the-… applications yourself.

Computational evil

It would be remiss of me not to discuss the negative side of this coming wave. As with all technology, it enhances not only those with good intentions but also those with bad ones. Those with evil intentions get to ride the bicycle too.

Augmented, or enhanced, everything means exactly that. Enhanced crime. Enhanced racism. Enhanced sexism, and so on. You cannot have the good without the bad.

The Internet has shown us this.

I’ve been an Internet fanboy for as long as I can remember, or at least since I learnt there was a big network of computers that we could all use, connecting us closer together.

It was fun in the 90s. I had to connect from the university network, JANET, to NIST (National Institute of Standards and Technology), and then out to the big bad world of what was still essentially ARPANET.

The World Wide Web was still in an experimental state at CERN; I hadn’t even heard of it at that point. We used the Internet like animals: terminal commands, long waits, FTP, Gopher, WAIS, and Usenet, none of the graphically oriented interfaces we see today. As I fell in love with the Internet, I stumbled upon an early copy of Wired Magazine in 1993, imported into my native UK, and fully bought into the idea that the Internet was nothing but good for the world.

It would connect us, it would open our eyes to other things, would educate us, and it would even feed us. It would completely revolutionise the way the world works, for the better. It was the bicycle for the social experience.

I had no idea at the time that the very connectedness of the world would actually drive us further and further apart.

We, as humans, can comfortably ingest, process, and analyse only a few cognitively demanding elements simultaneously. At school, you would have had only a handful of friends, and only one or two you could call a best friend. If, like me, you were in a large secondary school of around a thousand pupils, it was overwhelming to be in a general assembly with the whole school in one room. The morning journey to school, with its never-ending procession of pupils arriving; break time, with the crowd piling out of the building to run around the playground: all these people surrounding us are too many for one human to get to know, either intimately or even on a cursory level.

The Internet completely explodes that model, and we are confronted with tens of thousands, if not millions, of possible interactions while connected. Twitter, Facebook, Clubhouse, and their indifference to their capacity to overwhelm us, are creating a different type of human culture that is, in my view, detrimental to the world. Polarisation, populism, the immediacy of need: these are all consequences that are not conducive to the healthy development of the world, and a direct result of Computational Socialising.

I wish I had understood this when I was first becoming charmed by the Internet. Perhaps I could have contributed to doing something to protect us from its inevitable negative consequences. Funnily enough, it was all there for us to see even then. Read Neuromancer, and you’ll understand what I’m talking about.

It’s a dystopian view of the Computational Everything world I’m discussing in this essay.


I hope you liked the topic. Let me know.

Photo by Adi Goldstein on Unsplash

1

Interestingly, the CNC was also the first-ever live demonstration of remote computing. In 1940, a demonstration at Dartmouth connected to the CNC from a terminal over phone lines. Sound familiar?

Matthew Cowen @matthewcowen