Innovation in Technology
By Dr John Yardley, Managing Director, Threads Software Ltd.
Despite those awful clichés much beloved by the investment fraternity, it is unlikely that the technologies evolving today are any more “disruptive” than was the telephone in the 1900s or the steam engine in the 1800s. They both had the capacity to dramatically change people’s lives and they both had an all-pervading effect on every sector of society. Steam engines didn’t just end up in locomotives, they powered hundreds of industries from mining to weaving. Similarly, telephones didn’t just let people talk to each other. They made it unnecessary to travel and slashed communication time. They both caused people to lose their jobs, but they created many other jobs for them to fill. They enabled people to live longer and enjoy better health.
Today, we see a similar thing happening with Information Technology. The original need that led to the evolution of computers was the solution of mathematical equations. But mathematics is simply a modelling tool that can be contrived to represent almost any application, so it should be no surprise that IT has been the prime mover for almost every other technology.
Take genetics, for example. In the 1950s, Crick and Watson published their landmark paper on the structure of DNA – with a lot of help from X-ray pictures and 3D models made of sticks and balls, but virtually no help from computers. A typical strand of DNA – the building block of life – contains around 3 billion base pairs, and it is the various combinations of those pairs that define every characteristic of every living thing. Establishing what those sequences are simply could not have been achieved without computers. Sequencing DNA may not seem like a mathematical problem, but it most surely is – mathematics is not just about equations.
There is hardly any aspect of technology today that has not been significantly advanced by the application of IT.
However, there is more to it than that. Granted, steam engines were very inefficient and hardly bear comparison with even the combustion engines of today, but improvements in engine efficiency alone cannot account for the sort of gains that computers have made. Yes, we can now have a high-definition video conference with someone in Australia, but we do not actually exchange significantly more information than we did via a plain old telephone. So what is special about IT?
There are four essential ingredients that have combined such that the whole is massively greater than the sum of the parts. These are:
- Fast, cheap hardware
- Fast, cheap software
- Fast, reliable, cheap internet
- Cloud computing
Fast, cheap hardware
In the 1960s, a mass-produced 12-bit general-purpose computer with 4KB of memory cost over £100,000 in today’s terms. In 2021, an iPhone with a 64-bit processor and 2GB of memory costs less than £1,000, runs thousands of times faster and is thousands of times smaller. Small wonder, then, that you can do a lot more on today’s computers than you could in the 1960s.
Fast, cheap software
Important as the hardware is, it is nothing without the computer programs (or code) that it executes. In many ways, programs have become far less efficient than they were in the early days, when “bits” were so expensive that programmers would constantly strive to make their code more efficient – often at the expense of readability. But compared with the reductions in hardware cost, the gains in code efficiency pale into insignificance – so much so that programmers now commonly refer to data using long, descriptive names rather than a few cryptic characters. For anyone reading their code later, the benefits are easy to see.
Furthermore, despite the maturity of the software industry, programming as a discipline is not as simple as it once was. Writing a program to execute on one computer is a drastically different process from writing one that executes on several computers separated by potentially enormous distances – as happens with many of today’s web applications. If you write an instruction for a single computer, you may reasonably assume it will be executed in a fixed time. If your instruction involves communicating with another computer somewhere on the Internet, you have no idea how long it will take to complete – if it completes at all. This means the program code must never “lock up” waiting for something that may never happen.
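To make that concrete, here is a minimal sketch in Python – purely illustrative, not taken from any particular application – of a network request written so that it can never wait forever. The URL and the five-second limit are arbitrary assumptions.

```python
import urllib.request
from typing import Optional

def fetch_with_timeout(url: str, timeout_seconds: float = 5.0) -> Optional[str]:
    """Return the response body, or None if the remote machine does not answer in time."""
    try:
        with urllib.request.urlopen(url, timeout=timeout_seconds) as response:
            return response.read().decode("utf-8")
    except OSError:
        # Covers unreachable hosts and timeouts: give up gracefully rather than waiting forever.
        return None

body = fetch_with_timeout("https://example.com/")
print("No reply within 5 seconds" if body is None else f"Received {len(body)} bytes")
```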
But just as cheap hardware has more than mitigated the cost of inefficient software, it is sociology rather than engineering that has come to the rescue of software. This takes the form of open-source software – the general ethos of sharing your code for free.
A good programmer is not necessarily the person who writes code most quickly, but the person who produces solid, reliable code fastest. The available corpus of open-source code is simply staggering. You may wonder why anyone would want to give away code that has cost them money to write, but the answer is that, overall, the sharing philosophy is much more cost-effective. It is not unlike trade barriers on imports and exports – they generally cost more to administer than they collect.
Access to open-source code has not only removed the wastage of “re-inventing the wheel”; it has been a prime mover in encouraging the portability of code, making it much easier to adapt to different applications. For example, finding sequences of base pairs in DNA is essentially a pattern-matching exercise, and code developed for, say, finding abnormal genes might be equally applicable to locating a feature on a map.
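As a rough illustration of that portability – my own sketch, with made-up data – the same few lines of pattern-matching code can search a DNA string for a motif or any other stream of symbols for a feature:

```python
import re

def find_pattern(text: str, pattern: str) -> list:
    """Return the start positions of every (possibly overlapping) occurrence of pattern."""
    return [m.start() for m in re.finditer(f"(?={re.escape(pattern)})", text)]

dna = "GATTACAGATTACACCGATTACA"
print(find_pattern(dna, "GATTACA"))       # positions of a (made-up) gene fragment
print(find_pattern("--^^--^^^--", "^^"))  # the same routine finding a feature in any symbol stream
```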
The savings cannot be overstated, and some of the open-source code available represents literally hundreds of man-years of work. Take the recognition of human speech, for example. The input and output are very simple, but the process in between is immensely complicated. Taking an application that is driven by text and converting it to work with human speech is something no one would have contemplated before the turn of this century. Now, it is simply a matter of picking the desired process off the shelf, as the sketch below suggests. Which leads us to the third enabler – fast, reliable, cheap Internet.
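By way of example – and assuming the open-source SpeechRecognition and pocketsphinx Python packages are installed, with an illustrative file name – turning recorded speech into text is now a handful of lines rather than a research project:

```python
import speech_recognition as sr  # open-source wrapper around several recognition engines

recognizer = sr.Recognizer()
with sr.AudioFile("meeting.wav") as source:  # illustrative file name
    audio = recognizer.record(source)        # read the whole recording into memory

# CMU Sphinx runs entirely on the local machine – no Cloud service required.
print(recognizer.recognize_sphinx(audio))
```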
Fast, reliable, cheap Internet
The Internet has been around since the 1970s, but while the concept was great, the technology to deliver it to the mass market was not there. What it lacked were speed and reliability. We have already talked about the challenge to programmers of writing distributed applications – that is, applications that rely on two or more computers communicating – but it is one thing to be able to recover from a failed communication, and quite another to deal with the frustration of constant unreliability.
To a large extent, the reliability issues were overcome because hardware came down to a price that would support much more reliable methods of communication. These range from techniques such as data compression to infrastructure improvements such as fibre optics, cellular networks and Wi-Fi.
With this infrastructure in place, use of the Internet had nowhere to go but up. And as the cost of access came down, the addressable market grew too.
Not only did the Internet act as an enormous shop window for products and information generally, it also caused users to question whether they really needed local computers at all – and so the final piece of the jigsaw evolved.
The Cloud
Before the widespread availability of the Internet, most companies would deploy software applications on computers located on their own premises. These might be native applications such as word processors running on users’ workstations, or shared applications such as database servers or telephone systems running in air-conditioned server rooms.
This software was mostly distributed on physical media such as disks or tapes.
There were two main issues with this way of working. First, each time a software update was issued, it was a laborious (and often risky) task to install it. Second, the investment in infrastructure to support these local services was high. Computers had to be kept running, up to date, serviced and perhaps air-conditioned too – all expensive operations.
A further issue, although sadly not one often considered, was that these computers needed to be backed up in case of deliberate or accidental damage.
Once the speed and reliability of the Internet started to approach that of a local network, the time was ripe to stop owning and maintaining local servers and give the job to someone else. This was the birth of the Cloud – but only part of the reason for its success.
Sure, with the right Internet infrastructure it can be cheaper to let someone else worry about keeping the hardware up and running while sharing the cost, but equally importantly, the scene was set to provide ready-made applications that could be run from Cloud servers rather than local ones. As a result, in the last five years we have seen incredible growth in the software services available from computers in the Cloud.
Returning to our speech recognition example, we mentioned how complex the process is – complex enough, powerful as they now are, to bring a mobile phone to its knees. The Cloud allows these sorts of resource-hungry applications to be run on high-powered computers anywhere in the world, with the results returned in a fraction of a second.
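In code, the pattern looks something like the sketch below – the endpoint URL, request format and response field are hypothetical, not the API of any particular provider – with the heavy lifting done remotely and only a small result sent back:

```python
import json
import urllib.request

def transcribe_in_cloud(wav_path: str,
                        endpoint: str = "https://speech.example.com/v1/transcribe") -> str:
    """Send an audio file to a (hypothetical) Cloud service and return its transcript."""
    with open(wav_path, "rb") as audio_file:
        request = urllib.request.Request(endpoint, data=audio_file.read(),
                                         headers={"Content-Type": "audio/wav"})
    with urllib.request.urlopen(request, timeout=10) as response:
        return json.loads(response.read())["transcript"]
```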
Needless to say, with no Internet connection, Cloud services are useless, so their success can be partly attributed to the very high reliability of network services.
The sum of the parts
I do not believe that circumstances have combined in any other industry to provide such phenomenal technological growth. But it is worth considering further why IT has so benefitted all the other technologies.
The computer can simulate anything that the programmer can model. A £500 desktop computer can simulate the effect of some genetic mutation 100 generations down the line. To model that any other way would take a great deal of time and money but, more relevantly, on a computer it can be done by a 12-year-old schoolchild in their bedroom. They have as much access to the hardware, the Internet, open-source code and the Cloud as do Apple or IBM. The cost of entry into almost any field is within the grasp of everyone, and the result is an explosion of creativity – not just in computing but in everything.
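As a flavour of what that bedroom simulation might look like – my own sketch of a deliberately simple model of genetic drift, not the author’s – a few lines of Python can follow one mutation through 100 generations, hundreds of times over, in seconds:

```python
import random

def frequency_after(generations: int = 100, population: int = 1000,
                    starting_copies: int = 1) -> float:
    """Follow one mutation through successive generations of random inheritance."""
    copies = starting_copies
    for _ in range(generations):
        frequency = copies / population
        # Each member of the next generation carries the mutation with probability
        # equal to its current frequency (simple genetic drift, no selection).
        copies = sum(1 for _ in range(population) if random.random() < frequency)
        if copies in (0, population):
            break  # the mutation has died out, or swept through the whole population
    return copies / population

runs = [frequency_after() for _ in range(500)]
print(f"The mutation survived in {sum(f > 0 for f in runs)} of {len(runs)} simulated populations")
```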
The downside of all this is that we now have far more computing resources than we have people to write programs for them. One way to overcome this is artificial intelligence – by which I mean (in the best Alan Turing tradition) the ability of machines to fool humans into believing that the machines are humans. Some of the methods we are seeing to tackle this involve simulating human processes themselves – the so-called neural network approaches that soak up information and work out how to solve problems without necessarily understanding them. In general, these approaches work well but require large amounts of computing power, and so rely on access to the Internet. More importantly, though, they allow the programmer to avoid working out what is really going on, and without that understanding it is easy to miss the blindingly obvious solutions.
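A toy example makes the point – assuming the open-source scikit-learn library is installed – a small neural network can learn the exclusive-or function purely from examples, without ever being told the rule:

```python
from sklearn.neural_network import MLPClassifier  # open-source neural network implementation

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]  # exclusive-or: true when exactly one input is true

model = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs", max_iter=2000, random_state=1)
model.fit(X, y)
print(model.predict(X))  # typically [0 1 1 0] – learned from the examples alone
```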
Summing up
IT has played a pivotal role in accelerating the growth of other technologies, not simply because of its intrinsic capabilities but because it has allowed the involvement of many more people. No longer is it necessary to invest thousands of pounds in resources to do cutting-edge research and development, or simply to be highly creative.
There is a danger, however, that access to such power – for example with neural networks – will obviate the need to understand the problems being solved. Without any constraints, we tend to take the path of least resistance, yet we learn the most from the things we cannot do.
About the Author:
Dr John Yardley
Managing Director, JPY Limited and Threads Software Ltd
John began his career as a researcher in computer science and electronic engineering with the National Physical Laboratory (NPL), where he undertook a PhD in speech recognition. In early 2019, John founded Threads Software Ltd as a spin-off from his company JPY Ltd to commercialise and exploit the Threads Intelligent Message Hub, developed originally by JPY Ltd.
Today, JPY represents manufacturers of over 30 software products, distributed through a channel of 100 specialist resellers.
John brings a depth of understanding of a wide range of the technologies that underpin the software industry.
John has a PhD in Electrical Engineering from the University of Essex and a BSc in Computer Science from City University, London.
In his spare time, he enjoys playing jazz saxophone and debating astrophysics.