It is official – Moore’s Law has met its prophesied demise after a prolonged struggle with transistors that refuse to shrink any further. A bearer of these tidings, I recite this eulogy for the Law – Gordon Moore’s 1965 observation that the number of transistors on a chip doubles roughly every two years – which has been instrumental to the surge of the byzantine IT industry for a good fifty years. But this tribute isn’t a sombre mourning of the dead; rather, it is an interpretation of what the passing of the 53-year-old Law means for the ongoing Industrial Revolution and the Revolutions to come.
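To make the Law’s compounding effect concrete, here is a minimal arithmetic sketch in Python. The 1971 baseline of roughly 2,300 transistors on the Intel 4004 is used purely as an illustrative starting point; the point is the doubling, not the exact figures.

```python
# Moore's Law as arithmetic: transistor counts doubling roughly every two years.
# Baseline: the Intel 4004 (1971), which had about 2,300 transistors.
BASELINE_YEAR = 1971
BASELINE_TRANSISTORS = 2_300
DOUBLING_PERIOD_YEARS = 2

def projected_transistors(year: int) -> float:
    """Transistor count Moore's Law would predict for a given year."""
    doublings = (year - BASELINE_YEAR) / DOUBLING_PERIOD_YEARS
    return BASELINE_TRANSISTORS * 2 ** doublings

for year in (1981, 1991, 2001, 2011, 2018):
    print(f"{year}: ~{projected_transistors(year):,.0f} transistors")
# By 2018 the projection runs into the tens of billions - roughly where the
# largest real chips sit today, and where the doubling has stalled.
```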
The theatrical turn of events began with Intel Corporation, the leading producer of some of the most powerful silicon-based processors, drastically slowing the pace at which it launches new chip-making technology more than five decades after its inception; and things haven’t been the same for the Silicon Valley behemoth since. Transistors in Intel’s current processors already have features as small as 14 nanometers (nm).
Manufacturing transistors at this scale, however, is proving expensive and difficult, as the company itself has admitted. Intel had to postpone the production of its new 10-nm processor generation further into the future. To add to its woes, the company’s existing 14-nm chip line (codename: Kaby Lake) is deviating from Moore’s Law and losing ground to competitors such as AMD, Qualcomm, and ARM. However, this doesn’t mean that Moore’s Law lives on through Intel’s rivals.
The only difference between Intel and its Silicon Valley counterparts is that the latter have bought themselves more time with innovations that enhance computing through methods other than simply shrinking chips. Intel, co-founded by Gordon Moore, author of the eponymous law, forged processors that followed Moore’s observation and aided stellar breakthroughs in disciplines such as artificial intelligence, genetics, and astronautics, and helped bring advanced smartphones and powerful Internet services to the world.
The technology giant’s ironic announcement isn’t just a revelation of its predicament, but also a foreshadowing of the fate of its competitors, the IT macrocosm, and, eventually, the rest of modern human civilisation, which relies heavily on processor-powered machines in all frontiers of science, technology, and society.
But the event isn’t about one semiconductor-chip manufacturer losing its sheen, or about any one corporate entity. Its ramifications are already being felt across the nerve centers of scientific research that are propelling the ongoing Industrial Revolution – or Industry 4.0, as technology savants call it to lend the movement more vigour and gravitas.
The year 2015, especially, saw breakout successes in Artificial Intelligence (AI): advances in machine learning, including deep neural networks; self-driving cars and cognitive computing crystallising as commercial realities; and the integration of cloud computing with machine learning.
Entrepreneurs such as Jeff Bezos of Amazon, Ginni Rometty of IBM, Elon Musk of Tesla, OpenAI, Neuralink, and SpaceX, and Ray Kurzweil of Google, along with several other prominent engineers and scientists, are spearheading the pocket of Industry 4.0 concerned with advancing practical and theoretical machine learning and artificial intelligence.
These technocrats and scientists have awe-inspiring, if outlandish, visions for the future of AI and other forms of computing technology, and claim that someday these advanced forms will helm every domain from agriculture to space travel. A few techno-entrepreneurs and futurists, such as Musk and Kurzweil, are popular oracles of an era led by sentient machines and foresee a momentous change in human civilisation caused by these super-intelligent beings.
While billionaire entrepreneur Musk raises fears of a Skynet-like future in which his SpaceX will, ironically, use AI-guided rockets to help humans escape Earth and survive elsewhere, Kurzweil claims that the technological singularity – exponential technological growth led by artificial super-intelligence, resulting in unfathomable changes to human civilisation – will occur as soon as 2045, with “species” of self-aware machines championing all our innovations in science, technology, and art.
If not for the demise of Moore’s Law, the possibility of these quixotic pipedreams materialising would still be on the cards. The effects of the phenomenon are already glaring. Despite the steep reduction in costs delivered in accordance with the Law, the technology required to elevate AI to the next level of awareness (there are, broadly speaking, five such levels) is still lagging. This stagnation, tragically, has contributed to Tesla’s and Uber’s self-driving cars claiming their first casualties.
And although modern computing technology is greatly aiding industries such as agriculture, space, healthcare, and pharmaceuticals, its decelerating rate of evolution is becoming a bottleneck for them. In healthcare, particularly, the role of AI over the next few decades may be limited to taking over doctors’ mundane tasks without offering any real substitute for humans in complex surgical procedures. The systems that do take part in surgery (such as STAR) are limited to assisting surgeons through telesurgery and workflow simulation.
But the human spirit is relentless in its pursuit of greatness and has an innate drive to shatter limitations and disrupt existing technology; scientists are mustering an armada of new inventions and theories to bulldoze the cadaver of the Law that is occluding the path to the future. Quantum computing spearheads one unit of this armada and is said to be key to producing a worthy successor to the Law.
This form of computing is the deepest technology has ever delved into the chasms of fundamental physics. Put simply, this exotic application of quantum physics encodes information in quantum properties such as the “spin” of electrons, unlike traditional computing, which represents information as binary voltage levels (0 and 1). And unlike the conventional method, quantum computing harnesses the probabilistic nature of quantum states, so a single register can hold and process vastly more combinations of values than its classical counterpart, disrupting the conventional approach entirely.
Quantum processors accomplish this feat through two strange phenomena of the subatomic world – quantum entanglement and superposition. Until now, quantum computers, although potentially far faster than the fastest traditional supercomputers on certain specialised problems, have had their limitations. Such raw power was simply not needed for everyday cases and was applied mainly in fields such as meteorology, astronomy, and quantum physics itself.
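To make superposition and entanglement a little less abstract, here is a minimal sketch that simulates a two-qubit register with plain NumPy. It is an illustrative toy, not how a physical quantum processor is actually programmed.

```python
# Minimal two-qubit statevector simulation (NumPy only) illustrating
# superposition and entanglement, the two phenomena described above.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate: creates superposition
I = np.eye(2)

# Two-qubit CNOT gate (control = qubit 0, target = qubit 1)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.array([1, 0, 0, 0], dtype=complex)  # start in |00>

# Hadamard on qubit 0: equal superposition of |00> and |10>
state = np.kron(H, I) @ state

# CNOT entangles the qubits into the Bell state (|00> + |11>) / sqrt(2)
state = CNOT @ state

# Measurement probabilities for the basis states |00>, |01>, |10>, |11>
probs = np.abs(state) ** 2
for basis, p in zip(["00", "01", "10", "11"], probs):
    print(f"P(|{basis}>) = {p:.2f}")
# Prints 0.50 for |00> and |11> and 0.00 for the rest: measuring one qubit
# instantly fixes the outcome of the other.
```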
But with Moore’s Law now obsolete and the growth of computing technology decelerating, companies are investing in research centres and academia to create new methodologies that will strip quantum computing of its elusiveness, introduce it to industry, and build quantum-powered processors for artificial intelligence.
One such method is being explored by researchers at the National University of Singapore (NUS) who, in a recent study published in the journal Physical Review Letters, proposed a new kind of quantum algorithm – a quantum linear system solver – for faster analysis of large data sets on a quantum computer. According to the research, this algorithm, when integrated with AI, will disrupt current models of machine learning and allow machines to learn in new, more complex ways, making higher levels of awareness in AI possible.
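To see what such an algorithm would accelerate: a large share of machine-learning workloads ultimately reduce to solving a linear system Ax = b. The sketch below shows that classical bottleneck using ridge regression as a stand-in example; the data and setup are illustrative and not taken from the NUS paper.

```python
# Ridge regression reduces to solving a linear system A w = b - the step a
# quantum linear system algorithm aims to speed up for very large data sets.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))               # 1000 samples, 20 features (toy data)
y = X @ rng.normal(size=20) + 0.1 * rng.normal(size=1000)

lam = 1e-2                                    # regularisation strength
A = X.T @ X + lam * np.eye(20)                # matrix of the linear system
b = X.T @ y

w = np.linalg.solve(A, b)                     # classical cost grows steeply with size;
                                              # this solve is what quantum algorithms target
print(w[:5])
```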
The troubled chipmaker Intel may not be out of the race yet; it is leading its own ‘last resort’ in the armada, having unveiled a breakthrough 49-qubit quantum processor chip at the 2018 Consumer Electronics Show (CES) in Las Vegas, USA, intended for commercial applications such as cognitive computing, machine learning, AI, and deep learning.
It also has a new ace up its sleeve – a neuromorphic computing chipset. This chipset, named Loihi, works on an entirely different principle: it mimics the human brain by building artificial neural networks in silicon that learn without being restricted to conventional logic circuits and algorithms. And according to the Massachusetts Institute of Technology (MIT), the neuromorphic processor “looks more brain-like than ever.”
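Chips like Loihi implement spiking neural networks, whose basic unit is a neuron that integrates incoming signals and fires only when a threshold is crossed. The sketch below simulates one such leaky integrate-and-fire neuron in software; it is an illustrative toy, not Intel’s actual circuit or programming model.

```python
# Minimal leaky integrate-and-fire (LIF) neuron - the basic building block that
# neuromorphic chips such as Loihi emulate in silicon (illustrative sketch only).
import numpy as np

dt = 1.0            # time step (ms)
tau = 20.0          # membrane time constant (ms)
v_rest = 0.0        # resting potential
v_thresh = 1.0      # spiking threshold
v_reset = 0.0       # potential after a spike

v = v_rest
spikes = []
rng = np.random.default_rng(42)

for t in range(200):
    i_in = rng.uniform(0.0, 0.12)                 # random input current
    v += dt / tau * (v_rest - v) + i_in           # leak toward rest, integrate input
    if v >= v_thresh:                             # threshold crossed: emit a spike
        spikes.append(t)
        v = v_reset                               # reset membrane potential

print(f"{len(spikes)} spikes at times (ms): {spikes[:10]} ...")
```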
Besides these futuristic technologies that are outgrowing Moore’s Law, there has also been a new discovery that promises to free the IT industry from the dead weight of the Law – quasi-particles called skyrmions, which can store data within themselves, unlike current methods that rely on magnetic and solid-state storage devices.
Any number of these quasi-particles can be generated at a given point in space, giving engineers the freedom to create, in theory, an unlimited number of skyrmions and have them carry humongous amounts of data with almost no power consumption or added weight. Once a method to read the stored data is tested and finalised, it will not just bulldoze the cadaver of Moore’s Law, but annihilate it entirely.
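In most proposals for skyrmion storage, a bit is encoded simply by the presence or absence of a skyrmion at a position along a magnetic nanotrack. The sketch below illustrates that encoding idea in software; it is a conceptual toy assuming the presence/absence scheme, and, as noted above, a practical physical read-out method is still awaited.

```python
# Conceptual sketch of skyrmion "racetrack" storage: each position on a
# nanotrack either holds a skyrmion (bit 1) or does not (bit 0).
def write_track(data: bytes) -> list[bool]:
    """Encode bytes as a sequence of skyrmion present/absent positions."""
    track = []
    for byte in data:
        for bit in range(7, -1, -1):               # most significant bit first
            track.append(bool((byte >> bit) & 1))  # True = skyrmion present
    return track

def read_track(track: list[bool]) -> bytes:
    """Decode the track back into bytes (the read-out step that still needs
    a practical physical mechanism)."""
    out = bytearray()
    for i in range(0, len(track), 8):
        byte = 0
        for bit in track[i:i + 8]:
            byte = (byte << 1) | int(bit)
        out.append(byte)
    return bytes(out)

track = write_track(b"Moore")
assert read_track(track) == b"Moore"
print(f"{len(track)} skyrmion positions encode {len(b'Moore')} bytes")
```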
The demise of Moore’s Law isn’t an event to bemoan or a portent of the end of innovation. Instead, it is a portal to a whole new realm of infinite possibilities – where humans and machines may merge, cerebral processors may guide humans to distant stars, industries may become fully mechanised to benefit humanity, bleeding-edge discoveries may happen at a rate never experienced before and, most of all, technology such as AI may finally be used to meaningful ends such as solving societal challenges.