End of Moore’s Law?

Microchips might not go on improving forever; leaner codes may be the order of the day 

The integrated circuit (IC) chip was one invention that changed the face of computing technology. Though the concept and inception of integrated circuits were the result of work by several physicists – including Jack Kilby, Kurt Lehovec, Robert Noyce and Jean Hoerni – it was Robert Noyce's contribution that led to the monolithic IC chip, or "microchip" as we know it today. Far smaller, much faster, cheaper and more efficient than the bulky hybrid integrated circuits that preceded it, the microchip is a set of electronic components joined by extremely thin aluminium connectors on a tiny wafer-like flat piece of silicon – a semiconductor material. It proved to be a game changer in the field of personal computing devices and lent the hallowed Silicon Valley its name.

First produced in 1959, the microchip owes its beauty to its ability to pack a large number of discrete transistors onto a tiny surface. It is cheap too, because all the components are printed as one monolithic unit in a single pass of photolithography. And it is super-efficient because, being integrated, the components switch fast and consume negligible power. All modern electronic devices use microchips, and since their invention they have been getting smaller and faster by the day.

Well… not exactly by the day, though! Gordon Moore – engineer, businessman, and co-founder of Intel Corporation along with Robert Noyce – made just such a calculation in 1965. In a short article plainly titled "Cramming more components onto integrated circuits", published in the trade journal Electronics, Moore estimated that the number of components on a monolithic IC chip would double every year, reaching 65,000 by 1975. He also observed that "the cost per component is nearly inversely proportional to the number of components." When that prediction duly materialized in 1975, Moore reworked his estimates.

Being an entrepreneur-cum-engineer, he could also see scope for better engineering design to safely pack in more and more transistors at lower cost. He eventually proposed that the number of transistors on a microchip would double every two years – effectively doubling computing power. This estimate is the celebrated Moore's Law, and computing technology has proceeded along its lines ever since.

And microchips have indeed been refined to a mind-boggling extent. While top-end IC chips in 1975 could hold around 65,000 transistors, today's can carry 65 billion. At this rate, the count could reach 2 trillion transistors on a chip by 2030 – making processors roughly 30 times faster than they currently are.
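Those figures can be sanity-checked under the doubling-every-two-years assumption with a few lines of Python. (The 2020 start year and 65-billion start count below are illustrative assumptions for the sketch, not figures from Moore's papers.)

```python
def projected_transistors(start_count, start_year, target_year, doubling_period=2):
    """Project a transistor count forward, doubling every `doubling_period` years."""
    doublings = (target_year - start_year) / doubling_period
    return start_count * 2 ** doublings

# Assume roughly 65 billion transistors on a top-end chip around 2020.
count_2030 = projected_transistors(65e9, 2020, 2030)
print(f"{count_2030:.2e}")   # about 2 trillion
print(count_2030 / 65e9)     # 32.0 – close to the "30 times" figure above
```

Five doublings in ten years give a factor of 32, which lands on the article's "about 2 trillion by 2030, 30 times faster" estimate.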

But there are fears that this dream run might end soon and the growth rate slow down, because cramming in ever more transistors may not remain physically possible. It took five years to shrink chip features from 14 nanometres to 10 nanometres, and producing even smaller transistors poses fundamental problems of physics. Innovation, redesign and the use of extreme-ultraviolet lithography have so far seen manufacturers through – but for how long is the question. Such state-of-the-art production engineering is sending manufacturing costs skyrocketing.

Chip fabrication units are costly, and their costs are rising by about 13% each year – which means a doubling roughly every six years. A fabrication unit in 2022 will be priced at around 16 billion USD. Not only will this erode the affordability of microchips; investors too will shy away from the chip-making industry. In fact, while there were 25 top-end chip manufacturers in 2002, the number fell drastically to 8 in 2010, and currently there are just three players in the top category.
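The 13%-a-year figure squares with the six-year doubling claim: costs growing at a compound rate r double in ln 2 / ln(1 + r) years. A quick check in Python (the 13% rate is the article's figure, used here as given):

```python
import math

annual_growth = 0.13  # fab costs rising ~13% per year, per the figure above

# Years needed for costs to double at this compound rate
doubling_time = math.log(2) / math.log(1 + annual_growth)
print(round(doubling_time, 2))   # ~5.67 years, i.e. roughly six

# Equivalently, six years of 13% growth slightly more than doubles costs
print(round(1.13 ** 6, 2))       # ~2.08
```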

Experts are not all pessimistic, though. Intel, the leading chipmaker, has an 8,000-strong team of microchip engineers who believe there are more factors to consider in improving chip capacities, and more innovative methods for keeping up with Moore's Law yet to be explored. And computer scientists assure us that microchip capacity is not the only way to boost computing speeds.

Now might be the best time for software programmers to write leaner, lighter code that runs faster and more efficiently. This is an area traditionally ignored: as microchips kept improving, bulky software relied on hardware speeds to make up for complex code. Perhaps the time has come for software coders to repay that debt!
