
Why Intel x86 must die: Our cloud-centric future depends on open source chips

Perhaps the Meltdown and Spectre bugs are the impetus for making long-overdue changes to the core DNA of the semiconductor industry and how chip architectures are designed.
Written by Jason Perlow, Senior Contributing Writer

The new year has indeed started out with a bang for the computer industry.

Two highly publicized security flaws in the Intel x86 chip architecture have now emerged, and they appear to affect microprocessors made by AMD and chips based on designs licensed from ARM as well.

And they may be some of the worst computer bugs in history -- if not the worst -- because they exist in hardware, not software, and in systems that number in the billions.

These flaws, known as Meltdown and Spectre, are real doozies. They are so serious and far-reaching that the only potential fix in the immediate future is a software workaround that, when implemented, may slow down certain types of workloads as much as 30 percent.

In fact, the potential for compromise is so widespread because the flaws are baked into the fundamental architecture of the chips themselves, and they may have been present in some form since 1995.

That's going back to when Friends was the hottest show on TV. And I still had hair and was freshly married. Oh, and most of us were using Windows 3.1.

The bloodline has to die out entirely

Without going into detail about exactly how these flaws present themselves -- because the explanation is highly technical in nature and you need to be a chip weenie to really grok it -- let's just say that they exploit certain basic functions, chiefly speculative execution, that modern microprocessors use to optimize performance.

Read also: Massive Intel CPU flaw: Understanding the technical details of Meltdown and Spectre (TechRepublic)
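
For readers who do want a taste of the technical detail, here is a simplified C sketch of the kind of speculative bounds-check bypass the Spectre researchers describe. The array names and sizes are purely illustrative, and this is not working exploit code.

```c
#include <stdint.h>
#include <stddef.h>

/*
 * Simplified illustration of a Spectre-style bounds-check bypass
 * (variant 1). Names and sizes are hypothetical.
 *
 * The CPU may speculatively execute the body of the if-statement
 * before it knows whether x is actually in bounds. During that
 * speculative window, the out-of-bounds byte array1[x] is used to
 * index array2, pulling in a cache line whose address depends on a
 * secret value. The speculative result is discarded architecturally,
 * but the cache state is not, and an attacker can recover the secret
 * byte by timing later accesses to array2 (a cache side channel).
 */

size_t array1_size = 16;
uint8_t array1[16];
uint8_t array2[256 * 4096];   /* one cache line per possible byte value */

uint8_t victim_function(size_t x)
{
    if (x < array1_size) {                 /* bounds check */
        return array2[array1[x] * 4096];   /* speculatively leaks array1[x] */
    }
    return 0;
}
```

Because that behavior lives in the silicon, the only near-term fixes are software workarounds such as kernel page-table isolation and speculation barriers, which is where the performance penalties mentioned above come from.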

It's very much analogous to DNA. DNA provides the blueprint and firmware programming for how an organism functions at a very basic level.

If you have a flaw in your DNA, you have a genetic disease. You can try to mitigate it with various treatments and medications, but you can't really cure it. Well, you have stuff like CRISPR, but there's no hardware equivalent to that.

Essentially, the only cure -- at least today -- is for the organism to die and for another one to take its place. The bloodline has to die out entirely.

The organism with the genetic disease, in this case, is Intel's x86 chip architecture, which is the predominant systems architecture in personal computers, datacenter servers, and embedded systems.

Ten years ago, I proposed that we wipe the slate clean with the Intel x86 architecture. My reasoning had much to do with the fact that Linux was gaining in popularity at the time, and that continued compatibility with Windows-based workloads in the datacenter and on the desktop (ha!) was becoming less and less of a hard requirement.

What has transpired in those 10 years? Linux (and the related FOSS stack built around it) is now a mainstream operating system, underpinning public cloud infrastructure and serving as the foundational software technology for mobile and the Internet of Things (IoT).

Virtualization is now widespread and has become standard practice for designing and scaling large-scale enterprise systems.

Read also: Tech giants scramble to fix critical Intel chip security flaw

Containerization now stands to augment and eventually replace virtualization, promising further growth and improved security in a multi-tenant, highly micro-segmented network future driven by DevOps and large-scale systems automation.

Since 2008, Microsoft has embraced open source and successfully pivoted from being the Windows company to being the Azure/Office 365 cloud company, writing cloud-centric application software not just for Windows, but also for Linux, iOS, and Android.

All of these advances are not necessarily tied to compatibility with x86. If anything, they potentially free us from writing architecture-dependent code at all, thanks to the levels of abstraction and portability we now have at our disposal.
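
As a small, hypothetical illustration of that portability (the spinlock type and function names here are my own invention): a lock that might once have been written with x86-specific "lock; xchg" inline assembly can today be expressed in standard C11 atomics and compiled unchanged for x86, ARM, POWER, or RISC-V.

```c
#include <stdatomic.h>

/*
 * Hypothetical example of architecture-independent code.
 * With C11 atomics, the same spinlock logic builds for any target
 * the compiler supports; the toolchain, not the application,
 * deals with the instruction set.
 */

typedef struct {
    atomic_flag locked;
} spinlock_t;

static inline void spinlock_init(spinlock_t *l)
{
    atomic_flag_clear(&l->locked);          /* start unlocked */
}

static inline void spinlock_acquire(spinlock_t *l)
{
    /* Spin until the flag was previously clear and is now set by us. */
    while (atomic_flag_test_and_set_explicit(&l->locked,
                                             memory_order_acquire)) {
        /* busy-wait */
    }
}

static inline void spinlock_release(spinlock_t *l)
{
    atomic_flag_clear_explicit(&l->locked, memory_order_release);
}
```

None of this code cares whether the chip underneath is an x86 design, an ARM license, or something entirely new.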

Despite these advances, our dedication to this aging but beloved pet -- the x86 systems architecture -- has not waned. We have been giving it all sorts of medical treatment over the years, now going on four decades, to keep it alive.

The question is not so much should we put Old Yeller Inside to sleep. It's what breed of puppy do we replace him with? Another purebred prone to additional genetic defects? Or something else?

We need to stop thinking about microprocessor architectures as licensed things developed in secrecy by mega-companies like Intel or AMD or even ARM.

Sun had the right idea

In 2008, when I wrote the precursor to this article, the now-defunct Sun Microsystems -- whose intellectual property assets are owned today by Oracle -- decided to open-source a chip architecture, the OpenSPARC T2.

The concept didn't exactly fly at the time and got no real takers. What has since happened to Sun in its absorption by Oracle has been less than pleasant for all parties involved, and given Oracle's extremely litigious nature, it is understandable why nobody has latched onto OpenSPARC.

Read also: Windows Meltdown-Spectre patches: If you haven't got them, blame your antivirus

However, despite that history, I think Sun had the right idea at the time. We need to develop a modern equivalent of OpenSPARC that any processor foundry can build upon without licensing IP, in order to drive down the cost of building microprocessors at immense scale for the cloud, for mobile, and for the IoT.

Such an architecture would make the $200 smartphone, as well as hyperscale datacenter lifecycle management, that much more viable and cost-effective.

Just as Linux and open source transformed how we view operating systems and application software, we need the equivalent for microprocessors in order to move out of the private datacenter rife with these legacy issues and into the green field of the cloud.

Read also: Intel's new chips: Low-power, lower-cost Gemini Lake CPUs for PCs, 2-in-1s, laptops

This would have benefits beyond providing a systems architecture that can be molded and adapted as we see fit for the evolving cloud. Because the software stack now lets us abstract so easily from the chip hardware, an open architecture could also be corrected and improved through community efforts as needs arise.

We need to create something new

Indeed, there are some risks, such as forking, which has been known to plague open-source projects -- but, more often than not, forking creates an ecosystem of competition between well-run communities and poorly run ones.

And, more often than not, the good ones emerge as the standards that get embraced.

I cannot say definitively what architecture this new chip family needs to be based on. However, I don't see ARM donating its IP to this effort, and I think OpenSPARC may not be it either.

Perhaps IBM OpenPOWER? It would certainly be a nice gesture from Big Blue to open its specification up further, without any additional licensing, and it would help establish and maintain the company's relevance in the cloud going forward.

There's also RISC-V, which originated at UC Berkeley and is completely open source.

The reality is that we now need to create something new, free from the legacy entities and baggage that have been driving the industry and dragging it down for the past 40 years. Just as was done with Linux.

Do we need a new open-source microprocessor architecture for the cloud-centric future? Talk Back and Let Me Know.

Previous and related coverage

Major Linux redesign in the works to deal with Intel security flaw

A serious security memory problem in all Intel chips has led to Linux's developers resetting how to deal with memory. The result will be a more secure, but -- as Linux creator Linus Torvalds says -- slower operating system.

Critical flaws revealed to affect most Intel chips since 1995

Most Intel processors and some ARM chips are confirmed to be vulnerable, putting billions of devices at risk of attacks. One of the security researchers said the bugs are "going to haunt us for years."
