
Is Moore's Law Alive and Well? Depends on How You Define Scaling

While many people conflate Moore's Law with speed, it's actually a measure of the rate of increase in complexity for minimum component cost.

December 14, 2015
Cost Per Transistor – Intel 2015

There's been a lot of talk lately about Moore's Law slowing down and the challenges facing chipmakers as they try to move to ever smaller dimensions. Certainly, PCs aren't getting faster at the rate they once were, and the challenges facing chipmakers have never been greater. Still, Intel continues to insist that "Moore's Law is Alive and Well" when talking about its plans for 10nm and 7nm production. To try to figure out what's going on, I looked at several different measures of progress and got some different answers.

While many people conflate Moore's Law with speed, it's actually a measure of the rate of increase in complexity for minimum component cost, more or less stating that the number of transistors on a chip will double periodically. In his initial 1965 paper, Moore observed this doubling happening every year, though by 1975 he had revised his projection to a doubling every two years, which has generally been the mark chipmakers have been striving for ever since.
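As a back-of-the-envelope illustration of what a fixed doubling period implies (a sketch using only the 4004 starting point cited later in this article, not actual product transistor counts), the exponential-growth arithmetic looks like this:

```python
# Back-of-the-envelope sketch of what a fixed doubling period implies.
# Starting point: the Intel 4004 (1971), roughly 2,300 transistors.
# Real products deviate from this idealized curve.

def projected_transistors(start_count, start_year, year, doubling_years):
    """Transistor count implied by a doubling every `doubling_years`."""
    doublings = (year - start_year) / doubling_years
    return start_count * 2 ** doublings

for year in (1971, 1981, 1991, 2001, 2011):
    count = projected_transistors(2_300, 1971, year, doubling_years=2)
    print(f"{year}: ~{count:,.0f} transistors")
```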

At Intel's investor day last month, Bill Holt, executive vice president and general manager of the Technology and Manufacturing Group, again showed slides suggesting that the number of "normalized" transistors per unit of area was continuing to increase at a pace of doubling or better per generation, while pointing out that the cost of production was increasing even faster than expected. The result, he said, is that cost per transistor has remained on its historical pace.
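Holt's argument comes down to simple arithmetic: if density improves by a larger factor than the cost of a given area of silicon rises, cost per transistor still falls. A minimal sketch with purely illustrative numbers (these are not Intel's figures):

```python
# Sketch of the cost-per-transistor argument with illustrative numbers
# (not Intel's figures): density more than doubles, cost per area rises,
# and the net cost per transistor still falls.

def cost_per_transistor(cost_per_mm2, transistors_per_mm2):
    return cost_per_mm2 / transistors_per_mm2

old = cost_per_transistor(cost_per_mm2=1.00, transistors_per_mm2=10_000_000)
new = cost_per_transistor(cost_per_mm2=1.60, transistors_per_mm2=22_000_000)

print(f"relative cost per transistor: {new / old:.2f}x")  # ~0.73x, still falling
```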

Composition Matters – Intel 2015

But for the first time I can remember, he emphasized that different kinds of transistors within a chip require different amounts of area, with SRAM memory cells being roughly three times as dense as logic cells. He used this point to deflect questions about how Intel's average transistor density compares with that of the Apple A9 chips made by Samsung and TSMC.
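The composition argument is easy to see with a little arithmetic. Using the rough 3x SRAM-versus-logic density ratio cited above (the absolute densities and area mixes below are made up for illustration), two chips on the same process can show very different average densities simply because one carries more SRAM:

```python
# Why composition matters when comparing average transistor density.
# The 3x SRAM-vs-logic ratio is the rough figure cited above; the absolute
# densities and area mixes are illustrative only.

LOGIC_DENSITY = 10e6               # hypothetical logic transistors per mm^2
SRAM_DENSITY = 3 * LOGIC_DENSITY   # SRAM cells roughly 3x as dense

def average_density(sram_area_fraction):
    """Average transistors/mm^2 for a die that is part SRAM, part logic."""
    return (sram_area_fraction * SRAM_DENSITY
            + (1 - sram_area_fraction) * LOGIC_DENSITY)

# Two chips on the same process, differing only in their SRAM share:
print(f"30% SRAM: {average_density(0.30) / 1e6:.0f}M transistors/mm^2")
print(f"60% SRAM: {average_density(0.60) / 1e6:.0f}M transistors/mm^2")
```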

To get a closer look, my colleague John Morris and I looked at Intel's published statistics on its chips since 1999, from the Pentium III (known as Coppermine), which was produced at 180nm, up to last year's Broadwell Core chips, the first made with 14nm technology.

Gate Pitch Scaling

First we looked at gate pitch scaling—the minimum distance between the gates of adjacent transistors. Traditional scaling would have this pitch shrinking to about 70 percent of its previous value each generation, which yields the overall 50 percent reduction in area. On this measure, it's clear that while scaling continues, we're not seeing quite as much reduction as we would expect.
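That 70 percent figure applies to a linear dimension; because area is the product of two such dimensions, a 0.7x linear shrink works out to roughly half the area. A quick check:

```python
# Traditional scaling: each linear pitch shrinks to ~0.7x per generation,
# so area (the product of two pitches) shrinks to ~0.5x.
linear_shrink = 0.7
area_shrink = linear_shrink ** 2
print(f"area per generation: {area_shrink:.2f}x")  # ~0.49x, roughly half
```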

SRAM Cell Scaling

But other techniques that chipmakers use are changing that a bit. Looking at SRAM memory cells, the densest and most basic building blocks of a chip, we can see that until recently each process generation delivered a 50 percent reduction in cell area, though lately the pace seems to be slipping.

Logic Area Scaling

In recent years Intel has also emphasized total logic area scaling, which is the product of the gate pitch and the minimum pitch of the metal interconnects that route signals around the chip and connect it to the outside world. This makes sense because if the logic transistors scale but the interconnects don't get any smaller, the overall chip size and cost won't decrease much. For example, TSMC's 16nm FinFET process uses the same back-end metal process as its 20nm planar process, so it offers little in the way of shrink (though it is faster and uses less power). In terms of logic area scaling, Intel appears to be on target in recent generations.
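A short sketch of that metric as described above: the gate pitch multiplied by the minimum metal pitch as a proxy for logic cell area. The pitch values below are placeholders for illustration, not published process specifications.

```python
# Logic area scaling as described above: the product of contacted gate pitch
# and minimum metal pitch as a proxy for logic cell area.
# Pitch values are placeholders, not published process specifications.

def logic_area_proxy(gate_pitch_nm, metal_pitch_nm):
    return gate_pitch_nm * metal_pitch_nm

old_node = logic_area_proxy(gate_pitch_nm=90, metal_pitch_nm=80)
new_node = logic_area_proxy(gate_pitch_nm=70, metal_pitch_nm=52)
print(f"logic area scaling: {new_node / old_node:.2f}x")

# If the metal pitch stayed the same (as with a reused back end), the shrink
# would be limited to the gate-pitch reduction alone:
same_metal = logic_area_proxy(gate_pitch_nm=70, metal_pitch_nm=80)
print(f"with unchanged metal pitch: {same_metal / old_node:.2f}x")
```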

There are many ways of looking at the trends, but one thing seems clear: it is now taking longer to get to the next node than it has at any point in the past 20 years. Instead of two years between nodes, the gap between 14nm and the upcoming 10nm node will be closer to 2.5 years, with 10nm chips slated to arrive in the second half of 2017.
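Stretching the cadence from two years to two and a half translates directly into a slower annual rate of improvement; a rough calculation:

```python
# Effect of a longer cadence: the same 2x density step delivered every
# 2.5 years instead of every 2 years means slower annual improvement.
per_year_at_2_years = 2 ** (1 / 2.0)    # ~1.41x density per year
per_year_at_2_5_years = 2 ** (1 / 2.5)  # ~1.32x density per year
print(f"{per_year_at_2_years:.2f}x vs {per_year_at_2_5_years:.2f}x per year")
```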

Intel points out that over the long run—going all the way back to the first microprocessor, the 4004—the time between new generations of chip technology has always been a bit flexible.

Intel Moore's Law Slide

Intel uses this slide (which Intel Fellow Mark Bohr has shown many times) to illustrate the cadence of Moore's Law, from the first microprocessor, the Intel 4004, which used 2,300 transistors on a 10 micron process in 1971, to today's 14nm process. Looking at this chart, Intel says the average cadence has been a new node every 2.3 years. In that view, a 2.5-year pace for 14nm and 10nm is not all that significant. I look at it and see a speedup of Moore's Law from about 1995 to about 2012, when the first 22nm Ivy Bridge products began to appear. Now the cadence seems to be slowing once again.
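That 2.3-year average can be roughly reproduced from the endpoints on the slide, treating the node name as a stand-in for linear feature size (a simplification, since node names and actual dimensions have drifted apart over the years):

```python
# Rough back-calculation of the ~2.3-year average cadence, treating the node
# name as a proxy for linear feature size (a simplification).
import math

linear_ratio = 10_000 / 14                         # 10 microns (1971) to 14nm (2014)
density_doublings = math.log2(linear_ratio ** 2)   # density scales as the square
years = 2014 - 1971
print(f"~{density_doublings:.0f} doublings in {years} years, "
      f"one every {years / density_doublings:.1f} years")
```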

(Note that Intel stopped giving die size and transistor counts with the 14nm generation, citing competitive concerns, so the latest numbers we have for a quad-core part come from the 22nm Haswell, which packed 1.4 billion transistors into a 177mm² die.)
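For reference, those published Haswell figures imply an average density on the order of eight million transistors per square millimeter, averaged across logic, SRAM, and everything else on the die. The arithmetic:

```python
# Average transistor density implied by the published 22nm Haswell figures
# cited above: 1.4 billion transistors in a 177 mm^2 die.
transistors = 1.4e9
die_area_mm2 = 177
print(f"~{transistors / die_area_mm2 / 1e6:.1f} million transistors per mm^2")
# ~7.9 million per mm^2, averaged over logic, SRAM, and other structures.
```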

So is Moore's Law slowing down? It depends on how you look at it. It's certainly clear that on some measures the pace looks to have slowed, and that the challenges facing chipmakers get harder with each generation. Today only four companies—Intel, GlobalFoundries, Samsung, and TSMC—claim to have 14nm or 16nm processes. Creating a new chip on one of these new processes is more expensive than ever. But there is enough reason and enough incentive to expect that we will see 10nm chips around 2017, and that 7nm, 5nm, and 3nm chips will follow.

