
Are processors pushing up against the limits of physics?

A perspective on whether Moore's law will hold, as well as whether it matters.

When I first started reading Ars Technica, the performance of a processor was measured in megahertz, and the major manufacturers were rushing to squeeze as many of them as possible out of their latest silicon. Shortly thereafter, however, the energy needs and heat output of these beasts brought that race crashing to a halt. More recently, the number of processing cores rapidly scaled up, but adding cores quickly reached the point of diminishing returns. Now, getting the most processing power per watt seems to be the key measure of performance.

None of these things happened because the companies making processors ran up against hard physical limits. Rather, computing power ended up being constrained because progress in certain areas—primarily energy efficiency—was slow compared to progress in others, such as feature size. But could we be approaching physical limits in processing power? In this week's edition of Nature, the University of Michigan's Igor Markov takes a look at the sorts of limits we might face.

Clearing hurdles

Markov notes that, based on purely physical limitations, some academics have estimated that Moore's law has hundreds of years left in it. In contrast, the International Technology Roadmap for Semiconductors (ITRS), a group sponsored by the major semiconductor manufacturing nations, gives it a couple of decades. And the ITRS can be optimistic; it once expected that we would have 10GHz CPUs back in the Core 2 days. The reason for this discrepancy is that a lot of hard physical limits never come into play.

For example, the ultimate size limit for a feature is a single atom, which represents a hard physical limit. But well before you reach single atoms, physics limits the ability to accurately control the flow of electrons. In other words, circuits could potentially reach single-atom thickness, but their behavior would become unreliable before they got there. In fact, a lot of the current work Intel is doing to move to ever-smaller processes involves figuring out how to structure individual components so that they continue to function despite these issues.

The gist of Markov's argument seems to be that although hard physical limits exist, they're often not especially relevant to the challenges that are impeding progress. Instead, what we have are softer limits, ones that we can often work around. "When a specific limit is approached and obstructs progress, understanding its assumptions is a key to circumventing it," he writes. "Some limits are hopelessly loose and can be ignored, while other limits remain conjectural and are based on empirical evidence only; these may be very difficult to establish rigorously."

As a result, things that seem like limits are often overcome by a combination of creative thinking and improved technology. The example Markov cites is the diffraction limit. On its face, this limit should have kept the argon fluoride lasers we use from etching any features finer than 65 nanometers. But by using sub-wavelength diffraction, we're currently working on 14nm features using the same laser.
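To get a feel for the numbers, here's a back-of-envelope sketch of the classical Rayleigh resolution criterion (the k1 and numerical-aperture values are illustrative assumptions, not figures from Markov's paper):

    # Rayleigh criterion: minimum printable half-pitch = k1 * wavelength / NA.
    # All parameter values below are illustrative assumptions.

    WAVELENGTH_NM = 193.0  # argon fluoride excimer laser

    def min_half_pitch_nm(k1: float, numerical_aperture: float) -> float:
        return k1 * WAVELENGTH_NM / numerical_aperture

    # Conventional "dry" lithography: modest k1, NA below 1.
    print(min_half_pitch_nm(k1=0.4, numerical_aperture=0.93))   # ~83 nm

    # Resolution-enhancement tricks plus water-immersion optics (NA > 1)
    # push the same 193nm laser far below the naive limit.
    print(min_half_pitch_nm(k1=0.28, numerical_aperture=1.35))  # ~40 nm

Getting from there down to 14nm-class features takes multiple patterning on top of these tricks, but the point stands: the "limit" moved once its assumptions did.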

Where are the current limits?

Markov focuses on two issues he sees as the largest limits: energy and communication. The power consumption issue comes from the fact that the amount of energy used by existing circuit technology does not shrink in proportion to its shrinking physical dimensions. The primary result has been a lot of effort put into making sure that parts of the chip are shut down when they're not in use. But at the rate this is happening, the majority of a chip will have to be kept inactive at any given time, creating what Markov terms "dark silicon."

Dynamic power use scales with the square of the chip's operating voltage, and transistors simply cannot operate below a level of about 200 millivolts. Right now, we're at about five times that, so there's potential for improvement there. But progress in lowering operating voltages has slowed, so we may be at another point where we've run into a technological roadblock prior to hitting a hard limit of physics.
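To see why voltage headroom matters so much, here's a minimal sketch using the standard dynamic-power relation P = a·C·V²·f (the activity, capacitance, and frequency values are illustrative assumptions):

    # Dynamic switching power: P = a * C * V^2 * f, with activity factor a,
    # switched capacitance C, supply voltage V, and clock frequency f.
    # All values below are illustrative assumptions.

    def dynamic_power_w(activity: float, capacitance_f: float,
                        voltage_v: float, freq_hz: float) -> float:
        return activity * capacitance_f * voltage_v ** 2 * freq_hz

    today = dynamic_power_w(0.1, 1e-9, 1.0, 3e9)  # ~1 V supply
    floor = dynamic_power_w(0.1, 1e-9, 0.2, 3e9)  # ~200 mV limit

    print(today / floor)  # 25.0 -- that is, (1.0 / 0.2)^2

Because of that square, closing the remaining five-fold voltage gap would cut switching power by roughly a factor of 25, which is why slowing progress here stings.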

The energy use issue is related to communication: most of a chip's physical volume, and most of its energy consumption, is devoted to getting different areas to communicate with each other or with the rest of the computer.

Here, we really are pushing physical limits. Even if signals in the chip were moving at the speed of light, a chip running above 5GHz wouldn't be able to transmit information from one side of the chip to the other within a single clock cycle. The best we can do with current technology is to design chips so that areas that frequently need to communicate with each other are physically close to each other. Extending more circuitry into the third dimension could help a bit—but only a bit.
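A quick back-of-envelope calculation shows the scale of the problem (this assumes ideal, light-speed signaling; real on-chip wires are slower still, which only tightens the constraint):

    # Distance a light-speed signal covers in one clock cycle.
    # Real on-chip signals propagate well below c, so these are best cases.

    C_M_PER_S = 3.0e8  # speed of light, approximately

    def cm_per_cycle(freq_hz: float) -> float:
        return C_M_PER_S / freq_hz * 100.0

    for ghz in (1, 5, 10, 20):
        print(f"{ghz:>2} GHz: {cm_per_cycle(ghz * 1e9):.1f} cm per cycle")

At 5GHz, even light covers only six centimeters per cycle, and a round trip across a large die eats most of that budget; slower real-world wires make it far tighter.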

What’s further out on the horizon?

Markov isn't especially optimistic about any of the changes on the horizon, either. In the near term, he expects that the use of carbon nanotubes for wiring and optical interconnects for communication will continue the trend of helping us avoid running into physical limits, but he notes that both of these things have their own limitations as well. Carbon nanotubes may be small—some can be under a nanometer in diameter—but they still have physical dimensions. And photons require both hardware and energy if they're to be used for communications.

A lot of people are excited about quantum computers, but Markov isn't necessarily one of them. "Quantum computers—both digital and analog—hold promise only in niche applications and do not offer faster general-purpose computing because they are no faster for sorting and other specific tasks," he argues. There's also the issue of the energy use involved in getting quantum hardware down to the neighborhood of absolute zero, as the performance of a lot of the devices is terrible at room temperature.

But all computing relies on quantum effects to one extent or another, and Markov thinks there may be things we can learn from quantum systems: "Individual quantum devices now approach the energy limits for switching, whereas non-quantum devices remain orders of magnitude away." Obviously, even gaining some of the efficiency found in these systems could make a huge energy difference when implemented for the entire chip.

Another physical limit that Markov highlights is the fact that erasing a bit of information has a thermodynamic cost that can't be avoided—computing will always cost energy. One idea for getting around that limit is what's called "reversible computing," where the components are returned to their original state after a calculation. This method could, at least in theory, allow some of the energy used to be extracted back out.
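This is Landauer's principle: erasing one bit must dissipate at least k_B·T·ln(2) of heat. A quick sketch of the magnitude (the temperatures here are chosen purely for illustration):

    import math

    K_B = 1.380649e-23  # Boltzmann constant, J/K

    def landauer_limit_j(temp_k: float) -> float:
        """Minimum heat dissipated by erasing one bit at temperature T."""
        return K_B * temp_k * math.log(2)

    print(landauer_limit_j(300.0))  # ~2.9e-21 J per bit at room temperature
    print(landauer_limit_j(4.0))    # ~75x smaller at liquid-helium temperature

Real switching energies today sit orders of magnitude above this floor, but the floor itself can't be engineered away; hence the interest in reversible computing.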

The idea is not completely theoretical, however. Markov cites work involving superconducting circuitry (which he terms "highly exotic") that provides reversible behavior and an energy dissipation below the thermodynamic limit. Of course, these things are operating at four microkelvin, so there's more than a little energy being expended just to make sure they can operate.

Beyond physics

While physics and materials science set a lot of the limits for the hardware, math places some limitations on what we can do with it. And despite its reputation for rigor, the limits from math are often much fuzzier than the ones provided by physics. For example, we still don't know the answer to the P vs. NP question despite decades of effort. And even though we can prove certain algorithms are the most efficient for general cases, it's easy to find classes of problems where alternative computational approaches perform better.

The biggest problem Markov sees here is the struggle to extract greater parallelism from code. Even low-end smartphones now have multiple cores, but we've still not figured out how to use them well in many cases.
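Amdahl's law captures why squeezing more out of extra cores is so hard: if only a fraction p of a program can run in parallel, the serial remainder caps the speedup no matter how many cores you add. A small sketch (the parallel fractions are illustrative assumptions):

    # Amdahl's law: speedup on n cores when a fraction p of the work
    # parallelizes perfectly. The fractions below are illustrative.

    def speedup(p: float, cores: int) -> float:
        return 1.0 / ((1.0 - p) + p / cores)

    for p in (0.50, 0.90, 0.99):
        print(f"p={p:.2f}: 4 cores -> {speedup(p, 4):.2f}x, "
              f"64 cores -> {speedup(p, 64):.2f}x")

    # Even 99% parallel code gets well under 64x from 64 cores;
    # at 50% parallel, extra cores barely help at all.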

Overall, however, one comes away with the sense that the greatest limitation we face is human cleverness. Although there are no technologies on the horizon that Markov seems to be especially excited about, he's also clearly optimistic that we can either find creative ways around existing roadblocks or push progress in other areas to such an extent that the roadblocks seem less important.

The thing about these creative solutions is that they're hard to recognize until they're actually underway.

Nature, 2014. DOI: 10.1038/nature13570

