The End of Digital Tyranny: Why the Future of Computing Is Analog

Microsoft's Doug Burger, speaking on Monday at the company's headquarters in Redmond, Washington. Photo: Microsoft

Our world is ruled by 1s and 0s.

Most of us rarely think about it, but when we turn on our smartphones and PCs, we're giving ourselves over to machines that reduce every single task to a series of 1s and 0s. That's what digital means.

But according to Doug Burger, a researcher with Microsoft's Extreme Computing Group, this may be coming to an end. Burger thinks we could be entering a new era where we don't need digital accuracy. To hear him tell it, the age of really big data may well be an age of slightly less-accurate computing. We could drop the digital straitjacket and write software that's comfortable working on hardware that sometimes makes errors.

For about half a century now, companies like Intel have made their microprocessors faster and faster by adding more transistors -- and, lately, more and more processor "cores" that can operate in parallel. But these regular performance boosts seem to be coming to an end. It's inevitable, really. Chip parts are getting so small, they can't be shrunk much more.

Intel's current state-of-the-art chipmaking process will soon shrink to 14 nanometers, aka 14 billionths of a meter. When transistors get that small, it becomes devilishly hard to keep them operating in the precise on-or-off states required for digital computing. That's one of the reasons why today's chips burn so hot.

Burger calls it the digital tax. And over the next decade, this tax is going to become too big for computer makers to keep paying. "Transistors are getting leakier. They're getting noisier," he said on Monday, speaking during a webcast of an event at Microsoft's headquarters. "They fail more often."

"The challenge is that at some point along this road, as you get down to single atoms, that tax becomes too high. You're not going to be able to build a stable digital transistor out of a single atom."

But if our future performance gains aren't going to come from smaller transistors, how do we improve things? That's right. We go analog.

"I think there's an opportunity to break this digital tyranny of this abstraction of 1s and 0s that's served us so well for 50 for 60 years, and that's to embrace approximation," he said. "Some people call it analog."

Burger has been working with Luis Ceze, an associate professor at the University of Washington, to create a brand-new way of programming. Instead of following binary instructions, they break up the code. Some of it -- the part of your banking app that retrieves your account balance from the bank, for example -- has no tolerance for errors. Other parts -- the app's check-scanning software -- can handle some errors. Ceze and Burger's programs watch how applications work and then build neural network models that they run on special neural processing accelerators, called NPUs. "We're using a neural network to approximate things that are supposed to run in a regular processor," Ceze said in an interview earlier this year. "What we want to do is use neural networks for your browser, for your games, for all sorts of things."
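
None of the researchers' actual toolchain appears here, but the core move -- profile an error-tolerant function, train a small neural network on its recorded inputs and outputs, then call the network in its place -- can be sketched in a few lines of Python. The function, the training setup, and the use of scikit-learn's MLPRegressor as a software stand-in for an NPU are all illustrative assumptions, not their implementation.

```python
# Illustrative sketch only: a stand-in for the approach described above, not the
# researchers' actual toolchain. A small neural network (here, scikit-learn's
# MLPRegressor playing the role of an NPU) learns to imitate an error-tolerant
# function from recorded inputs and outputs.
import numpy as np
from sklearn.neural_network import MLPRegressor

def sobel_magnitude(window):
    """Precise version: edge strength of a 3x3 pixel window (error-tolerant code)."""
    gx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
    gy = gx.T
    w = window.reshape(3, 3)
    return float(np.hypot((gx * w).sum(), (gy * w).sum()))

# 1. Profile: run the precise code on sample inputs, recording input/output pairs.
rng = np.random.default_rng(0)
X = rng.random((5000, 9))                      # flattened 3x3 pixel windows
y = np.array([sobel_magnitude(x) for x in X])

# 2. Train a small neural network to mimic the function.
npu_stand_in = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
npu_stand_in.fit(X, y)

# 3. Substitute: at runtime, call the model instead of the original code.
test = rng.random((3, 9))
exact = np.array([sobel_magnitude(x) for x in test])
approx = npu_stand_in.predict(test)
print("exact: ", np.round(exact, 3))
print("approx:", np.round(approx, 3))          # roughly right, not bit-for-bit exact
```

The point of the split is that only the approximable parts of an application ever reach the network; code with no tolerance for error keeps running on the ordinary digital path.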

The researchers aim to build complete systems -- processors, storage, and software -- that use this approximate computing model. They think they'll be able to run them at far lower voltages than conventional systems, which will save money on power and cooling. They've built their first NPUs using programmable chips, but they're now crafting NPUs out of analog circuits, which will be faster and use much less power than their digital equivalents. "Going analog is a huge efficiency gain and much more approximate too," Ceze said.
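
The article doesn't put numbers on those savings, but the standard CMOS dynamic-power relation, P ≈ α·C·V²·f, shows why lower voltage matters so much: switching power falls with the square of the supply voltage. The figures in this back-of-the-envelope sketch are hypothetical.

```python
# Back-of-the-envelope only: hypothetical numbers, using the standard CMOS
# dynamic-power relation P ~ alpha * C * V^2 * f.
def dynamic_power(capacitance_f, voltage_v, frequency_hz, activity=0.1):
    """Approximate switching power of a CMOS circuit, in watts."""
    return activity * capacitance_f * voltage_v ** 2 * frequency_hz

nominal = dynamic_power(1e-9, 1.0, 2e9)   # hypothetical chip at 1.0 V, 2 GHz
scaled  = dynamic_power(1e-9, 0.7, 2e9)   # same chip run at a reduced 0.7 V
print(f"power at 1.0 V: {nominal:.2f} W")
print(f"power at 0.7 V: {scaled:.2f} W  ({scaled / nominal:.0%} of nominal)")
```

Dropping the supply voltage by 30 percent roughly halves the switching power -- the catch being that at such low voltages, transistors no longer switch reliably, which is exactly the kind of error this programming model is built to tolerate.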

This approach makes some slight mistakes, so it doesn't work for all programming models. You wouldn't want to build a calculator this way. But for many types of programs -- image processing software, for example -- it's good enough.
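
A rough way to see the distinction is to inject a small, analog-style error into both kinds of work. The numbers below are made up for illustration, but they show why a sum of account balances can't absorb the error while an image barely registers it.

```python
# Illustrative comparison (hypothetical numbers): the same small hardware error
# that is unacceptable in exact arithmetic is barely measurable in image data.
import numpy as np

rng = np.random.default_rng(1)
noise = lambda shape: rng.normal(0, 0.01, shape)   # ~1% "analog" error

# A calculator-style task: summing account balances. Any drift is simply wrong.
balances = np.array([1200.00, 350.25, 9875.10])
print("exact total:      ", balances.sum())
print("approximate total:", (balances + balances * noise(balances.shape)).sum())

# An image-style task: the same ~1% error is invisible to the eye.
image = rng.random((480, 640))                     # grayscale image in [0, 1]
noisy = np.clip(image + noise(image.shape), 0, 1)
mse = np.mean((image - noisy) ** 2)
psnr = 10 * np.log10(1.0 / mse)                    # peak signal-to-noise ratio
print(f"image PSNR: {psnr:.1f} dB")                # ~40 dB: visually indistinguishable
```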

Image recognition, bioinformatics, data mining, large scale machine learning, and speech recognition could all work with analog computing, according to Burger. "We're doing an enormous number of things that intersect with the analog world in fundamental ways."

Burger and Ceze are not the only ones peering into an analog future. Last year, the Defense Advanced Research Projects Agency (DARPA) kicked off a program called UPSIDE, short for Unconventional Processing of Signals for Intelligent Data Exploitation, seeking to solve these same problems.

It will be a long time -- maybe 10 to 15 years -- before the systems that Burger describes have a chance of real-world use. But this may well be the way that the next generation of computers gets its juice. "We have no idea how far we can push this," Burger said. "But once you're willing to take a little bit of error and you break again this digital tyranny, you can start using again these devices that are noisy -- and you don't have to pay that enormous tax to guarantee a 1 or a 0."