For Google, Quantum Computing Is Like Learning to Fly

The chief researcher on Google's D-Wave quantum computer compares it to the Wright brothers at Kitty Hawk. "It has worked in principle. The thing flew."

At a NASA lab in Silicon Valley, Google is testing a quantum computer---a machine based on the seemingly magical principles of quantum mechanics, the physics of things like atoms and electrons and photons. This computer, called the D-Wave, carries a $10 million price tag, and the idea is that it can perform certain tasks exponentially faster than computers built according to the laws of classical physics---the physics of the everyday world.

The trouble is that even top quantum computing researchers can't quite tell whether the D-Wave will provide this exponential leap when applied to tasks that are actually useful, that can improve how the everyday world operates, that are more than experiments in a lab. But after several months with its D-Wave computer, Google believes that this machine can prove quite useful indeed.

In the future, says Hartmut Neven, who oversees Google's experiments with the D-Wave, it may significantly improve machine learning, identifying spoken words, understanding natural language, and, maybe one day, mimicking common sense.

Neven---who helped write the Google research paper, released earlier this week, that details the company's experiments---compares the D-Wave to the airplane the Wright brothers flew at Kitty Hawk in 1903. The Wright Flyer barely got off the ground, but it foretold a revolution. "Their airplane took a trajectory through the air," he says. "That's the point."

In the same way, he says, the D-Wave has solved problems following a flight path that defies the laws of classical physics. "In fact, the trajectory went through parallel universes to get to the solution," he says. "It is literally that. That is an amazing, somewhat historical, event. It has worked in principle. The thing flew."

What Has Quantum Computing Done for Me Lately?

That said, the message Neven delivers---and the message Google delivers in its paper---is measured. And it's not exactly the message some popular tech publications delivered after reading it. Headlines had Google proclaiming it had proven the D-Wave "actually works," that it's 100 million times faster than today's PCs. But that overstates the situation.

Google has shown the D-Wave can significantly outperform traditional chips in a few, very specific situations---and these situations are merely experimental. A computational problem "needs to be difficult enough for your quantum resources to start to matter," Neven says, and it must suit the particular architecture of the D-Wave. That said, Neven very much believes that if the company behind the D-Wave continues to improve the system, it could exceed the status quo in machine learning and other real-world tasks.

Other researchers are also hopeful. "There's a lot of promise," says Daniel Lidar, a University of Southern California researcher who also has worked with the D-Wave. "We're not quite there yet, but we're on the way." Some researchers, however, say we don't yet have evidence that the machine will ever be useful for real-world applications. "It's not better than the best classical code you can write," says Matthias Troyer, a professor of computational physics at ETH Zürich. "[Google] really fine-tuned the problems to give the D-Wave an advantage over classical algorithms."

Taking the Superposition

A British physicist named David Deutsch first proposed the idea of a quantum computer in 1985. A classical computer---the kind you're using to read this story---stores information in tiny transistors, and each transistor can hold a single "bit" of data. If the transistor is "on," it holds a "1." If it's "off," it holds a "0." But Deutsch proposed a machine that could store data in a quantum system, or "qubit." Thanks to the superposition principle of quantum mechanics, this qubit could store a "0" and a "1" simultaneously. And two qubits could hold four values at once: 00, 01, 10, and 11. Adding more and more qubits, you could, in theory, create a machine that was exponentially more powerful than a classical computer.
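
For the programmatically inclined, here's a rough sketch of that exponential bookkeeping---a classical simulation using NumPy arrays, not anything resembling real quantum hardware:

```python
# A minimal classical sketch (NumPy arrays, not real qubits) of why qubits
# scale exponentially: describing n qubits takes 2**n complex amplitudes.
import numpy as np

def uniform_superposition(n_qubits):
    """State vector with equal weight on every basis state."""
    dim = 2 ** n_qubits                     # 2 qubits -> 4 states: 00, 01, 10, 11
    return np.full(dim, 1 / np.sqrt(dim))   # squared amplitudes sum to 1

state = uniform_superposition(2)
for index, amplitude in enumerate(state):
    label = format(index, "02b")            # basis labels 00, 01, 10, 11
    print(f"|{label}>  amplitude={amplitude:.3f}  probability={amplitude**2:.3f}")

# Every added qubit doubles the vector; by 50 qubits a classical machine
# would already be tracking 2**50 (about a quadrillion) amplitudes.
print(f"amplitudes needed for 50 qubits: {2 ** 50:,}")
```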

If that's hard to wrap your head around, it's even harder to build a quantum computer that actually works. The rub is that when you look at a quantum system---read the information it holds---it decoheres. It becomes an ordinary bit that can hold only a single value. It no longer behaves like a quantum system. The trick lies in finding a way around this problem, and researchers have spent decades trying to do just that.

In 2007, D-Wave Systems, a company in British Columbia, unveiled a commercial machine it called a 16-qubit quantum computer. And it has since expanded this machine to more than 1,000 qubits. But these claims are controversial. For one, the D-Wave is not a "universal quantum computer," meaning it's not suited to just any type of calculation. It's designed to handle what are called "combinatorial optimization problems"---problems where a vast number of options are reduced to the best possible choice. Solving such problems is part of everything from genome sequence analysis to, yes, machine learning---but it's still unclear whether the machine can handle these tasks better than classical computers.
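
To make "combinatorial optimization" concrete, here's a toy-sized Python sketch: pick the combination of binary choices that minimizes a cost, by brute force. The cost matrix is invented purely for illustration, and real instances are far too large to enumerate---which is exactly why faster hardware matters:

```python
# A toy brute-force solver for a tiny combinatorial optimization problem:
# pick binary choices x that minimize a quadratic cost x^T Q x.
import itertools
import numpy as np

Q = np.array([[-1.0,  2.0,  0.0],
              [ 0.0, -1.0,  2.0],
              [ 0.0,  0.0, -1.0]])   # hypothetical problem coefficients

best_x, best_cost = None, float("inf")
for bits in itertools.product([0, 1], repeat=3):   # all 2**3 candidate choices
    x = np.array(bits)
    cost = x @ Q @ x                               # score this combination
    if cost < best_cost:
        best_x, best_cost = x, cost

print("best choice:", best_x, "cost:", best_cost)
# The catch: the candidate count doubles with every added variable, so a
# 100-variable problem has 2**100 options and enumeration becomes hopeless.
```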

The Landscape Metaphor

The latest D-Wave, the D-Wave 2X, contains about 1,000 superconducting circuits---tiny loops of flowing current. The machine cools these circuits to almost absolute zero, and at that temperature, the circuits enter a quantum state in which current flows clockwise and counterclockwise simultaneously. The machine then uses various algorithms to run particular calculations across these qubits, essentially by determining the probability that certain circuits will emerge in a particular state when the system raises the temperature.

The aim is to achieve what is called quantum annealing, a step well beyond a classical practice called simulated annealing. Simulated annealing is a way of searching for a mathematical solution. In describing simulated annealing, computer scientists use the metaphor of a landscape. It's like looking for the low point in a vast expanse of rolling hills. You travel up and down the hills until you find the deepest valley. But with quantum annealing, you can find that valley by moving *through* the hills---or, at least, that's the metaphor.

"Classical system can only give you one route out. You have to walk up over the next ridge and peak behind it," Neven says, "while quantum mechanisms give you another escape route, by going through the ridge, going through the barrier."

For a while, researchers questioned whether the D-Wave really did offer quantum annealing. But Google is now confident that it does. Others agree. "There is good---fairly strong---evidence that quantum annealing is going on," Lidar says. "There are very few doubts left that there are indeed quantum effects at work and that they play a meaningful computational role." And in certain situations, Google says, this quantum annealing can outperform simulated annealing that runs on a single-core classical processor, running calculations about 10⁸ (100 million) times faster.

To explain this, Neven returns to the landscape metaphor. If you only have a few small hills, then quantum annealing isn't much better than simulated annealing. But if the landscape is extremely varied, the tech can be very effective. "When the landscape is very rugged, with tall mountain ridges, then quantum resources help," he says. "It depends on how wide the barrier is."

Quantum Neural Nets

For skeptics like Troyer, Google's tests still don't show the D-Wave will be useful for real applications. But Neven says that as time goes on and the world generates more online data, optimization problems will only get harder---making them better suited to the kind of architecture the D-Wave provides. At the moment, he says, it's difficult to feed such problems into the D-Wave. In fact, it works well with only a small subset of these hard problems. "It is not so easy to represent such problems, to input such problems," he says. "But it's possible." As the machine evolves, Neven says, this will get easier.

In particular, Neven argues that the machine will be well suited to deep learning. Deep learning relies on what are called neural networks---vast networks of machines that mimic the web of neurons in the human brain. Feed enough photos of a dog into these neural nets, and they can learn to recognize a dog. Feed them enough human dialogue, and they may learn to carry on a conversation. That's the aim, at least, and Neven sees the D-Wave as a potential means of reaching so lofty a goal. With quantum annealing, a neural net could potentially analyze far more data, far more quickly. "Deep neural net training would essentially amount to finding the lowest point in a very rugged energy landscape," he says.
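
To see why Neven frames training as a landscape search, here's a deliberately crude sketch that reuses the rugged toy terrain from above as a stand-in "loss": plain gradient descent slides into the nearest valley and stays there, which is exactly the situation where a method that can cross or tunnel through barriers starts to look attractive. (This is a cartoon, not how Google trains anything.)

```python
# A cartoon of training-as-energy-minimization: plain gradient descent on
# the same rugged toy landscape used earlier, treated as a "loss" over one
# weight w. Step size and starting point are invented for illustration.
import math

def loss(w):
    return math.sin(5 * w) + 0.1 * (w - 2) ** 2    # rugged toy training loss

def grad(w):
    return 5 * math.cos(5 * w) + 0.2 * (w - 2)     # its exact derivative

w = -4.0                                           # start far from the best valley
for _ in range(5000):
    w -= 0.01 * grad(w)                            # standard gradient step

print(f"gradient descent settled at w={w:.2f}, loss={loss(w):.2f}")
# It stops in the first valley it reaches (around w = -4.0, loss near 2.7),
# well above the deepest valley near w = 2.2 (loss near -1.0): trapped
# behind ridges it has no way to climb or tunnel through.
```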

But Neven says this will require a system with more qubits and more connections between them---connections that allow for more communication from qubit to qubit. "The D-Wave qubits are very sparsely connected. ... That doesn't suit a neural net. You have to connect each qubit to so many more," he says. "Connectivity has to get denser. If you make these qubits denser, that's one step closer to representing these rugged energy landscapes."

Building such a system could take years. But that's to be expected. Think how long it took to build a viable jetliner after the first flight at Kitty Hawk. "Are we ready to bring the luggage and the family in and fly over to some other country? Not yet," Neven says. "But, in theory, it works."