Plastic synapses offer hardware alternative to neural networks

Plastic man will have plastic brain, probably does feel pain.

Neural networks were all the rage for a while, but progress eventually slowed and interest cooled. Then, as computing power increased, the field experienced a renaissance, and deep learning became the new big thing.

Throughout this ebb and flow of interest, there has been an underlying, annoying fact: neural networks as currently implemented are not that great. Especially when you compare them with the brain of... well, pretty much any creature. Researchers have been trying to make neural networks that have all the advantages of the brain (and none of the disadvantages) for as long as the field has existed. And it may be that they've gone about it wrong. Now, some new work is suggesting that the only way to get the advantages of the brain is to accept the disadvantages as well.

Brains vs. memristors

The brain has two features that no inorganic computer has. One is that it is highly interconnected. Each neuron may be connected to a vast number of other neurons—not just neighbors, but also neurons that are well separated spatially. This natural interconnectedness is what makes the brain such a powerful computational tool. The brain is also highly efficient. A synapse—the connection between two neurons—consumes at most 100 femtoJoules per event. Once you realize that the entire human body runs on about the same power as a 120W light bulb, you can see that the efficiency of the brain is just astounding.

That efficiency and interconnectedness come with a downside. Each synapse fires just a few times per second. Compared with inorganic devices that can switch millions of times per second, this is disappointing. So what we'd love to do is combine the best of both worlds: high interconnectivity, low energy consumption (per operation), and many operations per second. Then you'd have the ultimate computing machine.

But synapses are not like transistors in a digital system. Synapses have multiple states, and they remember their state for long periods of time. This state influences how readily they are activated and deactivated. In addition to their history, neurons are activated by signals from other neurons, and the degree of activation scales linearly with the stimulus. So a synapse that has received 50 pulses from other neurons is about twice as charged as one that has received 25 pulses (assuming they have the same history).

In the inorganic world, this sort of behavior is seen in a specific device: a memristor. Unfortunately, for inorganic memristors, there seems to be an inherent trade-off in this behavior. Think of it like this: if it takes very little energy to change the state of a device, then the device is subject to noise, and the computation becomes unreliable. Increasing the energy required to change the state makes it more robust, but also less energy efficient. The increased energy consumed per operation also limits the speed of operations because, even when speed is used as a marketing gimmick, no one actually likes to melt their computer.

Just to top it off, memristors aren't very linear. The amount of activation is not doubled by doubling the stimulus. To be sure, you can still use memristors in neural networks. It's just that they will be limited.

Going organic

An alternative is to use organic devices. In the past, organic devices have kind of sucked because they are slow, not particularly linear, and not hugely energy-efficient. But researchers have come up with a new device that seems to overcome these problems.

The researchers used a system that looks very much like a simplified synapse. A trio of molecules forms a kind of extended redox pair (yes, I'm aware that a pair made from a trio is logically inconsistent, but bear with me). In this redox system, when a positive voltage is applied, charge is transferred from one molecule to a second. The second molecule then strips a hydrogen from the third to neutralize its newly acquired charge, and the conductivity of the polymer drops. A negative voltage reverses the effect, transferring the hydrogen back to its parent molecule and increasing the conductivity.

This has several advantages. Since it's only hydrogen atoms and electrons moving around, changing the conductivity of the electrode happens much faster than in traditional organic memristors, which often require moving entire molecules around.

Another advantage is that the conductivity changes linearly with the number of hydrogen atoms that have jumped the gap. And the hydrogen atoms are only free to move if the voltage is above a certain threshold. So to change the conductivity, you simply apply a voltage pulse of a certain duration. This scales easily because there are lots of molecules—or more precisely, many repeating units of a polymer—in a single device. To keep increasing the conductivity, you just apply more pulses and shuffle around a few more hydrogens.
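To make that write mechanism concrete, here is a minimal toy model of such a pulse-programmed device in Python. The threshold, step size, and conductance bounds are illustrative assumptions of mine, not values from the paper; the point is only the behavior: sub-threshold pulses leave the state alone, and each supra-threshold pulse nudges the conductance by one fixed step.

```python
# A toy model of the pulse-programmed organic synapse described above.
# The numbers (threshold, step size, conductance bounds) are illustrative
# assumptions, not values from the paper.

class ToySynapse:
    """Non-volatile two-terminal device whose conductance moves by a fixed
    step per voltage pulse, but only when the pulse exceeds a threshold."""

    def __init__(self, g=0.5, g_min=0.0, g_max=1.0,
                 step=0.01, v_threshold=0.5):
        self.g = g                      # current conductance (arbitrary units)
        self.g_min, self.g_max = g_min, g_max
        self.step = step                # conductance change per write pulse
        self.v_threshold = v_threshold  # pulses below this do nothing

    def pulse(self, voltage):
        """Apply one voltage pulse. Positive pulses shuttle hydrogens away
        and lower the conductance; negative pulses return them and raise it."""
        if voltage >= self.v_threshold:
            self.g -= self.step
        elif voltage <= -self.v_threshold:
            self.g += self.step
        # sub-threshold pulses leave the state untouched (non-volatile)
        self.g = min(self.g_max, max(self.g_min, self.g))
        return self.g


if __name__ == "__main__":
    syn = ToySynapse()
    # Ten identical negative pulses raise the conductance by ten equal steps,
    # which is the linearity the researchers are after.
    for _ in range(10):
        syn.pulse(-1.0)
    print(f"conductance after 10 write pulses: {syn.g:.2f}")
    # A small read voltage does not disturb the stored state.
    syn.pulse(0.1)
    print(f"conductance after a read pulse:    {syn.g:.2f}")
```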

If this sounds familiar, it should—this is kind of how your own synapses work.

Energy efficiency

Since each molecule's charge state changes by only a single electron, the molecules don't interfere with each other. Many voltage pulses can be applied, each changing the conductivity of the polymer by the same amount. This sort of linearity is normally very difficult to achieve, and it is highly desirable because it makes computations easier.

We aren't done yet, though. The power consumption also looks more like that of a natural synapse than that of an inorganic device. The researchers got down to about a picoJoule per event, only about ten times the 100 femtoJoule upper estimate for a single synapse. However, the researchers' electrode is on the order of a micrometer across, and they estimate that shrinking the electrode area would cut the energy per operation by a further factor of a million.
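As a quick sanity check on that claim, here is the arithmetic, under the assumption (mine, not the paper's) that the write energy scales with electrode area:

```python
# Back-of-the-envelope check of the scaling claim, assuming the write energy
# scales with electrode area (an assumption for illustration, not a statement
# from the paper).
energy_now = 1e-12   # ~1 pJ per write at a micrometer-scale electrode
area_factor = 1e-6   # the quoted factor-of-a-million reduction
# A millionfold drop in area corresponds to a ~1,000x shrink in linear size,
# i.e. roughly micrometer-scale electrodes going to nanometer-scale ones.
print(f"projected energy per write: {energy_now * area_factor:.0e} J")  # ~1e-18 J
```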

Not content with demonstrating the energy scaling and linearity, the researchers also constructed a series of neural networks. One was fairly simple, designed to replicate Pavlov's experiment. For those who don't know, in Pavlov's experiment, dogs drooled when Pavlov held dog food. He noted that they did not drool when he rang a bell. However, after a training period that involved holding food while ringing a bell, he found that ringing the bell alone was sufficient to start his dogs drooling. The dogs had associated the bell with food.

Pavlov's experiment can be replicated with just three neurons. One neuron is designed to fire when it "hears" a bell, and a second is designed to fire when it "sees" food. If the second neuron fires, so does a third, causing the computer to drool. At the start of the experiment, only the food neuron causes drooling. However, after some training, the bell neuron can also cause drooling. This is a well-reported experiment, and replicating it here was just the beginning.
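For a sense of how little machinery this takes, here is a minimal software sketch of the same three-neuron setup. The weights, threshold, and learning rate are illustrative guesses of mine (the actual demonstration used the organic devices as the trainable connections), but the associative behavior is the same:

```python
# A minimal, three-neuron sketch of the Pavlov demonstration: two input
# neurons (bell, food) feed a single output ("drool") neuron through
# synaptic weights. The numbers here are illustrative, not from the paper.

THRESHOLD = 1.0
LEARNING_RATE = 0.25

weights = {"food": 1.5, "bell": 0.0}   # food alone triggers drooling; bell does not

def drools(bell, food):
    """Return True if the output neuron fires for the given inputs (0 or 1)."""
    activation = weights["bell"] * bell + weights["food"] * food
    return activation >= THRESHOLD

def train_step(bell, food):
    """Hebbian-style update: if the output fires while the bell input is
    active, strengthen the bell -> drool synapse."""
    if drools(bell, food) and bell:
        weights["bell"] += LEARNING_RATE

if __name__ == "__main__":
    print("before training, bell alone ->", drools(bell=1, food=0))   # False
    for _ in range(4):                   # ring the bell while showing food
        train_step(bell=1, food=1)
    print("after training, bell alone  ->", drools(bell=1, food=0))   # True
```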

Constructing physical neural networks is often challenging, so the researchers used a model of their neuron to calculate the performance of a more-useful neural network. In this case, the researchers implemented a three-layer network and trained it to recognize handwritten characters. The limitations of the network are such that the maximum accuracy (which depends on the number of pixels in the image) is around 99 percent for a 784-pixel image. The researchers predict that a real implementation of their memristor neurons will end up at 97 percent.
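To put that architecture in concrete terms, here is a bare-bones sketch of a three-layer network of the sort described, written with ordinary software weights. The hidden-layer size, learning rate, and stand-in data are arbitrary choices of mine; the researchers' simulation instead used a model of their device for the connections and was trained on real handwritten characters.

```python
# A sketch of a three-layer network: 784 inputs (a 28x28 handwritten
# character), one hidden layer, and 10 output classes. Sizes and data are
# placeholders for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes: 784 -> 300 -> 10 (the hidden size is an arbitrary choice here).
W1 = rng.normal(0, 0.05, (784, 300))
W2 = rng.normal(0, 0.05, (300, 10))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x):
    h = sigmoid(x @ W1)           # hidden-layer activations
    return h, sigmoid(h @ W2)     # class scores

def train_step(x, target, lr=0.1):
    """One step of plain backpropagation on a single example."""
    global W1, W2
    h, y = forward(x)
    err_out = (y - target) * y * (1 - y)          # output-layer error
    err_hid = (err_out @ W2.T) * h * (1 - h)      # hidden-layer error
    W2 -= lr * np.outer(h, err_out)
    W1 -= lr * np.outer(x, err_hid)

if __name__ == "__main__":
    # Stand-in data: a random "image" labelled as class 3.
    x = rng.random(784)
    target = np.eye(10)[3]
    for _ in range(100):
        train_step(x, target)
    print("predicted class:", forward(x)[1].argmax())   # -> 3
```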

This is all pretty great, but there's a caveat: it takes about 14ms for a voltage pulse to change the conductivity, since we have to wait for a chemical reaction to happen. This is about the same as a natural synapse. I often confuse irony with rain on my wedding day, but I'm pretty sure this counts as ironic. After a large amount of effort, researchers have shown that if you want a neuron that is energy-efficient, linear, and stable, you end up operating at a speed that is remarkably close to that of the neurons in our brain.

This result has implications. If this approach does turn out to be the way forward, neural networks built this way will not operate faster than our own brains. And it will take a lot of development before artificial neural networks are as interconnected as the neurons of our own brains. If neural networks are going to show an advantage, it will have to come from other directions.

Nature Materials, 2017, DOI: 10.1038/NMAT4856
