Why neuromorphic engineering triggered an analog revolution

Maybe we can't keep packing transistors onto substrates the way Gordon Moore showed us how to do. So what if we replaced those millions of transistors with components "inspired by the true story" of the brain?
Written by Scott Fulton III, Contributor

[Image: Close-up of "Trees and Undergrowth" by Vincent Van Gogh, 1887. Part of the collection of the Van Gogh Museum in Amsterdam. Photograph in the public domain.]

The word "neuromorphism" means "taking the form of a brain." It doesn't imply how well that brain functions, or how smart its bearer should be. Rather, it's the study of the mechanism of the brain: Why does it remember information, and with what? How many neurons have to "fire" before a decision is made? Can a pathology be reversed? A brain has many functions, some of which we're not all that familiar with, but a neuromorphic device may be made to model just one.

For a device or mechanism to be "artificially intelligent," it should be given the means to perform tasks that, for a human being, would have required intelligence. An AI program or algorithm could render an analysis, or the result of a simulation, which may seem smart enough — at least until a smarter algorithm comes along. A neuromorphic device, on the other hand, may function in some aspect in a way that's mechanically analogous to our understanding of some part of the brain (assuming, of course, we actually do understand the brain). That part may perhaps be memory, logic, calculation, or optimization of a method. It isn't exactly accurate to say that such a device qualifies as AI, because intelligence (the real kind) requires all of these functions working in tandem, while a neuromorphic mechanism typically mimics just one.

"A neuromorphic architecture is literally made up of neurons and synapses," explained Dr. Catherine Schuman, a research scientist at Oak Ridge National Laboratory.  "In the hardware, those are often physically implemented as neurons and synapses. And there's lots of them. . . and they operate in a massively parallel way — all of the neurons and synapses potentially operating together."

Symbology vs. morphology

There are a number of types and styles of artificial intelligence, but there's a key difference between the branch of programming that looks for interesting solutions to pertinent problems, and the branch of science seeking to model and simulate the functions of the human brain.

  • Neuromorphic computing, which includes the production and use of some forms of neural networks, deals with proving the efficacy of any concept of how the brain performs its functions — not just reaching decisions, but memorizing information and even deducing facts.
  • Neuromorphic engineering is the science of creating new architectures for computing devices, modeled after analogies for how the brain operates. Many of these architectures are not digital at all, but rather electro-mechanical. That is to say, they're not von Neumann architectures (pronounced "NOY · man").

There is a critical difference between neural hardware — the architecture of processors designed to run neural networks, or modes of computing based on the theory of neurons — and neuromorphic engineering. It's one that may seem subtle on the surface, but is actually so profound that I've rewritten and republished this article to take account of it.

"People often lump these two together, but they are actually very different," remarked Dr. Schuman.  "Neuromorphic computer systems implement a different type of neural network computation: spiking recurrent neural networks [SRNN]. And they can be suitable for neuroscience simulation. They're taking a little bit more inspiration from biology, than your neural hardware systems."

[Image: Neural network diagram. Credit: Scott Fulton III]

Neural networking is a means of simulating "learning" by symbolizing a theory of pattern processing — how the brain retains information, and thus from one perspective, "learns." This symbolism derives from a theory of cognition dating back to the turn of the 20th century, and the work of Spanish neuroscientist and artist Santiago Ramón y Cajal. Neural nets don't take physical form; rather, they use algebraic symbols to represent the relevant properties of neurons with respect to a computation — in this case, the "weight" which, as depicted above, gives rise to memory, making it easier for electricity to travel down "remembered" paths. All "deep learning" (DL) research is patterned to some degree after neural networking symbolism.
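To see just how symbolic this is, here is a minimal sketch, in plain Python with NumPy and tied to no particular framework, of a single artificial neuron: the "synapses" are nothing but numbers, and "remembering" a pattern just means nudging those numbers toward values that favor it.

    # A single artificial "neuron," sketched in plain Python/NumPy.
    # Each connection is just a number (a weight); "learning" means nudging
    # those numbers so that a remembered input produces the desired output.
    import numpy as np

    rng = np.random.default_rng(0)
    weights = rng.normal(scale=0.1, size=3)      # three purely symbolic "synapses"

    def neuron(inputs, weights):
        # Weighted sum squashed into the range (0, 1) -- the classic abstraction
        return 1.0 / (1.0 + np.exp(-np.dot(inputs, weights)))

    x, target = np.array([1.0, 0.0, 1.0]), 1.0
    for _ in range(100):
        out = neuron(x, weights)
        # Strengthen the paths that were used, in proportion to the error
        weights += 0.5 * (target - out) * out * (1.0 - out) * x

    print(round(float(neuron(x, weights)), 3))   # drifts toward the target of 1.0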

But this is not neuromorphic, as Dr. Schuman points out, because it relies upon symbology rather than morphology. Granted, even John von Neumann, the father of digital computing, claimed to have been inspired by modeling the calculating processes of the brain (back when we knew even less about them). Charles Babbage, who invented the mechanical Difference Engine, was so fascinated by the brain that he donated half of his own to Britain's Science Museum.

The point, as with any work of art, is whether we represent our subject with realism or surrealism. Neuromorphic science goes beyond the question of whether our understanding of reasoning may be symbolized by the values stored electronically by a digital computer. It studies whether devices themselves may be constructed using mechanisms — be they solid-state, analog, or a mix of the two — that function the way we believe the brain could function. Some architectures go so far as to model the brain's perceived plasticity (its ability to modify its own form to suit its function) by provisioning new components based on the needs of the tasks they're currently running.

If the long-range goal of neuromorphic engineering isn't really to create brains-on-a-chip, or to grow brains in a jar, engineers may very happily settle upon just one simple discovery: why brains require so little energy to maintain so much information and produce so much analysis. You may have noticed already that we're not all plugged into transformers attached to diesel generators. Even a moderately accurate simulation of brain activity may not be necessary, if a few wild conjectures could lead to a mechanism that yields the kind of observations that may only come from intuition, powered by little more than a lantern battery.

Examples of neuromorphic engineering projects

Today, there are several academic and commercial experiments under way to produce working, reproducible neuromorphic models, including the following:

SpiNNaker

[Image: The SpiNNaker machine at the University of Manchester]

SpiNNaker [pictured above] is a low-grade supercomputer developed by engineers with Germany's Jülich Research Centre's Institute of Neuroscience and Medicine, working with the UK's Advanced Processor Technologies Group at the University of Manchester. One of SpiNNaker's principal jobs has been to use about 540,000 Arm processing cores (there have been multiple citations; this one comes from the project itself) to simulate the functions of so-called cortical microcircuits — models of how the neurons in a mammalian brain are hard-wired. SpiNNaker is now conducting what is believed to be the largest neural network simulation to date, involving about 200,000 neurons connected by some 1,000,000 plastic synapses. (By "plastic" in this context, we mean adaptable, not polymer.)

[Image: Andrew Rowley of the SpiNNaker project, November 2019]

The "NN" in SpiNNaker is capitalized to emphasize the machine's role in testing Spiking Neural Network architecture (SNN), which is especially suited for neuromorphic architectures because it benefits directly from the plasticity of physical synapses. As Manchester University Research Fellow Andrew Rowley explained during a November 2019 conference, "The idea is to make a million-core machine, with a million mobile phone-like processors, and the idea — at least originally — was to model about 1 percent of the human brain, or 10 mice." Rowley conceded they'll settle for one mouse at this point.
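For readers who want a feel for what "spiking" means in practice, here is a toy leaky integrate-and-fire neuron, the basic unit of an SNN of the sort SpiNNaker simulates. The constants below are illustrative only and are not drawn from SpiNNaker's own models.

    # A toy leaky integrate-and-fire (LIF) neuron -- the basic unit of a spiking
    # neural network. All constants here are illustrative, not SpiNNaker's own.
    def simulate_lif(input_current, v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0,
                     tau_m=10.0, dt=1.0):
        v = v_rest                      # membrane potential, in millivolts
        spike_times = []
        for t, i_in in enumerate(input_current):
            # The potential leaks back toward rest while integrating its input
            v += (dt / tau_m) * ((v_rest - v) + i_in)
            if v >= v_thresh:           # threshold crossed: fire a spike, then reset
                spike_times.append(t)
                v = v_reset
        return spike_times

    print(simulate_lif([20.0] * 50))    # a steady drive yields a regular spike train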

BrainScaleS

[Image: BrainScaleS HICANN-DLS prototype ASIC. Credit: Human Brain Project]

Like SpiNNaker, BrainScaleS likes to mix up its capital letters, and is also funded by the European Union's Human Brain Project. But unlike SpiNNaker, BrainScaleS is a long-running, Heidelberg, Germany-based effort to take the models that SpiNNaker is digitally simulating using Arm cores, and deploy them to a physically modeled, biomimetic platform. At the foundation of this platform are chips based on its own integrated circuit [diagrammed above], classified as a High Input Count Analog Neural Network (HICANN) chip.

[Image: The assembled BrainScaleS system. Credit: Human Brain Project]

According to a May 2020 document submitted to the European Union [PDF], when fully assembled [above], BrainScaleS model NM-PM-1 consists of five 19-inch racks supporting a total of 20 wafer modules (the neuromorphic counterpart for a "server") plus power supply and cooler. Every wafer module includes 384 of these HICANN processors, each of which represents 114,688 dynamic synapses for 512 neurons. Neurons are typically "placed" on wafers automatically, although it's possible, using a domain-specific Python plugin called marocco, to specify neuron placement and configuration manually.
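As a rough illustration of what describing such a network in Python looks like, here is a minimal PyNN-style script. The NEST backend imported below is only a stand-in for the BrainScaleS hardware backend, and the marocco placement calls themselves are omitted, so treat this as a sketch of the style of configuration rather than the project's actual setup.

    # Illustrative only: a small spiking population described with PyNN's generic
    # API. On BrainScaleS, a hardware backend and the marocco placement plugin
    # would be used in place of the NEST simulator imported here.
    import pyNN.nest as sim

    sim.setup(timestep=0.1)

    # 512 integrate-and-fire neurons -- the per-HICANN neuron count cited above
    neurons = sim.Population(512, sim.IF_cond_exp())
    stimulus = sim.Population(64, sim.SpikeSourcePoisson(rate=20.0))

    # Sparse random connectivity standing in for the chip's dynamic synapses
    sim.Projection(stimulus, neurons,
                   sim.FixedProbabilityConnector(0.1),
                   sim.StaticSynapse(weight=0.01, delay=1.0))

    neurons.record("spikes")
    sim.run(200.0)
    sim.end()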

Intel Loihi

Intel is experimenting with what it describes as a neuromorphic chip architecture, called Loihi (lo · EE · hee). Until very recently, Intel was reluctant to share extensive details of Loihi's architecture, though we now know, by way of reporting from ZDNet's Charlie Osborne, that Loihi is producible using a form of the same 14 nm lithography techniques Intel and others employ today to build x86 processors. For understandable and perhaps predictable reasons, Intel is designing Loihi to function as a kind of co-processing device for x86 systems.

[Image: An Intel neuromorphic system built from Loihi chips. Credit: Tim Herman for Intel Corp.]

Loihi was first announced in September 2017 and officially premiered the following January at CES 2018. Its microcode (its instructions at the chip level) includes statements designed specifically for training a neural net. But as with SpiNNaker, which the Human Brain Project explicitly describes as a simulation of neuromorphic processes as opposed to the real deal (BrainScaleS), Loihi's design has been called "brain-inspired" — which may be a bit like saying, "Inspired by a true story" — precisely because these instructions are microcoded, as opposed to hard-wired. A cluster of 64 Loihi chips forms a neuromorphic accelerator, such as the model shown above. This accelerator was made available to the research community in July 2019, and may be attached to a standard x86 motherboard like a typical FPGA accelerator card (only somewhat larger).

IBM TrueNorth

IBM maintains a Neuromorphic Devices and Architectures Project, which has been actively involved with new experiments in analog computation. This year, even amid the pandemic, the company has stepped up its efforts to develop at least a respectable simulator for neuromorphic activity, using conventional chip fabrication.

In a research paper published in the June 2018 edition of the journal Nature, the IBM team demonstrated how its non-volatile phase-change memory (PCM), which relies on the crystalline or amorphous state of antimony, accelerated the feedback or backpropagation algorithm associated with neural nets. These researchers are now at work determining whether PCM can be utilized in modeling synthetic synapses, replacing the static RAM-based arrays used in its earlier TrueNorth and NeuroGrid designs. As engineers will point out, like SpiNNaker and Loihi, TrueNorth is not so much physically neuromorphic as a simulator of neuromorphic principles.  IBM Research now correctly refers to TrueNorth as a "brain-inspired" design.
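For context, the arithmetic that analog arrays such as PCM aim to accelerate is the matrix multiply and outer-product weight update at the heart of backpropagation. The sketch below shows that step for a single layer in plain NumPy; it is purely illustrative and is not IBM's implementation.

    # The backpropagation step that in-memory analog devices aim to speed up:
    # matrix products on the way forward, an outer-product weight update on the
    # way back. Illustrative only -- not IBM's implementation.
    import numpy as np

    rng = np.random.default_rng(1)
    W = rng.normal(scale=0.1, size=(4, 8))       # one layer of "synaptic" weights

    def forward(x):
        return np.tanh(W @ x)                    # the multiply analog crossbars excel at

    x = rng.normal(size=8)
    target = np.ones(4)

    y = forward(x)
    delta = (y - target) * (1.0 - y ** 2)        # error, pushed back through tanh
    W -= 0.1 * np.outer(delta, x)                # weight update, done in place

    print(float(np.mean((forward(x) - target) ** 2)))   # the error shrinks after one step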

"One of the most appealing attributes of these neural networks is their portability to low-power neuromorphic hardware," reads a September 2018 IBM neuromorphic patent application [PDF], "which can be deployed in mobile devices and native sensors that can operate at extremely low power requirements in real-time. Neuromorphic computing demonstrates an unprecedented low-power computation substrate that can be used in many applications."

Why bother experimenting with neuromorphic designs?

You may have noticed something about human beings: They've become rather adept with just the brains they have, without the use of fiber optic links to cloud service providers. For some reason, brains are evidently capable of learning more, without the raw overhead of binary storage. In a perfect world, a neural net system should be capable of learning just what an application needs to know about the contents of a video, for example, without having to store each frame of the video in high resolution.

Conceivably, while a neuromorphic computer would be built on a fairly complex engine, once mass-produced, it could become a surprisingly simple machine. We can't exactly grow brains in jars yet (although we may have good reason to wait for the announcement). But if we have a plausible theory of what constitutes cognition, we can synthesize a system that abides by the rules of that theory, perhaps producing better results using less energy and an order of magnitude less memory.

As research began in 2012 toward constructing working neuromorphic models, a team of researchers, including scientists at the California NanoSystems Institute at UCLA, wrote the following [PDF]:

Although the activity of individual neurons occurs orders of magnitude slower (ms) than the clock speeds of modern microprocessors (ns), the human brain can greatly outperform CMOS computers in a variety of tasks such as image recognition, especially in extracting semantic content from limited or distorted information, when images are presented at drastically reduced resolutions. These capabilities are thought to be the result of both serial and parallel interactions across a hierarchy of brain regions in a complex, recurrent network, where connections between neurons often lead to feedback loops.

Self-synthesis

A truly neuromorphic device, its practitioners explain, would include components that are physically self-assembling. Specifically, they would involve atomic switches whose magnetic junctions would portray the role of synapses, or the connections between neurons. Devices that include these switches would behave as though they were originally engineered for the tasks they're executing, rather than as general-purpose computers taking their instructions from electronic programs.

Such a device would not necessarily be tasked with AI applications to have practical use. Imagine a set of robot controllers on a factory floor, for instance, whose chips could realign their own switches whenever they sensed alterations in the assemblies of the components the robots are building. The Internet of Things is supposed to solve the problem of remote devices needing new instructions for evolved tasks, but if those devices were neuromorphic by design, they might not need the IoT at all.

Here's something that neuromorphic engineers have pointed out — a deficiency in general computer chip design that we rarely take time to consider: As Moore's Law compelled chip designers to cram more transistors onto circuits, the number of interconnections between those transistors multiplied over and over again. From an engineering standpoint, the efficiency of all the wire used in those interconnections degraded with each chip generation. Long ago, we stopped being able to communicate with all the logic gates on a CPU during a single clock cycle.

Had chip designs been neuromorphic one or two decades ago, we would not have needed to double the number of transistors on a chip every 12 to 18 months to attain the performance gains we've seen — which were growing smaller and smaller anyway. If you consider each interconnection as a kind of "virtual synapse," and if each synapse were rendered atomically (or, to borrow the neuromorphic term for it, electroionically), chips could adapt themselves to best service their programs.

Can neuromorphic machines simulate consciousness yet?

Some of the most important neuromorphic research began in 2002, oddly enough, in response to a suggestion by engineers with Italy's Fiat. They wanted a system that could respond to a driver falling asleep at the wheel. Prof. James K. Gimzewski of UCLA's California NanoSystems Institute (CNSI) responded by investigating whether an atomic switch could be triggered by the memory state of the driver's brain. Here is where Gimzewski began his search for a link between nanotechnology and neurology — for instance, into the measured differences in electric potential between signals recorded by the brain's short-term memory and those recorded by long-term memory.

Shining a light on that link from a very high altitude is UC Berkeley Prof. Walter Freeman, who in recent years has speculated about the relationship between the density of the fabric of the cerebral cortex, and no less than consciousness itself — the biological process through which an organism can confidently assert that it's alive and thinking. Freeman calls this thick fabric within the neocortex that forms the organ of consciousness the neuropil, and while Gimzewski's design has a far smaller scale, he's unafraid to borrow that concept for its synthetic counterpart.

Gimzewski premiered his research during the 2014 Bristol Nanoscience Symposium. There, he showed photographs of a grid of copper posts at near-micron scale that had been treated with a silver nitrate solution. Once exposed to gaseous sulfur, the silver atoms form nanowires from point to point on the grid — wires which behave, at least well enough, like synapses.

"We found that when we changed the dimension of the copper posts," said Prof. Gimzewski, "we could move... to more nanowire structures, and it was due to the fact that we can avoid some instabilities that occur on the larger scale. So we're able to make these very nice nanowire structures. Here you can see, you can have very long ones and short ones. And using this process of bottom-up fabrication, using silicon technology, [as opposed to] top-down fabrication using CMOS process... we can then generate these structures... It's ideal, and each one of these has a synthetic synapse."

The CNSI team's fabrication process is capable, Gimzewski claims, of depositing 1 billion synaptic interconnects per square centimeter. (In March 2017, Intel announced it managed to cram, to use Gordon Moore's word for it, 100 million transistors onto a one square-centimeter CPU die.)

What makes a neuromorphic chip more analog?

There's one school of thought that argues that, even if a sequence of numerals is not truly random, it won't matter so long as the device drawing inferences from that data can't tell the difference. All neural network models developed for deterministic systems operate under this presumption.

The counter-argument is this: When a neural network is initialized, its "weights" (the determinants of the axons' values) must be randomized. To the extent that one random pattern can be similar or identical to another, that similarity must be attributed as a bias, and that bias reflects negatively on any final result.
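A tiny example makes the point: a pseudorandom generator fed the same seed reproduces exactly the same "random" initialization, which is precisely the kind of hidden regularity the argument above treats as bias, while a hardware entropy source has no seed to repeat.

    # Two "random" initializations drawn from the same pseudorandom seed are not
    # merely similar but identical -- deterministic randomness repeats itself.
    import os
    import numpy as np

    w1 = np.random.default_rng(seed=42).normal(size=5)
    w2 = np.random.default_rng(seed=42).normal(size=5)
    print(np.array_equal(w1, w2))                # True, every time

    # A hardware entropy source has no seed to replay
    raw = np.frombuffer(os.urandom(5 * 8), dtype=np.uint64)
    w3 = raw / np.iinfo(np.uint64).max           # scaled into [0, 1]
    print(w3)                                    # different on every run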

Why real randomness matters

There's also this: Electromechanical components may be capable of introducing the non-deterministic elements that cannot be simulated within a purely digital environment, even when we put blinders on. Researchers at Purdue University are experimenting with magnetic tunnel junctions (MTJ) — two ferromagnetic layers sandwiching a magnesium oxide barrier. An electric current can tease a magnetic charge into jumping through the barrier between layers. Such a jump may be analogous to a spike.

An MTJ exhibits a behavior that's reminiscent of a transistor, teasing electrons across a gap. In this case, the MTJ enables a division of labor where the receiving ferromagnetic layer plays the role of the axon, and the tunnel in between portrays the synapse.

The resulting relationship is genuinely mechanical, where the behavior of charges may be described, just like real neurons, using probability. So any errors that result from an inference process involving MTJs, or components like them, will not be attributable to bias that can't be helped due to determinism, but instead to bugs that may be corrected with the proper diligence. For the entire process to be reliable, the initialized values maintained by neurons must be truly randomized.
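Here is a toy model of that behavior: a switch whose probability of "jumping" rises with the drive current, the way an MTJ's switching probability does. The sigmoid and its parameters are illustrative stand-ins, not measured device physics.

    # A toy probabilistic switch: the chance of firing rises with drive current,
    # loosely analogous to an MTJ's switching probability. Parameters are
    # illustrative stand-ins, not measured device physics.
    import math
    import random

    def switch_probability(current_ma, threshold_ma=1.0, steepness=5.0):
        return 1.0 / (1.0 + math.exp(-steepness * (current_ma - threshold_ma)))

    def stochastic_spike(current_ma):
        return random.random() < switch_probability(current_ma)

    for current in (0.5, 1.0, 1.5):
        rate = sum(stochastic_spike(current) for _ in range(10_000)) / 10_000
        print(f"{current} mA -> spike rate ~{rate:.2f}")   # roughly 0.08, 0.50, 0.92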

The case against neuromorphic

It should probably come as no shock to computer engineers when a neurologist or biotechnician downplays any neuromorphic computing model as stopping well short of simulating real brain activity. Some go so far as to say that, to the extent that the components of a neuromorphic system are incomplete, any model of computing it produces is entirely fantastic, or something else beginning with "f." Maybe it's effective at identifying the letter "F," but it won't work the way a mind works.

Dr. Gerard Marx, CEO of Jerusalem-based research firm MX Biotech Ltd., suggests that the prevailing view of the brain as a kind of infinite tropical rain forest, where trees of neurons swing from synapses in the open breeze, is a load of hogwash. Missing from any such model, Marx points out, is a substance called the neural extracellular matrix (nECM), which is not a gelatinous, neutral sea but rather an active agent in the brain's recall process.

With plenty of evidence to back him up, Marx postulates that memory in the biological brain requires neurons, the nECM, plus a variety of dopants such as neurotransmitters (NT) released into the nECM. Electrochemical processes take place among these three elements, and the resulting chemical reactions have not only been recorded, but are perceived as closely aligned with emotion. The physiological effects associated with recalling a memory (e.g., raised blood pressure, heavier breathing) trigger psychic effects (excitement, fear, anxiety, joy), which in turn have a reinforcing effect on the memory itself. Writes Marx with his colleague Chaim Gilon [PDF]:

We find ourselves in the inverse position of the boy who cried: "The emperor has no clothes!" as we exclaim: "There are no 'naked neurons'!" They are swaddled in nECM, which is multi-functional, as it provides structural support and is a hydrogel through which liquids and small molecules diffuse. It also performs as a "memory material," as outlined by the tripartite mechanism which identifies NTs as encoders of emotions.

This is not to say neuromorphic computing can't yield benefits. But if the theory is that it will yield greater benefits without taking all the other parts of the brain into account, then Marx's stance is that its practitioners should stop pretending to be brain surgeons.

Building a neuromorphic device may inform us about how the mind works, or at least reveal certain ways in which it doesn't. Yet the actual goal of such an endeavor, at least today, is to produce a mechanism that can "learn" from its inputs in ways that a digital computer component may not be able to. The payoff could be an entirely new class of machine capable of being "trained" to recognize patterns using far, far fewer inputs than a digital neural network would require — consuming considerably less energy, and thus potentially breaking AI's present dependency upon the public cloud.
