
How neuromorphic 'brain chips' will begin the next era in computing

Neuromorphic computers will change what computers can do and, perhaps more importantly, where they can do it.
By Graham Templeton

IBM recently released new details about the efficiency of its TrueNorth processors, which sport a fundamentally novel design that cribs from the structure of the human brain. Rather than lining up billions of digital transistors to work in sequence, TrueNorth chips have a million computer 'neurons' that work in parallel across 256 million inter-neuron connections ('synapses'). According to these reports, the approach is paying incredible dividends in performance and, more importantly, power efficiency. Make no mistake: Neuromorphic computing is going to change the world, and it's going to do it more quickly than you might imagine.
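For a sense of scale, those per-chip figures work out to an average fan-out of 256 synapses per neuron. Here's a quick back-of-the-envelope sketch in Python; the per-core breakdown in the comments (4,096 cores of 256 neurons each, with a 256x256 crossbar per core) matches IBM's published description of TrueNorth, but treat it as illustrative rather than a spec sheet.

```python
# Back-of-the-envelope arithmetic from the per-chip figures quoted above.
neurons = 1_000_000          # 'neurons' per TrueNorth chip
synapses = 256_000_000       # inter-neuron connections ('synapses') per chip

print(synapses / neurons)    # -> 256.0 synapses per neuron, on average

# Assumed per-core breakdown (IBM has described 4,096 cores of 256 neurons
# each, with a 256x256 synaptic crossbar per core) -- illustrative only.
cores = 4096
neurons_per_core = 256
print(cores * neurons_per_core)        # -> 1,048,576 (~1 million neurons)
print(cores * neurons_per_core ** 2)   # -> 268,435,456 (~256 million synapses)
```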

The development of neuromorphic computers is thematically pretty similar to the development of digital computers: First figure out the utility of an operation (say, computing firing trajectories during wartime), then develop a crude way of doing it with the tools you already have available (say, rooms full of people doing manual arithmetic), then invent a machine to automate this process in a much more efficient way. Part of the reason a digital computer is more efficient than a human being is that its transistors can fire with incredible speed -- but so can our neurons. The bigger issue is that a digital computer is designed from the ground up to do those sorts of mathematical operations; from a certain perspective, it's a bit crazy that we ever tried to do efficient mathematical work on a computer like the human brain.

Shiny new GPUs may be fast, but they're also incredibly inefficient when compared with coming neuromorphic competitors.

Similarly, we will eventually look back at the attempt to do learning operations with digital chips, including GPUs, as inherently unwise or even silly. The much more reasonable approach is to design a thinking machine suited to such operations from the most basic hardware level, as naturally predisposed to machine learning as a Celeron chip is to multiplication. This could not only greatly increase the speed of the processor for these tasks, but dramatically reduce the energy consumed to complete each one. That's what IBM has in the works, and it's much further along than many expect.

When tasked with classifying images (a well understood machine learning task), a TrueNorth chip can churn through between 1,200 and 2,600 frames every second, and do it while using between 25 and 275 mW. This works out to an effective efficiency of more than 6,000 frames per second per watt. There's no listed frames/second/watt figure for conventional GPUs running the same classification algorithm on the same dataset. But considering modern graphics cards might draw 200 or even 250 watts all on their own, it's hard not to imagine a host of low-power, high-performance applications.
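To see how that headline efficiency figure falls out of the numbers, just divide throughput by power draw. A minimal sketch, assuming an illustrative pairing of 1,200 frames per second with roughly 200 mW (the article quotes ranges rather than matched operating points):

```python
def fps_per_watt(frames_per_second, watts):
    """Efficiency = throughput / power draw, in frames per second per watt."""
    return frames_per_second / watts

# Illustrative pairing: 1,200 frames/s at roughly 0.2 W (200 mW).
print(fps_per_watt(1200, 0.200))   # -> 6000.0

# For contrast: a hypothetical 250 W graphics card would need to classify
# 1.5 million frames per second to match that efficiency.
print(6000 * 250)                  # -> 1,500,000
```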

Examples of the output of 12 different analytical filters assigned to example images.

Most significantly, there is the incredible expense of modern machine learning. Companies like Apple, Facebook, and Google can only deliver their advanced services by running expensive arrays of supercomputers designed to execute machine learning algorithms as efficiently as possible. That specialization comes at a crushing cost. Even leaving that aside, electricity alone becomes a major expense when you're running that many computers at or near capacity, 24 hours a day. Just ask Bitcoin miners.

Lee Sedol (right) sits across from a human surrogate making AlphaGo's moves. AlphaGo's victory in the game of Go was only possible thanks to advanced machine learning of the sort TrueNorth does natively.

So, early, expensive neuromorphic hardware will likely be a major boon to service providers. We can only hope this will be passed along to consumers in the form of improved performance and wide-ranging savings. But the speed and efficiency offered by neuromorphic chips won't stop there -- reducing power draw by several orders of magnitude will allow such tasks to come out of the cloud entirely.

Want a Babel-fish-like wearable that auto-translates any foreign speech in your vicinity, without the necessity of an always-on internet connection? What about a fitness tracker that knows your every move without ever having to upload that information to a separate computer for analysis? A self-driving car that can go off the grid, or an interplanetary rover that can make unforeseen decisions while out of communication range and running on a tiny nuclear battery?

Neuromorphic chips are currently the most likely way of actually getting such jobs done. Right now there's no indication conventional hardware could succeed in its place.

The NS16e.

IBM says it has a new rig called the NS16e, an array of 16 TrueNorth processors totaling about four billion synaptic connections -- nothing compared with a human brain, but seemingly more than enough to tackle modern machine learning problems. These 16 chips can talk to one another thanks to passive message-passing connections between them, broadly mirroring the function of the corpus callosum that connects the two hemispheres of the brain, though multidirectionally.
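The four-billion figure follows directly from the per-chip numbers above. A quick sketch; the human-brain comparison in the comments uses the commonly cited ballpark of roughly 100 trillion synapses, which is a rough outside estimate rather than anything from IBM:

```python
# Scaling the per-chip figures up to the 16-chip NS16e board.
chips = 16
synapses_per_chip = 256_000_000
neurons_per_chip = 1_000_000

print(chips * synapses_per_chip)   # -> 4,096,000,000 (~4 billion synapses)
print(chips * neurons_per_chip)    # -> 16,000,000 neurons

# Commonly cited ballpark for a human brain: ~100 trillion synapses,
# so the NS16e is still roughly 25,000x smaller on that axis (rough estimate).
print(100_000_000_000_000 / (chips * synapses_per_chip))  # -> ~24,414
```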

But IBM isn't the only one lunging for this particular finish line. There are the requisite rumors of a Google research project. More notably, Qualcomm has claimed to have neuromorphic capacity in some of its upcoming Snapdragon processors, though it was always a bit unclear how that would work, and there hasn't been much chatter on that front in recent times. Private investment in this space has been tentative, with most of the progress made at IBM coming thanks to an infusion of cash from DARPA.

Here's another DARPA invention: super small and efficient GPS tracking devices.

Yes, DARPA. After all, soldiers are constantly tromping around areas of the world with poor data coverage and trying to communicate with people who speak highly localized languages. The traditional means of tackling this problem is called Natural Language Processing (NLP), and right now soldiers in the field are mostly doing data retrieval for centralized NLP analysis. With neuromorphic computing available, their translators could begin breaking down a novel dialect right away, improving translation in real time.

Soldiers aren't the only ones with a need for rugged portability, however. In particular, it seems the quickly oncoming wave of smart eyewear, from Google Glass 2.0 to Snapchat's social media Spectacles, can only realize its true potential by removing distant data servers from the workflow. We might imagine a pair of glasses that layers a helpful augmented reality HUD over the world in real time, highlighting useful elements for you. That sort of functionality will be difficult to roll out to hundreds of millions of electronics consumers if it requires constant, high-throughput data streaming to some suburb of San Francisco.

Elon Musk's global internet will be cool if it ever materializes, but even it won't be fast enough to let everyone stream everything all the time.

The issue isn't just that such a broad, constant data stream will kill our batteries -- though it will -- but that performance, cost, and in particular privacy will all be fundamentally improved by doing these complex tasks locally. All things being equal, the only real downside is to the service provider, which would lose the chance to use or sell the unique personal insights it gains by handling all your requests.

Wearable computing, augmented reality, sensory assistance -- all these emerging trends require the application of cutting-edge machine learning algorithms. Right now at IBM and elsewhere, we're seeing the emergence of the technology most likely to let those algorithms spread fast enough and far enough to fully realize all of that potential.

Check out our ExtremeTech Explains series for more in-depth coverage of today’s hottest tech topics.
