
Intel unveils the Nervana Neural Network Processor

The chipmaker explains the architecture behind the new AI-focused processor, formerly known as Lake Crest, and names Facebook as a collaborator as it brings the Nervana NNP to market.
Written by Stephanie Condon, Senior Writer
Intel's Nervana chip board. (Image: Intel)

Intel on Tuesday is taking the wraps off the Nervana Neural Network Processor (NNP), formerly known as "Lake Crest," a chip three years in the making that's designed expressly for AI and deep learning. Along with explaining its unique architecture, Intel announced that Facebook has been a close collaborator as it prepares to bring the Nervana NNP to market. The chipmaker also laid out the beginnings of a product roadmap.

While other platforms are available for deep learning applications, this is the first commercially available chip of its kind, built from the ground up for AI, Naveen Rao, corporate VP of Intel's Artificial Intelligence Products Group, told ZDNet. It's rare for Intel to deliver a whole new class of products, he said, so the Nervana NNP family demonstrates Intel's commitment to the AI space.

AI is revolutionizing computing, turning a computer into a "data inference machine," Rao said. "We're going to look back in 10 years and see that this was the inflection point."

Intel plans to deliver silicon to a couple of close collaborators this year, including Facebook. Intel collaborates closely with large customers like Facebook to determine the right set of features they need, Rao explained. Early next year, customers will be able to build solutions and deploy them via the Nervana Cloud, a platform-as-a-service (PaaS) powered by Nervana technology. Alternatively, they could use the Nervana Deep Learning appliance, which is effectively the Nervana Cloud on premises.

In a blog post, Intel CEO Brian Krzanich said the Nervana NNP will enable companies "to develop entirely new classes of AI applications that maximize the amount of data processed and enable customers to find greater insights -- transforming their businesses."

For example, social media companies like Facebook will be able to deliver more personalized experiences to users and more targeted reach to advertisers, Krzanich noted. He cited other use cases such as early diagnostic tools in the health care industry, improvements in weather predictions and advances in autonomous driving.

With multiple generations of Nervana NNP products in the pipeline, Intel says it is on track to meet or even exceed its 2016 promise of a 100-fold increase in deep learning training performance by 2020. Intel plans to release new Nervana products on a yearly cadence, or possibly faster.

"This is a new space, and iteration and evolution are really important in this space," Rao said.

The Nervana NNP has three distinctive architectural characteristics, which give it the flexibility to support deep learning primitives while making core hardware components as efficient as possible.

The first is a different kind of memory architecture that allows for better utilization of the computational resources on a chip. "In a general-purpose processor, we don't know where the data's coming from, what we're going to do with it, where we're going to write it out to," Rao explained. "In AI, it turns out you know up front."

For this reason, the Intel Nervana NNP does not have a standard cache hierarchy. On-chip memory is managed by software directly. This ultimately means the chip achieves faster training times for deep learning models.

Next, Rao said, the Nervana NNP uses a newly developed numeric format, which Intel calls Flexpoint, to achieve higher throughput. General-purpose chips typically rely on models built with continuous numbers to reduce data "noise." However, since neural networks are more tolerant of data noise -- and it can even help in deep learning training -- "we can get away with many fewer bits of representation for each computation," Rao explained.

Third, the Intel Nervana NNP is designed with high-speed on- and off-chip interconnects that enable massive bidirectional data transfer. In effect, multiple chips can act as one large virtual chip, allowing larger neural networks to train faster.

With the addition of the Nervana NNP, Intel now offers chips for a full spectrum of AI use cases, Rao said. It complements Intel's other products used for AI applications, including Xeon Scalable processors and FPGAs.

"We look at this as a portfolio approach, and we're uniquely positioned to take that approach," he said. "We can really find the best solution for our customer, not just a one-size-fits-all kind of model."

For instance, if a new customer were at the beginning of their "AI journey," Rao said, "we have the tools to get them up and running quickly on a CPU, which they probably already have." As their needs grow, "we have that growth path for them," he continued, calling the Nervana NNP the "ultimate high performance solution" for deep learning.

Previous and related coverage

    AI training needs a new chip architecture: Intel

    Rather than strip down one of its existing architectures to make a chip optimized for AI, Intel went out and bought one.

    Intel announces self-learning AI chip Loihi

    Intel said its new AI test chip combines training and inference, meaning autonomous machines can adapt to learnings from their environment in real time instead of waiting for updates from the cloud.

    Intel has invested more than $1 billion in AI companies

    Intel CEO Brian Krzanich penned an op-ed Monday that touts the company's AI investments.
