Intel And Baidu Collaborate On Neural Network Processor For Training


At Baidu Create, an AI developer conference held in Beijing last week, Intel announced that it is working with Baidu on AI hardware and software platforms.

Intel and Baidu have a long history of cooperation, and Intel offers a range of solutions that help Baidu develop cutting-edge AI technologies.

On the hardware front, Baidu is collaborating with Intel on the research and development of the Nervana Neural Network Processor for Training (NNP-T), a hardware accelerator optimized for deep learning training. Announced in 2017, the NNP-T 1000 includes processor cores based on Intel’s Ice Lake architecture and is designed to speed up neural network training.

Apart from the collaboration on NNP-T, Baidu is also using Intel Xeon Scalable processors to power the infrastructure running Baidu Brain, an AI platform that provides intelligence to internal and external applications. The platform exposes more than 100 AI services, including natural language processing, facial recognition, and voice processing and recognition. Intel and Baidu have also worked on optimizing hardware based on 2nd Generation Intel Xeon Scalable processors to accelerate workloads such as speech synthesis, natural language processing, and visual applications.

The NNP-T 1000 is expected to accelerate Baidu’s AI efforts by complementing the Xeon Scalable processor infrastructure and significantly speeding up deep learning training jobs.

Baidu is also working on optimizing its deep learning framework, PaddlePaddle (PArallel Distributed Deep LEarning), for the Intel NNP-T. Baidu claims PaddlePaddle is the first deep learning framework developed in China. It is positioned as an alternative to TensorFlow, Google’s open source machine learning framework.
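
PaddlePaddle’s programming model will look familiar to developers coming from TensorFlow or PyTorch. As a rough illustration only (not tied to NNP-T or to Baidu’s production code), the following minimal training loop is written against PaddlePaddle’s 2.x imperative API; the layer sizes and random stand-in data are placeholder assumptions.

```python
import paddle
from paddle import nn, optimizer

# Minimal sketch: a small classifier trained with PaddlePaddle's imperative API.
# Layer sizes and the random stand-in data are illustrative placeholders.
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

loss_fn = nn.CrossEntropyLoss()
opt = optimizer.Adam(learning_rate=1e-3, parameters=model.parameters())

x = paddle.randn([32, 784])      # 32 flattened "images"
y = paddle.randint(0, 10, [32])  # integer class labels

for step in range(10):
    logits = model(x)
    loss = loss_fn(logits, y)
    loss.backward()
    opt.step()
    opt.clear_grad()
```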

According to Intel, PaddlePaddle was also the first framework to integrate Vector Neural Network Instructions (VNNI). These instructions deliver significant performance improvements for image classification, speech recognition, language translation, object detection, and other important functions as part of Intel® Deep Learning Boost (Intel® DL Boost), a group of acceleration features introduced in 2nd Generation Intel Xeon Scalable processors.
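
VNNI is aimed at low-precision (INT8) inference: it fuses the multiply-accumulate sequence that quantized layers rely on into a single instruction that multiplies unsigned 8-bit activations by signed 8-bit weights and accumulates into 32-bit integers. The NumPy snippet below is only a conceptual sketch of that arithmetic, not the instruction itself; the array sizes and values are arbitrary.

```python
import numpy as np

# Conceptual sketch of the INT8 dot product VNNI accelerates in hardware:
# unsigned 8-bit activations x signed 8-bit weights, accumulated in int32.
activations = np.random.randint(0, 256, size=64, dtype=np.uint8)  # u8 inputs
weights = np.random.randint(-128, 128, size=64, dtype=np.int8)    # s8 weights

# Widen to int32 before multiplying so products and the running sum cannot
# overflow, mirroring the instruction's 32-bit accumulators.
acc = np.sum(activations.astype(np.int32) * weights.astype(np.int32))
print(acc)
```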

To increase data security, Intel and Baidu have been working together on MesaTEE, a memory-safe Function-as-a-Service (FaaS) computing framework. Built on Intel Software Guard Extensions (SGX), MesaTEE enables security-sensitive services such as banking, autonomous driving, and healthcare to process their data more securely on critical platforms, including public clouds and blockchains.

Intel and Baidu are investing in technologies to protect AI algorithms for cloud and edge computing devices.

Intel is moving fast to capture the AI market by building processors that accelerate both training and inference. The combination of Intel Xeon Scalable processors and the NNP-T is positioned to deliver strong performance in training deep neural networks. Intel has also built the NNP-I 1000, a discrete accelerator designed specifically for AI model inference at scale.
