NVIDIA GPUs now work with Arm processors, and the open-source Magnum IO suite accelerates data workloads for AI

NVIDIA expands its ecosystem, flexes its software muscle, and places a bet on new processors, workloads, and use cases. The developments paint a new picture of the AI chip race in the cloud and at the edge.
Written by George Anadiotis, Contributor

NVIDIA made a number of important announcements today at SC19. First, NVIDIA introduced a reference design platform that enables companies to quickly build GPU-accelerated Arm-based servers, driving a new era of high performance computing for a growing range of applications in science and industry.

Second, it introduced NVIDIA Magnum IO, a suite of software to help data scientists and AI and high performance computing researchers process massive amounts of data in minutes rather than hours. Third, NVIDIA announced the availability of a new kind of GPU-accelerated supercomputer in the cloud, on Microsoft Azure.

Let's unpack the announcements and what they mean for NVIDIA, and the data and compute ecosystem at large.

High performance, low power, strong ecosystem

Lately, Intel's near-monopoly in the data center has come under threat. Besides AMD, which has managed to stick around as an alternative to Intel CPUs, there is new competition from Arm. Arm processors have up to now mostly been used in mobile phones and edge compute scenarios, owing to their low power consumption.

Although Arm processor performance may not be on par with Intel's at this point, analysts note that their frugal power needs make them an attractive option for the data center, too. AWS was the first to play on that strength, adding Arm-based instances to its lineup in 2018. With its newly announced support for Arm CPUs, NVIDIA achieves a number of things.

First, NVIDIA future-proofs itself, and strengthens its ecosystem and its software platform. By embracing Arm, NVIDIA lets cloud vendors and data center managers everywhere know that they can expect NVIDIA GPUs to run seamlessly alongside whatever CPU they may use. Jensen Huang, NVIDIA CEO, made that clear:

"There is a renaissance in high performance computing. Breakthroughs in machine learning and AI are redefining scientific methods and enabling exciting opportunities for new architectures. Bringing NVIDIA GPUs to Arm opens the floodgates for innovators to create systems for growing new applications from hyperscale-cloud to exascale supercomputing and beyond." 

NVIDIA is flexing its software muscle, and expanding its ecosystem with Magnum IO and support for Arm processors. Image: NVIDIA

At the same time, NVIDIA brings Arm and its ecosystem partners, including Ampere, Fujitsu and Marvell, onboard with the NVIDIA software ecosystem and its CUDA-X software platform. As we have noted in the past, NVIDIA's lead in the AI chip market is sought after by innovative startups. Startups may come up with new hardware designs, but NVIDIA's software stack will be hard to match. For Arm, a partner like NVIDIA certainly makes prospects for adoption look much better.

This ecosystem aspect was highlighted by NVIDIA as well as Arm. Going beyond Arm, however, NVIDIA also referred to the HPC (High Performance Computing) ecosystem more broadly. In addition to making its own software compatible with Arm, NVIDIA is working closely with its broad ecosystem of developers to bring GPU acceleration to Arm for HPC applications such as GROMACS, LAMMPS, MILC, NAMD, Quantum Espresso and Relion.
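
To make the portability point concrete: CUDA source code does not depend on the host CPU architecture, so a program like the minimal sketch below should compile unchanged with nvcc on either an x86 or an Arm (aarch64) host, assuming a CUDA toolkit built for that platform is installed. This is an illustrative example, not code from any of the applications named above.

    #include <cuda_runtime.h>
    #include <stdio.h>

    /* A trivial kernel: the source is identical regardless of host CPU */
    __global__ void vecAdd(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main(void) {
        const int n = 1 << 20;               /* arbitrary problem size */
        size_t bytes = n * sizeof(float);
        float *a, *b, *c;

        /* Unified memory keeps the host code the same on x86 and aarch64 */
        cudaMallocManaged(&a, bytes);
        cudaMallocManaged(&b, bytes);
        cudaMallocManaged(&c, bytes);
        for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

        vecAdd<<<(n + 255) / 256, 256>>>(a, b, c, n);
        cudaDeviceSynchronize();

        printf("c[0] = %.1f\n", c[0]);       /* expect 3.0 */
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }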

NVIDIA and its HPC-application ecosystem partners have compiled extensive code to bring GPU acceleration to their applications on the Arm platform. To enable the Arm ecosystem, NVIDIA collaborated with leading Linux distributors Canonical, Red Hat and SUSE, as well as the industry's leading providers of essential HPC tools.

Innovation at Arm's length: from the edge to the data center, and back again

But there's more going on here. Huang has recently pointed towards the edge as a key goal for NVIDIA. In the recent 2019 Q3 earnings call with analysts, following NVIDIA's introduction of its EGX compute platform for edge AI, Huang was adamant:

"This quarter, we have laid the foundation for where AI will ultimately make the greatest impact. We extended our reach beyond the cloud, to the edge, where GPU-accelerated 5G, AI and IoT will revolutionize the world's largest industries. We see strong data center growth ahead, driven by the rise of conversational AI and inference."

Indeed, NVIDIA foresees the growth of workloads at the edge, and wants to be ready for it. Besides another recent ecosystem expansion targeting 5G, with Ericsson, Red Hat, and Microsoft partnerships, NVIDIA has also introduced its Jetson line of SoMs (systems-on-module combining CPU, GPU, PMIC, DRAM, and flash storage).

These SoMs are well suited for edge applications, but they are not the only game in town. Arm, too, has its own line of processors geared towards AI workloads at the edge. The fact that Arm and NVIDIA now play well together means we could also see combinations of Arm and NVIDIA processors in devices and applications deployed at the edge.

GPU-powered data analytics and AI workloads are getting a massive boost from NVIDIA's newly introduced Magnum IO software stack. Image: NVIDIA.

The Magnum IO announcement plays on similar dynamics. At the heart of Magnum IO is GPUDirect, which NVIDIA says provides a path for data to bypass CPUs and travel on "open highways" offered by GPUs, storage and networking devices. Magnum IO is compatible with a wide range of communications interconnects and APIs, including NVIDIA NVLink and NCCL, as well as OpenMPI and UCX. GPUDirect itself is composed of peer-to-peer and RDMA elements.
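
To give a flavor of what the peer-to-peer element means at the programming level, here is a minimal CUDA C sketch that copies a buffer directly from one GPU to another without staging it through host memory. The device IDs and buffer size are arbitrary placeholders; this illustrates the general GPUDirect peer-to-peer mechanism, not Magnum IO's own code.

    #include <cuda_runtime.h>
    #include <stdio.h>

    int main(void) {
        const size_t N = 1 << 20;           /* 1M floats -- arbitrary size */
        float *buf0 = NULL, *buf1 = NULL;
        int canAccess = 0;

        /* Check whether GPU 0 can address GPU 1's memory directly */
        cudaDeviceCanAccessPeer(&canAccess, 0, 1);
        if (!canAccess) {
            printf("P2P not supported between GPU 0 and GPU 1\n");
            return 1;
        }

        cudaSetDevice(0);
        cudaDeviceEnablePeerAccess(1, 0);   /* let GPU 0 access GPU 1 */
        cudaMalloc(&buf0, N * sizeof(float));

        cudaSetDevice(1);
        cudaMalloc(&buf1, N * sizeof(float));

        /* Copy GPU 0 -> GPU 1 over NVLink/PCIe, bypassing CPU and host RAM */
        cudaMemcpyPeer(buf1, 1, buf0, 0, N * sizeof(float));
        cudaDeviceSynchronize();

        cudaFree(buf1);
        cudaSetDevice(0);
        cudaFree(buf0);
        return 0;
    }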

NVIDIA noted that Magnum IO is optimized to eliminate storage and input/output bottlenecks, and delivers up to 20x faster data processing for multi-server, multi-GPU computing nodes when working with massive datasets to carry out complex financial analysis, climate modeling and other HPC workloads.
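
For the multi-GPU side of that claim, the sketch below shows the kind of collective operation such nodes run constantly: an all-reduce across local GPUs using NCCL, one of the libraries Magnum IO works with. The device count, buffer size, and placeholder data are assumptions for illustration, and error checking is omitted for brevity.

    #include <nccl.h>
    #include <cuda_runtime.h>

    int main(void) {
        const int nDev = 2;                 /* assume two local GPUs */
        const size_t N = 1 << 20;
        int devs[2] = {0, 1};
        ncclComm_t comms[2];
        float *sendbuf[2], *recvbuf[2];
        cudaStream_t streams[2];

        /* One NCCL communicator per local GPU, all in a single process */
        ncclCommInitAll(comms, nDev, devs);

        for (int i = 0; i < nDev; ++i) {
            cudaSetDevice(i);
            cudaMalloc(&sendbuf[i], N * sizeof(float));
            cudaMalloc(&recvbuf[i], N * sizeof(float));
            cudaMemset(sendbuf[i], 0, N * sizeof(float)); /* placeholder data */
            cudaStreamCreate(&streams[i]);
        }

        /* Sum each element across all GPUs; NCCL routes traffic over NVLink
           or PCIe locally, and over InfiniBand with GPUDirect RDMA between
           nodes */
        ncclGroupStart();
        for (int i = 0; i < nDev; ++i)
            ncclAllReduce(sendbuf[i], recvbuf[i], N, ncclFloat, ncclSum,
                          comms[i], streams[i]);
        ncclGroupEnd();

        for (int i = 0; i < nDev; ++i) {
            cudaSetDevice(i);
            cudaStreamSynchronize(streams[i]);
            cudaFree(sendbuf[i]);
            cudaFree(recvbuf[i]);
            cudaStreamDestroy(streams[i]);
            ncclCommDestroy(comms[i]);
        }
        return 0;
    }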

Besides being a huge win for everyone whose workload will be boosted by Magnum IO, Magnum also strengthens NVIDIA's ecosystem, and broadens its lead in terms of software. NVIDIA has developed Magnum IO in close collaboration with industry leaders in networking and storage, including DataDirect Networks, Excelero, IBM, Mellanox and WekaIO.

As far as software goes, just ask yourself how easy it will be for AI chip upstarts not only to develop something like Magnum IO, but to get other hardware vendors on board the way NVIDIA has. When asked about the ecosystem aspect of Magnum IO, and whether it's open to other vendors, Ian Buck, NVIDIA VP and GM of Accelerated Computing, confirmed that it is.

But that does not mean NVIDIA is not in the driver's seat here: upstarts wanting to challenge its position in the data center will either have to play along with Magnum IO, or reinvent it. Magnum IO may be open source, but it's not without leadership.

Innovation again, and lots of it, in the AI hardware market

Be that as it may, the performance improvement Magnum IO brings seems massive, and users running data science and AI workloads on NVIDIA GPUs, whether in their own data centers or in the cloud, should take note. Case in point: NVIDIA's final announcement, on the largest deployments of Microsoft Azure's new NDv2 instances.

NVIDIA says NDv2 ranks among the world's fastest supercomputers, offering up to 800 NVIDIA V100 Tensor Core GPUs interconnected on a single Mellanox InfiniBand backend network. NDv2 enables customers to rent an entire AI supercomputer on demand from their desk, and match the capabilities of large-scale, on-premises supercomputers that can take months to deploy.

SC19 certainly was a fitting venue for these announcements. The applications, and implications, however, go well beyond HPC. As Peter DeSantis, AWS VP of infrastructure, put it recently, we're seeing innovation again, and lots of it, in the AI hardware market.
