
Intel blasts back at Nvidia, saying Xeon dominates 97% of A.I. servers

Diane M. Bryant, Intel executive vice president and general manager of its Data Center Group, speaks with Jing Wang of Baidu at the 2016 Intel Developer Forum in San Francisco on Wednesday, Aug. 17, 2016.
Image Credit: Intel



The tech world would be boring without the occasional war of words. Graphics chip maker Nvidia has often said that it has made big inroads into data centers with its GPGPU computing initiatives, which use graphics chips to process artificial intelligence applications such as deep learning neural networks.

But Jason Waxman, corporate vice president and general manager of the Data Center Solutions Group at Intel, said in a blog post today that Intel’s Xeon chips dominate the market for artificial intelligence servers, with a 97 percent share of A.I. hardware.

“While there’s been much talk about the value of GPUs for machine learning, the fact is that fewer than 3 percent of all servers deployed for machine learning last year used a GPU,” Waxman said.

Still, he promised that Intel will stay at the leading edge of computing, even as rivals question whether the company is doing enough to foster the big A.I. advances needed for technologies such as self-driving cars.


“Intel is in the leading position to bring us the hardware and the architectures to foster this open community that we really do need to make progress,” said Pedro Domingos, professor of computer science and engineering at the University of Washington, in a statement.

“Our industry needs breakthrough compute capability — capability that is both scalable and open — to enable innovation across the broad developer community,” Waxman said. “Last week at the Intel Developer Forum (IDF), we provided a glimpse into how we plan to deliver the industry-leading platform for AI.”

Nvidia launched a 15-billion-transistor chip aimed at A.I. applications earlier this year; Intel responded by disclosing that its next Xeon Phi processor, code-named Knights Mill, will debut next year with a focus on A.I. Intel also said it was buying Nervana Systems (reportedly for more than $350 million) to beef up its A.I. expertise.

Waxman said, “A.I. is nascent today, but we believe the clear value and opportunity A.I. brings to the world make it instrumental for tomorrow’s data centers. Intel’s leadership will be critical as a catalyst for innovation to broaden the reach of A.I. While there’s been much talk about the value of GPUs for machine learning, the fact is that fewer than 3 percent of all servers deployed for machine learning last year used a GPU.”

Waxman also took a swing at Nvidia’s recent effort to rebut Intel’s claims about benchmarks for A.I. systems.

He added, “It’s completely understandable why this data, coupled with Intel’s history of successfully bringing new, advanced technologies to market and our recent sizable investments, would concern our competitors. However, arguing over publicly available performance benchmarks is a waste of time. It’s Intel’s practice to base performance claims on the latest publicly available information at the time the claim is published, and we stand by our data.”

And, Waxman added, “As data sets continue to scale, Intel’s strengths will shine. The scope, scale and velocity of our industry underscore the importance of broad, open access to AI innovations. And the industry clearly agrees. Consider [this testimonial from] Baidu’s Jing Wang: ‘The increased memory size Intel Xeon Phi provides makes it easier for us to train our models efficiently.'”

We’ve asked Nvidia for a response.
