Nvidia’s Titan GPUs get optimized software for machine learning

Nvidia's Titan X chip has 8 billion transistors.
Image Credit: Nvidia

Nvidia released a tool today that’s designed to help developers and data scientists build and test machine learning systems on their personal computers before moving to production on a more powerful machine.

The Nvidia GPU Cloud offers researchers and developers software containers designed to deliver the fastest execution environment for training machine learning systems on the chipmaker's silicon. Those containers were already available for use with the machine learning-oriented DGX-1 and DGX Station computers, along with cloud instances powered by Nvidia Volta chips running on Amazon Web Services.

But now customers can use them on consumer hardware — in this case, Nvidia’s Titan series of chips. Those high-end consumer GPUs won’t provide as much firepower as a massive machine learning-oriented computer, but they’re less costly and more readily available.

Because the Nvidia GPU Cloud software is all kept inside software containers, it’s possible for developers to take the systems that they’ve trained on a personal machine and more easily deploy them on one of Nvidia’s larger-scale AI machines, or in the cloud.
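To make that workflow concrete, here is a minimal sketch of the sort of device-agnostic training script this setup targets, assuming PyTorch, one of the frameworks Nvidia packages in these containers. The model and data below are hypothetical placeholders; the point is that the script only asks for a CUDA device, so the same code can run inside the container on a Titan-equipped desktop or on a larger machine without changes.

```python
import torch
import torch.nn as nn

# Use whatever GPU the container exposes (a Titan locally, Volta parts on
# a DGX or cloud instance), falling back to the CPU if none is present.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A small classifier standing in for a real model.
model = nn.Sequential(
    nn.Linear(784, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
).to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Random tensors standing in for real training data.
x = torch.randn(64, 784, device=device)
y = torch.randint(0, 10, (64,), device=device)

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

print(f"final loss on {device}: {loss.item():.4f}")
```

Because nothing in the script is tied to a particular GPU, moving from local testing to production is largely a matter of running the same container image on the bigger machine.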

All told, the move is meant to help people get machine learning projects off the ground and iterate faster on systems that could solve business problems and drive the field of AI forward.

While tech giants have no trouble throwing dozens, if not hundreds, of GPUs at a single machine learning problem, developers and researchers frequently begin testing their systems on smaller personal machines with far less firepower. This announcement should give them a bit of a speed boost.

The news comes as part of the Conference on Neural Information Processing Systems (NIPS), which is taking place this week in Long Beach, California. That show brings some of the brightest minds in AI together to share key developments from their research.
