
AMD Claims It Has Enough Radeon VIIs to Meet Demand

AMD is claiming it will have enough GPUs on-hand to meet Radeon VII demand, despite rumors to the contrary, and is touting the chip's machine learning performance overall.
By Joel Hruska

AMD's Radeon VII announcement at CES earlier this month was the major surprise of the event. While the company had previously launched a 7nm Vega-based GPU design, AMD had given every impression that the GPU was intended strictly for the professional market, where it would compete as an AI and machine learning card against Nvidia's Tesla products. After the announcement, rumors began to spread that the 7nm Vega would be a short-lived product, with very limited availability -- supposedly at or around 5,000 GPUs. Rumors also suggested that AMD would lose money on every single card, thanks to the GPU's use of an expensive 16GB HBM2 buffer.

Now, AMD has gone on record to address some of these rumors. The company's statement reads: "While we don't report on production numbers externally, we will have products available via AIB partners and AMD.com at launch on Feb. 7, and we expect Radeon VII supply to meet demand from gamers."

Rumors have also implied that no custom AIB boards are coming at all, though reference designs from some third parties will apparently be available.

As for the question of the GPU's profitability, it's not clear how much AMD is paying for HBM2 these days. Reports from a year ago suggest that a 4-Hi stack of HBM2 (Radeon VII uses four of these) cost about $65, while 8-Hi stacks from Samsung were supposedly ~$120. Clocks were not mentioned in that reporting, though we can assume higher frequencies are more expensive. The interposer reportedly added an additional $25 or so to the final bill. GDDR6 is also currently running ~70 percent more expensive than GDDR5, so trying to figure out how AMD's memory cost compares to Nvidia's is the very definition of a moving target. If we use the older figures from last year -- which almost certainly aren't accurate, given how RAM prices have changed in the past 12 months -- we'd wind up with 16GB of HBM2 running $285 with the interposer cost factored in as well.
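The back-of-the-envelope math behind that $285 figure is simple enough to sketch; the stack and interposer prices below are the year-old reported estimates cited above, not confirmed AMD costs:

```python
# Rough Radeon VII memory-cost estimate using the (likely outdated)
# figures from last year's reporting: ~$65 per 4-Hi HBM2 stack and
# ~$25 for the interposer. These are reported estimates, not AMD's
# actual bill of materials.
HBM2_STACK_PRICE = 65    # USD per 4GB (4-Hi) stack
STACK_COUNT = 4          # Radeon VII uses four stacks (16GB total)
INTERPOSER_PRICE = 25    # USD, reported estimate

total = HBM2_STACK_PRICE * STACK_COUNT + INTERPOSER_PRICE
print(f"Estimated HBM2 + interposer cost: ${total}")  # $285
```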

Radeon VII performance projections from AMD.

16GB of hypothetical GDDR6 (using 3DCenter.org's prices from earlier this year) clocked at 14Gbps comes in at $187. Then again, AMD's HBM2 interface unambiguously delivers more memory bandwidth than any RTX GPU, which speaks to some of the intrinsic pricing difference. Whether the latest AMD GPU can effectively leverage that bandwidth in gaming relative to the Nvidia GPUs is, of course, its own question. AMD has an advantage over Turing in terms of overall die size -- the Radeon VII is a 331mm2 GPU. All of the RTX cards are significantly larger: RTX 2070 is a 445mm2 GPU, while 2080 is 545mm2. Being able to yield a higher number of GPUs per wafer will help keep AMD's cost lower, assuming 7nm and 12nm yields are comparable.
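To see roughly how much that die-size gap matters, a common first-order dies-per-wafer approximation can be applied to the three die areas above. This sketch ignores defect yield, scribe lines, and the different foundry nodes, so treat the outputs as illustrative ratios rather than real production numbers:

```python
import math

WAFER_DIAMETER_MM = 300  # standard wafer size for both TSMC 7nm and 12nm

def dies_per_wafer(die_area_mm2: float) -> int:
    """First-order dies-per-wafer estimate: usable wafer area divided by
    die area, minus a standard edge-loss correction. Ignores defect
    density, scribe lines, and reticle constraints."""
    radius = WAFER_DIAMETER_MM / 2
    wafer_area = math.pi * radius ** 2
    edge_loss = math.pi * WAFER_DIAMETER_MM / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

# Die areas as given in the article
for name, area in [("Radeon VII", 331), ("RTX 2070", 445), ("RTX 2080", 545)]:
    print(f"{name}: ~{dies_per_wafer(area)} candidate dies per 300mm wafer")
```

By this rough measure, AMD gets on the order of 40 percent more candidate dies per wafer than the RTX 2070 and roughly 75 percent more than the RTX 2080, which is the cost lever the article describes -- again assuming comparable yields on the two processes.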

Meanwhile, AMD is touting excellent performance in Microsoft's DirectML library as potential evidence that Team Red could offer a DLSS-like feature in the future. In an interview with 4Gamer, AMD's Adam Kozak stated the following:
At GDC 2018 last year, Microsoft announced "Windows ML," a framework for developing machine learning-based applications on the Windows 10 platform, and "DirectML," which makes it available from DirectX... We are currently experimenting with an evaluation SDK for DirectML, and Radeon VII shows excellent results in those experiments... Based on these facts, I think something like Nvidia's DLSS can be done on our GPUs with a GPGPU-style approach.

(If anyone speaks Japanese and can offer a superior translation to Google Translate, please let us know).

The implication that AMD can handle DirectML workloads effectively isn't surprising to anyone who has followed the long-term trajectory of GCN compute performance. GPU compute has been a general strength of the architecture dating back to GCN 1.0 in 2012, and excellent results in tests like Luxmark generally demonstrate the cards' capabilities. But tentative performance evaluation in a Microsoft SDK isn't the same thing as a feature that's ready to ship -- and the feature being discussed, in this context, is one that's still only supported in a bare handful of titles.

This relatively anemic discussion highlights the fact that beyond its basic stats, included game bundle, and price, there's a lot about Radeon VII that we don't know, including whether or not AMD will make any kind of technology introduction or launch around the part. With a price tag set to take it head-to-head against the RTX 2080, the gaming community is curious to know what other features or capabilities AMD will announce to sweeten its competitive standing.
