
Coming soon to a Radeon near you: AMD unveils its plans for High Bandwidth Memory

AMD is finally willing to talk about its high bandwidth memory solution. It'll arrive in the not-too-distant future -- with significantly improved graphics performance.
By Joel Hruska

During its analyst day two weeks ago, AMD confirmed that its next iteration of high-end Radeon cards would adopt High Bandwidth Memory, or HBM. We've previously covered HBM's technical implementation in some depth, but we haven't had formal acknowledgment from AMD that it would release the technology, or official data on how it compares to GDDR5. Now we do, and the final figures point to potent performance for the upcoming Radeon.

AMD decided to invest in HBM research seven years ago, when it became apparent that a new memory standard would be needed to replace GDDR5. Conventional GDDR designs have scaled extremely well over the past decade, but as the slide below shows, DRAM scaling and the difficulty of routing so many traces around the GPU itself have become significant problems.

Simply scaling GDDR5 to higher clock speeds to meet the demands of faster GPUs was no longer sufficient. Earlier this year, Samsung announced that it had begun producing GDDR5 rated for up to 8Gbps, but that's just a 14% bandwidth increase over existing 7Gbps GDDR5. Like CPUs, DRAM has a non-linear power consumption curve: higher clocks require higher voltages, and power consumption rises with the square of the voltage. A new approach was needed -- and HBM provides it.
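For readers who want to see why voltage is the killer here, a minimal back-of-the-envelope sketch of that dynamic-power rule (power scales roughly with frequency times voltage squared) is below. The clocks and voltages are our own illustrative assumptions, not figures from AMD or Samsung:

```python
# Rough CMOS dynamic-power rule: P ~ f * V^2 (capacitance held constant).
# The numbers below are illustrative assumptions, not vendor specifications.

def relative_power(freq, voltage, base_freq, base_voltage):
    """Power of a faster, higher-voltage part relative to a baseline, assuming P ~ f * V^2."""
    return (freq / base_freq) * (voltage / base_voltage) ** 2

# Hypothetical case: pushing GDDR5 from 7Gbps to 8Gbps (a 14% clock bump)
# while also requiring a modest 5% voltage increase.
print(relative_power(8.0, 1.05, 7.0, 1.00))  # ~1.26 -> roughly 26% more power for 14% more bandwidth
```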

Introducing High Bandwidth Memory

AMD's next-generation Radeon will be packaged together with its memory through the use of a 2.5D interposer. The diagram below illustrates how this is accomplished. Instead of connecting to off-package DRAM through a variety of circuit traces, the GPU and its memory connect through the interposer itself.

AMD's HBM implementation is the first iteration of High Bandwidth Memory to come to market, and it stacks four DRAM dies one on top of the other. Each individual DRAM die contains two gigabits of memory, which means four DRAM integrated circuits (ICs) add up to 1GB per stack. The first-generation HBM technology that AMD has deployed here allows for up to four 1GB stacks. Each stack is accessed through a 1024-bit memory channel clocked at up to 500MHz (1Gbps effective per pin), which works out to a maximum throughput of 512GB/s across a four-stack memory controller. Total scaling performance is expected to be excellent, with AMD claiming that 4GB of HBM provides resolution scaling equivalent to 2-3x as much GDDR5.
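If you want to double-check those numbers, the arithmetic is straightforward; the short sketch below simply reproduces the figures quoted above (1024-bit interface, 500MHz DDR clock, 2Gb dies, four stacks):

```python
# First-generation HBM figures as quoted above.
BITS_PER_STACK = 1024        # memory channel width per stack
EFFECTIVE_GBPS_PER_PIN = 1   # 500MHz clock, double data rate
STACKS = 4

per_stack_bandwidth = BITS_PER_STACK * EFFECTIVE_GBPS_PER_PIN / 8   # 128 GB/s per stack
total_bandwidth = per_stack_bandwidth * STACKS                      # 512 GB/s for four stacks

per_stack_capacity = 4 * 2 / 8                                      # four 2Gb dies -> 1GB per stack
total_capacity = per_stack_capacity * STACKS                        # 4GB total

print(per_stack_bandwidth, total_bandwidth, total_capacity)         # 128.0 512.0 4.0
```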

Power consumption, die size

One of the major improvements HBM brings over existing GDDR5 is power consumption. According to AMD, a high-end GPU like the R9 290X spends 15-20% of its power budget on its RAM. On a card with a 250-300W TDP, this suggests that 37W-60W of a GPU's total power consumption is spent on memory under load. AMD says adopting HBM slashes this figure by more than 50%, with a huge increase in total bandwidth delivered per watt.
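Those wattage figures fall straight out of AMD's percentages; a quick sketch of the math is below, treating a flat 50% cut as the conservative reading of AMD's "more than 50%" claim:

```python
# Memory power on a conventional high-end card, per AMD's 15-20% estimate.
tdp_watts = (250, 300)
memory_share = (0.15, 0.20)

gddr5_power = (tdp_watts[0] * memory_share[0], tdp_watts[1] * memory_share[1])
print(gddr5_power)   # (37.5, 60.0) -> the 37W-60W range quoted above

# AMD claims HBM cuts memory power by more than half; 0.5 is the conservative bound.
hbm_power = tuple(w * 0.5 for w in gddr5_power)
print(hbm_power)     # (18.75, 30.0) watts or less
```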


This is in line with other estimates we've seen for the relative power consumption of HBM and GDDR5. Power isn't the only saving -- the interposer package is also far smaller than the GPU-plus-DRAM layout of a conventional card. Again, AMD estimates it has slashed the PCB footprint nearly in half.

These advances should make dual-GPU designs far simpler to build than they've been to date. The boards themselves need not be as complex, and the total area devoted to memory will be an order of magnitude smaller than in current designs like the R9 295X2.

HBM: Nice for GPU, but an APU game-changer

AMD will be the first company to bring HBM to market in a mainstream part, and we don't yet know how many SKUs the company is launching. All of the sources we've spoken to -- sources in a position to know -- say that Fiji will have 4GB of RAM when it launches in the not-too-distant future, not 8GB. Given that Nvidia's highest-end consumer GPUs offer roughly 4GB of memory (not counting the $1,000 Titan), Fiji should compete well on that front.

Even more exciting is what HBM could mean for APUs 18-24 months from now. While AMD obviously isn't giving timelines, the company confirmed that it intends to extend HBM across its product stack, including future APU designs. Even a single HBM stack would provide a 1024-bit memory bus, dwarfing the bandwidth of a high-speed quad-channel DDR4 design.
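To put that in perspective, here's a rough comparison of a single first-generation HBM stack against a quad-channel DDR4 setup. The DDR4-2666 speed grade below is our own assumption for a "high-speed" configuration; AMD hasn't specified a comparison point:

```python
# One first-generation HBM stack: 1024-bit bus, 1Gbps effective per pin.
hbm_stack_gbs = 1024 * 1 / 8                                  # 128 GB/s

# Quad-channel DDR4: four 64-bit channels; DDR4-2666 is an assumed speed grade.
ddr4_transfers_per_sec = 2666e6
ddr4_quad_gbs = 4 * (64 / 8) * ddr4_transfers_per_sec / 1e9   # ~85.3 GB/s

print(hbm_stack_gbs, round(ddr4_quad_gbs, 1))                 # 128.0 85.3
```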

That much memory bandwidth could crack the bandwidth problems that have choked integrated graphics to one degree or another from the very beginning. Sharing main memory and competing with the CPU for bandwidth has always hurt integrated graphics performance, going all the way back to the Cyrix MediaGX. By 2017-2018, AMD may have solved that issue for good.

 
