
AMD's Radeon R9 Fury X: Previewing performance, power consumption, and 4K scaling

AMD's new Radeon Fury X debuts today. FedEx shenanigans kept us from having much time with the GPU, but we've got a preview of the card ready to go, including preliminary findings on power consumption, 4K scaling, and overall performance.
By Joel Hruska

Today, after months of previews, leaks, and a smattering of official disclosures, AMD is launching its much-touted Radeon R9 Fury X. The new GPU packs a number of potent firsts -- it's the largest GPU AMD has ever built, the first GPU to use HBM, and the first high-end card AMD has launched to compete directly against Nvidia's Maxwell since that architecture debuted almost nine months ago. AMD has been promising a GPU that would truly leapfrog its previous Hawaii-class cards in terms of performance, power consumption, and noise.

[Image: The Radeon R9 Fury X. The perspective isn't twisted -- this GPU is small.]

Unfortunately, our initial examination of the Radeon R9 Fury X is going to be more constrained than we originally planned. Due to a miscommunication with Edelman and some truly astonishing incompetence from FedEx, our sample GPU, which was supposed to arrive on Friday, didn't arrive until Tuesday -- less than 24 hours before this morning's 8 AM launch. A full evaluation of the Radeon R9 Fury X under these circumstances was impossible. Instead, we'll be previewing some initial findings and continuing to work on comprehensive testing.

The three big questions

Over the past few months, readers have expressed three primary concerns about the Fury X. First and most obvious: Would it match Nvidia's overall performance? Second, would it improve on Hawaii's power consumption or performance per watt? While AMD's 2013-era GPUs competed fairly well against Kepler, Nvidia took an aggressive lead on overall power consumption with Maxwell. Third, would the 4GB memory buffer on the Fury X harm scaling at 4K resolutions?

We intend to visit all of these topics in greater detail, but the data we're going to present right now is indicative of the trends we're seeing in every category. Let's start with overall performance previews. All of our tests were run on a Haswell-E system with an Asus X99-Deluxe motherboard, 16GB of DDR4-2667, and Windows 8.1 64-bit with all patches and updates installed. The latest AMD Catalyst Omega drivers and Nvidia GeForce 353.30 drivers were used. Our power consumption figures are going to be somewhat higher in this review than in some previous stories -- the 1200W PSU we used for testing was a standard 80 Plus unit, not the 1275W 80 Plus Platinum unit we've typically tested with.

We've also included results for a slightly higher-end GTX 980 Ti from EVGA, the GeForce GTX 980 Ti SC+ ACX 2.0+, which carries a $679 price tag (up from the $649 MSRP on the standard GTX 980 Ti).

[Chart: BioShock Infinite benchmark results]

BioShock Infinite has historically been a hair faster on Nvidia hardware than on AMD, but Fury closes the gap here, rocketing forward to tie the GTX Titan X reference design at both 1080p and 4K. The super-clocked variant from EVGA is still a hair faster, but also a touch more expensive, at $679 compared to $649. The Fury X isn't going to match the Radeon R9 295X2, but it's 36% faster than the R9 290X in 1080p and a whopping 70% faster in 4K.
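For readers who want to translate average frame rates into these kinds of comparisons, the arithmetic is simple. Here's a minimal Python sketch -- the frame rates in the example are illustrative placeholders, not our measured results:

    # Relative speedup from two average frame rates:
    #   percent faster = (new_fps / old_fps - 1) * 100
    def percent_faster(new_fps: float, old_fps: float) -> float:
        return (new_fps / old_fps - 1.0) * 100.0

    # Hypothetical example: a card averaging 68 FPS vs. one averaging 50 FPS.
    print(f"{percent_faster(68, 50):.0f}% faster")  # -> 36% faster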

Shadow of Mordor

In Shadow of Mordor, the R9 Fury X lands between the GTX 980 and the 980 Ti, but sits closer to the latter than the former in 1080p. In 4K, however, the tables turn a bit -- here, the R9 Fury X is faster than any other single-GPU solution save for the overclocked EVGA GTX 980 Ti, to which it narrowly loses.

The degree to which the Fury X can match the GTX 980 Ti varies somewhat from game to game, including some titles we haven't finished testing yet. At its best, the card seems to offer equivalent performance to the GTX 980 Ti when tested at our standard detail levels and configurations, but it doesn't win every benchmark. It's always faster than the GTX 980, however, at least in everything we've had a chance to test.

Previewing 4K scaling

One of the most sustained questions we got from readers was whether or not the Fury X could handle scaling at 4K and beyond. I've always felt this was unlikely to be a major issue, given that Nvidia's own $500 GTX 980 GPU is a 4GB part and most games -- even those that can tap more than 4GB of VRAM -- haven't generally needed it. In point of fact, it's genuinely difficult to convince most games to use that much RAM in single-GPU configurations (multi-GPU configurations are slightly different, as these have some intrinsic memory overhead).

I only had one title easily on-hand that regularly allocates more than 4GB of VRAM -- Shadow of Mordor. Dragon Age: Inquisition can burst to above the 4GB mark, but generally keeps itself to a 2-3GB envelope, even in a system with a 6GB GTX 980 Ti or 12GB Titan X. Scaling measurements in a single title don't provide us with a comprehensive look at the situation, but again -- this is a preview, not the full review.

[Chart: Memory usage in Shadow of Mordor]

First, let's talk about how Shadow of Mordor allocates RAM across three different cards: The GTX 980 Ti (6GB), the GTX Titan X (12GB) and the Radeon R9 Fury X (4GB). We tested the game at multiple resolutions, from 1920x1080 through full 8K (7680x4320). The Shadow of Mordor engine can simulate an 8K image by supersampling at 4K, and while this isn't exactly the same as 8K native output, the difference in performance between supersampled 4K and actual 4K in this game is less than 5%. RAM usage is identical between the two modes.

At first glance, the game definitely seems to be making use of all the extra memory, but note the differences at 1080p between the Titan X and GTX 980 Ti. VRAM load isn't identical between each run of the game, which means the figures listed here should be seen as ballparks rather than absolutes. Nonetheless, the trend is clear -- Shadow of Mordor definitely loads more data into VRAM as the game progresses.
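If you want to spot-check VRAM allocation on your own hardware, here's a minimal sketch of one way to poll it on an Nvidia card using the nvidia-smi utility. This isn't the tooling we used for the figures above, and it won't work on the Fury X -- AMD cards need something like GPU-Z or the vendor's own utilities:

    # Poll VRAM usage once per second for a minute and report the peak.
    # Assumes nvidia-smi is on the PATH; illustrative only.
    import subprocess
    import time

    def vram_used_mib() -> int:
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=memory.used",
             "--format=csv,noheader,nounits"],
            text=True)
        # One line per GPU; take the first GPU in the system.
        return int(out.strip().splitlines()[0])

    samples = [vram_used_mib()]
    for _ in range(59):
        time.sleep(1)
        samples.append(vram_used_mib())

    print(f"Peak VRAM use: {max(samples)} MiB")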

Next question: How does that impact performance of our GPUs?

[Chart: Shadow of Mordor performance at each resolution for each card]

Here are the frame rate measurements for each card in graph and chart form (the graph is rather too busy to embed the data points directly). At 1080p, the GTX 980 Ti has a 16% lead over the Fury X, but that lead shrinks steadily as we move to higher resolutions. By 4K, the Fury X is actually slightly ahead of the EVGA card, despite the latter's higher clocks. This changes past the 4K point, with the Fury X's scaling falling off faster than the Nvidia cards'.

Note, however, that none of the GPUs we tested maintain a playable framerate above the 4K mark, making the entire question of memory allocation academic. Furthermore, while the Titan X doubles the GTX 980 Ti's frame rate at 8K, it can't quite break 10 FPS. Even if you're an SLI owner, you're not going to be running at resolutions that strain the 6GB memory buffer on the GTX 980 Ti, much less the Titan X.

While this article is only a preview of memory scaling, I'd like to note that Shadow of Mordor displays the strongest case for needing more than 4GB of VRAM of any game we've tested yet. Furthermore, the 5760x3240 resolution we tested packs more than 2x the pixels of 4K and a further 26% more pixels than 5K. The market for this kind of visual fidelity is limited -- if you're buying a $649 video card, you're almost certainly going to replace it before you outgrow your VRAM buffer.
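Those pixel-count comparisons are easy to verify with a few lines of arithmetic; here's a quick Python sanity check:

    # Pixel counts for the resolutions discussed above.
    resolutions = {
        "1080p":     (1920, 1080),
        "4K":        (3840, 2160),
        "5K":        (5120, 2880),
        "5760x3240": (5760, 3240),
        "8K":        (7680, 4320),
    }
    pixels = {name: w * h for name, (w, h) in resolutions.items()}

    print(pixels["5760x3240"] / pixels["4K"])  # 2.25 -- more than 2x the pixels of 4K
    print(pixels["5760x3240"] / pixels["5K"])  # ~1.27 -- roughly 26-27% more pixels than 5K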

Preliminary power consumption

The last topic I want to touch on is power consumption and performance per watt. AMD made a number of claims regarding the R9 Fury X's performance and its power consumption improvements. Most of these appeared to be tied to the use of HBM rather than other factors, but after the drubbing the company has taken from Maxwell, can it regain some luster in this area?

Note: These results are taken from tests run with Nvidia's 352.90 drivers. The 353.30 drivers improve performance in Metro Last Light by up to 25%, but we didn't have time to re-measure performance per watt. 

First up, standard power consumption figures. Idle power is measured at the desktop after 10 minutes of inactivity; load power is measured in Metro Last Light at 4K resolution with SSAA enabled and details turned to Very High.

[Chart: Power consumption]

Absolute power consumption on Fury X is still higher than the GTX 980 Ti's, and by no small margin -- but the card's performance per watt is much improved from what we saw with the Radeon R9 295X2. Here are those results:

[Chart: Metro Last Light performance per watt]

In our performance-per-watt metric, the Radeon R9 Fury X is markedly more efficient than the previous generation of card. No, it still doesn't catch Maxwell, but it's far better than AMD's previous designs.
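For clarity's sake, the metric here is simply average frame rate divided by measured full-system load power. A minimal sketch, using placeholder numbers rather than our measured results:

    # Performance per watt = average frame rate / measured load power (at the wall).
    def perf_per_watt(avg_fps: float, load_watts: float) -> float:
        return avg_fps / load_watts

    # Hypothetical example: 45 FPS at 420W vs. 40 FPS at 520W of system draw.
    print(f"{perf_per_watt(45, 420):.3f} FPS/W")  # -> 0.107 FPS/W
    print(f"{perf_per_watt(40, 520):.3f} FPS/W")  # -> 0.077 FPS/W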

Preliminary conclusions

I've got a few thoughts to share before heading back to the testbed. Here they are, in no particular order:

Cooler performance: The Radeon R9 Fury X that AMD shipped us whines and gurgles at several different frequencies. According to AMD, this is a problem that's already been fixed in production hardware, so consumers won't encounter it at all. (It's caused, according to AMD, by high-speed agitation of fluid in the radiator, which makes sense given the noise profile.) The GPU cooler's performance is excellent -- our GPU tends to stick around 50C, and the Nidec fan is extremely quiet.

Overall performance: The Radeon R9 Fury X is a huge leap forward for AMD compared to the R9 290X, but gamers who were hoping to see Team Red deliver the bruising that Hawaii dished out in 2013 are going to be disappointed. Fiji is a very solid design, but it trades shots with the 980 Ti -- it doesn't beat it across the board. Some reviewers are reporting that the driver stack could've used a bit more polish, and that's likely true -- this launch was very 'hot' as these things go.

VRAM loadout: So far, there's no sign that VRAM loadout is a problem for Fiji. If you have specific titles you'd like to see tested, drop them off in the comments or email me directly and we'll take a look.

No overclocking headroom: Based on comments from AMD, other reviews, and my own experience thus far, it's a mistake to look to the Fury X as an overclockable card. Memory clocks are hard-locked (AMD tells us that overclocking the RAM has very little effect), and the stock GPU clock of 1050MHz has no headroom in it. Even a 4.7% overclock failed in testing. A bit more voltage headroom might change that, but current applications like Afterburner can't detect the card's operating voltage at all, much less change it.

Given how mature 28nm technology is, we don't expect overclocking headroom to change much going forward, though it's possible it will. For now, don't plan on squeezing more performance out of a Fury X via overclocking.

Based on what I've seen so far, the R9 Fury X is impressive on a number of fronts. It's a huge leap forward compared to the R9 290X. It's more power efficient, it's small -- tiny, in fact, compared to typical high-end cards. I like the 50C temperature and near-silent operating mode, and I like the move to HBM, which clearly helped AMD on the power efficiency front.

Balanced against that, however, is the fact that the drivers need further polish, and the card's $649 price tag means it's trading blows with the GTX 980 Ti in some titles rather than offering best-in-class performance across the board. At $500-$550, positioned against or a notch above the GTX 980, the Fury X would be a slam dunk. At $600, it would neatly split the difference between the GTX 980 and the 980 Ti and offer an extremely compelling alternative. At $649, we need to see some driver improvements, and we very much want to test the revised cooler design.
