I can say that in September I didn’t quite think they’d pull it off, but here we are now, with the next Radeon announcement done, and I do think AMD has pulled off something quite good here.
With the RX 6900 XT (heh, nice), the 6800 XT, and the 6800, AMD is poised to shoot directly across Nvidia’s bow, offering, respectively: 3090 performance at $500 less, 3080 performance at $50 less, and 3070 performance at $80 more but with twice the VRAM. All three cards will be out by year’s end, with the 6800 family launching on 11/18/2020 and the 6900 XT on 12/8/2020, and partner cards supposedly close behind.
Without white papers or technical briefings, there is only speculation about how some of the features work, but let’s dive into what they discussed today.
Infinity Cache: I mentioned this in my preview because it really does seem to be the glue that holds this generation of cards together for AMD. Rather than pushing for higher-clocked, higher-power GDDR6 (GDDR6X being an Nvidia collaborative effort), they’ve instead gone with the fastest current standard GDDR6 at 16 Gbps and added a 128 MB on-die cache to the GPU to accelerate memory accesses. The idea seems to be to buffer nearly any memory access in the short term, allowing data to be read and written far more rapidly. They claim that overall, it should push effective performance to almost double that of a 384-bit memory bus with the same GDDR6 memory, while consuming less power. That is a bold claim that needs third-party analysis, but the performance numbers do reflect what seems to be better-than-expected memory performance, so I suspect it is doing a lot of work.
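AMD hasn’t published hit rates, so here’s a hedged back-of-the-envelope sketch (the hit rate used is purely illustrative, not an AMD figure) of how a large on-die cache can amplify the bandwidth the GPU appears to have:

```python
def raw_bandwidth_gbs(bus_width_bits: int, pin_speed_gbps: float) -> float:
    """Raw GDDR6 bandwidth in GB/s: bus width (bits) x per-pin data rate / 8 bits per byte."""
    return bus_width_bits * pin_speed_gbps / 8

narrow = raw_bandwidth_gbs(256, 16.0)  # 256-bit bus, 16 Gbps GDDR6 -> 512 GB/s
wide = raw_bandwidth_gbs(384, 16.0)    # hypothetical 384-bit bus   -> 768 GB/s

def effective_bandwidth_gbs(dram_bw_gbs: float, cache_hit_rate: float) -> float:
    """If a fraction of accesses never leave the die, DRAM only has to serve
    the misses, so the bandwidth the shader cores observe is amplified.
    (Simplified: ignores the cache's own bandwidth ceiling and write traffic.)"""
    return dram_bw_gbs / (1 - cache_hit_rate)

# Illustrative hit rate only: at ~58% hits, the 256-bit bus would "look like"
# well over 1.5x the raw 384-bit figure.
print(effective_bandwidth_gbs(narrow, 0.58))  # ~1219 GB/s
```

The point of the sketch is just that the claim isn’t magic: a high enough hit rate on a 128 MB cache can plausibly make a 256-bit bus outrun a 384-bit one, and whether it holds up depends entirely on real-world hit rates.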
Smart Access Memory: The real oddball feature of the announcement, but one that does intuitively make sense in the enthusiast market. A lot of AMD fans want a full AMD system, and currently don’t really get a performance uplift for doing so outside of certain workloads. Smart Access Memory is a strange bit of tech which purports to give a Ryzen 5000 CPU on a 500-series chipset full access to the memory on a Radeon RX 6000 graphics card, with inspiration drawn from the consoles and how the CPU and GPU there share memory. It shows a performance uplift in games when used, although not always an impressive one, and I have some functional concerns about how well it would work. While GDDR6 is miles ahead of DDR4 in raw bandwidth, PCIe 4.0 at x16 offers only a smidge under 32 GB/s of effective bandwidth per direction, while most dual-channel memory setups on Ryzen will exceed 50 GB/s; and that assumes the CPU is using the PCIe link to the GPU solely for memory access and not for, you know, all the other things it needs to do. Still, it could be faster for some things (pushing graphics assets to GPU VRAM proactively rather than waiting for the GPU to call for them, etc.), and it seems likely to be the mechanism by which AMD’s RTX IO competitor works, since they did mention DirectStorage support in passing. And, to be fair, that same PCIe bandwidth ceiling will also exist for RTX IO, which will almost certainly need a PCIe 4.0 host to work to its fullest potential.
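To put numbers on that bandwidth concern, here’s the arithmetic in a quick sketch (standard PCIe 4.0 and DDR4 figures, nothing AMD-specific):

```python
def pcie4_x16_bandwidth_gbs() -> float:
    """PCIe 4.0: 16 GT/s per lane x 16 lanes, 128b/130b encoding overhead,
    per direction -> just under 32 GB/s."""
    return 16 * 16 * (128 / 130) / 8

def ddr4_dual_channel_bandwidth_gbs(transfer_rate_mts: int) -> float:
    """Dual-channel DDR4: 2 channels x 64-bit (8-byte) bus x transfer rate."""
    return 2 * 8 * transfer_rate_mts / 1000

print(round(pcie4_x16_bandwidth_gbs(), 1))    # 31.5 (GB/s over the GPU link)
print(ddr4_dual_channel_bandwidth_gbs(3200))  # 51.2 (GB/s for DDR4-3200)
```

So even a common DDR4-3200 kit has roughly 60% more bandwidth to the CPU than the entire PCIe 4.0 x16 link, which is why I doubt Smart Access Memory is about raw bandwidth rather than smarter access patterns.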
RAGE Mode: Not a fan of the marketing of this, but it harkens back to the original ATI Rage graphics lineup in the nineties, so I’ll give it a pass. RAGE mode (am I supposed to full-caps Rage?) is simply an automatic power-target boost, akin to pushing the power slider to max in any current graphics overclocking suite. By itself it does nothing but push the board past 100% power consumption, but because of how boost clocks work (opportunistic increases up to a maximum power/thermal/voltage limit), it will improve boost behavior and should raise average clock speeds by a small amount. AMD showed the increase only folded in with Smart Access Memory turned on; the best result was an outlier of 13% for both features combined, with the average settling around 5-8%. Until we have more details, it doesn’t innately seem to offer anything you can’t already do in Wattman on Radeon, or with Afterburner/Precision X/RivaTuner on Nvidia cards, but I have to believe marketing is making a deal out of it for some reason – perhaps an intelligent aspect, like Precision Boost Overdrive on Ryzen?
Raytracing: It’s here! But AMD was somewhat cagey about discussing it, which tracks with the rumors that RT performance is the one place where they haven’t been able to catch up to Nvidia – and again, without a whitepaper, we can only speculate as to why. My best guess is that they are working on platform-agnostic features via DirectX 12 Ultimate and DXR rather than going their own way, and the current state of DXR is not complete. What I’ve read suggests that DX12.2 will have a version of resolution scaling akin to DLSS that is baked into the API rather than added on by a vendor, and given that even Nvidia is loath to show off RTX performance without also using DLSS, I suspect this is why. They did discuss a bunch of features that tie into raytracing, though – they’re updating their FidelityFX suite with tools for Contrast Adaptive Sharpening, Denoising, Variable Shading, Ambient Occlusion, Screen Space Reflections, and Super Resolution, all of which pair quite handily with a full RT featureset. They also showed video from Shadowlands (!) with raytracing off and on on Radeon hardware, which is promising – RTX cards apparently aren’t great with WoW raytracing, and if Radeon cards get some raytracing partners on board this generation, that could be a big deal. Either way, I’m happy to see it working, and to see hints that as API support grows more robust, Radeon RX 6000 cards should get better support too.
The power and efficiency claims, while great, also tie into the last tidbit for today – performance. As I mentioned above, AMD has exceeded most reasonable expectations here, with cards that dance around their Nvidia counterparts on all sides. The RX 6800 offers a lot of performance, beating the 2080 Ti (and by extension the 3070) while having more VRAM, for $579. The 6800 XT is the clear flagship, at $649 and trading wins with the RTX 3080. Meanwhile, the later-launching RX 6900 XT does something I know I certainly didn’t expect – match or beat the RTX 3090 at a price $500 lower than its main competitor’s starting price!
However, here’s the thing. I can and will laud the RX 6900 XT (heh, nice) for having a highly competitive price in line with the current market, and likewise, I think the positioning of the RX 6800 XT is solid. However, the pricing of the RX 6800 pushes it too high up the stack for most, in my opinion. $579 against the close-enough RTX 3070 is a bad value comparison, and while the doubled VRAM is a good thing, it doesn’t actually mean much unless you are playing at high resolutions, rendering at high resolutions and downscaling, or running absurd features like maxed-out SSAA anti-aliasing. None of these cards actually comes down to the level of the console SoC versions of RDNA 2: even the 6800 has 8 more CUs and a higher clock speed than the Xbox Series X, and 24 more CUs than the PS5 at a similar maximum clock. This leaves room for actual midrange hardware (an RX 6700, 6600, etc.) to serve mainstream price points, and should theoretically allow AMD to eventually release what is effectively the console version of the GPU at lower price points (although, of course, not at console prices).
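For a rough sense of that gap, you can compare theoretical FP32 throughput (a hedged sketch: 64 shader lanes per CU and 2 ops per clock via FMA are standard RDNA 2 figures, and the clocks used are the consoles’ published ones plus the RX 6800’s announced boost clock):

```python
def fp32_tflops(compute_units: int, clock_ghz: float) -> float:
    """RDNA 2 FP32 peak: CUs x 64 lanes x 2 ops/clock (fused multiply-add) x clock."""
    return compute_units * 64 * 2 * clock_ghz / 1000

print(round(fp32_tflops(52, 1.825), 2))  # Xbox Series X: ~12.15 TFLOPS
print(round(fp32_tflops(36, 2.23), 2))   # PS5 at max clock: ~10.28 TFLOPS
print(round(fp32_tflops(60, 2.105), 2))  # RX 6800 at boost clock: ~16.17 TFLOPS
```

Peak TFLOPS is a crude proxy for gaming performance, but it illustrates how much headroom sits below even the cheapest announced card for console-class parts.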
There also remains the question of availability. COVID-19 has meant production shortfalls in many key components and continues to cause low availability of basic board parts like capacitors, inductors, and other power-delivery components. Meanwhile, AMD’s deal with TSMC for manufacturing on the 7nm process, while good, has to cover wafers for these new cards, the previous RX 5000 series, and every current Zen CPU: Ryzen 3000, the 4000-series mobile and desktop APUs, the Ryzen 5000 series, and the Epyc server chips, both Rome and Milan (Zen 2 and Zen 3, respectively). Nvidia has been plagued by short supply (regardless of what they say publicly), and coupled with large demand, that has made this one of the hardest graphics card purchases I’ve ever seen. Scalpers ruled Nvidia’s store to such an extent that Nvidia stopped selling Founders Edition cards themselves and pushed the responsibility to a designated retail partner per region, and board partners have had to implement stronger bot protection while doing things like EVGA’s notify-me priority list, which actually reserves boards for sale in 5-hour windows as availability opens up.
There are some signs that AMD is prepared for that in a way that Nvidia wasn’t, though. Firstly, instead of working with a new process and a new foundry partner, they are working with an existing foundry partner under a wafer agreement that has been in place for over a year with great results. TSMC’s 7nm is publicly known for high yields, which should mean a large number of usable chips off the line. By appearing to base all three top-stack cards on the same Navi 21 silicon, they have multiple binning targets that can easily be met: the best dies become RX 6900 XTs, dies with a slight CU defect go to the 6800 XT, and chips with larger defects can be cut down to a vanilla 6800. Using stock GDDR6 means memory availability should be good, as it has been in supply (even in the 2 GB modules AMD will use) for over two years now, whereas Nvidia’s top-stack cards use GDDR6X, which was only confirmed to be in production as of August 2020. Lastly, AMD has published a set of guidelines it wants retailers to follow to fight scalpers and bots: per-purchase limits, checkout and cart management processes, and other ideas to at least reduce scalper and bot purchases and ensure cards make it to more actual end users.
Then there is the matter of board partners. The word on the street is that while board partners haven’t had the GPUs themselves for long, they’ve had a good chunk of time to work on board designs, and some have even been working out how to push boost clocks far higher to win customers from their competitors, with one BIOS shown by Igor’s Lab to have a maximum boost clock of 2,577 MHz! If such a card does come out (and especially if it can be done on the RX 6900 XT), then AMD will have board-partner cards that flat-out win the generation on pretty much every gaming front.
With the staggered launch, something I also think is good (whether intentional or not is another matter) is that people wanting the top-end card will be able to see reviews and feedback on driver issues for the 6800 family first, with three weeks between the review embargo on those cards and the launch of the 6900 XT.
Which is good news for fence-sitters and patient folks like me, because at this point, if PowerColor announces a LiquidDevil (preinstalled water block) version of the 6900 XT, it’ll probably be where I go!