Sasha W.

Tech Babble #5: Radeon.

Updated: Mar 16, 2020

I just got an RX 5700, and I've had a chance to play some games on it and test it. And since Navi represents a pretty monumental milestone for Radeon graphics hardware, I decided I'd type a load of crap into a Tech Babble. Remember, these are "Babbles" for a reason: they come from my unfiltered, babbly thoughts on a subject.


 

Well, we all know how much AMD is kicking butt in the CPU department right now. I mean, Zen was so disruptive we've seen Intel's almost decade-long run of quad-cores replaced by 6 and now 8 cores on their "mainstream" socket. Oh, and you'd have been paying two thousand bucks for a 10-core if Threadripper hadn't shown up; now you can get 18 cores. Ehm, still not hugely great value, but what I'm trying to say is Ryzen is kicking butt.


Radeon, on the other hand? Well, I will be fair. Radeon has been there, the underdog for sure, but they've been there. Sitting in Nvidia's shadow, and to the smart, value-conscious consumer, they've often been the best choice. I only actually started following Radeon and their products around 2013, starting with Hawaii and the R9 290 series; before that I was firmly an Nvidia customer, and honestly a bit ignorant of the competition.


But that lines up with about the same time I started really getting into computer hardware. So, naturally, I wanted to explore the alternatives. From that moment, I've been back and forth between Team Red and Team Green, like a GPU enthusiast yo-yo. It was only from late 2016 or 2017 that I really started getting behind Radeon, and since then I've pretty much stuck with them. The RTX 2060 SUPER I bought recently was a bit of a blip, since I'd wanted to test ray tracing so badly.


Anyway, I'm the master of digression, so I will try to get back to the point of this post.


The Underdog

In all honesty, Radeon has had few moments in recent history where it was the top dog in terms of raw performance. And those few moments were short-lived, and the Nvidia response firmly took back the crown. I'm thinking of the first Graphics Core Next based processor, Tahiti, and the HD 7970 graphics card built on it: it beat out the incumbent GeForce king, the GTX 580, but the GTX 680 was only a few months away. Honestly, at the top of the stack, you've always been looking at Nvidia. That changed a bit with the R9 290X, but the GTX 780 Ti (let's pretend the 290X and 7970 didn't have the last laugh all these years later :)) took back the crown for that generation. With Maxwell, things really started going downhill for Radeon and GCN in terms of technological leadership. I won't go into details about feature support or API implementation, both of which GCN is pretty solid for, especially versus Maxwell and prior architectures, but for the most part, Maxwell (VRAM issues aside) firmly established technological dominance over GCN's 28nm parts.


Pascal further cemented that lead, and Turing essentially put the icing on the cake (or the nail in the coffin) for GCN and Radeon. AMD did have a final shot at the high end with the (now EOL) Radeon VII, but the performance wasn't enough to fully dethrone the now two-year-old GTX 1080 Ti at all levels. Despite offering more video memory than any consumer graphics card before it, excellent prosumer capability, and even scientific appeal from its relatively huge double-precision rate (1:4 of FP32 on the Radeon VII), GCN's last hurrah didn't leave me with a good feeling.


I loved my Radeon VII, but the reference card was hot and loud, and the performance didn't blow me away, especially since that level of performance had already been on the market for nearly two years...


But let's not forget the champs!

Okay, I'm being harsh. But before you bite my head off and kill me for heresy against our Lord and Saviour Dr. Lisa Su (I think I'm being serious...), GCN-based Radeon has had some absolutely fantastic products over the years, so let's not forget...


By the way, this list is non-exhaustive, just champs that I personally remember, although my dear friend who will read this post will likely tell me about more champs, so expect edits :) (You know who you are). In no particular order:


  • HD 7950

  • R9 290

  • R9 380 4GB

  • R9 390

  • RX 470 8G

  • RX 570 8G

  • RX Vega 56


I think the best, for me, is the Radeon RX 570 8G. This little guy is, to this day, the reigning value champion and the people's graphics card. I mean, the 8GB models are so cheap now that they essentially make the competition at this price point completely irrelevant for a decently built desktop gaming tower.


Anyway, as I was saying, this can be summarised by the honest fact that (for the most part) there is no bad graphics card, just a bad price. And Radeon honestly has nailed the value part quite well over the years, especially versus NVIDIA (390 vs 970 comes to mind...).


Where are you going with this?

Well, I just wanted to say it's not all bad, but my point is that, technologically, AMD Radeon has been quite far behind the competition from Nvidia. Value aside, they've needed larger dies, more transistors and more power to achieve the same performance, some parts even using exotic memory technologies (HBM). Vega is an example of what I mean: to this day, the Vega 10 GPU doesn't provide significantly more performance than its competitor, GP104, despite being over 170 square millimetres bigger, having around 5 billion more transistors, consuming a lot more power, and using High Bandwidth Memory.


Vega 10 was built to target GP102. Everything about the original marketing campaign, the paper specifications and the launch price of the Liquid Edition says to me that Vega missed the mark. Raja Koduri did tweet once, if I remember correctly, that it's not fair to compare Vega to Pascal's consumer dies, due to the extra hardware in the Vega GPU for HPC/compute-focused applications. This is somewhat true, but it's also the case for almost all GCN GPUs versus their competition: they need more shaders, more bandwidth, more everything to match Nvidia.


 

Navi!

I'm just typing this from how I feel, by the way. You don't have to read it. Anyway, on to our first taste of the new "RDNA" architecture and the first processor built on it, "Navi 10", in the RX 5700 series. I have some concerns regarding the architecture, but let's start with the positive bits.


AMD has finally managed to bridge the somewhat huge disadvantage they had in translating shader number-crunching power into frame rates in games. Now, there's a variety of reasons for this to my knowledge, from the lop-sided, compute/throughput-focused design of GCN, to the shader under-utilisation I've already typed a bit about. That gap is finally closed if we look at the Navi 10 silicon, which doesn't have wildly more shader processors than the equivalently performing NVIDIA part, and honestly may even have better per-shader performance, because it also lacks the dedicated INT pipelines that Turing has.
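
To put rough numbers on that, here's a quick back-of-the-envelope sketch. The shader counts are the official specs; the clocks are approximate typical figures I've picked for illustration, so treat the output as ballpark only:

```python
# Back-of-the-envelope FP32 throughput: 2 ops per shader per clock (fused multiply-add).
def fp32_tflops(shaders: int, clock_ghz: float) -> float:
    return 2 * shaders * clock_ghz / 1000.0

cards = {
    # name: (shader count per official spec, clock in GHz - approximate)
    "RX 5700 XT (Navi 10)": (2560, 1.755),  # around the advertised Game Clock
    "RTX 2070 (TU106)":     (2304, 1.620),  # reference boost clock
}

for name, (shaders, clock) in cards.items():
    print(f"{name}: {shaders} shaders @ {clock:.3f} GHz ~= {fp32_tflops(shaders, clock):.1f} TFLOPS")
```

Both land in the same ballpark (roughly 9 vs 7.5 TFLOPS for roughly comparable frame rates), which is a world away from the Vega 64 vs GTX 1080 days, where AMD needed far more raw FLOPS to keep up.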


That's pretty impressive, and it shows AMD has been working hard to address GCN's big issue with gaming workloads...


 


A Graphics Processor actually built for... Graphics

Navi 10 is the first chip from AMD in a long time that I can actually say feels built entirely for processing graphics. The somewhat ironically named "Graphics Core Next" has always struck me as a number-crunching compute powerhouse with a graphics pipeline tacked onto it. And indeed, GCN is fantastic at crunching simple numbers, an area where it has even outmuscled the competition many times.


RDNA's new SIMD structure is where the heart of the action is. It was redesigned to be a low-latency-focused, game-shader-crunching, efficient machine. In fact, RDNA has some design similarities with recent NVIDIA architectures, notably Pascal. That's because Pascal itself is a fantastic "gaming graphics" architecture, so it's no surprise they are broadly similar (at the 35,000-foot level, of course).


But anyway! What I am trying to type here is that I actually feel Radeon has a very positive future. RDNA, Navi and the RX 5700 series are not "just another GCN re-hash" but an entirely new design, built for gaming graphics. Of course AMD won't abandon compute and HPC (the server market is big money), and right now GCN holds that fort. But I expect more compute-focused features to be brought to RDNA over time, while keeping that basic gaming-graphics pedigree that GCN just never had.


 

Here are my concerns. Number one: Transistor budget.

Just to let you know, this is based on my own speculation. I could be entirely wrong, but here it is anyway ~


There are some areas of concern for me regarding Navi 10, and they lie in two distinct places. First, the transistor budget. Navi 10 packs a little over 10 billion transistors into its relatively small 250mm² die, using TSMC's 7nm lithography. It's not apples-to-apples to compare die sizes against Turing, which is built on 12nm, so even though AMD gets better performance from a smaller die (vs TU106), the process discrepancy invalidates that argument: on the same process, the two chips would be largely the same size.


Yes, the Navi 10 silicon does provide consistently more performance than TU106 (not by a huge margin, but it is there), but when you consider that TU106's transistor budget must also cover logic blocks such as the dedicated integer pipelines, the RT cores, and the Tensor cores - the latter two of which aren't even used in the games you'd compare these processors in - it gets a little concerning that AMD is once again spending more transistors for the same performance. By the way, Turing actually has worse performance per transistor than Pascal, on account of the separated SIMD logic for float and integer data types (an FP32 shader plus an INT32 shader takes up a bit more space, with the associated registers and control logic, than a single shader that can execute both types, in my understanding).
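
Here's a rough sketch of what I mean. The transistor counts are public specs; the relative-performance figure and the fraction of TU106 spent on RT/Tensor/INT hardware are loudly labelled guesses of mine, purely for illustration:

```python
# Transistor counts are public specs; everything else is an assumption.
navi10_transistors = 10.3e9   # Navi 10: a little over 10 billion
tu106_transistors  = 10.8e9   # TU106: ~10.8 billion

rel_perf_navi10 = 1.10  # ASSUMED: 5700 XT ~10% ahead of RTX 2070 ("not a huge margin")
rel_perf_tu106  = 1.00

def perf_per_billion_transistors(perf: float, transistors: float) -> float:
    return perf / (transistors / 1e9)

print("Navi 10:", round(perf_per_billion_transistors(rel_perf_navi10, navi10_transistors), 3))
print("TU106:  ", round(perf_per_billion_transistors(rel_perf_tu106, tu106_transistors), 3))

# Now discount the TU106 logic that sits idle in these games. The 20% figure
# is a pure guess on my part (NVIDIA doesn't publish a breakdown):
idle_fraction = 0.20
gaming_only = tu106_transistors * (1 - idle_fraction)
print("TU106 (gaming logic only):",
      round(perf_per_billion_transistors(rel_perf_tu106, gaming_only), 3))
```

On the raw numbers the two look close, but once you discount the idle RT and Tensor hardware, Turing comes out ahead per transistor, and that's exactly my concern.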


Now, we don't know what's going on under the hood of these chips; only AMD's and NVIDIA's engineers know that. But what I am saying is: account for the Tensor and RT hardware, and NVIDIA has higher performance per transistor.


What NVIDIA doesn't have a distinct advantage in is "performance per TFLOP". That's interesting, because it suggests to me that a Navi graphics core needs more transistors for a given number of shaders than a Turing one does, despite lacking dedicated integer hardware. Of course, we are comparing two entirely different microarchitectures, but in layman's terms (and with a bit of deduction), Nvidia is using its transistors more efficiently; at least, that is how I see it.


Navi also does not support Variable Rate Shading, whereas Turing does; I haven't been able to find any information that says it supports it, and AMD didn't advertise it. However, with Vega 20 ("GCN 5.1"), AMD made some pretty big modifications for lower- and mixed-precision data types. I am not sure (read: I don't know if this is correct), but that might allow Navi to do VRS in some capacity.


Anyway, as I was saying, for all we know there could be additional capabilities within Navi 10 that AMD isn't advertising, maybe to avoid a repeat of the Vega fiasco (remember primitive shaders? Incidentally, they work in Navi). So this is just speculation, but at the very least AMD has significantly improved performance per transistor over Vega.


Could Navi have some undisclosed DXR hardware? I don't think it does, but who knows...


 

Number two: Performance per watt at high clock frequencies

This is my biggest concern, honestly. Now, I know many people will bite my head off for saying this, but objectively, all things considered: Navi isn't as efficient as I think it should be. This is very much exacerbated as the clock rates go up (as is often the case with any processor, but more on this in a moment...). Look at the difference in perf/watt between the RX 5700 non-XT and the XT: the lion's share of the performance delta between those cards is clock rate, and because the non-XT model is forced to a lower frequency, its Navi core is significantly more efficient.


Nvidia's Turing GPUs, on the other hand, seem to be able to hit much higher frequencies at lower relative power; honestly, they already impressed me with Pascal on that front. Now, Navi is by no means an inefficient chip, and I didn't say it was. I am just saying that when you factor into the equation that it has a significant lithography advantage (an entire node shrink ahead) and is a brand-new microarchitecture, compared to an older design on an inferior process... do you see what I am saying? Navi has the same or slightly worse performance per watt than Turing (if we look at the XT model); the non-XT model fares a bit better, but the point is largely the same.
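
A crude way to see it, sketched below. The board power figures are the official TBP/TDP specs; the relative performance numbers are rough assumptions of mine, not benchmark results:

```python
# name: (relative performance - ASSUMED, board power in watts - official spec)
cards = {
    "RX 5700 XT (7nm)": (1.10, 225),
    "RX 5700 (7nm)":    (1.00, 180),
    "RTX 2070 (12nm)":  (1.00, 175),
}

for name, (perf, watts) in cards.items():
    print(f"{name}: {perf / watts * 100:.3f} perf per 100 W")
```

Even with numbers this rough, the XT trails the 12nm part on perf per watt despite a full node advantage, while the non-XT only roughly matches it. That is precisely what worries me.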


Process being equal, Turing would likely have a sizeable performance-per-watt advantage, something reminiscent of the NVIDIA vs GCN days. So that's a concern for me. As with GCN, I don't think RDNA is an inherently inefficient design (GCN can actually be almost as efficient as Pascal when both GPUs are in their "sweet spots"; NVIDIA calls that "Max-Q" on the mobile parts, and by the way, a mobile Vega 56 exists in laptops). What I believe to be the issue is that AMD's architecture suffers from leakage (you need more current) or voltage-scaling issues when reaching speeds upwards of around 1600 or 1700 MHz, where Turing does so very effectively. Could this be an issue with TSMC's 7nm process? Since we don't have any 12nm FFN Radeons to compare against, it does raise the question of how much the process is responsible for these GPUs' behaviour. That said, Zen 2 on the same 7nm process is highly efficient, but that too has clock speed issues (I think they are related to the L3 cache design, but I'm digressing...).


Why does it matter? I have a 9000 Watt power supply!

It matters to me because performance per watt is also important for scaling an architecture up to higher levels of performance without drawing unreasonable amounts of power. Look at GCN5: with 250W, AMD gave you GTX 1080 performance; with 250W, NVIDIA gave you 30-35% more performance. It took a 7nm shrink for that GCN5 part to achieve the same(ish) performance at the same(ish) power draw (though it's a tad more).


When NVIDIA reaches 7nm, you'll see that in action again, and this is important to me, as I'll explain now. Which brings me to the final part of this babble.


I want Radeon to be the Top Dog

I dream of a day when you're looking to buy the most powerful gaming graphics card on the market and you don't just have to default to a GeForce. All things considered, I think AMD has a lot of work to do; Navi sure as hell isn't perfect (nothing is), but it is a fantastic step in the right direction towards maybe, just maybe, one day letting my dream come true.


Thanks for reading <3
