Okay, so, I've wanted to make this post for a while, so here it is! Anyone who follows this "blog" thing will notice that over the last year or so I've gone from an RTX 3090 (which I sold to a miner) to a 6900 XT (which I killed and then shot) to my Vega 56 laptop (which I love to this day) to my old RX 590 (which I put out of its misery), before finally ending up with an RTX 3050 that I managed to snag for MSRP. That is quite... a journey. Well, this "Mini Babble" is about the RTX 3050 as a product: the processor that powers it, how it's configured, and my thoughts on it.
As you probably noticed in the title, one of the 'interesting' things for me about the processor that powers the RTX 3050 is just how crippled (as in, cut down) it is. It's the same GA106 chip that powers the RTX 3060, but with a whole ton of extra circuitry laser-cut off to make it perform lower and, likely, to increase yields. Funny, that, since I'd expect yields on this ~270mm² GPU built on Samsung's 8nm process to be pretty damn good; but even then, NVIDIA may simply have accumulated a bunch of chips with poor voltage/frequency scaling, or leakage in some SMs, for example, and decided to bin them as RTX 3050s for the budget market. It's also interesting that the 3050 has half of its PCI-E 4.0 PHY disabled (8x vs 16x). That's likely because NVIDIA intends to replace GA106 in the 3050 with GA107, a smaller die with a native 8x bus, and configured the larger GA106 this way to prevent performance differences (however marginal) between the current 3050 and a GA107-powered "drop-in replacement" later. Since NVIDIA is selling a lot of laptop GA107s, it must have made sense. *Deep breath*. Okay.
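For a rough sense of what halving the link actually costs on paper, here's a back-of-envelope sketch, assuming the usual 16 GT/s per lane and 128b/130b encoding for PCI-E 4.0:

```python
# Rough theoretical PCI-E 4.0 bandwidth, per direction.
# Assumes 16 GT/s per lane and 128b/130b line encoding.
GT_PER_LANE = 16e9            # transfers per second, per lane
ENCODING = 128 / 130          # 128b/130b encoding overhead

def pcie4_bandwidth_gbs(lanes: int) -> float:
    """Theoretical one-way bandwidth in GB/s for a PCI-E 4.0 link."""
    return lanes * GT_PER_LANE * ENCODING / 8 / 1e9

print(f"x16: {pcie4_bandwidth_gbs(16):.1f} GB/s")  # ~31.5 GB/s
print(f"x8 : {pcie4_bandwidth_gbs(8):.1f} GB/s")   # ~15.8 GB/s
```

A card this size rarely gets near even the x8 figure in games, which is presumably part of why NVIDIA felt safe cutting it.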
So, I haven't typed a babble in a while, so I'm rusty. My fingers are also cold, which doesn't help (I think I've mentioned this before, in previous posts). Regardless, I wanna look at the block diagram for GA106 and see just what NVIDIA has chopped off this chip to turn the 3050 into a potato. So first, let's see a full GA106 block diagram, with all the resources enabled. Note that this full configuration isn't used in the RTX 3060 either; that one has a single TPC (2 SMs, 256 FP32 lanes, 'CUDA cores') disabled.
And finally...
I would like to note that many online publications state the RTX 3050 has 32 ROP units, with one GPC entirely disabled. Others state 48 ROP units, which would require all 3 GPCs, as in the diagram above. I can't find clarification on this yet, but I asked on Reddit (oh god) and hopefully someone knows how it's configured. In any case, the other possible configuration for this GPU is:
I don't think there would be a major performance discrepancy between these two configurations in most gaming workloads, because the ROPs and rasterisers (which sit quite late in the 3D pipeline) are likely limited by SM throughput anyway. Even if you have all 3 rasterisers and 48 ROPs per clock enabled, you still have fewer SMs to feed them, so any difference would be marginal at best. However, there would likely be some differences in synthetic or non-gaming workloads, especially anything measuring raw triangle throughput.
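Just to put rough numbers on why I think it'd be marginal, here's a quick theoretical pixel fill rate comparison for the two possible ROP configs (assuming the 3050's rated ~1777 MHz boost clock; real clocks run higher):

```python
# Back-of-envelope pixel fill rate for the two possible ROP configurations.
# Assumes the RTX 3050's rated ~1777 MHz boost clock.
BOOST_CLOCK_GHZ = 1.777

for rops in (32, 48):
    fill_gpix = rops * BOOST_CLOCK_GHZ   # pixels per clock * clocks per second
    print(f"{rops} ROPs -> ~{fill_gpix:.0f} Gpix/s theoretical fill rate")

# Either way, the 20 enabled SMs have to shade those pixels first,
# so the extra ROP throughput mostly goes unused in real games.
```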
In any case, it's still a lot of the chip disabled. Oh, by the way, I just went for a poo and posted my Reddit question literally about 10 minutes ago, had a pause, and then came back to this post. As a result, my train of thought has derailed (thanks, ADHD), and I'm going to go play Warframe with my friend until later, when I will hopefully recover my concentration (lol) to continue this post. Unfortunately my friend has to do some work for university, because the "hour" I said I'd be turned out to be more like 2. REEEE. Sorry. Anyway, I'm digressing; oh yeah, the post about the RTX 3050. Well, I'mma go do something else anyway, as my train of thought is still in a ditch beside the track, looking quite sorry for itself.
And I'm back! For you, that would have been super quick (just a new paragraph), but for me it was actually, like, 12 hours. Anyway! I asked a super-knowledgeable person and they seem to think it's 3 GPCs with 48 ROPs, too, like I originally thought. So there you go! A crippled potato, yet still quite good at rendering 3D graphics at 1920x1080 with all the 'bells and whistles' enabled, including DLSS upscaling, because this little GA106 isn't quite strong enough to handle 1080p + RT without it.
DLSS allows RTX 3050 to exist.
That said, DLSS Quality is pretty damn good, and has improved a LOT since I first used it in Metro Exodus (non-Enhanced) back in 2019, on my RTX 2060 SUPER (my first RTX card). Dying Light 2 with DLSS Quality looks almost the same as native in my opinion, yet it's the difference between playable and unplayable; so those little Tensor cores and the associated software algorithm actually make or break the RTX 3050, whereas at 1080p they'd be a 'nice to have' on a more fully implemented GA106 like the RTX 3060 (non-Ti).
So, in a way, I think the RTX 3050 is only viable as a product (well, at least with NVIDIA's fairly brutal hardware configuration) because of DLSS and FSR technologies, though I want to point out that at 1080p, DLSS Quality is noticeably better than FSR in Dying Light 2, the only game I've played that implements both technologies so I can compare them directly. It's actually 2 AM right now, I've got a glass of Pepsi Max Cherry here, and I probably want to play a game for a short while then go back to bed, since my cat did an extremely smelly shit and I just scooped the turd from his box and flushed it, in case you wanted to know what I was doing up at this time. Moving on... I had one more thing to add to this digression-riddled mini tech babble.
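For context on how much work DLSS is actually saving this GPU, here's a little sketch of the internal render resolutions at 1080p output, using the commonly cited per-axis scale factors for each mode (treat the exact numbers as approximate):

```python
# Internal render resolution for DLSS at a 1920x1080 output.
# Uses the commonly cited per-axis scale factors for each DLSS mode.
OUTPUT = (1920, 1080)
MODES = {
    "Quality":           0.667,
    "Balanced":          0.580,
    "Performance":       0.500,
    "Ultra Performance": 0.333,
}

for mode, scale in MODES.items():
    w, h = (round(d * scale) for d in OUTPUT)
    saving = 1 - (w * h) / (OUTPUT[0] * OUTPUT[1])
    print(f"{mode:<18} {w}x{h}  (~{saving:.0%} fewer pixels shaded)")
```

DLSS Quality at 1080p is shading roughly 1280x720 worth of pixels and reconstructing up from there, which is exactly the kind of help a 20-SM chip needs to keep ray tracing playable.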
8GB really helps.
That is, 8GB of video memory on this card, along with DLSS, allows it to be viable for ray tracing at 1080p (and lets it run high/max textures comfortably anyway). It's essentially the first time NVIDIA has put 8GB on a 'budget' model below 300 pounds, and it's a nice bit of progression from the typically 6GB-equipped TU116 models that occupied this slot in the 16-series prior to the 3050's release. I think, ultimately, given that the sensible symmetric memory configurations for a 128-bit bus are either 4 or 8GB, this decision was more out of necessity than generosity; I genuinely believe 4GB would have completely crippled this card's ability to provide any kind of 'RTX' feature set even at 1080p, especially given how the 6GB RTX 2060 already struggles. Regardless, overall, I am generally positive about the RTX 3050 - but maybe that's because A) I am so happy I got it for MSRP at current prices, and B) AMD's alternative in this price range is an absolute turd.
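On the 4GB-vs-8GB point: a 128-bit bus is four 32-bit GDDR6 channels, so the sensible symmetric options really are four 8Gb chips (4GB) or four 16Gb chips (8GB), and bandwidth is the same either way. A quick sketch, assuming the stock 14 Gbps GDDR6:

```python
# Memory bandwidth for the RTX 3050's 128-bit GDDR6 bus.
# Assumes the stock 14 Gbps per-pin data rate.
BUS_WIDTH_BITS = 128
DATA_RATE_GBPS = 14          # per pin

bandwidth = BUS_WIDTH_BITS * DATA_RATE_GBPS / 8   # GB/s
print(f"{bandwidth:.0f} GB/s")                    # 224 GB/s

# Capacity comes from chip density, not the bus:
# four 32-bit channels x 1GB (8Gb) chips = 4GB, or x 2GB (16Gb) chips = 8GB.
```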
Pricing is OK too.
Regardless, even if MSRP pricing were the norm, $250/£240 puts this card between the GTX 1660 SUPER and GTX 1660 Ti, with performance that is essentially the same as the 1660 Ti or a bit ahead, depending on the game - but with 8GB of video memory and the full RTX feature set. It's not amazing value progression, but it does represent some progression (assuming MSRP), unlike the RX 6500 XT, which is just an insult to even exist at $200/£180-200 MSRP.
Overclocking?
It's also worth pointing out that this card overclocks really well. I think the lower-end Ampere models generally overclock so well because the architecture becomes rather current-limited in the fuller implementations, so the sustained voltage drops considerably to fit within the given power budget. For example, my RTX 3090 would rarely exceed 900 mV, and most of the time I would run that card at 750-800 mV for around 1700-1750 MHz at 350W (100W of which was GDDR6X, mmm, delicious). This card, however, sits at 1070-1080 mV all day long and rarely hits the power limit as far as I can tell (around 135W tops, with the 110% power limit applied), and as such it has absolutely no problem running at nearly 2100 MHz pretty much all the time (2070-2100 MHz generally). That's a nice boost from the ~1900 MHz stock speed, and represents nearly 10% more performance, since the GDDR6 memory also takes a whopping 2 Gbps OC perfectly well (14 to 16 Gbps). Overall, the card actually ends up performing close to a stock RTX 2060 6GB when fully overclocked, but of course, you have the 8GB instead of 6.
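Putting rough numbers on that overclock (a back-of-envelope sketch using my observed clocks, not a benchmark run):

```python
# Rough headline gains from my overclock (observed clocks, not benchmarks).
stock_core_mhz, oc_core_mhz = 1900, 2085      # ~stock sustained vs my OC
stock_mem_gbps, oc_mem_gbps = 14, 16          # GDDR6 per-pin data rate

core_gain = oc_core_mhz / stock_core_mhz - 1
mem_gain = oc_mem_gbps / stock_mem_gbps - 1

print(f"Core:   +{core_gain:.0%}")   # ~+10%
print(f"Memory: +{mem_gain:.0%}")    # ~+14%
# Real-world FPS gains land a bit below the core number in most games,
# since not everything scales linearly with clock speed.
```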
Conclusion!
Ultimately, I am pretty happy with the card. It handles literally every game I play at 1080p, max settings, at a playable FPS. It's not always 60, but it is playable, so unless Playability Doesn't Matter to you, I'd say that's a win for a card that has a much more palatable budget-focused price tag (if you can get it for MSRP; I really, REALLY don't like the idea of the Crippled Potato going for over 300 GBP considering how much they lopped off this chip). Prices do seem to be coming down, so there's that. While most people believe a GPU should be made or broken on its raw raster performance-per-price ratio (by which measure the 3050 really isn't much of an upgrade at all over previous cards like the 1660 SUPER), I am a proponent of judging a video card on all of its features, such as video memory capacity and technology support (there is a reason I backed the RTX 2060 SUPER over the similarly priced RX 5700 XT). With those in mind, I feel the RTX 3050 earns its right to exist as a reasonable product (AT MSRP, DAMNIT!) and gets my "Ashley's Seal of Approval" thing that doesn't exist, but now maybe does, or whatever.
Anyway, unless I can get an RTX 3060 12GB at MSRP this year, and not a freakin' penny over it (that's £299-320), I won't be upgrading. Historically, my most comfortable budget for GPUs has been under 300 pounds, and going forward, that's where it's going to stay. That means the RTX 3060, on deal or at pure MSRP, is flirting with my psychologically-enforced budget limit; otherwise, since prices are rising overall, I'm likely relegated to the xx50 segment. But at least you can still get all the latest features with a product that actually provides a reasonable experience using those features, unlike what the competition offers at this price point. Can you tell I'm really salty about the 6500 XT? Yeah. Its existence at $200/£200 for a 'gaming card' with 4GB is an insult to me, and my RX 570 8GB and RX 590 (which I got for £150 in 2017 and £180 in 2018, respectively) turn in their graves at the very mention of it.
Bonus Text.
Honestly, I think this card is the spiritual successor to the GTX 1650 SUPER, both in name and configuration (20 SMs, 128-bit bus), but costing 100 USD more for twice the memory, the Ampere architecture, full RTX-suite support and about 30% more raster performance (interestingly, much like 3080 vs 2080 Ti, this is a nice like-for-like way to judge Ampere's per-SM gains over Turing). That said, it does try to hang with more expensive models and does quite well. The card I have is the cheapest possible model of RTX 3050 (Zotac Twin Edge) and it still has a metal backplate, full cooling contact for the GDDR6, VRM, etc., and feels pretty well built, unlike the cheapest GTX 1650 SUPER I owned, which literally lacked all of those things. Considering this SKU is binned to be "drop-in compatible" with GA107 (or so they say, though it is likely), I think the premium feel of the board has more to do with the fact that NVIDIA used GA106 for the first batch - so partners literally just copy-pasted their RTX 3060 boards and coolers but with 3050-binned GPUs. I guess we'll see if GA107-based models let this SKU come down to the sweet $200/£200 price point, because I would absolutely trade the backplate and fancy cooler for a more basic variant that shaves off 50 quid with the same performance.
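Rough math on that per-SM comparison, since both chips have 20 SMs enabled (the ~30% figure is my own rough number, and the clocks are the rated boost specs, so treat this as an estimate):

```python
# Rough per-SM, per-clock uplift: RTX 3050 (Ampere) vs GTX 1650 SUPER (Turing).
# Both have 20 SMs enabled; clocks are rated boost specs, and the ~30%
# overall raster gain is my rough figure, so this is an estimate only.
clock_3050_mhz = 1777        # rated boost, RTX 3050
clock_1650s_mhz = 1725       # rated boost, GTX 1650 SUPER
raster_gain = 1.30           # ~30% faster overall

per_sm_per_clock = raster_gain / (clock_3050_mhz / clock_1650s_mhz)
print(f"~{per_sm_per_clock - 1:.0%} more performance per SM per clock")  # ~26%
```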
Meow.