
Search Results


  • Warframe Screenshots

    Just making a post to upload some of my favourite screenshots of my favourite game. Yep, that's Warframe! I took these recently, and I will update this post somewhat often. Some of these were taken on various bits of hardware of mine, and I really can't be bothered to list each card and CPU and whatnot, but more often than not you can see the clock speed, core count and so on in the OSD, so you can probably figure it out.

  • Pls AMD no

    I didn't hear anything, I just thought it was funny. >_>

  • Plans for my Gaming PC (23/06/2019)

    Oh, this is going to be funny. I say that because I change my mind like 27 times a minute about what I'm actually going to do with my Gaming PC. A few things have changed, so I figured: wouldn't it be cool to record my thoughts right now, on this day, and see if they change, or if I can actually stick to my plan? lol. Okay, so basically here is what is happening with my PC at the moment. I am doing this more for my own sake, to record my thoughts.

    I am no longer using my Radeon VII. RIP. I have given up on this card because it is causing me too much stress. After talking to some friends, I have come to the conclusion that I have somehow broken the HBM, and that is what's causing my issue. So I plan to send the card to Fritzchen Fritz so that he may make a die-shot of the Vega 20 GPU, an industry first, I think, actually. Wouldn't that be really cool? Anyway, you should totally check out his Flickr page; he has some amazing photography and die shots on there.

    Polaris Eternal

    I love Polaris. It's like the eternal GPU, reborn three times to do battle with NVIDIA's mid-range, often offering that little bit of extra value. Please don't hate me for saying "value" and implying the RX 590's MSRP is "value", because it's not. But I got my 590 for £230, and it's probably the best, or one of the best, designs available. They are around the same price as the GTX 1660 now, but with 2GB more VRAM and the same performance. You decide what's value in this case. Oh wow, what a digression... anyway, about this card: I am using my trusty Sapphire Radeon RX 590 Nitro+ Special Edition as my main gaming card right now. I put my 4K monitor on the shelf and dusted off my 1080p 144Hz one (I'm going to say it now: I prefer gaming at 1080p 144Hz over 4K 60Hz. Once you go high refresh, you never go back). The 590 does really well in all my games and I'm in no huge hurry to upgrade it. Yet!

    The biggest problem with Ryzen

    Ryzen's biggest problem, one that isn't widely acknowledged, is as follows. When you buy a CPU like the Ryzen 7 2700X that I have, you get an absolutely monstrous powerhouse of a CPU. I mean, like, mine is at 4.3 GHz and this CPU literally gives no craps about anything I throw at it. Ugh. So the biggest problem with Ryzen, for me, is that you don't have an excuse to upgrade it. So do you not upgrade it? No, lol. The itch is always present; the problem is that I now have to admit that I am not upgrading because I need more CPU power, but simply because I have the itch. Anyway, the 2700X is fantastic and really doesn't struggle in anything I do with my PC, so this is a low-priority upgrade point, but I am going to see about getting a 3000 series to place in my B450 Mortar motherboard at some point, maybe late this year or whatever. Depends on my money situation, but as I said: the 2700X has me covered for the foreseeable future.

    So what am I going to do this year?

    I think I am looking at upgrading my graphics card first, obviously. But the thing is: I kinda want an RTX card. I know AMD just showcased the new "Navi 10" based cards and the new "RDNA" architecture, and while I think this is fantastic and very good for AMD's GPU competitiveness going forward, I just feel that, at the moment, "Navi 10" doesn't bring enough to the table for me to consider it over the RTX 2070.
    Yes, even if the 2070 is a bit slower and a bit more expensive currently, the RTX card interests me through its dedicated DXR acceleration (AMD doesn't even want to enable DXR on GCN cards, even though they could probably do it reasonably well, or at least better than Pascal). Besides, I am only planning to buy if they get a price cut in response to the RX 5700 XT. £450 is a bit much for this performance for me at the moment; I would be likely to buy a 2070 at around 400 pounds or less. I want to mess with DXR, and honestly, NVIDIA's Turing is the first architecture from this company in, well, as long as I have been following tech, that really strikes me as a solid, future-looking architecture. So I am almost entirely set on getting an RTX card for my 1080p monitor this year. I don't know which one, but it will have at least 8GB of video memory, so I am looking at the 2070 (or this 2060 Super 8G, if it's not massively overpriced). So my PC this year will hopefully be Ryzen 7 2700X + RTX 2070. That should have me covered until Big Navi and NVIDIA's own 7nm designs, which should offer a huge uplift. "You're an idiot and you're making a mistake!" Wait, what? Well, I guess telling me why would be nice, please? Thanks!

  • Ryzen is Dank

    I made a meme.

  • Warframe Gameplay Videos

    Thought I would put up some gameplay videos of the GPUs I have, in my favourite game, in case someone wanted to see how these cards perform in it. It's Warframe, by the way. Plains of Eidolon was the first open world area the developer added, and recently it got a pretty substantial overhaul to its graphics and visuals. At this current time it is the most graphically demanding area in the game, in my testing. Yeah, the Jovian Concord literally launched like 20 minutes ago and I'm downloading it, but that's a tile set, not an open world as far as I know, so it will run better due to the compartmentalised nature of the scenes. I am also going to do videos in the Orb Vallis, the second open world area, which is also highly graphically demanding. But first, a little note...

    Tessellation is set to "Use Application Settings". This won't impact much; Warframe uses light Displacement Mapping on the fly, and I doubt they are pushing 64X complex tessellation on invisible water under the map :D

    For these videos (unless otherwise stated) the CPU is the Ryzen 3 1200 quad core, clocked at 3.7 GHz. Yes, it is not the most powerful processor you can buy, but it was very cheap for me, and as you can see in these videos the performance is actually very good and almost always GPU bound.

    YouTube is stingy with bandwidth, so these all get compressed through the floor, and they are recorded at 1080p native. I don't want to jump through hoops to get them at 4K native, since these GPUs can't handle this area at 4K at anything I'd consider playable at max settings, and Handbrake can't upscale. So a blurry pixel mess is what you get. But I think you can still make out the frame rate display, so there's that. Oh, and I'm just uploading random videos right now, not for comparison purposes. I'm going to update this post a lot, so it might get new videos etc.

    R9 280X
    Plains of Eidolon open world zone
    2560x1440, maximum settings with ludicrous GPU particles
    AMD reference clock, 1000 MHz core, 6 Gbps memory (288GB/s)

    Here is the 280X in Warframe on the Plains of Eidolon at 1440p. Yes, the resolution is a bit high for this card to be properly smooth, but I just wanted to see how it handled it, since Warframe looks great with VSR from 1080p native, even downsampled from 1440p (NVIDIA's alternative looks like crap, in my opinion). Oh, and by the way, I'm used to playing this at a solid 60 FPS on my Radeon VII, so my accuracy kinda sucks. Go figure. Performance was much lower than I am comfortable with in this game, as I am used to a solid 60 FPS even at 4K, but Tahiti is a bit old now, the poor thing. Either way, this is 1440p with literally all settings maxed. Not bad for a 7 year old chip.

    Now here is the R9 280X in the Orb Vallis at 1080p, a more suitable number of pixels for this now pretty old little chip.

    Orb Vallis open world zone
    1920x1080, maximum settings with ludicrous GPU particles
    AMD reference clock, 1000 MHz core, 6 Gbps memory (288GB/s)

    R9 290X
    Plains of Eidolon open world zone
    2560x1440, maximum settings with ludicrous GPU particles
    AMD reference clock, 1000 MHz core, 5 Gbps memory (320GB/s)

    And here is the 290X, also in the Plains at 1440p. The time of day is a bit different, so you probably get a slight variance in GPU load due to lighting differences, but hey ho. Performance didn't seem that much higher, probably due to more shadows being cast at this time of day. It's still not hitting that 60 FPS sweet spot for me to be comfortable here. Gotta drop to 1080p for that.
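
    Quick aside on those caption numbers: peak memory bandwidth is just the per-pin data rate times the bus width in bytes. A tiny Python sketch (the function is mine, purely for illustration) checking the figures above:

```python
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width in bytes times per-pin data rate."""
    return (bus_width_bits / 8) * data_rate_gbps

print(bandwidth_gb_s(384, 6.0))  # R9 280X: 6 Gbps on a 384-bit bus -> 288.0 GB/s
print(bandwidth_gb_s(512, 5.0))  # R9 290X: 5 Gbps on a 512-bit bus -> 320.0 GB/s
```
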
    I played on the Orb Vallis with the R9 290X at 1080p too; you can check it out here. There were a few bits where I was CPU bound, though still at 90-100 FPS; apparently the 290X can push out frames faster than my baby Ryzen 3 1200 can feed it. Might have to test the RX 590 and 570 on my 2700X PC. Maybe. But either way, you can see the performance is pretty great.

    Orb Vallis open world zone
    1920x1080, maximum settings with ludicrous GPU particles
    AMD reference clock, 1000 MHz core, 5 Gbps memory (320GB/s)

    If you notice, on the On-Screen Display the GPU core clock sometimes dips to 889 MHz. This appears to be a HWINFO64 reporting error with the 290X, not an actual clock speed indication. Both GPU-Z and WattMan itself reported a solid 1000 MHz at all times, and you can witness no adverse performance effects from the behaviour in the video. Just FYI.

    Ryzen 3 1200 still holding an impressive frame rate, right? The game is really well optimised, and Ryzen is just pretty damn good. But Ryzen sucks for gaming and playability doesn't matter so... That's sarcasm, by the way. Just in case you thought I actually meant that. One person I used to speak to online, who would constantly attack Ryzen's gaming performance versus Intel, actually said that to me, regarding performance in a video game. Playability doesn't matter. Welp. Can't argue with stupid.

    Here I am gaming on the R9 380X at 1080p in the Orb Vallis. Performance is actually really great, very fluid and responsive. By the way, there are minor spoilers in this video for Orb Vallis characters. One is at 3:25 to 3:42 and the other runs from 4:44 to 4:50. Skip these parts if you want to keep all the surprises of the Orb Vallis if you haven't played it. You can just skip through the video to see roughly how the card performs.

    R9 380X
    Orb Vallis open world zone
    1920x1080, maximum settings with ludicrous GPU particles
    AMD reference clock, 970 MHz core, 5.7 Gbps memory (182GB/s)

    I made a video of gameplay in the Jovian Concord remaster of the Jupiter Gas City in Warframe, on my main PC, now with the RX 590 instead of the Radeon VII... T.T Still, performance was great, and neither the GPU nor my Ryzen 7 2700X had any trouble maintaining my 59 FPS Frame Rate Target Control limit. So here's the video. I know it's kinda pointless since I capped the FPS, but I will add an uncapped one later. Oh, the in-game resolution is 1440p, but the video is uploaded at 1080p so I didn't have to wait 87 years for it to upload. For some reason the audio in this video sounds like crap; I think it is something to do with the mixing down to stereo when I recorded it (I have 5.1 surround speakers on my main PC). Sorry about that.

    RX 590
    Jovian Concord tile set (Jupiter)
    2560x1440, maximum settings with ludicrous GPU particles
    Sapphire Nitro Special Edition factory overclock of 1560 MHz graphics engine and 8.4 Gbps memory data rate
    CPU is Ryzen 7 2700X at stock speeds, all CPU cores at around 4 GHz in the game

    By the way, I called the GPU core the "graphics engine" because I think it sounds cool.

    Playing again on the Jovian Concord, but this time at 4K. The RX 590 does a surprisingly good job of keeping it playable.

    Jovian Concord tile set (Jupiter)
    3840x2160, maximum settings with ludicrous GPU particles
    Sapphire Nitro Special Edition factory overclock of 1560 MHz graphics engine and 8.4 Gbps memory data rate
    CPU is Ryzen 7 2700X at stock speeds, all CPU cores at around 4 GHz in the game

    I didn't update this with Radeon VII videos, so here they are. Well, I made a couple.
    Firstly, here is me playing on the Grineer Galleon tile set. I uncapped the FPS and wanted to see how high my PC could keep the frame rate at 4K maximum settings.

    2700X overclocked to 4.3 GHz on all cores
    Radeon VII at stock speeds, 1750 MHz average and 2 Gbps HBM2 data rate
    3840x2160, MAX MAX MAX MAX (why would I run anything but MAX? Seriously.)

    And here is another one of me playing on the Orb Vallis at 4K with an uncapped frame rate. As you can clearly see~ you don't need Intel or NVIDIA to have a super awesome high end rig. It's refreshing. Like a glass of fizzy Pineapple, Orange and Lemon squash.

  • A small look at Radeon over 7 years: HD 7970 to Radeon VII

    I was very excited to make this post. But my testing wasn't really as long or as detailed as I wanted, because, well, I am using my main PC for this test. I actually overcame the anxiety to place the R9 280X in my main PC to test with, against the Radeon VII. So there's that. Anyway...

    Back in 2012, AMD released the first ever GPU based on the, at the time, brand new Graphics Core Next architecture. An architecture that we have all grown to know and love these long 7 years. Well, here is what I thought would be interesting to test: how much performance has AMD gained from this architecture in those 7 years? Specifically, I wanted to get a small idea of the performance gain from early 2012's flagship single-GPU GCN part, the Radeon HD 7970, to the flagship gaming-oriented single-GPU GCN part you can buy today, in 2019: the mighty Radeon VII. What also makes this interesting is that the Vega 20 GPU powering the Radeon VII is actually around the same size as the Tahiti chip in the HD 7970. So it's sort of a measure of "how much performance has AMD squeezed from a ~330-350mm² die area in 7 years", roughly. Okay, let's be exact here:

    This is Tahiti. This processor is approximately 352mm² in size and contains ~4.3 billion transistors on a 28nm lithography made by the Taiwan Semiconductor Manufacturing Company (TSMC). That is about 12.2 million transistors for every square millimetre. Back in 2012, this was cutting edge technology. Things have changed a bit since then, so let's take a look at what 2019's GCN king looks like~

    I love this chip. The HBM2 makes it look so cool. Okay, so this is the Vega 20 GPU. Well, that chip in the middle is. It is, of course, surrounded by four stacks of High Bandwidth Memory (2nd generation). Each stack on this particular package contains four 1GB DRAM chips stacked on top of a logic die, then connected to an interposer and wired directly into the GPU via a 1024-bit interface. That's wide. 4096 bits wide, actually, if you add up the entire interface. Anyway! That GPU in the middle is approximately 331mm². It's actually slightly smaller than Tahiti - but the impressive bit is that it packs 13.2 billion transistors into that space. That is around 40 million transistors per square millimetre, or approximately 3.2X the density of Tahiti, made possible by TSMC's latest 7 nanometre manufacturing technology. Cutting edge to cutting edge.

    It's also worth pointing out that the Vega 20 used in the Radeon VII isn't actually a fully enabled chip. The die has four Compute Units fused off to improve yields. That reduces the Stream processor count from 4096 to 3840, and the texture unit count from 256 to 224. A relatively minor reduction. The HD 7970 is a full chip, so keep that in mind. It's still a valid comparison, since the Radeon VII is the flagship single-GPU GCN gaming graphics card for 2019. At least I think it will be, since AMD has announced that the Navi-based lineup will co-exist with the Radeon VII this year.

    Notes...

    Okay, now for my boring notes. :D Well, I could bore you with more specifications and details, I just love typing about GPUs, but I want to get to the results. Before that, I must make a few notes... Firstly, I do not actually own an HD 7970. But I assume, like, everyone who is interested in this information knows that the R9 280X, which released as the third-best Radeon single-GPU card in 2013 alongside the R9 290X and 290, was actually simply an HD 7970 GHz Edition with a new name. And that card is a factory overclocked HD 7970. That is rebrand-ception. But hey, it works.
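
    By the way, if you want to sanity-check those transistor density figures from above, it's one division each. A quick Python sketch with the numbers quoted earlier (the names are mine):

```python
tahiti = {"area_mm2": 352, "transistors": 4.3e9}   # 28nm, 2012
vega20 = {"area_mm2": 331, "transistors": 13.2e9}  # 7nm, 2019

def mtx_per_mm2(chip):
    """Millions of transistors per square millimetre."""
    return chip["transistors"] / chip["area_mm2"] / 1e6

print(mtx_per_mm2(tahiti))                         # ~12.2
print(mtx_per_mm2(vega20))                         # ~39.9
print(mtx_per_mm2(vega20) / mtx_per_mm2(tahiti))   # ~3.26X, the ~3.2X density jump from above
```
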
    So I am taking my R9 280X, the exact same silicon as the HD 7970, and setting the GPU core and memory clock speeds to match the reference launch specification of the Radeon HD 7970, which released in January 2012. That is a 925 MHz GPU core, and 5.5 Gbps on the GDDR5 memory. I know many custom cards existed with higher clock rates, but since the Radeon VII is a reference design card and I test at fully stock speeds, it would be apples-to-oranges to compare a factory overclocked HD 7970 to a reference Radeon VII. Higher clock speeds on Vega are part and parcel of the architecture. And no, sorry, I will not test overclocked results, as overclocking graphics cards scares the crap out of me. Okay, maybe I will in the future, but please don't hold your breath over it, okay?

    "But your Radeon VII is liquid cooled! It's not a reference design!" Oh snap, that's true. But it is running at reference clocks, and honestly, in my testing it doesn't actually throttle during light benchmark runs on the stock cooler. Which I used for a day, then ripped off because I had ringing in my ears. >_> The Radeon VII is running at approximately 1750 MHz core. It fluctuates a bit above and below that, as per Vega's dynamic boosting algorithm, but it's about that; I checked it, don't worry. HBM2 is obviously at 2 Gbps. 1TB/s, yay.

    My second note is about my test system. For this test, as I mentioned earlier, I used my main Gaming PC. I do not usually do this, as I'm sure I've said a billion times: I have some dumb anxiety problem, and putting hardware in and out of my main PC causes me stress unless I am fully prepared to do it. I have a testing PC to mess with daily (multiple times a day) that is built for that purpose, and blah blah. But I am not putting my liquid-cooled Radeon VII in the Ryzen 3 1200 PC. So here is the specification for the PC used to host these two GPUs:

    Ryzen 7 2700X eight-core CPU using increased Precision Boost Overdrive values, ~4.2 GHz on all cores is observed in benchmarking workloads
    16GB (2x8GB) 3200 MHz CL14-14-14-30 with tightened sub-timings as per this extremely handy tool. These are Samsung B-die chips.
    MSI B450M Mortar
    Seasonic Focus Plus 850W Gold Power Supply
    Radeon Adrenalin version 19.5.2

    I know I'm typing a lot here, but I like to be thorough. Please bear with me for a moment :D. I would also like to note that, unlike in previous tests, I am using "AMD Optimised" Tessellation settings for both cards here, as this is not a synthetic test of geometry performance. FreeSync is disabled when testing the Radeon VII, as the Tahiti chip doesn't support it. It probably wouldn't make a difference, but it bugged me that it might somehow. Textures are often set to Low where possible, as 3GB really is completely inadequate for the resolutions I was testing at (chosen to keep the Radeon VII GPU-bound; even an Intel CPU would struggle with 150-200 FPS, honestly). I made sure the GPU wasn't VRAM starved, which I had observed a few times, as that produces inconsistent results (one of my benches was producing ~20% variance in average FPS each run due to VRAM issues; lowering the resolution fixed this, and the results were re-tested for verification purposes). I also monitor GPU load, clock speeds and VRAM allocation using HWINFO64, and analyse this information between runs to assess any potential issues. Also, I didn't test as many games and benchmarks as usual, because I was struggling a bit with the anxiety and, you know. That just sums me up, but hey, I think these results are interesting enough.
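
    Oh, one more thing: before the results, it's fun to eyeball what raw spec alone predicts. Here's a rough back-of-envelope sketch (my arithmetic, using the usual 2 ops per shader per clock for FP32; not an official figure from anyone):

```python
def fp32_tflops(shaders: int, clock_mhz: float) -> float:
    """Peak FP32 TFLOPS, counting one FMA (2 ops) per shader per clock."""
    return shaders * 2 * clock_mhz * 1e6 / 1e12

hd7970 = fp32_tflops(2048, 925)    # ~3.8 TFLOPS (with 264 GB/s: 384-bit @ 5.5 Gbps)
vii    = fp32_tflops(3840, 1750)   # ~13.4 TFLOPS (with 1 TB/s: 4096-bit @ 2 Gbps)
print(vii / hd7970)                # ~3.5X the raw compute on paper
```

    So on paper, raw compute says about 3.5X. Keep that number in mind as you scroll.
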
    Oh jeez, I am typing a lot. Okay, I will stop now, and here are the results.

    Sleeping Dogs

    Well, you didn't expect it to be a small gap, did you? Seven years of advancement for Radeon has allowed the Radeon VII to be about 260% faster in approximately the same die size. That is quite a bit more performance... 3.6X the performance. I like performance. I also like saying the word performance, apparently.

    Batman: Arkham Knight

    I knew it was going to be a lot faster, but it's always cool to see by just how much. The gap in this title is a bit wider, at 277% (3.7X). Vega 20 is flexing its 5th generation muscles.

    Metro 2033 Redux

    Just a healthy 248% (~3.5X) increase in performance, passing by.

    Tomb Raider (2013)

    220% (3.2X) in Tomb Raider (2013), the smallest gap measured yet, but still pretty huge.

    Ashes of the Singularity: Escalation

    Well, Tomb Raider was the smallest gap until I tested Ashes. That's 198%, or just about 3X the performance. Surprising, actually, given that this is the only native DX12 engine I tested. Huh. (And no, it wasn't CPU bound.)

    World of Tanks Encore benchmark

    224% (3.2X) in the World of Tanks Encore benchmark. Quite a few more delicious score points indeed. Now for a few dedicated benchmarks...

    Unigine Superposition

    In Unigine's Superposition benchmark we see a 232% (3.3X) increase from the (early) 2012 to the 2019 Radeon single-GPU flagship.

    Unigine Heaven

    Finally, I tested another Unigine benchmark, this time Heaven 4.0. In this one the mighty Radeon VII bests the HD 7970's performance by 255% (3.5X).

    Conclusion?

    This has been a really interesting test for me. I like to see just how much progress AMD has made with dedicated graphics in those seven years of GCN up to today. And if we look at all of my results, the Radeon VII delivers, on average, 3.3X the performance of the HD 7970. That is a fair amount of performance, but it has been seven long years. In all honesty, with the 28nm process dragging on for, what, four of those years? Technology did slow down a bit. But it's still a great amount of performance, packed into a die that is very slightly smaller, too. It's worth noting that the Radeon VII does use a bit more power than the HD 7970, around 50W more, approximately. I didn't directly test power consumption, though. I keep thinking I should do that, too. But it's not a huge amount more. Radeon VII is likely around 3X the perf/watt, give or take a bit. Not too bad, I guess, but it is like two full node-shrinks ahead. So make of that what you will. Sorry I typed an absolute ton in this post, but I hope you enjoyed it. ^-^ Thanks for reading it ~
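
    P.S. If you want to reproduce the averaging from the rounded multipliers quoted above, here's a minimal sketch (my 3.3X came from the un-rounded results, so this lands slightly higher):

```python
from statistics import fmean, geometric_mean

# Rounded Radeon VII vs HD 7970 multipliers quoted above, in post order:
speedups = [3.6, 3.7, 3.5, 3.2, 3.0, 3.2, 3.3, 3.5]

print(fmean(speedups))           # ~3.38 (plain average)
print(geometric_mean(speedups))  # ~3.37 (the fairer way to average ratios)
```
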

  • Some GCN GPUs in 16 games and 4 benchmarks

    Oh wow, this took me ages to do. But here I am, actually writing the post on it. Honestly, I am really tired and a bit depressed, so I am maybe going to keep the typing to a minimum and just let the results speak for themselves. Actually, I will probably type a ton of crap anyway, so just bear with me.

    I tested my R9 280X, R9 290X, R9 380X, RX 570 and RX 590 in a load of games and a few benchmarks to evaluate how they perform against each other while limited to 1000 MHz core and 192GB/s memory bandwidth. With some exceptions, more on that in a moment... Okay, this is a purely synthetic test, honestly; for example, the Tahiti GPU powering the R9 280X was never designed to be limited to 192GB/s, so it will underperform when restricted like this. But why do it, you ask? Because this under-performing also helps me evaluate how much more efficient (especially regarding memory bandwidth) the newer chips are. Well, the ones with the same top-level structure.

    There are a few notes, though. Firstly, the R9 290X wasn't tested at 192GB/s, because that would be completely pointless; the Hawaii GPU has significantly more Render Back-Ends and Compute Units than the other chips. So I tested this card at reference clocks only. Secondly, the RX 590 was tested even though it has 4 more CU than the 280X, 380X and 570. I tested it at the same core clock and memory bandwidth because I was curious to see how much difference the extra 256 SP and 16 Texture Units actually make, nothing more. I was unable to test the RX 590 at reference clocks because I was having issues keeping it GPU bound in my test suite. Yes, my Ryzen 3 1200 is now holding back my test PC. Yes, I could have tested the 590 in the main PC with the 2700X, but I am suffering a lot from anxiety, and I just got the Radeon VII working and I don't want to remove it again. Okay? And lastly, I tested the three main comparisons (280X, 380X and 570) at their AMD-specified reference clock rates to measure product performance; that one is not a synthetic result. Please keep in mind that even though I think almost all R9 380X cards launched with higher speeds, the AMD reference spec is quite low. Especially compared to the 280X.

    This is what I tested with. Oh look, I typed a huge amount... Anyway, let's move on to the results after I give you details on the test rig:

    Ryzen 3 1200 quad-core CPU @ 3.7 GHz
    16GB (2x8GB) 2400 MHz CL14
    MSI B350M Mortar
    Silverstone Strider 1000W Gold PSU
    AMD Radeon Adrenalin Driver Version 19.4.3

    Note: As I mentioned above, yes, I am aware the Ryzen 3 CPU here is a bit... weak. I mean, it was really cheap, but still a good CPU. Remember, back before 2017, 4 threads was premium mid-range! (Thanks, Intel.) At least this one cost what it was worth. Anyway... I am making sure all results are GPU bound by analysing the D3D usage reported by the driver and operating system, and then re-testing results I am suspicious of with significantly lower CPU clock rates, to verify that the result doesn't change, or at least not by a huge amount. As you can see, most of my results are not particularly high FPS. That is because I am using settings that emphasise GPU workload. Also, I am looking to get a new CPU for my test PC, probably a Ryzen 5 3600. Please just bear with me until then :D

    Secondary Note: Yes, most of the titles I benched were using the built-in benchmark. Why does this not "bother" me? It is because this is not really a measure of their in-game performance.
    This is a synthetic test of GPU rendering performance under artificial limitations, to compare architectures. It doesn't matter where the scene is benched. That said, a couple of games I wanted to test didn't have a benchmark, so I benched those in-game with Afterburner.

    Tertiary Note?: You will notice I am using Low, Medium or "Normal" textures. That is of course deliberate, as this is a measure of GPU render performance, not VRAM-restricted performance. The R9 280X is the weakest link here with "only" 3GB of video memory, and that is actually becoming really tight even at 1080p, let alone 1440p. Textures don't have an enormous impact on rendering performance, so the results are still relevant.

    Results!

    The first graph's results are with all listed GPUs at 1000 MHz core speed, and the memory data rate adjusted to provide 192GB/s. For the 256-bit cards this is 6 Gbps; for the 280X, which has a 384-bit memory interface, it is 4 Gbps. This is the synthetic test. The second graph includes the cards, with the 590 replaced by the 290X, at reference speeds, to compare the GPUs at their "normal" operating frequency. This is less of a synthetic test.

    Sleeping Dogs

    Sleeping Dogs gains almost 14% from GCN1 to GCN3 when bandwidth and clock normalised. Tonga does have significantly better bandwidth efficiency, due to its ability to compress frame data on the fly, reducing bandwidth requirements in any given scene. Obviously, you are also seeing major tessellation and primitive-rate gains thanks to the doubled (and improved per clock) Geometry and Raster Engines. The RX 570 gains a very small amount, under 2%. This result was repeatable and measurable, but it is within margin of error, so make of it what you will. I think Polaris is running into another limitation here, likely Compute throughput. I say that because with the 590 having the full 36 CU silicon enabled, we see an 11.2% increase in performance. That is almost linear scaling with the 12.5% more Stream processors this GPU has enabled. So Tahiti was unable to feed its Compute Units due to, likely, a lack of memory bandwidth in this test, but Polaris can feed more CU thanks to its larger L2 cache and DCC.

    Gains! Well, that sort of backs up my theory that Tahiti is bandwidth limited here. Once Tahiti's memory speed is unrestrained and the chip can pump 288GB/s, it actually manages to essentially match Tonga's result from the first test. This shows that Tonga does have major bandwidth efficiency gains. Oh, and it's interesting that at AMD's reference spec, the 380X is actually slower than the 280X, but not by much. The 290X here leads the pack with its comparatively enormous Compute and bandwidth on tap, but the unrestrained RX 570 does get within 10%, at likely half the power consumption (but I didn't test power consumption. Yet).

    Batman: Arkham Knight

    Nothing much to report here; small but definitely noticeable gains between the 3 major comparisons. And the 590 is added to see what the full Polaris silicon can do at these speeds, too. But once we set all GPUs to their reference specification clock rates, the situation changes a bit. The 280X and 380X are equal in performance, but in my books this still counts as gains when you consider that the Tonga chip is working with significantly less memory bandwidth and slightly reduced engine clocks to achieve that result. Those are gains, but ones the consumer wouldn't really notice. I always considered the 380X to be a sort of replacement for the 280X, and indeed, it was $70 cheaper.
    So consider it the same performance for less money, with higher efficiency and a gigabyte more video memory. The RX 570 almost matches the 290X here; I guess GameWorks tessellation effects are less intensive on the Polaris silicon, with its in-hardware Primitive Discard and significantly higher core clocks.

    BioShock Infinite

    Pretty linear scaling here with these GPUs. The Tonga gains 15% over the Tahiti at the same clocks and bandwidth, a decent jump. Tonga to Polaris is half as much, at just over 7%, but this is also a decent gain, as both these chips feature the same geometry front-end. Remember that Polaris has a larger L2 cache, over twice the size (2MB vs 768KB), and in-hardware Primitive Discard, along with superior DCC algorithms. The extra 4 CU on the 590 provide it with 8.5% higher performance here. That is fairly close to the percentage of extra CU, so the scaling is reasonable. Things are different once the GPUs are unrestrained. The 280X shows yet again that it can match or beat the 380X simply by brute-forcing the bandwidth equation. The 570 and 290X are evenly matched; decent gains for Polaris from architecture and, of course, raw clock speed.

    HITMAN

    In HITMAN we see a larger gain from GCN1 to GCN3, and a smaller one to GCN4, like we notice in other games. Of course, actually adding twice the physical hardware logic is the best approach, right? Also, the additional 256 Stream processors and 16 texture units on the RX 590's GPU provide 9.6% more performance here. It is worth noting that even with the Tonga chip in the 380X operating at its lower AMD reference specification, it can still provide more performance, despite the Tahiti GPU in the R9 280X having 58% more raw memory bandwidth (and a 3% higher GPU engine clock). This game is using a DX12 renderer, so maybe it has something to do with Tonga's superior hardware feature support for DX12 (12.0 vs 11.2; Tahiti supports DX12 in API/software only). The 290X's bigger Hawaii GPU muscles to the top here, but only leads the Polaris 20 (PRO) chip in the RX 570 by 7.7%. Polaris's higher raw clock rate and architectural improvements are in action.

    Metro 2033 Redux

    One of my favourite games! In Metro 2033 Redux there are nice and solid gains for each card over the previous one. Nothing really noteworthy to point out, but as you can see, between the 3 GPUs with 32 CU (2048 SP, 128 TMU) and 32 ROP there is a 21% "IPC" increase between GCN1 and 4, from various architectural and other improvements to the non-Compute/render parts of the GPU. AMD hasn't been doing nothing all this time, apparently! (But I knew that already.) The actual product gains here are much smaller between the 380X and 280X, as we saw before, too. But I am looking at it architecturally, and Tonga is for sure more efficient whatever way you look at it. The RX 570 pulls far ahead, as expected from that 24% increase in clock rate. The R9 290X actually strikes a draw with the RX 570 here; Metro 2033 Redux is quite heavy on the geometry/tessellation side, so that is likely playing a part.

    Middle-Earth: Shadow of Mordor

    Here is Shadow of Mordor. I do think this game is bandwidth limited on the Tahiti, as we have seen before. You'll see what I mean in a moment... A common occurrence is for the R9 280X, when using its reference memory speed, to match or beat the R9 380X even from the first test at 1 GHz and 192GB/s. This really does show that when we artificially limit Tahiti to 192GB/s, it is actually bandwidth starved in many frames.
    It also shows that Tonga is significantly more efficient when working with just that much memory bandwidth.

    Rise of the Tomb Raider

    Tahiti is again showing either a major geometry or bandwidth limitation. Probably both. Also, this is DX12, so keep that inferior hardware support in mind (it can make a difference). Otherwise, Polaris needs those four extra Compute Units to really make a difference over Tonga at the same speeds, but there is at least a small gain at 32 CU for GCN4. Okay, so from this we can see that even with significantly more memory bandwidth, Tahiti is still eclipsed by Tonga, and the latter is actually running at a lower core clock, too. Geometry performance is potentially important here. Either way, the R9 290X can muscle through the frames (or parts of the frame) that require comparatively less geometry performance and more Compute, and produces a higher overall frame rate because of it.

    Tomb Raider (2013)

    In the original reboot of the Tomb Raider game, the GPUs all look fairly close on the graph, but gains can be seen. Adding more shaders seemed to have the most profound effect. But when the cards operate at reference speeds, Tahiti claims a victory over Tonga by way of brute force. The RX 570 is hot on the heels of 2013's Radeon champion, too.

    Ashes of the Singularity: Escalation

    Ashes of the Singularity uses an engine built for DX12 from the ground up, unlike many of the other DX12 games I tested here, which are largely based on DX11 groundwork. It's interesting to note that Tahiti doesn't suffer too badly here. From what I am aware, Tahiti does have Asynchronous Compute capability, but it doesn't work to the same extent as on GCN2 and above. Anyway, it doesn't hurt it too much here, and all the GPUs show gains, though GCN3 to 4 is a bit smaller. Of course, since AMD's reference spec for the 380X is actually lower than my synthetic test standard, the 380X regresses a bit here, and the 280X can claim the same performance within margin of error. Still not a bad result for Tonga in my eyes, because it is working with vastly less bandwidth. Also interesting: the 290X can only just out-muscle the RX 570, something I have seen before now, too. At normal clock speeds, the R9 290X produces around 10% more theoretical 32-bit FLOPS than the RX 570, but 60% more raw pixel fill rate. However, the RX 570 can put out 22% more primitives per second in raw hardware, and that is before gains from Primitive Discard are factored in.

    The Witcher 3

    Straight off the bat, that's a 32% increase in generational performance from GCN1 to GCN3 with the same number of Compute and Render Back-Ends. Geralt's NVIDIA-written, HairWorks-accelerated hair may have something to do with that; I believe it is geometry heavy. Polaris also gains here, and that chip has geometry improvements, too. Even vastly more raw bandwidth cannot save Tahiti from being beaten by Tonga, so it does seem like AMD has made gains with HairWorks through the generational upgrades. Here the comparatively hot-clocked RX 570 pulls ahead even of the R9 290X, which indicates to me a primitive-rate/tessellation heavy workload.

    Stalker Clear Sky Benchmark

    This bench takes 87 years to complete. This bench is the primary reason why I have taken this damn long to make these results. Seriously, it takes like 5 minutes, and I retest 3 times, so you get like 15 minutes PER CARD of pure waiting around. That's OVER AN HOUR of benching across all cards for this one damn benchmark. Oh, whoops. I didn't speak about the results yet, I was too busy moaning.
    Oh yes: here we can see some gains from Tahiti to Tonga, but none from Tonga to Polaris. They score the same, within margin of error. This is DX10, so maybe that has something to do with it. Again, the only way Polaris can distinguish itself from its older GCN3 brother is by using those four extra Compute Units it has on the silicon, showcased here by the RX 590. Tahiti can catch up to Tonga here when it unlocks its normal memory bandwidth. The R9 290X has a slight lead on the RX 570, but you wouldn't notice it in-game, at least in this scene. You would notice the leaf blower in your PC in the former's case, though.

    Metro Exodus

    One of my favourite games, aside from Warframe. Okay, well, Tahiti is well and truly left behind here, by an even larger amount than the addition of 256 Stream processors on the 590 accounts for. And it isn't a bandwidth issue for Tahiti. Metro Exodus does have some extremely complex geometry, and it's pretty shader intensive, too. The 290X can even trade blows with the RX 570, showing me that it cannot be entirely a geometry issue. Whatever the case, architectural gains are allowing Tonga to extract more performance from the same amount of Render and Compute.

    Gears of War 4

    In Gears of War 4, a native DX12 game, Tahiti is crushed by the newer chips. Polaris has a small, within-margin-of-error lead, so I disregard that, honestly. The 256 extra Stream processors allow the 590 to muscle ahead of the other cards, though. At reference speeds, Tahiti cannot even catch the lower clocked Tonga-based card. The RX 570 uses the power of pure clock speed to pull ahead and even manages to match the once mighty Hawaii-based R9 290X.

    Fallout 4

    Similar story here. Tahiti falls behind the quad-raster chips, and Polaris doesn't seem to gain a lot in this game and scene. Adding more Compute Units helps, though. Polaris here shows that increasing the switching speed of the circuit is a sure way to improve performance, though in reality it is also helped by architectural improvements. Here it can beat the R9 290X with much less memory bandwidth.

    Warframe!

    Is it just me, or does no one bench this really awesome game? I'm not biased, by the way. But it really is the best game. Ever. Let's look at how these GPUs perform. Okay, well, look at that. Warframe really does scale with those doubled geometry engines and/or higher bandwidth efficiency. Polaris also shows some gains over Tonga here, and even more so when you throw more Compute at the equation. At 10.1%, this is very close to the 12.5% increase in Stream processor count, so it is scaling well with more number-crunching hardware. I think Warframe's geometry load is high enough, and the FPS is high enough, that the dual-raster design of Tahiti is becoming a limiting factor. In fact, since the RX 570 manages a sizeable lead of just over 8% on the R9 290X, we can almost safely say this is thanks to the higher geometry performance afforded by the Polaris GPU and its higher clock speeds / architectural improvements.

    World of Tanks Encore Benchmark

    Okay, so this isn't REALLY a game but, it sort of is. It's more of a game than, say, Firestrike. And I also had the Stalker Clear Sky benchmark in here, too, which is a standalone bench for that game. Anyway, I told my friend, and he said I should say that this benchmark isn't indicative of actual World of Tanks game performance, at least not anymore, as they changed it a lot, I guess? Either way, it is another graphics workload, so let's look at the results.
    The benchmark provides a score, not an average frame rate, but that is okay, as the two should be closely linked. We see noticeable gains from all cards, indicating every iteration of GCN with 32 CU has gains over the previous one. Oh, and the 4 extra CU help, too. But we knew that already. Things change a bit when Tahiti is given its full bandwidth potential. It manages to edge out the Tonga, partly because Tonga is running at a deficit here, too, thanks to AMD's somewhat conservative reference spec for this card. The R9 290X muscles ahead, even beating out the newer and leaner RX 570, but not by a huge amount.

    Okay, now for some Pure Benchmarks!

    Unigine Superposition

    This is a good looking benchmark from Unigine, using a pretty good engine that no one seems to want to use for some reason. I think one game used it; "Oil Rush", it was called. Anyway, Unigine benches have historically favoured NVIDIA, potentially due to high geometry requirements. I'm not so sure, though, as Polaris isn't showing any major gains over Tonga, even though it has hardware-level Primitive Discard, which should easily work wonders on the scenes in this bench, with all the stuff floating around and overlapping each other; at least that's what I would have thought. Okay, so I think it is probably Compute limited here. Tahiti makes up its loss from the previous graph when it has access to its 288GB/s of bandwidth. The R9 290X pulls ahead of the RX 570 by 5%, while producing 10% more raw 32-bit shader TFLOPS. Make of this what you will.

    3DMark Firestrike Extreme

    Here is everyone's favourite benchmark! It's Firestrike (Extreme)! With all clocks and bandwidth normalised, there are gains, but they are fairly minor. I know this benchmark is heavy on Compute shaders. That still doesn't explain why the RX 590 doesn't pull ahead by more than it does, though. Huh. In hindsight, I probably should have tested these at 1080p. Oh well. I think the 32 ROP parts are ROP hard-limited, since the 64-pixel-pipeline-equipped R9 290X powers to the top spot in GT1, and adding clock speed helps the 570 get close. But in GT2 the 290X actually loses to the 570. Different bottlenecks in different scenes, I guess.

    3DMark Timespy

    Concluding this test is 3DMark's DX12 benchmark, Timespy. It's okay, but I did read that it isn't taking full advantage of GCN's implementation of Asynchronous Compute, in order to be more friendly to NVIDIA's Pascal architecture, which had an inferior implementation. At least from what I read. Anyway, you can see the results are fairly close, but Tahiti lags behind the pack, and it is also the only GPU here that is limited to 2 primitives per clock. Throwing a ton more bandwidth Tahiti's way doesn't do much to change the situation. And here the RX 570 basically ties the R9 290X. I guess Timespy's geometry is quite heavy for these GPUs.

    Okay, so I have actually finished this article. I thought it would take forever. You know, I sat down in two parts and did this almost non-stop. For a person with ADHD (yes, I have ADHD, but not the kind that lets you misbehave at school; this is the kind where I bash my head on door frames multiple times a day due to impulsive jumping because hyperactivity, and being literally unable to concentrate on anything, including video games, for more than 20-30 minutes)... Wait, I forgot where I was going with this. Oh yeah. I am pretty pleased that I managed to get this done. I hope you enjoyed it. Thanks for reading <3
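
    P.S. Here's the expected-vs-measured scaling check I keep doing in my head for the 590-vs-570 comparisons, as a little sketch using the gains quoted above:

```python
extra_cu = 36 / 32 - 1  # RX 590 has 12.5% more CUs (and TMUs) enabled than the RX 570

# RX 590 vs RX 570 gains I quoted above, at the same clock and bandwidth:
measured = {
    "Sleeping Dogs": 0.112,
    "BioShock Infinite": 0.085,
    "HITMAN": 0.096,
    "Warframe": 0.101,
}
for game, gain in measured.items():
    print(f"{game}: {gain / extra_cu:.0%} of perfectly linear CU scaling")
```
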

  • Far Cry New Dawn Gameplay Screenshots

    I love this game! It's really fun. I enjoyed Far Cry 5, too. So I picked this game up in a sale on the Origin store (which enables the game in the Uplay store; it's game-launcher-ception!). Well worth the £12.99 I paid for it. I think it was 12.99; I can't remember exactly, but I am very happy with the game. Best of all, I can run around in a Pink Unicorn Onesie, and that makes me very happy, did you know that? Anyway, I took a bunch of screenshots, since I think this game looks really great, too. And it runs fantastically well. The textures are fantastic with the HD Texture DLC, and it can eat up all 8GB on my RX 590 at 1440p, and even use all 16GB on the Radeon VII at 4K. Please be aware these may contain spoilers for the story line. WIP!

  • GCN Geometry performance update, feat. Hawaii

    I got a 290X to test Hawaii's geometry / tessellation performance, but this time only in TessMark and Unigine Heaven, because they are the most taxing on the tessellators and geometry engines. The other games I tested in my original comparison would be apples to oranges, since Hawaii also has twice the Render Back-Ends and 12 more CU (768 shaders, 48 Texture Units). It also has a much wider bus, at 512 bits, and apparently WattMan won't let me drop the memory clock below 5 Gbps, so I can't match the other cards' bandwidth. Not that it really matters anyway; obviously in these games tessellation isn't the be-all and end-all of performance. Anyway... I ran TessMark and Unigine Heaven, and my results really surprised me. Actually, I'm pretty stumped. Also, apparently AnandTech found the same thing with Hawaii. So, unless you went and clicked that link, I bet you are wondering what I mean. Okay: with Hawaii, AMD stated they literally doubled up on the Geometry Engines, and that means the tessellators, too. So I was expecting, you know, something near twice the throughput in these workloads. Except... that really doesn't happen. In fact, Hawaii is barely faster than Tahiti, which has only two Geometry Engines. So what gives? The AnandTech writer does dive into that a bit in that article, too. Anyway, my results.

    TessMark, with all five GCN revisions! (Oh, by the way, all my tests are conducted with Tessellation set to "Use Application Settings" in the Radeon driver. Otherwise AMD likes to put a cap on the max factor, to, you know, levels that don't make the GPU cry for no visual gain. Ahem, Crysis 2, ahem.)

    What. Okay, so Hawaii's doubled engines give it almost no major gain over Tahiti. There is a slight, and measurable, gain of 4.7%. A bit small, given the 100% more geometry pipelines. What gives? I have no idea. Let's move on to Heaven; using these settings, the engine is basically a synthetic test of Pure Unrelenting Tessellation Doom. I didn't test Vega 20 here yet, but I might add that later. This is a custom bench run of the scene around the Dragon statue in the courtyard, with the tessellation factor and distance set to maximum, to really put a strain on those Geometry Engines.

    Welp. This proves it's geometry/tessellation limited. Because Hawaii has twice the ROPs of Tahiti (64 vs 32) and 37.5% more shading and texture power... and is only about 20% faster here. Some of that scene benefits from the increased shader and pixel muscle, but Hawaii is still hugely limited by its geometry and/or tessellation throughput. Stranger still is the fact that the Tonga chip is driving over 90% more frames per second, with apparently the same number of Geometry Engines and the same core clock speed. I'd like to do a test without heavy tessellation, with just really complex original meshes instead, to see if the gains are larger. I don't even know what's going on with the tessellation performance here. But I retested it many times to verify the results, and it's all the same.

    Update: Not because I found out the "issue", sadly. It's actually an update to say I tried and retested this again and again, with clean driver installs and closing all running applications (something was actually causing the bench to stutter), forcing 64X tessellation in the driver, using AMD Optimised, etc, etc. And nothing changes. The 290X still scores barely better than the 280X. Here are the results for Tahiti, Hawaii and Tonga, respectively. Oh, the 290X is reported at 1030 MHz. I do not know why the bench reported that.
    I definitely had it running at 1000 MHz core. The results do not change substantially from my previous ones, except for the lack of stutter, which increased the minimums and pushed the average on Tonga up a bit. I really want to know why Hawaii is showing almost no gains here.
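
    To put a number on just how weird this is, here's the expected-vs-measured gap as a quick sketch (peak rates assume one triangle per geometry engine per clock, which is the usual marketing math, so treat it as a ceiling, not a promise):

```python
def peak_tris_per_sec(geometry_engines: int, clock_mhz: float) -> float:
    """Theoretical primitive rate: one triangle per geometry engine per clock."""
    return geometry_engines * clock_mhz * 1e6

tahiti = peak_tris_per_sec(2, 1000)  # 2.0 Gtris/s
hawaii = peak_tris_per_sec(4, 1000)  # 4.0 Gtris/s -> 2X Tahiti on paper

print(hawaii / tahiti - 1)  # expected: +100%
print(0.047)                # measured in TessMark: +4.7%. What gives, Hawaii?
```
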

  • My PC

    Just going to make a post to upload some pictures of my PC, in case you wanted to know what it looks like. By the way, I love stickers. I also love Pusheen. You're probably going to figure that one out after you've seen it. I'm fairly happy with the way this one turned out. I love this case; it's really pretty damn awesome, especially when you consider it was only £45. It's a Fractal Design Focus G in Mystic Red, the full ATX version. Yes, you probably noticed my motherboard is Micro ATX; that is because I was having a Micro ATX fling when I bought it. In hindsight, I would have just bought a full ATX motherboard. Then I wouldn't have had to use that PCI-E 1X riser cable to use my sound card without choking the GPU fan. Oh, you want my PC spec? No? Okay. But I'm going to put it here anyway :D

    Ryzen 7 2700X "50th Anniversary Gold Edition" at stock speeds with the Wraith Prism RGB cooler that came with it
    16GB (2x 8GB) G.Skill "TridentZ" 3200 MHz CL14, running at XMP speed but with tightened secondary timings
    MSI B450M Mortar Micro ATX motherboard
    Sapphire Radeon RX 590 Nitro Special Edition at factory OC speed
    Corsair Force MP510 960GB NVMe M.2 SSD @ 4x Gen3 for games
    Samsung 960 EVO 256GB NVMe M.2 SSD @ 4x Gen2 for the Operating System
    Sandisk Extreme 480GB SATA3 SSD for games and storage
    Creative Sound Blaster Z sound card with 5.1 surround output for Logitech Z906 speakers
    Seasonic Focus Plus 850W Gold modular power supply
    Fractal Design Focus G Mystic Red ATX case with 2x Corsair 140mm RGB fans on the top exhaust, 2x 140mm Noctua PPC fans in the front intake, a 120mm Noctua PPC in the bottom intake and another one in the rear exhaust

    I use an LG Ultra 4K display with 10-bit colour and FreeSync (but no HDR, sadly), which is full 3840x2160 resolution at 24", so you get Pixel Density For Days. I love it; the biggest thing I notice with that amount of DPI is the detail and clarity of objects in the distance, in games. So much so that I just can't go back to a 1080p panel at the same size. It's just a pixelated mess. I've been spoiled~ With the Radeon VII I was gaming at 4K Ultra details, but now that I am on the RX 590 I mainly play at 2560x1440 Ultra, which the card handles really well actually; the 8GB helps here, especially with HD textures. 1440p still looks pretty amazing at 24", so it's not the end of the world, but I do miss the Radeon VII... T.T Thanks for reading, I guess.
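
    P.S. If you're wondering how dense that panel actually is: pixel density is just the diagonal resolution over the diagonal size. A quick sketch:

```python
import math

def ppi(width_px: int, height_px: int, diagonal_inches: float) -> float:
    """Pixels per inch along the panel diagonal."""
    return math.hypot(width_px, height_px) / diagonal_inches

print(ppi(3840, 2160, 24))  # ~184 PPI -- Pixel Density For Days
print(ppi(1920, 1080, 24))  # ~92 PPI -- the "pixelated mess"
```
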

  • GCN Performance: The Witcher 3

    I decided to update my findings on the performance of the three variations of GCN that have identical top-level CU configurations: Tahiti XT, Tonga XT, and the Polaris 20 (PRO) chip. Here I tested The Witcher 3. The scene is the introduction with Geralt in the bathtub, because it's convenient (it's right at the beginning) and rendered in real-time. And Geralt's hair is simulated with HairWorks tessellation. Benchmarking begins when the scene first renders the fire, and ends when Geralt throws the creature thing out of the bath and it hits the floor and fizzles out. I did this test twice: once at maximum settings, and then again at 720p resolution with low post-processing. That second one was to try to shift the bottleneck towards the primitive throughput of the GPU instead of shader/bandwidth. But the first test should also let us see how well these GPUs do with the exact same core clock and memory bandwidth. First is 1920x1080 resolution. The total increase in performance in this test from GCN1 to GCN4, with the same number of CUs and ROPs and the same core clock and memory bandwidth, is 24.9%. Not too bad. Let's take a look at the 720p results, with the emphasis on geometry performance over shading. A 32.6% increase in generational performance between the 1st and 4th iterations of GCN. So the performance gap did widen as we tried to emphasise triangle throughput. And that concludes this little micro-test. Buh bai~
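
    P.S. The percentages here are just average-FPS ratios, a one-liner really. The FPS values below are made-up placeholders to show the arithmetic, since the real numbers live in the graphs:

```python
def uplift_percent(old_fps: float, new_fps: float) -> float:
    """Generational gain as a percentage of the older card's frame rate."""
    return (new_fps / old_fps - 1) * 100

# Hypothetical numbers, NOT my measured results:
print(uplift_percent(40.0, 49.96))  # ~24.9% -- the shape of the 1080p result
```
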

  • I gamed on this for 7 months.

    Seven months.

    Disclaimer: the following post contains Sarcasm.

    It was brutal. Well, ehm, actually it wasn't that bad. Either way, this is the Fermi-based GF108 graphics processor, surrounded by its quartet of GDDR3 chips. Each of those hooks into the silicon via 32-bit wiring, so that's the 128-bit bus populated. Oh, and they're running at 1.8 Gbps. So you get an absolutely earth-shattering 28.8GB/s of Pure Unrestrained Bandwidth to that tiny little core. And they're 256MB in size, so that's 1GB of VRAM. Oh, I didn't even say what this is: it's a GeForce GT 540M, ripped out of my old Toshiba P750 laptop. GF108 is around 116mm² in size (I didn't measure it, but Wikipedia is usually correct, right?) and contains about 585 million transistors. That's 0.58 billion transistors. That's not actually a lot of transistors. It's built on TSMC's now ancient 40nm process. So you're looking at a pair of Fermi Streaming Multiprocessors (they're basically GPU cores), each containing 48 CUDA cores and 8 texture units (the smaller Fermi chips have a different SM layout; GF110 and GF100 only have 32 CC per SM). You get a grand total of 96 shader processors and 16 texture units on this massively powerful beast of a GPU. Fermi has a decoupled core/shader clock, so the GPU core (ROP, Raster, Geometry, etc.) runs at 672 MHz and the shader processors are 'double-pumped' to 1344 MHz. Did I mention this GPU has 4 ROPs? Four. Anyway, performance today is abysmal; it's weaker than basically every integrated GPU available, and even the lowly UHD 620 is likely comfortably ahead. This GPU is now, quite literally, obsolete. Still, I didn't have a picture of it, so here it is. It did hold me over while I had to live at my Aunt's, though. Oh, and Pusheen says hi. :3
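
    P.S. The numbers in that spec walk-through multiply out like this (FLOPS counted the usual way, 2 ops per CUDA core per clock off Fermi's hot clock; my arithmetic, not an NVIDIA figure):

```python
# GeForce GT 540M (GF108), from the figures above:
bandwidth_gb_s = (128 / 8) * 1.8   # 28.8 GB/s of Pure Unrestrained Bandwidth
gflops = 96 * 2 * 1344 / 1000      # ~258 GFLOPS off the 1344 MHz hot clock
rops = 4                           # four ROPs. Four.
print(bandwidth_gb_s, gflops, rops)
```
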
