[Image: AMD Ryzen 7 5800X3D CPU top view]

Introduction

AMD is not taking a back seat in gaming performance. With the recent launch of its Alder Lake CPUs, Intel has proven it is back in the game, and AMD isn’t taking this lightly. By innovating in unique ways, AMD is finding methods to improve CPU gaming performance and stay on top. One of those methods is increasing the L3 cache size, which, if implemented well, has the potential to specifically improve gaming performance, especially in scenarios that are more CPU limited. This is where the AMD Ryzen 7 5800X3D comes in: it is AMD’s $449 MSRP CPU that uses 3D-stacked L3 cache for a total of 96MB of L3 on a gaming processor.

In a traditional CPU design, the L3 cache is integrated into the die in a fixed amount; you cannot increase the cache without producing a new CPU design, which increases die size, transistor count, package size, and so on. What if you could add more L3 by stacking it on the die? This is what AMD’s new 3D V-Cache technology is all about: boosting a CPU’s L3 cache size by a large amount without having to design a new CPU. AMD literally stacks the L3 cache atop the CPU die, using a unique copper-to-copper bonding process to make the connection. Latency is an obvious concern, but it is something AMD has worked hard on to make sure this technology actually provides a benefit in gaming performance.

We need to be specific here: this additional L3 is meant to improve gaming performance. That is its purpose, and with that goal in mind, there may be workloads that don’t take advantage of the increased L3 cache size, and because of other trade-offs that had to be made in creating the Ryzen 7 5800X3D, performance could even be slower in some of them. You can think of 3D V-Cache as a way to extend the usefulness and capabilities of the already established Zen 3 Ryzen CPUs; well, in this case, just one, as right now the 5800X3D is the only CPU with 3D V-Cache.

AMD Ryzen 7 5800X3D Specs

                              Ryzen 7 5800X3D                Ryzen 7 5800X
Architecture / Process Node   Zen 3 (Vermeer-X) / TSMC N7    Zen 3 (Vermeer) / TSMC N7
Cores / Threads               8 / 16                         8 / 16
L3 Cache                      96MB                           32MB
Base Clock                    3.4GHz                         3.8GHz
Turbo Clock                   4.5GHz                         4.7GHz
TDP                           105W                           105W
MSRP                          $449                           $449

One of the interesting, and welcome, pieces of information is the MSRP. The new Ryzen 7 5800X3D has an MSRP of $449, exactly the same as the Ryzen 7 5800X. This is impressive; AMD could have charged a premium for it, but it is asking the same price as the 5800X, so it at least won’t cost you an arm and a leg more. And that’s a good thing, seeing as the Ryzen 7 5800X3D is architecturally identical to the Ryzen 7 5800X. It’s similar in almost every single way; only two things separate them: the L3 cache size and the boost frequency. Both CPUs are based on AMD’s Zen 3 (Vermeer) architecture and manufactured on TSMC’s N7 node.

The Ryzen 7 5800X and 5800X3D are 8-core/16-thread CPUs on AMD’s Socket AM4. The L1 cache is 64KB per core and the L2 cache is 512KB per core on both CPUs. The difference is, of course, the L3 cache, where the Ryzen 7 5800X3D has 96MB versus the Ryzen 7 5800X’s 32MB; that’s a 200% increase in cache size.
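
To build some intuition for what the extra L3 buys, consider a simple pointer-chasing microbenchmark: once a working set outgrows the L3, every dependent access falls through to DRAM and average latency jumps. The sketch below is our own illustration, not something from this review; the function name, buffer sizes, and step count are arbitrary choices, and CPython’s interpreter overhead adds a constant cost per access, so treat the results as relative rather than absolute.

```python
from array import array
import random
import time

def chase_ns(size_bytes, steps=1_000_000):
    """Average latency (ns) of one dependent access over a working set of
    roughly size_bytes, using a random cyclic pointer chase. The random
    cycle defeats hardware prefetching, so each step is a real cache/DRAM
    round trip plus constant interpreter overhead."""
    n = size_bytes // 8                  # one signed 64-bit slot = 8 bytes
    order = list(range(n))
    random.shuffle(order)
    nxt = array('q', bytes(8 * n))       # the working set itself, zeroed
    for a, b in zip(order, order[1:]):
        nxt[a] = b                       # nxt[i] = index visited after i
    nxt[order[-1]] = order[0]            # close the cycle

    i = 0
    t0 = time.perf_counter()
    for _ in range(steps):
        i = nxt[i]                       # one dependent load per step
    return (time.perf_counter() - t0) / steps * 1e9

# Sizes chosen to straddle the two L3 capacities: 24MB fits both chips'
# L3, 64MB fits only the 5800X3D's 96MB, and 192MB fits neither.
for mb in (8, 24, 64, 192):
    print(f"{mb:3d} MB working set: {chase_ns(mb * 2**20):6.1f} ns/access")
```

On a 5800X versus a 5800X3D, the interesting region is between 32MB and 96MB, where the smaller chip has already spilled to DRAM while the 3D V-Cache part can still serve the whole working set from L3.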

The other difference is the turbo clock. AMD had to lower it on the new 5800X3D, and not only that, but also the voltage: the turbo clock runs at up to 4.5GHz on the Ryzen 7 5800X3D at 1.35V, while on the Ryzen 7 5800X it runs up to 4.7GHz at 1.5V. That’s a 200MHz decrease on the 5800X3D, so it has a lower turbo clock but much more L3 cache. This turbo clock frequency affects both single-core and all-core maximum frequencies.

With that clock speed decrease, AMD is able to keep the TDP of the two CPUs the same at 105W. The Ryzen 7 5800X3D has another limitation: it is multiplier locked, and it does not support Precision Boost Overdrive (PBO) at all. It is meant to operate only at the provided clock speeds and not be overclocked by any means. The Ryzen 7 5800X does support PBO, which can overclock it by 200MHz, which could in theory set the two CPUs apart by as much as 400MHz (4.9GHz versus 4.5GHz).

Installation went off without a hitch. HWiNFO64 recognizes the CPU as Vermeer-X, stepping VMR-B2. On our ASUS ROG Crosshair VIII Hero (Wi-Fi) X570 motherboard we have BIOS 4006 applied, which utilizes AMD AM4 AGESA V2 PI 1.2.0.6b and supports the AMD Ryzen 7 5800X3D. We are using the default motherboard settings with D.O.C.P. enabled.




29 comments

  1. I'll say that, for me, coming from a 3700X, I've seen much bigger FPS gains. I wasn't able to use Resizable BAR with that CPU, and seeing it in action now with Horizon Zero Dawn has been impressive as well. It used to take ~30+ seconds to load and now it's more like 10. I don't know if Crysis 1-2 Remastered use it, but they seem to load a little quicker now, though that could also be due to those last patches Crytek put out.

    I only just went to using an AIO with this chip, but I've been able to keep the Plex 360 at stock speeds and it's been able to keep this CPU at ~65°C or less during gaming, mostly in the 50s, and that was in a room that was 75 degrees Fahrenheit at the time, so I'm happy everything will work well in the summer. The two combined have also resulted in my GPU running a few degrees cooler, which was an unexpected bonus. Up until recently, I had been setting the AIO to max fan speeds for gaming, but now I see that is totally unnecessary.

    For me, gaming just feels smoother than when I had the 3700X, even in titles like SOTR and Metro Exodus. Granted, since I never had a 5800X I couldn't say what that would've been like but it was also Brent's comparison review between the two that first got my attention in regards to an upgrade. He'd already seen some unusual FPS gains back then.

    I game almost entirely at 5120x1440 with my rig, usually at 100 Hz so I can use HDR with my monitor, but then 120 Hz for non-HDR gaming, and things definitely seem more fluid regardless of whether there's a significant gain in frames. Even then I'm seeing anywhere from ~5 to ~20 FPS more, depending on the game, plus the lows seem to have gone up a bit. I also use DLSS on the quality setting as much as possible to try and offload some work from the GPU and give a little more to the CPU. I had games that would hang around ~70-90 FPS that are now basically hammering 100 FPS, or at least the high 90s. Obviously, like @Brent_Justice 's review shows, moving from a 5800X is mixed at best, and its ability to use PBO will definitely pull ahead, but there's a cost of power and heat when you do that as well. I'm loving that I can now cut back on the fans and still enjoy more than I used to when everything was maxed. Definitely not a chip for everyone but for me, for the games I play, I'm loving it.
  2. Yes, coming from a 3700X that makes sense. The goal of this review, though, was to see what advantages there are, if any, versus the 5800X. If you purchased a 5800X now, you would save quite a bit of money over the 5800X3D and still get similar gameplay performance, plus you'd have the ability to use PBO, and everything besides gaming would also just run faster on the 5800X due to its natively higher clock speed. That is honestly the best value until Zen 4 comes out at the end of this year, which is not too far away now.
  3. I'm only just coming to realize how much I've shorted myself over the years by using more mid-grade CPUs. Moving forward I'll definitely be aiming a bit higher, but for now, I'm pretty happy. My previous home desktop CPUs have been, in order: 2600K (OC'd to 4.2 GHz) > 4930K (OC'd to 4.3 GHz, still in the cave) > 3700X (stock) > 5800X3D (stock), so I'm still getting used to identifying other performance metrics of how CPUs can affect high-end GPUs. I've been using the fastest GPUs I can afford for some time since I stopped using SLI: 1080 Ti > 2080 Ti > 3090, and now I've got a 3090 Ti coming via EVGA Step-Up. Thanks again for the great review!

    edit: Just to add that I'm only including what I consider modern CPUs in my previous desktop rigs. It'd take some thought to remember all those other ones from the 80s-2000s-lol!
  4. This is a good comparison of the similarly priced 5800X vs 5800X3D.

    It does not, however, actually address the title question: how much does the cache actually benefit?

    You can only answer that question if you either lower the clocks/voltages on the 5800X to match (essentially underclock it), or lower the cache on the 5800X3D to match (I don't know of any handy tool to accomplish this, but I wouldn't call it impossible). You should expect them to perform identically except for cases where the cache actually benefits -- or hurts, although I can't think of any cases where it ~should~ do that, and this test as run would completely mask it if that were the case, since you would just think it's because of the clock difference.

    Or you could change the title.

    Here, we see many situations where the faster clocks allow the older chip to eke out a lead, and a few where the cache actually plays a big role, but it's not just the effect of the cache being evaluated, because you have the clock mismatch.

    Also - curious, why disable Resizable BAR?

    The chart on Pg 6 where you break down the results so far -- nice to have the chart. It needs some coloring or something so you can tell which one was the winner, or include it in a bar graph or something. Just a chart of numbers is... hard to really interpret.

    The CPU speed charts on Pg 8 are awesome; they illustrate the clock delta I complained about above -- but if you're going to allow it, and it's a valid point in a CPU vs CPU comparison (just not in a cache-only benefit analysis), you may as well make it easy to interpret. They would be better if you fixed them to have the same vertical range; that would make cross comparison easier, especially since they are sitting side by side.
  5. This is interesting.

    I haven't been in the CPU market lately, so I haven't kept up. I had heard of the new cache AMD was bringing out, but when I saw the 5800X3D I had assumed it was just the same CPU as the 5800X but with the improved cache. I didn't realize they were cutting the core clock so much.

    Curious choice. I wonder why they did that. To get better yields? Or to offset the extra power used by the cache?

    I'd be curious how they look clock to clock, to tease out just how much good the cache itself does.

    Also, I thought it was really odd that 1080p performed worse than 1440p in Ms Flight Sim....
  6. Also, we have covered going from a 3700X to the 5800X, with benchmarks and gaming performance, here:

    https://www.thefpsreview.com/2021/10/06/amd-ryzen-7-5800x-vs-ryzen-7-3700x-performance-review/

    This review is one of the reasons I am pissed at AMD for shafting us TRX40 users and refusing to give us a Zen 3 version.

    I hate compromises. I want a build that gives me everything a HEDT system offers (primarily ECC, high PCIe lane counts and IOMMU, but high core counts don't hurt either) without sacrificing top of the line game performance.

    I hate compromises. They drive me nuts.
  7. Curious choice. I wonder why they did that. To get better yields? Or to offset the extra power used by the cache?
    My understanding is the stacked cache is more voltage sensitive, so the 3D version has lower voltage limits, which in turn limited the clocks. The TDP remained unchanged, but you just can't get as much if you can't boost the voltage as far.
  8. My understanding is the stacked cache is more voltage sensitive, so the 3D version has lower voltage limits, which in turn limited the clocks. The TDP remained unchanged, but you just can't get as much if you can't boost the voltage as far.

    Is the L3 cache directly on the core? It ought to be able to operate at a different voltage than the rest of the package...
  9. Is the L3 cache directly on the core? It ought to be able to operate at a different voltage than the rest of the package...
    Not an engineer so I don’t know — could be some limits on AM4 pinouts at play there; you only get so many pins to carry different voltages.
  10. My understanding is the stacked cache is more voltage sensitive, so the 3D version has lower voltage limits, which in turn limited the clocks. The TDP remained unchanged, but you just can't get as much if you can't boost the voltage as far.
    The cache also acts as an insulator with respect to heat transfer - a disadvantage of stacking technology at this time. Wattage isn't changing, as you note, yet temperatures from the same sensors may be higher than on the 5800X at the same power draw. Thus thermal throttling kicks in at lower temperature limits, leading to the need for more cooling capacity to get the best out of the CPU.

    It will be interesting to see how far the more adventurous 'extreme' overclockers are able to push the 5800X3D. Last I checked they were manually flipping bits in BIOS images to try to turn on things that AMD thinks should be left off :D
  11. It would be nice to see the minimum frame rates as well here.
    I'm curious if in the stress cases the cache is helping or not.
    I feel that's a pretty important metric to expose for all video card/CPU reviews.
  12. The cache also acts as an insulator with respect to heat transfer - a disadvantage of stacking technology at this time. Wattage isn't changing, as you note, yet temperatures from the same sensors may be higher than on the 5800X at the same power draw. Thus thermal throttling kicks in at lower temperature limits, leading to the need for more cooling capacity to get the best out of the CPU.

    It will be interesting to see how far the more adventurous 'extreme' overclockers are able to push the 5800X3D. Last I checked they were manually flipping bits in BIOS images to try to turn on things that AMD thinks should be left off :D

    Honestly, I don't even understand why they have to stack it on the CPU core.

    It's not like there isn't PLENTY of space underneath the heat spreader to put them side by side and allow for more efficient cooling.

    Here is the 5800X for reference:

    [attached image: 5800X package layout]

    Shift one of those chiplets to the side and you have almost half the area under the heat spreader to work with, where you could shift in a 3rd chiplet containing the cache.
  13. I too would have liked to see the impact of the cache by itself - although that may never be truly possible because the latency is increased (but then, that is part of the cost of having more cache).

    To be honest, the games that show the most benefit may be those that are least optimized, because they have been designed to access more data at once than will fit in the cache. Both Crysis and Flight Simulator feature vast vistas which can drain frame rates. I suspect some database operations could also benefit significantly from this, and that overall games are likely to operate more smoothly, even if they don't go as fast at their peak.

    I think the real benefit will come five years down the line, when every new CPU has 96MB of cache and software (especially games) expects to make use of it, because you won't need a new CPU to keep up. Of course, you might want it to go a bit faster, but the CPU might also be less likely to bottleneck a new graphics card (whereas now, the card is clearly the bottleneck at 4K, and to an extent 1440p).

    It'll also benefit your electricity bills over that time. If you use the CPU continuously, you'll be using 20W less, the temperature will be 8°C lower, and the fan may be quieter as a result. This is a big win from AMD's perspective of performance per watt. Even if you don't game all the time, many run BOINC or similar to use up idle CPU. (A rough back-of-the-envelope version of this saving is sketched after the comments.)
  14. Honestly, I don't even understand why they have to stack it on the CPU core.
    This page talks a bit about the construction.


    As to why it's on the CCD and not off on another chiplet by itself: my guess would be distance/latency. If you have to go out over Infinity Fabric for a cache hit, the speed is going to drop by orders of magnitude (granted, still faster than going out to system RAM, but...). The 5800X3D only has 1 CCD, so it's a best-case scenario with respect to latency all around.

    There were probably also some tradeoffs to make sure it could drop into existing AM4 motherboards relatively painlessly.
  15. This page talks a bit about the construction.


    As to why it's on the CCD and not off on another chiplet by itself: my guess would be distance/latency. If you have to go out over Infinity Fabric for a cache hit, the speed is going to drop by orders of magnitude (granted, still faster than going out to system RAM, but...). The 5800X3D only has 1 CCD, so it's a best-case scenario with respect to latency all around.

    There were probably also some tradeoffs to make sure it could drop into existing AM4 motherboards relatively painlessly.

    So, I was thinking about this, and I guess I don't fully understand how they mate the stacked silicon. I'd imagine the lithography would have to happen separately, and then they are somehow mated together when done.

    I guess what I am thinking is, if they can already manufacture them separately and then mate them stacked on top of each other, shouldn't it be a trivial change to do the same thing and mate them side by side for better thermals?

    My best guess here is that the 5800X3D is more of a learning tech demo for future highly dense products than it is intended to be a long-term product in its current form. You know, test the technology out on something relatively simple, and learn from it, before trying to stack the **** out of super-dense Epycs.

    Because you can do stuff like this in the lab all you want, but the real learning with a new technology starts when you hit the market. Things always break in new and novel ways you couldn't possibly have guessed or prepared for in the test lab when they actually get used in the field.
  16. So, I was thinking about this, and I guess I don't fully understand how they mate the stacked silicon. I'd imagine the lithography would have to happen separately, and then they are somehow mated together when done.
    Through-Silicon Vias (TSVs).
  17. I really wish they had released a 5950X3D. Something like BF5 multiplayer already puts a 5800X at 100% CPU usage.
    Honestly, I don't even understand why they have to stack it on the CPU core.

    It's not like there isn't PLENTY of space underneath the heat spreader to put them side by side and allow for more efficient cooling.

    Here is the 5800X for reference:

    [attached image: 5800X package layout]

    Shift one of those chiplets to the side and you have almost half the area under the heat spreader to work with, where you could shift in a 3rd chiplet containing the cache.
    Part of the benefit of vertical stacking is that it adds almost no latency, since there is almost no added signaling distance. If you add a separate die then you're running through Infinity Fabric. A single CCD's Infinity Fabric has only 1/3 the read bandwidth and 1/6 the write bandwidth of the on-die L3 cache.
    My best guess here is that the 5800X3D is more of a learning tech demo for future highly dense products than it is intended to be a long-term product in its current form. You know, test the technology out on something relatively simple, and learn from it, before trying to stack the **** out of super-dense Epycs.
    I think the 5800X3D is just them doing the bare minimum to be able to say they have the fastest gaming CPU. They had already been sampling Milan-X Epyc CPUs to key customers last year, long before the 5800X3D was even announced. They make significantly more money using 3D V-Cache-equipped CCDs for Epyc, so the 5800X3D is really just for bragging rights and consumer mindshare.
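
Following up on the electricity point in comment 13: here is a rough back-of-the-envelope sketch of what a 20W continuous saving works out to over a year. The 20W figure comes from that comment; the electricity rate below is a hypothetical example value, not something measured or quoted in the review.

```python
# Back-of-the-envelope yearly savings from a 20 W lower continuous draw.
# The 20 W delta is taken from the comment above; the $/kWh rate is an
# assumed example value and will vary by region.
WATTS_SAVED = 20
HOURS_PER_YEAR = 24 * 365          # continuous use, as the comment assumes
RATE_USD_PER_KWH = 0.15            # hypothetical electricity rate

kwh_per_year = WATTS_SAVED * HOURS_PER_YEAR / 1000   # Wh -> kWh
usd_per_year = kwh_per_year * RATE_USD_PER_KWH

print(f"{kwh_per_year:.0f} kWh/year saved")          # ~175 kWh
print(f"${usd_per_year:.2f}/year saved")             # ~$26 at $0.15/kWh
```

At lighter duty cycles the saving scales down proportionally, so a few hours of gaming per day works out to only a handful of dollars a year.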
