
AMD users who are planning to pick up red team’s next flagship desktop processor later this year should expect the same core and thread configuration as the current flagship. This is based on new information from Robert Hallock, AMD’s Senior Technical Marketing Manager, who confirmed in an interview with TechPowerUp that the maximum core configuration for the Ryzen 7000 Series at launch will be 16 cores and 32 threads, just like the Ryzen 9 5950X. The two generations should differ substantially in TDP, however: echoing an official statement that AMD shared yesterday, Hallock reiterated that the most power-hungry Ryzen 7000 Series processors could feature TDPs of up to 170 watts. Other topics that Hallock touches on in the interview include the purpose of the new heat spreader design, the potential for overclocking, his thoughts on the transition from DDR4 to DDR5, and AMD’s future plans for the 3D Vertical Cache technology that it introduced with the Ryzen 7 5800X3D.


16-core, 32-thread is the maximum core configuration for the Ryzen 7000 at launch?
That is correct.

What are your thoughts on 3D Vertical Cache (3DV Cache) for Zen 4?
3DV Cache will absolutely be a continuing part of our roadmap. It is not a one-off technology. We are big believers in packaging as a competitive advantage for AMD, something that could meaningfully enhance performance for people, but we have nothing specific to announce for Zen 4 yet.

Why the new heatspreader design? Why the holes on the sides?
It’s actually how we achieve cooler compatibility. If you flip over one of the AM4 processors, you’ll find a blank spot in the middle without pins, which has space for capacitors. That blank space is not available on Socket AM5, which has LGA pads across the entire bottom surface of the chip. We had to move those capacitors somewhere else. They don’t go under the heatspreader due to thermal challenges, so we had to put them on top of the package, which required us to make cutouts in the IHS to make room. Because of those changes, we’re able to keep the same package size (length and width), the same z-height, and the same socket keep-out pattern, and that’s what enables cooler compatibility with AM4.

What can we expect from the processors in terms of CPU overclocking?
I’m not gonna make a commitment yet on frequency, but what I will say is that 5.5 GHz was very easy for us. The Ghostwire demo was one of many games that achieved that frequency on an early-silicon prototype 16-core part with an off-the-shelf liquid cooler. We’re very excited about the frequency capabilities of Zen 4 on 5 nanometer; it’s looking really good, more to come.

How do you feel about the transition from DDR4 to DDR5?
AMD is betting on DDR5; there’s no DDR4 support in Zen 4. Over the last few months, we talked to many component vendors, module makers, and others to confirm their supply roadmaps, verify timings, and avoid shortages. Everybody is coming back with very optimistic answers. DDR5 will be abundant over the lifetime of Socket AM5, and that abundance, along with new demand from Socket AM5, will help bring pricing down.

Source: TechPowerUp


12 comments

  1. Come on AMD.

    Screw the cores. 8 cores are enough for just about everyone today. Add a few more for future-proofing (maybe 10 or 12?) and that's all anyone shopping for a non-HEDT system really needs. If you are doing lots of VMs or rendering/encoding, the kind of work that calls for a ton of cores, you should probably be using an HEDT system anyway. The consumer parts do not need this many cores.

    Just give us a few more PCIe lanes instead. Instead of the 24 lanes (16 for the GPU, 4 for a single M.2 slot, and the last 4 for the chipset), how about we get something like 50% more? 36 total lanes would make all the difference. It would allow for a couple more expansion slots or additional M.2 slots.

    24 is just way too restrictive. It keeps the PC from truly being a PC, where you add the hardware you need or want, and turns it into a reduced-function device.
  2. Quoting #1: "Come on AMD. Screw the cores. […] 24 is just way too restrictive."

    I think you're overstating it a bit. Remember, we're talking about PCIe 5.0 lanes. Let's look at the bandwidth scale.

    PCIe 4.0 doubles 3.0, and 5.0 doubles 4.0 again, so 4 PCIe 5.0 lanes are equivalent to 16 PCIe 3.0 lanes. From a bandwidth perspective, I think we're OK. With proper traffic management you could get quite a lot done on 4 PCIe 5.0 lanes, unless you needed dedicated, constant access to them. A rough sketch of that arithmetic follows below.
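
    A minimal sketch of that generational scaling (Python), using approximate one-direction per-lane rates; the table values and function name here are my own, for illustration:

    ```python
    # Approximate one-direction PCIe throughput per lane, in GB/s.
    # Gen 1-2 use 8b/10b encoding; Gen 3 and later use 128b/130b.
    PER_LANE_GBPS = {1: 0.25, 2: 0.5, 3: 0.985, 4: 1.969, 5: 3.938}

    def bandwidth(gen: int, lanes: int) -> float:
        """Approximate one-direction bandwidth of a PCIe link in GB/s."""
        return PER_LANE_GBPS[gen] * lanes

    # Four Gen 5 lanes carry about as much as sixteen Gen 3 lanes:
    print(bandwidth(5, 4))   # ~15.75 GB/s
    print(bandwidth(3, 16))  # ~15.75 GB/s
    ```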
  3. Quoting #1: "Come on AMD. Screw the cores. […] 24 is just way too restrictive."
    You're definitely pointed in the right direction. I personally like having the cores (the 5900 I just got is AMAZING). Cap the cores at 12 like you suggest and open up the PCIe lanes. Win/win!
  4. Quoting: "The lanes become less important when the bandwidth is so large. With PCIe 5 you'd have a tough time saturating it."
    This. 24 lanes of PCIe 5 is the bandwidth of 48 lanes of PCIe 4, though PCIe 5 components are pretty slim pickings right now.

    How does a 4-lane PCIe 5 slot look to a PCIe 4 device? For example, if you put a 3080 Ti in a PCIe 5 x4 slot, how much bandwidth does it get?
  5. Quoting #4: "This. 24 lanes of PCIe 5 is the bandwidth of 48 lanes of PCIe 4 […] if you put a 3080 Ti in a PCIe 5 x4 slot, how much bandwidth does it get?"
    It downgrades the lanes used to PCIe 4. A PCIe 4 x16 card will not look like a PCIe 5 x8 device in a PCIe 5 slot; in essence, the doubled bandwidth of those 16 lanes is lost, which is why more lanes would be great!

    To answer the question, though: a 3080 Ti put into a PCIe 5 x4 slot will in essence be a PCIe 4 x4 device, able to consume about 7.877 GB/s (vs. 15.754 GB/s for PCIe 5 x4). Those figures can be derived as in the sketch below.
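
    A small worked check of those numbers (Python); the helper name is mine, and the formula is just raw transfer rate times line-encoding efficiency:

    ```python
    # One-direction PCIe bandwidth in GB/s:
    # (GT/s per lane) * (encoding efficiency) * lanes / (8 bits per byte)
    def pcie_gbs(gt_per_s: float, encoding: float, lanes: int) -> float:
        return gt_per_s * encoding * lanes / 8

    # Gen 3 and later links use 128b/130b encoding.
    print(pcie_gbs(16.0, 128 / 130, 4))  # PCIe 4.0 x4 -> ~7.877 GB/s
    print(pcie_gbs(32.0, 128 / 130, 4))  # PCIe 5.0 x4 -> ~15.754 GB/s
    ```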
  6. Quoting #4: "The lanes become less important when the bandwidth is so large. […] if you put a 3080 Ti in a PCIe 5 x4 slot, how much bandwidth does it get?"



    The thing is, unless you use some sort of PLX chip, you can't pool all that bandwidth.

    If you stick an 8x Gen 1 device in an 8x Gen 5 slot, it still uses all 8 lanes, even though it is only operating in Gen 1 mode, at a fraction (~1/16) of the max bandwidth those Gen 5 lanes offer.

    Gen 5 devices are almost non-existent on the market today. Even Gen 4 devices are rare outside of GPUs and certain high-end NVMe SSDs.

    Most expansion cards you are going to wind up buying are going to be Gen 3 or Gen 2. Some may even be Gen 1. Heck, if an old sound device works fine on 1x Gen 1, why redesign it?

    And that's the problem. 24 lanes of Gen 5 is an absolutely massive amount of bandwidth, but you can't really make the most of it without some sort of PLX chip that keeps the lanes to the CPU maxed at Gen 5 speeds, pools that bandwidth, and divvies it up across devices and slots of different generations.

    The problem is that PLX chips are expensive, add power consumption, and, most importantly, add latency on the PCIe bus, which is detrimental to performance. That's why we see so few motherboards use them.


    So, what does this mean in practice?

    Since most of us use 16x-capable discrete GPUs and don't want to risk losing an ounce of performance (even though 8x is probably fine in general), we are going to want them maxed out at 16x.

    So 16 of the 24 lanes go to the GPU, and any GPU today is probably only using half the potential bandwidth of those 16 lanes, since it is only Gen 4; but the 16 lanes are used nonetheless, because the protocol negotiates the speed at the lower of the capabilities of the host and the GPU. 8x Gen 5 may provide the same amount of bandwidth as 16x Gen 4, but the GPU can't do anything with 8x Gen 5: if it drops to 8x, it will get 8x Gen 4 bandwidth. So it runs at 16x Gen 4 and uses all 16 of those lanes.

    Once the GPU is out of the way, we have 24 - 16 = 8 lanes left.

    4 of those lanes are going to go to an NVMe device. Again, just like the GPU, it will use all of its lanes, even though it can't connect with the latest-gen protocol.

    Now we have 4 lanes left, and these all go to the chipset.

    The chipset is ironically the only device in the system that makes good use of the lanes, at least when the motherboard and CPU are of the same generation (backwards compatibility sometimes allows you to stick a newer CPU into an older motherboard, in which case the chipset lanes will only connect at the older PCIe standard). Chipsets use some sort of internal PLX-like capability to spread the total 4x Gen 5 bandwidth over all onboard devices, and in some designs share whatever bandwidth is left over with some extra PCIe slots or a secondary M.2 slot. These are nice to have, but again, PLX = latency, and that reduces performance.

    So I guess my point is, the total bandwidth really doesn't matter. Unless something changes about how these things work, you could have Gen 30 PCIe bandwidth and those 24 lanes would still be too restrictive. Gen 5 has some pretty impressive raw bandwidth numbers, but the way backwards compatibility and PCIe lanes work means that in most cases it is simply not usable for anything but the bare minimum of 1 GPU and 1 NVMe drive. A toy model of that negotiation is sketched below.
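
    A toy model (Python) of the link negotiation described above; the function and rate table are my own simplification, and real link training is more involved:

    ```python
    # Approximate one-direction GB/s per lane for each PCIe generation.
    PER_LANE_GBPS = {1: 0.25, 2: 0.5, 3: 0.985, 4: 1.969, 5: 3.938}

    def negotiated_link(host_gen, host_lanes, dev_gen, dev_lanes):
        """The link settles at the lower generation and the narrower width."""
        gen = min(host_gen, dev_gen)
        lanes = min(host_lanes, dev_lanes)
        return gen, lanes, PER_LANE_GBPS[gen] * lanes

    # A Gen 4 x16 GPU in a Gen 5 x16 slot: all 16 lanes are consumed,
    # but only Gen 4 bandwidth is delivered.
    print(negotiated_link(5, 16, 4, 16))  # (4, 16, ~31.5 GB/s)

    # An 8x Gen 1 device still ties up 8 lanes at ~1/16 of Gen 5 speed.
    print(negotiated_link(5, 8, 1, 8))    # (1, 8, 2.0 GB/s)
    ```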
  7. I don't think it's the device that is the problem... it's the physical wiring. Just because you have nothing in a PCIe x4 slot doesn't mean those 4 PCIe lanes are available to be allocated to another device.

    What this DOES mean is that if you only need 1 lane of bandwidth, you can have VERY nice, almost server-like boards with 16 PCIe x1 lanes at Gen 5, each with PCIe 3.x x4-like bandwidth available if the card is a PCIe 5 card. MEANING... you could have a literal **** TON of I/O managed by these chips, finally catching up to older-gen Power PC or RS/6000 hardware.
  8. According to this: https://www.anandtech.com/show/17399/amd-ryzen-7000-announced-zen4-pcie5-ddr5-am5-coming-fall
    "AM5 also brings quad-channel (128-bit) DDR5 support to AMD's platforms, which promises a significant boost in memory bandwidth."

    Is that true?! Daaaaaang, it seems like AMD brought HEDT down to the mainstream segment! All we need now are more PCIe lanes, like @Zarathustra said.
    It's true - in that each DDR5 module itself represents two 32-bit 'channels'. So with two modules you now have four 32-bit channels, which is, yes, quad-channel 128-bit. The thing is, DDR4 (and everything prior) used one 64-bit channel per module, so you had dual-channel (64-bit) 128-bit memory instead. The arithmetic is sketched below.
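
    A quick illustrative check of that channel arithmetic (Python); the dictionaries are my own framing of the standard module layouts:

    ```python
    # DDR5 splits each module into two independent 32-bit subchannels,
    # while DDR4 exposed one 64-bit channel per module.
    DDR4 = {"channels_per_module": 1, "channel_bits": 64}
    DDR5 = {"channels_per_module": 2, "channel_bits": 32}

    for name, spec in (("DDR4", DDR4), ("DDR5", DDR5)):
        modules = 2  # a typical two-DIMM desktop configuration
        channels = modules * spec["channels_per_module"]
        total_bits = channels * spec["channel_bits"]
        print(f"{name}: {channels} x {spec['channel_bits']}-bit = {total_bits}-bit bus")

    # DDR4: 2 x 64-bit = 128-bit bus
    # DDR5: 4 x 32-bit = 128-bit bus
    ```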
  9. Quoting #8: "It's true - in that each DDR5 module itself represents two 32-bit 'channels'. […]"
    Hmm, interesting.

    This ~should~ help with latency some, I guess - more parallel channels.
  10. Quoting #9: "This ~should~ help with latency some, I guess - more parallel channels."
    It actually does, apparently. In terms of 'latency', DDR5 is pretty difficult to quantify. It should be far worse than the best DDR4, if raw measurements are any indication (let alone the cycle latencies involved), but most performance testing of the better launch DDR5 kits puts them equal to DDR4 at worst, and better at many things.

    Future DDR5 dies will likely leave DDR4 far behind across the board - and by future, I mean probably the second major release from the three main manufacturers for higher-end bins.
