Image: AMD

We previously shared a leak from Patrick Schur suggesting that AMD’s flagship Radeon RX 6000 Series graphics card would feature a TGP of 255 W. Because NVIDIA and AMD brand their total power specs differently (AMD uses Total Board Power [TBP], while NVIDIA uses Total Graphics Power [TGP]), Igor Wallossek has published consumption figures that give us a better idea of how much power red team’s RDNA 2 cards really use.

After factoring in the components of the card aside from the GPU (e.g., memory, fans), Wallossek estimates that AMD’s Navi 21 XT model, which many have dubbed the Radeon RX 6900 XT, will feature a TBP of 320 watts.
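
To make that arithmetic concrete, below is a minimal sketch of the component-sum approach. The line items and wattages are illustrative assumptions chosen so the total lands on the reported 320 W; they are not Wallossek’s actual breakdown.

```python
# Hypothetical component-sum TBP estimate. The leaked 255 W TGP figure is
# from the article; every other wattage here is an illustrative guess.

GPU_TGP_W = 255  # leaked Navi 21 XT TGP (GPU package only, per this report)

# Assumed draws for everything else on the board (not Igor's real numbers):
board_components_w = {
    "GDDR6 memory (16 GB)": 20,
    "voltage regulation losses": 25,
    "fans": 15,
    "other (display outputs, PCB, misc.)": 5,
}

tbp_w = GPU_TGP_W + sum(board_components_w.values())
print(f"Estimated TBP: {tbp_w} W")  # -> 320 W with these assumed values
```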

That happens to be right in line with NVIDIA’s GeForce RTX 3080, which carries a 320 W TGP. The implication is that AMD’s Radeon RX 6000 Series may not be any more efficient than green team’s offerings.

Wallossek goes on to give estimates for custom (overclocked) Navi 21 XT models and for Navi 21 XL, the latter of which is believed to be the Radeon RX 6900. These will supposedly feature TBPs of 355 watts and 290 watts, respectively.

We also learn that AMD’s Radeon RX 6000 Series flagship will utilize 16 gigabytes of Samsung’s GDDR6 memory. According to Wallossek, custom RDNA 2 GPUs will be released as early as mid-November.


Join the Conversation

22 Comments

  1. This is getting more and more interesting.

    Who would have thought that in 2020 with smaller and smaller process nodes and after years of focus on mobile low power stuff, both major GPU competitors would be coming out with power monsters.

    I’m really hoping AMD hits it out of the park, because Nvidia’s launch has been an utter disaster. They could stand to be knocked down a few pegs.

    I hadn’t planned on going AMD this generation, but it may just be the first time since my Radeon HD 7970 that I do!

    I’m excited for this launch for once.

  2. [QUOTE=”Zarathustra, post: 21344, member: 203″]
    This is getting more and more interesting.

    Who would have thought that in 2020 with smaller and smaller process nodes and after years of focus on mobile low power stuff, both major GPU competitors would be coming out with power monsters.

    I’m really hoping AMD hits it out of the park, because Nvidia’s launch has been an utter disaster. They could stand to be knocked down a few pegs.

    I hadn’t planned on going AMD this generation, but it may just be the first time since my Radeon HD 7970 that I do!

    I’m excited for this launch for once.
    [/QUOTE]
    If you never want to see significant performance bumps again, we could start lowering the power consumption. But nobody wants to see that. Ampere does look to be about 40% more efficient in rasterization, but it is also about 70% faster. Had NVIDIA stuck to the status quo of 250W on the top card then people would have been complaining about the anemic 30% performance bump in addition to the supply issue.

  3. [QUOTE=”Armenius, post: 21350, member: 180″]
    If you never want to see significant performance bumps again, we could start lowering the power consumption. But nobody wants to see that. Ampere does look to be about 40% more efficient in rasterization, but it is also about 70% faster. Had NVIDIA stuck to the status quo of 250W on the top card then people would have been complaining about the anemic 30% performance bump in addition to the supply issue.
    [/QUOTE]

    Don’t get me wrong, I’m not complaining. I’m just saying I never would have predicted this a year ago.

  4. I am hoping the new AMD GPUs are as disruptive as the Ryzen family has been.

    The GPU market needs some serious competition at the high end.

    Yo Intel, I’m speaking at you too.

  5. [QUOTE=”Zarathustra, post: 21351, member: 203″]
    Don’t get me wrong, I’m not complaining. I’m just saying I never would have predicted this a year ago.
    [/QUOTE]
    Agree. It’s like we are entering the Netburst era of GPUs. Damn the wattage – full speed ahead.

  6. On the upper tiers of cards, does power consumption really matter?

    Anyone slapping an $800-1500 card in their PC shouldn’t be too worried if they need to replace their PSU.

  7. [QUOTE=”Riccochet, post: 21412, member: 4″]
    On the upper tiers of cards, does power consumption really matter?

    Anyone slapping an $800-1500 card in their PC shouldn’t be too worried if they need to replace their PSU.
    [/QUOTE]
    If I was concerned about power consumption I would stick to using the IGP. As long as the performance is there to back up the power usage and it doesn’t overload the breaker for the room the PC is in then I’m good.

  8. The terms confuse me. Total Board Power seems straightforward, as in how much the whole card eats up… but then it’s not that?
    Total Graphics Power could be anything as far as the words go. Soo… huh, what?

  9. [QUOTE=”Uvilla, post: 21428, member: 397″]
    The terms confuse me. Total Board Power seems straightforward, as in how much the whole card eats up… but then it’s not that?
    Total Graphics Power could be anything as far as the words go. Soo… huh, what?
    [/QUOTE]
    I have no idea. To the end user nothing really matters besides what the whole card is using, so the way NVIDIA does it makes more sense to me.

  10. [QUOTE=”Armenius, post: 21350, member: 180″]
    If you never want to see significant performance bumps again, we could start lowering the power consumption. But nobody wants to see that. Ampere does look to be about 40% more efficient in rasterization, but it is also about 70% faster. Had NVIDIA stuck to the status quo of 250W on the top card then people would have been complaining about the anemic 30% performance bump in addition to the supply issue.
    [/QUOTE]
    I never saw anyone complaining about Pascal using “too little power.”
    Unfortunately, the trend of doing “more with less” has pretty much died with Ampere.

  11. [QUOTE=”Armenius, post: 21417, member: 180″]
    If I was concerned about power consumption I would stick to using the IGP. As long as the performance is there to back up the power usage and it doesn’t overload the breaker for the room the PC is in then I’m good.
    [/QUOTE]

    Lower power consumption → lower heat → lower fan noise → better stability → better OC

    And cheaper too.

  12. [QUOTE=”Stoly, post: 21436, member: 1474″]
    I never saw anyone complaining about Pascal using “too little power.”
    Unfortunately, the trend of doing “more with less” has pretty much died with Ampere.
    [/QUOTE]
    At 40% better efficiency the 3080 would be just as fast as the 2080 on 150W compared to 215W. But nobody wants to buy a card in the same product tier that offers the same amount of performance just because it uses less power. But if that is what you want then go ahead and buy a 3050 when/if that comes out.
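
As an aside, the arithmetic behind that 150 W figure is easy to check if you assume performance scales linearly with power at fixed efficiency (a simplification, but it is the assumption the claim rests on):

```python
# Back-of-the-envelope check: if perf ∝ efficiency × power, a card that is
# 1.4× as efficient matches the old card's performance at 1/1.4 the power.
# The figures are the poster's claims, not measured values.
efficiency_gain = 1.40    # claimed Ampere-vs-Turing rasterization efficiency
rtx_2080_power_w = 215    # 2080 board power cited in the comment

iso_perf_power_w = rtx_2080_power_w / efficiency_gain
print(f"{iso_perf_power_w:.0f} W")  # -> 154 W, roughly the quoted 150 W
```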

  13. [QUOTE=”Armenius, post: 21441, member: 180″]
    At 40% better efficiency the 3080 would be just as fast as the 2080 on 150W compared to 215W. But nobody wants to buy a card in the same product tier that offers the same amount of performance just because it uses less power. But if that is what you want then go ahead and buy a 3050 when/if that comes out.
    [/QUOTE]
    On second thought, the RTX 3070 is cheaper, faster, and consumes less power than the RTX 2080 Ti. That’s what I want from a video card.

    So maybe Ampere does not scale very well on a bigger die. Might it benefit from moving to 7 nm?

  14. [QUOTE=”Armenius, post: 21441, member: 180″]
    At 40% better efficiency the 3080 would be just as fast as the 2080 on 150W compared to 215W. But nobody wants to buy a card in the same product tier that offers the same amount of performance just because it uses less power. But if that is what you want then go ahead and buy a 3050 when/if that comes out.
    [/QUOTE]

    Oh, that’s actually incorrect. If you’re putting in a specialized backplane to host 12 video cards, the less extreme the cooling and power draw you need for a given level of performance, the better. It all depends on your use case. But talking consumer only… for custom desktops like most of us run, yeah, power draw is kind of a wash.

    But the vast majority of users out there running laptops or prebuilt systems from Dell or HP want good performance too, without insane power needs, because the machine is going in a ventless cabinet so they don’t have to look at the thing.

  15. Those estimates just look high… a decent 120 mm case fan pulls < 2 W at full power, so how in the world did he end up with 15 W for fans? And what is “other power”? It feels more like a slide that started with 320 W and then divided up the numbers to make them add up, rather than a slide that started with “normal” values from anything AMD said and ended up at 320 W.

    Anyway, if we actually think for ourselves, it may be much different. The 5700 XT had a TDP of 180 W… and a TGP of 225 W… what AMD (and NVIDIA) call TGP is actually TOTAL GRAPHICS POWER or TOTAL BOARD POWER, not just the power required by the GPU itself. Courtesy of Igor’s Lab, June 2019…
    [ATTACH type="full" width="530px"]577[/ATTACH]

    Now, with the actual definition in mind and knowing the 5700 XT (non-OC models) pretty much drew exactly 225 W… why are we all of a sudden believing this TGP term (it’s not new, it’s been used for a while) has some magical new meaning? I mean, who knows what this thing will really draw, but his numbers seem to A) be inflated, and B) assume they chose to redefine a term that is already in use to mean something else.

    I dunno, fun to speculate, but all the baseless guessing and mixing of terms makes me think nobody has a real clue and they just make up random stuff for clicks ;).

  16. [QUOTE=”Stoly, post: 21439, member: 1474″]
    Lower power consumption → lower heat → lower fan noise → better stability → better OC
    [/QUOTE]
    Yes!
    [QUOTE=”Stoly, post: 21439, member: 1474″]
    And cheaper too.
    [/QUOTE]
    In terms of cost to support such a part, sure; but many times parts that do the same work in a lower power envelope are sold at a premium themselves.
    [QUOTE=”Grimlakin, post: 21447, member: 215″]
    But talking consumer only… for custom desktops like most of us run, yeah, power draw is kind of a wash.
    [/QUOTE]
    Plus or minus fifty watts, I agree. If it takes another hundred watts, I’d want to be more careful.

  17. This bodes well. Leaked Firestrike scores show “6800XT” trouncing the 3080 in rasterization. Unfortunately it is just about equal to a 2080 Ti in ray tracing (Port Royal).

    [URL]https://videocardz.com/newz/amd-radeon-rx-6800xt-alleged-3dmark-scores-hit-the-web[/URL]

    Can’t wait for the full reveal next week.

  18. Yeah, it will be nice to see real games and power draw. It would also be great to see the 6900 XT; keep in mind the numbers that are beating the 3090 in raster aren’t even from their top-end card.

  19. That’s interesting. I wonder if they are going the route of:

    “Screw ray tracing. We have raster performance so good that you can take some away from it for RT and it won’t even be a big deal.” 😉 Meanwhile, Nvidia is going with cores specifically designed for RT.

  20. [QUOTE=”Armenius, post: 21579, member: 180″]
    This bodes well. Leaked Firestrike scores show “6800XT” trouncing the 3080 in rasterization. Unfortunately it is just about equal to a 2080 Ti in ray tracing (Port Royal).

    [URL]https://videocardz.com/newz/amd-radeon-rx-6800xt-alleged-3dmark-scores-hit-the-web[/URL]

    Can’t wait for the full reveal next week.
    [/QUOTE]
    Actually, that’s pretty impressive considering it doesn’t have dedicated RT hardware.

  21. [QUOTE=”Grimlakin, post: 21624, member: 215″]
    That’s interesting. I wonder if they are going the route of:

    “Screw ray tracing. We have raster performance so good that you can take some away from it for RT and it won’t even be a big deal.” 😉 Meanwhile, Nvidia is going with cores specifically designed for RT.
    [/QUOTE]
    I’d be ok with that. RT right now is nothing more than fairly weak window dressing … at least until the hardware can get there.

    I’m perfectly OK with nVidia subsidizing that for me. I’ll take better rasterization until we get there.

  22. [QUOTE=”Stoly, post: 21633, member: 1474″]
    Actually, that’s pretty impressive considering it doesn’t have dedicated RT hardware.
    [/QUOTE]
    Why do people keep saying this? Is there some sort of misunderstanding about how AMD is going to do ray tracing? They do have dedicated hardware… it’s just attached to a different part of the GPU than NVIDIA’s… that doesn’t make it not dedicated, it just makes it a different design. That’s like saying a Lamborghini doesn’t have an engine because they put it in the back of the car… no, it’s still an engine, its placement is just different!

    “Essentially, AMD will be introducing what it calls a “fixed function ray intersection engine”, which is specialized hardware that only handles BVH intersection”
    [URL]https://www.techpowerup.com/256975/amd-patent-shines-raytraced-light-on-post-navi-plans[/URL]

    They have specialized hardware whose only function is to handle ray intersections… how is that not dedicated? It’s fixed function, meaning it can’t be used for anything else. Yes, they have tied it to their other processing units and it’s not a standalone thing, but that doesn’t mean it isn’t a physical hardware implementation. Yes, it’s different from NVIDIA’s approach and does share some resources rather than create another pool of data, so it’s a bit more hybrid/integrated, but that doesn’t mean it doesn’t have RT hardware.
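
For the curious, the core operation of such a “fixed function ray intersection engine” is small enough to sketch. BVH traversal boils down to running a ray/box “slab test” at every node; the Python below is a generic illustration of that math under simplifying assumptions (nonzero ray direction components), not AMD’s hardware design.

```python
# Generic ray/AABB slab test -- the per-node work a BVH traversal unit
# accelerates. Illustrative only; not AMD's implementation.

def ray_intersects_aabb(origin, direction, box_min, box_max):
    """Return True if the ray origin + t*direction (t >= 0) hits the box."""
    t_near, t_far = 0.0, float("inf")
    for axis in range(3):
        inv = 1.0 / direction[axis]  # assumes nonzero components, for brevity
        t1 = (box_min[axis] - origin[axis]) * inv
        t2 = (box_max[axis] - origin[axis]) * inv
        t_near = max(t_near, min(t1, t2))  # latest entry across all slabs
        t_far = min(t_far, max(t1, t2))    # earliest exit across all slabs
    return t_near <= t_far

# A ray heading down +X from the origin hits the box spanning x = 5..6:
print(ray_intersects_aabb((0, 0, 0), (1, 1e-3, 1e-3),
                          (5, -1, -1), (6, 1, 1)))  # True
```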
