Image: NVIDIA

Serial leaker kopite7kimi has returned with new alleged details about NVIDIA’s rumored GeForce RTX 3080 Ti, which could be fast-tracked as a response to AMD’s surprisingly beastly Radeon RX 6000 Series.

According to kopite7kimi, the GeForce RTX 3080 Ti will boast a CUDA Core count of 10,496, which happens to be the same impressive amount as the GeForce RTX 3090.

It falls short of the BFGPU in the VRAM department, however: instead of 24 GB of GDDR6X, the GeForce RTX 3080 Ti will only have 20 GB to work with.

kopite7kimi also claims that the GeForce RTX 3080 Ti will mirror the memory speed (19 Gbps) and TGP (320 W) of the standard GeForce RTX 3080, as well as its lack of an NVLink port for SLI setups.
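The claimed memory speed makes the bandwidth easy to sanity-check. A minimal sketch, assuming (the leak doesn’t say this outright) that 20 GB of GDDR6X implies ten 2 GB chips on a 320-bit bus, the same bus width as the RTX 3080:

```python
# Rough peak-bandwidth arithmetic for the rumored specs.
# Assumption (not stated in the leak): 20 GB of GDDR6X means ten
# 2 GB chips on a 320-bit bus, matching the RTX 3080's bus width.

def gddr_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s: per-pin data rate times bus width, in bytes."""
    return data_rate_gbps * bus_width_bits / 8

# Rumored RTX 3080 Ti: 19 Gbps on an assumed 320-bit bus
print(gddr_bandwidth_gbs(19, 320))    # 760.0 GB/s, same as the RTX 3080
# RTX 3090 for comparison: 19.5 Gbps on a 384-bit bus
print(gddr_bandwidth_gbs(19.5, 384))  # 936.0 GB/s
```

In other words, if the bus-width assumption holds, the extra VRAM brings capacity but no bandwidth advantage over the vanilla RTX 3080.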

Considering that the GeForce RTX 3090 costs $1,499, nobody should be surprised if NVIDIA prices the GeForce RTX 3080 Ti close to $999. That figure is also informed by the cost of AMD’s flagship Radeon RX 6900 XT, the GPU it’ll compete with directly.


Join the Conversation

18 Comments

  1. NVIDIA is no longer to be found for anything until 2021.

    But, these so-called “leaks” (same for Intel) usually come AFTER or BEFORE a great AMD announcement / launch.

    Interesting!

  2. So, it’ll have the same CUDA core count, but less bandwidth… so it’ll be less than 10% faster for sure :). What price point do they slot this into? Above or below 6900xt? And even if this is “rushed” through, stock will be abysmal.

  3. This is more or less the card that many are waiting for; most due to stock availability of current releases of course, but personally speaking, this is [I]exactly[/I] the card I want.

  4. [QUOTE=”Stoly, post: 22786, member: 1474″]
    So the RTX 3080Ti was “confirmed” then cancelled and back again?
    [/QUOTE]

    no the 3080 with 20GB vram was cancelled and now supposedly a Ti will be announced

  5. [QUOTE=”Denpepe, post: 22792, member: 284″]
    no the 3080 with 20GB vram was cancelled and now supposedly a Ti will be announced
    [/QUOTE]
    some rumors claimed the 20gb version was the Ti

    anyway they are just rumors

  6. What’s the point of this “release” if its availability isn’t going to be until Q2 2021?? Does anyone actually THINK they’ll be able to get their hands on one at release?

  7. VRAM anxiety hitting its peak.

    If only Nvidia would have been smart enough to understand that no one cares about the ram specs, just put a lot in there!

  8. [QUOTE=”Uvilla, post: 22802, member: 397″]
    Is the 3000 series bandwidth starved then?
    Should have gone with hbm or something.
    [/QUOTE]
    The bandwidth wouldn’t be any higher with HBM2. It would actually be lower.

  9. [QUOTE=”Uvilla, post: 22802, member: 397″]
    Is the 3000 series bandwidth starved then?
    Should have gone with hbm or something.
    [/QUOTE]
    given that memory OC doesn’t increase performance much if at all, I’m gonna say no.

  10. [QUOTE=”GunShot, post: 22776, member: 1790″]
    NVIDIA is no longer to be found for anything until 2021.

    But, these so-called “leaks” (same for Intel) usually come AFTER or BEFORE a great AMD announcement / launch.

    Interesting!
    [/QUOTE]

    NVIDIA has been known to pull this sort of thing before. It’s not enough to have the better product. Business is a chess game and NVIDIA is very good at it.

    I also want to say, I actually believe this rumor. I normally wait and see with these things, but I’ve been thinking this would happen for quite some time now. A lot of the rumors we’ve heard about in the GPU industry didn’t really make sense by themselves. Add them together and they pointed to an inevitable RTX 3080 Ti with 20GB of VRAM.

  11. [QUOTE=”Uvilla, post: 22802, member: 397″]
    Is the 3000 series bandwidth starved then?
    Should have gone with hbm or something.
    [/QUOTE]

    No. The RTX 3080 is a cut down GPU with only 10GB of GDDR6X RAM to reduce costs so that they can be sold for $699 at a profit.

  12. [QUOTE=”Armenius, post: 22804, member: 180″]
    The bandwidth wouldn’t be any higher with HBM2. It would actually be lower.
    [/QUOTE]
    Somehow I didn’t realize that we hit that point; but also, HBM would increase [I]cost[/I] dramatically too. If only because the interposer will have to be massive. AMD seems to have learned that lesson the hard way…

  13. [QUOTE=”LazyGamer, post: 22858, member: 1367″]
    Somehow I didn’t realize that we hit that point; but also, HBM would increase [I]cost[/I] dramatically too. If only because the interposer will have to be massive. AMD seems to have learned that lesson the hard way…
    [/QUOTE]
    The size is definitely an issue, especially with these 800+ mm² chips. HBM requires more stacks to increase bandwidth, and the only way to beat the bandwidth of GDDR6X is to use four stacks. With the current version of HBM2 that would get you about 1200 GB/s of bandwidth. Three stacks would get you about 920 GB/s, which is just short of the 936 GB/s the 3090 has with GDDR6X.

  14. I think that the bandwidth would probably be there, but with that die size, the risk just wouldn’t be worth it.

    There’s a point where HBM should reduce costs; compact packages, simplified power and cooling planes, and so on, but I’m guessing that the lower performance simply wouldn’t be acceptable.

    The best use of HBM so far for graphics, IMO, was the module that AMD built for an Intel CPU package. I’m going to have to go look up the name of that now, since they only did it once, but such a solution perhaps slightly scaled up would make the perfect ‘APU’ product.

    Though given how AMD’s drivers, middleware, and third-party software compatibility are going, the best hope is probably for an Intel GPU, unfortunately.
