NVIDIA GeForce RTX 3080 Founders Edition Video Card

Introduction

The successor to NVIDIA’s Turing architecture and the replacement for the GeForce RTX 2000 series is finally here.  If you are coming from an older generation such as Pascal (GeForce 10 Series), CEO Jensen Huang made it clear in his Official Launch Event for the NVIDIA GeForce RTX 30 Series that you now have permission to upgrade: this is the one you are looking for, it’s safe.  Here is the official press release, and here is NVIDIA’s full information page with all the goodies about this launch.

It is finally time to replace your GeForce GTX 1060, 1070, 1080, or even 1080 Ti.  The RTX 30 series is a much bigger upgrade to standard, rasterized gameplay performance than what the Turing generation (GeForce RTX 2000 series) brought.  For those currently on the Turing architecture, the new Ampere GPUs promise much improved, and finally useful, levels of performance with NVIDIA RTX features like Ray Tracing and DLSS.

On the bench today we have the brand-new NVIDIA GeForce RTX 3080 Founders Edition video card directly from NVIDIA, which has an MSRP of $699.  It should be noted that the Founders Edition is a unique design from NVIDIA; it is not the reference design.  There is actually a reference design with a rectangular PCB that all add-in-board partners will have access to.  Before we go into card details and specifications, let’s take a little look at the Ampere architecture that the GeForce RTX 3080 Founders Edition is based on.

Ampere

The new generation of GPUs being launched is based on the NVIDIA Ampere architecture.  This architecture continues the RTX branding because NVIDIA Ray Tracing and DLSS are two very important features of focus for this generation.  In fact, most of the improvements made directly affect Ray Tracing, AI/DLSS functions, and floating-point performance.

The Ampere architecture is built on an interesting manufacturing process this generation: NVIDIA has gone to Samsung for a custom 8nm node, and the resulting GPU packs 28 billion transistors.  This Samsung 8nm node is an enhancement of Samsung’s 10nm process family; it is not Samsung’s newer 7nm EUV process, sadly.  By comparison, TSMC’s 7nm process and Samsung’s 7nm EUV process are better, and this ultimately affects the clock speeds achievable on the Samsung 8nm node.

The result of using Samsung 8nm is that these GPUs are going to gobble power and output a lot of heat to achieve their performance targets.  The attainable clock speeds will not be what they could be on a better process, which is most likely why we see a huge bump in CUDA Cores to offset the lower GPU clock speeds: if you can’t ramp up the clock speed, you double down on cores.  There is no getting around this, and it shows in the design of the video cards we see today.  NVIDIA has cleverly managed the power and temperatures this generation with its very smart engineers and the custom Founders Edition design.  The proof is in the pudding, though, so let’s see what the performance turns out to be in this review.  By the way, we do plan to do overclocking in a separate review.

Shaders

Aside from the node, the architecture itself is very sound.  One of the main components of performance is that NVIDIA has designed a new datapath for FP32 and INT32 operations, which results in all four partitions of an SM combined executing 128 FP32 operations per clock.  This doubles FP32 throughput per SM compared to the prior Turing architecture, which executed 64 FP32 operations per clock.  This is the primary change to the programmable shader operations.

The question is, will this still speed up traditional game performance?  NVIDIA says it will, because modern graphics and compute algorithms lean heavily on FP32 execution in their shader workloads.  NVIDIA also states that Ray Tracing denoising shaders benefit from the FP32 speedup; the heavier the Ray Tracing, the bigger the performance gain versus the last generation.  As shader workloads continue to intensify, FP32 will help more.

In the past we’ve seen ATI go down this road, prioritizing floating-point performance over integer, and it didn’t work out so well for them at the time.  However, it’s a different time now, and maybe games have finally moved into an era where floating-point is more important than ever.  We will have to see.  It is because of this doubling of FP32 ops that CUDA cores are now counted in a different way.
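To put the new counting into perspective, here is a minimal sketch of the peak FP32 arithmetic, assuming the published specs of 8,704 CUDA cores at a 1,710 MHz boost clock for the RTX 3080 and 3,072 CUDA cores at 1,815 MHz for the RTX 2080 SUPER (one fused multiply-add counts as two floating-point operations):

```python
def peak_fp32_tflops(cuda_cores: int, boost_clock_ghz: float) -> float:
    """Peak FP32 throughput: cores x 2 FLOPs per FMA per clock x clock rate."""
    return cuda_cores * 2 * boost_clock_ghz / 1000.0

# Published specs, assumed here for illustration.
rtx_3080 = peak_fp32_tflops(cuda_cores=8704, boost_clock_ghz=1.710)   # ~29.8 TFLOPS
rtx_2080s = peak_fp32_tflops(cuda_cores=3072, boost_clock_ghz=1.815)  # ~11.2 TFLOPS

print(f"RTX 3080:       {rtx_3080:.1f} TFLOPS")
print(f"RTX 2080 SUPER: {rtx_2080s:.1f} TFLOPS")
```

The headline CUDA core count more than doubles precisely because NVIDIA now counts both 16-wide datapaths in each SM partition as FP32-capable cores.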

NVIDIA Ampere Architecture SM Unit Block Diagram
NVIDIA Ampere Architecture Block Diagram

Inside the SM, each of the four processing blocks has one FP32 datapath, one combined FP32+INT32 datapath, and one Tensor Core.  The prior generation had one dedicated FP32 datapath, one dedicated INT32 datapath, and two Tensor Cores per block.  That’s right, Turing had 8 Tensor Cores per SM and Ampere has 4, BUT the Tensor Cores have been updated in capability, as we’ll discuss below.

Instead, FP32 capability has been added to the INT32 datapath, and concurrent execution of floating-point and integer operations is still possible.  L1 bandwidth has also been improved, along with 33% more L1 capacity and 2x the cache partition size.
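As a rough way to see what the reworked datapath means for a mixed shader workload, here is a minimal sketch under a deliberately simplified issue model: assume a Turing SM partition can retire 16 FP32 plus 16 INT32 operations per clock, while an Ampere partition can retire 16 FP32 from its dedicated path plus 16 of either type from the shared FP32+INT32 path (scheduling, latency, and memory effects are all ignored):

```python
def cycles_per_partition(fp32_ops: int, int32_ops: int, ampere: bool) -> int:
    """Estimate cycles for one SM partition to retire a block of instructions."""
    cycles = 0
    while fp32_ops > 0 or int32_ops > 0:
        # Dedicated FP32 path: 16 FP32 per clock on both architectures.
        fp32_ops = max(0, fp32_ops - 16)
        if ampere:
            # Shared path: 16 INT32, or 16 more FP32 if no INT32 is pending.
            if int32_ops > 0:
                int32_ops = max(0, int32_ops - 16)
            else:
                fp32_ops = max(0, fp32_ops - 16)
        else:
            # Turing: the second path is INT32-only.
            int32_ops = max(0, int32_ops - 16)
        cycles += 1
    return cycles

# NVIDIA's oft-quoted shader mix: roughly 36 INT32 ops for every 100 FP32 ops.
print("Turing:", cycles_per_partition(100, 36, ampere=False), "cycles")  # 7
print("Ampere:", cycles_per_partition(100, 36, ampere=True), "cycles")   # 5
```

The more FP32-heavy the instruction mix, the bigger Ampere’s advantage, which is exactly the argument NVIDIA makes about modern shader workloads.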

GeForce RTX 3080 Block Diagram

Going by the numbers, you can see how the GeForce RTX 2080 SUPER GPU block diagram directly compares to the GeForce RTX 3080 GPU block diagram.

RT Cores 2nd Generation

The RT Cores have been beefed up; Ampere uses NVIDIA’s 2nd generation RT Cores, which allow 2x the triangle intersection rate.  Where the prior generation’s RT Cores could do 34 RT TFLOPS, the Ampere architecture can do 58 RT TFLOPS.  The new RT Cores also allow for concurrent Ray Tracing and shading operations.

There are also more of them in each video card this generation.  With Ampere, RT Cores now accelerate Ray Tracing with motion blur.  The pipeline adds a unit that interpolates triangle positions in time, something the prior generation did not have.  The end result is that Ray Tracing with motion blur is now up to 8x faster; motion blur is no longer a bottleneck when used with Ray Tracing.
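To illustrate what interpolating triangle positions in time means, here is a minimal sketch, not NVIDIA’s hardware algorithm, of evaluating a moving triangle at the time stamp carried by a ray; in a motion-blurred frame each ray samples a random moment within the shutter interval:

```python
import random

Vec3 = tuple[float, float, float]

def lerp(a: Vec3, b: Vec3, t: float) -> Vec3:
    """Linearly interpolate between two vertex positions."""
    return tuple(a[i] + (b[i] - a[i]) * t for i in range(3))

def triangle_at_time(v_start: list[Vec3], v_end: list[Vec3], t: float) -> list[Vec3]:
    """Triangle vertices at normalized shutter time t in [0, 1]."""
    return [lerp(a, b, t) for a, b in zip(v_start, v_end)]

# A triangle translating along +x during the shutter interval.
start = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
end   = [(2, 0, 0), (3, 0, 0), (2, 1, 0)]

ray_time = random.random()  # each ray carries its own time sample
print(triangle_at_time(start, end, ray_time))
```

The prior generation had no dedicated hardware for this per-ray time evaluation; Ampere’s RT Cores do the interpolation in hardware, which is where the up-to-8x claim comes from.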

Tensor Cores 3rd Generation

The Tensor Cores have also been beefed up; Ampere uses NVIDIA’s 3rd generation Tensor Cores, which allow up to 2x the math throughput for sparse matrices.  The new Tensor Cores can skip the less important DNN weights once they have been pruned to zero, processing the resulting sparse network at twice the rate of Turing.  Where the prior generation Turing architecture could do 89 Tensor TFLOPS, Ampere can do 238 Tensor TFLOPS.  Tensor Cores are used for DLSS (Deep Learning Super Sampling) and other machine learning work.

While the GeForce RTX 3080 (in this example) uses fewer Tensor Cores per SM (4) than the GeForce RTX 2080 SUPER (8), the Ampere Tensor Cores with sparsity exceed what Turing could do.
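The sparsity being exploited here is a 2:4 structured pattern: in every group of four weights, two are pruned to zero, which is what lets the hardware skip half of the math.  Here is a minimal NumPy sketch of that pruning step (the selection-by-magnitude rule is the common approach, not necessarily NVIDIA’s exact tooling):

```python
import numpy as np

def prune_2_of_4(weights: np.ndarray) -> np.ndarray:
    """Zero out the two smallest-magnitude weights in every group of four."""
    w = weights.reshape(-1, 4).copy()
    drop = np.argsort(np.abs(w), axis=1)[:, :2]   # two least important per group
    np.put_along_axis(w, drop, 0.0, axis=1)
    return w.reshape(weights.shape)

weights = np.random.randn(2, 8).astype(np.float32)
sparse = prune_2_of_4(weights)
print(sparse)                # exactly half of the entries are now zero
print((sparse == 0).mean())  # -> 0.5
```

The pruned network is then fine-tuned to recover accuracy, and the sparse Tensor Cores run it at roughly twice the dense rate.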

Power Rails

NVIDIA Architecture Performance per Watt and Graphics Power Rail and Memory Power Rail GPU

We wanted to show this presentation slide not for the performance-per-Watt claim, but because with Ampere there are now separate power rails: the graphics core is on its own power rail, while the memory is on its own rail within the GPU.

GDDR6X

NVIDIA Ampere Architecture GDDR6X

NVIDIA has introduced a new memory technology specifically for the GeForce RTX 3090 and GeForce RTX 3080: GDDR6X.  GDDR6X uniquely uses a new signaling technology called PAM4, the first time it has been used on graphics memory.  PAM4 signals with four voltage levels instead of two, and combined with a new coding scheme called Maximum Transition Avoidance (MTA) coding and new algorithms for training and adaptation, very high memory data rates can be maintained with less interference.  Expect to see effective speeds as high as 19.5Gbps (maybe even higher), but at the cost of power; this new memory is power-hungry and adds to the total board power.
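Here is a minimal sketch of why PAM4 matters and what the resulting bandwidth looks like, assuming the RTX 3080’s published 19Gbps per-pin data rate and 320-bit memory bus; the two-bits-per-symbol mapping is the defining property of PAM4, while the exact coding GDDR6X uses on the wire is an assumption here:

```python
# PAM4 carries 2 bits per symbol by using 4 voltage levels, moving twice the
# data per transfer compared to 2-level (NRZ) signaling at the same symbol rate.
PAM4_LEVELS = {(0, 0): 0, (0, 1): 1, (1, 1): 2, (1, 0): 3}  # assumed Gray-style mapping

def encode_pam4(bits: list[int]) -> list[int]:
    """Map a bit stream to 4-level symbols, two bits at a time."""
    assert len(bits) % 2 == 0
    return [PAM4_LEVELS[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

print(encode_pam4([0, 0, 1, 1, 1, 0, 0, 1]))  # -> [0, 2, 3, 1]

# Memory bandwidth from the published RTX 3080 numbers.
data_rate_gbps = 19      # effective per-pin data rate
bus_width_bits = 320     # RTX 3080 memory interface
print(f"{data_rate_gbps * bus_width_bits / 8:.0f} GB/s")  # -> 760 GB/s
```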

HDMI 2.1 and AV1

Other enhancements to the Ampere architecture include support for full AV1 hardware decode, which means 8K 60FPS video playback.  These are also the first GPUs to support HDMI 2.1, enabling 8K at 60Hz or 4K at 120Hz over a single cable.
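For a sense of why HDMI 2.1 is needed for these modes, here is a minimal back-of-the-envelope sketch of the raw pixel data rates, ignoring blanking and link overhead and assuming 10 bits per color channel:

```python
def raw_gbps(width: int, height: int, hz: int, bits_per_channel: int = 10) -> float:
    """Uncompressed RGB pixel data rate in Gbps (no blanking or link overhead)."""
    return width * height * hz * bits_per_channel * 3 / 1e9

print(f"4K @ 120Hz: {raw_gbps(3840, 2160, 120):.1f} Gbps")  # ~29.9 Gbps
print(f"8K @ 60Hz:  {raw_gbps(7680, 4320, 60):.1f} Gbps")   # ~59.7 Gbps

# HDMI 2.0 tops out at 18 Gbps and HDMI 2.1 at 48 Gbps, so 4K120 needs HDMI 2.1,
# and 8K60 additionally leans on Display Stream Compression (DSC).
```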

RTX IO

Another new technology NVIDIA announced, which we think will really improve game loading and streaming, is RTX IO.  In a traditional uncompressed read, data travels from your storage device over PCIe to the CPU and system memory, then back out over the same bus to the GPU and finally into GPU memory.  This consumes a lot of raw CPU processing power and I/O bandwidth.  To help, games have been using compressed reads.  While this helps a lot, the system is still bottlenecked by having to go through the CPU and system memory to compress and decompress data, and the data still has to travel back to the GPU and GPU memory under heavy CPU load.  The CPU cannot keep up with expanding demands and storage device speeds.

What RTX IO does is remove these bottlenecks by shifting the load directly to the GPU and GPU memory, removing the CPU and system RAM from the equation.  The GPU can decompress data at tremendous speeds; NVIDIA claims it takes only a small fraction of the GPU to do what 24 CPU cores can do.

This does not mean games can use your SSD as “RAM” or “VRAM” storage like the PlayStation 5.  It simply means the decompression of data can be offloaded to the GPU and sent directly to VRAM, bypassing the bottlenecks of CPU burden and I/O.  NVIDIA states that this will not take away from the GPU’s performance accelerating games; the GPU uses very few resources to do this and can do it very quickly, unlike the CPU.  It will alleviate burdens in the system, not add to them.
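As a back-of-the-envelope illustration of why the traditional path chokes, here is a minimal sketch of the decompression math; the drive speed, compression ratio, and per-core decompression rate are assumptions for illustration, not measured or NVIDIA-published figures:

```python
# Assumed numbers: a fast PCIe 4.0 NVMe drive and a 2:1 compression ratio.
SSD_READ_GBPS = 7.0              # GB/s of compressed data coming off the drive
COMPRESSION_RATIO = 2.0          # uncompressed bytes / compressed bytes
CPU_DECOMP_GBPS_PER_CORE = 0.6   # assumed per-core software decompression rate

def cpu_cores_needed() -> float:
    """Cores needed to decompress at the full speed of the drive (traditional path)."""
    uncompressed_gbps = SSD_READ_GBPS * COMPRESSION_RATIO
    return uncompressed_gbps / CPU_DECOMP_GBPS_PER_CORE

print(f"CPU cores to keep pace with the SSD: ~{cpu_cores_needed():.0f}")  # ~23
# With RTX IO, the compressed stream is DMA'd straight into GPU memory and the
# GPU decompresses it there, so the CPU and the extra trip through system RAM
# drop out of the path entirely.
```

With these assumed numbers the answer lands in the same two-dozen-core ballpark NVIDIA cites, which is the whole point of moving decompression onto the GPU.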

Now, this technology is a bit down the road, as we wait for Microsoft’s DirectStorage API to come to Windows next year.  NVIDIA RTX IO plugs into the DirectStorage API, and games must support it.  However, this technology could really be a big game-changer for PC game loading and streaming, and NVIDIA seems to be at the forefront of it.

NVIDIA Reflex

Finally, NVIDIA is also introducing NVIDIA Reflex, a technology to reduce input lag while gaming.  Games that support it will have an option you simply turn on.  One area where this technology is really going to show the most benefit is on displays with insanely high refresh rates, like the new 360Hz panels coming out.  In addition, there will be hardware analysis support built into G-SYNC monitors that lets you measure latency in a new and unique way: with the Reflex Latency Analyzer you actually plug your mouse into the monitor and measure system latency in ways never done before.  The Reflex Low Latency SDK itself, however, is software enabled in-game in supported titles, and you connect your mouse normally to the computer.

Summary

So, at the end of the day, there are three primary improvements in Ampere over Turing.  The “shaders,” or CUDA Cores, have been improved by doubling FP32 operations inside the SM.  The Ray Tracing Cores have been improved by doubling triangle intersection rates, allowing concurrent Ray Tracing and shader operations, and removing motion blur as a bottleneck.  The Tensor Cores have been improved by doubling throughput on sparse matrices whose less important DNN weights have been pruned away.

Overall, there are also more RT Cores and more Tensor Cores on each GPU.  It seems NVIDIA is doubling down on Ray Tracing and AI performance, such as DLSS, with Ampere.  Ampere is really shaping up to be a floating-point powerhouse.

Brent Justice

Brent Justice has been reviewing computer components for 20+ years. Educated in the art and method of the computer hardware review, he brings experience, knowledge, and hands-on testing with a gamer-oriented...

Join the Conversation

75 Comments

  1. Ok.. So basically if you’re still at 1080p (or even 1440p), this card is overkill. I’ve been holding off on moving to 4k for the longest time now, and it looks like my patience will be rewarded. My plan is to get a couple of 4k monitors, and then this card would be a nice driver for those.

    Brent, excellent review as always! Thank you FPS Review for tremendous coverage!

  2. So it’s true that those leaked benches of Shadow of the Tomb Raider and Far Cry: New Dawn were worst-case scenarios for this comparison. Looks like on average the 3080 is around 70% faster than the 2080 at 4K in rasterization and 30% faster than the 2080 Ti. Add ray tracing and it almost hits the "double performance" claim NVIDIA was making. That is almost exactly the same performance improvement we saw with the Pascal generation.

    Can’t wait to see the 3090 review 😁

  3. It seems to me that the increase in power consumption is larger than the increase in performance. I’d have expected the opposite.

    Too bad Control is still unplayable in 4K with ray tracing, well maybe with the 3090 :D

  4. It seems to me that the increase in power consumption is larger than the increase in performance. I’d have expected the opposite.

    Too bad Control is still unplayable in 4K with ray tracing, well maybe with the 3090 :D

    +29% power consumption compared to the 2080, with a performance increase of +70%. The increase in performance is larger than the increase in power. Not quite the 1.7x efficiency NVIDIA stated, but close.

    Control is quite playable with ray tracing using DLSS as pointed out in the review, and it looks better than native thanks to DLSS 2.0.

    1. Make that a 90% promised energy efficiency increase. Seems to me that if NVIDIA had gone 7nm, it would have been pretty much spot on.

      Anyway I think it’s still great. Huge performance increase for a moderate increase in power draw. I’ll take that. Especially after watching AMD get nowhere close to NVIDIA’s power efficiency. Hopefully the tables will finally turn with the RX 6000.

      Great time to be a gamer.

  5. +29% power consumption compared to the 2080, with a performance increase of +70%. The increase in performance is larger than the increase in power. Not quite the 1.7x efficiency NVIDIA stated, but close.

    Control is quite playable with ray tracing using DLSS as pointed out in the review, and it looks better than native thanks to DLSS 2.0.

    Mind you the +29% is compared to the entire system wattage, not the GPU. If you could isolate the GPU power consumption the percentage increase would be much larger. Probably still not 70%, but close to 50.

    With DLSS, Control is already playable on the 2080 Ti; I was of course referring to running without DLSS. With the claimed double ray tracing performance, the 3080 should be able to handle it without DLSS.

  6. Who cares about power draw? Unless you’re trying to use a PSU that’s on the ragged edge of being sufficient.
  7. Who cares about power draw? Unless you’re trying to use a PSU that’s on the ragged edge of being sufficient.

    I was talking with a lot of the guys I play Destiny 2 with last night. Most of them are running units below 700w with stock Core i9-9900K’s and either RTX 2080 or RTX 2080 Ti cards. Any of them thinking about upgrading to the 30 series are contemplating PSU upgrades along with the graphics cards. Most are looking at the 3080 rather than the 3090.

  8. First site I went to when I woke up this morning to see this review and was not disappointed. Answered all my questions and I’m actually surprised at how well this thing performs.

    The thing that surprises me most is the 850 watt PSU recommendation! I figured it was just marketing BS when Nvidia first announced they recommended a 750 at minimum and figured I could run my EVGA 750 a little longer. Glad I waited for the review cause I never would’ve guessed I’d need to account for that in my budget too. But it’s an excuse to buy something new so I’m sure the wife won’t mind an extra $125 on the old credit card lol.

    I also don’t think this would be overkill for 1080p if you’re running a 240 Hz monitor. Judging by the benchmarks this would be about perfect. My rig currently runs Breakpoint at around 100-110 fps with mostly high settings but the 3080 would likely get over 140+ with maxed settings. I think that would be worth the investment.

    I’m gonna buy mine at Best Buy. My local store usually has a good selection of PC parts and I like being able to have an actual store I can take something back to if it craps the bed on me.

  9. I’m planning on picking one up (if I get lucky tomorrow) and running it on my 650 watt Seasonic Titanium. It’s currently running a 2700X, 1080 Ti, and 1660 Super, so I’m not concerned about replacing both video cards with one 3080. I’ll upgrade the PSU to a 1000W when the next gen Ryzen CPUs drop.

    Now, I also intend on trying to get a 3090 – if I end up with both, I’ll sell the 3080. I’m a little more wary of the 3090 on that 650w PSU, but the difference is only like 30 watts. I’ll just wait to OC whichever card until I upgrade the PSU.

  10. Honestly not as fast as I had hoped but not bad in any way shape or form. I honestly would not understand many 2080 Ti owners making the jump to this and so we wait on the 3090 Reviews!
    Great Review as usual Brent!
  11. Honestly not as fast as I had hoped but not bad in any way shape or form. I honestly would not understand many 2080 Ti owners making the jump to this and so we wait on the 3090 Reviews!
    Great Review as usual Brent!

    Out of curiosity I checked the local classifieds for 2080 Ti listings, and there aren’t that many, and half of them are listed around $1000, with the lowest at $700. Keep on dreaming, boys. (I was thinking about picking up a second 2080 Ti – not anymore)

  12. You may have done a few spot tests and found no real difference, but did changing between PCIe modes 3.x and 4.x make any difference to the card’s performance?
  13. Nice review and the card looks nice too. The only issue I have is the performance is a bit lower than it could be due to the use of the 3700X system. I know AMD gives you PCIe 4.0, but it lags about 7-8% (3900XT does) from the Intel CPU’s. A dual test setup would be nice.

  14. Nice review and the card looks nice too. The only issue I have is the performance is a bit lower than it could be due to the use of the 3700X system. I know AMD gives you PCIe 4.0, but it lags about 7-8% (3900XT does) from the Intel CPU’s. A dual test setup would be nice.

    https://www.techpowerup…-rtx-3080-amd-3900-xt-vs-intel-10900k/26.html

    AMD still lags a bit in IPC compared to Intel, so in resolutions that are more CPU-dependent like 1920×1080 you’re going to see a big difference. The more the GPU struggles, the more that difference disappears. It’s practically non-existent at 4K.

  15. AMD still lags a bit in IPC compared to Intel, so in resolutions that are more CPU-dependent like 1920×1080 you’re going to see a big difference. The more the GPU struggles, the more that difference disappears. It’s practically non-existent at 4K.

    I thought in the current generation that IPC lead was actually on the Ryzen CPU’s side.

    At least according to this techspot article..

    https://www.techspot.com/article/1876-4ghz-ryzen-3rd-gen-vs-core-i9/

    Not saying it’s a huge difference though Intel does lead still in raw GHZ throughput.

  16. I thought in the current generation that IPC lead was actually on the Ryzen CPU’s side.

    At least according to this techspot article..

    https://www.techspot.com/article/1876-4ghz-ryzen-3rd-gen-vs-core-i9/

    Not saying it’s a huge difference though Intel does lead still in raw GHZ throughput.

    In the context of gaming, which is what we’re talking about with the 3080, It is still Intel. This is at 1280×720, so basically as much CPU dependence as you can get on a modern game. No question that Ryzen is better in other types of applications, especially multithreaded ones.

    View attachment 429

  17. I just hope my 650W PSU can handle it; it’s a Seasonic Titanium one. Otherwise I will need to swap in the 850 from my X299
  18. In the context of gaming, which is what we’re talking about with the 3080, It is still Intel. This is at 1280×720, so basically as much CPU dependence as you can get on a modern game. No question that Ryzen is better in other types of applications, especially multithreaded ones.

    View attachment 429

    To compare IPC you’d have to lock both CPUs to the same clocks. This just tells us that a higher-clocked CPU is faster.

  19. The in-depth review I’ve been looking for! Thanks for making it pretty straightforward and easy to understand even for a newbie such as myself.

    Based on Microsoft FS performance, I’m really hoping MS does continue to optimize the game’s performance. Until then, here’s hoping the new architecture from AMD improves things a bit. Otherwise I’m going to have to start selling blood to buy a Threadripper!

  20. The in-depth review I’ve been looking for! Thanks for making it pretty straightforward and easy to understand even for a newbie such as myself.

    Based on Microsoft FS performance, I’m really hoping MS does continue to optimize the game’s performance. Until then, here’s hoping the new architecture from AMD improves things a bit. Otherwise I’m going to have to start selling blood to buy a Threadripper!

    M$ has always developed new releases of Flight Sim to be ahead of current GPU performance levels. It’s by design. So the game has plenty of room to grow over many years. FSX was the same way, as was FS98.

  21. The in-depth review I’ve been looking for! Thanks for making it pretty straightforward and easy to understand even for a newbie such as myself.

    Based on Microsoft FS performance, I’m really hoping MS does continue to optimize the game’s performance. Until then, here’s hoping the new architecture from AMD improves things a bit. Otherwise I’m going to have to start selling blood to buy a Threadripper!

    You’re in luck! New patch released today addresses CPU performance impact by preventing interruption of rendering threads, among other things.

    https://www.flightsimulator.com/patch-version-1-8-3-0-is-now-available/

  22. Yeah, it’s time to hand down the 980Ti to the kids computer and get a 3080…if I can manage to actually place an order before they go OoS.
  23. It seems to me that the increase in power consumption is larger than the increase in performance. I’d have expected the opposite.

    Too bad Control is still unplayable in 4K with ray tracing, well maybe with the 3090 :D

    Well, Control has DLSS 2.0 now right?

    From the comparisons I have seen, DLSS 2.0 really doesn’t result in much of an image quality degradation, and sometimes even looks better, so as long as the 3090 can handle it with DLSS on, I’ll consider that a success.

  24. VP where I work has an Aorus Waterforce 2080 AIO with a 240mm rad. Says it never gets over 60c. And he’s had no issues with it, at least for the last year. It’s just a closed loop system. They work quite well.

    I have dual 360 rads, one 25mm and one 35mm. Bring on the heat.

  25. VP where I work has an Aorus Waterforce 2080 AIO with a 240mm rad. Says it never gets over 60c. And he’s had no issues with it, at least for the last year. It’s just a closed loop system. They work quite well.

    I have dual 360 rads, one 25mm and one 35mm. Bring on the heat.

    Yeah, AIOs usually get you into the 60s overclocked and loaded up.

    My WC loop kept my Pascal Titan under 40C overclocked and loaded up. That’s my target because under 40C I seem to have been getting better boost clocks.

    Question is if the temp calculus needs to change considering the massive thermal envelopes of these things.

  26. great review…very detailed…I love the games you tested and the fact that you enabled things like AA, PhysX, Hairworks etc…so you pretty much maxed out the graphics…lots of other 3080 reviews disabled a lot of the advanced graphics settings

    me personally I’m waiting for the 3080 20GB variant…I’m in the process of building a new Zen 3 system so I can afford to be patient

  27. Do Metro Exodus and SotTR still use DLSS 1? If so, do they still exhibit the same issues, like blur and smear?

    I really hope DLSS2.x becomes a trend. By now there should be dozens of games and patches for DLSS, but still only a handful of games support it, and only a couple of them actually look awesome.
    /rant

  28. Honestly not as fast as I had hoped but not bad in any way shape or form. I honestly would not understand many 2080 Ti owners making the jump to this and so we wait on the 3090 Reviews!
    Great Review as usual Brent!

    Even if this were a huge upgrade over the RTX 2080 Ti, I’d still wait for the 3090. The only way I’d buy a 3080 is if the 3090 was less than 7% faster than a 3080 at twice the price or something stupid like that.

  29. Even if this were a huge upgrade over the RTX 2080 Ti, I’d still wait for the 3090. The only way I’d buy a 3080 is if the 3090 was less than 7% faster than a 3080 at twice the price or something stupid like that.

    20% faster for the price would still be a big NO NO for me even if I could spare the cash. But for people that are already used to paying $1,000+ for a card I guess I can see that happening. And people that already have an RTX 2080 Ti have nowhere else to go.

  30. 20% faster for the price would still be a big NO NO for me even if I could spare the cash. But for people that are already used to paying $1,000+ for a card I guess I can see that happening. And people that already have an RTX 2080 Ti have nowhere else to go.

    Enthusiast level cards have always been like this since at least the 8800 Ultra. The 8800 Ultra was about 48% more expensive than an 8800 GTX for 10% more performance.

    Big variable that needs to be considered with the 3090 vs. the 3080, though, is the amount of memory. GDDR6X is supposedly twice as expensive as GDDR6 for 8Gb chips in bulk (around $24/chip compared to $12/chip). That would make the 24GB on the 3090 $576 vs. $240 for the 10GB on the 3080. This is not the only factor accounting for the price difference, but it is a big one.

    If you want the fastest single gaming card available and have the money to buy it, though, then why not.

  31. Flight Simulator 2020 Re-Testing with New Patch
    9/18/2020

    Thanks to Armenius, I became aware of this new patch for Flight Sim 2020, which has many performance adjustments. Therefore I decided to re-test the game with the new patch on the RTX 3080 (with the same driver) to see if there are any changes. These are my results.

    1440p Ultra – No Change In Performance
    1440p High-End – FPS went from 46 FPS to now 47.8 FPS AVG

    4K Ultra – FPS went from 29 FPS to 31.3 FPS AVG
    4K High-End – FPS went from 42.6 FPS to 46 FPS AVG

    The end result is that in the "High-End" Quality Preset, I saw a larger performance bump with the new patch. 4K "High-End" was the biggest performance leap.

    In the "Ultra" Quality Preset I only saw a very small increase at 4K "Ultra". However, at 1440p "Ultra" there was no difference.

    These are by no means game-changing numbers here, but it is good to see 4K "High-End" performance increasing, I just wish "Ultra" Quality performance increased more.

    It also seems that, overall, there are bigger changes at 4K than at 1440p.

  32. Thanks, @Brent_Justice for such an in-depth and great review. As always, feel like I’ve been taken back to school. Now just to retain it. Had to wait until tonight until I had time to really read through it.
  33. Enthusiast level cards have always been like this since at least the 8800 Ultra. The 8800 Ultra was about 48% more expensive than an 8800 GTX for 10% more performance.

    Big variable that needs to be considered with the 3090 vs. the 3080, though, is the amount of memory. GDDR6X is supposedly twice as expensive as GDDR6 for 8Gb chips in bulk (around $24/chip compared to $12/chip). That would make the 24GB on the 3090 $576 vs. $240 for the 10GB on the 3080. This is not the only factor accounting for the price difference, but it is a big one.

    If you want the fastest single gaming card available and have the money to buy it, though, then why not.

    I’m aware of the law of diminishing returns. I always try to get the best bang for the buck, which for now IMO is the RTX3080, but will surely get replaced soon by the RTX3070 or RX6000.

    But I agree, whatever makes you happy no matter the cost it’s fine. Probably if I had the cash, I’d eat my words and end up getting one too 😁😁

  34. But I agree, whatever makes you happy no matter the cost it’s fine. Probably if I had the cash, I’d eat my words and end up getting one too

    I used to mock those who bought Titans for gaming back in the Maxwell days. Now, I just start saving for whatever the next biggest hammer will be right after a release happens. Best value? Of course not. Best experience? Better believe it. It’s also nice seeing these top tier cards usually age gracefully and knowing you’re going to get at least 2 years of top-end performance out of them. My first x80 Ti was a 1080 Ti and it’s still chugging away 4+ years later at a reasonable level. The 2080 Ti I have now will end up in another rig and still be decent for 1440p for a year or two longer. Initial sticker shock sucks, people jump on the hate trains, but 3-4 years down the road that same card is doing OK and I think to myself what a great ride it’s been. :giggle:

  35. Even if this were a huge upgrade over the RTX 2080 Ti, I’d still wait for the 3090. The only way I’d buy a 3080 is if the 3090 was less than 7% faster than a 3080 at twice the price or something stupid like that.

    Yeah and we both know a 3080Ti will likely come out.

  36. I don’t think it will. I think it will be a 3080 Super. It seems like NVIDIA is getting away from the "Ti" naming scheme.

    I think he means a faster/beefier version of the 3080; whether it’s called Super, Ti, hyper, ultra, or jumbo is irrelevant.

  37. I think he means a faster/beefier version of the 3080; whether it’s called Super, Ti, hyper, ultra, or jumbo is irrelevant.

    Yeah, from memory of ‘Ti’ and ‘Super’ releases, the only real common thread is that they have better specifications than whatever they are a ‘Ti’ or ‘Super’ of. Could be the same GPU die with faster memory, more memory, more compute resources unlocked, the next largest GPU die, or some combination.

  38. I have to say I’m a little bit disappointed in the DLSS+RTX performance hit, as it seems to be comparatively the same as Turing (about 10-20% depending on the game @ 4K). I’m getting this figure using 1440p RTX performance as a reference, since this is how it’s rendered under DLSS.

    I was expecting much better performance, as Ampere Tensor Cores are supposedly 3x faster and RT Cores 2x faster than Turing’s. Some untapped potential, maybe?

  39. I have to say I’m a little bit disappointed in the DLSS+RTX performance hit, as it seems to be comparatively the same as Turing (about 10-20% depending on the game @ 4K). I’m getting this figure using 1440p RTX performance as a reference, since this is how it’s rendered under DLSS.

    I was expecting much better performance, as Ampere Tensor Cores are supposedly 3x faster and RT Cores 2x faster than Turing’s. Some untapped potential, maybe?

    I agree.
    But also pretty cool that $699 is legit 4K60fps with just about every game out there.
    Hopefully they can tweak things a bit more down the line.

  40. I have to say I’m a little bit disappointed in the DLSS+RTX performance hit, as it seems to be comparatively the same as Turing (about 10-20% depending on the game @ 4K). I’m getting this figure using 1440p RTX performance as a reference, since this is how it’s rendered under DLSS.

    I was expecting much better performance, as Ampere Tensor Cores are supposedly 3x faster and RT Cores 2x faster than Turing’s. Some untapped potential, maybe?

    The full capability for RT will not be seen with the older RTX titles since they used DXR 1.0. The Wolfenstein update, which showed a much better spread between the 2080 Ti and 3080, better represents the 3080’s RT potential; it’s pretty surely using the much better parallel ability of DXR 1.1’s enhancements.

    I put a decent amount of effort into obtaining this card (Nvidia, Best Buy); I really wanted the FE. Nvidia failed to deliver or sell me one. Been trying since without any luck. If Nvidia cannot take care of their customers then it’s best to move on.

  41. The full capability for RT will not be seen with the older RTX titles since they used DXR 1.0. The Wolfenstein update, which showed a much better spread between the 2080 Ti and 3080, better represents the 3080’s RT potential; it’s pretty surely using the much better parallel ability of DXR 1.1’s enhancements.

    I put a decent amount of effort into obtaining this card (Nvidia, Best Buy); I really wanted the FE. Nvidia failed to deliver or sell me one. Been trying since without any luck. If Nvidia cannot take care of their customers then it’s best to move on.

    My fear, regardless of the manufacturer, is that we’ve gotten to a point where scripts rule over the consumer base. Anyone with enough capital funds could control the market as long as supply is limited at release. There’s no penalty for the bot-world to just buy up anything and everything at launch, as long as there is demand for their resale.

  42. When should we expect a review of the 3090? I am actually contemplating waiting for the 20GB 3080 depending on how the 3090 performs. For the first time in history I feel like the 3080 is "enough" for my gaming resolution and needs, and that paying double for the 3090 is a waste of money :eek:.
  43. When should we expect a review of the 3090? I am actually contemplating waiting for the 20GB 3080 depending on how the 3090 performs. For the first time in history I feel like the 3080 is "enough" for my gaming resolution and needs, and that paying double for the 3090 is a waste of money :eek:.

    From what I can tell, whenever the NDA lifts it’ll probably be a smaller selection of reviewers than what was seen for the 3080. I’d also guess the lift will happen no later than the card on sale date of 9/24 @ 6AM PDT. At this point, we don’t have one nor do we have any confirmed in the pipeline. As with the 3080, I’ll be F5’ing to try to get one when they launch and we’ll continue to shake down manufacturers for one…

  44. When should we expect a review of the 3090? I am actually contemplating waiting for the 20GB 3080 depending on how the 3090 performs. For the first time in history I feel like the 3080 is "enough" for my gaming resolution and needs, and that paying double for the 3090 is a waste of money :eek:.

    I’m kind of feeling the same. Haven’t had this much ambivalence in a while. For me the real decision will be pricing. If it costs over $1000 then I’ll still go for the 3090. Whether or not it’s GDDR6X could also be a factor.

  45. When should we expect a review of the 3090? I am actually contemplating waiting for the 20GB 3080 depending on how the 3090 performs. For the first time in history I feel like the 3080 is "enough" for my gaming resolution and needs, and that paying double for the 3090 is a waste of money :eek:.

    It is certainly a weird position to be in.

    While there are uses for the, uh, ‘excess’ performance, they don’t seem to merit significant increases in costs.

    Feels kind of like we’re on a divide, where more performance isn’t really useful for pure rasterization on desktops, but also isn’t nearly enough for, say, VR or RT (or both).

  46. It is certainly a weird position to be in.

    While there are uses for the, uh, ‘excess’ performance, they don’t seem to merit significant increases in costs.

    Feels kind of like we’re on a divide, where more performance isn’t really useful for pure rasterization on desktops, but also isn’t nearly enough for, say, VR or RT (or both).

    Only thing really holding me back from a 3080 right now is the 10GB memory simply because I’ve seen the 11GB on my 2080 Ti maxed out in a few games at 4K resolution. And it wasn’t simply usage; in those cases I experienced degraded performance before turning down texture quality or other settings to reduce the VRAM needed. Most games fall into the 6-8GB range right now, but I’m just worried that more games will be coming down the pipe that start running into limitations with 10GB. I do understand that they probably could not hit their $700 target if they added more, though. I can see the 20GB version being $1,000 or close to it unless Micron will have 16Gb chips ready when it hits production.

  47. 10 is still more than 8, remember, the 3080 isn’t an upgrade from the 2080 Ti, it’s an upgrade from the 2080/SUPER, if you have a 2080 Ti I’d recommend keeping it, the 3090 is really closer to the 2080 Ti replacement, but maybe there will be a middle card in the future, or there is of course the more expensive 20GB 3080 option

    as for games utilizing more VRAM, well, I’m not sure what the trend will be, if DLSS is used more, that’s the answer to the VRAM capacity problem, it will alleviate so much pressure on capacity when used

    games are also constantly developing new compression methods, and ways to load balance everything correctly; with RTX I/O and Microsoft DirectStorage, decompression should be a lot better, and again the VRAM capacity issue won’t be such a problem

    I know your concerns for sure, and it will really all depend on the games themselves, but I do implore you, if a new game supports DLSS, give it a try. I’m actually liking the technology now that I’ve used it, and DLSS 2.0 gives you good image quality and a perf increase

  48. 10 is still more than 8, remember, the 3080 isn’t an upgrade from the 2080 Ti, it’s an upgrade from the 2080/SUPER, if you have a 2080 Ti I’d recommend keeping it, the 3090 is really closer to the 2080 Ti replacement, but maybe there will be a middle card in the future, or there is of course the more expensive 20GB 3080 option

    as for games utilizing more VRAM, well, I’m not sure what the trend will be, if DLSS is used more, that’s the answer to the VRAM capacity problem, it will alleviate so much pressure on capacity when used

    games are also constantly developing new compression methods, and ways to load balance everything correctly; with RTX I/O and Microsoft DirectStorage, decompression should be a lot better, and again the VRAM capacity issue won’t be such a problem

    I know your concerns for sure, and it will really all depend on the games themselves, but I do implore you, if a new game supports DLSS, give it a try. I’m actually liking the technology now that I’ve used it, and DLSS 2.0 gives you good image quality and a perf increase

    DLSS is great, I agree, but unfortunately I do not think it will become ubiquitous.

  49. These are just rumors, but rumors are AMD will have something similar to DLSS coming.

    If that can happen, and maybe some form of standard API can be achieved, then maybe it will be used more.

    Like Ray Tracing, someone had to get the ball rolling first.

  50. DLSS made enough of a difference to me personally that I simply would not buy a GPU without it.
    Control and Wolfenstein:YB alone was worth the price of admission to play in 4K with a RTX2070.

    I was originally going to buy an R7. Glad I didn’t.
    RTX and DLSS were way more fun and useful to me than an extra 8GB of RAM could ever have been.

    Maybe I should just get a 3090 this time around and game on for the next 3 years; it seems very likely that 2080 Ti owners will get 3 years out of theirs.
    Something to be said about buying the best available stuff…

  51. These are just rumors, but rumors are AMD will have something similar to DLSS coming.

    If that can happen, and maybe some form of standard API can be achieved, then maybe it will be used more.

    Like Ray Tracing, someone had to get the ball rolling first.

    Is Contrast Adaptive Sharpening not AMD’s version of DLSS?

  52. Only thing really holding me back from a 3080 right now is the 10GB memory simply because I’ve seen the 11GB on my 2080 Ti maxed out in a few games at 4K resolution. And it wasn’t simply usage; in those cases I experienced degraded performance before turning down texture quality or other settings to reduce the VRAM needed. Most games fall into the 6-8GB range right now, but I’m just worried that more games will be coming down the pipe that start running into limitations with 10GB.

    I do feel the same way; it’s not even that the 3080 has less than the 2080 Ti (which as noted by others, the 2080 should be the point of comparison), but that memory didn’t increase much.

    I do understand that they probably could not hit their $700 target if they added more, though. I can see the 20GB version being $1,000 or close to it unless Micron will have 16Gb chips ready when it hits production.

    I kind of feel like it’s worth waiting. Part of that at least is coming from a 1080Ti and not really wanting to go backward in VRAM capacity particularly given how long I’m likely to keep the new card.

    as for games utilizing more VRAM, well, I’m not sure what the trend will be, if DLSS is used more, that’s the answer to the VRAM capacity problem, it will alleviate so much pressure on capacity when used

    games are also constantly developing new compression methods, and ways to load balance everything correctly; with RTX I/O and Microsoft DirectStorage, decompression should be a lot better, and again the VRAM capacity issue won’t be such a problem

    As much as I admire upcoming solutions to the VRAM problem… these are ‘high-end’ solutions that require significant developer support. I can’t help but imagine that there might be games that slip through the cracks which wind up benefiting from the increased VRAM due to lack of optimization.

    That’s also compounded by waiting every other generation or so to upgrade in my case. More frequent upgraders probably have less to worry about!

  53. Is Contrast Adaptive Sharpening not AMD’s version of DLSS?

    No, that’s closer to NVIDIA’s Sharpening filter in the control panel

    https://nvidia.custhelp.com/app/ans…-image-sharpening-in-the-nvidia-control-panel

    DLSS uses AI (Tensor Cores) to take an image and scale it upwards by like 16x samples, so it renders at a lower resolution but is upscaled by AI to a baseline highly super-sampled image processed by NVIDIA servers offline. It’s much more complex.

    This is why NVIDIA’s method provides faster performance, cause it’s rendering at a lower resolution and then uses hardware to basically upscale it to a reference image with no loss in performance doing so.

    AMD’s method still renders at the same resolution, and there is no AI upscaling. It doesn’t improve performance, only sharpens image quality when temporal antialiasing is used.

    Now, there is supposed to be a feature of CAS that can scale an image. However I don’t know an example of it, and you really don’t hear about performance increases when CAS is used. The video card is not using AI to upscale, cause there’s no method for that ATM. That’s what sets DLSS far apart from CAS, it’s much more functional.

    However, I need to read into CAS a bit more, I’m not 100% on how it exactly works, I need to read a whitepaper or something. But so far, it hasn’t been marketed as a feature to improve performance, but only to improve image quality.

    It’s quite possible AMD could make a new version of CAS that is DLSS-like when they have the hardware to do so. Or they could brand their "DLSS" equivalent under a whole new feature name. Who knows, but the rumor is AMD will be coming out with something DLSS-like, and I’m not sure that’s CAS.

  54. I thought FidelityFX was closer to DLSS than CAS?


    FidelityFX is a suite of technologies, branded under the FidelityFX name. There are many features branded under that name.

    There is:

    FidelityFX Contrast Adaptive Sharpening
    FidelityFX Screen Space Reflections
    FidelityFX Combined Adaptive Compute Ambient Occlusion
    FidelityFX Luma Preserving Mapper
    FidelityFX Single Pass Downsampler

    and more

    So a game can have only 1 of these features and still be called having FidelityFX technology, or it can have multiple of these features.

    So the thing to look for in games is which one of these specific features of FidelityFX is it using, it could be only one feature, or multiples.
