Conclusion

Today NVIDIA has launched the new GeForce RTX 3080 Founders Edition video card for $699. It supersedes the GeForce RTX 2080 Founders Edition launched back in 2018 at the same $699 price point, as well as the GeForce RTX 2080 SUPER Founders Edition launched in 2019, also at $699. Basically, the $699 price segment just got a whole lot more interesting for gamers.

For testing, we included 9 games this round and tested both with and without Ray Tracing and DLSS where applicable. This provides a wide gamut of comparisons: how the cards stack up without Ray Tracing and DLSS, what happens when Ray Tracing is turned on, and then what DLSS adds on top of that.

Rasterization Performance

Game rasterization performance still matters. We are not yet at the point where games can be rendered purely via ray tracing or path tracing; far from it. NVIDIA is pushing hard for a future of ray traced and path traced games, to move beyond rasterization. This is a great goal and one that will eventually take hold. However, in the here and now there is no doubt that rasterization performance is important, and it is what games today benefit from most.

There has been concern and debate lately about the importance of, or focus on, good ole rasterization performance. This is because the Ampere architecture focuses heavily on floating-point, Ray Tracing, and machine learning performance. There are concerns that the Ampere architecture has short-changed rasterization and is not strong in this area. Well, our results today speak volumes. We tested every single game here without Ray Tracing and DLSS at 1440p and 4K to see how well the GeForce RTX 3080 FE handles good ole rasterization. The results are as follows.

GeForce RTX 3080 FE Performance Increase

Game | 1440p vs. RTX 2080 | 4K vs. RTX 2080 | 1440p vs. RTX 2080 Ti | 4K vs. RTX 2080 Ti
Control | 66% | 75% | 27% | 29%
Wolfenstein: Youngblood | 79% | 76% | 27% | 32%
Shadow of the Tomb Raider | 60% | 72% | 23% | 28%
Metro Exodus | 53% | 59% | 20% | 22%
Microsoft Flight Simulator 2020 | 9% | 62% | 2% | 2%
Horizon Zero Dawn | 56% | 69% | 24% | 29%
Red Dead Redemption 2 | 60% | 69% | 25% | 27%
Ghost Recon Breakpoint | 55% | 86% | 23% | 32%
Far Cry 5 | 21% | 59% | 5% | 20%

Remember, all these results above are without Ray Tracing or DLSS. The table shows that the average increase in performance at 1440p for the GeForce RTX 3080 FE compared to the GeForce RTX 2080 FE is 51%. The low Flight Sim result really brings the average down here; without FS in the mix, the average is a 56% performance increase. At 4K the average increase in performance is 70%. Considering the GeForce RTX 3080 FE is $699, the same as the GeForce RTX 2080 FE it replaces, you are getting a very big bump in performance, even without Ray Tracing and DLSS.

Compared to the GeForce RTX 2080 Ti FE at 1440p, the GeForce RTX 3080 FE averages a 20% increase in performance. The Far Cry 5 and FS 2020 numbers bring that average down a lot; if we remove those two, the average is 24%. At 4K the GeForce RTX 3080 FE averages 25% faster than the GeForce RTX 2080 Ti FE. The GeForce RTX 2080 Ti FE was a $1200 video card; now, for $500 less at $699, you can have performance that is 20-25% faster. That is advancement, and again, all without including Ray Tracing or DLSS in the mix, pure rasterization.
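If you want to double-check those averages, here is a minimal sketch (Python, with the table values transcribed by hand) that reproduces them, including the outlier-excluded variants:

```python
from statistics import mean

# RTX 3080 FE percent gains from the rasterization table above:
# (1440p vs 2080, 4K vs 2080, 1440p vs 2080 Ti, 4K vs 2080 Ti)
games = {
    "Control":                         (66, 75, 27, 29),
    "Wolfenstein: Youngblood":         (79, 76, 27, 32),
    "Shadow of the Tomb Raider":       (60, 72, 23, 28),
    "Metro Exodus":                    (53, 59, 20, 22),
    "Microsoft Flight Simulator 2020": ( 9, 62,  2,  2),
    "Horizon Zero Dawn":               (56, 69, 24, 29),
    "Red Dead Redemption 2":           (60, 69, 25, 27),
    "Ghost Recon Breakpoint":          (55, 86, 23, 32),
    "Far Cry 5":                       (21, 59,  5, 20),
}

labels = ("1440p vs 2080", "4K vs 2080", "1440p vs 2080 Ti", "4K vs 2080 Ti")
for label, col in zip(labels, zip(*games.values())):
    print(f"{label}: {mean(col):.0f}%")  # 51%, 70%, 20%, 25%

# Excluding the CPU-limited outliers:
no_fs = [v[0] for k, v in games.items() if k != "Microsoft Flight Simulator 2020"]
print(f"1440p vs 2080, minus Flight Sim: {mean(no_fs):.0f}%")  # 56%
no_out = [v[2] for k, v in games.items()
          if k not in ("Microsoft Flight Simulator 2020", "Far Cry 5")]
print(f"1440p vs 2080 Ti, minus FS/FC5: {mean(no_out):.0f}%")  # 24%
```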

Point being? The rasterization performance improvement is there on the GeForce RTX 3080 FE; the numbers speak for themselves.

NVIDIA RTX Ray Tracing

Ray Tracing performance, and machine learning performance, also matter. While rasterization performance is important today, ray tracing workloads are only going to increase in number and importance. Now that both console platforms support Ray Tracing, and with NVIDIA having supported it for years, it is only going to become more prevalent. The fact is, NVIDIA has a big leg up on Ray Tracing. The Ampere architecture improves upon the technology and makes it genuinely useful and playable now. Ray Tracing performance is an important aspect of a GPU and the gameplay experience.

We tested all the games that support Ray Tracing and DLSS today, with and without those features, so you can see the performance differences. It’s safe to say the GeForce RTX 3080 FE has improved Ray Tracing performance over the last generation quite a bit. Once again, the numbers speak for themselves. In the table below we compare performance with Ray Tracing enabled (no DLSS) between the GeForce RTX 3080 FE and both the GeForce RTX 2080 FE and GeForce RTX 2080 Ti FE at 1440p and 4K. You can see directly the percentage performance advantage the RTX 3080 FE holds with Ray Tracing.

GeForce RTX 3080 FE Performance Increase with Ray Tracing Enabled on Each Video Card

Game | 1440p vs. RTX 2080 | 4K vs. RTX 2080 | 1440p vs. RTX 2080 Ti | 4K vs. RTX 2080 Ti
Control | 84% | 89% | 38% | 38%
Wolfenstein: Youngblood | 90% | 98% | 46% | 44%
Shadow of the Tomb Raider | 68% | 89% | 25% | 24%
Metro Exodus | 67% | 59% | 23% | 22%

As you can see, with Ray Tracing enabled the performance advantages of the GeForce RTX 3080 FE are even higher than without it. The average performance increase at 1440p compared to the RTX 2080 FE is 77%. The average performance increase at 4K compared to the RTX 2080 FE is 84%. The GeForce RTX 3080 FE makes a very large leap over the GeForce RTX 2080 FE with Ray Tracing turned on. Compared to the RTX 2080 Ti FE, the RTX 3080 FE averages 33% faster at 1440p and 32% faster at 4K. Ray Tracing performance is vastly improved.
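The same quick check reproduces the Ray Tracing averages:

```python
from statistics import mean

# RT-enabled (no DLSS) percent gains from the table above, same column order
rt_gains = [
    (84, 89, 38, 38),  # Control
    (90, 98, 46, 44),  # Wolfenstein: Youngblood
    (68, 89, 25, 24),  # Shadow of the Tomb Raider
    (67, 59, 23, 22),  # Metro Exodus
]
labels = ("1440p vs 2080", "4K vs 2080", "1440p vs 2080 Ti", "4K vs 2080 Ti")
for label, col in zip(labels, zip(*rt_gains)):
    print(f"{label}: {mean(col):.0f}%")  # 77%, 84%, 33%, 32%
```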

NVIDIA RTX DLSS

DLSS is the magic sauce that makes Ray Tracing playable at high resolutions like 4K, and overall performance playable at even higher resolutions like 8K. Ok, it isn’t magic; it’s actually machine learning, accelerated through the Tensor Cores in the Ampere architecture. It is important, though: DLSS is the means by which NVIDIA can claim playable Ray Tracing performance at high resolutions in graphically intense games, and game performance at 8K.
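As a rough illustration of where the speedup comes from, consider the pixel counts involved. This sketch assumes the commonly cited DLSS 2.0 internal render scales; the exact factors are not something we measured in this review:

```python
# Why DLSS is so much faster: it renders internally at a lower resolution,
# then the Tensor-Core network upscales the frame to the output resolution.
# Scale factors below are the commonly cited DLSS 2.0 presets (assumed).
out_w, out_h = 3840, 2160  # 4K output
presets = {"Quality": 2 / 3, "Balanced": 0.58, "Performance": 1 / 2}

for name, s in presets.items():
    w, h = round(out_w * s), round(out_h * s)
    frac = (w * h) / (out_w * out_h)
    print(f"{name}: renders {w}x{h} ({frac:.0%} of the 4K pixel load)")
# Quality mode renders 2560x1440 -- only ~44% of the shading work,
# which is where the large framerate gains come from.
```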

Without DLSS, the new GeForce RTX 3080 FE is finally fast enough to make Ray Tracing useful at 1440p in most games. However, when it comes to 4K, it’s generally still not fast enough in graphically intense games. This is where DLSS steps in: with DLSS enabled, performance is vastly improved, and this is where we get playable performance at 4K with Ray Tracing. DLSS is very important for NVIDIA because this is the sauce that makes the goose fly. Our results today showed that using DLSS in combination with Ray Tracing gave us playable performance at 4K.

In Control at 4K with Ray Tracing and DLSS we saw a 65 FPS average, and the game was very smooth; without DLSS it was 36 FPS. In Wolfenstein: Youngblood at 4K with Ray Tracing and DLSS, performance was 134 FPS; without DLSS it was 91 FPS. In Shadow of the Tomb Raider at 4K with Ray Tracing and DLSS, performance was 70 FPS; without DLSS it was 36 FPS. DLSS was required to make 4K playable in that game with Ray Tracing. In Metro Exodus at 4K with Ray Tracing and DLSS, performance was 49 FPS; without DLSS it was 34 FPS.
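Expressed as percentage gains, a quick computation from the numbers above shows just how much DLSS buys you at 4K with Ray Tracing:

```python
# Average FPS at 4K with Ray Tracing, without vs. with DLSS (our results)
results = {
    "Control":                   (36, 65),
    "Wolfenstein: Youngblood":   (91, 134),
    "Shadow of the Tomb Raider": (36, 70),
    "Metro Exodus":              (34, 49),
}
for game, (no_dlss, dlss) in results.items():
    print(f"{game}: {no_dlss} -> {dlss} FPS (+{(dlss / no_dlss - 1) * 100:.0f}%)")
# Control +81%, Wolfenstein +47%, SotTR +94%, Metro +44%
```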

This proves that DLSS can make a big difference, and it will often be the deciding factor in whether a game is playable, especially when Ray Tracing is used. Enabling DLSS lets us enjoy games with Ray Tracing at much higher performance at 1440p, and makes 4K plus Ray Tracing possible. DLSS may deserve an article in its own right, as image quality is an important factor in that regard.

We should note that all the games we used DLSS in today looked better to us with DLSS on at 4K than without. We saw no immediate blurring or lower quality images with DLSS enabled. DLSS 2.0 is a huge improvement over the first generation, though a more in-depth look is warranted.

CPU Limitations

It is true that GeForce RTX 3080 Founders Edition performance is so good that CPU limitations are going to arise in certain situations. If you have an older CPU, or you are running a game with extreme CPU limitations, or playing at a low resolution where the game is CPU dependent, it could reduce the overall performance advantage you receive with the GeForce RTX 3080 FE. You need to be aware of this.

The performance advantages you gain with a GeForce RTX 3080 FE are going to depend on these things. We’ve already seen games like Far Cry 5 and Flight Sim 2020 where CPU limitations are cutting into the performance advantages. Older games are going to be affected this way more than newer games. When games do not utilize newer APIs and forward-looking graphics features, they cannot get the most out of the GPU’s architecture.

If you are held back by bottlenecks elsewhere in the system, it affects GPU performance as well and keeps the RTX 30 series from utilizing its full potential. This is now increasingly true with the GeForce RTX 3080 FE, and also will be with the GeForce RTX 3090 FE. If you have a much older CPU, consider a CPU upgrade before a GPU. With Zen 3 around the corner, it might not be a bad time for an upgrade, and that might really help improve GeForce RTX 3080 FE performance.

The VRAM Debate

Around the Internet, on the forums, in the places where things such as this are debated, we have heard the concerns from gamers about the 10GB of VRAM capacity on the GeForce RTX 3080. The debate starts with the fact that the GeForce RTX 2080 Ti has 11GB, so 10GB naturally looks like a downgrade. With new games on the horizon whose system recommendations push VRAM capacity, and fears of 10GB bottlenecking game performance, it is a worthy topic.

I’d like to offer an alternative angle on the issue. At the $699 price point, you are actually getting an upgrade in VRAM capacity. It’s important to keep in mind that the GeForce RTX 3080 is the replacement for the GeForce RTX 2080 and 2080 SUPER, not the GeForce RTX 2080 Ti. The GeForce RTX 2080 and GeForce RTX 2080 SUPER have only offered 8GB of VRAM for two years now at $699. The GeForce RTX 3080 replaces those video cards and now offers 10GB of VRAM at $699. In that respect, in this price segment, you are actually getting a VRAM upgrade this generation, 10GB vs. 8GB.

The GeForce RTX 2080 Ti is a $1200 video card, and while it has 11GB of VRAM, it’s the price that sets it far apart from the segment the GeForce RTX 3080 occupies. If you want to look at what will replace the GeForce RTX 2080 Ti in that price segment, then you have to look toward the GeForce RTX 3090 to fill that role, or perhaps even an unannounced video card. The GeForce RTX 3090 has 24GB of VRAM, which very much upgrades the VRAM capacity beyond the GeForce RTX 2080 Ti, and its price point of $1500 is closer to the GeForce RTX 2080 Ti’s price point.

Yes, VRAM capacity matters moving forward, and we always welcome more. Features like DLSS can alleviate some of the pressure on VRAM capacity. Plus, there are rumors that some custom cards could indeed slap more VRAM onboard, up to 20GB; if those materialize, gamers who want more will have the option. But when it comes down to it, at the $699 price point you are technically getting a VRAM capacity upgrade, not a downgrade.

The Power

We want to briefly talk about some other aspects of the GeForce RTX 3080 FE and the architecture in general. First, the Samsung 8nm manufacturing node. No doubt this one will be talked about for a long while among tech enthusiasts. It is likely that Samsung’s custom 8nm node has held back the potential GPU clock frequency compared to the better TSMC 7nm or Samsung 7nm EUV processes. However, it is done and done, and that’s just how it is.

The upshot is that these GPUs require a lot of power to achieve their performance goals, and they need more robust cooling to dissipate the heat. NVIDIA doubled down on cores to offset the frequency limitation. If the GeForce RTX 30 Series were on a better node the frequencies would soar, but then we’d also probably be talking about higher prices. There is some good from Samsung 8nm: it saves NVIDIA cost, and that savings has trickled down to us. In the end, it’s the performance that matters.

Our caution for everyone right now is: make sure you have a solid power supply if you plan on going forward with the GeForce RTX 3080 or GeForce RTX 3090. Our personal recommendations are nothing lower than 850W for the GeForce RTX 3080 and potentially nothing less than 1000W for the GeForce RTX 3090. This is especially important if you plan on overclocking, as we saw power utilization spike very high from just modest overclocking.

We also suggest the power supply not be an aged one; power supply components can degrade over time, losing efficiency and the ability to regulate voltage, which can be detrimental with the fast load changes these video cards produce. If you have to include the cost of a new PSU with the video card, that does increase the total cost of ownership, and that is something to consider. Now more than ever, cable quality is important, along with the cables’ amperage rating and ability to handle the load. If you ever experience any instability with your new RTX 3080 or RTX 3090, the first component I’d look at is your power supply.
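To make the recommendation concrete, here is a rough power-budget sketch. The component figures are illustrative assumptions, not measurements from this review; the RTX 3080’s rated board power is 320W, and everything else varies by system:

```python
# Rough PSU sizing sketch; all figures besides the 320W TGP are assumptions.
gpu_board_power  = 320   # W, RTX 3080 rated total graphics power
cpu_power        = 150   # W, assumed high-end CPU under gaming load
rest_of_system   = 75    # W, assumed motherboard, RAM, storage, fans
transient_margin = 1.25  # headroom for load spikes and overclocking
aging_margin     = 1.10  # headroom for capacitor aging / efficiency loss

steady_state = gpu_board_power + cpu_power + rest_of_system    # 545 W
recommended = steady_state * transient_margin * aging_margin   # ~749 W
print(f"steady state ~{steady_state} W, recommended PSU ~{recommended:.0f} W")
# With overclocking spikes on top, an 850W unit is a comfortable choice.
```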

The Founders Edition design remained remarkably quiet, so we have no issues with sound here. It does get very hot though, and we recommend good case cooling and airflow; do not starve these video cards of airflow. We would have concerns in very tight and small cases, but that would have to be tested in each case individually. We will of course have to test add-in-board partner video card designs, but most seem very robust in the cooling department, which is good.

Final Points

At the end of the day, the NVIDIA Ampere architecture is superior to last generation’s Turing architecture. The node has improved over the last generation, and the architecture is now keyed more specifically to floating-point performance, Ray Tracing performance, and machine learning/AI performance via Tensor Cores. The architecture also supports some interesting new technologies we are looking forward to, such as RTX I/O, and it has future bandwidth needs in mind with PCI-Express 4.0 support.

Rasterization, Ray Tracing, and machine learning are all aspects of modern-day GPUs, and they all matter moving forward for gaming. In traditional (rasterized) gaming, the GeForce RTX 3080 Founders Edition gives us a big upgrade in performance over the GeForce RTX 2080 Founders Edition it is replacing. The benefit depends on the game, with some as high as 80+% and most averaging around a 50-60% advantage, depending on the resolution. In addition, the GeForce RTX 3080 FE also provides 20-25% faster performance than the previous fastest video card, the GeForce RTX 2080 Ti. When you apply Ray Tracing, the advantages in performance grow even more. Apply DLSS on top of that, and Ray Tracing is now playable in games at 4K, and most definitely at 1440p.

The GeForce RTX 2080 FE had a lot of trouble delivering playable Ray Tracing performance when it debuted; some games even struggled at 1080p with Ray Tracing on that card. The GeForce RTX 3080 FE finally gives us usable and playable Ray Tracing performance at 1440p, and with DLSS, up to 4K. Finally, Ray Tracing is a playable game feature. But if Ray Tracing isn’t your thing, and you will never turn on Ray Tracing or DLSS, you can still rest easy knowing that the GeForce RTX 3080 FE is faster than the GeForce RTX 2080 Ti. If you are looking for performance faster than the GeForce RTX 2080 Ti, here it is, for $500 less than the GeForce RTX 2080 Ti FE cost.

At $699, the GeForce RTX 3080 Founders Edition offers gamers a lot of gaming performance and features that will improve the gameplay experience. At the end of the day the gameplay experience is what matters most, and the GeForce RTX 3080 FE has the ability to transform that experience with features like Ray Tracing and DLSS. With the performance it brings, those features are playable. It also offers the fastest performance around, beating the fastest video card of the last generation. Whether you play games with Ray Tracing and DLSS or without, this video card will provide the best gameplay experience.

Discussion

TheFPSReview Gold Award


  1. Ok.. So basically if you’re still at 1080p (or even 1440p), this card is overkill. I’ve been holding off on moving to 4k for the longest time now, and it looks like my patience will be rewarded. My plan is to get a couple of 4k monitors, and then this card would be a nice driver for those.

    Brent, excellent review as always! Thank you FPS Review for tremendous coverage!

  2. So it’s true that those leaked benches of Shadow of the Tomb Raider and Far Cry: New Dawn were worst-case scenarios for this comparison. Looks like on average the 3080 is around 70% faster than the 2080 at 4K in rasterization and 30% faster than the 2080 Ti. Add ray tracing and it almost hits the "double performance" claim NVIDIA was making. That is almost exactly the same performance improvement we saw with the Pascal generation.

    Can’t wait to see the 3090 review 😁

  3. It seems to me, that the increase in power consumption is larger than the increase in performance. I’d have expected the opposite.

    Too bad Control is still unplayable in 4K with ray tracing, well maybe with the 3090 :D

  4. It seems to me, that the increase in power consumption is larger than the increase in performance. I’d have expected the opposite.

    Too bad Control is still unplayable in 4K with ray tracing, well maybe with the 3090 :D

    +29% power consumption compared to the 2080, with a performance increase of +70%. The increase in performance is larger than the increase in power. Not quite the 1.7x efficiency NVIDIA stated, but close.

    Control is quite playable with ray tracing using DLSS as pointed out in the review, and it looks better than native thanks to DLSS 2.0.

    1. Make that the promised 90% energy efficiency increase. Seems to me that if nvidia had gone 7nm, it would have been pretty much spot on.

      Anyway I think it’s still great. Huge performance increase for a moderate increase in power draw. I’ll take that. Especially after watching AMD get nowhere close to Nvidia’s power efficiency. Hopefully the tables will finally turn with the RX6000.

      Great time to be a gamer.

  5. +29% power consumption compared to the 2080, with a performance increase of +70%. The increase in performance is larger than the increase in power. Not quite the 1.7x efficiency NVIDIA stated, but close.

    Control is quite playable with ray tracing using DLSS as pointed out in the review, and it looks better than native thanks to DLSS 2.0.

    Mind you, the +29% is compared to the entire system wattage, not the GPU. If you could isolate the GPU power consumption, the percentage increase would be much larger. Probably still not 70%, but close to 50.

    With DLSS, Control is already playable on the 2080 Ti; I was of course referring to running without DLSS. With the claimed double ray tracing performance, the 3080 should be able to handle it without DLSS.
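    For what it’s worth, you can roughly back out a GPU-only figure from wall power. A sketch with assumed baseline numbers (not measured), chosen to reproduce the +29% system-level increase:

    ```python
    # Back out a GPU-only increase from wall power; all numbers assumed.
    sys_2080, sys_3080 = 372, 480  # W at the wall; 480/372 is ~+29%
    non_gpu = 150                  # W, assumed CPU + rest of system (constant)
    gpu_2080 = sys_2080 - non_gpu  # ~222 W attributed to the GPU
    gpu_3080 = sys_3080 - non_gpu  # ~330 W attributed to the GPU
    print(f"GPU-only increase: +{(gpu_3080 / gpu_2080 - 1) * 100:.0f}%")  # +49%
    ```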

  6. Who cares about power draw? Unless you’re trying to use a PSU that’s on the ragged edge of being sufficient.
  7. Who cares about power draw? Unless you’re trying to use a PSU that’s on the ragged edge of being sufficient.

    I was talking with a lot of the guys I play Destiny 2 with last night. Most of them are running units below 700w with stock Core i9-9900K’s and either RTX 2080 or RTX 2080 Ti cards. Any of them thinking about upgrading to the 30 series are contemplating PSU upgrades along with the graphics cards. Most are looking at the 3080 rather than the 3090.

  8. First site I went to when I woke up this morning to see this review and was not disappointed. Answered all my questions and I’m actually surprised at how well this thing performs.

    The thing that surprises me most is the 850 watt PSU recommendation! I figured it was just marketing BS when Nvidia first announced they recommended a 750 at minimum and figured I could run my EVGA 750 a little longer. Glad I waited for the review cause I never would’ve guessed I’d need to account for that in my budget too. But it’s an excuse to buy something new so I’m sure the wife won’t mind an extra $125 on the old credit card lol.

    I also don’t think this would be overkill for 1080p if you’re running a 240 Hz monitor. Judging by the benchmarks this would be about perfect. My rig currently runs Breakpoint at around 100-110 fps with mostly high settings but the 3080 would likely get over 140+ with maxed settings. I think that would be worth the investment.

    I’m gonna buy mine at Best Buy. My local store usually has a good selection of PC parts and I like being able to have an actual store I can take something back to if it craps the bed on me.

  9. I’m planning on picking one up (if I get lucky tomorrow) and running it on my 650 watt Seasonic titanium. It’s currently running a 2700x, 1080 Ti, and 1660 super, so I’m not concerned about replacing both video cards with one 3080. I’ll upgrade the PSU to a 1000W when the next gen Ryzen CPUs drop.

    Now, I also intend on trying to get a 3090 – if I end up with both, I’ll sell the 3080. I’m a little more wary of the 3090 on that 650w PSU, though the difference is only like 30 watts. I’ll just wait to OC whichever card till I upgrade the PSU.

  10. Honestly not as fast as I had hoped but not bad in any way shape or form. I honestly would not understand many 2080 Ti owners making the jump to this and so we wait on the 3090 Reviews!
    Great Review as usual Brent!
  11. Honestly not as fast as I had hoped but not bad in any way shape or form. I honestly would not understand many 2080 Ti owners making the jump to this and so we wait on the 3090 Reviews!
    Great Review as usual Brent!

    Out of curiosity I checked the local classifieds for 2080ti listings, and there aren’t that many, and half of them are listed around $1000, with the lowest at $700. Keep on dreaming boys. (I was thinking about picking up a second 2080ti – not anymore)

  12. You may have done a few spot tests and found no real difference, but did changing between PCIe modes 3.x and 4.x make any difference for the card’s performance?
  13. Nice review and the card looks nice too. The only issue I have is the performance is a bit lower than it could be due to the use of the 3700X system. I know AMD gives you PCIe 4.0, but it lags about 7-8% (3900XT does) from the Intel CPU’s. A dual test setup would be nice.

  14. Nice review and the card looks nice too. The only issue I have is the performance is a bit lower than it could be due to the use of the 3700X system. I know AMD gives you PCIe 4.0, but it lags about 7-8% (3900XT does) from the Intel CPU’s. A dual test setup would be nice.

    https://www.techpowerup…-rtx-3080-amd-3900-xt-vs-intel-10900k/26.html

    AMD still lags a bit in IPC compared to Intel, so in resolutions that are more CPU-dependent like 1920×1080 you’re going to see a big difference. The more the GPU struggles, the more that difference disappears. It’s practically non-existent at 4K.

  15. AMD still lags a bit in IPC compared to Intel, so in resolutions that are more CPU-dependent like 1920×1080 you’re going to see a big difference. The more the GPU struggles, the more that difference disappears. It’s practically non-existent at 4K.

    I thought in the current generation that IPC lead was actually on the Ryzen CPU’s side.

    At least according to this techspot article..

    https://www.techspot.com/article/1876-4ghz-ryzen-3rd-gen-vs-core-i9/

    Not saying it’s a huge difference, though Intel does still lead in raw GHz throughput.

  16. I thought in the current generation that IPC lead was actually on the Ryzen CPU’s side.

    At least according to this techspot article..

    https://www.techspot.com/article/1876-4ghz-ryzen-3rd-gen-vs-core-i9/

    Not saying it’s a huge difference, though Intel does still lead in raw GHz throughput.

    In the context of gaming, which is what we’re talking about with the 3080, it is still Intel. This is at 1280×720, so basically as much CPU dependence as you can get in a modern game. No question that Ryzen is better in other types of applications, especially multithreaded ones.


  17. I just hope my 650W PSU can handle it, it’s a seasonic titanium one, otherwise I will need to swap in the 850 from my X299
  18. In the context of gaming, which is what we’re talking about with the 3080, it is still Intel. This is at 1280×720, so basically as much CPU dependence as you can get in a modern game. No question that Ryzen is better in other types of applications, especially multithreaded ones.


    To compare IPC you’d have to lock both cpus to the same clocks. This just tells us that a higher clocked cpu is faster.

  19. The in-depth review I’ve been looking for! Thanks for making it pretty straightforward and easy to understand even for a newbie such as myself.

    Based on Microsoft FS performance, I’m really hoping MS does continue to optimize the game’s performance. Until then, here’s hoping the new architecture from AMD improves things a bit. Otherwise I’m going to have to start selling blood to buy a Threadripper!

  20. The in-depth review I’ve been looking for! Thanks for making it pretty straightforward and easy to understand even for a newbie such as myself.

    Based on Microsoft FS performance, I’m really hoping MS does continue to optimize the game’s performance. Until then, here’s hoping the new architecture from AMD improves things a bit. Otherwise I’m going to have to start selling blood to buy a Threadripper!

    M$ has always developed new releases of Flight Sim to be ahead of current GPU performance levels. It’s by design, so the game has plenty of room to grow over many years. FSX was the same way, as was FS98.

  21. The in-depth review I’ve been looking for! Thanks for making it pretty straightforward and easy to understand even for a newbie such as myself.

    Based on Microsoft FS performance, I’m really hoping MS does continue to optimize the game’s performance. Until then, here’s hoping the new architecture from AMD improves things a bit. Otherwise I’m going to have to start selling blood to buy a Threadripper!

    You’re in luck! New patch released today addresses CPU performance impact by preventing interruption of rendering threads, among other things.

    https://www.flightsimulator.com/patch-version-1-8-3-0-is-now-available/

  22. Yeah, it’s time to hand down the 980Ti to the kids computer and get a 3080…if I can manage to actually place an order before they go OoS.
  23. It seems to me, that the increase in power consumption is larger than the increase in performance. I’d have expected the opposite.

    Too bad Control is still unplayable in 4K with ray tracing, well maybe with the 3090 :D

    Well, Control has DLSS 2.0 now right?

    From the comparisons I have seen, DLSS 2.0 really doesn’t result in much of an image quality degradation, and sometimes even looks better, so as long as the 3090 can handle it with DLSS on, I’ll consider that a success.

  24. VP where I work has an Aorus Waterforce 2080 AIO with a 240mm rad. Says it never gets over 60c. And he’s had no issues with it, at least for the last year. It’s just a closed loop system. They work quite well.

    I have dual 360 rads, one 25mm and one 35mm. Bring on the heat.

  25. VP where I work has an Aorus Waterforce 2080 AIO with a 240mm rad. Says it never gets over 60c. And he’s had no issues with it, at least for the last year. It’s just a closed loop system. They work quite well.

    I have dual 360 rads, one 25mm and one 35mm. Bring on the heat.

    Yeah, AIO’S usually get you to the 60’s overclocked and loaded up.

    My WC loop kept my Pascal Titan under 40C overclocked and loaded up. That’s my target because under 40C I seem to have been getting better boost clocks.

    Question is if the temp calculus needs to change considering the massive thermal envelopes of these things.

  26. great review…very detailed…I love the games you tested and the fact that you enabled things like AA, PhysX, Hairworks etc…so you pretty much maxed out the graphics…lots of other 3080 reviews disabled a lot of the advanced graphics settings

    me personally I’m waiting for the 3080 20GB variant…I’m in the process of building a new Zen 3 system so I can afford to be patient

  27. Do Metro Exodus and SotTR still use DLSS 1? If that’s so, do they still exhibit the same issues like blur and smear?

    I really hope DLSS2.x becomes a trend. By now there should be dozens of games and patches for DLSS, but still only a handful of games support it, and only a couple of them actually look awesome.
    /rant

  28. Honestly not as fast as I had hoped but not bad in any way shape or form. I honestly would not understand many 2080 Ti owners making the jump to this and so we wait on the 3090 Reviews!
    Great Review as usual Brent!

    Even if this were a huge upgrade over the RTX 2080 Ti, I’d still wait for the 3090. The only way I’d buy a 3080 is if the 3090 was less than 7% faster than a 3080 at twice the price or something stupid like that.

  29. Even if this were a huge upgrade over the RTX 2080 Ti, I’d still wait for the 3090. The only way I’d buy a 3080 is if the 3090 was less than 7% faster than a 3080 at twice the price or something stupid like that.

    20% faster for the price would still be a big NO NO for me even if I could spare the cash. But for people that are already used to paying $1,000+ for a card, I guess I can see that happening. And people that already have a RTX2080Ti have nowhere else to go.

  30. 20% faster for the price would still be a big NO NO for me even if I could spare the cash. But for people that are already used to paying $1,000+ for a card, I guess I can see that happening. And people that already have a RTX2080Ti have nowhere else to go.

    Enthusiast level cards have always been like this since at least the 8800 Ultra. The 8800 Ultra was about 48% more expensive than an 8800 GTX for 10% more performance.

    Big variable that needs to be considered with the 3090 vs. the 3080, though, is the amount of memory. GDDR6X is supposedly twice as expensive as GDDR6 for 8Gb chips in bulk (around $24/chip compared to $12/chip). That would make the 24GB on the 3090 $576 vs. $240 for the 10GB on the 3080. This is not the only factor accounting for the price difference, but it is a big one.

    If you want the fastest single gaming card available and have the money to buy it, though, then why not.
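    A quick sanity check of that memory-cost math, assuming the rumored bulk prices and one 8Gb (1GB) chip per GB of capacity:

    ```python
    # Rumored bulk prices per 8Gb (1GB) chip; both figures are assumptions.
    price_gddr6x = 24  # $ per GDDR6X chip
    price_gddr6  = 12  # $ per GDDR6 chip

    cost_3090_memory = 24 * price_gddr6x  # 24GB -> 24 chips -> $576
    cost_3080_memory = 10 * price_gddr6x  # 10GB -> 10 chips -> $240
    cost_if_gddr6    = 10 * price_gddr6   # $120 had the 3080 used plain GDDR6
    print(cost_3090_memory, cost_3080_memory, cost_if_gddr6)  # 576 240 120
    ```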

  31. Flight Simulator 2020 Re-Testing with New Patch
    9/18/2020

    Thanks to Armenius, I became aware of this new patch for Flight Sim 2020, which has many adjustments to performance. Therefore I decided to re-test the game with the new patch on the RTX 3080 (with the same driver) to see if there are any changes. These are my results.

    1440p Ultra – No Change In Performance
    1440p High-End – FPS went from 46 FPS to now 47.8 FPS AVG

    4K Ultra – FPS went from 29 FPS to 31.3 FPS AVG
    4K High-End – FPS went from 42.6 FPS to 46 FPS AVG

    The end result is that in the "High-End" Quality Preset, I saw a larger performance bump with the new patch. 4K "High-End" was the biggest performance leap.

    In the "Ultra" Quality Preset I only saw a very small increase at 4K "Ultra". However, at 1440p "Ultra" there was no difference.

    These are by no means game-changing numbers here, but it is good to see 4K "High-End" performance increasing, I just wish "Ultra" Quality performance increased more.

    Overall, the bigger changes also seem to be at 4K rather than 1440p.
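    In percentage terms, computed from the FPS figures above:

    ```python
    # Flight Simulator 2020 re-test: (before, after) average FPS per preset
    retest = {
        "1440p High-End": (46.0, 47.8),
        "4K Ultra":       (29.0, 31.3),
        "4K High-End":    (42.6, 46.0),
    }
    for preset, (before, after) in retest.items():
        print(f"{preset}: +{(after / before - 1) * 100:.1f}%")
    # 1440p High-End +3.9%, 4K Ultra +7.9%, 4K High-End +8.0%
    ```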

  32. Thanks, @Brent_Justice for such an in-depth and great review. As always, feel like I’ve been taken back to school. Now just to retain it. Had to wait until tonight until I had time to really read through it.
  33. Enthusiast level cards have always been like this since at least the 8800 Ultra. The 8800 Ultra was about 48% more expensive than an 8800 GTX for 10% more performance.

    Big variable that needs to be considered with the 3090 vs. the 3080, though, is the amount of memory. GDDR6X is supposedly twice as expensive as GDDR6 for 8Gb chips in bulk (around $24/chip compared to $12/chip). That would make the 24GB on the 3090 $576 vs. $240 for the 10GB on the 3080. This is not the only factor accounting for the price difference, but it is a big one.

    If you want the fastest single gaming card available and have the money to buy it, though, then why not.

    I’m aware of the law of diminishing returns. I always try to get the best bang for the buck, which for now IMO is the RTX3080, but will surely get replaced soon by the RTX3070 or RX6000.

    But I agree, whatever makes you happy no matter the cost, it’s fine. Probably if I had the cash, I’d eat my words and end up getting one too 😁😁

  34. But I agree, whatever makes you happy no matter the cost it’s fine. Probably if I had the cash, I’d eat my words and end up getting one too

  34. I used to mock those who bought Titans for gaming back in Maxwell days. Now, I just start saving for whatever the next biggest hammer will be right after a release happens. Best value? Of course not. Best experience? Better believe it. It’s also nice seeing these top tier cards usually age gracefully and knowing you’re going to get at least 2 years of top-end performance out of them. My first x80 Ti was a 1080 Ti and it’s still chugging away 4+ years later at a reasonable level. The 2080 Ti I have now will end up in another rig and still be decent for 1440p for a year or two longer. Initial sticker shock sucks, people jump on the hate trains, but 3-4 years down the road that same card is doing ok and I think to myself what a great ride it’s been.:giggle:

  35. Even if this were a huge upgrade over the RTX 2080 Ti, I’d still wait for the 3090. The only way I’d buy a 3080 is if the 3090 was less than 7% faster than a 3080 at twice the price or something stupid like that.

    Yeah and we both know a 3080Ti will likely come out.

  36. I don’t think it will. I think it will be a 3080 Super. It seems like NVIDIA is getting away from the "Ti" naming scheme.

    I think he means a faster/beefier version of the 3080; whether it’s called Super, Ti, hyper, ultra, or jumbo is irrelevant.

  37. I think he means a faster/beefier version of the 3080; whether it’s called Super, Ti, hyper, ultra, or jumbo is irrelevant.

    Yeah, from memory of ‘Ti’ and ‘Super’ releases, the only real common thread is that they have better specifications than whatever they are a ‘Ti’ or ‘Super’ of. Could be the same GPU die with faster memory, more memory, more compute resources unlocked, the next largest GPU die, or some combination.

  38. I have to say I’m a little bit disappointed in the DLSS+RTX performance hit, as it seems to be comparatively the same as Turing (about 10-20% depending on the game @4K). I’m getting this figure using 1440p RTX performance as a reference, since this is how it’s rendered under DLSS.

    I was expecting much better performance as ampere tensor cores are supposedly 3x faster and rtx cores 2x faster than Turing. Some untapped potential, maybe?

  39. I have to say I’m a little bit disappointed in the DLSS+RTX performance hit, as it seems to be comparatively the same as Turing (about 10-20% depending on the game @4K). I’m getting this figure using 1440p RTX performance as a reference, since this is how it’s rendered under DLSS.

    I was expecting much better performance as ampere tensor cores are supposedly 3x faster and rtx cores 2x faster than Turing. Some untapped potential, maybe?

    I agree.
    But also pretty cool that $699 is legit 4K60fps with just about every game out there.
    Hopefully they can tweak things a bit more down the line.

  40. I have to say I’m a little bit disappointed in the DLSS+RTX performance hit, as it seems to be comparatively the same as Turing (about 10-20% depending on the game @4K). I’m getting this figure using 1440p RTX performance as a reference, since this is how it’s rendered under DLSS.

    I was expecting much better performance as ampere tensor cores are supposedly 3x faster and rtx cores 2x faster than Turing. Some untapped potential, maybe?

    The full RT capability will not be seen in the older RTX titles since they used DXR 1.0. The Wolfenstein update, which showed a much better spread between the 2080 Ti and 3080, better represents the 3080’s RT potential; it is pretty surely using the much better parallelism of the DXR 1.1 enhancements.

    I put a decent amount of effort into obtaining this card, from Nvidia and Best Buy; I really wanted the FE. Nvidia failed to deliver or sell me one. Been trying since without any luck. If Nvidia cannot take care of their customers, then it’s best to move on.

  41. The full RT capability will not be seen in the older RTX titles since they used DXR 1.0. The Wolfenstein update, which showed a much better spread between the 2080 Ti and 3080, better represents the 3080’s RT potential; it is pretty surely using the much better parallelism of the DXR 1.1 enhancements.

    I put a decent amount of effort into obtaining this card, from Nvidia and Best Buy; I really wanted the FE. Nvidia failed to deliver or sell me one. Been trying since without any luck. If Nvidia cannot take care of their customers, then it’s best to move on.

    My fear, that regardless of the manufacturer, is that we’ve gotten to a point where scripts rule over the consumer base. Anyone, with enough capital funds could control the market as long as supply is limited at release. There’s no penalty for the bot-world to just buy up anything and everything at launch, as long as there is demand for their resale.

  42. When should we expect a review of the 3090? I am actually contemplating waiting for the 20GB 3080 depending on how the 3090 performs. For the first time in history I feel like the 3080 is "enough" for my gaming resolution and needs, and that paying double for the 3090 is a waste of money :eek:.
  43. When should we expect a review of the 3090? I am actually contemplating waiting for the 20GB 3080 depending on how the 3090 performs. For the first time in history I feel like the 3080 is "enough" for my gaming resolution and needs, and that paying double for the 3090 is a waste of money :eek:.

    From what I can tell, whenever the NDA lifts it’ll probably be a smaller selection of reviewers than what was seen for the 3080. I’d also guess the lift will happen no later than the card on sale date of 9/24 @ 6AM PDT. At this point, we don’t have one nor do we have any confirmed in the pipeline. As with the 3080, I’ll be F5’ing to try to get one when they launch and we’ll continue to shake down manufacturers for one…

  44. When should we expect a review of the 3090? I am actually contemplating waiting for the 20GB 3080 depending on how the 3090 performs. For the first time in history I feel like the 3080 is "enough" for my gaming resolution and needs, and that paying double for the 3090 is a waste of money :eek:.

    I’m kind of feeling the same. Haven’t had this much ambivalence in a while. For me the real decision will be pricing. If it costs over $1000 then I’ll still go for the 3090. Whether or not it’s GDDR6X could also be a factor.

  45. When should we expect a review of the 3090? I am actually contemplating waiting for the 20GB 3080 depending on how the 3090 performs. For the first time in history I feel like the 3080 is "enough" for my gaming resolution and needs, and that paying double for the 3090 is a waste of money :eek:.

    It is certainly a weird position to be in.

    While there are uses for the, uh, ‘excess’ performance, they don’t seem to merit significant increases in costs.

    Feels kind of like we’re on a divide, where more performance isn’t really useful for pure rasterization on desktops, but also isn’t nearly enough for, say, VR or RT (or both).

  46. It is certainly a weird position to be in.

    While there are uses for the, uh, ‘excess’ performance, they don’t seem to merit significant increases in costs.

    Feels kind of like we’re on a divide, where more performance isn’t really useful for pure rasterization on desktops, but also isn’t nearly enough for, say, VR or RT (or both).

    Only thing really holding me back from a 3080 right now is the 10GB memory simply because I’ve seen the 11GB on my 2080 Ti maxed out in a few games at 4K resolution. And it wasn’t simply usage, in those cases, having experienced degraded performance before turning down texture quality or other settings to reduce VRAM needed. Most games fall into the 6-8GB range right now, but I’m just worried that more games will be coming down the pipe that start running into limitations with 10GB. I do understand that they probably could not hit their $700 target if they added more, though. I can see the 20GB version being $1,000 or close to it unless Micron will have 16Gb chips ready when it hits production.

  47. 10 is still more than 8. Remember, the 3080 isn’t an upgrade from the 2080 Ti, it’s an upgrade from the 2080/SUPER. If you have a 2080 Ti I’d recommend keeping it; the 3090 is really closer to the 2080 Ti replacement, but maybe there will be a middle card in the future, or there is of course the more expensive 20GB 3080 option

    as for games utilizing more VRAM, well, I’m not sure what the trend will be. If DLSS is used more, that’s the answer to the VRAM capacity problem; it will alleviate so much pressure on capacity when used

    games are also constantly developing new compression methods, and ways to load balance everything correctly; with RTX I/O and Microsoft DirectStorage, decompression should be a lot better, and again the VRAM capacity issue won’t be such a problem

    I know your concerns for sure, and it will really all depend on the games themselves, but I do implore you, if a new game supports DLSS, give it a try. I’m actually liking the technology now that I’ve used it, and DLSS 2.0 gives you good image quality and a perf increase

  48. 10 is still more than 8. Remember, the 3080 isn’t an upgrade from the 2080 Ti, it’s an upgrade from the 2080/SUPER. If you have a 2080 Ti I’d recommend keeping it; the 3090 is really closer to the 2080 Ti replacement, but maybe there will be a middle card in the future, or there is of course the more expensive 20GB 3080 option

    as for games utilizing more VRAM, well, I’m not sure what the trend will be. If DLSS is used more, that’s the answer to the VRAM capacity problem; it will alleviate so much pressure on capacity when used

    games are also constantly developing new compression methods, and ways to load balance everything correctly; with RTX I/O and Microsoft DirectStorage, decompression should be a lot better, and again the VRAM capacity issue won’t be such a problem

    I know your concerns for sure, and it will really all depend on the games themselves, but I do implore you, if a new game supports DLSS, give it a try. I’m actually liking the technology now that I’ve used it, and DLSS 2.0 gives you good image quality and a perf increase

    DLSS is great, I agree, but unfortunately I do not think it will become ubiquitous.

  49. These are just rumors, but rumors are AMD will have something similar to DLSS coming.

    If that can happen, and maybe some form of standard API can be achieved, then maybe it will be used more.

    Like Ray Tracing, someone had to get the ball rolling first.

  50. DLSS made enough of a difference to me personally that I simply would not buy a GPU without it.
    Control and Wolfenstein:YB alone were worth the price of admission to play in 4K with an RTX2070.

    I was originally going to buy an R7. Glad I didn’t.
    RTX and DLSS were way more fun and useful to me than an extra 8GB of RAM could ever have been.

    Maybe I should just get a 3090 this time around and game on for the next 3 years; it seems very likely that 2080ti owners will get 3 years out of theirs.
    Something to be said about buying the best available stuff…

  51. These are just rumors, but rumors are AMD will have something similar to DLSS coming.

    If that can happen, and maybe some form of standard API can be achieved, then maybe it will be used more.

    Like Ray Tracing, someone had to get the ball rolling first.

    Is Contrast Adaptive Sharpening not AMD’s version of DLSS?

  52. Only thing really holding me back from a 3080 right now is the 10GB memory simply because I’ve seen the 11GB on my 2080 Ti maxed out in a few games at 4K resolution. And it wasn’t simply usage, in those cases, having experienced degraded performance before turning down texture quality or other settings to reduce VRAM needed. Most games fall into the 6-8GB range right now, but I’m just worried that more games will be coming down the pipe that start running into limitations with 10GB.

    I do feel the same way; it’s not even that the 3080 has less than the 2080 Ti (which as noted by others, the 2080 should be the point of comparison), but that memory didn’t increase much.

    I do understand that they probably could not hit their $700 target if they added more, though. I can see the 20GB version being $1,000 or close to it unless Micron will have 16Gb chips ready when it hits production.

    I kind of feel like it’s worth waiting. Part of that at least is coming from a 1080Ti and not really wanting to go backward in VRAM capacity particularly given how long I’m likely to keep the new card.

    as for games utilizing more VRAM, well, I’m not sure what the trend will be. If DLSS is used more, that’s the answer to the VRAM capacity problem; it will alleviate so much pressure on capacity when used

    games are also constantly developing new compression methods, and ways to load balance everything correctly; with RTX I/O and Microsoft DirectStorage, decompression should be a lot better, and again the VRAM capacity issue won’t be such a problem

    As much as I admire upcoming solutions to the VRAM problem… these are ‘high-end’ solutions that require significant developer support. I can’t help but imagine that there might be games that slip through the cracks which wind up benefiting from the increased VRAM due to lack of optimization.

    That’s also compounded by waiting every other generation or so to upgrade in my case. More frequent upgraders probably have less to worry about!

  53. Is Contrast Adaptive Sharpening not AMD’s version of DLSS?

    No, that’s closer to NVIDIA’s Sharpening filter in the control panel

    https://nvidia.custhelp.com/app/ans…-image-sharpening-in-the-nvidia-control-panel

    DLSS uses AI (Tensor Cores) to scale an image upward, trained against baseline, highly super-sampled (like 16x) reference images processed by NVIDIA servers offline; the game renders at a lower resolution and is upscaled by AI. It’s much more complex.

    This is why NVIDIA’s method provides faster performance, cause it’s rendering at a lower resolution and then uses hardware to basically upscale it to a reference image with no loss in performance doing so.

    AMD’s method still renders at the same resolution, and there is no AI upscaling. It doesn’t improve performance, only sharpens image quality when temporal antialiasing is used.

    Now, there is supposed to be a feature of CAS that can scale an image. However, I don’t know of an example of it, and you really don’t hear about performance increases when CAS is used. The video card is not using AI to upscale, cause there’s no method for that ATM. That’s what sets DLSS far apart from CAS; it’s much more functional.

    However, I need to read into CAS a bit more, I’m not 100% on how it exactly works, I need to read a whitepaper or something. But so far, it hasn’t been marketed as a feature to improve performance, but only to improve image quality.

    It’s quite possible AMD could make a new version of CAS that is DLSS-like when they have the hardware to do so. Or, they could brand their "DLSS" equivalent as a whole new feature name. Who knows, but the rumor is AMD will be coming out with something DLSS-like, and I’m not sure that’s CAS.

  54. I thought FidelityFX was closer to DLSS than CAS?


    FidelityFX is a suite of technologies, branded under the FidelityFX name. There are many features branded under that name.

    There is:

    FidelityFX Contrast Adaptive Sharpening
    FidelityFX Screen Space Reflections
    FidelityFX Combined Adaptive Compute Ambient Occlusion
    FidelityFX Luminance Preserving Mapper
    FidelityFX Single Pass Downsampler

    and more

    So a game can have only one of these features and still be said to have FidelityFX technology, or it can have several of them.

    So the thing to look for in games is which of these specific FidelityFX features it is using; it could be only one, or multiple.
