Introduction

Last year, we wrote an article exploring the overclocking capabilities of the GeForce RTX 2070 Super FE. In the interim, we’ve realized that it’s about time to work our way through the current generation of video cards on the market and publish an overclocking article on each, so you can see how much headroom each card has and what performance gains you can expect. These overclocking articles will also serve as a baseline for our comparisons to retail cards as we strap them to the test bench over the next few months.

Our Overclocking Methodology

Overclocking video cards can sometimes be more of an art than a science: there are a number of ways to find the best-performing combination of settings, and different ways to evaluate the stability of an overclock. To complicate matters further, the final overclocked speed is not what we dial in as our settings, but rather what the card will boost to based on actual gameplay, available power, and thermals.

For the NVIDIA RTX Founders Edition cards, we approach overclocking by raising the power limit to its maximum within MSI Afterburner (including the extended-range options) and then increasing the GPU clock until we find the highest stable value. We then use trial and error to see what additional performance, if any, we can gain by adding voltage to the GPU. Once we know the most we can expect from the GPU, we return it to stock clocks and see what the memory can do on its own. With these individual limits established, we increase both in tandem until we find the best-performing combination of GPU and memory overclocks.
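
Expressed as pseudocode, this amounts to a simple coordinate search. The sketch below is illustrative only: `apply_offsets`, `set_power_limit_percent`, and `is_stable` are hypothetical stand-ins for dialing settings into MSI Afterburner by hand and then playing games for hours, not real APIs.

```python
# Illustrative sketch of our tuning procedure. apply_offsets(),
# set_power_limit_percent(), and is_stable() are hypothetical stand-ins
# for manual Afterburner adjustments and hours of gameplay testing.

STEP_MHZ = 15  # raise offsets in small increments

def max_stable(apply_offset, ceiling):
    """Walk one offset upward until instability, then back off a step."""
    offset = 0
    while offset + STEP_MHZ <= ceiling:
        apply_offset(offset + STEP_MHZ)
        if not is_stable():          # hours of mixed-game testing
            break
        offset += STEP_MHZ
    return offset

# 1. Max the power limit first so it never caps the search.
set_power_limit_percent(123)

# 2. Highest stable GPU offset with memory at stock.
best_gpu = max_stable(lambda o: apply_offsets(gpu=o, mem=0), ceiling=300)

# 3. Highest stable memory offset with the GPU at stock.
best_mem = max_stable(lambda o: apply_offsets(gpu=0, mem=o), ceiling=1500)

# 4. Combine, backing each off until the pair holds together; the
#    combined result usually lands below the individual maxima.
gpu, mem = best_gpu, best_mem
apply_offsets(gpu=gpu, mem=mem)
while not is_stable():
    gpu -= STEP_MHZ
    mem -= STEP_MHZ
    apply_offsets(gpu=gpu, mem=mem)
```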

We consider an overclock stable when we can play several different games on it for multiple hours without a crash. Over the years, we’ve found that synthetic benchmarks or a single game are not an effective way to determine the stability of an overclock, so we take our time to make sure we can throw any task at the card without it crashing. As a result, our highest achieved overclocks may end up lower than what you see elsewhere, simply because our stability bar is set higher.

NVIDIA GeForce RTX 2080 Ti Founders Edition

Let’s recap what the GeForce RTX 2080 Ti Founders Edition is all about, as it has been a while since its September 19, 2018 launch. The GeForce RTX 2080 Ti Founders Edition launched at a suggested retail price of $1,199, a $200 premium over the $999 suggested retail price for the reference-specification GeForce RTX 2080 Ti. Over the past year and a half, we’ve seen most vendors selling cards at about the same price point as the Founders Edition. The Founders Edition allows for a slightly higher Graphics Card Power (260W vs. 250W) and higher boost clocks than the reference 2080 Ti (1635MHz vs. 1545MHz).

Aside from those differences, the GeForce RTX 2080 Ti Founders Edition shares the reference specifications of the Turing TU102 GPU: 11GB of 14Gbps GDDR6 memory on a 352-bit memory interface, 88 ROPs, 4,352 CUDA cores and 544 Tensor cores. It sports two 8-pin PCIe power connectors, twin fans, three DisplayPort outputs, one HDMI port and one USB Type-C connector.
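
Those memory specifications pin down the card’s peak bandwidth, and the arithmetic is worth a quick check:

```python
# Peak memory bandwidth from the reference specs above:
# 14 Gbps effective data rate per pin across a 352-bit bus, 8 bits per byte.
data_rate_gbps = 14
bus_width_bits = 352
bandwidth_gb_per_s = data_rate_gbps * bus_width_bits / 8
print(bandwidth_gb_per_s)  # 616.0 GB/s
```

A memory overclock scales that figure linearly, which is why memory offsets can move frame rates in bandwidth-bound titles.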

David Schroth

David is a computer hardware enthusiast who has been tinkering with PCs for the past 25 years.

Join the Conversation

22 Comments

  1. Glad to see the test system step up in performance. Also, it seems like we need a new 4K gaming card if an overclocked 2080 Ti can’t hit 60 anymore in these current titles.
  2. Sad to say that unless NV can deliver on that 20-30% increase rumor, even the next release will still have difficulty. The problem is that games are advancing to such a degree that even the best GPU now is still 2-3 years behind the most demanding games. RDR2 will be crushing GPUs for years to come. Metro, Control, and a few others will do the same, but not as much.

    I’ve considered delaying new games for a few years so the cost of the GPU upgrades needed for max image quality would be lessened.
    But it won’t help that much if GPU performance advances slowly; the very fastest current card will still be needed to get the best from many older games.
    After this next gen, the move to chiplets is sure to take a while to get right (i.e., bug-free with high enough performance) and to become cost-effective.
    I hope NVIDIA has something up their sleeve that will leave the crickets in my wallet something to feed on!

  3. > After this next gen, the move to chiplets is sure to take a while to get right (i.e., bug-free with high enough performance) and to become cost-effective.
    > I hope NVIDIA has something up their sleeve that will leave the crickets in my wallet something to feed on!

    Pretty sure we’ve got a story about that coming out soon.

  4. Chiplets for CPUs have been embraced very quickly. I don’t see why video cards would be vastly different in that specific regard, as long as the controllers delivering content to the chiplets can do it well.
  5. > After this next gen, the move to chiplets is sure to take a while to get right (i.e., bug-free with high enough performance) and to become cost-effective.
    > I hope NVIDIA has something up their sleeve that will leave the crickets in my wallet something to feed on!

    What if AMD had something up their sleeve?

    I do agree that chiplets have promise. GPUs are already highly parallel, much more so than even CPUs. Chiplets have the potential to drop overall GPU prices, as you aren’t requiring one single large die to cram all those cores into, so yields could theoretically be better, giving lower price points.

    But I think chiplets will run into the traditional roadblock: power. Your process node and architecture are always going to limit you on the clock-speed/power curve, and as long as you’re trying to fit into a PCIe form factor, your max power is going to be capped by the sheer physical size of the heat-removal equipment. Chiplets don’t really do much at all to address that, other than giving you a bit more area to dissipate heat across.

  6. Imagine a grid layout with chiplets and memory chips on a board with a big actively cooled heatsink over all of it. You could have grids of four chiplets surrounded by memory and controllers… just one big card that’s really one big GPU, with all of the pieces spread across the board. Not sure how efficient that would be… but it sounds cool! ;) Then the card could turn on only the chiplets it needs for whatever is being demanded of it. Mmmm… interesting… too advanced, I think.

    The problem with what I just said is the parallelism of the processing: what’s going to coordinate all of that in a timely manner for local rendering and delivery to the display devices? It seems like it would need PCIe 4.x bandwidth… or even more.

  7. > What if AMD had something up their sleeve?
    >
    > I do agree that chiplets have promise. GPUs are already highly parallel, much more so than even CPUs. Chiplets have the potential to drop overall GPU prices, as you aren’t requiring one single large die to cram all those cores into, so yields could theoretically be better, giving lower price points.
    >
    > But I think chiplets will run into the traditional roadblock: power. Your process node and architecture are always going to limit you on the clock-speed/power curve, and as long as you’re trying to fit into a PCIe form factor, your max power is going to be capped by the sheer physical size of the heat-removal equipment. Chiplets don’t really do much at all to address that, other than giving you a bit more area to dissipate heat across.

    The problem with AMD is driver quality. I’ve been subjected to this with my last two AMD cards and had a terrible time of it; my very first card (X1800) was OK, though.
    The second bad experience caused me to sell my 290X prematurely and go NVIDIA. It was plain sailing from then on, a real relief.
    I’ve been reading about the problems with Navi; I would have sold my card if I owned one, as that would drive me mad.
    As things are, I’m warned off AMD as they still don’t have a handle on it.

    I think it will be a little easier to cool chiplets because they can be spaced further apart, preventing the cooler from saturating as easily within a single cooling area.
    The smaller process will undoubtedly throw up issues, but this won’t be unique to chiplet designs.

  8. > I’ve had no end of driver problems with my 980GTX since Win10 and I’m hardly alone there…

    Not discounting your experience, I just don’t buy it anymore when someone says “drivers” and then posts about cards that are nearly a decade old.

  9. > I’ve had no end of driver problems with my 980GTX since Win10 and I’m hardly alone there…
    >
    > Not discounting your experience, I just don’t buy it anymore when someone says “drivers” and then posts about cards that are nearly a decade old.

    In my case it’s worth pointing out that the cards were new and the problems continued for the time I owned them (at least 1.5 years each).

    I stayed with Windows 7 for most of my gaming to avoid issues with Windows 10.
    There’s no point running the gauntlet when I don’t have to.

  10. I hadn’t found the need to overclock my GPU before and had been using it at stock clocks for 10 months, but with CP2077 upcoming and Control running like a dog, I decided to give it a go. I didn’t expect much, as I’ve got probably one of the lowest-binned units out there: a GIGABYTE Turbo with a factory blower cooler, which I replaced with an Arctic Accelero Xtreme IV.

    To my surprise I’m now at +250 core and +600 memory, which netted me a +10% score increase in the Port Royal benchmark and took me from 35 to 41 FPS average in it. And I haven’t even touched the vcore yet, assuming I can at all on this card. A quick Google search suggests I got lucky, as most seem to top out between +150 and +200 on the core.

  11. > I hadn’t found the need to overclock my GPU before and had been using it at stock clocks for 10 months, but with CP2077 upcoming and Control running like a dog, I decided to give it a go. I didn’t expect much, as I’ve got probably one of the lowest-binned units out there: a GIGABYTE Turbo with a factory blower cooler, which I replaced with an Arctic Accelero Xtreme IV.
    >
    > To my surprise I’m now at +250 core and +600 memory, which netted me a +10% score increase in the Port Royal benchmark and took me from 35 to 41 FPS average in it. And I haven’t even touched the vcore yet, assuming I can at all on this card. A quick Google search suggests I got lucky, as most seem to top out between +150 and +200 on the core.

    Touching the vcore will likely destabilize the memory OC, as you’re probably bumping up against your total board power limit.

    I would suggest trying to max out the GPU and memory separately to see what each is capable of, then figure out the right mix for performance…

  12. > I hadn’t found the need to overclock my GPU before and had been using it at stock clocks for 10 months, but with CP2077 upcoming and Control running like a dog, I decided to give it a go. I didn’t expect much, as I’ve got probably one of the lowest-binned units out there: a GIGABYTE Turbo with a factory blower cooler, which I replaced with an Arctic Accelero Xtreme IV.
    >
    > To my surprise I’m now at +250 core and +600 memory, which netted me a +10% score increase in the Port Royal benchmark and took me from 35 to 41 FPS average in it. And I haven’t even touched the vcore yet, assuming I can at all on this card. A quick Google search suggests I got lucky, as most seem to top out between +150 and +200 on the core.

    The better the card’s stock cooler (and the more tweaked its BIOS already is), the smaller the increase you’d see from fitting an Accelero cooler (stated as an example, not a recommendation).
    That is, cards with better coolers/BIOSes already run at much higher speeds, so there will be a lot less overclocking headroom.
    Actual clock speed, fully stable, is the only real benchmark.

    What I like is the slight automatic performance increase without overclocking due to lower temps, and that it’s so darn quiet at max fan speed with an Accelero!
    It’s a great cooler.

  13. > The better the card’s stock cooler (and the more tweaked its BIOS already is), the smaller the increase you’d see from fitting an Accelero cooler (stated as an example, not a recommendation).
    > That is, cards with better coolers/BIOSes already run at much higher speeds, so there will be a lot less overclocking headroom.
    > Actual clock speed, fully stable, is the only real benchmark.
    >
    > What I like is the slight automatic performance increase without overclocking due to lower temps, and that it’s so darn quiet at max fan speed with an Accelero!
    > It’s a great cooler.

    It’s ugly as sin, but I do appreciate that they prioritized function over form.

  14. > Actual clock speed, fully stable, is the only real benchmark.

    How do you measure actual clock speed when it’s boosting all over the place?

    > What I like is the slight automatic performance increase without overclocking due to lower temps, and that it’s so darn quiet at max fan speed with an Accelero!
    > It’s a great cooler.

    It’s loud even at 50%; I don’t tolerate fan noise very well.

  15. > How do you measure actual clock speed when it’s boosting all over the place?
    >
    > It’s loud even at 50%; I don’t tolerate fan noise very well.

    Once you’re in game it typically flattens out after the card warms up; see the chart of frequency over time in the article linked in the OP.

  16. > Once you’re in game it typically flattens out after the card warms up; see the chart of frequency over time in the article linked in the OP.

    Yeah, but how far it boosts depends on the actual application. If it’s extremely demanding, the clock will be much lower. For example, FurMark only boosts to around 1600MHz for me.
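
For readers who want to capture that frequency-over-time behavior themselves rather than eyeballing a chart, here is a minimal logging sketch using the pynvml bindings to NVIDIA’s NVML library. It assumes a single GPU at index 0; run it in the background while a game is loading and playing, then look for the plateau after warm-up.

```python
# Minimal sketch: log the actual graphics clock (and temperature) once
# per second while a game runs, using the pynvml bindings to NVIDIA's
# NVML library (pip install nvidia-ml-py). Assumes one GPU at index 0.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

try:
    while True:
        clock_mhz = pynvml.nvmlDeviceGetClockInfo(
            handle, pynvml.NVML_CLOCK_GRAPHICS)
        temp_c = pynvml.nvmlDeviceGetTemperature(
            handle, pynvml.NVML_TEMPERATURE_GPU)
        print(f"{time.strftime('%H:%M:%S')}  {clock_mhz} MHz  {temp_c} C")
        time.sleep(1)
except KeyboardInterrupt:
    pass
finally:
    pynvml.nvmlShutdown()
```

The sustained clock after the warm-up plateau, not the instantaneous peak, is the number worth quoting, which matches the FurMark point above: a heavier load sits lower on the boost curve.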
