Image: NVIDIA

PCWorld’s Brad Chacos has published a story regarding NVIDIA’s new GeForce Game Ready 456.55 WHQL driver. As suggested by various user reports over the last few hours, the driver does appear to put a band-aid on the game-crashing issues that custom GeForce RTX 3080 owners have been complaining about over the past few days. But there’s a cost: reduced clock speeds.

“…with the original drivers, the [Horizon Zero Dawn] benchmark ran at a mostly consistent 2010MHz on this GPU until it hit the 2025MHz wall and crapped out,” Chacos noted. “With Nvidia’s new 456.55 drivers, the GPU clock speed in Horizon Zero Dawn now flitters between 1980MHz and 1995MHz (though it can still hit a stable 2010MHz in menu screens). Nvidia’s fix appears to be dialing back maximum performance with the GPU Boost feature.”

Of course, none of you should be surprised that downclocking is the fix. While the GeForce RTX 3080 drama appears to be evolving beyond POSCAP and MLCC capacitors, it should be safe to say that higher clocks are the trigger for whatever lapse in engineering occurred with certain variants. Users had already been advised to dial down their clocks with software such as MSI Afterburner.

“On the all-POSCAP [EVGA] FTW3, using Nvidia’s original 456.38 drivers, the Horizon Zero Dawn benchmark consistently crashes at 1440p resolution on our test system,” Chacos confirmed. “Other games work fine; even HZD works fine at 4K; but it always crashes at 1440p, and so hard that it takes out GPU-Z, too.”

“After that, I ran the fantastic Display Driver Uninstaller software to wipe Nvidia’s 456.38 drivers from my system, and installed the new 456.55 drivers with promised stability fixes,” he continued. “That made Horizon Zero Dawn repeat its excruciating shader optimization process, but after that, the game just worked. I’ve rerun the benchmark five times with the 456.55 drivers installed and successfully completed every run. It just works, now.”

According to Chacos, the reduction in clock speed is so minor that nobody should notice. NVIDIA also sent him the following statement, which has since been reposted on the green team’s official forum.

“NVIDIA posted a driver this morning that improves stability. Regarding partner board designs, our partners regularly customize their designs and we work closely with them in the process. The appropriate number of POSCAP vs. MLCC groupings can vary depending on the design and is not necessarily indicative of quality.”
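For a sense of scale, the clock delta Chacos measured works out to roughly 1.5 percent. A quick back-of-the-envelope check, assuming, generously, that frame rate scales linearly with core clock (most games don’t sit at the boost ceiling, so the real-world hit should be smaller):

```python
# Back-of-the-envelope impact of the 456.55 clock reduction, using the
# figures reported above. Assumes, generously, that frame rate scales
# linearly with core clock; in practice the hit is smaller.

OLD_CLOCK_MHZ = 2010  # mostly consistent boost clock under 456.38
NEW_CLOCK_MHZ = 1980  # low end of the 1980-1995MHz range under 456.55

reduction = (OLD_CLOCK_MHZ - NEW_CLOCK_MHZ) / OLD_CLOCK_MHZ
worst_case_fps = 100 * (1 - reduction)  # at a 100 FPS baseline

print(f"Clock reduction: {reduction:.1%}")               # 1.5%
print(f"Worst case at 100 FPS: ~{worst_case_fps:.1f}")   # ~98.5
```

Even under that pessimistic linear-scaling assumption, a 100 FPS game loses at most about 1.5 FPS.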


Join the Conversation

  1. I saw this coming a mile away.

    However, I would counter: why not send AIBs an updated minimum hardware requirement for the reference spec instead? Amend the minimum reference spec AIBs use to design cards; that also solves the problem, and we’d be able to retain the full GPU Boost clock.

  2. [QUOTE=”Brent_Justice, post: 19307, member: 3″]
    I saw this coming a mile away.

    However, I would counter: why not send AIBs an updated minimum hardware requirement for the reference spec instead? Amend the minimum reference spec AIBs use to design cards; that also solves the problem, and we’d be able to retain the full GPU Boost clock.
    [/QUOTE]

    That, and more opportunity for testing.

    As this little “scandal” wanes, though, no real harm done.
    Availability is still a far bigger issue.

  3. [QUOTE=”Brent_Justice, post: 19313, member: 3″]
    Also, this just goes to show how bad Samsung 8nm (really a custom 10nm node) was as a choice. I don’t think this would be happening on TSMC 7nm.
    [/QUOTE]

    I don’t get it.

    This seems like a lack of time, testing and poor component choices rather than the 8nm being bad…?
    I mean, these cards perform very well now (so far).

    Are you saying that the TSMC 7nm would have behaved differently with the same under spec’d components?

  4. [QUOTE=”Auer, post: 19316, member: 225″]
    I don’t get it.

    This seems like a lack of time, testing and poor component choices rather than the 8nm being bad…?
    I mean, these cards perform very well now (so far).

    Are you saying that the TSMC 7nm would have behaved differently with the same under spec’d components?
    [/QUOTE]
    I don’t think it was poor component choice, just going by what has worked in the past. Lack of time and testing was certainly the big factor in this issue. Doesn’t seem like FE cards are seeing this issue, so it’s only one for AIB cards.

  5. [QUOTE=”Armenius, post: 19320, member: 180″]
    I don’t think it was poor component choice, just going by what has worked in the past. Lack of time and testing was certainly the big factor in this issue. Doesn’t seem like FE cards are seeing this issue, so it’s only one for AIB cards.
    [/QUOTE]
    Wrong component for this type of boost; if you watch der8auer swapping out caps, the crashes end…so too much boost for the caps used, no?

    Either way, an adjustment in drivers seems to have fixed that. So far most testers have reported little to no negative effects that I can tell.
    I don’t count losing 1-2 fps…

  6. I’ll add this here too:

    ” During testing, we also re-ran the benchmarks, and it had offset effects that are close to zero, meaning at 100 FPS you’d perhaps see a 1 FPS differential, but that can be easily assigned to random anomalies as well. As to why there is so little performance decrease is simple, not many games trigger the GPU all the way the end of the spectrum at say 2050 MHz. That’s isolated to very few titles as most games are GPU bound and hover in the 1900 MHz domain.”

    [URL unfurl=”true”]https://www.guru3d.com/news-story/geforce-rtx-3080-ctd-issues-likely-due-to-poscap-and-mlcc-configuration.html[/URL]

  7. [QUOTE=”Armenius, post: 19320, member: 180″]
    I don’t think it was poor component choice, just going by what has worked in the past. Lack of time and testing was certainly the big factor in this issue. Doesn’t seem like FE cards are seeing this issue, so it’s only one for AIB cards.
    [/QUOTE]

    The FE cards don’t have the issue because NVIDIA built them above and beyond the minimum reference spec, with 4 POSCAPs + 20 MLCC.

    It’s a component issue primarily.

  8. Remember, this generation, FE cards are NOT the reference design or spec. NVIDIA has a separate, minimum reference spec sent to AIBs. That spec says 6 POSCAPs is the minimum.

    The Founders Edition is NVIDIA acting like an AIB and making its own card, superseding the reference spec. They put the power filters on the back in a more robust config than the minimum reference spec calls for, hence, no issues.

    What has been found out, though, is that with GPU Boost raising the clock, 6 POSCAPs aren’t 100% stable. They’re stable at the reference base clock, but GPU Boost pushes the clock much higher, and they just can’t take it. However, replacing just 1 POSCAP with a group of 10 MLCCs seems to solve the issue. And overall, the fewer POSCAPs and the more MLCCs there are, the better the potential for overclocking as well.

  9. [QUOTE=”LeRoy_Blanchard, post: 19371, member: 137″]
    So Nvidia has driver issues. This isn’t AMD right?
    [/QUOTE]
    No, they had a hardware issue and fixed it with a driver.

  10. [QUOTE=”Auer, post: 19376, member: 225″]
    No, they had a hardware issue and fixed it with a driver.
    [/QUOTE]

    Kind of sounds more like they had the wrong settings for the driver that the hardware wasn’t able to utilize. I mean, isn’t that the same with ALL driver issues? Something the driver requests or wants but the hardware isn’t capable of providing?

  11. [QUOTE=”LeRoy_Blanchard, post: 19371, member: 137″]
    So Nvidia has driver issues. This isn’t AMD right?
    [/QUOTE]
    If we (and AMD) were still trying to figure out what was going on six months later, [I]then[/I] it’d be like AMD.

    It’s definitely still a black mark for Nvidia, and makes two in a row with the 2080 Ti’s Space Invaders.

  12. Is this going to downgrade all 3080s or just the ones with the lesser power delivery? That should be the real question.

  13. [QUOTE=”LeRoy_Blanchard, post: 19378, member: 137″]
    Kind of sounds more like they had the wrong settings for the driver that the hardware wasn’t able to utilize. I mean, isn’t that the same with ALL driver issues? Something the driver requests or wants but the hardware isn’t capable of providing?
    [/QUOTE]
    Yes, I don’t think this is the first time a situation like this has happened on release.

  14. It looks more like the AIBs were trying to push the reference design just a tad past what it was capable of. All of the affected cards hit reference clocks with no problem. It’s when they tried to boost beyond reference boost clocks that the issue presents itself.

    More of a nothingburger; the AIBs just tried to squeeze a bit more performance out than they should have.

  15. If I had bought one, I would be pissed.

    Question is, does this driver downclock only the affected 3080 models, or all of them?

    Does it make sense to re-run review benchmarks on this one?
