Image: Amazon Game Studios

Amazon’s upcoming massively multiplayer online role-playing game, New World, is reportedly so demanding and poorly developed that it’s destroying select graphics cards such as NVIDIA’s GeForce RTX 3090. This is according to various closed beta testers who have vented their frustration on New World’s official forums and other social channels, claiming that their GPUs no longer function after attempting to run the game. Many of the complaints appear to be coming from EVGA GeForce RTX 3090 FTW3 ULTRA owners.

So after hitting the play button for the New World beta, the game started to load, followed immediately by fan speeds increasing to 100%, fps dropping to 0, and then my monitors turning off and my video card no longer being detected. So I rebooted the PC and everything seemed to work fine; I even tried a few other games to make sure and had no problems. So I hit play on New World again and the same thing happened, but this time I heard a loud pop and now my 3090 won’t get past POST on bootup.

Update – 7/21/2021

Amazon wrote us with their official reply to the recent issues – below is their direct quote:

“Hundreds of thousands of people played in the New World Closed Beta yesterday, with millions of total hours played. We’ve received a few reports of players using high-performance graphics cards experiencing hardware failure when playing New World.

New World makes standard DirectX calls as provided by the Windows API. We have seen no indication of widespread issues with 3090s, either in the beta or during our many months of alpha testing.

The New World Closed Beta is safe to play. In order to further reassure players, we will implement a patch today that caps frames per second on our menu screen. We’re grateful for the support New World is receiving from players around the world, and will keep listening to their feedback throughout Beta and beyond.”
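Uncapped menus are a classic way to peg a GPU: a near-empty scene renders in microseconds, so the card draws frames as fast as it can at maximum power. The kind of menu frame cap Amazon describes can be sketched roughly as follows (a minimal Python illustration, not Amazon’s actual code; `render_frame`, the 60 fps default, and the sleep-based timing are all assumptions):

```python
import time

def run_menu_loop(render_frame, fps_cap=60, duration=1.0):
    """Render a menu scene at no more than fps_cap frames per second.

    Without the sleep below, a trivial menu scene can render thousands
    of frames per second, driving the GPU to maximum power draw.
    """
    frame_budget = 1.0 / fps_cap          # seconds allotted per frame
    start = time.perf_counter()
    frames = 0
    while time.perf_counter() - start < duration:
        frame_start = time.perf_counter()
        render_frame()                    # draw the (cheap) menu scene
        elapsed = time.perf_counter() - frame_start
        if elapsed < frame_budget:        # finished early: idle instead
            time.sleep(frame_budget - elapsed)
        frames += 1
    return frames
```

Real engines typically do this on the render thread with a high-resolution timer or lean on vsync, but the idea is the same: spend the leftover frame budget idle instead of immediately rendering again.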

Sources: New World, r/newworldgame

35 Comments

  1. My wife was telling me about this this morning. Why would a game cause the graphics card to overheat and pop? Wouldn’t that be the fault of the driver? I thought thermal protections were provided by the hardware/driver.
    1. I was thinking that too. GPUs these days have thermal protections to prevent this sort of thing, unless the software is somehow circumventing those protections. This issue reminds me of StarCraft II all over again, though this time it seems to be only affecting EVGA cards.
  2. My wife was telling me about this this morning. Why would a game cause the graphics card to overheat and pop? Wouldn’t that be the fault of the driver? I thought thermal protections were provided by the hardware/driver.

    I’ve seen it before. I’m not sure what causes this or why it happens, but some games can actually trigger this behavior. Star Wars: The Old Republic used to do this with certain AMD Radeon HD cards like the 7970. Now, the cards wouldn’t die, but despite the game not being that demanding, the card would run so hot that the computer would either lock up or the screen would go black followed by a reboot. I’d run the fan at 100% and the temperature in that game alone would creep up to throttling level, ending eventually in a black screen and reboot or a hard lockup. It would run for 20 to 30 minutes, heat soaking itself all the while, and then eventually cross the throttling threshold so fast the video card would just stop doing anything.

    If I ran the fans at standard speeds or left them on automatic, you could game on it for 5 minutes. Meanwhile, the card would crush whatever AAA games were out at the time in the same system without issue. In doing some research on the problem back then, I found out that it wasn’t unheard of. I’ve since heard of this with various NVIDIA cards and specific games, but it’s pretty rare. It also doesn’t happen to everyone.

  3. I was thinking that too. GPUs these days have thermal protections to prevent this sort of thing, unless the software is somehow circumventing those protections. This issue reminds me of StarCraft II all over again, though this time it seems to be only affecting EVGA cards.

    I doubt it’s circumventing thermal protection. It’s probably doing something that’s causing them to heat up so fast that the thermal protection doesn’t have time to do its job before damage occurs.

  4. I recall Doom 3 really stressed my FX 5900 a lot, to the point of overheating and lockup, but not my FX6800. Weird…
  5. My GT 1030 is ready. Passive cooling, got a Lasko box fan on it….

    This game has no appeal to me. Looks like every other MMO out there. Sure to be pay-to-win.

  6. Amazon provided a comment for us, so it has been added to the post.

    Now that Amazon mentions it, I do seem to recall that cases of runaway temps on graphics cards in specific games often came down to excessively high frame rates under specific conditions. By the sound of it, the patch should fix that.

  7. Reminds me of the ‘power virus’ description that FurMark got from AMD… still surprising that it’s even possible, though.

    I get that there’s more juice going through GPUs than CPUs, but Intel had this figured out decades ago. Some of us found out the hard way with S462 CPUs, when coolers failed, that AMD hadn’t quite figured it out yet. Intel CPUs would just shut down.

    Guess Nvidia hasn’t fully figured it out yet either :oops:

  8. Reminds me of the ‘power virus’ description that FurMark got from AMD… still surprising that it’s even possible, though.

    I get that there’s more juice going through GPUs than CPUs, but Intel had this figured out decades ago. Some of us found out the hard way with S462 CPUs, when coolers failed, that AMD hadn’t quite figured it out yet. Intel CPUs would just shut down.

    Guess Nvidia hasn’t fully figured it out yet either :oops:

    Well, apparently it only happens with some EVGA RTX 3090s, so it’s more like EVGA’s fault rather than NVIDIA’s.

  9. Well, apparently it only happens with some EVGA RTX 3090s, so it’s more like EVGA’s fault rather than NVIDIA’s.

    It’s almost certainly due to the firmware on the cards being more aggressive than your average RTX 3090’s.

  10. Well, apparently it only happens with some EVGA RTX 3090s, so it’s more like EVGA’s fault rather than NVIDIA’s.

    Sort of?

    Again, the Intel CPUs would shut themselves down. You could still damage them from applying too much voltage through the motherboard, but that was your choice.

    This is an MMO. Yeah, they can turn on all the pretties, but do you really expect loading up a game to kill a GPU?

  11. Hmm. I do have an EVGA 3090 FTW3 and XC3 here I could test this out with. I think I’ll pass though.
  12. They had my ‘pass’ at Amazon MMO. Both parts, really. Games that start to feel like jobs just don’t hold my attention, and believe me, I’ve tried.

    As for working to demonstrate that it’s a problem, maybe you can get someone to sponsor the test?

    :cool:

  13. I don’t care for MMOs. I have even less interest in playing one that could potentially fry my RTX 3090 FE.
  14. I’d imagine 3080 Tis are also affected since they’re basically the same card as a 3090.
  15. I’d imagine 3080 Tis are also affected since they’re basically the same card as a 3090.

    I don’t think so.

    The 3080 Ti is like a traditional GPU.

    The 3090 has extra memory that needs additional cooling & monitoring.

  16. Memory on both sides of the PCB has been a thing since before they were called ‘GPUs’.

    My interpretation was based on the statement below that the 3090 was unique in its design:

    “A dual-sided memory will allow this board to feature up to 24GB of memory. The RTX 3080, also based on the GA102 GPU, should only need 10 single-sided modules.”

  17. Dang, when something can tank one of the fastest gaming cards on the planet, well that’s pretty bad. Either way, I had no interest in this anyway. MMO, pass, Amazon, even harder pass.

    Tinfoil hat theory: It’s secretly using GPUs to mine data for nav computers for Bezos Mars mission.

  18. Memory on both sides of the PCB has been a thing since before they were called ‘GPUs’.

    My GeForce2 Ultra has memory on both sides. I think my Voodoo2 12MBs do too, but I’d have to go open up a case to verify.

  19. I don’t think so.

    The 3080 Ti is like a traditional GPU.

    The 3090 has extra memory that needs additional cooling & monitoring.

    There is no additional “monitoring” with more RAM. I have no idea where you got that idea, but it’s not true. Yes, the RTX 3090 needs more cooling, but that’s primarily for the additional VRAM on the PCB. The RTX 3080 Ti is so close to the RTX 3090 in terms of specs and power consumption that NVIDIA could probably have gotten away with the same cooling solution on the RTX 3090 if it weren’t for it having double the physical memory of the 3080 Ti.

    That’s not it. The reason this likely occurs on RTX 3090s (EVGA XC3 and FTW3 cards specifically) is differences in voltage configuration, which in this particular instance don’t always allow the GPU’s thermal protection to kick in before damage occurs.

  20. J2C did a video on this issue. He used an EVGA FTW3 3090 and an MSI 3090. There was definitely some strange behavior going on with the EVGA card, but not the MSI card. Whatever it is seems to be affecting only EVGA cards.
  21. EVGA is upholding all warranty claims, and RMAs are already going out.
    Amazon released a patch for a problem they claim they didn’t have.

    FTW3 cards are bleeding edge; EVGA even gives you a 500W+ BIOS for the 3090 if you want one.

  22. FTW3 cards are bleeding edge; EVGA even gives you a 500W+ BIOS for the 3090 if you want one.

    Along with a disclaimer that you’re pretty much on your own should anything happen while using it (unless that recently changed). I remember when that came out because I was chatting with David about the cards with the custom BIOS/PCB options and he told me about that one (I was bragging about the 420 or 450 one I got for my Strix). I haven’t looked at it recently, but I remember reading about it in their forums back around Nov–Dec. It was actually one of the first things I thought about when this story popped up, and I wondered if the game was somehow accessing something it shouldn’t. That doesn’t seem to be the case, but I did wonder.

  23. J2C did a video on this issue. He used an EVGA FTW3 3090 and an MSI 3090. There was definitely some strange behavior going on with the EVGA card, but not the MSI card. Whatever it is seems to be affecting only EVGA cards.

    Yeah, when I watched that I kept thinking something must be wrong with the EVGA firmware for it to overcurrent way beyond what he set in Afterburner. The MSI card didn’t have the same problem. EVGA can probably fix this with a firmware update. I did think it was funny when he tried to hook up the thermal probe and it burned and smoked.

  24. It seems like the old issue I’ve mentioned before with some oddball games and weird GPUs or hardware combinations. I’ve physically seen it happen before. However, the difference here is there seems to be some oddity with EVGA’s fan controller, based on the article I linked above. I have no way to verify my theory at this time, but it’s as good a guess as any, I suppose.

    That being said, those of you running a 500W vBIOS on your EVGA cards are likely at greater risk than those who aren’t.
