Image: AMD

AMD’s next generation of Radeon RX graphics cards will be powered by the third generation of its rapidly evolving GPU microarchitecture, RDNA 3. According to the latest rumors from various insiders, the Radeon RX 7000 Series is expected to boast greater power efficiency thanks to major developments that include a multi-chip module (MCM) design. While both AMD’s and NVIDIA’s next-generation GPUs are rumored to be just as power hungry as the current generation, RDNA 3 products will reportedly have an edge in performance and efficiency over green team’s Lovelace-based GeForce RTX 40 Series, partly because the competition is sticking with a traditional, monolithic design. NVIDIA is expected to bounce back with the first MCM GPUs derived from its Hopper architecture, however, which are said to provide triple the performance of current Ampere GPUs.

AMD/NVIDIA Rumored GPU Roadmap

Year    AMD           NVIDIA
2019    RDNA 1        Turing Refresh
2020    RDNA 2        Ampere
2021    N/A           N/A
2022    RDNA 3        Ada Lovelace
2023    N/A           N/A
2024    RDNA 4 (?)    Hopper
Source: Wccftech

[…] KittyYYuko states that AMD RDNA 3 ‘Navi 31’ flagship could feature 60 WGPs which equals 120 Compute Units while Greymon55 states that the chip could reach up to 160 Compute Units. However, Kopite7kimi clarifies that 120 Compute Units per die should be the correct configuration and the full chip should reach 15,360 cores. It could be likely that the 80 WGP (160 CU) variant is a more high-end part that we haven’t heard about yet like the Big Navi 21 XTX rumors.

Sources: Greymon55, kopite7kimi, Yuko Yoshida (via Wccftech)
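For reference, the core-count arithmetic in the quoted rumor checks out under RDNA’s usual organization (a rough sketch; the 2 Compute Units per WGP and 64 stream processors per CU figures are standard for RDNA, while the dual-die layout is the rumor’s claim, not anything confirmed):

```python
# Sanity check on the rumored Navi 31 core counts.
# Assumptions: RDNA groups 2 Compute Units (CUs) per WGP, and each CU
# carries 64 stream processors; the two-die MCM layout is rumored.

CUS_PER_WGP = 2
CORES_PER_CU = 64

def cores(wgps: int, dies: int = 1) -> int:
    """Total stream processors for a given WGP count and die count."""
    return wgps * CUS_PER_WGP * CORES_PER_CU * dies

per_die = cores(60)            # 60 WGPs -> 120 CUs -> 7,680 cores per die
full_chip = cores(60, dies=2)  # two dies -> 15,360 cores, matching the rumor

print(per_die, full_chip)
```

The 80 WGP variant mentioned at the end works out the same way: 80 WGPs is 160 CUs per chip.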


Join the Conversation

41 Comments

  1. Just like every other new AMD card.

    I don’t understand why people still post this stupid rumor. Every year for what? 18 years now and not once has it ever been true.

  2. Yep just like amd will never beat the titan of intel.. oh wait…

    Competition is good. My 6800xt is going strong. If a 7800xt comes out that is even better and is cost competitive with Nvidia I have no issue buying amd again. Then again if Nvidia trounced amd and has availability at the price point I want I will buy them. Just the way it goes.

    But to say amd can’t beat nvidia… it could happen. Not super likely I’ll give you that but really this day and age all they need to do is be competitive and have availability.

  3. I don’t care if they beat the fastest card they offer, just that they beat them in the mid-range on price / performance.

  4. I’ll take another look at AMD GPUs when EVGA starts making them.

    As it is, EVGA is about to do an AMD MB, so who knows…maybe one day.

  5. [QUOTE=”Auer, post: 38197, member: 225″]
    I’ll take another look at AMD GPUs when EVGA starts making them.

    As it is, EVGA is about to do an AMD MB, so who knows…maybe one day.
    [/QUOTE]

    Yea maybe we can get an amd card with evga quality.. like those 3090s right? Has anyone heard if Evga is replacing those?

  6. [QUOTE=”xGryfter, post: 38183, member: 93″]
    Just like every other new AMD card.
    I don’t understand why people still post this stupid rumor. Every year for what? 18 years now and not once has it ever been true.
    [/QUOTE]
    Radeon 9700Pro. Enough said.

  7. [QUOTE=”Grimlakin, post: 38223, member: 215″]
    Yea maybe we can get an amd card with evga quality.. like those 3090s right? Has anyone heard if Evga is replacing those?
    [/QUOTE]
    Of course they are, best CS in the business.
    Would be nice to see AMD based GPU’s with EVGA performance and CS.

  8. [QUOTE=”Stoly, post: 38228, member: 1474″]
    Radeon 9700Pro. Enough said.
    [/QUOTE]
    So, not AMD… and an odd situation. Nvidia provided GPUs that did 16-bit color at full speed and 32-bit color at half speed, while ATi just did 24-bit regardless of input.

    Nvidia’s solution depended on developers tagging only the stuff that needed 32-bit color, and well, that basically didn’t happen.

    So of course we all bought 9700 Pros, but that was the last time Nvidia was legitimately behind across the board. Now, [I]AMD[/I] has brought some decent performing GPUs since and pulled ahead for six or so months, but nothing like ATi did back then for several generations.

  9. [QUOTE=”LazyGamer, post: 38233, member: 1367″]
    So, not AMD… and an odd situation. Nvidia provided GPUs that did 16-bit color at full speed and 32-bit color at half speed, while ATi just did 24-bit regardless of input.

    Nvidia’s solution depended on developers tagging only the stuff that needed 32-bit color, and well, that basically didn’t happen.

    So of course we all bought 9700 Pros, but that was the last time Nvidia was legitimately behind across the board. Now, [I]AMD[/I] has brought some decent performing GPUs since and pulled ahead for six or so months, but nothing like ATi did back then for several generations.
    [/QUOTE]

    Did or do you work for Nvidia? Everyone else is having a conversation and you’re reading from marketing material.

  10. [QUOTE=”Grimlakin, post: 38234, member: 215″]
    Did or do you work for Nvidia? Everyone else is having a conversation and you’re reading from marketing material.
    [/QUOTE]
    Nope, just a happy consumer that owned the stuff.

  11. The ATI 9700 AIW was one of my favorite gpus, but it wasn’t AMD then. I still have the card. Loved it till I replaced it with an Nvidia card.

    We do continually hear the hype that the next gen AMD gpu’s will be all that. Well, it usually isn’t, however, they do continue to show pretty good progress. I wouldn’t turn down a current gen AMD gpu right now.

    The prices for any gpu currently are very discouraging.

    I’m not going to hold my breath, but, as we have seen in the CPU world, good things can happen. We may yet see a leapfrog. May….

  12. I seem to remember Fermi being a horrible mess and AMD’s Cypress cards pretty well walking over everything nVidia had that generation. Fermi may have been “faster” but that didn’t make it better.

    AMD was also first with Turbo on Cayman the next generation.

    And there were a lot of times where GCN/Polaris cards held the lower and middle tiers pretty well on value propositions.

    And AMD pulled out the console contracts – twice in a row.

    There are a lot of metrics out there other than just King of the Hill

  13. [QUOTE=”Brian_B, post: 38251, member: 96″]

    There are a lot of metrics out there other than just King of the Hill
    [/QUOTE]

    [IMG]https://media3.giphy.com/media/3ohuAxV0DfcLTxVh6w/giphy.gif[/IMG]

  14. [QUOTE=”Uvilla, post: 38182, member: 397″]
    No AMD ain’t gonna leapfrog Nvidia. Just no.
    [/QUOTE]

    That’s what they said about AMD making a comeback against Intel.

    In any case, I’ll root for anyone who goes up against nVidia. Fuck nVidia.

  15. [QUOTE=”rat, post: 38254, member: 327″]
    That’s what they said about AMD making a comeback against Intel.

    In any case, I’ll root for anyone who goes up against nVidia. **** nVidia.
    [/QUOTE]
    To me it’s more about this useless clickbait rumor mill that happens each and every time. But I guess, being the underdog right now, AMD gets more of these useless clickbait rumors. Honestly, my ‘frustration’ (as far as these things matter to be ‘frustrated’) is my own creation, as I used to read a lot, and believe so much of what I read or saw in videos… You know, just like an idiot hehhe.

  16. Yea I didn’t check the source but if it’s wccftech.. well you know the odds of them presenting anything accurate are about 5%. I don’t have the time or desire to read one of their articles.

    And when exactly did every damn thing have to turn into a freaking video? I know it makes money for people… and really that’s why. It just bugs me.

  17. [QUOTE=”rat, post: 38254, member: 327″]
    That’s what they said about AMD making a comeback against Intel.
    [/QUOTE]
    They’ve done it in terms of performance – where available, I think AMD CPUs are almost always preferred at the same price unless an OEM really borked something up. Laptop offerings are starting to look especially good, beyond their aged APUs.

    And the availability part is the clincher. Everyone (including Intel, through their own stumbles) is having to ration fab output, and that means skipping on products that would otherwise be widely available. That also means that with companies selling basically everything they can make, ‘success’ is bounded by manufacturing output, so while AMD is doing great on the CPU [I]performance[/I] front, they’re not gaining the marketshare that they could be. Same with GPUs really, since AMD cards are going for a premium for reasons other than their gaming performance.

    [QUOTE=”rat, post: 38254, member: 327″]
    In any case, I’ll root for anyone who goes up against nVidia. **** nVidia.
    [/QUOTE]
    Not sure where this comes from; whether it’s drivers, features, top-end performance, Nvidia tends to push the envelope. Competitors getting closer does tend to get them to push a little harder and I’m definitely looking forward to what Intel has on the table (again, mostly not about gaming).

  18. [QUOTE=”Brian_B, post: 38251, member: 96″]
    I seem to remember Fermi being a horrible mess and AMD’s Cypress cards pretty well walking over everything nVidia had that generation. Fermi may have been “faster” but that didn’t make it better.
    [/QUOTE]
    You just sent me down memory lane… actually traded a later Fermi-based card for a Cayman-based card and then bought a second for Crossfire, mostly because they had more VRAM and I was running what was at the time a higher-resolution display.

    [QUOTE=”Brian_B, post: 38251, member: 96″]
    AMD was also first with Turbo on Cayman the next generation.
    [/QUOTE]
    But it wasn’t until [I]after[/I] Cayman that AMD got called to carpet for their horrendous multi-GPU frame-pacing issues, i.e., [I]negative[/I] effective performance scaling, and that’s one trade I regret. Still the single GPUs were great once the drivers matured, though I had no problem with the Fermi. Both Cayman GPUs served for years under other owners after I’d moved on to a pair of surprisingly-efficient Kepler cards.

    [QUOTE=”Brian_B, post: 38251, member: 96″]
    And there were a lot of times where GCN/Polaris cards held the lower and middle tiers pretty well on value propositions.
    [/QUOTE]
    Undeniable! And for gaming, those are still decent enough cards. I wish they had better transcoder support, as these are a no-go for Plex, that they had better drivers for content creation, as I and reviewers have experienced crashes (even lately as these are still shipping in APUs!), and that AMD hadn’t decided to pull Windows Server support for some inexplicable reason when any basic Nvidia GPU or Intel iGPU for that matter will just work.

    [QUOTE=”Brian_B, post: 38251, member: 96″]
    And AMD pulled out the console contracts – twice in a row.
    [/QUOTE]
    IIRC, that first one basically saved their bacon. They’d also customized their GPU for Microsoft for the generation before so had some experience, and unlike Nvidia who apparently every other major company basically hates (I guess except TSMC, who probably likes the money), AMD was in a position to both offer a combined CPU + GPU solution that no one else had or even has today, while also quite likely providing a bargain for Microsoft and Sony.

    I bet both console vendors and console developers also appreciated the more standardized architecture at the hardware level.

  19. [QUOTE=”LazyGamer, post: 38279, member: 1367″]
    Still the single GPUs were great once the drivers matured, though I had no problem with the Fermi. Both Cayman GPUs served for years under other owners after I’d moved on to a pair of surprisingly-efficient Kepler cards.
    [/QUOTE]

    Back then you bought an AMD card for the performance you hoped they would get a year after you bought the card, good to see their drivers are better these days.

  20. [QUOTE=”rat, post: 38254, member: 327″]
    That’s what they said about AMD making a comeback against Intel.

    In any case, I’ll root for anyone who goes up against nVidia. **** nVidia.
    [/QUOTE]
    Thing is that intel pretty much remained stagnant for years, focusing on mobile rather than desktop, and it took AMD that long to finally reach parity and even surpass intel. Nvidia has dropped the ball before, but unless there are some major manufacturing/architecture issues, I don’t see it happening.

    BTW **** nVidia=Love nVidia? 😉 😉 🤣 🤣

  21. [QUOTE=”Stoly, post: 38315, member: 1474″]
    Thing is that intel pretty much remained stagnant for years, focusing on mobile rather than desktop, and it took AMD that long to finally reach parity and even surpass intel. Nvidia has dropped the ball before, but unless there are some major manufacturing/architecture issues, I don’t see it happening.
    [/QUOTE]
    Yeah, it’s rare for nVidia to hit a misstep – more often it’s just been delaying tech (Volta comes to mind) because they were only competing against themselves. Fermi is the most recent that I can think of, and they were able to quickly pivot off of that. I guess you could say space invaders as well, but that didn’t feel like as big an issue as an entire architecture being bad.

    AMD is very much chasing a moving target – and nVidia is a fast moving target with big R&D resources, deep pockets, a lot of momentum and brand recognition.

    That said, TSMC 5nm is looking to be crowded. That could be the big bottleneck, especially since Intel has also signed up there and Apple has been there for a while, but since AMD is also set to use TSMC 5nm for RDNA 3 I don’t know that it presents any opportunities to AMD – just adding to the traffic jam.

    Next gen is shaping up to be the same logistical clusterf&*k that this generation has been. The only silver lining here is that 5nm will be very mature by the time we get there – Apple has been using it for a while now and will have moved to 3nm by then (assuming it’s ready).

  22. One thing that seems consistent in most rumors is that RDNA3 will be out BEFORE “Ampere Next” so AMD will indeed be the new King of the hill if only for a short term.

    Another thing is that since RDNA3 is an evolution of RDNA2, I don’t really see any major changes in RT and AI so even if RDNA3 is faster in rasterizing, it may still lag in RT.

  23. [QUOTE=”Stoly, post: 38319, member: 1474″]
    One thing that seems consistent in most rumors is that RDNA3 will be out BEFORE “Ampere Next” so AMD will indeed be the new King of the hill if only for a short term.

    Another thing is that since RDNA3 is an evolution of RDNA2, I don’t really see any major changes in RT and AI so even if RDNA3 is faster in rasterizing, it may still lag in RT.
    [/QUOTE]

    In watching reviews and gaming and such the gap in RT performance between AMD and Nvidia is quickly closing. Either because more RT is being designed for AMD processes in specific, Microsoft’s API for RT is more native to AMD, or maturity in software was needed, only time will tell.

  24. [QUOTE=”Stoly, post: 38319, member: 1474″]
    One thing that seems consistent in most rumors is that RDNA3 will be out BEFORE “Ampere Next” so AMD will indeed be the new King of the hill if only for a short term.

    Another thing is that since RDNA3 is an evolution of RDNA2, I don’t really see any major changes in RT and AI so even if RDNA3 is faster in rasterizing, it may still lag in RT.
    [/QUOTE]
    Not sure either of those will necessarily be true.

    They are both possible, but hardly given.

    But a lot could change with hardware RT even without big changes to the rasterization side — just having its first generation under its belt, AMD will see how the process works better and can better optimize hardware for it (caches, bandwidth, instructions, etc). And just dropping to 5nm won’t necessarily mean AMD gets a performance crown – being the first to 7nm didn’t get them that either, and nVidia is burning a lot of power right now to claim that crown – that may be something AMD is unable or unwilling to do, depending on how RDNA 3 tapes out.

  25. [QUOTE=”Stoly, post: 38319, member: 1474″]
    Another thing is that since RDNA3 is an evolution of RDNA2, I don’t really see any major changes in RT and AI so even if RDNA3 is faster in rasterizing, it may still lag in RT.
    [/QUOTE]
    Both AMD and Nvidia started developing RT hardware before they knew how developers were going to implement it, given the development cycles that this class of hardware involves. It’s entirely possible for RDNA3 to be a direct evolution of RDNA2 and be significantly better at RT.

    AI / ML I’d be less sure about given that there’s very little standardization around it all, it’s fast moving, and we’ve seen very little on the consumer side. RTX Voice comes to mind as one visible example.

    [QUOTE=”Grimlakin, post: 38321, member: 215″]
    In watching reviews and gaming and such the gap in RT performance between AMD and Nvidia is quickly closing. Either because more RT is being designed for AMD processes in specific, Microsoft’s API for RT is more native to AMD, or maturity in software was needed, only time will tell.
    [/QUOTE]
    Overall it’s most likely a product of learning how to use RT, but use it [I]less[/I]. Nvidia got the ball rolling and the first RT efforts showed that it needed to be implemented sparingly, lest we get another BF:V implementation that tanks performance without obvious benefit.

    I’m sure that developing RT for the latest consoles has driven developers toward more creative and less heavy-handed solutions given the hardware constraints.

  26. [QUOTE=”Denpepe, post: 38284, member: 284″]
    Back then you bought an AMD card for the performance you hoped they would get a year after you bought the card, good to see their drivers are better these days.
    [/QUOTE]
    I think I’ve bought exactly [I]two[/I] GPUs within six months of their release, ever: the 9700 Pro, and the GTX670.

    What I’d found with ATi and now AMD drivers, and again not with their latest hardware, is that the lag is also when games are released. These days it might be a week or two, tossing a coin for which vendor has the most / biggest problems. I do remember having to turn Crossfire off in order to get Skyrim to run, and I didn’t really have issues like that with Kepler or Maxwell in SLI.

  27. [QUOTE=”Grimlakin, post: 38321, member: 215″]
    In watching reviews and gaming and such the gap in RT performance between AMD and Nvidia is quickly closing. Either because more RT is being designed for AMD processes in specific, Microsoft’s API for RT is more native to AMD, or maturity in software was needed, only time will tell.
    [/QUOTE]
    I’d like for this to be true. But apart from FSR finally hitting, and that helping boost AMD’s RT efforts, I haven’t seen anything that has really affected the delta between AMD and nVidia really. Apart from some other driver boost like FSR, or just plain ole’ Fine Wine effect, I don’t know that we will see much change until we get new hardware – the hardware is what it is.

  28. [QUOTE=”Stoly, post: 38228, member: 1474″]
    Radeon 9700Pro. Enough said.
    [/QUOTE]

    The 9700 Pro is a 21 year old card created by ATI. Enough said.

    My statement stands, even for nVidia’s shitty cards.

    Anyone playing the “performance isn’t the only metric” doesn’t seem to understand what this rumor is saying.
    Almost nobody gives a shit about power consumption unless they’ve already lost the performance race.

  29. [QUOTE=”xGryfter, post: 38375, member: 93″]
    Almost nobody gives a **** about power consumption unless they’ve already lost the performance race.
    [/QUOTE]
    Hmm. No, I don’t agree. The two very much go hand in hand.

    You have a somewhat hard cap on energy use. The efficiency of your architecture is going to define your top performance because you are more or less capped at the amount of power you can throw at it.

    Top tier cards are almost always 300-350W TGP. Even if you were to pump more power in via more or varied PCI-E connectors, you’re still stuck in a standard PC enclosure, moving heat out via a double or triple slot cooler (or AIO cooler). Going much beyond that power envelope, stock, is courting disaster. Of course, overclocks are going to be capable of exceeding that, but that’s why an overclock is an overclock and not just “stock”.

    So yeah, power matters, even at the top end. Otherwise, why wouldn’t AMD/nVidia just crank the dial up to 11 and get even more out of them?

    Intel is right now learning this lesson all over again – they had learned it once with Prescott->Core. Now they need to go from Skylake to ???

  30. [QUOTE=”Brian_B, post: 38376, member: 96″]
    Hmm. No, I don’t agree. The two very much go hand in hand.

    You have a somewhat hard cap on energy use. The efficiency of your architecture is going to define your top performance because you are more or less capped at the amount of power you can throw at it.

    Top tier cards are almost always 300-350W TGP. Even if you were to pump more power in via more or varied PCI-E connectors, you’re still stuck in a standard PC enclosure, moving heat out via a double or triple slot cooler (or AIO cooler). Going much beyond that power envelope, stock, is courting disaster. Of course, overclocks are going to be capable of exceeding that, but that’s why an overclock is an overclock and not just “stock”.

    So yeah, power matters, even at the top end. Otherwise, why wouldn’t AMD/nVidia just crank the dial up to 11 and get even more out of them?

    Intel is right now learning this lesson all over again – they had learned it once with Prescott->Core. Now they need to go from Skylake to ???
    [/QUOTE]

    I didn’t say power didn’t matter. People generally don’t use power consumption as a metric over performance as a bullet point unless their preferred card has already lost that performance race. Power is an important factor when reviewing a card as a whole but unless the TGP is totally f***** most people will buy for higher performance over lower power consumption.

  31. [QUOTE=”xGryfter, post: 38377, member: 93″]
    I didn’t say power didn’t matter. People generally don’t use power consumption as a metric over performance as a bullet point unless their preferred card has already lost that performance race. Power is an important factor when reviewing a card as a whole but unless the TGP is totally f***** most people will buy for higher performance over lower power consumption.
    [/QUOTE]
    Good point.

    I mostly see efficiency as it relates to being able to crank up the clocks more I guess, but you’re right, not everyone looks at it that way.

  32. [QUOTE=”xGryfter, post: 38377, member: 93″]
    I didn’t say power didn’t matter. People generally don’t use power consumption as a metric over performance as a bullet point unless their preferred card has already lost that performance race. Power is an important factor when reviewing a card as a whole but unless the TGP is totally f***** most people will buy for higher performance over lower power consumption.
    [/QUOTE]
    Power draw by itself doesn’t mean too much, but it does set the bar for heat and noise output. Noise can be dealt with at a cost if the available coolers aren’t great, but generally that means that the heat output is felt even more.

    This is also important for smaller systems. More efficient parts means less performance compromise for SFFs, whether that be due to hard cooling limits or just not wanting the system to sound like a jet engine. Also a pretty big deal for laptops though users usually don’t get much choice there.

    But if power draw is close, say within 10% under load? Point taken.

    [QUOTE=”Brian_B, post: 38379, member: 96″]
    Good point.

    I mostly see efficiency as it relates to being able to crank up the clocks more I guess, but you’re right, not everyone looks at it that way.
    [/QUOTE]
    This, but note that being more efficient doesn’t always translate into being able to crank up the clocks. It’s pretty clear, for example, that AMD would probably give up no performance ground at all to Intel if they could get another 10% or so more clockspeed out of Ryzen regardless of power draw. As it stands AMD is likely to retain the efficiency advantage for a few more years while still seeing performance challenges from Intel despite Intel’s fab troubles, simply because Intel can clock their parts higher.

    With GPUs it’s really all over the place though. Nvidia seemed to take the efficiency ground back with Kepler, but I think the majority of that has been due to AMD’s inability to keep up with Nvidia’s pace of innovation since; they’ve only been able to present a challenge between Nvidia’s release cycles, and only by pushing out parts that stretched power envelopes and users’ willingness to subject themselves to screaming blowers, like the R9 290 cards. And those were [I]only[/I] 250W stock!

    For what it’s worth, chiplets have a lot of promise, and AMD has shown with their CPUs that they can navigate around the many potential pitfalls of breaking up a processor without unduly affecting performance. The greatly increased memory latency on Zen 2 CPUs just doesn’t seem to matter with all the cache they threw on the memory controller die, for example, and since GPUs are far less latency sensitive than CPUs, they very well could pull this off.

    Still, I don’t envy anyone who has to deal with the first round of drivers 😎

  33. [QUOTE=”xGryfter, post: 38375, member: 93″]
    The 9700 Pro is a 21 year old card created by ATI. Enough said.

    My statement stands, even for nVidia’s ****ty cards.

    Anyone playing the “performance isn’t the only metric” doesn’t seem to understand what this rumor is saying.
    Almost nobody gives a **** about power consumption unless they’ve already lost the performance race.
    [/QUOTE]
    Well technically the 9700 Pro was created by ArtX, which ATI acquired back in 2000 IIRC.

    On a side note, it’s funny how ArtX paved the way for ATI/AMD future cards, while nvidia didn’t really use 3dfx technologies at all.

  34. [QUOTE=”Stoly, post: 38389, member: 1474″]
    Well technically the 9700 Pro was created by ArtX, which ATI acquired back in 2000 IIRC.

    On a side note, it’s funny how ArtX paved the way for ATI/AMD future cards, while nvidia didn’t really use 3dfx technologies at all.
    [/QUOTE]

    I think that’s an unfair statement. They used ATI tech and learned how to do SLI at the time from what they acquired. I think it’s crap what happened to 3dfx and they lost the race by mismanagement rather than lack of tech.

    Regardless as long as the leader keeps passing between the major manufacturers it is only good for us. And currently if you’re looking for an upgrade from the previous generation of Nvidia or AMD card you will take what you can get within your budget. Otherwise you might as well start looking at the next generation.

  35. [QUOTE=”Grimlakin, post: 38390, member: 215″]
    I think that’s an unfair statement. They used ATI tech and learned how to do SLI at the time from what they acquired. I think it’s crap what happened to 3dfx and they lost the race by mismanagement rather than lack of tech.

    Regardless as long as the leader keeps passing between the major manufacturers it is only good for us. And currently if you’re looking for an upgrade from the previous generation of Nvidia or AMD card you will take what you can get within your budget. Otherwise you might as well start looking at the next generation.
    [/QUOTE]
    The only thing that nvidia implemented from 3dfx SLI was the name. Actually nvidia SLI has much more in common with the Rage Fury MAXX, as it used AFR.

    Don’t get me wrong, some people claim 3dfx was the father of 3D PC gaming, and I tend to agree. But by the time nvidia bought it, it was already years behind the competition.

  36. [QUOTE=”Stoly, post: 38228, member: 1474″]
    Radeon 9700Pro. Enough said.
    [/QUOTE]

    The 9700 Pro was only possible through ATi’s acquisition of another company called ArtX. It was that technology that made the Radeon 9700Pro possible. Again, this was ATi, not AMD. AMD purchased ATi after that.

  37. [QUOTE=”Dan_D, post: 38465, member: 6″]
    The 9700 Pro was only possible through ATi’s acquisition of another company called ArtX. It was that technology that made the Radeon 9700Pro possible. Again, this was ATi, not AMD. AMD purchased ATi after that.
    [/QUOTE]

    And fired everyone immediately because there was no knowledge transfer or keeping of talent. Sigh.
