Image: AMD

AMD’s FidelityFX Super Resolution (FSR) 2.0 has officially debuted today, and early reviewers seem to be pretty happy with what red team has achieved with the second iteration of its upscaling technology. TechPowerUp, for one, has called FSR 2.0 “just as good” as modern versions of NVIDIA’s highly regarded DLSS, sharing screenshots that suggest the two are quite comparable in terms of image quality on select settings. This is a pretty notable feat, as FSR 2.0 does not rely on AI and machine learning like green team’s alternative but still manages to produce a picture that’s substantially improved compared to its previous iteration, particularly in Quality mode. Deathloop players can try out FSR 2.0 today thanks to a new update that developer Arkane Studios released for the game. AMD has also shared first-party benchmarks and comparisons for Deathloop running with FSR 2.0, which can be found here.

AMD has achieved the unthinkable—the new FidelityFX Super Resolution FSR 2.0 looks amazing, just as good as DLSS 2.0, actually DLSS 2.3 (in Deathloop). Sometimes even slightly better, sometimes slightly worse, but overall, this is a huge win for AMD. Take a look at our comparison images—there’s a huge improvement when comparing FSR 1.0 to FSR 2.0. “Native” or “Native+TAA” also consistently looks worse than FSR 2.0, which is somewhat expected. When comparing “DLSS Quality” against “FSR 2.0 Quality,” spotting minor differences is possible, but in every case I found, I’d say it’s impossible to declare one output better than the other; it’s pretty much just personal preference, or not even that.

Source: TechPowerUp


14 comments

  1. Again, very impressive work by AMD here across the board. No AI needed, no special cores needed. That, along with their driver updates, makes some mature cards even better.
  2. That is very interesting.

    In theory upsampling methodologies as used by AMD should not be capable of competing with AI generated geometry type methods like Nvidia uses.

    I wonder what magic is making this possible?

    Also, I wonder if there will be an RSR version of this, or if the technique requires explicit game support in order to work.
  3. That is very interesting.

    In theory upsampling methodologies as used by AMD should not be capable of competing with AI generated geometry type methods like Nvidia uses.

    I wonder what magic is making this possible?

    Also, I wonder if there will be an RSR version of this, or if the technique requires explicit game support in order to work.

    Well, given that FSR1.0 was about as good or sometimes better than DLSS1.x, it seems only natural that FSR 2.0 compares with DLSS 2.x, so I'm going to say you are mistaken.

    IIRC, Control's DLSS 1 implementation was initially shader based and actually produced pretty good results, but then DLSS 2.0 came out and it was just better. So upscaling CAN be done without dedicated hardware. Now let's see what DLSS 3 brings to the table.

    I'm more impressed that FSR 2.0 only gets a minor performance hit compared to FSR1.0 while having much better IQ.
  4. Well, given that FSR1.0 was about as good or sometimes better than DLSS1.x, it seems only natural that FSR 2.0 compares with DLSS 2.x, so I'm going to say you are mistaken.

    IIRC, Control's DLSS 1 implementation was initially shader based and actually produced pretty good results, but then DLSS 2.0 came out and it was just better. So upscaling CAN be done without dedicated hardware. Now let's see what DLSS 3 brings to the table.

    I'm more impressed that FSR 2.0 only gets a minor performance hit compared to FSR1.0 while having much better IQ.

    I mean, these are different strategies.

    Everyone knows you can't add detail to an image that isn't there. It gets back to the old "enhance" trope always used in spy/police shows and movies.

    [attached image: the classic "enhance" meme]

    FSR and DLSS use two different methods.

    AMD's FSR is the simpler of the two. It uses various combinations of upscaling algorithms and sharpening filters to try to minimize the quality loss (but there will always be some quality loss).

    Nvidia's method is more sophisticated. It tries to fill in geometry using AI-based pattern matching. This can produce sharpness closer to native resolution, but it introduces other kinds of errors when it misinterprets the geometry, because it is, after all, working from incomplete information.

    There is no free lunch. You can't make something out of nothing. Upscaling algorithms will always have some quality loss, and you can never stop AI geometry matching from occasionally guessing wrong. With the game-specific "training" required to make DLSS work, you can minimize it, but it will still always be there.
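
    A toy sketch of that spatial upscale-then-sharpen shape, assuming NumPy; the function name and the nearest-neighbor/unsharp-mask choices are illustrative stand-ins, not AMD's actual EASU/RCAS passes:

    ```python
    import numpy as np

    def upscale_spatial(img, scale=2):
        """Nearest-neighbor upscale followed by an unsharp mask:
        the same upscale-then-sharpen structure as a spatial upscaler."""
        # Upscale: repeat each pixel `scale` times along both axes.
        up = np.repeat(np.repeat(img, scale, axis=0), scale, axis=1)
        # Cheap 3x3 box blur via shifted-neighbor averaging.
        p = np.pad(up, 1, mode="edge")
        blur = sum(p[i:i + up.shape[0], j:j + up.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0
        # Unsharp mask: push pixels away from their blurred version.
        return np.clip(up + 0.5 * (up - blur), 0.0, 1.0)
    ```

    The sharpening pass recovers some apparent crispness at edges, but, as noted above, it cannot recreate detail the low-resolution frame never contained.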
  5. I mean, these are different strategies.

    Everyone knows you can't add detail to an image that isn't there. It gets back to the old "enhance" trope always used in spy/police shows and movies.

    [attached image: the classic "enhance" meme]

    FSR and DLSS use two different methods.

    AMD's FSR is the simpler of the two. It uses various combinations of upscaling algorithms and sharpening filters to try to minimize the quality loss (but there will always be some quality loss).

    Nvidia's method is more sophisticated. It tries to fill in geometry using AI-based pattern matching. This can produce sharpness closer to native resolution, but it introduces other kinds of errors when it misinterprets the geometry, because it is, after all, working from incomplete information.

    There is no free lunch. You can't make something out of nothing. Upscaling algorithms will always have some quality loss, and you can never stop AI geometry matching from occasionally guessing wrong. With the game-specific "training" required to make DLSS work, you can minimize it, but it will still always be there.
    Are you telling me CSI is FAKE?????? no way :LOL: :p :D
  6. FSR and DLSS use two different methods.
    FSR 2.0 is pretty much the same thing as DLSS 2.x. Both are temporal-based upscalers with sharpeners. One (supposedly) uses AI/Tensor cores for ... magic? The other just uses shader cores. But as far as implementation / high level overview, both are pretty darn close now with the 2.0 revisions to each.

    FSR 1 was not temporal based; it was strictly per-frame. Not exactly comparable, but in the same vein of "having matured": DLSS 1.0 used static AI training. Both DLSS 2 and FSR 2 are basically just temporal-based upscalers; the only real difference now is that one runs on Tensor cores, while the other runs on basically any GPU with shader cores.
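
    A minimal sketch of what "temporal-based upscaler" means here, in 1D NumPy for brevity; the names and blend factor are illustrative, not either vendor's implementation:

    ```python
    import numpy as np

    def temporal_accumulate(curr, history, motion, alpha=0.1):
        """One frame of temporal accumulation: reproject last frame's
        accumulated history using per-pixel motion vectors, then blend
        in a small amount of the newly rendered (jittered) sample."""
        n = curr.shape[0]
        # Reproject: fetch history from where each pixel came from.
        src = np.clip(np.arange(n) - motion, 0, n - 1).astype(int)
        reprojected = history[src]
        # Exponential moving average: mostly history, a bit of new data.
        return (1.0 - alpha) * reprojected + alpha * curr
    ```

    Over many frames, the jittered samples accumulate sub-pixel detail, which is how a temporal upscaler recovers resolution without inventing it.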
  7. FSR 2.0 is pretty much the same thing as DLSS 2.x. Both are temporal-based upscalers with sharpeners. One (supposedly) uses AI/Tensor cores for ... magic? The other just uses shader cores. But as far as implementation / high level overview, both are pretty darn close now with the 2.0 revisions to each.

    FSR 1 was not temporal based; it was strictly per-frame. Not exactly comparable, but in the same vein of "having matured": DLSS 1.0 used static AI training. Both DLSS 2 and FSR 2 are basically just temporal-based upscalers; the only real difference now is that one runs on Tensor cores, while the other runs on basically any GPU with shader cores.
    Interesting.

    I did not realize they completely changed their method with DLSS 2.x; I was under the impression it was just an improved AI method like the initial release.

    How does temporal-based upscaling work without introducing input lag? Wouldn't it need to peek into the future at the next frame, and thus delay everything by a frame?
  8. FSR 2.0 is pretty much the same thing as DLSS 2.x. Both are temporal-based upscalers with sharpeners. One (supposedly) uses AI/Tensor cores for ... magic? The other just uses shader cores. But as far as implementation / high level overview, both are pretty darn close now with the 2.0 revisions to each.

    FSR 1 was not temporal based, it was strictly per frame. Not really similarly, but in the same vein of "having matured" -- DLSS 1.0 used static AI training. Both DLSS 2 and FSR 2 are basically just a temporal-based upscaler - the only real difference now being that one runs on Tensor cores, while the other runs on basically any GPU with shader cores.
    The point of the tensor cores is to speed up the processing.

    Instead of the upscaling adding 5ms to the frame it adds 2ms.

    So if you are getting 60fps at 1080p and want to temporally upscale to 4K, it would be the difference between 53.5fps (tensor cores) and 46.2fps (no tensor cores)

    These numbers are made up but the overall picture is what tensor cores are supposed to offer.
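
    The arithmetic behind those (made-up) numbers is just frame-time addition, sketched here:

    ```python
    def fps_with_upscale(base_fps, upscale_ms):
        """Effective frame rate after adding a fixed per-frame
        upscaling cost, converting via frame time in milliseconds."""
        frame_ms = 1000.0 / base_fps  # 60 fps -> ~16.7 ms/frame
        return 1000.0 / (frame_ms + upscale_ms)

    # From a 60 fps base: a 2 ms pass lands near 53.6 fps,
    # a 5 ms pass near 46.2 fps.
    ```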
  9. Interesting.

    I did not realize they completely changed their method with DLSS 2.x; I was under the impression it was just an improved AI method like the initial release.

    How does temporal-based upscaling work without introducing input lag? Wouldn't it need to peek into the future at the next frame, and thus delay everything by a frame?
    They can’t look at future frames, only past ones, which is why it has problems with some ghosting and certain types of motion. AMD talked a bit about how they reduce that via “disocclusion maps”; NVIDIA just says it’s AI magic.
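
    A toy version of the disocclusion idea, assuming depth buffers are available for both frames; the function name and threshold are illustrative, not AMD's actual heuristic:

    ```python
    import numpy as np

    def history_weight(curr_depth, reproj_depth, tol=0.05):
        """Where the reprojected depth disagrees with the current depth,
        the pixel was hidden last frame (disoccluded), so its history is
        invalid and gets blend weight 0; elsewhere it is trusted (1)."""
        return (np.abs(curr_depth - reproj_depth) < tol).astype(np.float64)
    ```

    Rejecting history at disoccluded pixels trades a little extra noise there for avoiding the ghost trails the comment above describes.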
  10. What I find funny is that just a week ago there were still people that claimed that FSR1.0 was just as good as DLSS2.x, but now the general consensus is that FSR sucks but FSR2.0 is as good or even better than DLSS2.x.

    BTW I really hope FSR1.0 and DLSS 2.x games get a FSR 2.0 upgrade. I think most DLSS1.x games never made the jump to DLSS2.x
  11. What I find funny is that just a week ago there were still people that claimed that FSR1.0 was just as good as DLSS2.x, but now the general consensus is that FSR sucks but FSR2.0 is as good or even better than DLSS2.x.

    BTW I really hope FSR1.0 games get a FSR 2.0 upgrade. I think most DLSS1.x games never made the jump to DLSS2.x

    Hopefully FSR 2.0 will just wind up being an upgrade to RSR and thus work in all titles without the need for any in-game support.
  12. I think most DLSS1.x games never made the jump to DLSS2.x
    Yeah, it took forever for Metro Exodus (and I think only the Enhanced Edition got it), and for SOTTR (and unexpectedly ROTTR) to get it. Even then, you have to opt into a beta version for the Tomb Raider games because they also bundled in some social media BS, which pissed off a bunch of people (ironically, nobody was forced to use it, but that didn't stop people from complaining).

    I really like the new DLSS stuff but if FSR 2.0 gains wider adoption I'm sure I'll enjoy it too. It's great having something non-proprietary but I also like how NVs approach offloads some of the work onto other hardware as well. In the end, whatever works is awesome for us all.
    Hopefully FSR 2.0 will just wind up being an upgrade to RSR and thus work in all titles without the need for any in-game support.
    Don't think that's going to happen, just as NIS didn't bring DLSS to all games.

    The thing here is the motion vectors that are necessary for both FSR 2.0 and DLSS to work their magic. That's why TAA support is a prerequisite: a TAA pipeline already generates motion vectors.

    But an improved RSR/NIS should be possible with the lessons learned from DLSS/FSR 2.0.
    Yeah, it took forever for Metro Exodus (and I think only the Enhanced Edition got it), and for SOTTR (and unexpectedly ROTTR) to get it. Even then, you have to opt into a beta version for the Tomb Raider games because they also bundled in some social media BS, which pissed off a bunch of people (ironically, nobody was forced to use it, but that didn't stop people from complaining).

    I really like the new DLSS stuff but if FSR 2.0 gains wider adoption I'm sure I'll enjoy it too. It's great having something non-proprietary but I also like how NVs approach offloads some of the work onto other hardware as well. In the end, whatever works is awesome for us all.
    While I really don't care about proprietary or open-source standards, particularly on GPUs (AMD doesn't really have the greatest track record with open standards), the more choices, the better.

    Now let's see what DLSS 3.0 and XeSS bring to the table.
