
Introduction

It’s here! It’s here! It finally came! We now have Ray Tracing support in Cyberpunk 2077 from CD Projekt Red on AMD Radeon RX 6000 series video cards! It only took more than six months, for something that arguably should have been ready at the game’s launch. With the new patch 1.2, Ray Tracing support for the Radeon RX 6000 series has been added, which means you can turn on Reflection, Shadow, and Lighting Ray Tracing in Cyberpunk 2077 on the Radeon RX 6900 XT, Radeon RX 6800/XT, and Radeon RX 6700 XT.

Cyberpunk 2077 Patch 1.2 Release Notes

This new patch 1.2 actually brings a ton of changes to the game. The list of fixes is really quite extensive, and there are updates in other areas of the graphics as well: improved texture rendering at a distance, visual quality adjustments for elements when underwater, improved material detail quality, improvements to interior and exterior light sources, adjusted dirt quality, improved foliage destruction visuals, and various optimizations to shadows, shaders, physics, the animation system, the occlusion system, and facial animations. There’s really a whole lot more here.

To find out how demanding it is, and what kind of performance you’ll experience, we decided to go test it. This is a quick, no-frills, straight-to-the-point, down-and-dirty performance comparison. We are taking the AMD Radeon RX 6800 XT and the NVIDIA GeForce RTX 3080 FE and putting them head-to-head with Ray Tracing in Cyberpunk 2077.

We are also going to find out what kind of performance drop you get when enabling these features on each video card. We are going to test the built-in preset options, as well as manually enabling just Reflections, then Shadows, then Lighting, to find out how each one performs individually.
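For reference, the “performance cost” of a feature is simply the percentage of frame rate lost relative to the same run with that feature turned off. Here is a minimal sketch of that arithmetic; the frame rates and the helper name are placeholders for illustration, not results from our testing:

```python
# Percentage of performance lost when a given Ray Tracing feature is enabled.
# The 80 and 52 FPS values below are placeholders, not measured results.
def percent_drop(fps_feature_off: float, fps_feature_on: float) -> float:
    return (fps_feature_off - fps_feature_on) / fps_feature_off * 100.0

print(f"{percent_drop(80.0, 52.0):.0f}% slower with the feature enabled")  # -> 35% slower
```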

Cyberpunk 2077 Ray Tracing Menu

OK, let’s look at how you can enable Ray Tracing on the Radeon RX 6800 XT. Similar to the GeForce RTX 3080 FE, you can select different “Quick Preset” options at the top. If you set this to “Ultra,” you get the highest game settings, but no Ray Tracing.

There are two built-in presets for Ray Tracing: Ray Tracing Medium and Ray Tracing Ultra. In Ray Tracing Medium, Ray-Traced Reflections are kept OFF, while Ray-Traced Shadows are ON and Ray-Traced Lighting is set to “Medium.” When you select Ray Tracing Ultra, everything is enabled: you get Ray-Traced Reflections, Ray-Traced Shadows, and Ray-Traced Lighting set to “Ultra.” The Ray-Traced Lighting option actually has one higher setting called “Psycho,” but don’t dare turn this on; it kills performance on pretty much everything.
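To keep the two presets straight, here is the same breakdown condensed into a simple map. The labels are paraphrased from the in-game menu, not Cyberpunk 2077’s internal setting names:

```python
# The two built-in Ray Tracing presets as described above; "Psycho" sits one
# step above "Ultra" for Ray-Traced Lighting but is not used by either preset.
RT_PRESETS = {
    "Ray Tracing: Medium": {"Reflections": "Off", "Shadows": "On", "Lighting": "Medium"},
    "Ray Tracing: Ultra":  {"Reflections": "On",  "Shadows": "On", "Lighting": "Ultra"},
}
```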

Since you can also manually go into each option and turn on just one Ray-Traced feature to save performance, we are also going to test each one individually. That way, we can see which ones place the biggest burden on GPU performance and how they compare.

Resizable BAR

Before we begin, just a couple of notes on the system setup. For this review, we are using a Ryzen 9 5900X. We have applied the latest motherboard BIOS and enabled Resizable BAR in it, enabled AMD Smart Access Memory for the Radeon card, and applied the new VBIOS to the GeForce RTX 3080 FE. This means Resizable BAR is enabled on both video cards here today, as you can see above. We are also using the latest drivers for each video card: GeForce 465.89 and AMD Adrenalin 2020 Edition 21.3.2 Optional.

Brent Justice

Brent Justice has been reviewing computer components for 20+ years. Educated in the art and method of the computer hardware review, he brings experience, knowledge, and hands-on testing with a gamer-oriented...

Join the Conversation

23 Comments

  1. So basically, AMD’s RT performance is a swing and a miss for its first attempt. I’m glad they are there but they need to take another stab at the way they go about it, cause the first go-around ain’t all that pretty with respect to performance.

    Thanks for the look into it Brent. Job well done!

  2. I’m still waiting to buy a GPU that has ray tracing… so it’s really a moot point.

    AMD needs help with this, it appears.

  3. I’m not too concerned.

    First, as cool as Raytracing is – I don’t play a single game that supports it yet, and even of the ones I am looking at getting, yeah… it would be single digit numbers for the near term. So it’s nice, but it’s far and away not my primary concern. Same thing with DLSS honestly – nice feature, wouldn’t turn it down – not paying extra. Traditional rasterizing performance is still my primary concern.

    For the same amount of money – yeah, I’d rather have better RT than worse RT. But I’m not going to go set out to spend more money just for the sake of better RT. The offerings, at MSRP, between nVidia and AMD – break out a bit oddly for me. The 6700 doesn’t make a lot of sense at all, the 6800 is a rock star, the 6900 not really, the 6800 XT on the fence…. but that’s at MSRP, all other things equal.

    Second – yeah, can’t buy any of these anyway, from any manufacturer. So … like @magoo says, moot point. Not to say this isn’t a good writeup. It’s great info to have. It just won’t influence any of my near purchases because… there’s nothing out there to purchase.

    Third – playing devil’s advocate: it isn’t charted, but I imagine it clocks in about the same as nVidia’s first stab at hardware accelerated RT. Granted, that only goes so far – you can’t really compare two different gens, otherwise we’d be talking about how much better the 11900 is than Bulldozer – just that I didn’t have very high expectations and those expectations were not exceeded by AMD.

    If you really truly care about Raytracing, I guess your choice is clear cut. For the same price and availability, I’d rather have it than not, I fully well admit. But, for instance, I wouldn’t pay more for 20% better raytracing performance but 20% worse rasterization performance – which is about how the line breaks down now, depending on what tier you’re looking at.

  4. So basically, AMD’s RT performance is a swing and a miss for its first attempt. I’m glad they are there but they need to take another stab at the way they go about it, cause the first go-around ain’t all that pretty with respect to performance.

    I’m going to swing toward giving AMD credit for having gotten the hardware out there in the first place. Same credit I give Nvidia for their 2000-series, the release of which has made possible games with decent RT implementations today. Same credit I’ll give Intel and ARM whenever they get there.

    Having the hardware out there isn’t going to help gamers today, but it’s an install base that’s expanding and it’s a second vendor implementing the function in mass-market consumer hardware, and that helps fence sitters join the crowd of developers supporting RT. Think of it this way: anyone that has waited until there was a market consensus formed on ‘how to do RT’ that wasn’t just ‘The Nvidia Way’ now has their answer. By the time Intel catches up, any deviation from how things are settling now will work against Intel’s product, and so there is a real disincentive for Intel to stray- turning that back around, it becomes highly likely that Intel (and everyone else) is going to go with the flow. Which means that the water’s now safe to jump into.

    First, as cool as Raytracing is – I don’t play a single game that supports it yet, and even of the ones I am looking at getting, yeah… it would be single digit numbers for the near term. So it’s nice, but it’s far and away not my primary concern.

    The tech geek in me wants it to be a major concern for myself, but the basic reality is that of the games I do play, ray tracing is not a killer feature. I’ll note that our perspectives here are intensely personal, of course, and also likely temporary, but they’re also worth consideration and shouldn’t be discounted. Ray tracing is cool, but it hasn’t hit ‘critical mass’ just quite yet, in my opinion.

    Second – yeah, can’t buy any of these anyway, from any manufacturer. So … like @magoo says, moot point. Not to say this isn’t a good writeup. It’s great info to have. It just won’t influence any of my near purchases because… there’s nothing out there to purchase.

    And that’s the kicker ;)

    Many of us could drop a K on a GPU if one worth such an expense were even available, but they’re simply not!

  5. Plays fine for me at 1440p with RT on… I think on medium? I didn’t really check. Not even sure if it’s making a huge difference fidelity or experience wise. Maybe if I was taking screenshots to post on instagram or something?
  6. Plays fine for me at 1440p with RT on… I think on medium? I didn’t really check. Not even sure if it’s making a huge difference fidelity or experience wise. Maybe if I was taking screenshots to post on instagram or something?

    Ray tracing shouldn’t be like turning on ‘salesman mode’ on a TV in your dark basement- done right, you shouldn’t notice it. It’s the absence of immersion-breaking shadow and color artifacts that defines ray tracing, not some in-your-face effect!

  7. Ray tracing shouldn’t be like turning on ‘salesman mode’ on a TV in your dark basement- done right, you shouldn’t notice it. It’s the absence of immersion-breaking shadow and color artifacts that defines ray tracing, not some in-your-face effect!

    It takes so much more GPU power just so you don’t notice it. Sounds like a salesman line trying to justify the XBR model TV to a customer on the brink, back in the days of WEGA TVs and such. ;)

  8. It takes so much more GPU power just so you don’t notice it. Sounds like a salesman line trying to justify the XBR model TV to a customer on the brink, back in the days of WEGA TVs and such. ;)

    You notice when things are out of place. RT is designed to put them into place. So yeah, the absence of wacky rasterization artifacts is what you should notice.

  9. You notice when things are out of place. RT is designed to put them into place. So yeah, the absence of wacky rasterization artifacts is what you should notice.

    Well, there have always been two schools of thought with regard to things like this. The TV analogy @Grimlakin made is pretty good. Speakers/audio systems would be another good case.

    Some people think the screen (and in this case, by extension, the GPU) is just there to translate the media. You shouldn’t notice anything imparted by the technology, unless it’s somehow deficient in showing the media. You want as accurate and precise a translation of the media as you can get, and the tech is just there to serve it to the end user. This would be like Cinema mode on a TV, where it tries to duplicate the theater experience, or my HT receiver’s Pure Direct option, where it does no post-processing and tries to present the sound exactly as the media presents it.

    Another camp thinks of the tech as part of presentation – it’s not just there as a method to serve the media, it’s part of the experience itself. Crank the bass up, saturate the colors – take the original media and make it bigger and bolder.

    I can see both, and I’m often guilty of both. Right now, I think Raytracing tends to fall into the latter category – not everyone can do it, so you can’t really make a mainstream game rely on RT for anything other than putting in over-the-top effects. Eventually it may fall into the former – maybe as soon as the tail end of this generation of consoles, as RT hardware becomes more ubiquitous and alternative methods of implementing RT work around any lack of hardware.

  10. I think we are quickly going to hit a moment when CPUs and current and even last-gen GPUs can step up to the plate with a new way to think of the reflections and shadows in the game, rendering them as part of the scene without additional passes for proper reflections. Doing the reflections in code before passing to the GPU to render, as an example. Just making that process ultra efficient and bundling it into fewer, fatter data streams that the CPU is better at parsing and handling than the GPU would be with its millions of parallel processes.
  11. Some people think the screen (and in this case, by extension, the GPU) is just there to translate the media. You shouldn’t notice anything imparted by the technology, unless it’s somehow deficient in showing the media. You want as accurate and precise a translation of the media as you can get, and the tech is just there to serve it to the end user.

    I think perhaps a better way for me to put it would be… you notice these things when you go back. That’s when their absence is felt. It’s the ‘we didn’t know any better’ answer to the future ‘how did we ever even live with that?!?’ question.

    I think we are quickly going to hit a moment when CPUs and current and even last-gen GPUs can step up to the plate with a new way to think of the reflections and shadows in the game, rendering them as part of the scene without additional passes for proper reflections. Doing the reflections in code before passing to the GPU to render, as an example. Just making that process ultra efficient and bundling it into fewer, fatter data streams that the CPU is better at parsing and handling than the GPU would be with its millions of parallel processes.

    I’ll say that CPUs are definitely not the way forward. Ray tracing, like many other data processing types, is best done on dedicated hardware. I’m not saying that GPUs are that, as they are also definitely not dedicated to that purpose (and can’t be for the foreseeable future, technically speaking), but they’re still several orders of magnitude better than CPUs.

    CPUs do branching code well. Anything that isn’t branching, or that doesn’t rely on massive in-order instruction streams (and thus single-threaded execution), is better done on dedicated hardware. CPUs already have this in various forms of SIMD like SSE, AVX, and the old MMX, but ray tracing is on a completely different scale (see the sketch at the end of this comment).

    As far as alternative means of lighting and refining the process in general, well, that has to happen regardless. Especially if any decent ray tracing will ever happen on the current console generation!
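    To illustrate the scale point, here is a toy sketch (not from any real engine; the function below is purely hypothetical) of why ray tracing fans out so naturally across a GPU’s thousands of threads: every pixel’s rays can be computed independently of every other pixel’s.

    ```python
    # Toy illustration only: each pixel's ray work is independent, which is why
    # ray tracing maps to thousands of GPU threads far better than to a CPU's
    # handful of cores and SIMD lanes.
    def trace_pixel(x: int, y: int) -> tuple:
        # Stand-in for real ray generation, BVH traversal, shading, and denoising.
        return (x % 256, y % 256, 128)

    WIDTH, HEIGHT = 64, 64
    # No iteration below depends on any other; on a GPU they would all run concurrently.
    image = [[trace_pixel(x, y) for x in range(WIDTH)] for y in range(HEIGHT)]
    ```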

  12. I went to Microcenter on launch day to buy a 6900XT, as stock on NVIDIA cards was bad. I figured the RTX 3090 would be the faster option overall, but if I could get 90% of the performance for $500 less, it would be a great option. I’m glad I didn’t get the 6900XT. No DLSS and **** ray tracing performance would have disappointed me. It’s a non-starter for Cyberpunk 2077 at 4K, a game I have hundreds of hours in.
  13. I think we are quickly going to hit a moment when CPUs and current and even last-gen GPUs can step up to the plate with a new way to think of the reflections and shadows in the game, rendering them as part of the scene without additional passes for proper reflections. Doing the reflections in code before passing to the GPU to render, as an example. Just making that process ultra efficient and bundling it into fewer, fatter data streams that the CPU is better at parsing and handling than the GPU would be with its millions of parallel processes.

    That makes no sense. You’re just adding latency. You can’t do reflections without first rendering the scene in the first place, and you can’t render them as "part of the scene" unless you are fine with static reflections and shadows. Technically, reflections and shadows are already "in code," but they need to be projected based on the perspective of the view frustum which can’t be done without direct access to the video card’s memory.

  14. That makes no sense. You’re just adding latency. You can’t do reflections without first rendering the scene in the first place, and you can’t render them as "part of the scene" unless you are fine with static reflections and shadows. Technically, reflections and shadows are already "in code," but they need to be projected based on the perspective of the view frustum which can’t be done without direct access to the video card’s memory.

    I’m not disagreeing with you. I obviously don’t know the backend code. It’s just a gut feeling. May be way off but there it is.

  15. It is obvious that the developers designed around RTX hardware and not AMD’s for RT. Newer games designed around AMD hardware, i.e. consoles, are doing well with AMD RT. FSR, now available, should make some inroads for using RT with AMD hardware.

    AMD’s design with Infinity Cache does better when multiple shader/compute operations are combined into a mega shader, where the cache will be much more efficient.

    We’ll have to see how newer games like Far Cry 6, with RT and FSR, perform.

  16. It is obvious that the developers designed around RTX hardware and not AMD’s for RT. Newer games designed around AMD hardware, i.e. consoles, are doing well with AMD RT. FSR, now available, should make some inroads for using RT with AMD hardware.

    AMD’s design with Infinity Cache does better when multiple shader/compute operations are combined into a mega shader, where the cache will be much more efficient.

    We’ll have to see how newer games like Far Cry 6, with RT and FSR, perform.

    How are they designing it around RTX hardware when they’re using the Microsoft DXR API?

    There’s no custom code that’s done to differentiate between the red/green side. I suppose the only thing could be that developers chose to implement a RT method that is known to perform poorly on AMD hardware, but if it gives them the visual effect that they are looking for within the game, that’s not necessarily a decision based upon performance but rather artistic direction.

  17. How are they designing it around RTX hardware when they’re using the Microsoft DXR API?

    There’s no custom code that’s done to differentiate between the red/green side. I suppose the only thing could be that developers chose to implement a RT method that is known to perform poorly on AMD hardware, but if it gives them the visual effect that they are looking for within the game, that’s not necessarily a decision based upon performance but rather artistic direction.

    Unless you want to count the denoiser for ray tracing that Nvidia has specifically written. Not sure what magic it works behind the scenes in code, but it is Nvidia-specific and takes advantage of specifically gated hardware. Not that I’m against that, just saying it isn’t all roses. ;)

    Noted here: https://www.realtimerendering.com/raytracing.html

    And also on nvidia’s page. Here is the quote from the source.

    "Denoising is critical for real-time DXR performance when using path tracing or other Monte Carlo techniques. Alain Galvan’s summary posts on ray tracing denoising and machine-learning denoising are good places to start. Zwicker et al. give a state of the art report about this area; note that it is from 2015, however, so is not fully up to date. Intel provides free code in their Open Image Denoise filter collection. The Quake II RTX demo includes shader code for the A-SVGF filter for denoising. NVIDIA has a developer access program for their denoiser. "

  18. Unless you want to count the denoiser for ray tracing that Nvidia has specifically written. Not sure what magic it works behind the scenes in code, but it is Nvidia-specific and takes advantage of specifically gated hardware. Not that I’m against that, just saying it isn’t all roses. ;)

    Microsoft’s DXR does not have a specific hardware code path for developers to use, and DXR is what a vast majority of the ray tracing games are using.

    NVIDIA *has* written custom extensions for Vulkan (prior to Vulkan RT being released); however, those only appear in Quake II RTX and Wolfenstein Youngblood. I don’t see how what you linked contradicts that…

  19. It is obvious that the developers designed around RTX hardware and not AMD’s for RT.

    If there was any design target, it would have been RTX because that’s all anyone had for years. Which means that the issue was AMD not designing their hardware and drivers to meet the needs of games, something both companies have failed to do on occasion when a new technology becomes available.

    To be fair to AMD, their first-gen RT is probably better than Nvidia’s 2000-series.

    Unless you want to count the denoiser for ray tracing that Nvidia has specifically written. Not sure what magic it works behind the scenes in code, but it is Nvidia-specific and takes advantage of specifically gated hardware. Not that I’m against that, just saying it isn’t all roses. ;)

    Noted here: https://www.realtimerendering.com/raytracing.html

    And also on nvidia’s page. Here is the quote from the source.

    "Denoising is critical for real-time DXR performance when using path tracing or other Monte Carlo techniques. Alain Galvan’s summary posts on ray tracing denoising and machine-learning denoising are good places to start. Zwicker et al. give a state of the art report about this area; note that it is from 2015, however, so is not fully up to date. Intel provides free code in their Open Image Denoise filter collection. The Quake II RTX demo includes shader code for the A-SVGF filter for denoising. NVIDIA has a developer access program for their denoiser. "

    If the blame is focused on the denoisers, then wouldn’t it be that AMD’s denoiser is less efficient?

    I don’t really see the big deal here. Nvidia had been working to find a way to bring RT to real-time consumer graphics for nearly a decade and brought their solution to market a few years earlier, and it’s very reasonable for games to appear to ‘favor’ the platform that was actually available during their development.

    You might recall how painful the release of DX9 was for Nvidia :cool:

  20. How are they designing it around RTX hardware when they’re using the Microsoft DXR API?

    There’s no custom code that’s done to differentiate between the red/green side. I suppose the only thing could be that developers chose to implement a RT method that is known to perform poorly on AMD hardware, but if it gives them the visual effect that they are looking for within the game, that’s not necessarily a decision based upon performance but rather artistic direction.

    It has more to do with the enhancements of DXR 1.1, which AMD and Microsoft worked on together; Nvidia also contributed. Most likely the optimizations that work well on RDNA2 did not make it into Cyberpunk 2077, since DXR 1.1 came much later. There is an AMD video on how to optimize for RDNA2 that also briefly covers an optimized denoiser made by AMD.

    On the PlayStation 5, Ratchet and Clank is pushing 60 FPS, 4K resolution, and Ray Tracing (reflections). The game, to me, is obviously well optimized to take advantage of RDNA2.

    This second video has a developer talking about RT; if you can’t watch the first due to time, this one will probably be best, as it covers many aspects of the game.

    I just don’t think Cyberpunk 2077 represents well what AMD can do with RT at this stage.

  21. Personally, I see things as AMD turning a lot of things around in the design of their chips and chiplets, be they for primary CPU or GPU builds. They’ve clearly established a topology that is working for them, they have thus far successfully iterated on it on the CPU front, and now they are working to bring that same iterative development improvement to the GPU front.

    No company today can safely rest on their laurels, EVEN with the enhanced demand that computer parts are currently under.

    This is good for the market… and will be even BETTER once pricing turns the bend back to levels of sanity. Though we all know it won’t completely return to ‘pre-COVID’ normal for a very long time.

    And yes, each company has tips/tricks/code to best take advantage of their hardware. Be that through driver plug-ins and denoisers, or flat-out building their hardware to code targets.

    The advantage AMD has is most games for the next 3-5 years will be specifically targeted to best take advantage of their hardware. As long as they can continue to iterate and optimize those familiar paths they will have success.

    The advantage Nvidia has is they are first to the party and have the financial capital to fund development to best take advantage of their hardware, literally more than any other hardware developer today.

    AMD is probably taking a loss short term on the chips and design work they are doing for the Sony and Microsoft consoles. Not to mention the Samsung partnership. BUT, by the same token, Nvidia is more interested in big compute, where the larger profit margins are. Not that they are NOT interested in the consumer market.

    The likes of Nvidia should be worried about Apple entering desktop chip design. One ubiquitous API platform for all compute… That’s dangerous for all of the players other than Apple, and Apple has the financial might to try and make that happen.

  22. I just don’t think Cyberpunk 2077 represents well what AMD can do with RT at this stage.

    You do know that this is because AMD RT didn’t exist when CDPR implemented ray tracing, right? And that Intel is going to have the same problem, as will Apple and anyone else trying to step up to the plate?

    And this is why…

    The advantage Nvidia has is they are first to the party and have the financial capital to fund development to best take advantage of their hardware, literally more than any other hardware developer today.

    Though Intel and Apple are very likely to be nipping at their heels, from a capital standpoint. AMD still has a significant advantage over them in terms of having a head start, but they’re still a generation behind Nvidia too.

    Personally, I see things as AMD turning a lot of things around in the design of their chips and chiplets, be they for primary CPU or GPU builds.

    GPU chiplets should be easy enough after getting CPU chiplets working. Like RT, though, and many other things, they haven’t done it till they’ve done it.

    If (when?) they do pull it off, I’m pretty excited about the possibilities. Performance can rise pretty easily without cost skyrocketing.

    The advantage AMD has is most games for the next 3-5 years will be specifically targeted to best take advantage of their hardware. As long as they can continue to iterate and optimize those familiar paths they will have success.

    This most recent console generation has been the least potato out of the box, so this generation might actually be different, but it should still be mentioned that this has yet to really result in an advantage. There’s always just a bit of difference between the console stack and the desktop.
