
[DF] Cyberpunk 2077 2.0 - PC Tech Review - DLSS 3.5 Ray Reconstruction Deep Dive

Mr.Phoenix

Member
I don't get how AMD aren't ashamed of themselves at this point. Aren't they seeing all this?

2028 is still a ways off, but if by 2026 AMD hasn't matched Nvidia with the AI tech in their GPUs and all the stuff that brings, I really think Sony and MS should consider switching GPU vendors. As hard as that may be, it would be worth it.
 

Zathalus

Member
I don't get how AMD aren't ashamed of themselves at this point. Aren't they seeing all this?

2028 is still a ways off, but if by 2026 AMD hasn't matched Nvidia with the AI tech in their GPUs and all the stuff that brings, I really think Sony and MS should consider switching GPU vendors. As hard as that may be, it would be worth it.
It's amazing to think that Intel of all people have better AI acceleration on GPUs. As well as a better upscaler than AMD in the form of XeSS. Not to mention better RT acceleration as well.
 
DLSS/ML is the future. There is no going back.

Expanding on this, I think higher resolutions will not be the way going forward. Especially on consoles, I think Sony and MS will adopt DLSS-like stuff for their first-party titles at the very least.
 
Last edited:
Does Alex address the horrible ghosting that is omnipresent in the first 5 minutes? Or is he too busy gushing about ray reconstruction for that?
Yes, later on in the video. I also noticed that the reflections on a rainy street were a bit too clear and sharp considering the mixture of pavement and rainwater, and he addresses that issue too.
 
fake frames, fake rays, what next? soon we will just be playing figments of our imagination thanks to neurolink

Fake boobs, that’s what’s next
 

SlimySnake

Flashless at the Golden Globes
I don't get how AMD aren't ashamed of themselves at this point. Aren't they seeing all this?

2028 is still a ways off, but if by 2026 AMD hasn't matched Nvidia with the AI tech in their GPUs and all the stuff that brings, I really think Sony and MS should consider switching GPU vendors. As hard as that may be, it would be worth it.
Or they should have their engineering teams help amd where they are lacking. Instead of doing a copy pasta, maybe do some actual r&d for a change.

All i heard leading up to the console reveal was how both Sony and ms had custom rt solutions. Turns out they just took what amd gave them. Going with nvidia won’t change things because nvidia creates massive gpus that will be too expensive for consoles. Those dedicated rt cores and tensor cores take up space on the die.
 
Last edited:
Or they should have their engineering teams help amd where they are lacking. Instead of doing a copy pasta, maybe do some actual r&d for a change.

All i heard leading up to the console reveal was how both Sony and ms had custom rt solutions. Turns out they just took what amd gave them. Going with nvidia won’t change things because nvidia creates massive gpus that will be too expensive for consoles. Those dedicated rt cores and tensor cores take up space on the die.
There's no way AMD wouldn't improve things on their end to match this...right?

 

Silver Wattle

Gold Member
I don't get how AMD aren't ashamed of themselves at this point. Aren't they seeing all this?

2028 is still a ways off, but if by 2026 AMD hasn't matched Nvidia with the AI tech in their GPUs and all the stuff that brings, I really think Sony and MS should consider switching GPU vendors. As hard as that may be, it would be worth it.
You can't just slap in "AI tech" and suddenly match Nvidia. AMD were caught with their pants down on both resolution upscaling and ray tracing, and they are playing catch-up to an Nvidia that has been investing heavily in both of those technologies.
They are doing a decent job with hardware not really built specifically for RT.
The consoles won't be switching to Nvidia, margins are too tight for lord Jensen.
 

Buggy Loop

Member
2019 Path tracing :




2023 Path tracing :




Do people understand the massive leap in this rendering technique in such a short span?

If we listened to every "RT is too much performance cost for the look" and Nvidia had given up, we would still be last gen.
 

hinch7

Member
Night and day difference with path tracing plus Ray Reconstruction versus normal RT with its current denoisers. So much detail that was lost is regained with RR. You could say Nvidia fixed RT to how it should be with 3.5.

Tbh.. AMD has their work cut out for them in the years to come. Hopefully they can get all this sorted by the time RDNA 5 is out. But for now people can enjoy PT and ray tracing in all their glory with a moderately powerful RTX 4000 series card. Can't wait to see how Nvidia advances this tech going into the next generation, and sorts out PT performance so it's actually a viable option to switch on with a mainstream card.
 
Last edited:
Tried this with my 4090 and the difference is indeed HUGE this is a big fucking deal. The game finally really looks like true 4k. The smear/blur is gone. Outstanding job from Nvidia and CDPR.
Or they should have their engineering teams help amd where they are lacking. Instead of doing a copy pasta, maybe do some actual r&d for a change.

All i heard leading up to the console reveal was how both Sony and ms had custom rt solutions. Turns out they just took what amd gave them. Going with nvidia won’t change things because nvidia creates massive gpus that will be too expensive for consoles. Those dedicated rt cores and tensor cores take up space on the die.
Rich said in the newest DF direct that he would hope MS would go with Nvidia for next gen. It is not impossible but would be pricey of course.
 
Last edited:
I wonder if we will ever get another bleeding edge leap like this from cdpr since they are switching to Unreal 5.
What even is the next game in the near future that pushes visuals this hard. Alan Wake 2 is the only thing that comes to mind.
 

shamoomoo

Member
I don't get how AMD aren't ashamed of themselves at this point. Aren't they seeing all this?

2028 is still a ways off, but if by 2026 AMD hasn't matched Nvidia with the AI tech in their GPUs and all the stuff that brings, I really think Sony and MS should consider switching GPU vendors. As hard as that may be, it would be worth it.
Why should they? There are only 2 current games with path tracing, and when Nvidia launched Turing, they stated it wouldn't be until 2024 that ray tracing was going to be impactful.

RDNA3 can accelerate AI workloads; it may not be as performant as Nvidia, but how much do you need for gaming at every level of resolution?
 

kittoo

Cretinously credulous
Tried this with my 4090 and the difference is indeed HUGE this is a big fucking deal. The game finally really looks like true 4k. The smear/blur is gone. Outstanding job from Nvidia and CDPR.

Rich said in the newest DF direct that he would hope MS would go with Nvidia for next gen. It is not impossible but would be pricey of course.

Wouldn't that also affect backwards compatibility?
 
The ghosting and the rasterization or w/e DF calls it is far worse than shown in the video. It applies to a lot of NPC models, weapons, etc. It's not something you can just deal with. The flickering is far less of a problem than what RR introduces. Also, reflections are not supposed to be that crystal clear if accuracy is what they're sweating over, unless cars and roads are made out of fucking mirrors.
 

Buggy Loop

Member
Night and day difference with path tracing plus Ray Reconstruction versus normal RT with its current denoisers. So much detail that was lost is regained with RR. You could say Nvidia fixed RT to how it should be with 3.5.

Tbh.. AMD has their work cut out for them in the years to come. Hopefully they can get all this sorted by the time RDNA 5 is out. But for now people can enjoy PT and ray tracing in all their glory with a moderately powerful RTX 4000 series card. Can't wait to see how Nvidia advances this tech going into the next generation, and sorts out PT performance so it's actually a viable option to switch on with a mainstream card.

[Charts: Cyberpunk 2077 path tracing performance at 1920x1080 and 3840x2160]


RDNA 2 → RDNA 3 = 1.5~1.6x
Turing → Ampere = 2.1~2.4x
Ampere → Ada = 1.8~1.9x (not including frame gen )

To catch up, AMD has no choice but to throw their hybrid RT pipeline in the garbage.
Even if RDNA 3→ 4 has a 1.5x jump (not negligible) and again a 1.5x jump from RDNA 4 → 5, it wouldn't catch up to 4090 today in this game. And they have an advantage in rasterization as a baseline in Cyberpunk 2077 before anyone comes in screaming it's Nvidia biased. It's one of the better AMD performing titles before enabling RT.
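
Quick napkin math on that compounding claim (just a sketch; the ~4x starting gap is my rough read of the PT charts above, not a measured number):
Code:
# Compound the hypothetical gen-on-gen RT jumps for AMD and compare against
# an assumed path-tracing gap to the 4090. The 4.0x figure is a ballpark
# read of the Cyberpunk PT charts, not an exact benchmark result.
assumed_gap_to_4090 = 4.0   # assumption: 4090 ~4x a 7900 XTX in CP2077 path tracing
rdna4_jump = 1.5            # hypothetical RDNA 3 -> RDNA 4 uplift
rdna5_jump = 1.5            # hypothetical RDNA 4 -> RDNA 5 uplift

compounded = rdna4_jump * rdna5_jump
print(f"Two 1.5x jumps compound to {compounded:.2f}x")                            # 2.25x
print(f"Still short of today's 4090 by {assumed_gap_to_4090 / compounded:.2f}x")  # ~1.78x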

They're 2 gen behind for path tracing. A damn 2080 Ti matches the 7900XTX flagship.
Turing 2080Ti which had
  • no concurrent RT & graphic workload
  • way lower frequencies, 1545MHz vs 2499MHz clocks
  • 18.6B vs 57.7B transistors
  • only 28% of the pixel rate, 43% of the texture rate, 22% of the TFLOPs
  • 68 RT cores vs 96
  • Virtually negligible cache compared to RDNA 3


Let's not even get into ML to match DLSS 2, 3 frame gen and now 3.5 with ray reconstruction. They haven't even touched ML yet. Scary. I'm assuming that the current pipeline already has its hands full juggling RT & graphics workloads; adding ML into the mix would choke it even further. Thus, i really think AMD needs to rethink the whole architecture. Do they swallow their pride and change, or do they dig in their heels and risk Intel coming along with a 2nd iteration that puts them in danger, because Intel already has better RT & ML?

I wonder if we will ever get another bleeding edge leap like this from cdpr since they are switching to Unreal 5.
What even is the next game in the near future that pushes visuals this hard. Alan Wake 2 is the only thing that comes to mind.

Very sad that they are switching to Unreal 5 after the initial backlash over Cyberpunk 2077. The game has no stutter and has cutting-edge tech. I know the internal tools to make it happen might have been development hell, but going to Unreal 5 is gonna suck; inevitably, from what we're seeing so far, the cost of the graphics isn't even worth the visuals. Cyberpunk 2077 Overdrive performs better and scales better than, say, Immortals of Aveum. A freaking open-world megacity full of details vs a damn linear shooter.
 
Last edited:

FireFly

Member
Very sad that they are switching to Unreal 5 after the initial backlash over Cyberpunk 2077. The game has no stutter and has cutting-edge tech. I know the internal tools to make it happen might have been development hell, but going to Unreal 5 is gonna suck; inevitably, from what we're seeing so far, the cost of the graphics isn't even worth the visuals. Cyberpunk 2077 Overdrive performs better and scales better than, say, Immortals of Aveum. A freaking open-world megacity full of details vs a damn linear shooter.
Immortals of Aveum was made by a team a 10th of the size, and was originally built in UE4. And we already have path tracing on UE5 with Desordre.
 

Buggy Loop

Member
Or they should have their engineering teams help amd where they are lacking. Instead of doing a copy pasta, maybe do some actual r&d for a change.

All i heard leading up to the console reveal was how both Sony and ms had custom rt solutions. Turns out they just took what amd gave them. Going with nvidia won’t change things because nvidia creates massive gpus that will be too expensive for consoles. Those dedicated rt cores and tensor cores take up space on the die.

Is it massive?

a 4080 with 379 mm^2 competes with a 7900XTX's GCD 306mm^2 + 6x37mm^2 MCDs
  • With 20~25% of silicon dedicated to RT/ML
  • Without taking into account the memory controllers (how much you want out? 150~190 mm^2? They take space too)
  • Without taking into account the huge cache upgrades Ada got. How much area, who knows, but cache is typically not space savvy.
I removed the MCDs on RDNA 3, which includes the cache, just to showcase how stupid this architecture is. You're left with nearly a raw GCD chip of 306mm^2 of pure hybrid RT/ML to optimize the area towards more rasterization, as per patent.
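
Back-of-the-envelope on those areas (die sizes as quoted above; the 20~25% RT/ML share for Ada is the rough estimate from the bullet list, not an official breakdown):
Code:
# Die-area napkin math using the sizes quoted above (mm^2).
ad103_total = 379          # full RTX 4080 die (AD103)
navi31_gcd  = 306          # 7900 XTX graphics compute die
navi31_mcds = 6 * 37       # six memory/cache dies

navi31_total = navi31_gcd + navi31_mcds                          # 528 mm^2 of total silicon
rt_ml_low, rt_ml_high = 0.20 * ad103_total, 0.25 * ad103_total   # rough RT/ML share estimate

print(f"Navi 31 total: {navi31_total} mm^2 vs AD103: {ad103_total} mm^2")
print(f"GCD alone is {navi31_gcd / ad103_total:.0%} of AD103,"
      f" which also carries ~{rt_ml_low:.0f}-{rt_ml_high:.0f} mm^2 of RT/ML hardware")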

Yet we're talking about a 2~4% RASTERIZATION performance advantage for nearly 60W more power consumption on AMD's side.

I would say that's pretty fucking amazing what they did on Ada's architecture.

If i was a console manufacturer and i hesitate on Nvidia because "reasons", monetary or APU, then i go Intel. They're on the fast track to jump AMD on next iteration.

And we already have path tracing on UE5 with Desordre.

UE5 games already perform like shit before path tracing. I can't even find benchmarks of path tracing in that indie puzzle game. But we're a far cry from a game like Cyberpunk 2077, i think we can agree on that. ReSTIR PT for the nearly thousands of lights present in night city would make pretty much all other path tracing engines crawl. For UE5 you would have to import nvidia plugins anyway to match this at the very least, not the native path tracing branch.

CDPR's Cyberpunk 2077 Overdrive engine is now soooo good. I would hope they fix their dev tooling to smooth things out, but the foundation of that engine is top tier now; in fact, there's nothing like it as of now, until another dev implements ReSTIR path tracing. Alan Wake 2 is next, with the Northlight engine.

Not impressed with UE5 so far. I'll just say that.
 
Last edited:

Mr.Phoenix

Member
You can't just slap in "AI tech" and suddenly match Nvidia. AMD were caught with their pants down on both resolution upscaling and ray tracing, and they are playing catch-up to an Nvidia that has been investing heavily in both of those technologies.
They are doing a decent job with hardware not really built specifically for RT.
The consoles won't be switching to Nvidia, margins are too tight for lord Jensen.
This is not true. `AI tech` is basically matrix math or FP4/6/8 operations. Once you have the hardware, you can accelerate AI operations. Just look at Intel: on their very first attempt at a dedicated GPU, they already have AI on par with DLSS, and RT too. At this point, AMD needs to shamelessly, at the very least, copy Nvidia and Intel and make sure they have hardware parity. At the very least, that is what they should target. As it stands, their GPUs are lacking proper AI acceleration and the RT cores aren't even accelerating the full RT pipeline (that's why AMD RT is so bad).
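
To make that concrete, here's a toy sketch of what those "AI operations" actually are, just low-precision matrix multiplies (NumPy FP16 standing in for what tensor/XMX units run natively; the layer sizes are made up):
Code:
import numpy as np

# Toy example: one fully-connected layer in FP16, the kind of GEMM that
# tensor/XMX cores accelerate in hardware. Sizes are arbitrary.
features = np.random.rand(1, 512).astype(np.float16)    # e.g. a per-pixel feature vector
weights  = np.random.rand(512, 256).astype(np.float16)  # one layer of an upscaling network
bias     = np.zeros(256, dtype=np.float16)

out = np.maximum(features @ weights + bias, 0)           # matrix multiply + ReLU
print(out.shape, out.dtype)                              # (1, 256) float16
# A DLSS/XeSS-style network is many of these multiplies per frame, which is
# why dedicated low-precision matrix hardware matters so much.
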
Why should they? There's only 2 current games with path tracing and when Nvidia launched turning,they stated it would be until 2024 when ray tracing was going to be impactful.

RDNA3 can accelerate AI work loads,it may not be as performant as Nvidia but how much do you need for gaming at every level of resolution?
There is an embarrassing gap in tech features between an AMD GPU and an Nvidia or even Intel GPU. This is just the truth. The sad thing is, outside of BC complications, even an Intel GPU built for consoles would perform better than an AMD GPU.

let's not sugarcoat this... there have been two defining hardware features in GPUs in the last 6 years. AI and RT acceleration. That's it, those two hardware components are the difference between a last-gen GPU and a current-gen GPU. They represent a clear technological shift between everything that came after 2018 and everything that came before it. They are the single biggest advancements made in GPU tech since we started having programmable shaders in the 2000s.

And yet, somehow... almost 6 years from their first appearance on the market, AMD doesn't even have full or at least comparable hardware for them? AMD is still fighting a Raster/FP battle with who? Like we are still looking at a Vega 64. It's honestly embarrassing. There is absolutely no reason why a 6-year-old GPU from their rivals (2080ti) should perform better than a just-released 7800XT in a current-gen game with modern graphical features. NO REASON that should be happening. And that just goes to show how far AMD is letting themselves lag behind.

"Good enough" or "not that bad" is not okay anymore.

Is it massive?

a 4080 with 379 mm^2 competes with a 7900XTX's GCD 306mm^2 + 6x37mm^2 MCDs
  • With 20~25% of silicon dedicated to RT/ML
  • Without taking into account the memory controllers (how much you want out? 150~190 mm^2? They take space too)
  • Without taking into account the huge cache upgrades Ada got. How much area, who knows, but cache is typically not space savvy.
I removed the MCDs on RDNA 3, which includes the cache, just to showcase how stupid this architecture is. You're left with nearly a raw GCD chip of 306mm^2 of pure hybrid RT/ML to optimize the area towards more rasterization, as per patent.

Yet we're talking about a 2~4% RASTERIZATION performance advantage for nearly 60W more power consumption on AMD's side.

I would say that's pretty fucking amazing what they did on Ada's architecture.

If i was a console manufacturer and i hesitate on Nvidia because "reasons", monetary or APU, then i go Intel. They're on the fast track to jump AMD on next iteration.



UE5 games already perform like shit before path tracing. I can't even find benchmarks of path tracing in that indie puzzle game. But we're a far cry from a game like Cyberpunk 2077, i think we can agree on that. ReSTIR PT for the nearly thousands of lights present in night city would make pretty much all other path tracing engines crawl. For UE5 you would have to import nvidia plugins anyway to match this at the very least, not the native path tracing branch.

CDPR's Cyberpunk 2077 Overdrive engine is now soooo good. I would hope they fix their dev tooling to smooth things out, but the foundation of that engine is top tier now; in fact, there's nothing like it as of now, until another dev implements ReSTIR path tracing. Alan Wake 2 is next, with the Northlight engine.

Not impressed with UE5 so far. I'll just say that.
And this is the problem, everyone who tries to defend AMD, including AMD themselves, does this. Talk up their raster performance. The reason they are winning the raster battle is because everyone else has seen that it doesn't mean shit. Everyone else is spending die area on more meaningful GPU features instead of just more raster performance.

The key things that define any current-gen game are things that, when fully utilized, would make the best AMD GPUs perform worse than 5-year-old GPUs.

Raster...smh, AMD is like a damn one-trick pony right now.
 
Last edited:

Kenpachii

Member
This is not true. `AI tech` is basically matrix math or FP4/6/8 operations. Once you have the hardware, you can accelerate AI operations. Just look at Intel: on their very first attempt at a dedicated GPU, they already have AI on par with DLSS, and RT too. At this point, AMD needs to shamelessly, at the very least, copy Nvidia and Intel and make sure they have hardware parity. At the very least, that is what they should target. As it stands, their GPUs are lacking proper AI acceleration and the RT cores aren't even accelerating the full RT pipeline (that's why AMD RT is so bad).

There is an embarrassing gap in tech features between an AMD GPU and an Nvidia or even Intel GPU. This is just the truth. The sad thing is, outside of BC complications, even an Intel GPU built for consoles would perform better than an AMD GPU.

let's not sugarcoat this... there have been two defining hardware features in GPUs in the last 6 years. AI and RT acceleration. That's it, those two hardware components are the difference between a last-gen GPU and a current-gen GPU. They represent a clear technological shift between everything that came after 2018 and everything that came before it. They are the single biggest advancements made in GPU tech since we started having programmable shaders in the 2000s.

And yet, somehow... almost 6 years from their first appearance on the market, AMD doesn't even have full or at least comparable hardware for them? AMD is still fighting a Raster/FP battle with who? Like we are still looking at a Vega 64. It's honestly embarrassing. There is absolutely no reason why a 6-year-old GPU from their rivals (2080ti) should perform better than a just-released 7800XT in a current-gen game with modern graphical features. NO REASON that should be happening. And that just goes to show how far AMD is letting themselves lag behind.

"Good enough" or "not that bad" is not okay anymore.


And this is the problem, everyone who tries to defend AMD, including AMD themselves, does this. Talk up their raster performance. The reason they are winning the raster battle is because everyone else has seen that it doesn't mean shit. Everyone else is spending die area on more meaningful GPU features instead of just more raster performance.

The key things that define any current-gen game are things that, when fully utilized, would make the best AMD GPUs perform worse than 5-year-old GPUs.

Raster...smh, AMD is like a damn one-trick pony right now.

Raster performance has been a useless metric since the moment DLSS 2.0 arrived, and now with frame gen the raster performance metric has become completely useless. Sadly a lot of bench websites are simply delusional when it comes to this shit.

Sites like Hardware Unboxed for the longest time: "OMG, look, an AMD GPU performs like a top-end Nvidia GPU" with all of Nvidia's features like DLSS and FG disabled.
The reality for players: 50 fps vs 50 fps becomes 50 fps versus 100 fps with far better image quality.

I got a 4080 laptop and this is the reality.

Maxed out path tracing, every setting maxed, in the city: 116 fps with frame gen + DLSS Quality.



Frame gen disabled and boom, you now sit in the mid 60s.



DLSS and frame gen disabled: 31 fps.



Honestly i move up from low 30's to 120's with nvidia features.
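
For what it's worth, the rough multipliers from those three results (mid-60s taken as 65 fps, so treat these as ballpark):
Code:
# Rough speedup factors from the three results above (maxed path tracing
# on a 4080 laptop). The mid-60s reading is assumed to be 65 fps.
native_fps       = 31    # DLSS and frame gen off
dlss_quality_fps = 65    # DLSS Quality, frame gen off
dlss_fg_fps      = 116   # DLSS Quality + frame generation

print(f"DLSS Quality alone:       {dlss_quality_fps / native_fps:.1f}x")    # ~2.1x
print(f"Frame gen on top of DLSS: {dlss_fg_fps / dlss_quality_fps:.1f}x")   # ~1.8x
print(f"Total vs native:          {dlss_fg_fps / native_fps:.1f}x")         # ~3.7x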

Starfield is another example: slam on frame gen and the game goes from barely playable at 60 fps to 100+ fps. It even solves CPU bottlenecks.

Anybody that still benches native as anything other than a meme honestly needs their brains checked.

At this point in time I'd even go so far as to say that if a game is hard on the hardware to run and doesn't have DLSS + frame gen, I simply see it as an unoptimized shit port that I won't spend a dime on.
 
Last edited:

hinch7

Member
[Charts: Cyberpunk 2077 path tracing performance at 1920x1080 and 3840x2160]


RDNA 2 → RDNA 3 = 1.5~1.6x
Turing → Ampere = 2.1~2.4x
Ampere → Ada = 1.8~1.9x (not including frame gen )

To catch up, AMD has no choice but to throw their hybrid RT pipeline in the garbage.
Even if RDNA 3→ 4 has a 1.5x jump (not negligible) and again a 1.5x jump from RDNA 4 → 5, it wouldn't catch up to 4090 today in this game. And they have an advantage in rasterization as a baseline in Cyberpunk 2077 before anyone comes in screaming it's Nvidia biased. It's one of the better AMD performing titles before enabling RT.

They're 2 gen behind for path tracing. A damn 2080 Ti matches the 7900XTX flagship.
Turing 2080Ti which had
  • no concurrent RT & graphic workload
  • way lower frequencies, 1545MHz vs 2499MHz clocks
  • 18.6B vs 57.7B transistors
  • only 28% of the pixel rate, 43% of the texture rate, 22% of the TFLOPs
  • 68 RT cores vs 96
  • Virtually negligible cache compared to RDNA 3


Let's not even get into ML to match DLSS 2, 3 frame gen and now 3.5 with ray reconstruction. They haven't even touched ML yet. Scary. I'm assuming that the current pipeline already has its hands full juggling RT & graphics workloads; adding ML into the mix would choke it even further. Thus, i really think AMD needs to rethink the whole architecture. Do they swallow their pride and change, or do they dig in their heels and risk Intel coming along with a 2nd iteration that puts them in danger, because Intel already has better RT & ML?



Very sad that they are switching to Unreal 5 after the initial backlash over Cyberpunk 2077. The game has no stutter and has cutting-edge tech. I know the internal tools to make it happen might have been development hell, but going to Unreal 5 is gonna suck; inevitably, from what we're seeing so far, the cost of the graphics isn't even worth the visuals. Cyberpunk 2077 Overdrive performs better and scales better than, say, Immortals of Aveum. A freaking open-world megacity full of details vs a damn linear shooter.

Granted, path tracing is in its infancy. It just goes to show how far AMD is behind Nvidia, especially in RT and AI. Just looking at Microsoft's leak of their plans for the next-generation console from 2020... it's a little depressing to see that a lot of those planned future feature sets are available right now (and/or have been for years) but are still so far away on AMD's GPU roadmap. And true, seeing their current flagship SKU just matching the two-gen-old 2080 Ti in PT is just laughably bad. Rasterization performance has been largely good enough for a while now, and the move to RT and PT is only inevitable in the upcoming years. Sadly we're all going to have to wait for AMD to drag their feet.. as they're the ones dictating what goes into the next generation of consoles, and thus the capabilities of those and of game engines.

And yeah, it feels bad to see CDPR go UE5. Thus far, most of the titles we've seen running on UE5 perform quite poorly, and I can't see it being anywhere near as performant as what we have now with REDengine. With that said, hopefully changing engines will improve CDPR's development cycle and workflow, meaning we'd get Witcher 4 and CP 2 out faster. And by the time they release, a lot of us will likely have upgraded already, so the performance woes of RT and full RT should be a thing of the past.
 
Last edited:

tusharngf

Member
It's amazing to think that Intel of all people have better AI acceleration on GPUs. As well as a better upscaler than AMD in the form of XeSS. Not to mention better RT acceleration as well.
Intel has some of the best engineers on their team, but they lack the leadership and vision. They were stuck on 4-core CPUs for 5-6 years and only changed when AMD came up with a better solution.
 

Agent_4Seven

Tears of Nintendo
I've noticed the exact same issues (and even more) with DLSS 3.5 RR and PT. It's just not where it needs to be right now, and the cost is not worth it at all, meaning the performance and the price of the 4000 series. We'll see how 4.0 will improve things and how the new GPUs will perform.

This is not acceptable:

[Screenshot of the artifact in question]
 
Last edited:
To me it is no different than this:




If the end result of this tech gives us the IQ that we want or close to that, it's a net positive.

The problem is that these modern movie scenes DO look fake. It's extremely impressive on a technical level but also mind-numbing. As soon as you see a trillion things going on at the same time you know it's CG and then my brain gets overstimulated and I lose all interest.

Watching older movies is like getting a nice brain massage...

I do totally agree though that Nvidia's technology is amazing and a big positive in the context 🙂
 
Last edited:
The problem is that these modern movie scenes DO look fake. It's extremely impressive on a technical level but also mind-numbing. As soon as you see a trillion things going on at the same time you know it's CG and then my brain gets overstimulated and I lose all interest.
Unless you're watching recent Marvel movies or Netflix films, I bet you'd be genuinely surprised at the amount of non-blockbuster, down-to-earth films that use background CGI (static buildings, grass, skies, etc) that you would not notice at all. It's way more than you think.
 
Unless you're watching recent Marvel movies or Netflix films, I bet you'd be genuinely surprised at the amount of non-blockbuster, down-to-earth films that use background CGI (static buildings, grass, skies, etc) that you would not notice at all. It's way more than you think.
Trust me, I know. I've stopped watching almost all new movies for a few years now. The only ones I can stand are low-budget/indie movies and some international stuff.

And action/blockbuster movies.. I stopped watching them at least 10 years ago.
 
Last edited: