
[Digital Foundry] Death Stranding Director's Cut: PC vs PS5 Graphics Breakdown

Dream-Knife

Banned
Also, interesting to see how Performance/TFLOPS ratios play out.
2080 is 10.07 FP32.

He should really compare it to a 6600xt.

EDIT: I think it's nuts how when this game came out in 2019 I thought it looked so real. Looks cartoony now.
 
Last edited:

winjer

Gold Member
The way he set up triple buffering, all the programs he's running in the background, and the transparencies he has enabled in the Windows UI are probably causing a performance hit on the Windows machines.
Really strange way of setting up a benchmark. In more than twenty years of seeing benchmarks on PC, I've never seen anyone screwing around like this.
 

Loxus

Member
- An RTX 2060 Super is 71.2% of a PS5's performance at the exact same settings.

- A 5700 is 75.4% of PS5's.

- A 5700 XT is 83.5%.

- An RTX 2070 Super is 88.5%.

- An RTX 2080 is 96.6% of PS5's performance.


So according to Alex, the PS5 performs somewhere between an RTX 2080 and an RTX 2080 Super.

Not bad for a machine that is $399, right? 🤷‍♂️
Yea, it's definitely between the 2080 and 2080 Super.
The numbers line up almost perfectly.

NVIDIA GeForce RTX 2080 Super - Techpowerup


If there were an RX 6700, it would be the PS5 GPU.
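For anyone who wants to sanity-check those ratios, here's a minimal sketch comparing each card's theoretical FP32 throughput against the PS5's ~10.28 TFLOPS, next to the measured percentages above. The shader counts and boost clocks are spec-sheet values I'm assuming, not figures from the video, and real in-game clocks differ (as discussed further down the thread):

```python
# Rough FP32 TFLOPS comparison vs PS5, using spec-sheet boost clocks.
# TFLOPS = 2 ops (FMA) * shader count * clock (GHz) / 1000.
# The "measured" values are the performance ratios quoted above.

def tflops(shaders, clock_ghz):
    return 2 * shaders * clock_ghz / 1000

ps5 = tflops(2304, 2.23)  # 36 CUs * 64 ALUs at 2.23 GHz ~= 10.28 TFLOPS

cards = {
    # name: (shader count, boost clock GHz, measured % of PS5 performance)
    "RTX 2060 Super": (2176, 1.650, 71.2),
    "RX 5700":        (2304, 1.725, 75.4),
    "RX 5700 XT":     (2560, 1.905, 83.5),
    "RTX 2070 Super": (2560, 1.770, 88.5),
    "RTX 2080":       (2944, 1.710, 96.6),
}

for name, (shaders, clock, measured) in cards.items():
    ratio = tflops(shaders, clock) / ps5 * 100
    print(f"{name:15s}  theoretical {ratio:5.1f}%  measured {measured:5.1f}%")
```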
 
The way he set up triple buffering, all the programs he's running in the background, and the transparencies he has enabled in the Windows UI are probably causing a performance hit on the Windows machines.
Really strange way of setting up a benchmark. In more than twenty years of seeing benchmarks on PC, I've never seen anyone screwing around like this.

I think that's the only way to get rid of the frame rate smoothing, which clocks it at 40 FPS. I guess the only real way to test this would be to replicate it at home to see if there are any discrepancies. Obviously doing this with Death Stranding would not be ideal; you'd need to do it with another title that allows you to disable VSYNC and has no frame smoothing effect.
 

winjer

Gold Member
I think that's the only way to get rid of the frame rate smoothing, which clocks it at 40 FPS. I guess the only real way to test this would be to replicate it at home to see if there are any discrepancies. Obviously doing this with Death Stranding would not be ideal; you'd need to do it with another title that allows you to disable VSYNC and has no frame smoothing effect.

He could have just disabled v-sync.
 

DJ12

Member
PS5 is outperforming the RTX 2080 here even with Alex using a 10900k. If the game is not CPU bound, why not use a less powerful CPU like a Ryzen 3600 or even the 2700 NX Gamer uses? The 10900k has 10 cores, 20 threads running at 5.2 GHz; the Ryzen 3600 has 6 cores and 12 threads and runs at 4.3 GHz. Since the PS5 reserves 1 core for its OS, the 3600 is probably the better CPU to use.

He wants the console to look bad obviously, especially when he's doing PC vs PS5.
 

DenchDeckard

Moderated wildly
Not the best port obviously but there is no denying that this is seriously impressive.

I still need to pick it up, and I’ll be honest. Out of sheer respect I’m getting it on ps5. Will be even better once that VRR patch drops.

Will pick up a PS4 disc version and pay the upgrade fee.
 

winjer

Gold Member
I believe he did, but the frame rate smoothing still kicks in and smooths it to 40fps.

I never played this game, so I have no idea of what happens.
But if disabling v-sync means the game still locks frame rate, then there is some big issue with it.
That would make it a really bad port.
 

Zathalus

Member
PS5 is outperforming the RTX 2080 here even with Alex using a 10900k. If the game is not CPU bound, why not use a less powerful CPU like a Ryzen 3600 or even the 2700 NX Gamer uses? The 10900k has 10 cores, 20 threads running at 5.2 GHz; the Ryzen 3600 has 6 cores and 12 threads and runs at 4.3 GHz. Since the PS5 reserves 1 core for its OS, the 3600 is probably the better CPU to use.

Because the 10900k runs the game at close to 200 FPS when not GPU limited. Even a 3600 would remain well over the 60 FPS cap here. It's not really going to impact the benchmark numbers.

For example: https://overclock3d.net/reviews/sof...c_performance_review_and_optimisation_guide/4
 
I never played this game, so I have no idea of what happens.
But if disabling v-sync means the game still locks frame rate, then there is some big issue with it.
That would make it a really bad port.


I mean, potentially? But, if it was truly a jank port, I'd imagine RTX 3070s and RTX 3080s wouldn't scale in performance on this title either.


They seem to scale pretty high in this title.
 

SlimySnake

Flashless at the Golden Globes
2080 is 10.07 FP32.

He should really compare it to a 6600xt.

EDIT: I think it's nuts how when this game came out in 2019 I thought it looked so real. Looks cartoony now.
Nah, the 2080 is more like 11.4. Nvidia underreported their average clocks, so the 10.07 number is based on their 1.7 GHz figure, whereas in game I see 1.95-2.0 GHz clock speeds without any overclocking.
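A quick back-of-the-envelope check of that claim, assuming the 2080's 2944 CUDA cores (the clocks are the ones cited above):

```python
# FP32 TFLOPS = 2 ops per FMA * shader count * clock in GHz / 1000
cores = 2944  # RTX 2080 CUDA core count

print(2 * cores * 1.71 / 1000)  # ~10.07 TFLOPS at Nvidia's rated boost clock
print(2 * cores * 1.95 / 1000)  # ~11.48 TFLOPS at the ~1.95 GHz seen in game
```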
 

winjer

Gold Member
I mean, potentially? But, if it was truly a jank port, I'd imagine RTX 3070s and RTX 3080s wouldn't scale in performance on this title either.


They seem to scale pretty high in this title.

Not a jank port. But it leaves something to be desired if it has issues with basic stuff like v-sync.
 
And would completely invalidate the entire point of the video.

You've missed the point.
Based on the title...what's the point then?

Because when it says Director's Cut PC port vs. PS5 GRAPHICS BREAKDOWN... you'd think they'd actually take a look at the tech of each platform and actually compare, I dunno, the graphics and how they break down...

But by all means, keep on crusading for whatever... point... you're driving at.
 
Last edited:

ChiefDada

Gold Member
This guy still doesn't understand how the CPU and GPU clocks work in the PS5. Jesus. Cerny never said that a less stressed CPU means max clock on the GPU. Both can run at max frequency as long as the workload doesn't exceed the power budget. The power shift only happens when the CPU is running at max clocks with some extra juice to spare, so that the GPU can squeeze a few more pixels.

At this point, it's impossible to believe he doesn't know he's being misleading. Cerny explicitly stated they designed the PS5 under the assumption that AVX 256 code would be used very often. The majority of PS5 games will never see the GPU downclock from 2.23 GHz.
 
Because the 10900k runs the game at close to 200 FPS when not GPU limited. Even a 3600 would remain well over the 60 FPS cap here. It's not really going to impact the benchmark numbers.

For example: https://overclock3d.net/reviews/sof...c_performance_review_and_optimisation_guide/4
Why use a 10900k when he can use another CPU then? He wants to remove the possibility of a CPU bottleneck, I get it, but what if the PS5 is CPU limited then? How can he know the PS5 is not CPU limited?
 
Last edited:

SlimySnake

Flashless at the Golden Globes
Shit port then, okay
Not the best port obviously but there is no denying that this is seriously impressive.

I still need to pick it up, and I’ll be honest. Out of sheer respect I’m getting it on ps5. Will be even better once that VRR patch drops.

Will pick up a PS4 disc version and pay the upgrade fee.
Death Stranding's PC port was hailed as one of the best PC ports. The PS5 version basically used the PC settings, and this version simply ports the new content back to PC. At launch back in 2020, people were running this game at 4K 120 fps using DLSS. It is NOT a shit port. That's nonsense.

And unlike GOW and Horizon, this port ran well on older cards as well as newer cards. This is exactly how we would expect games to scale from a GCN card to an RDNA card: a 50% IPC gain. A 1.84 TFLOPS GPU running it at native 1080p 30 fps vs a 10 TFLOPS RDNA 2.0 GPU running it at almost 60 fps at native 4K.

The pixel budget of a native 4K 60 fps game is exactly 8x that of a 1080p 30 fps game, which is very close to the TFLOPS difference between the 1.84 TFLOPS PS4 GPU and the ~15 GCN 1.0 equivalent TFLOPS of the PS5 GPU, once the IPC gains from GCN 1.0 to Polaris and then from Polaris to RDNA are taken into account.

If anything, this game is one of the few that takes full advantage of the PS5's raw horsepower by giving us a 7-8x increase in pixel budget, whereas games like Horizon, Guardians, and pretty much every other game with a native 4K 30 fps mode offer just a 4x increase in pixels compared to the PS4 versions.
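The 8x pixel-budget figure is easy to verify (pixels per second = width x height x fps). Here's a minimal sketch of that check; note the ~50% combined GCN-to-RDNA IPC uplift folded in at the end is the poster's assumption, not a measured number:

```python
# Pixels pushed per second at each target
ps4_rate = 1920 * 1080 * 30   # native 1080p30 on the base PS4
ps5_rate = 3840 * 2160 * 60   # native 4K60 on PS5
print(ps5_rate / ps4_rate)    # 8.0x the pixel budget

# TFLOPS side of the comparison: express the PS5's ~10.28 RDNA 2 TFLOPS in
# "GCN 1.0 equivalent" terms via the assumed ~50% combined IPC uplift.
ps4_tflops = 1.84
ps5_gcn_equivalent = 10.28 * 1.5
print(ps5_gcn_equivalent / ps4_tflops)  # ~8.4x, close to the 8x pixel budget
```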
 
Last edited:

Keihart

Member
Well, I meant that it runs like shit on PC. And... I guess that's not the point of the video.
Ah ok. Runs alright on mine, but I guess it could run better on older rigs.
It still runs alright on my brother's PC with a 1060, it certainly looks better than the PS4 version, and I can max it on mine no biggie.
No way you are going to be running this maxed out on a 600 bucks PC tho.
 

adamsapple

Or is it just one of Phil's balls in my throat?
2080 is 10.07 FP32.

He should really compare it to a 6600xt.

EDIT: I think it's nuts how when this game came out in 2019 I thought it looked so real. Looks cartoony now.

The character close-ups still look pretty good, but once the BTs start appearing, it loses that aspect.
 

SlimySnake

Flashless at the Golden Globes
lol his own comparisons have shown this since launch, when AC Valhalla outperformed even a 2080 Super. We have seen the PS5 outperform the 2080 time and time again over the last year and a half. It has outperformed the XSX many times, and MS themselves told DF that the XSX GPU was equivalent to the RTX 2080 in Gears 5 benchmarks. At this point, who still has unrealistic expectations?

I mean besides Alex.
 

Lysandros

Member
The RTX 2060 has 69.8% of the theoretical Teraflops of the PS5, and achieves 71.2% of the performance of the PS5, etc. It basically is good for showing how well TF relates/scales to performance, and in that case, it's nearly a 1:1 scaling.
It is to strengthen the Teraflops=performance narrative/fixation then. So what's going on with the 5700 XT?...
 

Lysandros

Member
This guy still doesn't understand how the CPU and GPU clocks work in the PS5. Jesus. Cerny never said that a less stressed CPU means max clock on the GPU. Both can run at max frequency as long as the workload doesn't exceed the power budget. The power shift only happens when the CPU is running at max clocks with some extra juice to spare, so that the GPU can squeeze a few more pixels.
This is the guy who refused to believe that the PS5 had any kind of RT hardware after the literal spec reveal; one can expect only this much from him. At least his ignorance is consistent.
 
lol his own comparisons have shown this since launch, when AC Valhalla outperformed even a 2080 Super. We have seen the PS5 outperform the 2080 time and time again over the last year and a half. It has outperformed the XSX many times, and MS themselves told DF that the XSX GPU was equivalent to the RTX 2080 in Gears 5 benchmarks. At this point, who still has unrealistic expectations?

I mean besides Alex.
Alex is a well known PS5 hater in disguise.🤷🏼‍♂️
 
It is to strengthen the Teraflops=performance narrative/fixation then. So what's going on with the 5700 XT?...

I am not 100% sure, maybe? It seems you can make a correlation between TFLOPS/performance ratios for some cards but not others (5700 XT), for what I can only assume are architectural inefficiencies?
 
It does in some games, especially when RT is involved.
Why use a 10900k when he can use another CPU then? He wants to remove the possibility of a CPU bottleneck, I get it, but what if the PS5 is CPU limited then? How can he know the PS5 is not CPU limited?
At lower resolutions it gets higher frame rates. It's quite simple. Not sure what you can't follow.
This place's bans have no logic at all.

Maybe they take into account the avatar or the username... well, maybe the mood of the mod when he reads the post... I don't know.
Victims of rape often don't like the word rape being trivialized. Judging by the other bans, this ban makes perfect sense.
 

winjer

Gold Member

Truth be told, the biggest bottleneck in PC gaming is Microsoft.
SSDs have been mainstream on the PC for over a decade now, and NVMe drives have been mainstream for over half a decade.
RTX IO has been available since Turing, released 3 years ago.
And despite all this, Microsoft has only released a new API for storage this month, March 2022.

Microsoft, and especially the Windows team, is constantly screwing around with the UI, adding features nobody asked for, and adding bloatware and spyware.
But doing important things, like replacing a decades-old storage system? For them, that's not a priority.
 

rofif

Can’t Git Gud
Truth be told, the biggest bottleneck in PC gaming is Microsoft.
SSDs have been mainstream on the PC for over a decade now, and NVMe drives have been mainstream for over half a decade.
RTX IO has been available since Turing, released 3 years ago.
And despite all this, Microsoft has only released a new API for storage this month, March 2022.

Microsoft, and especially the Windows team, is constantly screwing around with the UI, adding features nobody asked for, and adding bloatware and spyware.
But doing important things, like replacing a decades-old storage system? For them, that's not a priority.
Yep. For the last 10 years we could just as well have used normal run-of-the-mill 500 MB/s SSDs. No gaming difference compared to the fastest NVMe drives.
 
Death Stranding's original port was fantastic. It ran amazingly well at ultrawide on my 1080 Ti. I expect this Director's Cut to be no different; I'd just like to see the DLSS performance now that I have a 3080 Ti.

Some of y'all are taking the criticisms of this video like it's a personal attack on the PS5, when the console has literally nothing to do with why some of us are confused by the lack of major features being covered in this video.
 

Lysandros

Member
I am not 100% sure, maybe? It seems you can make a correlation between TFLOPS/performance ratios for some cards but not others (5700 XT), for what I can only assume are architectural inefficiencies?
Why not look at the GPUs as a whole, with all the relevant throughputs, architecture and speed differences, to have a deeper understanding instead? Why must it all come down to one single GPU metric?
 

SlimySnake

Flashless at the Golden Globes
It is to strengthen Teraflop=Performance narrative/fixation then. So what's going on with 5700 XT?...
His numbers are all wrong. He is using the 9.75 theoretical TFLOPS number AMD released for the 5700 XT, which is based on clocks it does not hit regularly on non-OC cards.

In this video, we can see it hovers around 1,800 MHz to 1,860 MHz, mostly averaging 1.84 GHz. That's 9.4 TFLOPS. Their advertised in-game frequency is even lower at 1.75 GHz, which puts it at around 9.1 TFLOPS, which is basically what the launch benchmarks showed.




The 2080 is also not a 10.07 TFLOPS GPU. I can't believe he would make this rookie mistake. Everyone knows Nvidia underreported their RTX GPU clocks for some reason. My factory-clocked RTX 2080 sits at 1.95 GHz, and in some games like Elden Ring it sits at 2,010 MHz.

This is the only video I could find of a 2080 running Death Stranding, but it hovers around 2,040 MHz on a GPU overclocked by 100 MHz. Even if we remove his overclock and use the 1.95 GHz I get on my 2080, that's 11.4 TFLOPS. 11% more powerful than the PS5 while offering only 3% worse performance.



Even the 2070 Super, which he says is only 88% of the PS5's TFLOPS, runs at a constant 1.965 GHz in this video, which is 10.06 TFLOPS, or 98.3% of the PS5's TFLOPS. Basically the number he gave for the 2080.



Clock speeds can differ from card to card, so all he had to do was enable the clock speed readout on the benchmarks like every other PC benchmark YouTuber out there, but it would mess with his narrative, so fuck him.
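To put numbers on the advertised-vs-observed clock gap being described, here's a minimal sketch; the shader counts are spec-sheet values I'm assuming, and the "observed" clocks are the ones cited in this post:

```python
def tflops(shaders, clock_ghz):
    # FP32 TFLOPS = 2 ops per FMA * shader count * clock in GHz / 1000
    return 2 * shaders * clock_ghz / 1000

ps5 = tflops(2304, 2.23)  # ~10.28 TFLOPS

# name: (shader count, advertised boost GHz, observed in-game GHz per the post)
cards = {
    "RX 5700 XT":     (2560, 1.905, 1.840),
    "RTX 2080":       (2944, 1.710, 1.950),
    "RTX 2070 Super": (2560, 1.770, 1.965),
}

for name, (shaders, adv, obs) in cards.items():
    print(f"{name:15s} advertised {tflops(shaders, adv):5.2f} TF"
          f"  observed {tflops(shaders, obs):5.2f} TF"
          f"  ({tflops(shaders, obs) / ps5 * 100:.0f}% of PS5)")
```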
 

winjer

Gold Member
Yep. For the last 10 years we could just as well have used normal run-of-the-mill 500 MB/s SSDs. No gaming difference compared to the fastest NVMe drives.

Maybe not. From the first numbers we see from DirectStorage in Forspoken, even a SATA SSD might be bottlenecked by Windows' old file system.

But still, even if we ignore SATA SSDs, Microsoft is late with DirectStorage by over half a decade.
Sony is not a software company, but they already had a new file system capable of using NVMe SSDs to a high degree. The PS5 was released 1.5 years ago.
Microsoft, the world's biggest software company, can't even keep pace.

And then there is the stutter with DirectX 12.
Vulkan already has extensions to reduce stutter from shader compilation.
Reducing Draw Time Hitching with VK_EXT_graphics_pipeline_library

And Linux is making strong strides to improve performance and reduce stutter, in some games already surpassing Windows by a mile.
 
Last edited:

Midn1ght

Member

Clearly not bad for a $400/$500 machine.

Hopefully we can put the PS5=2060S claim to rest, but also call out devs when they shit the bed with poor optimization on both sides.

Would be curious to see a similar comparison with a well-optimized game on both platforms, but with RT on this time.
 
Why not look at the GPUs at a whole level with all the relevant throughputs, architecture and speed differences to have a deeper understanding instead? Why it all must come down to one signle GPU metric?

Probably to make it easier for the layman to follow the video, I'd imagine. They want their clicks! lol.
 