
[Digital Foundry] Death Stranding Director's Cut: PC vs PS5 Graphics Breakdown

House Reaction GIF

Clearly not bad for a $400/$500 machine.

Hopefully we can put the PS5 = 2060S comparison to rest, but also call out devs when they shit the bed with poor optimization on both sides.

Would be curious to see a similar comparison with a well-optimized game on both platforms, but with RT on this time.
You might have to put the 2060S back into contention, haha.
 

SlimySnake

Flashless at the Golden Globes
Truth be told, the biggest bottleneck in PC gaming is Microsoft.
SSDs have been mainstream on PC for over a decade now, and NVMe drives have been mainstream for over half a decade.
RTX IO has been available since Turing, released three years ago.
And despite all this, Microsoft only released a new storage API this month, March 2022.

Microsoft, and especially the Windows team, is constantly screwing around with the UI, adding features nobody asked for, and adding bloatware and spyware.
But doing important things, like replacing a decades-old storage system? For them, that's not a priority.
Maybe, but I have been using a 7.0 GB/s SSD since last August and this thing loads just as fast as the PS5 in several games. Horizon Zero Dawn on my PC has fast travel load times of around 2-3 seconds, just like the PS5's Horizon Forbidden West. Elden Ring's fast travel is just as fast: 2-3 seconds, or what feels like 2-3 seconds. Cyberpunk's initial load is so fast I don't even know when it stops loading. I think having fast DDR4 RAM and a fast 5.0 GHz CPU helps just as much as a fast PCIe Gen 4 SSD.

I think this game loading in 9 seconds vs 4.5 seconds on the PS5 isn't a big deal, but yes, MS could be doing better. I don't really care for how they have handled PC gaming.
 

rofif

Can’t Git Gud
Maybe, but I have been using a 7.0 GB/s SSD since last August and this thing loads just as fast as the PS5 in several games. Horizon Zero Dawn on my PC has fast travel load times of around 2-3 seconds, just like the PS5's Horizon Forbidden West. Elden Ring's fast travel is just as fast: 2-3 seconds, or what feels like 2-3 seconds. Cyberpunk's initial load is so fast I don't even know when it stops loading. I think having fast DDR4 RAM and a fast 5.0 GHz CPU helps just as much as a fast PCIe Gen 4 SSD.

I think this game loading in 9 seconds vs 4.5 seconds on the PS5 isn't a big deal, but yes, MS could be doing better. I don't really care for how they have handled PC gaming.
There is nothing your PC does differently from any other PC. There is still no direct I/O.
Most games just don't utilize the PS5's direct I/O and thus will load the same as, or slower than, on PC.
It would be most interesting if we got Returnal or Demon's Souls on PC. But Death Stranding native on PS5 is a pretty good example.
Uncharted 4 will be interesting too. The PS5 version does not even have loading screens anymore.
 

winjer

Gold Member
Maybe, but I have been using a 7.0 GB/s SSD since last August and this thing loads just as fast as the PS5 in several games. Horizon Zero Dawn on my PC has fast travel load times of around 2-3 seconds, just like the PS5's Horizon Forbidden West. Elden Ring's fast travel is just as fast: 2-3 seconds, or what feels like 2-3 seconds. Cyberpunk's initial load is so fast I don't even know when it stops loading. I think having fast DDR4 RAM and a fast 5.0 GHz CPU helps just as much as a fast PCIe Gen 4 SSD.

I think this game loading in 9 seconds vs 4.5 seconds on the PS5 isn't a big deal, but yes, MS could be doing better. I don't really care for how they have handled PC gaming.

Let's be honest here. Horizon Zero Dawn has much less detail to load than Horizon Forbidden West. And still, it loads just as fast, and on a high-end PC with an SSD that is 1.5 GB/s faster than the PS5's.
This is a good example of how bottlenecked things are on the PC.
 

SlimySnake

Flashless at the Golden Globes
He literally points out the PS5 is not CPU limited, as it's a locked 60 FPS when he drops the resolution to 1800p.
I think the point is that if the game isn't CPU bound, then why not use a CPU equivalent to the PS5's CPU when all you are doing is comparing GPUs? Just for peace of mind. Just to keep things more or less equal.

The PS5 GPU is already at a disadvantage by having to share its memory bandwidth with the CPU, something the RTX and RDNA 1.0 cards don't have to do. Then the PS5 is also short one entire core, whereas PC CPUs use all cores and threads when running games.

If he's trying to make things even in order to do this GPU, and I stress again, GPU, benchmark, then he should not be using anything that might potentially mess with the results. If anything, he should be using the shitty Ryzen 2700 CPU NX Gamer loves to use, because benchmarks of the leaked PS5 APU have shown it to be roughly the same as the 2700, which is notoriously bad when running cross-gen games that are not CPU bound. That way we will get a more accurate picture of the PS5 GPU.
 

Lysandros

Member
Probably to make it easier for the layman to follow the video, I'd imagine; they want their clicks! lol
Be it for the sake of teraflop supremacy or clicks, the layman is left with only conspiracy theories when this unreliable number fails to explain the performance profiles of the two machines across games. The leading console/PC tech channel should do a better job of instructing its followers, I think.
 

ethomaz

Banned
Some overhaul of how Windows manages I/O was needed... DirectStorage came, even if it came late.
But looking at the Forspoken results, it seems that even with DirectStorage increasing access to I/O bandwidth, it is still making very little difference.
I believe the next step is really to put a specialized decompressor in place so that DirectStorage works as intended.

I/O is not the bottleneck anymore on PC with NVMe and DirectStorage.
The issue now is how fast you can decompress assets... the higher I/O bandwidth is not being used.
 

DeepEnigma

Gold Member
Some overhaul of how Windows manages I/O was needed... DirectStorage came, even if it came late.
But looking at the Forspoken results, it seems that even with DirectStorage increasing access to I/O bandwidth, it is still making very little difference.
I believe the next step is really to put a specialized decompressor in place so that DirectStorage works as intended.
Sounds familiar
 
His numbers are all wrong. He is using the 9.75 theoretical TFLOPs number AMD released for the 5700 XT, which is based on clocks it does not regularly hit on non-OC cards.

In this video, we can see it hovers around 1,800 MHz to 1,860 MHz, mostly averaging 1.84 GHz. That's 9.4 TFLOPs. AMD's advertised in-game frequency is even lower, at 1.75 GHz, which puts it at around 9.1 TFLOPs, which is basically what the launch benchmarks showed.




The 2080 is also not a 10.07 TFLOPs GPU. I can't believe he would make this rookie mistake. Everyone knows Nvidia under-reports their RTX GPU clocks for some reason. My factory-clocked RTX 2080 sits at 1.95 GHz, and in some games, like Elden Ring, it sits at 2,010 MHz.

This is the only video I could find of a 2080 running Death Stranding, but it hovers around 2,040 MHz on a GPU overclocked by 100 MHz. Even if we remove his overclock and use the 1.95 GHz I get on my 2080, that's 11.4 TFLOPs: 11% more powerful than the PS5 while offering only 3% worse performance.



Even the 2070 Super, which he says is only 88% of the PS5's TFLOPs, runs at a constant 1.965 GHz in this video, which is 10.06 TFLOPs, or 98.3% of the PS5's TFLOPs. Basically the number he gave for the 2080.



Clock speeds can differ from card to card, so all he had to do was show the clock speeds on the benchmarks like every other PC benchmark YouTuber out there, but it would mess with his narrative, so fuck him.

Excellent post.
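For reference, the TFLOP figures being argued over above all come from the same simple formula: FP32 TFLOPs = 2 (FMA ops per clock) x shader count x clock in GHz. A quick sketch below, using the public spec-sheet shader counts and the clocks cited in the quote, so the outputs are only as good as those clock estimates:

```python
# FP32 TFLOPs = 2 ops per shader per clock (FMA) x shader count x clock (GHz) / 1000
def tflops(shaders: int, clock_ghz: float) -> float:
    return 2 * shaders * clock_ghz / 1000

# Shader counts are the public spec-sheet values; the clocks are the
# observed/claimed figures from the post above, so treat the results as estimates.
cards = {
    "RX 5700 XT @ 1.905 GHz (AMD boost spec)": (2560, 1.905),  # ~9.75 TFLOPs
    "RX 5700 XT @ 1.84 GHz (observed)":        (2560, 1.84),   # ~9.4 TFLOPs
    "RTX 2080 @ 1.95 GHz (observed)":          (2944, 1.95),   # ~11.5 TFLOPs
    "RTX 2070 Super @ 1.965 GHz (observed)":   (2560, 1.965),  # ~10.1 TFLOPs
    "PS5 @ 2.23 GHz (36 CUs, 2304 shaders)":   (2304, 2.23),   # ~10.3 TFLOPs
}

for name, (shaders, clock) in cards.items():
    print(f"{name}: {tflops(shaders, clock):.2f} TFLOPs")
```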
 

ethomaz

Banned
Sounds familiar
Yeah, but from these early tests it seems that's really the issue now.
See the tests... I/O access almost doubled but the loading times stayed basically the same.
Weird to see some side effects too... SATA has little to no gain with DirectStorage, and HDD runs worse with it.

[attached image: dsbench.jpg]
 
Be it for the sake of teraflop supremacy or clicks, the layman is left with only conspiracy theories when this unreliable number fails to explain the performance profiles of the two machines across games. The leading console/PC tech channel should do a better job of instructing its followers, I think.

Yeah, I agree, as it's not a complete snapshot of the entire game or a large section of unique, differing scenes, which would allow you to build out a much larger, complete performance "picture". But it was interesting nonetheless to see the PS5 hold more frames in a like-for-like scene against some decent desktop graphics cards.
 
This guy still doesn't understand how the CPU and GPU clocks work on the PS5. Jesus. Cerny never said that a less stressed CPU means max clock on the GPU. Both can run at max frequency as long as the workload doesn't exceed the power budget. The power shift only happens when the CPU is running at max clocks with some extra juice to spare, so that the GPU can squeeze a few more pixels.
"There's enough power that both CPU and GPU can potentially run at their limits of 3.5GHz and 2.23GHz, it isn't the case that the developer has to choose to run one of them slower."


The funny part is that this is from a DF article. He doesn't even know the text of his own work.
 

Zathalus

Member
I think the point is that if the game isn't CPU bound, then why not use a CPU equivalent to the PS5's CPU when all you are doing is comparing GPUs? Just for peace of mind. Just to keep things more or less equal.

The PS5 GPU is already at a disadvantage by having to share its memory bandwidth with the CPU, something the RTX and RDNA 1.0 cards don't have to do. Then the PS5 is also short one entire core, whereas PC CPUs use all cores and threads when running games.

If he's trying to make things even in order to do this GPU, and I stress again, GPU, benchmark, then he should not be using anything that might potentially mess with the results. If anything, he should be using the shitty Ryzen 2700 CPU NX Gamer loves to use, because benchmarks of the leaked PS5 APU have shown it to be roughly the same as the 2700, which is notoriously bad when running cross-gen games that are not CPU bound. That way we will get a more accurate picture of the PS5 GPU.
Because it literally doesn't matter. The PS5 is not CPU bound, as evidenced by the 1800p results, and an ancient 4c/8t Haswell CPU can run this game north of 100 FPS. Sure, he could have swapped in something like a 3600, but it would have zero impact on the results.

As for the shared bandwidth and CPU core issue, once again that does not matter for the results of this test. The console is always going to have that bandwidth deficiency; here he is just comparing to what the equivalent on PC would be. What would the point be of cutting down the GPU's memory bandwidth by, say, 20%? Nobody uses a card like that.
 

Lysandros

Member
Some overhaul of how Windows manages I/O was needed... DirectStorage came, even if it came late.
But looking at the Forspoken results, it seems that even with DirectStorage increasing access to I/O bandwidth, it is still making very little difference.
I believe the next step is really to put a specialized decompressor in place so that DirectStorage works as intended.

I/O is not the bottleneck anymore on PC with NVMe and DirectStorage.
The issue now is how fast you can decompress assets... the higher I/O bandwidth is not being used.
I agree, what was the difference for Forspoken with DirectStorage, about 15% faster?
 

SlimySnake

Flashless at the Golden Globes
Nah, there are clearly games which are really optimized for PS5, and that's great, but the PS5 is certainly not at 2080 level, let's be real here. But hey, it runs well on PS5, so cool.
What makes you say this? Why would it not be?

We are talking about standard rasterization here, not ray tracing performance. There have been several games that show the PS5 performing better than the 2080, or roughly on par with it. Which makes sense, considering the TFLOPs difference between the two is only 11%. So in AMD-sponsored games like AC Valhalla, we expect the PS5 to perform better. RDNA and RTX TFLOPs are roughly identical, as proven time and time again by 2070 and 5700 XT comparisons.

This really isn't a surprising result. Even Microsoft did a Gears 5 benchmark and found that the XSX performed the same as the RTX 2080, and we have seen the PS5 keep up with the XSX in several games. So again, what makes you so certain that the PS5 isn't at 2080 level in standard rasterization?
 
I think the point is that if the game isn't CPU bound, then why not use a CPU equivalent to the PS5's CPU when all you are doing is comparing GPUs? Just for peace of mind. Just to keep things more or less equal.

The PS5 GPU is already at a disadvantage by having to share its memory bandwidth with the CPU, something the RTX and RDNA 1.0 cards don't have to do. Then the PS5 is also short one entire core, whereas PC CPUs use all cores and threads when running games.

If he's trying to make things even in order to do this GPU, and I stress again, GPU, benchmark, then he should not be using anything that might potentially mess with the results. If anything, he should be using the shitty Ryzen 2700 CPU NX Gamer loves to use, because benchmarks of the leaked PS5 APU have shown it to be roughly the same as the 2700, which is notoriously bad when running cross-gen games that are not CPU bound. That way we will get a more accurate picture of the PS5 GPU.
If you are CPU bound you can't get an accurate GPU benchmark. That's the point. You just want to gimp the PC for some reason. You do realize the PC runs an OS as well, one that is known to be pretty shit, right?
 

ethomaz

Banned
Bandwidth means nothing. 15% faster actual loading is going to be unnoticeable compared to the PS5's 150% faster loading.
Read my previous post.
I'm not saying whether it is good or bad, or even comparing it with other systems.

It's just that DirectStorage increases the I/O bandwidth by nearly 100%, but the results stay basically the same.
That means the issue is not bandwidth but the time the CPU takes to decompress the assets.
For Forspoken it seems like loading times only improve up to about 3 GB/s of I/O bandwidth... after that there is no gain because there is another bottleneck.

If you don't reduce that time, DirectStorage will probably mean nothing.

But people can be happy... decompression via GPU or co-processor is coming... it is the natural step for PC.
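To put the same point in back-of-envelope terms (purely illustrative numbers, not measured from Forspoken or any real title): once decompression is the slower stage of a load, doubling the raw I/O bandwidth barely moves the total time.

```python
# Toy model of a load: read compressed data from the SSD, decompress it on the CPU.
# All figures below are made up for illustration.

def load_time(data_gb, read_gbps, decompress_gbps, overlapped=True):
    read = data_gb / read_gbps            # seconds spent reading from the SSD
    decomp = data_gb / decompress_gbps    # seconds spent decompressing on the CPU
    # with overlapped/async I/O the slower stage dominates;
    # a naive serial loader pays for both
    return max(read, decomp) if overlapped else read + decomp

data_gb = 10.0             # compressed assets to load (made up)
cpu_decompress_gbps = 2.0  # CPU decompression throughput (made up)

for read_gbps in (3.5, 7.0):  # "before" vs "DirectStorage doubles effective I/O"
    print(f"read {read_gbps} GB/s -> ~{load_time(data_gb, read_gbps, cpu_decompress_gbps):.1f} s")
# Both cases land at ~5.0 s: the CPU decompression stage is the bottleneck,
# so the extra I/O bandwidth goes unused.
```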
 

S0ULZB0URNE

Member
Nah, there are clearly games which are really optimized for PS5, and that's great, but the PS5 is certainly not at 2080 level, let's be real here. But hey, it runs well on PS5, so cool.
Actually, it was made for the ancient PS4.
If it had been made for the PS5, the load times and other optimizations would be even better.
 

M1chl

Currently Gif and Meme Champion
What makes you say this? Why would it not be?

We are talking about standard rasterization here, not ray tracing performance. There have been several games that show the PS5 performing better than the 2080, or roughly on par with it. Which makes sense, considering the TFLOPs difference between the two is only 11%. So in AMD-sponsored games like AC Valhalla, we expect the PS5 to perform better. RDNA and RTX TFLOPs are roughly identical, as proven time and time again by 2070 and 5700 XT comparisons.

This really isn't a surprising result. Even Microsoft did a Gears 5 benchmark and found that the XSX performed the same as the RTX 2080, and we have seen the PS5 keep up with the XSX in several games. So again, what makes you so certain that the PS5 isn't at 2080 level in standard rasterization?
You can't really equate AMD and Nvidia flops in a direct comparison, but it's clear that in many games the performance falls way short of an RTX 2080, so I think that's a fair call on my side. And no, not even the XSX is on that level, obviously. Also, wasn't HZD trash on PC as well? I think that porting PS-only code to PC requires quite a lot of work and re-engineering (especially for DX; Vulkan less so), not to mention that the CPU side of things is going to be quite a bit different, because you have to use a different build for physics and so on. I commend the job on the PS5 side, don't get me wrong. Especially having the widescreen mode, which is chef's kiss. However, that does not detract from the shortcomings. It really does not make sense to re-invent the wheel when it's good enough, and especially not when it has DLSS (Radeon fanboys be fucked). I know the same thing happens with AC: Valhalla, where the RTX 2080 Ti falls short of the 5700 XT, and I am sure you understand that something is not quite right there.

Actually, it was made for the ancient PS4.
If it had been made for the PS5, the load times and other optimizations would be even better.
I want to specify that I thought the discussion was about the PC version, not about how well the PS5 port went. I admit, for once I didn't watch the video.
 

Md Ray

Member
He could have just disabled v-sync.
Enabling V-sync incurs a performance penalty on both machines (PS5 and PC). If he'd disabled V-sync on PC, then it wouldn't be a 1:1 comparison against the PS5 in terms of the V-sync setup, and it would give the PC an unfair advantage.
Maybe not. From the first numbers we see from DirectStorage on Forspoken, even a SATA SSD might be bottlenecked by Windows' old file system.

But still, even if we ignore SATA SSDs, Microsoft is late with DirectStorage by over half a decade.
Sony is not a software company, but they already had a new file system capable of using NVMe SSDs to a high degree. The PS5 was released 1.5 years ago.
Microsoft, the world's biggest software company, can't even keep pace.

And then there is the stutter with DirectX 12.
Vulkan already has extensions to reduce stutter from shader compilation:
Reducing Draw Time Hitching with VK_EXT_graphics_pipeline_library

And Linux is making strong strides to improve performance and reduce stutter, in some games already surpassing Windows by a mile.

It's not just software/a new file system at work here (on the PS5). It also has a dedicated hardware decompression block built into the main SoC that helps reduce load times and stream assets instantly. Hopefully, MS lets PCs use GPU decompression (RTX IO) soon, which is slated to arrive with the next iteration of DirectStorage.
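Rough arithmetic on why that hardware block (or GPU decompression) is the interesting part, using the publicly quoted "Road to PS5" figures purely as an illustration:

```python
# Publicly quoted PS5 figures from the "Road to PS5" talk, used only as a rough illustration.
raw_ssd_gbps = 5.5         # raw SSD bandwidth
typical_output_gbps = 9.0  # typical decompressed output after Kraken (8-9 GB/s quoted)
zen2_core_equiv = 9        # Cerny's estimate of Zen 2 cores needed to match the decompressor

per_core_gbps = typical_output_gbps / zen2_core_equiv
print(f"~{per_core_gbps:.1f} GB/s of decompressed output per Zen 2 core")
# => roughly 1 GB/s per core. Feeding a 7 GB/s NVMe drive through CPU-side
# decompression would eat a similar pile of cores just for loading, which is
# why offloading decompression (fixed-function block or GPU) is the real win.
```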
 
Read my previous post.
I'm not saying whether it is good or bad, or even comparing it with other systems.

It's just that DirectStorage increases the I/O bandwidth by nearly 100%, but the results stay basically the same.
That means the issue is not bandwidth but the time the CPU takes to decompress the assets.
For Forspoken it seems like loading times only improve up to about 3 GB/s of I/O bandwidth... after that there is no gain because there is another bottleneck.

If you don't reduce that time, DirectStorage will probably mean nothing.

But people can be happy... decompression via GPU or co-processor is coming... it is the natural step for PC.
That means there are other bottlenecks. Maybe the CPU, probably 10 other things.
 

rodrigolfp

Haptic Gamepads 4 Life

lol, his own comparisons have shown this since launch, when AC Valhalla outperformed even a 2080 Super. We have seen the PS5 outperform the 2080 time and time again over the last year and a half. It has outperformed the XSX many times, and MS themselves told DF that the XSX GPU was equivalent to the RTX 2080 in Gears 5 benchmarks. At this point, who still has unrealistic expectations?

I mean besides Alex.
It's about RT performance.
 

KungFucius

King Snowflake
It still amazes me how stout the gen 9 consoles are; the gap was MASSIVE in gen 7 and gen 8, and now, not so much. Obviously, someone is going to chime in with an (insert whatever argument about the RTX 3090 demolishing gen 9 consoles).
Why do you even do this? They compare the console with previous-gen GPUs, and then you think people who would rightfully and obviously claim that a 3090 would smoke it are not getting it. Why should the consoles be compared to cards that were available two years before they came out? They should be compared to the 3000/6000 series cards, because that hardware all released within weeks of each other. This year we will get more powerful GPUs, and again in 2024.
 

rsouzadk

Member
Yeah, but from these early tests it seems that's really the issue now.
See the tests... I/O access almost doubled but the loading times stayed basically the same.
Weird to see some side effects too... SATA has little to no gain with DirectStorage, and HDD runs worse with it.

[attached image: dsbench.jpg]

That is because I/O speed is not the bottleneck anymore; it is the decompression and asset initialization, as the devs from Forspoken addressed with this benchmark.
 

MikeM

Member
I think the point is that if the game isn't CPU bound, then why not use a CPU equivalent to the PS5's CPU when all you are doing is comparing GPUs? Just for peace of mind. Just to keep things more or less equal.

The PS5 GPU is already at a disadvantage by having to share its memory bandwidth with the CPU, something the RTX and RDNA 1.0 cards don't have to do. Then the PS5 is also short one entire core, whereas PC CPUs use all cores and threads when running games.

If he's trying to make things even in order to do this GPU, and I stress again, GPU, benchmark, then he should not be using anything that might potentially mess with the results. If anything, he should be using the shitty Ryzen 2700 CPU NX Gamer loves to use, because benchmarks of the leaked PS5 APU have shown it to be roughly the same as the 2700, which is notoriously bad when running cross-gen games that are not CPU bound. That way we will get a more accurate picture of the PS5 GPU.
CPU bound on the PC? He showed it running at 180+ FPS in 720p mode, didn't he?
 
Why do you even do this? They compare the console with previous-gen GPUs, and then you think people who would rightfully and obviously claim that a 3090 would smoke it are not getting it. Why should the consoles be compared to cards that were available two years before they came out? They should be compared to the 3000/6000 series cards, because that hardware all released within weeks of each other. This year we will get more powerful GPUs, and again in 2024.

It's a fun comparison to make? And they seriously aren't getting it. The RTX 3090 is $1,499 MSRP alone, and most are still north of $2K. To me, and to everyone else, it's downright impressive that $399-$499 consoles are on par with, or exceed, RTX 2080s. How can you not be impressed by that? Besides, the RTX 2080 matches or beats an RTX 3060 in 1080p gaming, and once the resolution moves to 1440p and on to 4K, the RTX 2080's lead widens. So sure, maybe Alex could have thrown in an RTX 3060, or a 6600 XT, or something along those lines.
 

PaintTinJr

Member
This right here clearly shows the limitation of the Win32 API, indicating why the PC needs DirectStorage to take full advantage of NVMe drives.
[attached image: Id6Qdzt.png]
With slow load times like that all round, it looks like all the decompression is being done on CPU cores even on the PS5, since it's running a PS4 port, much like HFW, etc.

That's probably the main reason Alex used the Core i9: the load times on a CPU that actually compares to the PS5 APU's CPU specs would probably double or triple those loading times on PC.
 

Neo_game

Member
Personally, I do not think the PC vs console comparison is fair. But this is one of the rare games that makes the PS5 look great. Overall, though, console performance is usually lower than expected. With most games now scaling across all formats, I am not sure console performance will stay this good. It needs individual attention, which now seems to be a thing of the past.
 

PaintTinJr

Member
I understand Alex's reasoning for not spending more time comparing the visuals of the PS5 and PC versions, as he said John's previous video covered that. John's video doesn't tell the whole story, though. That video was comparing the PS5 Director's Cut release with the original PC release. Alex has assumed a bit too much here that PS5 quality mode simply equals the highest PC settings. John's video pretty explicitly said that KojiPro had reworked the environment detail a bit in the DC, so Alex should've checked whether that was also the case in the DC on PC. It seems from other comparisons that the PC has further-reaching LOD settings than the PS5.

[attached image: lodnyj4b.png]


The same video also shows a Corsair MP600 loading the game quicker than the PS5, while the Steam Deck's loading time is more in line with DF's results, so who to believe here? Right now Alex's video feels half complete because he's only focused on performance.
The two images are from different camera positions (or possibly different frustum setups, or different depth buffer precision), which you can easily see from the left picture having more foreground while having a similar background composition. The areas highlighted with red arrows will have different mipmap and anisotropic filtering results because they are deeper into the frustum on the left, and possibly lower shading rates if a form of VRS is being used.
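As a minimal sketch of the mipmap point (simplified pinhole projection, surface facing the camera, ignoring anisotropy and VRS; the numbers are assumptions, not anything measured from the screenshots):

```python
import math

def mip_level(distance_m, texels_per_meter=1024.0, focal_px=1000.0):
    # projected size of one texel in screen pixels at this distance
    texel_px = focal_px / (texels_per_meter * distance_m)
    # the hardware picks roughly log2(texels covered per pixel), clamped at mip 0
    return max(0.0, math.log2(1.0 / texel_px))

for d in (2.0, 4.0, 8.0):
    print(f"{d:>4.1f} m -> mip {mip_level(d):.1f}")
# Each doubling of distance drops the sample one mip level, so the same surface
# sitting deeper in the frustum is filtered from a lower-resolution mip.
```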
 