
[NxGamer] Spider-Man Remastered PC vs PS5 vs Steam Deck vs PS4 vs 750Ti - Technical Review & Comparison

yamaci17

Member
Yeah, the 6600xt gets rekt in this game. The 128-bit bus and 256 GB/s of bandwidth are destroying that card's potential. The charts speak for themselves:

At native 1080p with the high preset + high RT, where the 6600xt feels comfortable, it averages 70 fps:

ayxHLE4.jpg


Practically, the gap between the 6700xt and 6600xt at 1080p is 38%.

4K, however, is punishing for the 6600xt:

59GW859.jpg


Now the gap is 78%. It quite literally averages 28 fps with the high preset and high ray tracing, which is practically equivalent to PS5 settings (the high preset does set RT object range to 6, but that doesn't heavily affect GPU-bound performance, so it's not a huge factor).
Full benchmark here: https://www.techspot.com/article/2518-spiderman-remastered-benchmarks/
Full 4K table: https://static.techspot.com/articles-info/2518/bench/4K-RT.png
Full 1080p table: https://static.techspot.com/articles-info/2518/bench/1080p-RT.png
Full 1440p table: https://static.techspot.com/articles-info/2518/bench/1440p-RT.png

It's not a VRAM limitation either (the test is done with the high preset, so it uses high textures). It quite literally chokes on its bandwidth: the gap goes from 38% at 1080p, to 55% at 1440p, and finally 78% at 4K.

Even the 6700xt, with its 192-bit bus and 384 GB/s, suffers from it, mildly:
at 1440p the gap between the 6700xt and 6900xt is 24% (101 FPS vs. 80 FPS). That gap widens to 40% at 4K (70 FPS vs. 50 FPS).

So yeah, 6600xt is the worst offender in this case. It scales horribly to 4K.
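A quick back-of-the-envelope sketch of where those bandwidth figures come from (the 16 Gbps GDDR6 data rate is assumed from the usual retail specs, it isn't stated in the charts):

```python
# Rough theoretical memory bandwidth: bus width (bits) x data rate (Gbps) / 8.
# Assumption: 16 Gbps GDDR6 on all three Navi 2x cards (usual retail spec).
def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    return bus_width_bits * data_rate_gbps / 8

for card, bus in [("6600xt", 128), ("6700xt", 192), ("6900xt", 256)]:
    print(f"{card}: {bandwidth_gb_s(bus, 16.0):.0f} GB/s")
# 6600xt: 256 GB/s, 6700xt: 384 GB/s, 6900xt: 512 GB/s
```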

That said, I don't put much stock in this test. If TechSpot had done it with Very High textures, it would be a completely different picture. Let me paint you that picture at 4K:

- 6600xt gets destroyed even further, probably down to a 10-15 fps average
- 3060 stays exactly where it is, since it has enough VRAM
- 2070 drops to a ~25 fps average, breaking parity with the 3060
- 3070 drops to a ~43 fps average, breaking parity with the 2080ti

And so on. You get the gist of it. I invite Hardware Unboxed to replicate the test with Very High textures in Times Square. Sure as hell, that would completely reshape the entire 4K table... :D
 

SlimySnake

Flashless at the Golden Globes
Yeah, the 6600xt gets rekt in this game. The 128-bit bus and 256 GB/s of bandwidth are destroying that card's potential. [...]
So if these are PS5-equivalent settings, the PS5 is performing like a 13 TFLOPs 6700xt? As the variation of the 13.8 TFLOPs ship that sank faster than the Titanic on March 17th 2020, I can live with that. Remember, fidelity mode unlocked ran at 45-50 fps.

Edit: they are using the amazing 5800X3D for these tests. The PS5 CPU can't compete with a CPU like that, especially in a CPU-heavy title like this. The PS5 GPU must be working overtime here.
 

Md Ray

Member
Great post. The fact this guy is being commissioned to make videos that could be viewed by millions of people and he doesn't even have a grasp of VRAM bottlenecks is agonising. I'm sure he is a nice guy, but these videos are like someone with a slight grasp of something spreading bullshit to the masses, and we see it far too much across many parts of media and social media nowadays... he's basically a flat earther, and being paid for it.

...the fact he is willing to do this to get paid makes me change my stance from nice guy......

....to massive chode!
What's with the personal attacks over a freaking Technical analysis? Why so angry?
 

SlimySnake

Flashless at the Golden Globes
Yes I agree with the others here that this is likely a VRAM constraint.
This is funny, because I had a lot of people jumping down my throat a couple of years ago when I pointed out that, despite the GPU compute performance parity between the 20-series cards and the PS5/XSX, the consoles would perform better because of their higher VRAM allocation. Triggered a bunch of people. I even pointed out the VRAM-starved GTX 570, which was supposedly better than the PS4 and yet defaulted all my settings to very low in COD Advanced Warfare, but nope, the PC's system RAM was going to save the PCs there. Except it didn't then, and it won't now.
 

Md Ray

Member
P playsaves3

My final post to you, to make you understand real good this time. Do note that I have a 2700X, the so-called PS5-capable CPU.

QRbva76.jpg


Let's see. In this scene we supposedly see the PS5 being 38% faster than the 2070; therefore, NX Gamer claims the PS5 is almost performing like a 3070 here.

Now, with low textures but matched PS5 settings (high particles, high RT, 10 RT object range and so on):

The 3070 renders a whopping 56 fps. This is the raw brute power the 3070 has. It is 30% faster than the PS5, which is what it should be.

a6glnra.jpg


If I use very high textures:
CxtnbuY.jpg



I lose 36% of my frames.
That's practically the entire advantage the 3070 has over the PS5.

Now, what I'm trying to convey is that if the 3070 had 12 GB of memory, this would not happen. It does not happen to the 2080ti, which always performs ~30% above the PS5 in almost every case.

In that shot with the 2070 and PS5, the 2070 is handicapped by ~30%. If it did not have the huge VRAM bottleneck, it would perform almost on par with the PS5, maybe only trailing 5% behind it instead of 38%.
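A quick sanity check on that arithmetic, using only the figures quoted in this post (nothing here is measured):

```python
# Sanity check on the percentages quoted above.
fps_3070_low_tex = 56                       # 3070, PS5-matched settings, low textures
ps5_fps          = fps_3070_low_tex / 1.30  # "30% faster than PS5" -> PS5 at ~43 fps
fps_3070_vh_tex  = fps_3070_low_tex * (1 - 0.36)  # losing 36% with very high textures

print(f"Implied PS5 figure: ~{ps5_fps:.0f} fps")                    # ~43 fps
print(f"3070 with very high textures: ~{fps_3070_vh_tex:.0f} fps")  # ~36 fps
# The VRAM hit roughly wipes out the 3070's ~30% raw advantage over the PS5.
```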

As I said, this is a VRAM bottleneck issue; it has nothing to do with architectural differences. VRAM-bound bottlenecks have been known to cause severe framerate drops since the mid-2000s. It's common knowledge that this is what happens when you run out of VRAM.

This is as clear as I can get.

Both my 2700X and my 3070 are capable of pushing a 56 fps average there. If I had a 2080ti instead of a 3070, I would get a 56-57 fps average with VERY HIGH textures. But I cannot: I'm hugely constrained by VRAM. A very simple concept, misused by NX Gamer as the foundation for his comical deductions.

When we compare the PS5 to equivalent GPUs, we compare their raw strength and power. But when that power is strangled by another factor, i.e. VRAM, things change. The 3070 is not performing like the usual 3070 in this case; if it had more VRAM, it would.

The GPU has the grunt, it just cannot show it, because it stalls and waits for the VRAM to do its job. It's not a normal thing to happen; you never want this. You either lower texture quality or game resolution.

Yes, in the end, the 8 GB 2070 cannot match the PS5 experience at 4K. No one denies this. But it does not change the fact that the PS5 is not overperforming; it is the 8 GB GPUs that are underperforming compared to the PS5.
Do you still get 56fps using High textures?
 

DenchDeckard

Moderated wildly
What's with the personal attacks over a freaking Technical analysis? Why so angry?
It frustrates me that he paints these narratives about PC gaming without really explaining them logically. He doesn't cover the games that are built with PC in mind and then go into how his PC beats the PS5, etc. This game was built from the ground up in PlayStation's dev environment to only run on PlayStation, well before Sony's plans to have a PC presence or before they even bought Nixxes. What was the dev cycle on this PC port for Nixxes? Is this stuff mentioned?

It's like he does everything he can to paint the PS5 as better than the entire PC platform with his questionable PC. When we attempted to help him with his blatantly, strangely underperforming PC he ghosted GAF members... he then continues to use that PC to do comparisons and have them published on IGN. It frustrates me.

He could at least upgrade his PC to components that were new when the PS5 launched. He's using 3-to-4-year-old tech at this point.
 
Posts like these are how you know the person didn't even bother to read what you posted.

If you did bother to read everything, you would have understood I used PS5 console settings from Digital Foundry, which has the 6600XT @ 32 fps and the PS5 @ 49 fps.

It can't be that hard, all of us went to school, correct?

You can't expect to see identical scaling on cards with different memory configurations, bandwidth, compute unit counts, etc. Simply adding 10 fps to every card, just because one card showed such gains, is asinine. See the sketch after the example below.

Here's an example. One is maxed out at 2560x1440 native:
Control-DX12-2022-09-13-08-36-33-656.png

and the other is PS5-like settings, also at 1440p (WQHD):
Control-DX12-2022-09-13-08-35-36-374.png


A 36 fps gain from going from maxed settings with RT to PS5-like settings, and that's with higher RT resolution, so the real increase would be even bigger. On something like an RTX 3090 the delta between settings would very likely be even larger, and on something like a 3060, smaller.
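A tiny illustrative sketch of that point; the 40% uplift and the baseline fps values are made-up numbers, not benchmark data:

```python
# If a settings change buys roughly a fixed *percentage* of throughput, the
# *absolute* fps gain differs a lot between a slow and a fast card.
uplift = 0.40  # hypothetical 40% gain from console-like settings (illustrative only)
for card, base_fps in [("slower card", 30), ("mid-range card", 60), ("high-end card", 100)]:
    new_fps = base_fps * (1 + uplift)
    print(f"{card}: {base_fps} -> {new_fps:.0f} fps (+{new_fps - base_fps:.0f} fps)")
# +12, +24 and +40 fps respectively, so "add 10 fps to every card" can't be right.
```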
 

Rubim

Member
I wasn't talking about mipmaps in my post, I was solely talking about graphical settings and a custom preset on console. About CPU limits: that GPU is being limited by it just as much as the PS5 is by its CPU, so it's still equal in the end.
Sadly, that's not how this works at all.

Spider-Man being heavy on the CPU is not a normal thing on PCs; I think you're confused about how this particular game works.
 
It frustrates me that he paints these narratives about PC gaming without really explaining them logically. [...]
Same, same. But it still doesn't warrant personal attacks tbh. Mods are currently discussing nuking you for a week, so be careful :messenger_beaming:
 

Loope

Member
What's with the personal attacks over a freaking Technical analysis? Why so angry?
new person GIF


DF guys remembering all the posts made by Assurdum and the like, when they called them shills at every single opportunity. Oh wait, it has actually already happened in 2 or 3 posts in this very thread.
 

Mister Wolf

Member
It frustrates me that he paints these narratives about PC gaming without really explaining them logically. [...]

I would be surprised if I make more money than him, and even I have a better PC.
 

Loxus

Member
You can't expect to see identical scaling on cards with different memory configurations, bandwidth, compute unit counts, etc. [...]
Dude, I said it's a thought experiment, why are you taking it so seriously?

I was only using AMD cards; we know Nvidia cards scale better than AMD when it comes to RT. I added in the 3060 by mistake; if not, I would have used all the Nvidia cards too.

Going by your fps, you're probably using a 3080, which only proves my post to be somewhat close to accurate.
DESGkSZ.jpg


6800XT runs the corridor of doom @ 33fps, while the 2070 Super runs it @ 30fps with console settings.

You got a 36 fps improvement from console settings on an Nvidia card with RT enabled.
I gave a 21 fps improvement to AMD.

Like I said, it's just a thought experiment.
 

Loxus

Member
It frustrates me that he paints these narratives about PC gaming without really explaining them logically. [...]
Can you do any better?

The benchmarks don't lie.
He didn't create the port, he's just running benchmarks and then analyzing the results.

Just because he didn't say it's VRAM starved, you think it's right to hate him?

What's even more puzzling is that I only seem to hear that it's VRAM starved in this thread. I'm not saying it isn't VRAM starved.

Other places on the internet say it could also be a CPU bottleneck, because the PS5's CPU handles a lot of RT work and it hammers the PC CPU as a result, which is why the CPU has heavy utilization with RT enabled, and also high PCIe throughput.



 

DenchDeckard

Moderated wildly
Can you do any better? The benchmarks don't lie. [...]





My problem is not with the benchmarks, it's with his explanations: they are based on some knowledge, but he usually goes into wild speculation about the magic Sony hardware that's allowing this stuff, when it's probably more to do with dev time and resources than some secret sauce.

The dev had years to make this exclusively for PlayStation platforms and catered it to that, and then an incredible porting studio had what, 6 to 12 months, to shoehorn in a PC release?
 

tommib

Member
Good to know that what was considered an amazing PC port by tech reviewers, with endless graphical options, is now a shoehorned port. You're not ok, kids.

From this:

oHTaCQT.jpg


To a shit, shoehorned port. Take your pills, because what you're doing is too transparent.
 

Loxus

Member
My problem is not with the benchmarks, it's with his explanations [...]
NXGamer has been speaking nothing but facts: it's CPU and PCIe throughput bound.

Here you can see CPU utilization and PCIe bandwidth are low with RT disabled.
7IhLxDw.jpg


With RT enabled, you can now see CPU utilization is high and PCIe bandwidth goes from 1938 MB/s to 10449 MB/s.
g0zkKVC.jpg


When it comes to Spider-Man, the CPU helps handle a lot of the BVH management and uses a lot of bandwidth, as Cerny said:

"Having said that the Ray-Tracing instruction is pretty memory intensive, so it's a good mix with logic heavy code."

The PS5 I/O holds true here, as hardware has been implemented to help handle a lot of workloads and remove bottlenecks.
FFCmAly.jpg
IW1xSlW.jpg


Even Nvidia realizes this and tries to solve some of these problems with their upcoming RTX IO, so the PCIe bus doesn't have to handle twice as much data since it goes straight to the GPU.
KBKhInv.png


It's clear you don't understand anything.
 

yamaci17

Member
NXGamer has been speaking nothing but facts: it's CPU and PCIe throughput bound. [...]
Sorry to break it to you, but that is a momentary PCIe usage spike. It happens on almost every system when you change settings, and it quickly goes back to normal values. You would do better to ask whoever took that shot to actually record a video swinging around. Unless you're hugely VRAM constrained, your PCIe bus won't get hammered. And the 3090 is not getting its PCIe bus hammered; it's a momentary thing.

Here's a gameplay clip for you with RT enabled and calm PCIe usage across the board. Not once does it get past 3.5 GB/s, if you don't believe me.


 

Loxus

Member
Sorry to break it to you, but that is a momentary PCIe usage spike. [...]



I like how you completely ignore CPU utilization.
BVH management is done on the CPU.
It hampers performance and becomes a bottleneck.

I also find it odd you're literally the only one pushing this VRAM starved agenda.
 

yamaci17

Member
I like how you completely ignore CPU utilization. BVH management is done on the CPU. [...]

I like how you ignored that my PCIe stats are calm.
BVH management is always done on the CPU, even in other RT games such as Cyberpunk.

Also, CPU usage is super high because my SMT is disabled.
You may find it odd, but just because I'm the only one does not make it any less true. Also, the CPU situation and the VRAM situation are two different things that happen under different circumstances. The VRAM issue cuts your GPU performance in half; the CPU issue just causes slight bottlenecks where you may not get full GPU utilization. So they're unrelated to each other.

I can show you a random Cyberpunk video and the exact same situation would happen: high CPU utilization lmao
 

Loxus

Member
I like how you ignored that my PCIe stats are calm. [...]
It is how the RT pipeline is done in Spider-Man. CPU performance can influence fps. I mean, you can clearly see your CPU has higher utilization than your GPU.

This is how to know if it's VRAM starved.
This is the 3070, which only has 8GB of GDDR6.


Here we can see it can go above 7GB with RT disabled.
cLjo2Go.jpg


But when RT is enabled, it only uses just above 6GB.
WqbeZ9A.jpg


So how can it be VRAM starved if it still has a GB left it could utilize?
 

yamaci17

Member


Then this video is fake, is that it?

Me getting a CPU bottleneck is something to do with my 3.7 GHz 2700. If I used more GPU-bound settings, it would be the other way around. RT using the CPU is not detrimental when I still get 65+ fps, so I'm failing to understand what you're trying to prove here.

That 7.1 GB value is total VRAM usage. The game only uses a maximum of 6.4 GB of VRAM; the ~700 MB difference is background stuff Ultron HD has (Ultron needs some VRAM for visual processing, I guess). The game becomes VRAM starved above 6.4 GB; anything over 6.4 GB is just background usage added in. You need to look at per-app VRAM consumption to get the picture I'm getting, which almost no reviewer does, and no random video you find on YouTube will do that either.
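A minimal sketch of the per-app vs. total distinction being made here, using only the figures quoted above:

```python
# What a generic overlay reports vs. what the game itself is using.
total_gpu_mem = 7.1   # GB - dedicated VRAM in use across *all* processes
game_gpu_mem  = 6.4   # GB - per-application figure for the game alone
background    = total_gpu_mem - game_gpu_mem
print(f"Background apps (capture, browser, overlays, etc.): ~{background:.1f} GB")  # ~0.7 GB
# Judging VRAM pressure from the total figure alone can therefore be misleading.
```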
 

Loxus

Member


Then this video is fake, is that it? [...]

You still don't understand, do you?
RT is usually done almost entirely on the GPU, but with Spider-Man the CPU now also handles some RT work.

This means the CPU is now working harder than normal, which translates into lower fps than if it were mostly done on the GPU.

Depending on how strong the CPU is, it then becomes the weakest component, turning into a bottleneck.

It's that simple to understand.
The GPU has to be using all of the VRAM for VRAM to become a bottleneck, not 6-7GB out of 8GB.
 

yamaci17

Member
The GPU has to be using all of the VRAM for VRAM to become a bottleneck, not 6-7GB out of 8GB.

Then why does lowering textures from very high to high shoot framerates up from 38 to 58? You keep ignoring that part on purpose.

A game won't use all your available VRAM; some developers decide to leave headroom for background tasks. In Nixxes's case, they capped it at 80%. People at Beyond3D even confirmed it: another user reported that per-app VRAM usage on his 3060 never went past 9.6 GB, which is 80% of 12 GB.
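A trivial sketch of that claimed ~80% budget; the cap itself is this post's own claim about the game's allocator, not something verified here:

```python
# The claimed ~80% per-app VRAM budget, applied to the two card sizes in this thread.
ALLOC_CAP = 0.80
for total_vram_gb in (8, 12):
    print(f"{total_vram_gb} GB card -> usable game budget ~{total_vram_gb * ALLOC_CAP:.1f} GB")
# 8 GB -> 6.4 GB, 12 GB -> 9.6 GB
```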

You're the one who understands nothing here.

You're still not understanding that RT BVH management WAS ALWAYS done on the CPU in most RT ports. Ray tracing is known to increase CPU requirements by 25-30% in BF5, Cyberpunk, Hitman 3 and many more. It is NOT a situation exclusive to Spider-Man. There are tons of CPU-bound RT vs. non-RT benchmarks out there proving that RT requires more CPU power.


Then again, this has nothing to do with the 2070 being hugely hamstrung by its VRAM at native 4K with VH textures.

You're just skewing the discussion at this point. The 2070 dropping from a 50 fps average to a 25 fps average due to enormous VRAM starvation has nothing to do with how RT is handled in this game or any other game.

You still refuse to acknowledge my system's PCIe behaviour.
 

SlimySnake

Flashless at the Golden Globes
NXGamer has been speaking nothing but facts: it's CPU and PCIe throughput bound. [...]
Nixxes actually talked about the CPU being the bottleneck in their interview with DF on this game. I will pull it up, but I am pretty sure it's both VRAM 'bandwidth' and the CPU that are causing the bottlenecks.

They did say that this is a work in progress and they expect to gain performance going forward. I think one thing is clear from both the Matrix demo and Spider-Man: the CPU is going to be a major bottleneck for these RT games going forward.
 

Loxus

Member
Then why does lowering textures from very high to high shoot framerates up from 38 to 58? [...]
It is common knowledge that lowering texture quality will result in better performance, even if the CPU is the bottleneck.

What would have made sense is if Spider-Man had a mode where BVH management wasn't done on the CPU.

Beyond3d isn't some special place.

As I showed you above, it's capable of utilizing above 7GB of VRAM, so clearly utilizing 6GB of VRAM means you're not VRAM starved.

Like come on dude, your VRAM point is invalid. If the 3070 only had 6GB of GDDR6, then your point would have been valid.
 

yamaci17

Member
It is common knowledge that lowering texture quality will result in better performance, even if the CPU is the bottleneck. [...]

No, texture quality never affects performance unless you're hugely VRAM starved.

My common knowledge says that texture quality is a free upgrade as long as you have enough VRAM. It is quite literally higher-resolution textures, nothing else; it's just data sitting in memory, if you have enough of it, that is. When you don't, it either tanks your framerate by half, or it tanks it to 5.

Now explain why alternating between high and very high textures at 1440p does not affect framerate at all. If lower texture quality supposedly results in better performance, per your "common knowledge", I should have had a higher framerate at 1440p when I lowered the texture setting to high. But it is just the same performance with both textures. Go figure, I guess. Find a new goalpost, skew the discussion towards something else. I bet you will find something new.

;)

 

Loxus

Member
Nixxes actually talked about the CPU being the bottleneck in their interview with DF on this game. [...]
Exactly.

This reminded me of something about Infinity Cache.
It seems like RT uses a lot of bandwidth.
I read somewhere a theory that Infinity Cache was implemented to help keep RT from eating into the GDDR bandwidth.
C4lUq2O.jpg

This sentence makes it sound like only instructions are stored in Infinity Cache:
"AMD Infinity Cache can hold a very high percentage of the BVH working set, reducing intersection latency."

I don't know if this is just AMD's RT implementation that uses a lot of bandwidth, or if it applies to Nvidia also.

Mark Cerny somewhat also confirms this theory in his Road to PS5 talk.

"Having said that the Ray-Tracing instruction is pretty memory intensive, so it's a good mix with logic heavy code."
 

SlimySnake

Flashless at the Golden Globes
Looks like a CPU issue as well as typical PC overheads like DXR and I/O decompression. They do admit that they are still in the optimization stage, so it can potentially get better, but right now the PS5 might be outperforming the 6-core/12-thread 3600 despite running at a much lower clockspeed with only 6 cores/12 threads available for gaming. (How Alex knows this information is odd, because the PS5's OS allocations for CPU and RAM have not been officially revealed.)

Digital Foundry: I think you as a team have proven that you are good at getting a foreign codebase and making magic out of it. So, you mention getting ray tracing on PC and the last push before launch was offering more settings to adjust the load on the CPU with the BVH ray tracing object range. Trying it on the 12900K or recent Alder Lake CPUs, the game flies, but then on a Ryzen 3600, what I consider a mid-range CPU very similar to the PS5: Zen 2, 6 cores/12 threads available for game usage, so on and so forth. And on that CPU at the highest settings it will be drop below 60fps while moving through the city and swinging as it is CPU limited. It has lurches down to the upper 40s. So what exactly is the CPU limitation bottleneck? What work do you want to do there in the future?

Jurjen Katsman: There are a few things going on there that are interesting and we're actually still making some changes to the code right now. So as I think you know, from your early analysis, to achieve 60fps on PS5 with ray tracing it makes other compromises [even beyond ray tracing]. So it turns down crowd density for example, or there's fewer cars around. And so that compensates for some of those CPU things, and we didn't make that very easy for the user to do in the [early review] build you played. So we are actually offering up some more options to make that to allow that to be better balanced [in the retail build].

And in general, the game originally came from the PS4 right? The PS4 CPU cores were not so stellar and the PS5 and the PCs were far more powerful. With the PS5, that gap has certainly gotten smaller. And there's still quite a few things on the PC where there's more overhead, like the APIs have more overhead, we don't have the decompressor for example, we don't have hardware doing decompression for us as we're streaming in content - that gets left to the CPU. So we certainly have more CPU challenges to go around even when we're doing the same things. And then if we don't dial down things that are dialled down on the console, we now have even more work to do on the CPU.

So okay, so if you have a PS5 game that fully loads all the CPU cores, then yeah, PC CPUs that don't have the same core count, for example, or the same processing power, they'll be in a tricky spot, right? And they will have to rely on lower settings of scalability, as well. But I think that's important about PC, right, that we do have that scalability, we do offer all those options. And you can run it in a way that works well for your system, no matter what.

Michiel Roza: It's even worse for us because we also have the added overhead of the abstraction layer to DX12 and the DXR abstraction layer, which is obviously very lean on the Sony side. So even if you have a more powerful CPU than on the PlayStation 5, you might still end up with a lower frame-rate.
 

Loxus

Member
No, texture quality never affects performance unless you're hugely VRAM starved. [...]


Did you notice the VRAM usage still doesn't pass 7GB, even though the GPU can utilize over 7.5GB of VRAM before it has to start using RAM?

As a matter of fact, did you notice the game itself is only using 5GB of VRAM?
The GPU is holding an extra GB of VRAM with files (just in case they're needed) for the next 30 seconds of gameplay.
Id85ycN.jpg


After this, surely you can't still believe it's VRAM starved?
 

yamaci17

Member
Did you notice the VRAM usage still doesn't pass 7GB, even though the GPU can utilize over 7.5GB of VRAM before it has to start using RAM? [...]
You're still dodging the question.

At 4K, going from high to very high textures causes performance to drop from the 60s to the 38s.
At 1440p, going from high to very high textures does not cause a performance drop.

That's because the card is heavily VRAM starved at 4K with very high textures.

The game will revert back to 5.3 GB from 6.4 GB once it reaches 6.4 GB.

Do you still not understand? The game has a hard cap at 80% allocation.

You're really testing my patience. I'm done with you. Have a good day.
 

Loxus

Member
You're still dodging the question. At 4K, going from high to very high textures causes performance to drop from the 60s to the 38s. [...]

You're not even understanding the parameters:
GPU Mem 6.11 GB is total system VRAM usage.
GPU Mem 1.92 GB is shared system VRAM usage.
GPU Mem game 5.29 GB is the game's specific VRAM usage (this means that, per 6.11 - 5.29, my system uses ~800 MB of VRAM).
GPU Mem game 1.87 GB is the game's specific shared VRAM usage.

It is indeed VRAM starved at 4K with VH textures.

In my videos, GPU VRAM utilization is never above 7.5 GB. Those two values are dedicated and shared usage; shared VRAM is normal RAM, not actual GPU VRAM.
Dude, just stop.
If it were VRAM starved, it would be at something like 7.895GB utilization. Or are you mixing up GDDR usage with bandwidth?
 

yamaci17

Member
This will be my "final" answer to you. Now you have moved the goalposts to bandwidth. I will gladly refute that too, but I will stop there. There's no end to your goalposts.



Here's the 3060 being 11% faster than the 2070 Super at 4K ray tracing with very high textures:

NEGFfEL.jpg



The 2070 Super, in a normal situation, is 10% faster than a 3060.
The 3060 has 360 GB/s of bandwidth over a 192-bit bus.
The 2070 Super has 448 GB/s of bandwidth over a 256-bit bus.

The 2070 Super is so VRAM starved (NOT bandwidth starved) that it loses ~20% of its performance: instead of being 10% ahead of the 3060, it is 10% behind it, despite HAVING HIGHER BANDWIDTH and A WIDER MEMORY BUS.

The 2070 Super's texture rate is 283 GTexels/s and the 3060's is 199 GTexels/s.

Despite the 2070 Super having every kind of theoretical BANDWIDTH advantage, it still trails 10% behind the 3060. BECAUSE IT IS, INDEED, VRAM STARVED.
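For anyone checking the math, a sketch of the paper specs being compared; the data rates, TMU counts and boost clocks are assumed retail figures, not from the thread:

```python
# Paper specs for the two cards compared above (assumed retail figures):
# memory data rate in Gbps, texture units, boost clock in GHz.
cards = {
    "RTX 3060":       {"bus_bits": 192, "gbps": 15.0, "tmus": 112, "boost_ghz": 1.777},
    "RTX 2070 Super": {"bus_bits": 256, "gbps": 14.0, "tmus": 160, "boost_ghz": 1.770},
}
for name, c in cards.items():
    bandwidth = c["bus_bits"] * c["gbps"] / 8   # GB/s
    tex_rate  = c["tmus"] * c["boost_ghz"]      # GTexels/s
    print(f"{name}: {bandwidth:.0f} GB/s, {tex_rate:.0f} GT/s")
# 3060: 360 GB/s, ~199 GT/s; 2070 Super: 448 GB/s, ~283 GT/s.
# The 2070 Super wins both paper metrics yet falls behind at 4K + very high textures,
# which points at capacity (8 GB vs 12 GB) rather than bandwidth.
```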



Grand Theft Auto Reaction GIF by Rockstar Games



A very good saying: out of sight, out of mind. Have a real good day. It is really an achievement to force me to ignore someone on this forum, and you are the 3rd person to pull off that spectacular feat.

W1H9jIQ.jpg
 

Loxus

Member
That dude clearly doesn't know what running out of VRAM looks like. Utilizing 6-7GB out of 8GB is not VRAM starved.

In Spider-Man's case, the game would just crash to desktop if it were VRAM starved.
Video Memory Crash
"The game has crashed due to using more video memory than currently available on this PC. Please try lowering your graphics settings, lowering your resolution, and closing any unnecessary background applications before running the game again."


Not to mention not understanding the IPC gain of the RTX 3000 Series.
 
Yeah, the 6600xt gets rekt in this game. The 128-bit bus and 256 GB/s of bandwidth are destroying that card's potential. [...]
Hey, where can I find the TechSpot benchmarks? I weirdly like the format of the charts.
 
So if these are PS5-equivalent settings, the PS5 is performing like a 13 TFLOPs 6700xt? [...]
Especially with how CPU-demanding this game is, I think it's performing more like the 6750xt if it were paired with an equivalent CPU.
 
It frustrates me that he paints these narratives about PC gaming without really explaining them logically. [...]
I think he more so wanted to show the advantages console development can have for the platform. I also think he wanted to take advantage of games with unlocked framerates, since that's normally not a thing on console.
 
Sadly, that's not how this works at all. [...]
It would still be heavy on the CPU on console as well... you're trying to give the PC a CPU benefit but not the console, which isn't equal.
 

01011001

Banned
This is a subpar PC port that clearly needs some serious work, with VRAM usage being fucking weird.

Is it really so hard to agree on that? Properly made PC games never run into VRAM issues on 8GB cards like this.
You can play Cyberpunk with max texture settings and have no issues like these; that game will use about 7.5GB of VRAM, sometimes a bit more, sometimes a bit less, and has way higher object and detail density.

The fact that the likes of DF and NXG still call this a good PC port is ridiculous. It's serviceable, that's it... serviceable... not great, not good... it's playable, but it clearly underperforms and has CPU and VRAM issues.
 

yamaci17

Member
Appreciated, man. Also appreciated the talk we had.
I can be rude or condescending at times, sorry for that. But really, it hurts to not get acknowledged since I don't have a channel like NX Gamer or DF or anything, yet I'm the one who actually experiences these issues or is affected by them. Naturally I want people to get the right story, instead of the "PS5 is punching above its weight" story, no matter how you put it.

I've had my 8 GB-limited 3070 for almost 2 years now. As a matter of fact, I'd had a GTX 1080 since 2018. For 4 years I have been making observations AS an actual user, and I know how gamers interact with performance when it comes to texture quality and VRAM usage. I made tons of discoveries, such as most Electron apps like Chrome, Spotify and Discord using VRAM, whereas this was NEVER mentioned in any kind of forum; this is a really "obscure" area where most people are not knowledgeable. I even discovered how games use normal RAM as a substitute for VRAM, and that the more shared VRAM a game uses, the more your performance tanks, for example. These are results and experiments that are never mentioned anywhere. Any big outlet carefully analyzing this would be a huge red flag for NVIDIA, and they would blacklist that outlet for a long period of time, maybe forever. NVIDIA themselves did an "emergency" Q&A where they claimed 10 GB was fine: "see, these and those games use 4-6 GB of VRAM at 4K, now believe us and get those GPUs in your PCs! Pascal friends, it's time to upgrade!"

So my perspective on VRAM is more experienced and detailed than NX Gamer's and DF's. Think about it: I'm the one who has used 8 GB GPUs for 4 years. These dudes play on their super-high-end systems day to day, or just on PlayStation or Xbox consoles. Naturally some people are shocked that I have a different take than most reviewers.

Look at my findings about high textures + high reflections: it produces worse reflections than PS5, does it not? Have you seen a similar report or mention anywhere else? You haven't. I only see and observe it because I actually play the game from start to finish with an 8 GB GPU. Most reviewers will look at the first hours and move on, and some problems or situations may not even occur within 3+ hours of playtime.

Then you have textures failing to load (completely unrelated to VRAM amount). Even on a 3090, those cutscenes lag when it comes to loading textures. I was also the first to discover it; NX Gamer made the same discovery, but a month later. These things are never mentioned by outlets like DF, TechSpot or other review sites.

They just do the bare bones and move on to the next big thing. I actively did long studies and experiments on how this game behaves specifically with 8 GB of VRAM.

So it is really hard for a person like me to get heard or have his experience accepted. Most people think what I present is outliers or not representative.

What NXGamer, you and others fail to understand is that this has never been a problem specific to Spider-Man. I've had the exact same kind of framedrops in Godfall, Borderlands 3, AND Crysis 2 Remastered. You heard it right: playing Crysis 2 Remastered with its "updated" super-quality textures ALSO tanked the performance, EXACTLY the same way it tanks in Spider-Man.

The problem here is that this is the first time someone like NX Gamer discovered the VRAM-starvation situation where performance tanks, and he took it as a way to promote his own ideas and ran away with it. I, however, experienced the exact same situation in last-gen games. Naturally, to me, Spider-Man is not doing anything unique; it works just like any other PC game I've come across over the past 4 years. This even happens with Cyberpunk. It literally happens with every game: you run out of VRAM, you lose GPU performance.

Some people think that to run out of VRAM you have to see it fully utilized; that is not the case. A game can have allocation caps. Most games do.
 

Loxus

Member
This is a subpar PC port that clearly needs some serious work, with VRAM usage being fucking weird. [...]
Like I said above, if it were VRAM starved or running out of VRAM, it would crash to desktop.

Even Insomniac Games confirmed this:
Video Memory Crash
"The game has crashed due to using more video memory than currently available on this PC. Please try lowering your graphics settings, lowering your resolution, and closing any unnecessary background applications before running the game again."


Performance is only lost when the game starts using RAM, which is not the case with Spider-Man.

Can we move past this VRAM-starved nonsense?
 

01011001

Banned
Like I said above, if it were VRAM starved or running out of VRAM, it would crash to desktop. [...]

Explain how higher-VRAM cards get better performance than lower-VRAM cards then, even when the higher-VRAM card has slower raster and RT performance...

Also explain how performance stabilises once you lower texture settings.

These things can only be explained by the game running out of VRAM for no apparent reason.
 

Mr Moose

Member
This will be my "final" answer to you. Now you have moved the goalposts to bandwidth. [...]
What's going on with that 6600XT? The 2060 is killing it :messenger_face_screaming: I was thinking of getting a 6600XT to replace my dog shit GPU.
 

yamaci17

Member
Explain how higher-VRAM cards get better performance than lower-VRAM cards then, even when the higher-VRAM card has slower raster and RT performance... [...]
"even when the higher VRAM card has slower raster and RT performance..." also add, higher VRAM card having slower bandwidth, slower texture fillrate, and lower memory bus. I'm sure this will create enough controversy to a point Nixxes may come out and make an explanation. I really look foward to their reasoning for falling back to 5.2 GB of VRAM usage when game reaches 6.4 GB VRAM and then using 4.5 GB of normal shared memory, causing %50-70 slowdowsn. I wonder if its an actual intended logical solution they made. It is still ridiculously stupid to me that game falls back to 5.2 GB after reaching 6.4 GB and using 4.5 GB of normal RAM as a substitute. If it needs 9.8 GB budget, it can still use normal 7.4 GB from the normal VRAM, then use another 2 GB from RAM. Instead, it falls back to 5.2 GB, potentially leaving an almost empty buffer of 3 GB. They should understand that it is not logical to sacrifice %50 performance for this solution. Just made it so that frames drop to 3-5 and inform that user is not suited to use those textures. That would prevent that video from spawning entirely.

Some people theorized that it leaves VRAM for sudden turns. I actually have a refute to that. I actually FILLED that empty VRAM buffer with: Twitch Studio, Chrome, Spotify and Discord. These 4 programs together filled that buffer. Then THE GAME still performed exactly like it did before. This proves that that portion of VRAM is never touched by the game's engine. I just find this design horrible, however you put it. I hope Alex can communicate with Nixxes regarding this. Then we can rectify certain inequities.
 

yamaci17

Member
What's going on with that 6600XT? The 2060 is killing it :messenger_face_screaming: I was thinking of getting a 6600XT to replace my dog shit GPU.
No idea honestly; even native 1080p is not kind to the 6600xt with maxed-out ray tracing.
WycTizv.jpg



However "high" RT is more forgiving for it

ykyVAqc.jpg
 