
[NXG] Matrix UE5 PC Tech Analysis

GHG

Gold Member
Mate, just look at the benchmarks done with a dozen cpus. The 2700x is just 5fps slower than a 3800x.
The 2700x is an old CPU, but it's not that bad.

Overall no, it's not a bad CPU, but in certain scenarios it will seriously underperform for the reasons mentioned above.

Look at the differences at 1080p here for example:




It's really not a CPU that should be used for the purposes of modern benchmarking, it has too many fundamental issues.
 

winjer

Gold Member
Overall no, it's not a bad CPU, but in certain scenarios it will seriously underperform for the reasons mentioned above.

Look at the differences at 1080p here for example:




It's really not a CPU that should be used for the purposes of modern benchmarking, it has too many fundamental issues.


Emulators are different stuff.

I'll say it again, look at the benchmarks for the 2700x from other sites.
No one is having performance issues like NXGamer.
 
Last edited:

GHG

Gold Member
Emulators are different stuff.

What, and a demo for a brand new engine isn't "different stuff" either?

He shouldn't be drawing any conclusions based on his hardware (the CPU especially), that's for sure, but I don't doubt the results based on what I know about the 2700x.
 

winjer

Gold Member
What, and a demo for a brand new engine isn't "different stuff" either?

He shouldn't be drawing any conclusions based on his hardware (the CPU especially), that's for sure, but I don't doubt the results based on what I know about the 2700x.

Dude, just look at the benchmarks for this demo, in the previous page.
There is no point in speculating with results from emulators.
 

SlimySnake

Flashless at the Golden Globes
Have we got benchmarks with the 2700x for this demo? All I know is that the 2xxx series of CPUs from AMD are far more susceptible to encountering bottlenecks when gaming due to the CCX layout. Any average % differences between CPUs go out of the window when that happens.
His 3600 performance is just as bad. You can see his 16 TFLOPs 6800 paired with the 3600 struggle to hit 22 fps at 1080p, when the 6800 and all 6000 series cards outperform the 30 series cards in this demo in the benchmark I provided above. I mean, his GPU is only being utilized at 40-50%. Of course it's giving him 22 fps. In the benchmark, the 6800 can hit 58 fps.

NNqLaaV.jpg
 
Last edited:
Mate, just look at the benchmarks done with a dozen cpus. The 2700x is just 5fps slower than a 3800x.
The 2700x is an old CPU, but it's not that bad.
The benchmark chart you posted only looks at gaming performance, i.e. FPS differences.

There is quite a big difference in performance between the 2700x and 3700x in, for example, compression/decompression benchmarks.
 

GHG

Gold Member
His 3600 performance is just as bad. You can see his 16 TFLOPs 6800 paired with the 3600 struggle to hit 22 fps at 1080p, when the 6800 and all 6000 series cards outperform the 30 series cards in this demo in the benchmark I provided above. I mean, his GPU is only being utilized at 40-50%. Of course it's giving him 22 fps. In the benchmark, the 6800 can hit 58 fps.

NNqLaaV.jpg

He's running the 6800 at 4k max settings:

IMG-20220421-182148.jpg


Here's a not too dissimilar outcome, but with a 6800xt instead of the 6800 (still with the 3600):




The fact that he reduced the resolution on the 2700x system but didn't see any improvement in performance tells you that the results are almost certainly down to the CPU.
 
Last edited:
The 2700x is still a great CPU for modern multithreaded games at 60 fps. I remember it was about 170 brand new at its lowest, which was a bargain.

Still, in games that rely on a single core and don't use more than 2 threads, you will have issues getting stable performance.

I really don't expect UE5 to lack proper multithreading support when it's fully released, as that would greatly limit performance and also the number of devs that want to use it.
 

DenchDeckard

Moderated wildly
Sorry to just copy-paste my post from another thread, but NXGamer probably screwed up his review very badly:

His results can't be right. His frame rate is way too low, compared to mine.
I do have a slightly better PC than him. But not that much.
I have a 3700X and an RTX 2070S. He has a 2700X and an RTX 2070 non super.
But the 3700X is just 7% faster in games, than a 2700X.

My 2070S is clocked at 1920MHz, resulting in 9.8 TFLOPs.
His 2070 is clocked at 2040MHz, resulting in 9.4 TFLOPs. This is a difference of 4.2%.

On the same spot, I get 38 fps. He gets 23 fps. This is a difference of 65% in performance. In a demo that is CPU bound.
While the 2 CPUs should have a difference of around 7% in performance.
Either he f***d up really badly with his packaged demo, or he has some performance issues with his PC, which would also explain why in so many of his analyses he gets lower results than on consoles.

Then he mentions that when he is driving, he drops to 10-14 fps.
On my PC, I get 30-31 fps, with lows of 26 fps when crashing.

Here is a screenshot, in the demo, settings at 3, resolution 1440p with TSR at 75%, meaning it's rendered at a base of 1080p.
LZQ5zBt.jpg


Here is his result, on the exact same spot. He is rendering at 1080p, settings at 3.
QkaIqWu.jpg


From the little knowledge I have, the 3700X destroys the 2700X in single-thread performance and gaming. That said, I find your findings intriguing. It would be great to get some more feedback on this. It would be a touch embarrassing for NXG if there are people with a 2700X getting performance similar to yours.
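As a rough sanity check on the arithmetic in the quoted post (a sketch; the 2304/2560 CUDA core counts are the public RTX 2070 / 2070 Super specs and are my addition, not figures from the thread):

```python
# Rough arithmetic behind the quoted comparison (core counts assumed, not from the post).
def tflops(cuda_cores: int, clock_mhz: float) -> float:
    # FP32 TFLOPs = 2 ops per core per clock * cores * clock
    return 2 * cuda_cores * clock_mhz * 1e6 / 1e12

rtx_2070s = tflops(2560, 1920)   # ~9.8 TFLOPs, matches the quoted figure
rtx_2070  = tflops(2304, 2040)   # ~9.4 TFLOPs, matches the quoted figure

gpu_gap = rtx_2070s / rtx_2070 - 1   # roughly a 4-5% GPU difference
fps_gap = 38 / 23 - 1                # the quoted 38 vs 23 fps is a ~65% difference

print(f"{rtx_2070s:.1f} vs {rtx_2070:.1f} TFLOPs -> GPU gap {gpu_gap:.1%}, fps gap {fps_gap:.1%}")
```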
 
Last edited:

SlimySnake

Flashless at the Golden Globes
He's running the 6800 at 4k max settings:

IMG-20220421-182148.jpg


Here's a not too dissimilar outcome, but with a 6800xt instead of the 6800 (still with the 3600):




The fact that he reduced the resolution on the 2700x system but didn't see any improvement in performance tells you that the results are almost certainly down to the CPU.

Nope. His first comparison is at 1080p. He then changes the resolution to native 4K to try and make sense of his poor performance, and it still doesn't make any sense because his GPU utilization only goes up to 69%. I mean, look at his clock speeds on the GPU. 1.6 GHz on an RDNA 2.0 card? Come on. How does he not look at that and think, hey, there is something really wrong here?

TNLU6qM.jpg


That GPU is a 220 Watt GPU using only 80 watts here. It makes no sense. I mean, the PS5 uses 220 watts when running this demo. The 6800 is definitely being bottlenecked here by something. Maybe it is the CPU. Maybe it's the RAM. Or some other shit, but whatever it is, it is HIS system. His compiled demo. I have not seen this performance in either of the two demos I've installed on my PC with two different graphics cards.

There is zero doubt that the game is CPU bound. It's something we have been talking about in the UE5 demo thread since day one. But what we are seeing here is completely ridiculous. That kind of GPU utilization makes no sense. Not on the 3600. Not even on the awful 2700.

Here is another 6800 benchmark running the game at 1440p easily at 35 fps. His GPU utilization hovers around 80-90% depending on the crowd and NPC density, so he's not maxing out the GPU either, but it's not at 50-60% ffs. And his clock speeds are in the 2.3 GHz range at all times.
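The reasoning above boils down to a simple heuristic: if the GPU is nowhere near full utilization or its rated power while the frame rate is low, the limiter is somewhere else (CPU, RAM, or the build itself). A minimal sketch of that check using the figures quoted in this post (the power figure for the second run is an assumption for illustration):

```python
# Toy "where is the bottleneck?" check using the figures discussed above (illustrative only).
def likely_gpu_bound(gpu_util_pct: float, power_w: float, tdp_w: float) -> bool:
    # Treat the GPU as the limiter only when it is close to saturated.
    return gpu_util_pct >= 90 or power_w >= 0.85 * tdp_w

# (name, GPU utilization %, board power W, rated board power W)
samples = [
    ("NXG's 6800 run",          69, 80, 220),   # ~69% util, ~80 W of a 220 W card (from the screenshot)
    ("Other 6800 run at 1440p", 90, 200, 220),  # util from the post; power assumed for illustration
]

for name, util, power, tdp in samples:
    verdict = "likely GPU-bound" if likely_gpu_bound(util, power, tdp) else "bottlenecked elsewhere (CPU/RAM/build)"
    print(f"{name}: {verdict}")
```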

 

GHG

Gold Member
Nope. His first comparison is at 1080p. He then changes the resolution to native 4K to try and make sense of his poor performance, and it still doesn't make any sense because his GPU utilization only goes up to 69%. I mean, look at his clock speeds on the GPU. 1.6 GHz on an RDNA 2.0 card? Come on. How does he not look at that and think, hey, there is something really wrong here?

TNLU6qM.jpg


That GPU is a 220 Watt GPU using only 80 watts here. It makes no sense. I mean, the PS5 uses 220 watts when running this demo. The 6800 is definitely being bottlenecked here by something. Maybe it is the CPU. Maybe it's the RAM. Or some other shit, but whatever it is, it is HIS system. His compiled demo. I have not seen this performance in either of the two demos I've installed on my PC with two different graphics cards.

There is zero doubt that the game is CPU bound. It's something we have been talking about in the UE5 demo thread since day one. But what we are seeing here is completely ridiculous. That kind of GPU utilization makes no sense. Not on the 3600. Not even on the awful 2700.

Here is another 6800 benchmark running the game at 1440p easily at 35 fps. His GPU utilization hovers around 80-90% depending on the crowd and NPC density, so he's not maxing out the GPU either, but it's not at 50-60% ffs. And his clock speeds are in the 2.3 GHz range at all times.



Well, if he is indeed running the game at 1080p on the 6800 system then something is wrong. Either it's his configuration, or the resolution didn't actually change, or he's mislabeling his footage.

I still maintain my stance on the 2700x though, it's a piece of shit for modern gaming purposes.
 
Last edited:

SlimySnake

Flashless at the Golden Globes
Another thing I have noticed is that the low GPU utilization is almost always related to shader compilation stutters. Whenever I load the game for the first time after uninstalling my drivers, which I've had to do a lot recently to fix some issues with my new cards, the game stutters like a motherfucker. It takes around 5 minutes of driving around to stabilize, and even then entering new areas can cause stutters. During those times, my GPU utilization goes from 95-98% to 80% for a second or two, then comes back up.

NX Gamer needs to run these tests using a different compiled version of this demo, preferably the one we are all using in the UE5 thread. I admire him for compiling his own demo, but clearly something is way off here.
 

SlimySnake

Flashless at the Golden Globes
I still maintain my stance on the 2700x though, it's a piece of shit for modern gaming purposes.
No doubt. We have seen this time and time again in NX Gamer's 2070 benchmarks that don't line up with other benchmarks. I've actually come around on those benchmarks because the 2700x performs a lot like the PS5 CPU, so if his 2700x is holding back his 2070 we can safely assume that the PS5 GPU is being held back by its CPU. That's why I put more stock in NX Gamer's PC vs console comparisons than Alex's, who uses fancy 5.0 GHz 16-core/32-thread CPUs in his testing.

The main problem I have with his results here is that even the 3600, a much better CPU compared to the 2700x, is having issues.
 
  • Like
Reactions: GHG

DenchDeckard

Moderated wildly
Another thing I have noticed is that the low GPU utilization is almost always related to shader compilation stutters. Whenever I load the game for the first time after uninstalling my drivers, which I've had to do a lot recently to fix some issues with my new cards, the game stutters like a motherfucker. It takes around 5 minutes of driving around to stabilize, and even then entering new areas can cause stutters. During those times, my GPU utilization goes from 95-98% to 80% for a second or two, then comes back up.

NX Gamer needs to run these tests using a different compiled version of this demo, preferably the one we are all using in the UE5 thread. I admire him for compiling his own demo, but clearly something is way off here.

Did you get your 3080 sorted? I know you were having a few issues. I hope you got it all worked out. :)
 
Well, if he is indeed running the game at 1080p on the 6800 system then something is wrong. Either it's his configuration, or the resolution didn't actually change, or he's mislabeling his footage.

I still maintain my stance on the 2700x though, it's a piece of shit for modern gaming purposes.
You are blaming the 2700x for this terrible early access engine?

Bro, the 2700x can hit over 144 fps average in Death Stranding at 1080p.

Not saying the 2700x and the Zen+ arch aren't dated, but if it's a properly multithreaded engine it's still more than good enough at 60 fps. Smooth 120 fps is usually out of the question though, unless it's a simple title.
 

yamaci17

Member
From the little knowledge I have, the 3700X destroys the 2700X in single-thread performance and gaming. That said, I find your findings intriguing. It would be great to get some more feedback on this. It would be a touch embarrassing for NXG if there are people with a 2700X getting performance similar to yours.
3000 CL15, stock timings: 32 fps (lower bound) to 34 fps (upper bound) while standing still at the start of the demo

VkpKdr3.png


3400 CL14: 37 fps (lower bound) to 39 fps (upper bound) while standing still at the start of the demo
gSvtWoI.png
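For what it's worth, those two runs scale almost linearly with memory speed, which points to a memory/latency limit rather than a GPU limit. Quick arithmetic, as a sketch using the midpoints of the quoted fps ranges:

```python
# Scaling check on the two runs above (fps values are midpoints of the quoted ranges).
mem_a, fps_a = 3000, (32 + 34) / 2   # 3000 MT/s CL15, stock timings
mem_b, fps_b = 3400, (37 + 39) / 2   # 3400 MT/s CL14

mem_gain = mem_b / mem_a - 1   # ~13% faster memory
fps_gain = fps_b / fps_a - 1   # ~15% higher frame rate

print(f"memory +{mem_gain:.0%} -> fps +{fps_gain:.0%} (near-linear scaling)")
```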
 

SlimySnake

Flashless at the Golden Globes
Did you get your 3080 sorted? I know you were having a few issues. I hope you got it all worked out. :)
Nope. :(

After trying everything from clean installs using DDU, firmware updates on the GPU and extra power cables, I finally returned the 3080 to Microcenter, who were very understanding and offered to replace it for free. I was like, what if it happens on this one, can I still return the replacement? And they were like, sure, you have one month to test it out. So I got a brand new replacement and the damn thing still crashed in UE5 and RDR2 after playing for around 10 minutes lol

I ran both games/demos for over an hour each on my 2080, so it's definitely nothing in my system. Control and Cyberpunk are both fine now on the 3080, so hopefully this GPU just has issues with these two particular games. I'm just gonna buy their 3-year warranty and replace it if this thing runs into issues when UE5 games start coming out in 2024.
 
  • Empathy
Reactions: GHG

SlimySnake

Flashless at the Golden Globes
3000 CL15, stock timings: 32 fps (lower bound) to 34 fps (upper bound) while standing still at the start of the demo

VkpKdr3.png


3400 CL14: 37 fps (lower bound) to 39 fps (upper bound) while standing still at the start of the demo
gSvtWoI.png
Your GPU utilization is very low here just like NX Gamer's but your FPS is still very good. What GPU/CPU are you using?
 

DenchDeckard

Moderated wildly
Nope. :(

After trying everything from clean installs using DDU, firmware updates on the GPU and extra power cables, I finally returned the 3080 to Microcenter, who were very understanding and offered to replace it for free. I was like, what if it happens on this one, can I still return the replacement? And they were like, sure, you have one month to test it out. So I got a brand new replacement and the damn thing still crashed in UE5 and RDR2 after playing for around 10 minutes lol

I ran both games/demos for over an hour each on my 2080, so it's definitely nothing in my system. Control and Cyberpunk are both fine now on the 3080, so hopefully this GPU just has issues with these two particular games. I'm just gonna buy their 3-year warranty and replace it if this thing runs into issues when UE5 games start coming out in 2024.

Aw man, sorry to hear. Maybe it is your 850 Watt PSU, but I find that odd. I would have thought it would be OK, but maybe it's drawing too much through the 3 x 8-pins. My MSI uses the same power setup, but I have a 1000 watt PSU. Hope you get it all sorted :)
 

SlimySnake

Flashless at the Golden Globes
Aw man, sorry to hear. Maybe it is your 850 Watt PSU but I find that odd. I would have thought it would be ok but maybe its drawing too much through the 3 x 8 Pins. My MSI uses the same power set up, but I have a 1000 watt PSU. Hope you get it all sorted :)
Mine is actually a 750 watt power supply, but it's hooked up to a fancy UPS which tells me just how much power any appliance connected to it is using, and my PC tops out around 600-620 watts when running the Matrix, much less when running Red Dead 2, which doesn't tax my CPU as much.

What's odd is that Cyberpunk is also around 600 watts. The GPU consumes 390 watts on its own, and that's normal for this particular EVGA card, which is overclocked out of the box. But Cyberpunk doesn't crash. At least not since I updated the drivers.
 

S0ULZB0URNE

Member
I know, right?
Dude thinks a PC running the demo in editor mode equals the PS5 environment.

While ignoring quotes from people like Nick Penwarden, the VP of engineering at Epic Games.
Epic Games had to rewrite parts of Unreal Engine to keep up with the PS5's SSD
"The PlayStation 5 provides a huge leap in both computing and graphics performance, but its storage architecture is also truly special," Nick Penwarden, VP of engineering at Epic Games told us.
"The ability to stream in content at extreme speeds enables developers to create denser and more detailed environments, changing how we think about streaming content. It’s so impactful that we’ve rewritten our core I/O subsystems for Unreal Engine with the PlayStation 5 in mind," he added.


SSD speed, or how many MB are moved, is not what NX Gamer is talking about; it's the latency of the CPU decompression.

As we saw in NX Gamer's video:
System memory utilization = 12 GB at one point (game assets stored before or after decompression, to reduce the latency of having to go back to storage and decompress again constantly)
GPU memory utilization = 7 GB (mostly game assets working on the game's behalf)
That's a total of 19-20 GB.
The consoles only have around 13 GB of RAM to play with, 14 GB at most. So he speculates that the fast, low-latency I/O path from storage to RAM is what is helping with frame rate in some cases. Not everything can fit in RAM, and CPU resources are freed up.

He implies more assets should be kept in GPU memory to help with the latency, because his GPU's 16 GB isn't being fully utilized.

Imo, RTX IO will make better use of GPU memory, and 8 GB wouldn't be enough. Decompression can also limit the SSD's potential.
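The gist of the CPU-decompression argument can be put in a back-of-the-envelope model: on PC the compressed data is staged in system RAM and inflated on CPU cores before being copied to VRAM, so the effective streaming rate is capped by whichever is slower, the drive or the decompressor. A rough sketch (all throughput numbers are illustrative assumptions, not measurements from the video):

```python
# Toy streaming model for the CPU-decompression path (all numbers are illustrative).
def effective_stream_rate(ssd_gbps: float, compression_ratio: float, decompress_gbps: float) -> float:
    """Uncompressed GB/s delivered to the game, capped by the drive or by decompression."""
    from_drive = ssd_gbps * compression_ratio     # what the drive could feed after inflation
    return min(from_drive, decompress_gbps)       # decompression throughput is the other cap

# PC-style path: NVMe drive + a couple of CPU cores doing decompression (assumed figures).
print(effective_stream_rate(ssd_gbps=3.5, compression_ratio=1.5, decompress_gbps=2.0))   # -> 2.0

# Dedicated-hardware path: decompression no longer eats CPU time (assumed figures).
print(effective_stream_rate(ssd_gbps=5.5, compression_ratio=1.6, decompress_gbps=22.0))  # -> 8.8
```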
gB1HPzm.jpg


PC environment with latency from CPU decompression.
pv56VYI.png


PS5 environment with Custom I/O.
sseGaPi.jpg

rJokUyZ.jpg



NX Gamer also thinks that Nanite constantly updating its clusters (which the CPU handles) when moving is another reason for the frame rate drops. So Alex is right that a higher-clocked CPU is better: the Nanite clusters can be updated faster. At least that's how I interpreted it from both videos.
PS5 SSD is "best-in-class" across all platforms, says Epic CEO Tim Sweeney


 

DenchDeckard

Moderated wildly
Mine is actually a 750 watt power supply, but it's hooked up to a fancy UPS which tells me just how much power any appliance connected to it is using, and my PC tops out around 600-620 watts when running the Matrix, much less when running Red Dead 2, which doesn't tax my CPU as much.

What's odd is that Cyberpunk is also around 600 watts. The GPU consumes 390 watts on its own, and that's normal for this particular EVGA card, which is overclocked out of the box. But Cyberpunk doesn't crash. At least not since I updated the drivers.
Oh, 750 watt. Interesting. I would have thought 750 Watts could be too low for that oc card. Maybe it is an issue but if you can monitor usage and see it's not a problem then ignore me lol.
 
Last edited:

hlm666

Member
The Unreal dev over on beyond3d disagrees with all of you saying that IO is the problem. But what would he know.

"I took a quick look in the profiler while running through the startup sequence yesterday and none of it looked like IO stuff directly:
1) Was mostly PSO compiles (as expected the first time)
2) Next most common was graphics driver stalls while creating resources/adjusting pool sizes. That sort of thing is a common target for IHVs to tweak buffer sizes, defrag timing, etc. for shipped games so sort of to be expected with a non-tuned workload as well. I imagine it will get better over time in drivers as well, especially now that we've forced people onto DX12 (thank god).
3) Remainder of the stalls I saw were things like spawning/deleting a ton of actors in a frame or something, which are demo issues in things like the crowd/traffic system that can also be improved over time (these systems are not core UE per se). These were the minority though.
Nothing looked categorically scary. The PSO situation sucks in general on PC, but there are some ways to mitigate that with PSO caches for a shipping game. A bunch of these stalls are because drivers are still doing rather conservative things (i.e. recompiling every hit shader in the scene) when it comes to RT PSO linking and that will likely improve further now that we've forced the issue as well.
Admittedly UE is a giant thing at this point, and certainly there's aspects on the CPU that are non-ideal. That said, I didn't see anything fundamentally unsolvable or needing esoteric tech here. The GPU side is already running quite smoothly and will only improve further."
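For context on point 1 and the mitigation the dev mentions: a PSO cache is essentially memoization keyed on the pipeline description, so the expensive driver-side compile happens once (ideally pre-warmed from a shipped cache) instead of mid-gameplay. A language-agnostic sketch of the idea; the names and the fake compile cost are made up for illustration and are not UE's actual API:

```python
# Minimal sketch of PSO-cache style memoization (illustrative, not Unreal's real API).
import time

_pso_cache: dict[str, str] = {}

def compile_pipeline(key: str) -> str:
    """Stand-in for the expensive driver-side compile that causes first-use hitches."""
    time.sleep(0.05)                # pretend compile cost
    return f"compiled:{key}"

def get_pipeline(key: str) -> str:
    pso = _pso_cache.get(key)
    if pso is None:                 # cache miss -> a hitch the first time this state is seen
        pso = compile_pipeline(key)
        _pso_cache[key] = pso
    return pso

def prewarm(keys) -> None:
    """Shipping games pre-warm from a recorded cache so misses don't happen mid-gameplay."""
    for k in keys:
        get_pipeline(k)

prewarm(["opaque_lit", "skin", "glass_rt_hit"])   # e.g. during a loading screen
print(get_pipeline("opaque_lit"))                 # later lookups are effectively free
```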

 

GHG

Gold Member
You are blaming the 2700x for this terrible early access engine?

Bro, the 2700x can hit over 144 fps average in Death Stranding at 1080p.

Not saying the 2700x and the Zen+ arch aren't dated, but if it's a properly multithreaded engine it's still more than good enough at 60 fps. Smooth 120 fps is usually out of the question though, unless it's a simple title.

It's more that when a game or engine doesn't work well with it, it really doesn't work well. Like you mentioned, the game in question needs to be multi-threaded or the CPU is going to struggle and underperform other CPUs it should be competing with.
 

GHG

Gold Member
Nope. :(

After trying everything from clean installs using DDU, firmware updates on the GPU and extra power cables, I finally returned the 3080 to Microcenter, who were very understanding and offered to replace it for free. I was like, what if it happens on this one, can I still return the replacement? And they were like, sure, you have one month to test it out. So I got a brand new replacement and the damn thing still crashed in UE5 and RDR2 after playing for around 10 minutes lol

I ran both games/demos for over an hour each on my 2080, so it's definitely nothing in my system. Control and Cyberpunk are both fine now on the 3080, so hopefully this GPU just has issues with these two particular games. I'm just gonna buy their 3-year warranty and replace it if this thing runs into issues when UE5 games start coming out in 2024.

Just wait for the 4xxx series of cards mate, they aren't too far out now. I've got a 2070 super and that's what I'm doing. The 3080 was never meant to happen for me, had everything from cancelled orders to a faulty unit (my fault for straying from EVGA) so I'm just taking it as a sign and I'm waiting.
 

SlimySnake

Flashless at the Golden Globes
Just wait for the 4xxx series of cards mate, they aren't too far out now. I've got a 2070 super and that's what I'm doing. The 3080 was never meant to happen for me, had everything from cancelled orders to a faulty unit (my fault for straying from EVGA) so I'm just taking it as a sign and I'm waiting.
I thought about it, but let's face it, we will all lose to those fucking bots, miners and scalpers once again, and even if there is no shortage by some miracle, Nvidia won't make the same mistake of pricing the xx80 cards at $700. They will all be $1,200 like the Ti and 12GB ones they came out with.

Starfield and Avatar are likely going to be out by November or early next year, and I just don't see myself beating out these scalpers and bots for at least the first six months.
 
It's more that when a game or engine doesn't work well with it, it really doesn't work well. Like you mentioned, the game in question needs to be multi-threaded or the CPU is going to struggle and underperform other CPUs it should be competing with.
Really, I agree with the gist of what you guys are saying: the 2700x is probably really struggling here, the 3700x is a much better CPU, and the 5700x is much better still, so it's definitely time to upgrade a 2700x these days.

But I wouldn't ever call it horrible, especially just because of a very unoptimized engine showing its weaknesses. I am still using a 6-core version of the 2700x.
 
Last edited:
  • Like
Reactions: GHG

Clear

CliffyB's Cock Holster
"Yes although this is shader *compilation*, which is a CPU task, and thus creates a CPU stutter. None of these stutters are on the GPU side."


The compiled shader isn't run on the CPU. It's run on the GPU, and what's more, the source shader code is held in system storage, read and processed by the CPU, and then stored in system RAM, from where it's fetched by the GPU. It's extremely I/O intensive.
 

sendit

Member
Aw man, sorry to hear. Maybe it is your 850 Watt PSU, but I find that odd. I would have thought it would be OK, but maybe it's drawing too much through the 3 x 8-pins. My MSI uses the same power setup, but I have a 1000 watt PSU. Hope you get it all sorted :)
Running a 3090 (ASUS Strix OC) on a 750 watt platinum grade power supply for the past ~1.5 years. Perfectly stable.
 

Corndog

Banned
Shader compilation/caching is an i/o process! The clue's in the naming!
Compilation is not an I/O process.
Edit: also, pretty sure they are compiled on the CPU. It doesn't make sense to do it on the GPU. Look it up.
 
Last edited:
"Yes although this is shader *compilation*, which is a CPU task, and thus creates a CPU stutter. None of these stutters are on the GPU side."

Almost everything that isn't purely GPU compute is at some point managed (at least started) by the CPU. So technically almost everything makes a game CPU limited (if you use that flawed definition). But being purely CPU limited is not that. CPU limited means the framerate will improve as long as you use a more powerful CPU.

If your framerate isn't improved much when you use a more powerful CPU, then it's not CPU limited anymore and it becomes limited by something else.
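That definition can be put as a one-line model: with CPU and GPU work overlapping, the frame time is roughly whichever side takes longer, so a faster CPU only raises the frame rate while the CPU side is the longer one. A minimal sketch:

```python
# Toy frame-time model: CPU and GPU work overlap, the slower side sets the frame rate.
def fps(cpu_ms: float, gpu_ms: float) -> float:
    return 1000.0 / max(cpu_ms, gpu_ms)

print(fps(cpu_ms=40, gpu_ms=20))   # 25 fps: CPU-bound, a faster CPU helps
print(fps(cpu_ms=20, gpu_ms=20))   # 50 fps: now balanced
print(fps(cpu_ms=10, gpu_ms=20))   # still 50 fps: further CPU gains do nothing, it's GPU-bound
```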
 

PaintTinJr

Member
Compilation is not an io process.
Edit: also pretty sure they are compiled on the cpu. Doesn’t make sense to do it on the gpu. Look it up.
It sort of is, if the shaders are small text files, and it is a shame we can't run this demo on other filesystems such as ext4, which handle large numbers of small files much more efficiently than NTFS or FAT, going by the experience of running Java programs with unzipped JAR files.

Someone with a workstation version of Windows using the ReFS filesystem for their primary drive would be a good test of whether a vastly different filesystem impacts framerate, because primary CPU clock speed should impact CPU cache speed and ultimately be the best way to speed up file handling on NTFS/FAT PCs, but not by much, as the benchmarks seem to show.
 

Hoddi

Member
The I/O is still unmatched.
It's still not the reason for these stutters. I compiled the demo without any compression whatsoever and it didn't make any difference to speak of. I even tried allocating my memory elsewhere (so there was little left for caching in sysRAM) and disk I/O still never exceeded even 1 GB/s.
 

PaintTinJr

Member
Mine is actually a 750 watt power supply, but it's hooked up to a fancy UPS which tells me just how much power any appliance connected to it is using, and my PC tops out around 600-620 watts when running the Matrix, much less when running Red Dead 2, which doesn't tax my CPU as much.

What's odd is that Cyberpunk is also around 600 watts. The GPU consumes 390 watts on its own, and that's normal for this particular EVGA card, which is overclocked out of the box. But Cyberpunk doesn't crash. At least not since I updated the drivers.
How does it crash? And what brand of PSU? If it is crashing hard enough to switch the system off - by tripping your platinum PSU's overpower safety feature - and it happens to be an EVGA PSU too, then it is almost certainly the PSU.

I had to take a bundled EVGA 700 watt Gold PSU when getting my RTX 3060 Dual, and despite the power draw being below 600 watts, my 12-core Xeon/3060 and Minecraft killed two of them, yet my old 650 watt Bronze SLI PSU still worked fine when reinstated. I eventually bought a Fractal Ion 860p after my friend sent me a Gamers Nexus video showing how the certification means nothing compared to credible reviews of PSUs in the wild; otherwise it is pot luck.
 

PaintTinJr

Member
It's still not the reason for these stutters. I compiled the demo without any compression whatsoever and it didn't make any difference to speak of. I even tried allocating my memory elsewhere (so there was little left for caching in sysRAM) and disk I/O still never exceeded even 1 GB/s.
But it still could be I/O related: not the total transfer size, but the number of small transfers, or the latency of a few key transfers that require synchronisation - such as a shader in a region you are moving towards, outside of the current one, that needs compiling (and could be long and complex), or a small data transfer that needs to complete and will stall the CPU until it does.

Someone said in another thread that the PS5 transfers 10 GB more data than the PC, based on doing SMART analysis on PC of an NVMe drive that had been in the PS5 - but that could just be the section of the PS5 run that falls outside the PC sample causing that difference.
 

SlimySnake

Flashless at the Golden Globes
How does it crash? And what brand of PSU? If it is crashing hard enough to switch the system off - by tripping your platinum PSU's overpower safety feature - and it happens to be an EVGA PSU too, then it is almost certainly the PSU.

I had to take a bundled EVGA 700 watt Gold PSU when getting my RTX 3060 Dual, and despite the power draw being below 600 watts, my 12-core Xeon/3060 and Minecraft killed two of them, yet my old 650 watt Bronze SLI PSU still worked fine when reinstated. I eventually bought a Fractal Ion 860p after my friend sent me a Gamers Nexus video showing how the certification means nothing compared to credible reviews of PSUs in the wild; otherwise it is pot luck.
It just quits the games. For UE5, I get an error saying DX Device HUNG or disconnected or something like that, which Google suggested was a GPU issue. I figured there wasn't enough power being supplied to it, so I now run three different PSU-to-PCIe cables, one for each 8-pin slot. Didn't help.

RDR2 just quits out with no error message. Before I did a clean install and reinstalled the drivers, all games were crashing without any error messages. Mafia, Cyberpunk, Control. Now it's only RDR2 and UE5, both around 10 minutes in.

The PC keeps running. My PSU is a Seasonic GX750 80 Plus Gold that cost a whopping $150. Someone here told me not to skimp on the PSU when I replaced it last year, so I got the best one I could find. They literally have a 10-year warranty. Haven't had any issues with it until I installed this card last week and started playing UE5 at native 4K. The power draw for RDR2 is lower than Cyberpunk, which also hits 600 watts, and yet it quits all the same. But that gives me a good idea: I will drop the resolution to 1440p, cap it to 30 fps and see if it still crashes. I'm getting 45 fps at native 4K in the Matrix, so that's incredible performance, but I'd like to play it for more than 10 minutes at a time lol.
 

PaintTinJr

Member
It just quits the games. For UE5, I get an error saying DX Device HUNG or disconnected or something like that, which Google suggested was a GPU issue. I figured there wasn't enough power being supplied to it, so I now run three different PSU-to-PCIe cables, one for each 8-pin slot. Didn't help.

RDR2 just quits out with no error message. Before I did a clean install and reinstalled the drivers, all games were crashing without any error messages. Mafia, Cyberpunk, Control. Now it's only RDR2 and UE5, both around 10 minutes in.

The PC keeps running. My PSU is a Seasonic GX750 80 Plus Gold that cost a whopping $150. Someone here told me not to skimp on the PSU when I replaced it last year, so I got the best one I could find. They literally have a 10-year warranty. Haven't had any issues with it until I installed this card last week and started playing UE5 at native 4K. The power draw for RDR2 is lower than Cyberpunk, which also hits 600 watts, and yet it quits all the same. But that gives me a good idea: I will drop the resolution to 1440p, cap it to 30 fps and see if it still crashes. I'm getting 45 fps at native 4K in the Matrix, so that's incredible performance, but I'd like to play it for more than 10 minutes at a time lol.
Yeah it won't be your power supply going by the reviews of that top tier one.

Is the high precision motherboard clock option set in your UEFI? I've had issues when that setting is disabled. As the RAM has to shadow the GPU VRAM to a certain degree, I would probably do an overnight memory test on your RAM, just in case you have a faulty module.

What kind of memory usage does this demo (and the games that crash) show when running at 4K on your GPU? Is it possible that they are going over a module boundary, meaning the system is stable when using fewer sticks and crashes when accessing all of them?
 

SlimySnake

Flashless at the Golden Globes
You need to get off the Sony PR train. The SSD is fast. Does it have fewer potential bottlenecks? Sure. Is it unmatched? Of course it isn't. Technology doesn't stand still.
I don't know why this stuff is still coming up after Epic revealed to DF that the Nanite demos needed only 300 MB/s of transfer speed. I'm sure the PS5 I/O is helping with decompression here, but the XSX is running the same demo at the same settings and almost the same framerate without that fancy I/O block. Maybe the PS5 is able to keep up with the XSX due to its I/O, but the PS5 and the XSX are both severely underperforming here compared to RTX 20 series and RDNA 2.0 cards.

This demo favors the 6000 series. The RX 6700 XT is a 13 TFLOPs card, only 9% more powerful than the XSX, and yet it is averaging 44 fps while the XSX and PS5 average around 25 fps around the city.

More importantly, it is destroying the 2080 and 2080 Super, which are usually much better at ray tracing and on par with it in standard rasterization, so clearly the game was optimized for the RDNA 2.0 GPUs found in consoles. And yet the consoles are struggling to hit 30 fps at a mere 1080p, although TSR might have some kind of performance hit compared to the native 1080p results we see here.

Still, things don't add up. The PS5 and XSX are underperforming. They are both roughly on par with the 2080, if not the 2080 Super, but they aren't performing like they should here despite the same RDNA 2.0 family of cards destroying even 30 series cards on PC, at least at 1080p. Hell, my RTX 3080 is roughly 2x more powerful than the 2080, which itself is on par with the PS5 and XSX in most AMD-heavy benchmarks, and yet I can run the game at native 4K at 40-45 fps. That's 4x more pixels AND 50% more framerate.

It's clearly CPU clock bound. If it were designed around the PS5 I/O, the PS5 would easily be doing 1440p at 40-45 fps, since my 2x-more-powerful card is able to do native 4K at 40 fps.
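The arithmetic behind that last comparison works out roughly like this (a quick sketch; the fps values are midpoints of the figures given in the post):

```python
# Quick check on the 4K-vs-1080p claim above.
pixels_4k    = 3840 * 2160
pixels_1080p = 1920 * 1080
print(pixels_4k / pixels_1080p)        # 4.0x the pixels

console_fps = (25 + 30) / 2            # consoles roughly 25-30 fps at ~1080p (per the post)
rtx3080_fps = (40 + 45) / 2            # 3080 at native 4K (per the post)
print(rtx3080_fps / console_fps - 1)   # ~0.55 -> roughly 50% higher frame rate at 4x the pixels
```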

NNqLaaV.jpg
 

SlimySnake

Flashless at the Golden Globes
Yeah it won't be your power supply going by the reviews of that top tier one.

Is the high precision motherboard clock option set in your UEFI? I've had issues when that setting is disabled. As the RAM has to shadow the GPU VRAM to a certain degree, I would probably do an overnight memory test on your RAM, just in case you have a faulty module.

What kind of memory usage does this demo (and the games that crash) show when running at 4K on your GPU? Is it possible that they are going over a module boundary, meaning the system is stable when using fewer sticks and crashes when accessing all of them?
I haven't checked the memory usage but I can check that. Someone mentioned increasing the virtual memory allocation in Windows, but I totally forgot to try that until just now.

I will check out the high precision option in my mobo tonight. Thanks for the tip.
 