
[Digital Foundry] Immortals of Aveum PS5/Xbox Series X/S: Unreal Engine 5 is Pushed Hard - And Image Quality Suffers

SlimySnake

Flashless at the Golden Globes
By the way, you seem to be persisting in your wrong assumption that an 18% higher theoretical TF advantage = an 18% 'performance' advantage; that is far from being the case.
Bizarre of you to say this when I literally said that the xsx fails to hold that advantage when dynamic elements are added on screen.
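For context, that 18% figure is just the paper TF math; a minimal sketch, assuming the publicly quoted specs (XSX 52 CUs at 1.825 GHz, PS5 36 CUs at up to 2.23 GHz):

# Paper FP32 teraflops: CUs x 64 lanes x 2 ops per FMA x clock (GHz).
# Clock/CU figures are the publicly quoted ones, assumed here purely for illustration.
def fp32_tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000

xsx = fp32_tflops(52, 1.825)  # ~12.15 TF
ps5 = fp32_tflops(36, 2.23)   # ~10.28 TF
print(f"XSX {xsx:.2f} TF vs PS5 {ps5:.2f} TF -> {xsx / ps5 - 1:.0%} on paper")  # ~18%

Whether any of that paper advantage shows up on screen is exactly what is being argued here.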
 

Lysandros

Member
Bizarre of you to say this when I literally said that the xsx fails to hold that advantage when dynamic elements are added on screen.
Maybe it's the wording then, never mind. What I disagree with to begin with is turning specific cases into a generality. What you say is certainly true in the context of, say, Control. But we are in the third year of the generation, and there are also plenty of cases where the XSX slightly outperforms the PS5 in dynamic situations, and vice versa (the PS5 outperforming the XSX with not much going on on screen), depending on the game.
 

Darsxx82

Member
Lol, so this game runs terribly on PC too, and performance on equivalent hardware seems roughly on par with consoles, as the 2080 offers similar performance to the PS5. The problem is that settings barely make a difference aside from Global Illumination, Shadow Rendering Pool Size, and Shadow Resolution Quality.

The 3600 seems to have major traversal stutters not seen on consoles. Also, according to benchmarks, AMD GPUs perform noticeably better in this game than their NVIDIA counterparts. The 6700 XT for instance beats the 2080 Ti/3070 and the 6800 XT beats the 3080 by 15-20%.
More confusion regarding the developer's statements...

The settings that have a computational cost in the PC analysis are the same on XSX and PS5. In the .ini files published by the developer for XSX and PS5, only the one related to Sharpening/CAS differs. The rest are either 1:1 or only listed for XSX and not for PS5.

We still do not know what (if any) higher setting on PS5 makes the IQ better, given it isn't CAS being enabled (which has no computational cost).

PS: 6700 XT performance is better than the PS5 in this game, despite the developer insisting that the PS5 is equivalent in performance to a 6700 XT.
 

Gaiff

SBI’s Resident Gaslighter
PS: 6700 XT performance is better than the PS5 in this game, despite the developer insisting that the PS5 is equivalent in performance to a 6700 XT.
I wouldn't say he insists unless I missed something. He says the PS5's GPU is basically a 6700 XT but falls behind because it has no boost clocks. The 6700 XT has more CUs but lower memory bandwidth so in general it's not THAT far ahead but should be faster most of the time, Sony developed games being the obvious exception. But yeah, in this game, a 6700 XT crushes the 2080 by 25% or so.
 

Mr Moose

Member
More confusion regarding the developer's statements...

The settings that have a computational cost in the PC analysis are the same on XSX and PS5. In the .ini files published by the developer for XSX and PS5, only the one related to Sharpening/CAS differs. The rest are either 1:1 or only listed for XSX and not for PS5.

We still do not know what (if any) higher setting on PS5 makes the IQ better, given it isn't CAS being enabled (which has no computational cost).

PS: 6700 XT performance is better than the PS5 in this game, despite the developer insisting that the PS5 is equivalent in performance to a 6700 XT.
+CVars=r.RenderTargetPoolMin=600
 

shamoomoo

Member
I wouldn't say he insists unless I missed something. He says the PS5's GPU is basically a 6700 XT but falls behind because it has no boost clocks. The 6700 XT has more CUs but lower memory bandwidth so in general it's not THAT far ahead but should be faster most of the time, Sony developed games being the obvious exception. But yeah, in this game, a 6700 XT crushes the 2080 by 25% or so.
I don't think it's just the 4 extra CUs making it perform better; there's also no memory contention, since the 6700 XT is a discrete unit, and it has more cache plus a higher clock than the PS5.
 

Kataploom

Gold Member
I wonder if decisions about settings were made by different teams of people? Some people like sharpening filters and feel it's worth the perceived recovery in detail despite sharpening artifacts; other people hate it.
I've said it twice in this thread: both versions seem to have been done by people not knowing the other version's state, so basically there's no full parity.
 
I see many saying that Microsoft should have released a $400 digital Series X, but is that really something they can do?

Phil Spencer recently said that he loses between 100 and 200 dollars for each console sold, while Sony in 2021 declared that it was no longer losing money on the standard edition.
 
If that dev really wrote that performance tool all I'm gonna say after looking at the PC menu is that he has no clue when it comes to the performance cost of settings so take what he is saying with a pinch of salt. He may be great at whatever his main job is but he's definitely not good at gauging performance.

Stuff that has literally zero impact on performance on PC can have a huge cost assigned to it in the tool. It's at best misleading and at worst completely useless.
 

Mr Moose

Member
If that dev really wrote that performance tool all I'm gonna say after looking at the PC menu is that he has no clue when it comes to the performance cost of settings so take what he is saying with a pinch of salt. He may be great at whatever his main job is but he's definitely not good at gauging performance.

Stuff that has literally zero impact on performance on PC can have a huge cost assigned to it in the tool. It's at best misleading and at worst completely useless.
Poor Epic, leave them alone!
UNDERSTANDING THE GPU AND CPU BUDGETS
The performance budget tool is integrated into the game’s Graphics menu. Upon launching Immortals of Aveum for the first time, the game calculates your budget using Unreal Engine 5’s benchmark tool called Synthetic Benchmark. This tool provides performance insights to specific Unreal Engine 5 functions on your hardware. Synthetic Benchmark returns two scores, the GPUIndex and the CPUIndex, which are displayed as your GPU budget and CPU budget. The game then presets your graphics settings to recommended levels based on your hardware.
[Screenshot: Immortals of Aveum PC performance budget tool]
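To be clear about what that flow implies, here is a purely hypothetical sketch; the function name and threshold values are invented, and only the idea of mapping the benchmark's GPUIndex/CPUIndex scores to a recommended preset comes from the blurb above:

# Hypothetical illustration only: map Synthetic Benchmark scores to a preset.
# The thresholds and tier names below are made up for the example.
def recommend_preset(gpu_index: float, cpu_index: float) -> str:
    if gpu_index < 150 or cpu_index < 150:
        return "Low"
    if gpu_index < 300:
        return "Medium"
    if gpu_index < 500:
        return "High"
    return "Ultra"

print(recommend_preset(gpu_index=320, cpu_index=185))  # -> High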
 

Kataploom

Gold Member
I see many saying that Microsoft should have released a $400 digital Series X, but is that really something they can do?

Phil Spencer recently said that he loses between 100 and 200 dollars for each console sold, while Sony in 2021 declared that it was no longer losing money on the standard edition.
I think it mostly had to do with chip availability; they could produce basically as many Series S as they wanted, since they were not competing with PS5 and XSX for supplies. I don't know much about it, but that's as much as I could gather from the little I learned during the GPU crypto shortage technical conversations.
 

Mr.Phoenix

Member
More confusion regarding the developer's statements...

The settings that have a computational cost in the PC analysis are the same on XSX and PS5. In the .ini files published by the developer for XSX and PS5, only the one related to Sharpening/CAS differs. The rest are either 1:1 or only listed for XSX and not for PS5.

We still do not know what (if any) higher setting on PS5 makes the IQ better, given it isn't CAS being enabled (which has no computational cost).

PS: 6700 XT performance is better than the PS5 in this game, despite the developer insisting that the PS5 is equivalent in performance to a 6700 XT.
You know the thing about being on a witch hunt is that you will find witches no matter what.

The guy made the damn game, and you are here trying to discredit what he is saying.

But more importantly, it doesn't take a rocket scientist to know that whatever info he has provided isn't exhaustive of everything that went into every configuration they had of the game. And that 6700 XT comparison you are making... have any idea how stupid it is to make such a comparison? Are you using the exact same CPU, are you running at the exact same clocks, are you even using the exact same drivers or running on the same platform? You do realize that you can pair a 6700 XT with a Zen 4 16c/32t CPU running at 5 GHz and get vastly different performance than the same GPU paired with a 6c/12t Zen 2 CPU running at 3.5 GHz, right?

If he says this is the PS5-equivalent GPU, he is speaking holistically, not specifically. Again, it doesn't take a genius to figure this out, but when viewing info through biased glasses, reason is the first thing that tends to go out of the window. The dev even stated where the PC GPU and the PS5's differ or fall short. In the same way, he spoke to the things that are just better on the PS5 than on a PC or Xbox.
 

SlimySnake

Flashless at the Golden Globes
If that dev really wrote that performance tool all I'm gonna say after looking at the PC menu is that he has no clue when it comes to the performance cost of settings so take what he is saying with a pinch of salt. He may be great at whatever his main job is but he's definitely not good at gauging performance.

Stuff that has literally zero impact on performance on PC can have a huge cost assigned to it in the tool. It's at best misleading and at worst completely useless.
The tool isn't perfect; he himself said that.

Here are his comments on the tool, which leverages the actual benchmark from Epic.

As for the perf tool - the synthetic benchmark is something Epic wrote. It's really GPU oriented. IIRC it does like 8 GPU tests with various weights vs like one single core test on the CPU.


SMA and Resizable Bar both help but those are both about getting data to the GPU faster and won't really factor into the CPU rating.

We are actually in the middle of trying to figure out if there is a better version of testing the CPU and figuring the cost. Right now the minspec 9700k scores around a 185 and my 2 primary development machines (5950x 4.45 all core and 12900k stock) don't score all that much better. 220-240 depending on what else I have running. So that isn't really all that helpful for trying to determine average FPS or low 1%. We are working on it. Please give us time. This whole thing is pretty new.

From what I understand, the actual benchmark was written by Epic to try and figure out which GPU/console runs their Nanite system best, since it's mostly IO driven. He himself said that the tool isn't helpful for determining the average or 1% FPS. It's a work in progress.

On another note, he said that they never ran their tool on the consoles, but he's going to give it a shot today because it sounds like fun to him. I love this guy.
I doubt we've ever run it on the consoles but that sounds like fun. I'll see if I have a second tomorrow to try it on a devkit.
 

Mr Moose

Member
You know the thing about being on a witch hunt is that you will find witches no matter what.

The guy made the damn game, and you are here trying to discredit what he is saying.

But more importantly, it doesn't take a rocket scientist to know that whatever info he has provided isn't exhaustive of everything that went into every configuration they had of the game. And that 6700 XT comparison you are making... have any idea how stupid it is to make such a comparison? Are you using the exact same CPU, are you running at the exact same clocks, are you even using the exact same drivers or running on the same platform? You do realize that you can pair a 6700 XT with a Zen 4 16c/32t CPU running at 5 GHz and get vastly different performance than the same GPU paired with a 6c/12t Zen 2 CPU running at 3.5 GHz, right?

If he says this is the PS5-equivalent GPU, he is speaking holistically, not specifically. Again, it doesn't take a genius to figure this out, but when viewing info through biased glasses, reason is the first thing that tends to go out of the window. The dev even stated where the PC GPU and the PS5's differ or fall short. In the same way, he spoke to the things that are just better on the PS5 than on a PC or Xbox.
He must be Tom's GAF account, because he's blind. (Joking)
The rest are either 1:1 or only listed for XSX and not for PS5.
He missed the part that says
+CVars=r.RenderTargetPoolMin=600
Which is not on the Xbox part.

"The other devs said about async and DS! So this guy is obviously wrong!"
 

Mr.Phoenix

Member
A bit off topic, but does anyone actually care about the game itself? I have the feeling it's one of those cases where the tech reviews are more interesting than the actual product.
Definitely off topic. If you want to talk about the game, the game review thread, or the OT if that exists, is where that happens.

These kinda threads are exclusively about the tech. The game is just the horse the tech rode in on.
 

Darsxx82

Member
I wouldn't say he insists unless I missed something. He says the PS5's GPU is basically a 6700 XT but falls behind because it has no boost clocks. The 6700 XT has more CUs but lower memory bandwidth so in general it's not THAT far ahead but should be faster most of the time, Sony developed games being the obvious exception. But yeah, in this game, a 6700 XT crushes the 2080 by 25% or so.
It also fails to mention that the 6700 XT features the 96 MB Infinity Cache, which boosts effective bandwidth.
You know the thing about being on a witch hunt is that you will find witches no matter what.

The guy made the damn game, and you are here trying to discredit what he is saying.

But more importantly, it doesn't take a rocket scientist to know that whatever info he has provided isn't exhaustive of everything that went into every configuration they had of the game. And that 6700 XT comparison you are making... have any idea how stupid it is to make such a comparison? Are you using the exact same CPU, are you running at the exact same clocks, are you even using the exact same drivers or running on the same platform? You do realize that you can pair a 6700 XT with a Zen 4 16c/32t CPU running at 5 GHz and get vastly different performance than the same GPU paired with a 6c/12t Zen 2 CPU running at 3.5 GHz, right?

If he says this is the PS5-equivalent GPU, he is speaking holistically, not specifically. Again, it doesn't take a genius to figure this out, but when viewing info through biased glasses, reason is the first thing that tends to go out of the window. The dev even stated where the PC GPU and the PS5's differ or fall short. In the same way, he spoke to the things that are just better on the PS5 than on a PC or Xbox.
There is no witch hunt. I am simply trying to point out inconsistencies in his statements, which is very different from denying his knowledge or the veracity of what he says.

Yes, comparing GPU performance on PC vs console isn't 1:1, but that's not my point. I'm pointing out certain statements that don't reflect the real situation, specifically when he compares bandwidth and forgets to mention the 6700 XT's additional 96 MB of Infinity Cache. That is, it is not a 1:1 comparison.



He must be Tom's GAF account, because he's blind. (Joking)

He missed the part that says

Which is not on the Xbox part.
Ohhh! Your TOM obsession, always present... 🙃🙃

Yes, sorry, I missed that part, partly because I focused more on the XSX settings that I understood could affect the IQ difference seen in the comparisons, given that both have the same base resolution...
The thing is, publishing those .ini files has, if anything, only caused more confusion.


"The other devs said about async and DS! So this guy is obviously wrong!"

Is that you being "joking" again, eh? 🙃
 

SomeGit

Member
A bit off topic, but does anyone actually care about the game itself? I have the feeling it's one of those cases where the tech reviews are more interesting than the actual product.

If it wasn't the first game to use most of UE5's bells and whistles I doubt many would care honestly.
 

sinnergy

Member
PS5 has a sharpening filter applied; it's not there for Xbox in the .ini, which explains the identical resolution. It should be easy to patch in, though.
 

mrcroket

Member
Well, so it looks like in modern engines, without the use of ray tracing, consoles run like a 2080... Some people here will be really mad...
 

Gaiff

SBI’s Resident Gaslighter
Well, so it looks like in modern engines, without the use of ray tracing, consoles run like a 2080... Some people here will be really mad...
Why? This is about where it should land. Anywhere from a 2070S to a 2080S is typically where console performance lies against NVIDIA hardware. Better to compare them to AMD hardware, though. In this case, it seems something like a 6600 XT or a non-XT 6700 would match the PS5.
 

mrcroket

Member
Why? This is about where it should land. Anywhere from a 2070S to a 2080S is typically where console performance lies against NVIDIA hardware. Better to compare them to AMD hardware, though. In this case, it seems something like a 6600 XT or a non-XT 6700 would match the PS5.
Take a look around other threads; there was a user who swore to me that his 2070S was much better than the GPUs in consoles. And he is not the only one I have found here claiming similar things.
 

Mr.Phoenix

Member
Why? This is about where it should land. Anywhere from a 2070S to a 2080S is typically where console performance lies against NVIDIA hardware. Better to compare them to AMD hardware, though. In this case, it seems something like a 6600 XT or a non-XT 6700 would match the PS5.
You see, this is just not true. And whenever I see stuff like that, all I see is someone who just went and looked up GPU TF numbers, found the closest one to the PS5, and called it a day. But it just doesn't work that way. PS5-level performance isn't just a TF thing. It's a TF, bandwidth, IO, SSD, platform and whatever-other-special-customizations thing. There is a reason why they say console hardware has a way of punching above its weight.

I don't really wanna get into too much detail as I feel this is a waste of my time, but I will say this much.

If comparing a PC GPU to a console, round up, not down. E.g.:

the 6600 XT is a 32 CU, 8 GB, 256 GB/s, 8 TF (base) - 10.6 TF (boost) GPU.
the 6700 XT is a 40 CU, 12 GB, 384 GB/s, 11.8 TF (base) - 13.2 TF (boost) GPU.

It's flat-out stupid to look at those two GPUs and say the 6600 XT is more like the PS5 because it has a similar peak TF number, because everything that goes into that GPU matters too. How much RAM does it have, what's its bandwidth, what are its drivers on PC, how often is it running at boost or base clocks, what CPU is paired with that GPU... hell, is it DX11 or DX12? This is why, whenever comparing PC to console hardware, you round up. You would basically need a PC with a 6700 XT, if using a CPU in the PS5's ballpark, to get PS5-level performance.
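For those two spec lines, the TF numbers fall straight out of the usual back-of-the-envelope formula (clock figures below are AMD's published base/boost clocks, assumed here for illustration):

# FP32 TF = CUs x 64 lanes x 2 ops (FMA) x clock in GHz; outputs are ballpark only.
def fp32_tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000

print(fp32_tflops(32, 1.968), fp32_tflops(32, 2.589))  # 6600 XT: ~8.1 / ~10.6 TF
print(fp32_tflops(40, 2.321), fp32_tflops(40, 2.581))  # 6700 XT: ~11.9 / ~13.2 TF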
 

Bojji

Member
You see, this is just not true. And whenever I see stuff like that, all I see is someone who just went and looked up GPU TF numbers, found the closest one to the PS5, and called it a day. But it just doesn't work that way. PS5-level performance isn't just a TF thing. It's a TF, bandwidth, IO, SSD, platform and whatever-other-special-customizations thing. There is a reason why they say console hardware has a way of punching above its weight.

I don't really wanna get into too much detail as I feel this is a waste of my time, but I will say this much.

If comparing a PC GPU to a console, round up, not down. E.g.:

the 6600 XT is a 32 CU, 8 GB, 256 GB/s, 8 TF (base) - 10.6 TF (boost) GPU.
the 6700 XT is a 40 CU, 12 GB, 384 GB/s, 11.8 TF (base) - 13.2 TF (boost) GPU.

It's flat-out stupid to look at those two GPUs and say the 6600 XT is more like the PS5 because it has a similar peak TF number, because everything that goes into that GPU matters too. How much RAM does it have, what's its bandwidth, what are its drivers on PC, how often is it running at boost or base clocks, what CPU is paired with that GPU... hell, is it DX11 or DX12? This is why, whenever comparing PC to console hardware, you round up. You would basically need a PC with a 6700 XT, if using a CPU in the PS5's ballpark, to get PS5-level performance.

The PS5 has a 6600 XT-class GPU but with almost twice the memory bandwidth. There is no GPU like that in the PC space, so yeah, direct comparisons don't really work.
 

winjer

Gold Member
The PS5 has a 6600 XT-class GPU but with almost twice the memory bandwidth. There is no GPU like that in the PC space, so yeah, direct comparisons don't really work.

But the 6600 XT does have 32 MB of L3 (Infinity) cache. It's not much, but at 1080p, 32 MB manages a roughly 50% cache hit rate.
The problem is that at 1440p the hit ratio goes down to about 35%, and at 4K it's around 25%.
Although the 6600 XT has half the memory channels, its memory is clocked higher, at 2000 MHz, while the PS5's memory is clocked at 1750 MHz.
On the other side, the PS5 GPU has to share that memory bandwidth with the CPU. That is probably 52 GB/s out of the 448 GB/s.
So the difference in the memory subsystem is not as big as it seems.
Probably the biggest problem with the 6600 XT is its PCIe x8 bus. On PC, going from the CPU to the GPU, and their respective memory pools, is costly, and with this bus cut in half it's even worse.
Meanwhile, on the PS5, the CPU and GPU are right next to each other, and talking to each other is much easier and cheaper than on PC.
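As a rough first-order model (assuming every Infinity Cache hit avoids a VRAM transaction, which is optimistic, and ignoring contention entirely), the effective bandwidth works out roughly like this:

# Effective bandwidth ~ VRAM bandwidth / (1 - hit ratio); hit ratios are the
# 32 MB Infinity Cache figures quoted above. Treat this as an upper-bound sketch.
def effective_bw(vram_gbs: float, hit_ratio: float) -> float:
    return vram_gbs / (1.0 - hit_ratio)

for res, hit in [("1080p", 0.50), ("1440p", 0.35), ("4K", 0.25)]:
    print(f"6600 XT @ {res}: ~{effective_bw(256, hit):.0f} GB/s effective vs PS5's 448 GB/s shared")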
 

SlimySnake

Flashless at the Golden Globes
The PS5 has a 6600 XT-class GPU but with almost twice the memory bandwidth. There is no GPU like that in the PC space, so yeah, direct comparisons don't really work.

The Immortals dev compared it to a 13 TF 6700 XT (not the non-XT version). He specifically mentioned the higher VRAM bandwidth on the PS5. The 6700 XT has 384 GB/s, which is probably about what the PS5 GPU has access to after the CPU takes whatever it needs.

It seems Nanite, or UE5 in general, is very IO bound, so the PS5 with its fast IO seems to be performing more like a 13 TF AMD card.
 

Gaiff

SBI’s Resident Gaslighter
You see, this is just not true. And whenever I see stuff like that, all I see is someone who just went and looked up GPU TF numbers, found the closest one to the PS5, and called it a day. But it just doesn't work that way. PS5-level performance isn't just a TF thing. It's a TF, bandwidth, IO, SSD, platform and whatever-other-special-customizations thing. There is a reason why they say console hardware has a way of punching above its weight.

I don't really wanna get into too much detail as I feel this is a waste of my time, but I will say this much.

If comparing a PC GPU to a console, round up, not down. E.g.:

the 6600 XT is a 32 CU, 8 GB, 256 GB/s, 8 TF (base) - 10.6 TF (boost) GPU.
the 6700 XT is a 40 CU, 12 GB, 384 GB/s, 11.8 TF (base) - 13.2 TF (boost) GPU.

It's flat-out stupid to look at those two GPUs and say the 6600 XT is more like the PS5 because it has a similar peak TF number, because everything that goes into that GPU matters too. How much RAM does it have, what's its bandwidth, what are its drivers on PC, how often is it running at boost or base clocks, what CPU is paired with that GPU... hell, is it DX11 or DX12? This is why, whenever comparing PC to console hardware, you round up. You would basically need a PC with a 6700 XT, if using a CPU in the PS5's ballpark, to get PS5-level performance.
Why do you act like this is new to you and you don't understand? In this particular game, this is how a 6600 XT performs:


[Chart: average FPS at 1920x1080]

[Chart: minimum FPS at 1920x1080]


This is at 1080p/Max settings. Notice how the 6700 XT is above a 2080 Ti/3070? The 2080, as per DF's benchmark, is in the ballpark of a PS5, but AMD GPUs perform much better in this game. The PS5 would thus fall somewhere around a 6600 XT, which would be around a 2080, or around a standard 6700. A 6700 XT wipes the floor with the 2080 in this game.

This is in this particular game. The 6600 XT sometimes performs much worse than the PS5, sometimes similarly, depending on the bottleneck.
 

SlimySnake

Flashless at the Golden Globes
You see, this is just not true. And whenever I see stuff like that, all I see is someone who just went and looked up GPU TF numbers, found the closest one to the PS5, and called it a day. But it just doesn't work that way. PS5-level performance isn't just a TF thing. It's a TF, bandwidth, IO, SSD, platform and whatever-other-special-customizations thing. There is a reason why they say console hardware has a way of punching above its weight.

I don't really wanna get into too much detail as I feel this is a waste of my time, but I will say this much.

If comparing a PC GPU to a console, round up, not down. E.g.:

the 6600 XT is a 32 CU, 8 GB, 256 GB/s, 8 TF (base) - 10.6 TF (boost) GPU.
the 6700 XT is a 40 CU, 12 GB, 384 GB/s, 11.8 TF (base) - 13.2 TF (boost) GPU.

It's flat-out stupid to look at those two GPUs and say the 6600 XT is more like the PS5 because it has a similar peak TF number, because everything that goes into that GPU matters too. How much RAM does it have, what's its bandwidth, what are its drivers on PC, how often is it running at boost or base clocks, what CPU is paired with that GPU... hell, is it DX11 or DX12? This is why, whenever comparing PC to console hardware, you round up. You would basically need a PC with a 6700 XT, if using a CPU in the PS5's ballpark, to get PS5-level performance.
It would've been perfectly reasonable to assert that the 6600 XT is more like the PS5 before UE5. UE5 seems to be a paradigm shift in how engines are designed: they are leveraging IO and memory bandwidth to stream in data really fast, something UE4 and other last-gen engines did not do. The 6600 XT was indeed bottlenecked by its 256 GB/s VRAM, but that only affected 4K performance; 1080p and 1440p performance was roughly on par with the PS5 in most games.

With UE5, that changes, because now VRAM bandwidth becomes important even at 1080p. And it seems Tim Sweeney was right when he said this:

[Image: Tim Sweeney quote]


People forget that Tim Sweeney pestered Cerny for an SSD. Probably because he knew where he wanted to take Unreal Engine in the future.
We'd been getting requests for an SSD all the way back to PlayStation 4. In particular, Tim Sweeney, who is the visionary founder of Epic Games, he said hard drives were holding the industry back. He didn't say hard drives though, he said, "rusty spinning media." [...] Developers asked for an NVME SSD with at least 1 GB/s of read speed. And we looked at that and we decided to go maybe 5 to 10 times that speed. It's always good to have a high target there.
 

Mr.Phoenix

Member
On the other side, the PS5 GPU has to share that memory bandwidth with the CPU. That is probably 52 GB/s out of the 448 GB/s.
So the difference in the memory subsystem is not as big as it seems.
People do this a lot, but that's not how it works. You don't look at a CPU and GPU that share the same bus and RAM and say the CPU takes up 52 GB/s, any more than you say 2-2.5 GB of RAM is reserved for the OS as if some RAM chips get taken off the table.

If the CPU footprint is 2-3 GB at any time, that 3 GB is spread across all 8 RAM chips in the PS5. That means that at any time the CPU has access to all 448 GB/s of bandwidth, the same way the GPU with an 8 GB footprint has access to 448 GB/s of bandwidth.

Memory contention, which is what a lot of people refer to, only happens if individual memory chips are segregated for either the CPU or the GPU, in which case that dedicated chip (or two or three) can only communicate via a specific bus, which in turn reduces the remaining available bus width for everything else. But that doesn't make sense when every RAM chip is identical.

The PS5 CPU and GPU have 448 GB/s of bandwidth available to either of them... all the time, simply because their data is spread out across all 8 RAM chips at any given time.
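A toy illustration of that interleaving idea (channel count and stride are assumptions for illustration, not confirmed PS5 internals, and this deliberately ignores contention, which comes up below):

# Toy address-interleaving model: consecutive blocks land on different GDDR6
# channels, so CPU and GPU traffic both spread across the whole 256-bit bus.
def channel_for_address(addr: int, channels: int = 8, stride: int = 256) -> int:
    return (addr // stride) % channels

for block in range(8):
    addr = block * 256
    print(f"0x{addr:05x} -> channel {channel_for_address(addr)}")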

Why do you act like this is new to you and you don't understand? In this particular game, this is how a 6600 XT performs:


[Chart: average FPS at 1920x1080]

[Chart: minimum FPS at 1920x1080]


This is at 1080p/Max settings. Notice how the 6700 XT is above a 2080 Ti/3070? The 2080, as per DF's benchmark, is in the ballpark of a PS5, but AMD GPUs perform much better in this game. The PS5 would thus fall somewhere around a 6600 XT, which would be around a 2080, or around a standard 6700. A 6700 XT wipes the floor with the 2080 in this game.

This is in this particular game. The 6600 XT sometimes performs much worse than the PS5, sometimes similarly, depending on the bottleneck.
I get that... I am speaking generally though, not just about this game. Simply saying the PS5 is basically a 6600 XT is cutting it too close, because you would find more situations than not where the PS5 outperforms that GPU. And, just saying, all those non-TF parts of a GPU that get ignored are very important, especially when comparing to consoles.
It would've been perfectly reasonable to assert that the 6600 XT is more like the PS5 before UE5. UE5 seems to be a paradigm shift in how engines are designed: they are leveraging IO and memory bandwidth to stream in data really fast, something UE4 and other last-gen engines did not do. The 6600 XT was indeed bottlenecked by its 256 GB/s VRAM, but that only affected 4K performance; 1080p and 1440p performance was roughly on par with the PS5 in most games.

With UE5, that changes, because now VRAM bandwidth becomes important even at 1080p. And it seems Tim Sweeney was right when he said this:

[Image: Tim Sweeney quote]


People forget that Tim Sweeney pestered Cerny for an SSD. Probably because he knew where he wanted to take Unreal Engine in the future.
Exactly, it's why I am speaking generally. There is a lot more than just a GPU TF number that determines a game's overall performance.
 

Gaiff

SBI’s Resident Gaslighter
I get that... I am speaking generally though, not just about this game. Simply saying the PS5 is basically a 6600 XT is cutting it too close, because you would find more situations than not where the PS5 outperforms that GPU. And, just saying, all those non-TF parts of a GPU that get ignored are very important, especially when comparing to consoles.
I was specifically referring to this game. The PS5 can range anywhere from a 6600 XT all the way up to a 6750 XT in Sony exclusives, sometimes even coming not too far behind the 6800 in GPU-bound scenarios. The PS5 GPU has no equivalent indeed but we can at least have a range.
 

winjer

Gold Member
People do this a lot, but that's not how it works. You don't look at a CPU and GPU that share the same bus and RAM and say the CPU takes up 52 GB/s, any more than you say 2-2.5 GB of RAM is reserved for the OS as if some RAM chips get taken off the table.

If the CPU footprint is 2-3 GB at any time, that 3 GB is spread across all 8 RAM chips in the PS5. That means that at any time the CPU has access to all 448 GB/s of bandwidth, the same way the GPU with an 8 GB footprint has access to 448 GB/s of bandwidth.

Memory contention, which is what a lot of people refer to, only happens if individual memory chips are segregated for either the CPU or the GPU, in which case that dedicated chip (or two or three) can only communicate via a specific bus, which in turn reduces the remaining available bus width for everything else. But that doesn't make sense when every RAM chip is identical.

The PS5 CPU and GPU have 448 GB/s of bandwidth available to either of them... all the time, simply because their data is spread out across all 8 RAM chips at any given time.

No, the CPU never has access to all those 448 GB/s. There is just no need to have a memory controller on the CPU with that bandwidth.
Case in point: the test with the Series X chip, the 4800S. The CPU has a memory bandwidth of only 70 GB/s.
Something similar happens with the PS4, Xbox One and PS5, but with different values.

[Image: 4800S CPU memory bandwidth test]
 

Mr.Phoenix

Member
No, the CPU never has access to all those 448 GB/s. There is just no need to have a memory controller on the CPU with that bandwidth.
Case in point: the test with the Series X chip, the 4800S. The CPU has a memory bandwidth of only 70 GB/s.
Something similar happens with the PS4, Xbox One and PS5, but with different values.

[Image: 4800S CPU memory bandwidth test]
And this is the mistake people keep making, like using the 'Series X' chip in a PC system to measure console performance.

Unified RAM. It's not like the CPU and GPU have separate memory controllers in a console. When looking at unified RAM, the only thing that matters is how much of the RAM is available.
 

winjer

Gold Member
And this is the mistake people keep making, like using the 'Series X' chip in a PC system to measure console performance.

Unified RAM. It's not like the CPU and GPU have separate memory controllers in a console. When looking at unified RAM, the only thing that matters is how much of the RAM is available.

This is a salvaged board from a Series X. The only thing it's missing is the GPU.
But the CPU and memory are identical.

There is no reason to waste transistors creating a data path and controller to give the CPU 560 GB/s when it will never use that much bandwidth.
 

Mr.Phoenix

Member
This is a salvaged board from a Series X. The only thing it's missing is the GPU.
But the CPU and memory are identical.

There is no reason to waste transistors creating a data path and controller to give the CPU 560 GB/s when it will never use that much bandwidth.
That the CPU never uses that much bandwidth is not to say the bandwidth is not available to it. Again, when looking at unified RAM, as long as the data is striped across all available RAM chips, you are getting the full bandwidth of that combined bus.

I am sure you know this, but I can't be too certain. On a PS5, the APU is connected to system RAM via 8 x (2x16) PHY memory interfaces, which in turn are connected via 4 unified memory controllers. The CPU does not have its own memory controller; it has its own cache that is connected to the same unified memory controllers the GPU is connected to. I don't know why the CPU would not use the available bandwidth, maybe its cache or timings... I have no idea. What I do know, though, is that there is no such thing in a system like this as: because the CPU is accessing RAM, the GPU only has such and such amount of bandwidth remaining.
 

hlm666

Member
That the CPU never uses that much bandwidth is not to say the bandwidth is not available to it. Again, when looking at unified RAM, as long as the data is striped across all available RAM chips, you are getting the full bandwidth of that combined bus.

I am sure you know this, but I can't be too certain. On a PS5, the APU is connected to system RAM via 8 x (2x16) PHY memory interfaces, which in turn are connected via 4 unified memory controllers. The CPU does not have its own memory controller; it has its own cache that is connected to the same unified memory controllers the GPU is connected to. I don't know why the CPU would not use the available bandwidth, maybe its cache or timings... I have no idea. What I do know, though, is that there is no such thing in a system like this as: because the CPU is accessing RAM, the GPU only has such and such amount of bandwidth remaining.
Memory contention is a thing; Sony even had a slide for the PS4 showing CPU use costing the GPU bandwidth. The PS5 may have improved on that cost, or the extra bandwidth makes it less of an issue, but here is such a system with unified memory doing what you're saying isn't a thing.

[Slide: PS4 GPU bandwidth drops from 176 GB/s to ~140 GB/s under CPU load]
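A crude model of what that slide implies (the penalty factor is an assumption picked purely to land near the slide's numbers, not anything Sony published):

# Each GB/s of CPU traffic costs the GPU more than 1 GB/s because of read/write
# turnaround and arbitration; the 1.8x factor is assumed for illustration only.
def gpu_bw_under_contention(total_gbs: float, cpu_gbs: float, penalty: float = 1.8) -> float:
    return max(0.0, total_gbs - cpu_gbs * penalty)

print(gpu_bw_under_contention(176, 20))  # ~140 GB/s left for the PS4 GPU, as on the slide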
 

CGNoire

Member
This was something they saw in Metro. It had a 20% advantage in pixel counts when you were out and about just exploring. But the moment you started action sequences, the DRS would kick in and bring down the resolution to PS5 levels.

For some reason, the XSX hardware struggles to keep up when dynamic elements are added to the screen. We mostly saw this in the 120 fps modes of several games, where the PS5 surprisingly kept pace with the XSX and we didn't see an 18% advantage in line with the teraflop difference. When we did, it was wildly inconsistent.
Split memory pools @ different bandwidths?
 

winjer

Gold Member
That the CPU never uses that much bandwidth is not to say the bandwidth is not available to it. Again, when looking at unified RAM, as long as the data is striped across all available RAM chips, you are getting the full bandwidth of that combined bus.

I am sure you know this, but I can't be too certain. On a PS5, the APU is connected to system RAM via 8 x (2x16) PHY memory interfaces, which in turn are connected via 4 unified memory controllers. The CPU does not have its own memory controller; it has its own cache that is connected to the same unified memory controllers the GPU is connected to. I don't know why the CPU would not use the available bandwidth, maybe its cache or timings... I have no idea. What I do know, though, is that there is no such thing in a system like this as: because the CPU is accessing RAM, the GPU only has such and such amount of bandwidth remaining.

That memory bandwidth is available to the SoC. But inside the SoC, each part has its own data path.
And the data path to the CPU is only about 70 GB/s on the Series X. If the number I heard is correct, it's 52 GB/s on the PS5, and 25 GB/s on the PS4.

The thing is, a modern AMD SoC has several domains, each with its own voltage and bandwidth.
Unfortunately, AMD has not provided specific data regarding the SoCs of these consoles. Otherwise, we could calculate the exact values for each part of the SoC.

But I can give you a modern example of how the memory bandwidth of a CPU is not the bandwidth of the memory.
Zen 4 can use DDR5 at up to 8000 MT/s. That means a read memory bandwidth of 128 GB/s.
But each CCD only has one Infinity Fabric link of 32 bytes per clock cycle. This means that a single-CCD Zen 4 CPU has half the IF bandwidth of a dual-CCD CPU.
So, for example, if someone clocks the IF to 2100 MHz, a single-CCD CPU gets 67.2 GB/s while a dual-CCD CPU gets 134.4 GB/s.
This is why people with CPUs like the 7600, 7700 and 7800X3D see no performance improvement when overclocking memory: although the memory can reach much higher bandwidth, the IF bottlenecks the system.
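The arithmetic behind those numbers, as a quick sketch (assuming dual-channel 64-bit DDR5 and one 32-byte-per-cycle IF read link per CCD):

# DDR5 read bandwidth vs Infinity Fabric read bandwidth, illustrative only.
def ddr5_read_gbs(mt_s: int, channels: int = 2, bus_bits: int = 64) -> float:
    return mt_s * channels * bus_bits / 8 / 1000

def if_read_gbs(fclk_mhz: int, ccds: int = 1, bytes_per_clk: int = 32) -> float:
    return fclk_mhz * bytes_per_clk * ccds / 1000

print(ddr5_read_gbs(8000))        # 128.0 GB/s from DDR5-8000
print(if_read_gbs(2100, ccds=1))  # 67.2 GB/s for single-CCD parts (7600/7700/7800X3D)
print(if_read_gbs(2100, ccds=2))  # 134.4 GB/s for dual-CCD parts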
 

Mr.Phoenix

Member
That memory bandwidth is available to the SoC. But inside the SoC, each part has its own data path.
And the data path to the CPU is only about 70 GB/s on the Series X. If the number I heard is correct, it's 52 GB/s on the PS5, and 25 GB/s on the PS4.

The thing is, a modern AMD SoC has several domains, each with its own voltage and bandwidth.
Unfortunately, AMD has not provided specific data regarding the SoCs of these consoles. Otherwise, we could calculate the exact values for each part of the SoC.

But I can give you a modern example of how the memory bandwidth of a CPU is not the bandwidth of the memory.
Zen 4 can use DDR5 at up to 8000 MT/s. That means a read memory bandwidth of 128 GB/s.
But each CCD only has one Infinity Fabric link of 32 bytes per clock cycle. This means that a single-CCD Zen 4 CPU has half the IF bandwidth of a dual-CCD CPU.
So, for example, if someone clocks the IF to 2100 MHz, a single-CCD CPU gets 67.2 GB/s while a dual-CCD CPU gets 134.4 GB/s.
This is why people with CPUs like the 7600, 7700 and 7800X3D see no performance improvement when overclocking memory: although the memory can reach much higher bandwidth, the IF bottlenecks the system.
Thanks. I get what you are saying now and I understand.
 

Lysandros

Member
"The other devs said about async and DS! So this guy is obviously wrong!"
Again, there is absolutely nothing in the interview stating that the PS5's async + I/O interaction is less performant than the XSX's async + DirectStorage (it doesn't make the slightest bit of sense to begin with, given the specs at hand), nothing. It is just his own distorted interpretation of it, in a desperate attempt to create a contradiction. Now, with the newly acquired info from the game's developer, there isn't even the slightest doubt that the system with the better async throughput in the interview is the PS5. It just needs to be accepted so we can move on.
 

mckmas8808

Mckmaster uses MasterCard to buy Slave drives
Remnant 2 doesn't even have lumen enabled and still drops to 720p with frame drops. Fortnite drops below 900p and that is first party and is still using software lumen. It's also not even pushing the hardware that hard.

It's not a question of dev talent, but just that the major features of UE5 hammer the GPU.


Can't imagine what artifacts it can cause that would force it to be disabled. CAS looks fine on the PC build.

Well, for one, it's the norm among devs using UE5... If those devs didn't have any problem working with UE4 or other engines last gen but suddenly can't make their games look and perform decently in UE5, then in my opinion:

1. The features (or other underlying engine features) are too heavy for current consoles, or...

2. Such features are hard to optimize for these consoles, so your average dev team won't be able to make them work properly more often than not, or won't for a long time (probably the remainder of this gen anyway).

If the dev on Reddit is not fake, this team did a lot of R&D, so the problem probably doesn't lie in the devs' lack of expertise.


Gotta say... you both make some good points. 🤔


The Immortals dev compared it to a 13 TF 6700 XT (not the non-XT version). He specifically mentioned the higher VRAM bandwidth on the PS5. The 6700 XT has 384 GB/s, which is probably about what the PS5 GPU has access to after the CPU takes whatever it needs.

It seems Nanite, or UE5 in general, is very IO bound, so the PS5 with its fast IO seems to be performing more like a 13 TF AMD card.

Which is pretty much what we all expected in 2019
 