
PS5 vs Xbox Series X ‘Secret Sauce’ – SSD Speed And Velocity Architecture

SonGoku

Member
You're reading a conclusion into this that was never the point of my argument, however. I already know about diminishing returns, and a 17-21% difference is smaller than a 35% difference. But that wasn't the focus of the discussion on my end; it was just to illustrate that achieving parity in that department would require a sacrifice on PS5's end in another area. That's it.
That wasn't the takeaway here, either. My purpose was to illustrate the efficiency gains in GPGPU asynchronous compute-related tasks going forward due to dev familiarity, improved engine scalability, new and improved algorithms, coding techniques, etc., and how this will benefit GPGPU asynchronous programming going forward.
You say that but then you say this:
with an extra benefit to XSX due to having more GPU headroom, which they can utilize while retaining visual and framerate fidelity with PS5 (more or less).
Asynchronous compute can only help realize its full 21% advantage, not surpass it... XSX won't have any extra GPGPU-oriented advantage if the PS5 runs at 21% lower resolution
It doesn't if you are only focusing on percentages, but that's why I said focusing on percentages alone is effectively meaningless. You have to also consider the context of what the percentages are in reference to and the weight those contexts have on the overall scope, in this case a game console's performance, given what game consoles are designed to do.
The context is ideal asynchronous compute utilization on XSX: the PS5 GPU can match the XSX GPU's output at 21% lower resolution
I don't quite know what you mean by "more has to be sacrificed" to exploit a wider GPU
I meant a game designed to exploit the PS5 SSD will require more sacrifices to run on XSX than a game designed to exploit the XSX GPU will require to run on PS5 (21% lower resolution)
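As a back-of-the-envelope sketch of the arithmetic behind these "X% lower resolution" figures (assuming per-pixel cost scales linearly with pixel count, which real games only approximate, and using the publicly quoted TF numbers; the thread's 17-21% range presumably brackets PS5's variable clock):

```python
# Rough sketch: map a compute delta to a resolution delta, assuming
# per-pixel cost is constant (real games only approximate this).
XSX_TF = 12.15   # 3328 shaders * 2 ops * 1.825 GHz
PS5_TF = 10.28   # 2304 shaders * 2 ops * 2.23 GHz (peak variable clock)

ratio = PS5_TF / XSX_TF                                       # ~0.846
print(f"XSX advantage: {(XSX_TF / PS5_TF - 1) * 100:.1f}%")   # ~18.2%
print(f"PS5 deficit:   {(1 - ratio) * 100:.1f}%")             # ~15.4%

# If XSX renders native 4K, the pixel count PS5 could push at the same
# per-pixel cost, expressed as a 16:9 resolution (per-axis scale = sqrt):
scale = ratio ** 0.5
print(f"PS5 equivalent: ~{round(3840 * scale)}x{round(2160 * scale)}")  # ~3532x1987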
 
Last edited:
PS5 is 10 times faster to load a tech demo designed to demonstrate fast loading.
Xbox SX is 4.6 times faster to load an entire game, not optimized for SSD.

I know which is more impressive.
Just how do you know that? Maybe it's the other way around. Maybe MS tested several games and levels and picked the one that got the best improvement on XSX.

Just because you don't believe in the 10x improvement doesn't make it any less true. This gen is going to be hard for you. I hope a slightly higher res for XSX games will be enough for you.

Because the PS5 SSD will be at least twice as fast (in real loading) as the XSX SSD.
 

semicool

Banned
Well, you could be disingenuous and say the same about 12.3 TFLOPS ;).
You obviously don't know how data is accessed on I/O for drives and how it's data-dependent. Or even watch the metrics of a drive while it's being read from or written to, or read an analysis or two on drive performance.
 
Last edited:

Journey

Banned
        PS5      XBSX
Cores:  2304     3328
TMUs:   144      208
ROPs:   64       80
Bus:    256-bit  320-bit

That's a substantial difference between the two platforms for those trying to push that the only difference is 18% and limited to CU count.

44% more cores.
44% more TMUs.
25% more ROPs.
25% wider memory bus.
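For reference, a quick sketch of where those percentages come from (raw unit counts only; clock speed, which changes the picture, comes up later in the thread):

```python
# Sketch: derive the percentages above from the quoted unit counts.
ps5 = {"cores": 2304, "TMUs": 144, "ROPs": 64, "bus bits": 256}
xsx = {"cores": 3328, "TMUs": 208, "ROPs": 80, "bus bits": 320}

for part in ps5:
    print(f"XSX has {(xsx[part] / ps5[part] - 1) * 100:.0f}% more {part}")
# cores: 44%, TMUs: 44%, ROPs: 25%, bus bits: 25%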

And what's also very important:

You can still do a lot more work with 2TF of RDNA2 than you can with 500GF of GCN.

Anyway, we will see as soon as the games arrive! But don't act like it's just an "18%" difference. Also, keep in mind that PS5 clocks are variable.


Interesting, I didn't look at those details closely since we always tend to focus on just the Compute units.

So Xbox Series X has 208 texture mapping units vs PS5's 144? and also has 80 Render Output Units vs PS5's 64? How come no one talks about this when they mention Balance and especially Bottlenecks?

To sum it up, Xbox Series X has a CPU that can run 8 physical cores at 3.8GHz without SMT, a huge advantage for games that don't require multi-threading beyond 8 physical cores. It may be some time before games use up all 16 threads. But even playing devil's advocate, when multi-threading is involved we still have a stable 3.6GHz CPU nominal speed with SMT; that's each of the 16 threads running at 100MHz more than the 3.5GHz PS5 in boost mode, and we know that 3.5GHz is a peak and may drop in favor of the GPU, since PS5 can't have both at max at the same time.

Xbox Series X has a faster CPU and has more bandwidth, fillrate and texture mapping units than the PS5... but dat SSD doe :pie_thinking:
 
Last edited:

Thirty7ven

Banned
I wish you were more respectful than this.

Do you disagree with Gaming Bolt’s analysis or my summary of it?

It’s a crock of shit. What matters is how fast the system can get assets into the RAM, thanks to SSD read speed speed, then how fast the decompressor can move it to the RAM. There’s 16Gb of GDDR6 in both of these systems, and one system can fill that in half the time the other does. The rest is just codenames for software applications and those even the devs themselves will come up with different solutions as the gen goes by.

Just like Sony doesn’t have some magical software solution to eliminate the CU gap. It’s physics and creative narratives by trash sites like Gamingbolt won’t affect the physical reality displayed once the games are out.
 
You say that but then you say this:

Asynchronous compute can only help realize its full 21% advantage, not surpass it... XSX won't have any extra GPGPU-oriented advantage if the PS5 runs at 21% lower resolution

I have not actually argued against this (although this may be slightly open to question, depending on what customizations have been made to the GPUs. Keep an open mind on this because we do not know the full scope of customizations for the GPUs just yet). The point is to show how that 21% can be more effectively used with upcoming/modern algorithms, programming techniques, architectural improvements and developer familiarity than the 35% or so headroom PS4 had over XBO was for GPGPU asynchronous compute during its time. You are focusing on raw percentages and I am looking at the improvements that come within those, as well as keeping the raw percentages in mind.

You're seemingly using the delta percentage as a direct comparison between the systems; I'm merely acknowledging the delta is what it is, and noting how it may effectively seem like "doing more with less" because even if the percentage delta is notably smaller, familiarity among devs, improved engine scalability, more advanced algorithms and more efficient coding practices plus the general architectural improvements will net quite a bit more "juice" out of the orange, so to speak, even if the orange (GPU percentage delta) this time around is a bit smaller.

The context is ideal asynchronous compute utilization on XSX: the PS5 GPU can match the XSX GPU's output at 21% lower resolution

This is essentially what I've been getting at the entire time. Although, again, that's just going off what we know right now regarding the systems. New information that comes to light when the system specs are more clearly defined could reduce that delta some or grow it some; the probability is fairly even either way, or the delta simply remains as-is.

I meant a game designed to exploit the PS5 SSD will require more sacrifices to run on XSX than a game designed to exploit the XSX GPU will require to run on PS5 (21% lower resolution)

This is an assumption; it isn't factoring in potential customizations of the GPU, nor any interconnected systems (hardware and software) that work in tandem with the GPU and might serve particular use-cases on XSX, creating situations where a game may need to do more than just scale resolution down by 17-21% on PS5 to match those asynchronous compute tasks.

That of course comes down a lot to how the games are programmed; in both cases (SSD, GPU), you are mainly looking at 1st-party teams taking extreme advantage of those respective strengths on those platforms. How much 3rd parties will do so depends on any financial incentives (such as timed exclusivity), publisher leeway to the developers, and how easily MS and Sony provide solutions to 3rd parties to have a lot of that busy work (for devs who don't want to do nitty-gritty tinkering) handled mostly autonomously and in the background.

Interesting, I didn't look at those details closely since we always tend to focus on just the Compute units.

So Xbox Series X has 208 texture mapping units vs PS5's 144? and also has 80 Render Output Units vs PS5's 64? How come no one talks about this when they mention Balance and especially Bottlenecks?

To sum it up, Xbox Series X has a CPU that can run 8 physical cores at 3.8GHz without SMT, a huge advantage for games that don't require multi-threading beyond 8 physical cores. It may be some time before games use up all 16 threads. But even playing devil's advocate, when multi-threading is involved we still have a stable 3.6GHz CPU nominal speed with SMT; that's each of the 16 cores running at 100MHz more than the 3.5GHz PS5 in boost mode, and we know that 3.5GHz is a peak and may drop in favor of the GPU, since PS5 can't have both at max at the same time.

Xbox Series X has a faster CPU and has more bandwidth, fillrate and texture mapping units than the PS5... but dat SSD doe :pie_thinking:

Slight issue here with the bolded (emphasis mine, not yours); it's not 16 cores, it's 16 threads. 16 cores would be a hell of a get, but also push the prices up that much further and frankly would probably be redundant anyway. These are gaming machines, not workstation systems ;)
 
Last edited:

Journey

Banned
Yes, I meant threads; that's why I said 8 physical cores, but with SMT you get 16 threads, correct. I even said it's going to take some time before games use 16 threads :messenger_winking:
 
Last edited:
        PS5      XBSX
Cores:  2304     3328
TMUs:   144      208
ROPs:   64       80
Bus:    256-bit  320-bit

[....]

44% more cores.
44% more TMUs.
25% more ROPs.
25% wider memory bus.

And what's also very important:

You can still do a lot more work with 2TF of RDNA2 than you can with 500GF of GCN.

All of these (except the bolded, which wasn't necessarily touched on) were points mentioned in NX Gamer's GPU video as well. He also went into the clock stuff (small aside: it's kinda irritating when people say it increases the cache bandwidth instead of saying it increases the cache speed; bandwidth and speed are not interchangeable).

Overall it was a good breakdown of the trade-off between more hardware on the GPU itself (and more bandwidth) versus faster clocks for increasing fillrate and cache speeds. They're both very valid methods to take and have their uses, but it would seem the XSX's approach is the better one overall when accounting for gains in efficiency, GPU saturation, etc.

You've done enough research to say he is assuming?

Looks like your information is not based on much; rather, you're looking for reasons to doubt him. He is not going to make assumptions across all his work, because it's clear that tons of research and testing have been going on for years, which is why they made 5.5GB/s a target.

No matter how you slice it, the numbers back up his statement when it comes to streaming.



Of course he's speculating, but we still don't know how their memory setup will work. Any bit of extra resources is beneficial, and you can't claim it won't result in some real-world performance gain.

I'm allowed to have my own takeaways that don't line up with everything someone like Cerny says, you know. That doesn't mean I'm disrespecting them on any level, just that I have my doubts. Skepticism is normal and healthy to have as long as it's not ill-spirited.

For sure Cerny and his team have done their research, but there are always multiple ways of solving a lot of the same problems. They could've taken one method while MS has taken another method, both methods being roughly similar in how to solve the problems Cerny touched on, just with different means utilized to arrive at this. This is normal in life, especially in fields like technology.

Basically, Cerny's word and research have a lot of worth, but he is not the absolute authority on what is considered a viable solution and what is not. Rather, he is simply one of many. And while myself and everyone else here might not be in Cerny's position or at his level, we should be free to speculate even if, yes, that means having doubts about certain claims from people like Cerny himself. All done respectfully, of course (hopefully).

You're right; we don't know the exact memory setup with PS5...but we don't know everything with XSX's memory setup, either. And regardless of what we find out, there are still inherent limitations on that particular idea he mentioned which I feel should be taken into account. Again, healthy speculation, no harm no foul.
 
Last edited:

SonGoku

Member
So Xbox Series X has 208 texture mapping units vs PS5's 144? and also has 80 Render Output Units vs PS5's 64? How come no one talks about this when they mention Balance and especially Bottlenecks?
Clocks affect the equation: each of PS5's TMUs and ROPs does 22% more work. XSX still comes out on top with a 17-21% advantage. I haven't seen anyone deny that advantage, so why not be happy with it instead of trying to make it appear bigger than it actually is?
ROPs unconfirmed btw, not that it'll affect anything either way
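To make the clock adjustment concrete, a rough sketch (treating the unconfirmed 80-ROP figure as given for the sake of argument, and using the publicly quoted clocks):

```python
# Sketch of the clock adjustment: per-unit throughput scales with
# clock, so raw unit counts alone overstate the gap. Clocks as
# publicly quoted (PS5 2.23 GHz peak variable, XSX 1.825 GHz fixed).
ps5_clk, xsx_clk = 2.23, 1.825
print(f"Per unit, PS5 does {(ps5_clk / xsx_clk - 1) * 100:.0f}% more work")  # ~22%

tex_gap = (208 * xsx_clk) / (144 * ps5_clk) - 1   # TMUs x clock
pix_gap = (80 * xsx_clk) / (64 * ps5_clk) - 1     # ROPs x clock
print(f"Texture rate gap: {tex_gap * 100:.0f}% for XSX")  # ~18%
print(f"Pixel fill gap:   {pix_gap * 100:.0f}% for XSX")  # ~2%, near parity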

XSX's CPU (3.8GHz) is 8.5% faster than PS5's (3.5GHz)
Compare that with XB1's (1.75GHz), which was 9.3% faster than PS4's (1.6GHz)

The GPU will be the limit this gen, not the CPU.
XSX's approach is the better one overall when accounting for gains in efficiency, GPU saturation, etc.
Efficiency depends on the metric you use to judge it.
I'd say the XSX GPU is more power-efficient while the PS5 GPU is more transistor-efficient.
As for saturation, the slower GPU will reach higher utilization with the same number of threads; not sure what you meant by that?
 
Last edited:
This is a lie. That is the point a good portion of posters have been trying to argue, generally by misinterpreting how SSDs and NAND actually work. It hasn't been so much that some people are downplaying the SSDs as that anyone attempting to be a realist about the SSDs is automatically viewed by some others as downplaying them.

Let's face it; after the Road to PS5 presentation (and I don't like to do this, but a spade's a spade) a big flock of Sony fans on the forums who were obsessed to high hell over Teraflops (even when multiple posters, myself included, were trying to tell them Teraflops don't mean everything) silently conceded that front. They began downplaying the Teraflop difference between the two systems (not in terms of percentages per se, but in what the extra TF advantage on XSX can actually be utilized for) while hyping up the SSD and audio, since Sony focused on those in particular with their presentation and gave specs that, on paper, seemed more impressive than MS's in those areas. It allowed those people to shift the narrative to the SSD, audio, I/O etc. while similarly creating a fake narrative that XSX is "brute forcing" a solution while PS5 is the system pushing elegant optimizations, conveniently cutting out any focus on MS's deliberate optimizations and customizations with the XSX to further push this fake narrative.

All the while, many of these same people continue to over-inflate the SSDs as a game-changing technology or paradigm shift, and completely downplay the GPGPU performance edge XSX has over PS5 (or pretend it doesn't exist at all and that the extra GPU throughput of XSX will only go to resolution, ignoring the ML texture upscaling in the GPU built to cut the heavy expenditure of GPU resources on processing raw pixels through to the display, freeing up processing power for other tasks). When you try telling them that NAND has limits inherent to the technology that will prevent granularity of asset data for streaming in a way similar to volatile RAM, somehow that gets lumped into "downplaying the PS5 SSD", even though this affects both systems. Same if you bring up questions regarding the random write speeds, latency figures, page and block sizes, etc.

We are seemingly allowed to speculate on Sony using tech from other departments of their company as R&D foundations for potential PS5 features, but doing the same with MS regarding XSX is considered being a fanboy, wishful thinking, or foolish...even though they have already admitted to members of the Surface team working on the Xbox team. All the same, some strongly pro-Sony people who obsess over customizations on PS5 do not provide any leeway to entertain similar customizations conceptually being present on XSX, but expect strong pro-Microsoft people to bend the knee and do so when it comes to XSX features potentially being present on PS5. And all of this leads to disingenuous, lopsided, biased takes and discussions in next-gen speculation because there are a group of people who put out a false image of wanting the best for both systems but secretly only want their preferred platform to "win", even if that means generating fake narratives.

Yes, there are some Xbox people who do this, but from what I've noticed it is not to the same degree as the Sony fans engaging in similar tactics on the forum (as just one example). Now, that might be going a bit beyond your point here, but it needs to be stressed that claiming "When people tried to explain how SSDs will work, people just started saying, 'It's not going to close the power gap in consoles' when that wasn't the point people on here were trying to make." is in fact demonstrably false when keeping in mind the long-term discussion that's been prevalent for months by now on these systems.

There absolutely have been people trying to imply this very thing, maybe not directly and often layered in subtext, but it's been an idea fostered for a good bit by now. Again, it's predicated on things like "secret sauce", the way this article words it, which is irresponsible considering so many important aspects of the systems have not even been divulged yet. But that's all I want to say on that; I thought what you mentioned was a good moment to segue into it. Hopefully people get what I'm saying here.

My post of the year. Very well said, very true, and very fair.
 


Info comes from this source mostly.


And then people clearly recognize the one that's from Digital Foundry.
 

TBiddy

Member
Just how do you know that? Maybe it's the other way around. Maybe MS tested several games and levels and picked the one that got the best improvement on XSX.

Just because you don't believe in the 10x improvement doesn't make it any less true. This gen is going to be hard for you. I hope a slightly higher res for XSX games will be enough for you.

Because the PS5 SSD will be at least twice as fast (in real loading) as the XSX SSD.

I'm sure Microsoft picked a good example. However, as they clearly stated multiple times, the game was running as-is and was not optimized for an SSD. If Microsoft had created a tech demo to demonstrate the loading times, don't you think they would've removed the pop-in?

Sony, on the other hand, demonstrated their loading times, clearly using a tech demo (or at least, a demo designed for that specific use case).

I have no doubt that the SSD in the PS5 is faster than the one in the XSX. My point was just that it's absurd to conclude anything whatsoever from what we've seen so far. It's a given that in sequential read/write the PS5 will be around twice as fast as the XSX. But how will that show in real life? I certainly doubt it'll load games twice as fast.

PS. Why do you think this gen is going to be hard for me?
 

SonGoku

Member
The point is to show how that 21% can be more effectively used with upcoming/modern algorithms, programming techniques, architectural improvements and developer familiarity than the 35% or so headroom PS4 had over XBO was for GPGPU asynchronous compute during its time. You are focusing on raw percentages and I am looking at the improvements that come within those, as well as keeping the raw percentages in mind.
noting how it may effectively seem like "doing more with less" because even if the percentage delta is notably smaller, familiarity among devs, improved engine scalability, more advanced algorithms and more efficient coding practices plus the general architectural improvements will net quite a bit more "juice" out of the orange
Again you say that but then you follow up with this:
This is an assumption, and isn't factoring in potential customizations of the GPU nor is it really factoring in any interconnected systems that work in tandem with the GPU (hardware and software) that might be of particular use-cases on XSX creating situations where a game may need to do more than just scale resolution down by 17% - 21% to provide similar room on PS5 to match those asynchronous compute tasks.
You apparently agree only to disagree in your follow-up. Let's be clear so I'm not misinterpreting you:
Asynchronous compute benefits both systems, and the best-case utilization for XSX will net 21% higher resolution, all settings being equal. Do you agree or not? We can take the discussion from there; otherwise we are just going in a loop.
New information that comes to light when the system specs are more clearly defined could reduce that delta some, or grow the delta some
I agree, which is why I find it weird you would imply the XSX GPU has some extra features that will make the gap bigger due to some bottleneck in PS5.
Based on what we know, the best-case scenario for XSX is to run at 21% higher resolution under heavy asynchronous compute utilization.
or a multitude of other things
If they run at the same resolution, sure, but that wouldn't be wise; unless the extra effects are barely noticeable, it would be best to run at a slightly lower resolution.
 
Last edited:
I'm sure Microsoft picked a good example. However, as they clearly stated multiple times, the game was running as-is and was not optimized for an SSD. If Microsoft had created a tech demo to demonstrate the loading times, don't you think they would've removed the pop-in?

Sony, on the other hand, demonstrated their loading times, clearly using a tech demo (or at least, a demo designed for that specific use case).

I have no doubt that the SSD in the PS5 is faster than the one in the XSX. My point was just that it's absurd to conclude anything whatsoever from what we've seen so far. It's a given that in sequential read/write the PS5 will be around twice as fast as the XSX. But how will that show in real life? I certainly doubt it'll load games twice as fast.

PS. Why do you think this gen is going to be hard for me?
No. Most games have specific debug functions once played on a devkit. How do you think one guy found plenty of new enemies in Bloodborne? He just played the standard game on a devkit and played with the debug functions available in the retail game.
 
Last edited:
Again you say that but then you follow up with this:

You apparently agree only to disagree in your follow-up. Let's be clear so I'm not misinterpreting you:
Asynchronous compute benefits both systems, and the best-case utilization for XSX will net 21% higher resolution, all settings being equal. Do you agree or not? We can take the discussion from there; otherwise we are just going in a loop.

I agree, which is why I find it weird you would imply the XSX GPU has some extra features that will make the gap bigger due to some bottleneck in PS5.
Based on what we know, the best-case scenario for XSX is to run at 21% higher resolution under heavy asynchronous compute utilization.


If they run at the same resolution, sure, but that wouldn't be wise; unless the extra effects are barely noticeable, it would be best to run at a slightly lower resolution.

He is saying that it's better to wait and see how things impact the differences between games. There are a lot of extra functions that can greatly affect performance on each machine, and we don't know much about them. Because of this it's also not a good idea to map a percentage difference in flops to resolution, because it doesn't necessarily work like that. I expect all the extra functions on PS4 Pro (which were underutilized) to become the norm (some of them can greatly improve resolution) and actually be used now (and maybe simplified), plus whatever was added for PS5. And who knows what MS also included, or whether they made optimizations, maybe a specific DX12 for their console given that it's closed hardware.
 
Last edited:

TBiddy

Member
No. Most games have specific debug functions once played on a devkit. How do you think one guy found plenty of new enemies in Bloodborne? He just played the standard game on a devkit and played with the debug functions available in the retail game.

I fail to see the relevance.
 

Journey

Banned
Clocks affect the equation: each of PS5's TMUs and ROPs does 22% more work. XSX still comes out on top with a 17-21% advantage. I haven't seen anyone deny that advantage, so why not be happy with it instead of trying to make it appear bigger than it actually is?
ROPs unconfirmed btw, not that it'll affect anything either way

XSX's CPU (3.8GHz) is 8.5% faster than PS5's (3.5GHz)
Compare that with XB1's (1.75GHz), which was 9.3% faster than PS4's (1.6GHz)

The GPU will be the limit this gen, not the CPU.

Efficiency depends on the metric you use to judge it.
I'd say the XSX GPU is more power-efficient while the PS5 GPU is more transistor-efficient.
As for saturation, the slower GPU will reach higher utilization with the same number of threads; not sure what you meant by that?


So I find it interesting that some fans want to point out how Xbox Series X has more Teraflops but PS5 may have fewer bottlenecks, and I made those points to illustrate how it is the PS5 (not Xbox Series X) that will reach those limits before Xbox Series X does. Your argument, however, is to downplay every single advantage as no big deal, as if to defend PS5's honor.


Ok let me summarize again by component.

CPU
Both PS4 and Xbox One were bottlenecked by their CPUs, PS4 more so than XB1.
That will not be the case with PS5 and Xbox SX; however, if a CPU bottleneck were to occur, it would happen first on PS5.

Memory Bandwidth
Both PS5 and XSX are using GDDR6; however, XSX is using a 320-bit bus and has arranged 10GB of its memory to run at 560GB/s bandwidth versus PS5's 448GB/s. The most demanding PC games running at 4K with everything set to ultra show 3-4GB of VRAM consumption; it used to be half that or less when we were running games at 1080p or 1440p, and the only reason it has doubled in the past few years is that we're running games at 4K now. So we cannot expect that number to double anytime soon if we're still running games at 4K, unless we start going towards 8K, and I don't see either console going for 8K resolution. But even playing devil's advocate and assuming VRAM figures will double despite staying at 4K, 8GB of VRAM usage still falls within the 10GB of XSX's high-bandwidth memory allocation. So once again, PS5 will be the first to hit a wall in terms of memory bandwidth. The only thing to consider would be if we ever exceeded 10GB; say a game used 12GB of VRAM. Although the PS5 has 16GB of total unified memory, it will still reserve 3.5GB for its OS if it follows the PS4's footsteps. That leaves 12.5GB, but the CPU and audio consume memory, so at the end of the day, you will also have about 10GB left for VRAM, albeit at the slower 448GB/s.
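As a rough sketch of that budgeting (every figure here is the post's assumption, not a confirmed spec):

```python
# Sketch of the memory-budget argument above; all figures are the
# post's assumptions, not confirmed specs.
total_gb = 16.0      # GDDR6 in both consoles
os_reserve = 3.5     # assumed if PS5 follows the PS4's OS footprint
cpu_audio = 2.5      # rough guess for game CPU + audio working set

print(f"~{total_gb - os_reserve - cpu_audio:.0f} GB left for VRAM-type data")  # ~10 GB

# For comparison, XSX splits its pool: 10 GB @ 560 GB/s ("GPU-optimal")
# plus 6 GB @ 336 GB/s, vs PS5's uniform 16 GB @ 448 GB/s.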

ROPs
ROPs affect fillrate, and despite the clock difference XSX has a higher fillrate than PS5 because it has 80 ROPs vs 64. So if there were ever a bottleneck due to fillrate, PS5 would hit that wall first.
 
Last edited:

TBiddy

Member
Not so clear to me, any receipts?

None, unfortunately. I'm just presuming that Sony would create a demo to, well... demonstrate the loading times.

Spiderman was not optimized to run on PS5. It's just the standard game with debug activated because of the devkit. All PS4 games will load in the same way.

Seems unlikely. The video we've seen was shown to - what we can presume are - investors or shareholders. Of course the demo was optimized for maximum effect.

Also - you still haven't told me why you think this will be a hard gen for me.
 
Last edited:

psorcerer

Banned
None, unfortunately.

It's all wishful thinking then.
Both systems should be able to accelerate existing games' loading times.
The question is what it will do for future games.
Current games will have no interesting features apart from fast travel.
 

Journey

Banned
Both have 64 ROPs. 80 ROPs for XBSX is just wishful thinking.
None of the current AMD RDNA GPUs have five raster engine clusters per shader array.

You said current; the XSX GPU is something new.

Maybe the chart below is wrong? :pie_thinking: However, if the chart is right, do you find that impressive then?

[spec comparison chart]
 

TBiddy

Member
It's all wishful thinking then.
Both systems should be able to accelerate existing games' loading times.
The question is what it will do for future games.
Current games will have no interesting features apart from fast travel.

No, I don't wish for that. I hope for the coming gen that we will see very short loading times. Whether it's 5 times faster, 10 times faster or 40 times faster remains to be seen. I'm merely arguing that the videos we've seen are borderline useless to deduce anything from.
 
So I find it interesting that some fans want to point out how Xbox Series X has more Teraflops but PS5 may have fewer bottlenecks, and I made those points to illustrate how it is the PS5 (not Xbox Series X) that will reach those limits before Xbox Series X does. Your argument, however, is to downplay every single advantage as no big deal, as if to defend PS5's honor.


Ok let me summarize again by component.

CPU
Both PS4 and Xbox One were bottlenecked by their CPUs, PS4 more so than XB1.
That will not be the case with PS5 and Xbox SX; however, if a CPU bottleneck were to occur, it would happen first on PS5.

Memory Bandwidth
Both PS5 and XSX are using GDDR6; however, XSX is using a 320-bit bus and has arranged 10GB of its memory to run at 560GB/s bandwidth versus PS5's 448GB/s. The most demanding PC games running at 4K with everything set to ultra show 3-4GB of VRAM consumption; it used to be half that or less when we were running games at 1080p or 1440p, and the only reason it has doubled in the past few years is that we're running games at 4K now. So we cannot expect that number to double anytime soon if we're still running games at 4K, unless we start going towards 8K, and I don't see either console going for 8K resolution. But even playing devil's advocate and assuming VRAM figures will double despite staying at 4K, 8GB of VRAM usage still falls within the 10GB of XSX's high-bandwidth memory allocation. So once again, PS5 will be the first to hit a wall in terms of memory bandwidth. The only thing to consider would be if we ever exceeded 10GB; say a game used 12GB of VRAM. Although the PS5 has 16GB of total unified memory, it will still reserve 3.5GB for its OS if it follows the PS4's footsteps. That leaves 12.5GB, but the CPU and audio consume memory, so at the end of the day, you will also have about 10GB left for VRAM, albeit at the slower 448GB/s.

ROPs
ROPs affect fillrate, and despite the clock difference XSX has a higher fillrate than PS5 because it has 80 ROPs vs 64. So if there were ever a bottleneck due to fillrate, PS5 would hit that wall first.

I don't find that much of a problem with fillrate or with texels; in fact they are very close.

PS5
Pixel Rate 142.9 GPixel/s
Texture Rate 321.6 GTexel/s
FP16 (half) performance 20.58 TFLOPS (2:1)
FP32 (float) performance 10.29 TFLOPS
FP64 (double) performance 643.1 GFLOPS (1:16)

XBSX
Pixel Rate 146.0 GPixel/s
Texture Rate 379.6 GTexel/s
FP16 (half) performance 24.29 TFLOPS (2:1)
FP32 (float) performance 12.15 TFLOPS
FP64 (double) performance 759.2 GFLOPS (1:16)
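For anyone wondering where those theoretical figures come from, a short sketch of the derivation (units × clock, using the quoted counts and clocks; the 80-ROP figure is still unconfirmed, as noted above):

```python
# Sketch: theoretical rates = units x clock.
# FP32 = shaders x 2 FLOPs/cycle; FP16 is simply double this on RDNA.
def rates(shaders, tmus, rops, ghz):
    return (rops * ghz,                # GPixel/s
            tmus * ghz,                # GTexel/s
            shaders * 2 * ghz / 1000)  # FP32 TFLOPS

for name, specs in [("PS5", (2304, 144, 64, 2.233)),
                    ("XBSX", (3328, 208, 80, 1.825))]:
    gpix, gtex, tf = rates(*specs)
    print(f"{name}: {gpix:.1f} GPixel/s, {gtex:.1f} GTexel/s, {tf:.2f} TF")
# PS5:  142.9 GPixel/s, 321.6 GTexel/s, 10.29 TF
# XBSX: 146.0 GPixel/s, 379.6 GTexel/s, 12.15 TF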

Anyway, you shouldn't worry about fillrate; those are theoretical limits, and how much you can actually use depends entirely on the other parts of the game that take up frametime.

Whether the CPU on PS4 and Xbox One was a bottleneck depends on the game; I don't think you can generalize, especially as a lot of AAA games were very complex, and 30fps games won't run at 60fps just because you change the CPU. Also, mid-generation a lot of processing was moved to the GPU, things like physics for example, which greatly improves CPU performance.

RAM bandwidth is a problem depending on what you are doing; not every game is impacted the same or requires as much. Making comparisons is also tricky because these parts are not isolated; they are part of a system that intends to run a game, and not every game works the same or is programmed the same way, even ports. If a system has something that lets it use less RAM, it can still perform as well as a system with more bandwidth. The SSD can diminish RAM and bandwidth usage depending on how it's used, so having a lot of bandwidth is good, but having less plus a faster SSD can be almost as good, as good, or better depending on what you are doing.
 
Last edited:

Journey

Banned
Yep. They are usually wrong anyway.


Remember when, not too long ago, everyone was saying it would be impossible for the consoles' GPUs to be RDNA 2, that it was wishful thinking? Yeah, that happened.

We'll have to wait and see; I wouldn't dismiss it just like that.
 

SonGoku

Member
He is saying that it's better to wait and see how things impact the differences between games. There are a lot of extra functions that can greatly affect performance on each machine, and we don't know much about them. Because of this it's also not a good idea to map a percentage difference in flops to resolution, because it doesn't necessarily work like that. I expect all the extra functions on PS4 Pro (which were underutilized) to become the norm (some of them can greatly improve resolution) and actually be used now (and maybe simplified), plus whatever was added for PS5. And who knows what MS also included, or whether they made optimizations, maybe a specific DX12 for their console given that it's closed hardware.
These consoles share the same core GPU architecture and features; this isn't a PS3 vs 360 scenario. We can extrapolate that a 21% delta will translate to 21% higher resolution best case, and the trend is likely to show even smaller differences.
Barring some unforeseen bottlenecks this is what we can expect; speculating over secret sauces and bottlenecks will lead nowhere, as you know that argument can go either way.
and I made those points to illustrate how it is the PS5 (not Xbox Series X) that will reach those limits before Xbox Series X does.
Actually I agree with this: early on the PS5 GPU will reach higher utilization, so the performance delta won't fully manifest in early games, say only 8-10%.
By mid-gen devs will start reaching higher utilization on the XSX GPU and the 17-21% delta will start to materialize.
Your argument, however, is to downplay every single advantage as no big deal, as if to defend PS5's honor.
No, my argument is that the GPU delta is 17-21%. No more, no less.
How big of a deal 17-21% higher resolution is, is up to each individual. All I said on the matter is that it'll be less noticeable than PS4/XB1.
however, if a CPU bottleneck were to occur, it would happen first on PS5
By 8.5%, sure (60fps vs 55fps), in that particular CPU-bound scenario.
So once again, PS5 will be the first to hit a wall in terms of memory bandwidth.
On this point they are pretty much dead-on equal: roughly equal bandwidth proportional to GPU power (the same GB/s per TF). XSX needs the extra bandwidth to materialize its compute advantage.
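The arithmetic behind that, as a quick sketch (using XSX's fast-pool bandwidth and the publicly quoted TF figures):

```python
# Sketch of the "same GB/s per TF" point (XSX's fast pool vs PS5's
# uniform pool; TF figures as publicly quoted).
print(f"XSX: {560 / 12.15:.1f} GB/s per TF")  # ~46.1
print(f"PS5: {448 / 10.28:.1f} GB/s per TF")  # ~43.6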
it will still reserve 3.5GB for its OS if it follows the PS4's footsteps. That leaves 12.5GB,
Why would it follow in the PS4's footsteps? If anything, the OS allocation will be smaller due to the faster SSD caching applications; 14GB is much more likely.
but the CPU and audio consume memory, so at the end of the day, you will also have about 10GB left for VRAM, albeit at the slower 448GB/s.
Same for XSX; the benefit of a unified pool is its flexibility: devs can distribute memory however they see fit.
ROPs
ROPs affect fillrate, and despite the clock difference XSX has a higher fillrate than PS5 because it has 80 ROPs vs 64. So if there were ever a bottleneck due to fillrate, PS5 would hit that wall first.
Again, not confirmed... if it ends up being just 64, will you be singing the same tune?
ROPs are rarely a limiting factor; you are more likely to be bandwidth-bound. For reference, the Pro had 64 ROPs and the X 32.
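A quick sketch of why that's usually true (assuming the unconfirmed 80-ROP figure and plain 32-bit color writes, no blending or compression):

```python
# Sketch of why fillrate walls are rare: even plain RGBA8 color
# writes (4 bytes/pixel, no blending or overdraw) at the full
# theoretical rate would outrun the memory bus.
fill_gpix = 146.0              # XSX theoretical GPixel/s (80 ROPs x 1.825 GHz)
needed_gbs = fill_gpix * 4     # GB/s of write traffic at 4 bytes/pixel
print(f"{needed_gbs:.0f} GB/s needed vs 560 GB/s available")  # 584 > 560
# i.e. the 560 GB/s pool saturates before the ROPs do.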
Remember when not too long ago everyone was saying how it would be impossible for the console's GPU to be RDNA 2,
Hehe, for me less than RDNA2 was impossible all along; RDNA1.5 never made sense.
 
Last edited:

Panajev2001a

GAF's Pleasant Genius
You obviously don't know how data is accessed on I/O for drives and how it's data-dependent. Or even watch the metrics of a drive while it's being read from or written to, or read an analysis or two on drive performance.

Sure... and they say it's PS fans who overreact :rolleyes:.

You can go on making your own assumptions about me, or realise that my point was actually similar: I agree theoretical peaks are theoretical for a reason. There are issues that may prevent peak performance from being achieved. So? "Concern"?

Do we know for either drive whether that is a theoretical max or an estimated sustained figure? Likely theoretical, but how close?
There's no reason to believe that sustained performance will be much lower in either console, especially in a fixed-spec console with dedicated co-processors and memory, as well as low-level APIs that allow developers to make efficient use of the SSD HW at hand.
 
Again you say that but then you follow up with this:

You apparently agree only to disagree in your follow-up. Let's be clear so I'm not misinterpreting you:
Asynchronous compute benefits both systems, and the best-case utilization for XSX will net 21% higher resolution, all settings being equal. Do you agree or not? We can take the discussion from there; otherwise we are just going in a loop.

I agree, which is why I find it weird you would imply the XSX GPU has some extra features that will make the gap bigger due to some bottleneck in PS5.
Based on what we know, the best-case scenario for XSX is to run at 21% higher resolution under heavy asynchronous compute utilization.

That's right. Everything I have made mention of regarding GPU asynchronous compute applies to both systems. What I have been saying is that the XSX has more headroom for such tasks with all else being equal regarding visual fidelity to PS5, since it has the larger GPU.

But the larger emphasis of my point here has been to illustrate how beneficial GPGPU asynchronous compute and programming will be next gen due to efficiency gains in architecture, dev familiarity, and coding techniques and algorithms centered around the task. Plus, there is the very real likelihood MS and Sony have developed a lot of tools to enable third parties easier means of targeting that type of taskwork on their systems.

I want to stress that because usually when people bring up the GPU percentage delta they are only doing so in relation to resolution, but that completely ignores the role asynchronous compute will play in game design foundations next gen. Arguably more than the SSDs, imho, but that would not be me downplaying the SSDs, just putting it all into perspective.

I agree, which is why I find it weird you would imply the XSX GPU has some extra features that will make the gap bigger due to some bottleneck in PS5.
Based on what we know, the best-case scenario for XSX is to run at 21% higher resolution under heavy asynchronous compute utilization.

Because there's actually a pocket of people who don't seem to know the extent of XSX's own customizations, but are intrigued with PS5's. Which is cool and everything (I'm intrigued by those very same things, too), but when they go to compare those with XSX, the latter gets misrepresented. I'm only speaking about a small slice of people, btw, but it happens.

For example, some people bring up the PS5's GPU cache scrubbers like they are a revolutionary feature, but they don't know what cache scrubbers actually are or what they do. It's another way of enforcing memory cleaning, just at a local cache hierarchy. But some of these same people seem to forget XSX has ECC memory for the main RAM, which essentially serves a very similar purpose, only at a different level of the memory hierarchy.

I wasn't suggesting XSX has customizations that'd increase the gap due to PS5 bottlenecks; what I suggested was that there could be some modifications (or customizations, however you want to word it) to the GPU that, alongside other aspects of the system and depending on how they all work together (either outright or with any level of precision by developers), result in efficiency throughput a bit higher than what the paper numbers regarding the percentage delta convey.

It's fair to speculate this IMHO because it's really no different than what many people are already doing with speculation on the SSDs, but I am keeping things balanced out here. Not trying to imply it would double the delta or any nonsense to that degree. Possibly some margin of error (2-3%) at most, and hey that could swing in PS5's favor for reducing that 17% - 21% delta as well.

Either way the delta remains, but my argument has never really been focused on the delta itself or how big or small it is. It's just been a reference to illustrate that it exists, and how whatever system has the advantage there can benefit from it while still maintaining parity with the other system in aspects outside of that metric.

He is saying that it's better to wait and see how things impact the differences between games. There are a lot of extra functions that can greatly affect performance on each machine, and we don't know much about them. Because of this it's also not a good idea to map a percentage difference in flops to resolution, because it doesn't necessarily work like that. I expect all the extra functions on PS4 Pro (which were underutilized) to become the norm (some of them can greatly improve resolution) and actually be used now (and maybe simplified), plus whatever was added for PS5. And who knows what MS also included, or whether they made optimizations, maybe a specific DX12 for their console given that it's closed hardware.

They have already made some of these customizations, in fact. For example, there are features in BCPack that are specific to XSX and aren't present on the PC side. They have 256-object group support in their implementation of mesh shading for XSX, a limit twice the max size Nvidia's cards support. And they have already said the DX12U stack for XSX will include a lot of customizations made specifically for the console and its hardware/featureset.

I think if a lot of people weren't so obsessed with TFLOPs when MS announced some of this stuff, we wouldn't have the false narrative of XSX being "off the shelf" or "brute forcing" a solution that seems to have picked up traction in certain parts. But ultimately, it's up to MS to communicate those sorts of things more clearly and in a way that places them front and center, because most people won't be bothered to go digging for the info on their own, or to visit disparate locations to get it, either.



Info comes from this source mostly.


And then people clearly recognize the one that's from Digital Foundry.


I think both systems are going to be very big evolutions on this same concept, even if they have different implementations of it at various points. Which is a very exciting prospect.

While I personally still wish someone had gone with 3D XPoint persistent memory, I understand why they didn't. Same goes for MRAM, which is only really available in super-small capacities, mainly for embedded systems. I think those technologies will be featured prominently in the mid-gen refreshes of PS5 and XSX, though, which will be very exciting.
 
Last edited:
So I find it interesting that some fans want to point out how Xbox Series X has more Teraflops but PS5 may have fewer bottlenecks, and I made those points to illustrate how it is the PS5 (not Xbox Series X) that will reach those limits before Xbox Series X does. Your argument, however, is to downplay every single advantage as no big deal, as if to defend PS5's honor.


Ok let me summarize again by component.

CPU
Both PS4 and Xbox One were bottlenecked by their CPUs, PS4 more so than XB1.
That will not be the case with PS5 and Xbox SX; however, if a CPU bottleneck were to occur, it would happen first on PS5.

Memory Bandwidth
Both PS5 and XSX are using GDDR6; however, XSX is using a 320-bit bus and has arranged 10GB of its memory to run at 560GB/s bandwidth versus PS5's 448GB/s. The most demanding PC games running at 4K with everything set to ultra show 3-4GB of VRAM consumption; it used to be half that or less when we were running games at 1080p or 1440p, and the only reason it has doubled in the past few years is that we're running games at 4K now. So we cannot expect that number to double anytime soon if we're still running games at 4K, unless we start going towards 8K, and I don't see either console going for 8K resolution. But even playing devil's advocate and assuming VRAM figures will double despite staying at 4K, 8GB of VRAM usage still falls within the 10GB of XSX's high-bandwidth memory allocation. So once again, PS5 will be the first to hit a wall in terms of memory bandwidth. The only thing to consider would be if we ever exceeded 10GB; say a game used 12GB of VRAM. Although the PS5 has 16GB of total unified memory, it will still reserve 3.5GB for its OS if it follows the PS4's footsteps. That leaves 12.5GB, but the CPU and audio consume memory, so at the end of the day, you will also have about 10GB left for VRAM, albeit at the slower 448GB/s.

ROPs
ROPs affect fillrate, and despite the clock difference XSX has a higher fillrate than PS5 because it has 80 ROPs vs 64. So if there were ever a bottleneck due to fillrate, PS5 would hit that wall first.

Maybe you forgot that audio on the PS5 has its own SPU pool without using the CPU. Also, wasn't the PS4 using separate RAM for its OS? I presume the PS5 would be doing the same. And with the speed of the SSD it wouldn't need that much VRAM, so in theory PS5 could use at least 14GB. And Xbox did this before on the Xbox One with its split RAM; look how that turned out.

The Xbox is going to hit the wall before the PS5 due to its split RAM.
 
Last edited:

Panajev2001a

GAF's Pleasant Genius
For example, some people bring up the PS5's GPU cache scrubbers like they are a revolutionary feature, but they don't know what cache scrubbers actually are or what they do. It's another way of enforcing memory cleaning, just at a local cache hierarchy. But some of these same people seem to forget XSX has ECC memory for the main RAM, which essentially serves a very similar purpose, only at a different level of the memory hierarchy

https://people.csail.mit.edu/emer/papers/2004.03.prdc.cache_scrub.pdf

The way Cerny was talking about it and the coherency engines, it seemed to be about smart cache line invalidation, not error recovery/data integrity, which is what ECC is for with a wide, high-frequency GDDR6 channel like XSX has. Although yes, in the literature we do have scrubbers as a way to help with error correction.
 
Last edited:

DForce

NaughtyDog Defense Force
I'm allowed to have my own takeaways that don't line up with everything someone like Cerny says, you know. That doesn't mean I'm disrespecting them on any level, just that I have my doubts. Skepticism is normal and healthy to have as long as it's not ill-spirited.

For sure Cerny and his team have done their research, but there are always multiple ways of solving a lot of the same problems. They could've taken one method while MS has taken another method, both methods being roughly similar in how to solve the problems Cerny touched on, just with different means utilized to arrive at this. This is normal in life, especially in fields like technology.

Basically, Cerny's word and research have a lot of worth, but he is not the absolute authority on what is considered a viable solution and what is not. Rather, he is simply one of many. And while myself and everyone else here might not be in Cerny's position or at his level, we should be free to speculate even if, yes, that means having doubts about certain claims from people like Cerny himself. All done respectfully, of course (hopefully).

You're right; we don't know the exact memory setup with PS5...but we don't know everything with XSX's memory setup, either. And regardless of what we find out, there are still inherent limitations on that particular idea he mentioned which I feel should be taken into account. Again, healthy speculation, no harm no foul.


You know what's funny? You just quoted a post that has to do with Xbox's advantage in GPU performance, even though some of the numbers are not confirmed.

The numbers have not been confirmed, but you overlooked that and pointed out that's what NX Gamer said in his video. You're judging based off numbers alone.

Yet when Mark Cerny mentions the streaming speed of his SSD, with numbers, you become skeptical. You never gave a reason why you doubt the numbers he provided when it comes to pure SSD speed.


It's not hard to tell when someone is biased.
 

SonGoku

Member
What I have been saying is that the XSX has more headroom for such tasks with all else being equal regarding visual fidelity to PS5, since it has the larger GPU.
I actually agree with this, and expect PS5 to reach higher utilization early on, with devs reaching higher utilization on the XSX GPU by mid-gen.
But it's also important to point out that the 21% resolution delta already accounts for this.
Because there's actually a pocket of people who don't seem to know the extent of XSX's own customizations, but are intrigued with PS5's. Which is cool and everything (I'm intrigued by those very same things, too), but when they go to compare those with XSX, the latter gets misrepresented. I'm only speaking about a small slice of people, btw, but it happens.
Based on the information available and common sense, I expect both GPUs to be at near feature parity, with both having specific customizations to make the most out of their specific APU setups.
People forget MS and Sony are both working with AMD; they have access to the same intellectual resources, and there are no magic secret optimizations that only one party has access to. They just had different priorities: Sony was content with a next-gen-capable, powerful but small GPU and focused on going above and beyond with I/O. MS was content with next-gen-capable fast I/O and focused on taking the performance crown (which they have) with a big and powerful GPU.
I want to stress that because usually when people bring up the GPU percentage delta they are only doing so in relation to resolution, but that completely ignores the role asynchronous compute will play in game design foundations next gen. Arguably more than the SSDs, imho, but that would not be me downplaying the SSDs, just putting it all into perspective.
This is where you lost me again... Asynchronous compute will help XSX realize its compute advantage, not surpass it.
Keep in mind PS5 is RDNA2 too; all devs have to do to free enough resources to match the XSX output (including asynchronous compute) is drop resolution by 21%.

The practical in-game difference will always be a resolution difference, because no matter how hard the XSX is pushed, you can rest assured the PS5 will be pushed just as hard if not more.
It's fair to speculate this IMHO because it's really no different than what many people are already doing with speculation on the SSDs, but I am keeping things balanced out here. Not trying to imply it would double the delta or any nonsense to that degree. Possibly some margin of error (2-3%) at most, and hey that could swing in PS5's favor for reducing that 17% - 21% delta as well.
Sure, if what you mean is a few percent increase/decrease to account for edge cases, that's totally reasonable. I was thinking of exaggerating the gap to, say, 30%, or conversely shrinking it to only 5%, due to unforeseen developments. Both scenarios are wishful thinking, not based on numbers or info.
 
Last edited:

dano1

A Sheep
I used to think it was ultimately about the games?

Well, for the most part that is true. But I only bought Nintendo consoles until they decided not to keep up with the competition! They make some of the best games, but I also want next-gen graphics, and they are always two generations behind. That's not acceptable!
I've been on PlayStation since my SNES and haven't looked back. If Xbox were twice as powerful as the PlayStation I would probably switch, but I never see that happening. Can't wait for November!
 
since PS5 can't have both at max at the same time.
Actually it can; it just depends on the workload.
But some of these same people seem to forget XSX has ECC memory for the main RAM,
ECC memory tends to be more expensive, and I don't see why it'd be needed with just 16GB of RAM and non-datacenter workloads. It might simply be there since the same h/w is apparently going into servers.
 