
PS5 vs Xbox Series X ‘Secret Sauce’ – SSD Speed And Velocity Architecture

 

Shmunter

Member
Look, guys, the PS5's SSD configuration (a 12-channel controller) is way faster than the Xbox Series X's measly 3, and there's no two ways around it. It's a massive difference, no matter how much you say "architecture differences, VELOCITY" and all that. The PS5 will have its own efficiencies and architectural nuances that are even better than the XSX's in those regards; even developers are saying the same thing, that the PS5 is superior to the XSX in many ways.

Just like the XSX GPU is superior to the PS5's, by a slight margin.

We should all be happy that this generation will have the smallest differences ever between consoles. The exception is the SSD: there the difference is big, and it will show up in Sony's exclusive games.
I would go as far as to say that if there is third-party parity in the level of on-screen detail, the PS5 is being held back. No different from parity between the two in pixel resolution and IQ, where the XSX is being under-utilised. Both have their strengths.

Lots of discussion relating SSD to RAM. The fact is, the next-gen SSDs are nothing more than a huge scaling-up of today's engine streaming techniques, with less reliance on workarounds.

Faster streaming simply means more RAM available to the real-time visible scene, because smaller buffer reserves are needed for things off screen. This translates into the availability of higher-quality assets, whether textures, variety, or animation: anything that makes up the game. And yes, these assets still need to be rendered within the capability of the CPU/GPU, and no, even the PS5's monster SSD is not fast enough to saturate the GPU. It's just about that ratio of buffer RAM to visible-scene RAM, that's it.
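A rough sketch of that buffer-vs-visible-scene trade-off (the 13.5 GB game-usable pool, the 6 GB "could become visible soon" set, and the half-second reaction window are illustrative assumptions, not confirmed specs):

```python
# Rough model of the buffer-vs-visible-scene trade-off described above.
# All figures are illustrative assumptions, not confirmed console specs.

def visible_scene_ram(total_gb, potential_set_gb, ssd_gbps, react_s):
    """RAM left for on-screen assets after reserving an off-screen buffer.

    potential_set_gb: data that could become visible within react_s seconds.
    Anything the SSD can fetch within react_s needn't be pre-buffered.
    """
    on_demand_gb = ssd_gbps * react_s            # loadable just-in-time
    buffer_gb = max(potential_set_gb - on_demand_gb, 0.0)
    return total_gb - buffer_gb

# Hypothetical 13.5 GB game-usable pool, 6 GB of "might be visible soon"
# data, and a half-second reaction window:
for ssd_gbps in (0.1, 2.4, 5.5, 9.0):            # HDD-ish up to compressed SSD
    scene = visible_scene_ram(13.5, 6.0, ssd_gbps, 0.5)
    print(f"{ssd_gbps:4.1f} GB/s -> {scene:5.2f} GB left for the visible scene")
```

The faster the drive, the smaller the reserve for off-screen data, so more of the same RAM pool goes to what's actually on screen.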
 

Dante83

Banned
The PS5 may have faster load times, but that can't make up for the weaker CPU/GPU, less efficient Kraken compression, and slower RAM. The XSX is still the more powerful console overall. Why people are still debating this is beyond me...
 

DForce

NaughtyDog Defense Force
The PS5 may have faster load times, but that can't make up for the weaker CPU/GPU, less efficient Kraken compression, and slower RAM. The XSX is still the more powerful console overall. Why people are still debating this is beyond me...

The GPU is only about 18% faster and the CPU less than 3%. That's like saying an RTX 2070 Super is very weak in comparison to an RTX 2080 Super.
 

Kagey K

Banned
The PS5 may have faster load times, but that can't make up for the weaker CPU/GPU, less efficient Kraken compression, and slower RAM. The XSX is still the more powerful console overall. Why people are still debating this is beyond me...
Because we haven't seen any games yet (specifically multiplatform games). Worse yet, even when we do, the results might be debatable if developers choose to optimize for one platform over the other.

The proof will be in the pudding. We just have to wait and see what that pudding looks like in real life, instead of on paper.
 

Tamy

Banned
The GPU is only about 18% faster and the CPU less than 3%. That's like saying an RTX 2070 Super is very weak in comparison to an RTX 2080 Super.

PS5 - XBSX

Cores: 2304 - 3328
TMUs: 144 - 208
ROPs: 64 - 80
Bus: 256-bit - 320-bit

That's a substantial difference between the two platforms, for those trying to push that the only difference is 18% and limited to CU count.

44% more cores.
44% more TMUs.
25% more ROPs.
25% more memory bus.

And what's also very important:

You can still do a lot more work with 2 TF of RDNA2 than you can with 500 GF of GCN.

Anyway, we will see as soon as the games arrive! But don't act like it's just an "18%" difference. Also, keep in mind that PS5 clocks are variable.
 

Dory16

Banned
I would go as far as to say that if there is third-party parity in the level of on-screen detail, the PS5 is being held back. No different from parity between the two in pixel resolution and IQ, where the XSX is being under-utilised. Both have their strengths.

Lots of discussion relating SSD to RAM. The fact is, the next-gen SSDs are nothing more than a huge scaling-up of today's engine streaming techniques, with less reliance on workarounds.

Faster streaming simply means more RAM available to the real-time visible scene, because smaller buffer reserves are needed for things off screen. This translates into the availability of higher-quality assets, whether textures, variety, or animation: anything that makes up the game. And yes, these assets still need to be rendered within the capability of the CPU/GPU, and no, even the PS5's monster SSD is not fast enough to saturate the GPU. It's just about that ratio of buffer RAM to visible-scene RAM, that's it.
We'll make sure to ask you which system is being held back when Digital Foundry starts revealing the performance disparities.
Preemptive damage control, smh.
 

Bo_Hazem

Banned
Let's cut this short with facts:

Xbox Series X (11 sec) vs Xbox One X (51 sec): State of Decay 2. The difference is only 4.6x.

PS5 in pre-devkit form (one year ago, 0.8 sec) vs PS4 Pro (8 sec): Spider-Man. The difference is 10x, with WIRED reporting 0.8 vs 15 sec on another test (18x) and questioning whether the 0.8 sec included other things happening inside the system before loading. Plus, it's reported to be a slower version of the hardware. No need to take all that talk as anything; just pay attention to the actual videos we can see:

 

Lort

Banned
As the original post says:
Xbox GPU 15-30% faster
Xbox SSD 30-50% slower

That's just the facts.
 

Kagey K

Banned
Let's cut this short with facts:

Xbox Series X (11 sec) vs Xbox One X (51 sec): State of Decay 2. The difference is only 4.6x.

PS5 in pre-devkit form (one year ago, 0.8 sec) vs PS4 Pro (8 sec): Spider-Man. The difference is 10x, with WIRED reporting 0.8 vs 15 sec on another test (18x) and questioning whether the 0.8 sec included other things happening inside the system before loading. Plus, it's reported to be a slower version of the hardware. No need to take all that talk as anything; just pay attention to the actual videos we can see:



I believe.
 

Bo_Hazem

Banned
I’m not going to argue it, only time will tell.

No need to. I just wanted to throw out what we have as solid, visible tests and to rethink all the PR talk from both sides. It's always safer to wait and see actual head-to-head comparisons of the same games.
 

sinnergy

Member
[Slide from Cerny's Road to PS5 presentation]



This is from Cerny's talk; it shows the same speed as Series X... 5 GB/s for the SSD, yet on the next slide they talk about 5.5 GB/s (at 7:37 in Cerny's presentation). What's that about??

MS has theirs at up to 6 GB/s.
 

pawel86ck

Banned
Or maybe that's just the difference between random read speeds and the sequential speeds the manufacturers quote, which never actually occur in games?
I don't think so, because the difference is too big. An MS engineer told Dealergaming how fast the XSX loads a Sea of Thieves level, and it was only 3-5 seconds. Later, Dealer ran a comparison on his PC with a 970 Evo SSD (3.5 GB/s), and the same game needed 15 seconds. So the XSX SSD, despite being slower (2.4 vs 3.5 GB/s raw speed), loads the same game 3x faster. Only a decompression bottleneck on the PC can explain such a big difference. And keep in mind, the PS5's SSD will be even faster :p
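A minimal sketch of why that can happen: a load pipeline runs at the rate of its slowest stage, so a slower drive with a hardware decompressor can beat a faster drive that decompresses on the CPU. The 4 GB level size and the ~1 GB/s CPU decompression figure are assumptions for illustration, not measured values:

```python
# First-order model: effective load rate = min(drive speed, decompression
# speed). Level size and the CPU decompression rate are assumed figures.

def load_time_s(compressed_gb, drive_gbps, decomp_gbps):
    return compressed_gb / min(drive_gbps, decomp_gbps)  # slowest stage wins

level_gb = 4.0
print(f"XSX (2.4 GB/s drive, HW decompressor): {load_time_s(level_gb, 2.4, 4.8):.1f} s")
print(f"PC  (3.5 GB/s drive, ~1 GB/s CPU zlib): {load_time_s(level_gb, 3.5, 1.0):.1f} s")
```

Under those assumptions the console is roughly 2-3x faster despite the slower raw drive, in line with the times quoted above.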
 

sinnergy

Member
I don't think so, because the difference is too big. An MS engineer told Dealergaming how fast the XSX loads a Sea of Thieves level, and it was only 3-5 seconds. Later, Dealer ran a comparison on his PC with a 970 Evo SSD (3.5 GB/s), and the same game needed 15 seconds. So the XSX SSD, despite being slower (2.4 vs 3.5 GB/s raw speed), loads the same game 3x faster. Only a decompression bottleneck on the PC can explain such a big difference.
That's because consoles are a closed box and have more customization; it has always been the case, maximizing performance.
 

DForce

NaughtyDog Defense Force
PS5 - XBSX

Cores: 2304 - 3328
TMUs: 144 - 208
ROPs: 64 - 80
Bus: 256-bit - 320-bit

That's a substantial difference between the two platforms, for those trying to push that the only difference is 18% and limited to CU count.

44% more cores.
44% more TMUs.
25% more ROPs.
25% more memory bus.

And what's also very important:

You can still do a lot more work with 2 TF of RDNA2 than you can with 500 GF of GCN.

Anyway, we will see as soon as the games arrive! But don't act like it's just an "18%" difference. Also, keep in mind that PS5 clocks are variable.
Part of your information is coming from www.techpowerup.com, with unconfirmed numbers.

If you're going to post numbers, at least bring confirmed numbers and not guesses.

This was pointed out in another thread, but for some reason you still posted them.

We also don't know what PS5's memory setup is.
 

Panajev2001a

GAF's Pleasant Genius
[Slide from Cerny's Road to PS5 presentation]



This is from Cerny's talk; it shows the same speed as Series X... 5 GB/s for the SSD, yet on the next slide they talk about 5.5 GB/s (at 7:37 in Cerny's presentation). What's that about??

MS has theirs at up to 6 GB/s.

The target was 5 GB/s uncompressed; they overshot slightly, reaching 5.5 GB/s. Kraken compression can bring that to an average of 8-9 GB/s effective once you transmit compressed data and calculate the effective data rate post-decompression. The Kraken decoder's max decompression rate is about 22 GB/s.
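The 8-9 GB/s figure is just the raw rate times the average compression ratio; a quick check (the ~1.45-1.64x ratios here are back-calculated from those numbers, not separately confirmed):

```python
# Effective throughput = raw read rate x average compression ratio.
# The ratios below are back-calculated from the 8-9 GB/s figure.

raw_gbps = 5.5
for ratio in (1.45, 1.64):
    print(f"ratio {ratio:.2f}x -> {raw_gbps * ratio:.1f} GB/s effective")
# The 22 GB/s figure is the decoder's ceiling on highly compressible
# data, not an average you'd see across a whole game.
```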
 

temroi

Neo Member
[Slide from Cerny's Road to PS5 presentation]



This is from Cerny's talk; it shows the same speed as Series X... 5 GB/s for the SSD, yet on the next slide they talk about 5.5 GB/s (at 7:37 in Cerny's presentation). What's that about??

MS has theirs at up to 6 GB/s.
This is the bandwidth uncompressed.
For the Xbox, the ~5 GB/s figure is the bandwidth when compressed. Otherwise it's around 2.4 GB/s.
 
because after the HDD has loaded the game into RAM, its advantage ends. All this SSD, SSD, SSD shit, have you been living under a rock?

Current games stream data all the time; it's a necessity, because you cannot keep all of the assets and their LODs in RAM unless you are running an indie game small enough to fit entirely in RAM. If you cannot compensate with speed, you have to compensate with size, for example a cache to mitigate slow streaming. The seek time is another advantage: it allows using less RAM to store assets (assuming the bandwidth is enough for their size).

Different strategies.
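A rough sketch of the seek-time point: a mechanical drive pays on the order of 10 ms per seek, so its effective throughput collapses on small random reads, which is exactly why HDD-era engines duplicate data and keep big RAM caches. The figures are typical ballpark values, not specs:

```python
# Effective throughput for reads of a given chunk size when each read
# costs one seek. Ballpark figures: 150 MB/s HDD with ~10 ms seeks vs
# a 5.5 GB/s SSD whose seek cost is negligible by comparison.

def effective_mbps(seq_gbps, seek_s, chunk_mb):
    chunk_gb = chunk_mb / 1024.0
    return 1024.0 * chunk_gb / (seek_s + chunk_gb / seq_gbps)

for chunk_mb in (1, 8, 64):
    hdd = effective_mbps(0.15, 0.010, chunk_mb)
    ssd = effective_mbps(5.5, 0.0001, chunk_mb)
    print(f"{chunk_mb:3d} MB chunks: HDD {hdd:6.1f} MB/s vs SSD {ssd:7.1f} MB/s")
```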
 

93xfan

Banned
The proof will be in the pudding. We just have to wait and see what that pudding looks like in real life, instead of on paper.

Can't wait for Digital Foundry to clear this up. The proof of the pudding will be in the tasting.
 

ZywyPL

Banned
PS5 is 10 times faster to load a tech demo designed to demonstrate fast loading.
Xbox SX is 4.6 times faster to load an entire game, not optimized for SSD.

I know which is more impressive.

Actually, both show that neither 4,800% nor 9,000% more bandwidth vs an HDD scales linearly into loading-time reduction.
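That non-linearity falls out of simple arithmetic: part of a load is fixed CPU and setup work that no drive accelerates. Splitting State of Decay 2's 51 s into roughly 8 s of fixed work plus 43 s of I/O is an assumption for illustration, not a measured breakdown:

```python
# Amdahl-style model: total = fixed work + I/O time / speedup.
# The 8 s fixed / 43 s I/O split is assumed, not measured.

fixed_s, io_s = 8.0, 43.0
for speedup in (1, 10, 40, 90):
    total = fixed_s + io_s / speedup
    print(f"{speedup:3d}x faster I/O -> {total:5.1f} s "
          f"({(fixed_s + io_s) / total:4.1f}x overall)")
```

Even a 40-90x bandwidth jump only yields a ~5-6x overall reduction under this split, which is the ballpark both consoles' demos showed.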
 

Three

Member
[Slide from Cerny's Road to PS5 presentation]



This is from Cerny's talk; it shows the same speed as Series X... 5 GB/s for the SSD, yet on the next slide they talk about 5.5 GB/s (at 7:37 in Cerny's presentation). What's that about??

MS has theirs at up to 6 GB/s.
The "at least" is uncompressed which is why it is an 'at least' value I believe. MS is an 'up to' value with compression.
 
Sure, there are limiting factors, but it also depends on how the data is streamed. Cerny's idea is to stream data as the player is turning. There's only one console that can stream data that fast, and there will be limiting factors based on pure speeds. This is also not factoring in their I/O setup, which they went to great lengths with to stop any sort of bottleneck.


But what if they're able to use more memory, as NX Gamer suggested? Xbox using split memory could become a factor as the generation goes on.

These are mere assumptions. We don't have established, real-world proof of what speeds are required to stream data in as the player turns without texture pop-in or immersion breaking. Many are assuming 5.5 GB/s or more, when that may or may not be the case.

NX Gamer's suggestion on the memory was in reference to compacting OS data to the SSD when it's not needed at runtime. However, you CAN'T compact all OS data, like the kernel, this way; otherwise the system will be unstable and crash. A good deal of OS data has to be resident in main RAM at all times, because the system components need that data as quickly as possible (not to mention, games will be using system services that therefore need to be in RAM). As well, they need a level of granularity and alterability that the NAND modules on the SSD won't provide, and a way of delivering that data which PCIe cannot provide.

The idea of offloading chunks of the OS to the SSD works better when talking about the product of files and services that have had to run and won't be expected to run again for a while. For example, you could package all of the relevant services into a compressed file, put it at a specified location on the SSD, then load it again and decompress it into main memory when it's needed.

But even THAT doesn't have a ton of realistic use cases. For starters, when OSes install programs they tend to group all of the program's registry entries, .dll files (using Windows as the reference here, but this is something all OSes tend to do), image files, etc. very close to each other, in the same relevant location, contained in folders and structured in a hierarchy. They aren't installing Program A's image files in Program Z's folders, for example. This kind of hierarchy exists specifically to ensure faster loading of program data and to prevent errors from an inability to find key files at expected places, lest the program need to search for the file (and that depends on whether it knows where to search as a contingency, and IF it can even do this, since that sort of thing happens more at the OS level and would be run through a program-agnostic OS utility, usually when the program is not running).

All OSes do this, and it also relates to how those program files are written to storage devices when installed or modified. That's why file I/O exists; you don't want to write Program A's image files to a block sector of the drive where Program Z's files are being kept, because it'll increase the read time, and you want any program's contents to load as quickly as possible. That is best accomplished through a combination of fast enough hardware accessing (and providing) the data AND smart organization of the data on the drive. NX Gamer's idea would be more pertinent if the system in question had poor file-management organization, and at that point the system has MUCH bigger problems than offloading OS data to the SSD would address.

But for the very small set of use cases where such an idea could work, both systems have SSDs and I/O fast enough to facilitate it. I just don't see it being particularly useful outside of fringe situations.

But the gap was bigger... XSX doesn't have the same headroom; its advantage will be much less apparent than PS4/XB1, and that's before factoring in the diminishing returns of resolutions in the region of 4K. You can't possibly expect the same difference in output. Dismissing percentages to focus on flops differences between generations (500 GF & 2 TF) is meaningless without taking proportions into account.

When I said parity I meant everything from visuals to physics, particle effects, etc. All it would take to produce the same results (visual and compute oriented) is to run at 17-21% lower resolution.

Asynchronous compute techniques will also benefit PS5, though... The best-case scenario for XSX with asynchronous compute is that it reaches the same level of GPU utilization as PS5, which is again a 21% gap at best. Running at 21% lower resolution would ensure it can reach settings parity.

Even taking what you've said, the XSX has a GPU compute advantage with all things considered equal. You even admit that yourself when saying PS5 will be capable of those same features (it will), but to match them it will have to lower resolution (and possibly framerate).

It will come down to developer preference and what the game specifically benefits from. But at the end of the day the XSX will hold the advantage in that department due to having a bigger GPU. What I'm more interested in is seeing the efficiency gains in developer programming techniques targeted at GPGPU compute and the leaps in algorithms, scripts, coding techniques, capability, etc. of logic centered on GPGPU asynchronous compute. You will be able to get a lot more out of, say, 300 GF's worth in the coming gen than we got from the equivalent amount last gen, due to more dev familiarity, much better engine scalability (and fragmentation of engine components for asynchronous scaling), better API tools and features targeting such capabilities, and efficiency gains in the RDNA2 architecture.

THAT is why I've said taking the percentage deltas alone is meaningless on its own: in practice, with both systems at visual parity, and simply using GPGPU programming metrics from the GCN architecture, you would most likely get a good deal more than just a 17-21% asynchronous-compute delta once you consider efficiency scaling, IF devs were simply using unaltered code, scripts, algorithms, etc. designed with GCN in mind. Now, obviously that's not an exactly correct way of looking at it, but the point is to illustrate the potential level of improvement in code, scripts, algorithms, etc. for asynchronous compute tasks moving forward, from the confluence of factors mentioned in the paragraph above. Things that will potentially allow devs to do more tasks with fewer cycles.

Essentially, it's an advantage similar to the one the PS5's SSD has over the XSX's SSD. It doesn't mean the other system is incapable of the same essential functions, but it does mean it may have to sacrifice a few things to match the other in technical or feature capability on that particular component (SSD, GPGPU asynchronous compute, etc.), or cleverly use some combination of other system resources to simulate the function. There are limits to this, of course, but there is some wiggle room on certain things.
 

Ascend

Member
I don't think so, because the difference is too big. An MS engineer told Dealergaming how fast the XSX loads a Sea of Thieves level, and it was only 3-5 seconds. Later, Dealer ran a comparison on his PC with a 970 Evo SSD (3.5 GB/s), and the same game needed 15 seconds. So the XSX SSD, despite being slower (2.4 vs 3.5 GB/s raw speed), loads the same game 3x faster. Only a decompression bottleneck on the PC can explain such a big difference. And keep in mind, the PS5's SSD will be even faster :p
I don't know... Decompression time depends on the amount of compression and the file size, all other things being equal. The XSX version has not been upgraded compared to the original Xbox One version, right? So even though it might be a decompression bottleneck, if assets like textures are higher resolution on the PC than in the console version, they will naturally take longer to compress/decompress and will therefore load more slowly, independent of the SSD speeds. I don't know how the assets actually compare; I'm just pointing this out.
 

DForce

NaughtyDog Defense Force
These are mere assumptions. We don't have established, real-world proof of what speeds are required to stream data in as the player turns without texture pop-in or immersion breaking. Many are assuming 5.5 GB/s or more, when that may or may not be the case.

So now Cerny is just assuming how it will work?

You guys are so quick to discredit everything. It's getting ridiculous at this point.


NX Gamer's suggestion on the memory was in reference to compacting OS data to the SSD when it's not needed at runtime. However, you CAN'T compact all OS data, like the kernel, this way; otherwise the system will be unstable and crash. A good deal of OS data has to be resident in main RAM at all times, because the system components need that data as quickly as possible (not to mention, games will be using system services that therefore need to be in RAM). As well, they need a level of granularity and alterability that the NAND modules on the SSD won't provide, and a way of delivering that data which PCIe cannot provide.

The idea of offloading chunks of the OS to the SSD works better when talking about the product of files and services that have had to run and won't be expected to run again for a while. For example, you could package all of the relevant services into a compressed file, put it at a specified location on the SSD, then load it again and decompress it into main memory when it's needed.

But even THAT doesn't have a ton of realistic use cases. For starters, when OSes install programs they tend to group all of the program's registry entries, .dll files (using Windows as the reference here, but this is something all OSes tend to do), image files, etc. very close to each other, in the same relevant location, contained in folders and structured in a hierarchy. They aren't installing Program A's image files in Program Z's folders, for example. This kind of hierarchy exists specifically to ensure faster loading of program data and to prevent errors from an inability to find key files at expected places, lest the program need to search for the file (and that depends on whether it knows where to search as a contingency, and IF it can even do this, since that sort of thing happens more at the OS level and would be run through a program-agnostic OS utility, usually when the program is not running).

He referenced caching idle parts of RAM out to the SSD. Every bit of freed RAM would have a real-world performance benefit. It's purely speculation how freeing up more memory is possible, since we still don't know how it's actually set up, but giving devs extra room to work with is always beneficial no matter what.

There's a clear trend with members who support XboxGAF on here. When XB has an advantage on paper, it's always made clear, but when it's the PS5's, it's always wait-and-see, because the advantages may be nullified.
 

SonGoku

Member
Even taking what you've said, the XSX has a GPU compute advantage with all things considered equal. You even admit that yourself when saying PS5 will be capable of those same features (it will), but to match them it will have to lower resolution
But at the end of the day the XSX will hold the advantage in that department due to having a bigger GPU
I always acknowledged the XSX comes out on top; what I said is that the PS5 can replicate the exact same games (visually and compute) at 17-21% lower resolution. This 21% difference (at best) will be much less noticeable than the differences between PS4/XB1.
What I'm more interested in is seeing the efficiency gains in developer programming techniques targeted at GPGPU compute and the leaps in algorithms, scripts, coding techniques, capability, etc. of logic centered on GPGPU asynchronous compute. You will be able to get a lot more out of, say, 300 GF's worth in the coming gen than we got from the equivalent amount last gen, due to more dev familiarity, much better engine scalability (and fragmentation of engine components for asynchronous scaling), better API tools and features targeting such capabilities, and efficiency gains in the RDNA2 architecture.
I think this argument is flawed, because the PS5 is RDNA2 too, so the proportions remain the same, and 17-21% already assumes ideal use of resources, which won't always be the case, so the in-game advantage could be even lower.
you would most likely get a good deal more than just a 17-21% asynchronous-compute delta
This is patently wrong; neither console can go over its theoretical peak, which is the best-case scenario. Both consoles share the same arch, and asynchronous compute will benefit PS5 all the same.
Even if a game went crazy with fine-grained asynchronous compute specifically designed to max out the XSX GPU, the absolute best you can hope for is that the XSX reaches the same level of VALU utilization as the PS5, which again would translate to 21% higher resolution at the exact same settings (visual and compute).
Essentially, it's an advantage similar to the one the PS5's SSD has over the XSX's SSD.
I don't think a 1.21x advantage at best is similar at all to a 2.29x advantage. More will have to be sacrificed if a game fully exploits it.
That said, I don't expect third parties to fully exploit the PS5's SSD, so differences will come down to loading and more or less apparent LOD transitions and pop-in. Is this what you meant by a similar advantage in practice, due to devs targeting the XSX SSD as the base?
These are mere assumptions. We don't have established, real-world proof of what speeds are required to stream data in as the player turns,
Cerny gave figures, though, so it's not just assumptions: in the half second it takes to turn, they can load 4 GB of compressed data, which is appropriate for next-gen asset quality.
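The arithmetic checks out against the rates discussed earlier in the thread (treating the turn as exactly half a second):

```python
# Data loadable during a half-second turn at the rates quoted above.

turn_s = 0.5
for label, gbps in (("raw 5.5", 5.5), ("effective 8", 8.0), ("effective 9", 9.0)):
    print(f"{label:12s} GB/s x {turn_s} s = {gbps * turn_s:.2f} GB")
# ~4 GB at the effective (compressed) rate, matching the figure cited.
```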
 

SonGoku

Member
PS5 - XBSX

Cores: 2304 - 3328
TMUs: 144 - 208
ROPs: 64 - 80
Bus: 256-bit - 320-bit

That's a substantial difference between the two platforms, for those trying to push that the only difference is 18% and limited to CU count.

44% more cores.
44% more TMUs.
25% more ROPs.
25% more memory bus.
But you forget that clock speeds affect the equation.
The PS5 has fewer cores, TMUs and ROPs (unconfirmed), but each of those units is doing 22% more work than the respective XSX units, which closes the gap to 18%.
The XSX needs the extra bandwidth to materialize its 18-21% GPU advantage over the PS5.

You can still do a lot more work with 2 TF of RDNA2 than you can with 500 GF of GCN.
The PS5 is RDNA2 too, though... so that 2 TF will translate to 21% higher resolution at best.
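For reference, the 22%-per-unit and 18%-overall figures fall straight out of the announced CU counts and clocks (64 FP32 lanes x 2 ops/cycle per CU is standard for RDNA; the PS5's 2.23 GHz is its variable-clock peak):

```python
# FLOPS scale with CU count x clock; per-unit work scales with clock.

def tflops(cus, ghz):
    return cus * 64 * 2 * ghz / 1000.0   # 64 FP32 lanes, 2 ops/cycle (FMA)

ps5, xsx = tflops(36, 2.23), tflops(52, 1.825)
print(f"PS5 {ps5:.2f} TF, XSX {xsx:.2f} TF -> XSX ahead by {xsx / ps5 - 1:.1%}")
print(f"PS5 per-unit clock advantage: {2.23 / 1.825 - 1:.1%}")
```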
 

Dory16

Banned
Why are PS5 fans arguing about performance? Isn't it more rational to focus on other aspects of a console's value? Even if they are right and the XSX is "only" 10% more powerful in RDNA2 terms, why go on for pages' worth of threads about it? That's like saying you can only knock my teeth out but not knock me unconscious in a fight. If I don't smile, nobody will know.
 

DeepEnigma

Gold Member
Why are PS5 fans arguing about performance? Isn't it more rational to focus on other aspects of a console's value? Even if they are right and the XSX is "only" 10% more powerful in RDNA2 terms, why go on for pages' worth of threads about it? That's like saying you can only knock my teeth out but not knock me unconscious in a fight. If I don't smile, nobody will know.

Only PS5 fans, eh?
 
So now Cerny is just assuming how it will work?

You guys are so quick to discredit everything. It's getting ridiculous at this point.

Cerny is not the only engineer in the world. And nothing's being discredited; if we can question claims from Xbox people like Phil Spencer, we should be able to question claims from Mark Cerny. It's not my fault I've done enough of my own research into this stuff to cast a bit of doubt on certain things he claimed in his presentation (which, mind you, was as much for devs as it was a PR piece).

He referenced caching idle parts of RAM out to the SSD. Every bit of freed RAM would have a real-world performance benefit. It's purely speculation how freeing up more memory is possible, since we still don't know how it's actually set up, but giving devs extra room to work with is always beneficial no matter what.

There's a clear trend with members who support XboxGAF on here. When XB has an advantage on paper, it's always made clear, but when it's the PS5's, it's always wait-and-see, because the advantages may be nullified.

I know what he referenced, and I addressed that. The point is that the use cases for such a task are going to be limited, because a lot of services critical to the OS NEED to be in main RAM, and many games require the presence of those same services and utilities.

He speculated, and I pontificated on that speculation. Nothing more. It has nothing to do with "picking a side" when it comes to choosing when to support paper specs or not, because I theorized a lot of things in terms of advantages PS5 can have over XSX shortly after Road to PS5, and quite a lot of those I still stand by.

However, it's not hard to see that a majority of posters in these threads, mostly hardcore Sony fans, are caping very hard for the PS5 and pecking away at the XSX's advantages as much as possible. Sorry if I want to bring some balance back to next-gen discussions; maybe that has been banned and I didn't get the memo. But I'll ignore it regardless, because I am genuinely interested in giving the systems their due where it's merited. And no, that doesn't mean playing along with fake narratives like "brute forcing" vs "optimization and elegance" or "narrow and fast" vs "wide and slow"; when you look at the systems more deeply, they both embody elements of all of these design philosophies at various levels.

And while you probably think otherwise since I'm being critical of some PS5 performance claims (and SSD claims, which would mean I'm being critical of both systems in that department), I actually am a Sony fan as well and want the best out of the PS5. But I don't need to want the worst for the XSX to do so.

I always acknowledged the XSX comes out on top; what I said is that the PS5 can replicate the exact same games (visually and compute) at 17-21% lower resolution. This 21% difference (at best) will be much less noticeable than the differences between PS4/XB1.

You're attaching an assumed conclusion that was never the point of my argument, however. I already know about the factor of diminishing returns, and a 17-21% difference is smaller than a 35% difference. But that wasn't the focus of the discussion on my end; the point was just to illustrate that achieving parity in that department would require a sacrifice on the PS5's end in another area. That's it.

I think this argument is flawed, because the PS5 is RDNA2 too, so the proportions remain the same, and 17-21% already assumes ideal use of resources, which won't always be the case, so the in-game advantage could be even lower.

That wasn't the takeaway here, either. My purpose was to illustrate the efficiency gains in GPGPU asynchronous-compute tasks going forward, due to dev familiarity, engine scalability improvements, and new and improved algorithms, scripts, coding concepts, etc., and how this will better serve GPGPU asynchronous programming going forward.

Something both systems will benefit from, obviously, but something that comes with an extra benefit to the XSX due to having more GPU headroom, which it can utilize while retaining visual and framerate parity with the PS5 (more or less). Once you get into "well, the percentage delta is still smaller", that's moving away from the main point, because anything regarding the percentage delta can be assumed true without impacting the point already made.

This is patently wrong; neither console can go over its theoretical peak, which is the best-case scenario. Both consoles share the same arch, and asynchronous compute will benefit PS5 all the same.
Even if a game went crazy with fine-grained asynchronous compute specifically designed to max out the XSX GPU, the absolute best you can hope for is that the XSX reaches the same level of VALU utilization as the PS5, which again would translate to 21% higher resolution at the exact same settings (visual and compute).

You mistook this part of my post as well. That part was not meant as an emphasized comparison, but as a way of alluding to how much more efficiently that type and amount of GPU performance for asynchronous tasks can be realized, thanks to architectural, programming, engine scalability, and algorithm improvements. I just happened to use the 17-21% delta as an example, but we've long established that these improvements will benefit both systems regardless of what percentage of their own GPU workload is actually targeted at asynchronous compute.

Speaking of VALU, NX Gamer's GPU video touches on it, with the assumption that both systems will see MUCH better VALU utilization than PS4 and XBO, due to RDNA2 improvements and node efficiency gains. IIRC his numbers were 7.97 TF for the PS5 and 9.42 TF for the XSX, both at 77.5% utilization. That's just one estimate, but I felt it relevant to mention since you brought up VALU.

A bigger question would be whether the PS5's higher clock results in any significant performance benefit, because IMHO it is still well north of the upper sweet spot for RDNA2 on 7nm DUV enhanced, and while RDNA2 might have improved some on that front, I doubt it is by magnitudes, to the point of linear power-to-frequency scaling. In fact, the suggestion that the frequency drops by 2% with a 10% power decrease somewhat hints at this, as that's a 5:1 ratio. BUT we will need to see how their cooling solution can aid this as well, to be perfectly fair.
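Reading Cerny's "couple percent" as exactly 2%, the implied power-frequency exponent is easy to back out (a sketch, not a claim about the actual silicon):

```python
# If power scales as f**n, then 0.98**n = 0.90 at that operating point.
import math

n = math.log(0.90) / math.log(0.98)
print(f"implied exponent n ~ {n:.1f} (ideal cubic DVFS would give ~3)")
# Steeper than cubic is consistent with running above the efficiency
# sweet spot, where voltage must rise sharply with frequency.
```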

I don't think a 1.21x advantage at best is similar at all to a 2.29x advantage. More will have to be sacrificed if a game fully exploits it.
That said, I don't expect third parties to fully exploit the PS5's SSD, so differences will come down to loading and more or less apparent LOD transitions and pop-in. Is this what you meant by a similar advantage in practice, due to devs targeting the XSX SSD as the base?

It doesn't if you are only focusing on percentages, but that's why I said focusing on percentages alone is effectively meaningless. You also have to consider the context of what the percentages refer to and the weight those contexts carry in the overall scope; in this case, a game console's performance, knowing what game consoles are designed to do.

I don't quite know what you mean by "more has to be sacrificed" to exploit a wider GPU. Will it be as straightforward as exploiting an SSD advantage? No. But these are GPUs; by their nature they are perfectly suited to parallelized workloads, which is at the heart of their design. Developers have become much more used to GPU programming, and advanced APIs help with scaling and organizing workflows for GPUs more than ever, including optimizing the saturation of the GPU for those workloads. Not to mention advances in game design, programming, algorithms, etc., which will invariably ease the demands on developers targeting GPGPU asynchronous compute.

So you're partly right in what you ask towards the end: if the PS5's SSD I/O and memory controller have more hardware benefits at the silicon level, that will allow for some amount of "raw" headroom advantage regardless of how well optimized the XSX's SSD setup and I/O are, since there's physically less hardware to play with. That is analogous to the XSX's advantage when it comes to the GPU, but where you and I seem to differ is on the ease of utilizing that "raw" advantage; I'm in the ballpark of it not being particularly difficult, though perhaps not as easy as utilizing a raw SSD I/O headroom advantage.
 

DForce

NaughtyDog Defense Force
Cerny is not the only engineer in the world. And nothing's being discredited; if we can question claims from Xbox people like Phil Spencer, we should be able to question claims from Mark Cerny. It's not my fault I've done enough of my own research into this stuff to cast a bit of doubt on certain things he claimed in his presentation (which, mind you, was as much for devs as it was a PR piece).

I know what he referenced, and I addressed that. The point is that the use cases for such a task are going to be limited, because a lot of services critical to the OS NEED to be in main RAM, and many games require the presence of those same services and utilities.

You've done enough research to say he is assuming?

Looks like your information is not based on much; rather, you're looking for reasons to doubt him. He is not going to make assumptions about his own work, because it's clear that tons of research and testing have been going on for years, which is why they made 5.5 GB/s a target.

No matter how you slice it, the numbers back up his statement when it comes to streaming.

He speculated, and I pontificated on that speculation. Nothing more. It has nothing to do with "picking a side" when it comes to choosing when to support paper specs or not, because I theorized a lot of things in terms of advantages PS5 can have over XSX shortly after Road to PS5, and quite a lot of those I still stand by.

However, it's not hard to see that a majority of posters in these threads, mostly hardcore Sony fans, are caping very hard for the PS5 and pecking away at the XSX's advantages as much as possible. Sorry if I want to bring some balance back to next-gen discussions; maybe that has been banned and I didn't get the memo. But I'll ignore it regardless, because I am genuinely interested in giving the systems their due where it's merited. And no, that doesn't mean playing along with fake narratives like "brute forcing" vs "optimization and elegance" or "narrow and fast" vs "wide and slow"; when you look at the systems more deeply, they both embody elements of all of these design philosophies at various levels.

And while you probably think otherwise since I'm being critical of some PS5 performance claims (and SSD claims, which would mean I'm being critical of both systems in that department), I actually am a Sony fan as well and want the best out of the PS5. But I don't need to want the worst for the XSX to do so.

Of course he's speculating, but we still don't know how their memory setup will work. Any bit of extra resources is beneficial, and you can't claim it won't result in some real-world performance gain.
 