
PS5 vs Xbox Series X ‘Secret Sauce’ – SSD Speed And Velocity Architecture

Panajev2001a

GAF's Pleasant Genius
One reason off the top of my head: remember Shang Tsung in MKII? How he could just shapeshift into any other character on the fly? It would become a thing again.

This kind of discussion always seems to turn into a "What would an SSD ever do for us in return?" kind of hilarity (paraphrasing the famous skit).

"Apart from <... etc... etc...>, what would fast SSDs ever do for us?"... On and on...
 
Last edited:

oldergamer

Member
One reason off the top of my head: remember Shang Tsung in MKII? How he could just shapeshift into any other character on the fly? It would become a thing again.
This isn't a good example. You mean back when they were using sprites? This would be a lot harder to do convincingly now, but in theory you could already load multiple characters into memory at once, so if the developers wanted to, they could do this on current-gen consoles. Not a good example.

...but please come up with 4 more gameplay revolutions, none of which are just faster start-up load times, and none of which the Xbox Series X couldn't also achieve via its SSD?
 

DForce

NaughtyDog Defense Force
The PS5 SSD clearly has faster data transfer, but what does that mean? Because Cerny himself wasn't able to show it. I asked like a month ago what exactly this bandwidth changes, and I meant concrete examples of what that spammed-to-death "new paradigm" will be, and needless to say, I am yet to receive a single one... So I don't know, maybe you missed that post, so I'm asking you personally, because you seem/act like you have some top secret knowledge that is yet to be revealed - what will be different on my TV screen once I fire up FIFA 2022? Or CoD from 2024? Or the next Forza/GT? Examples, give me at least some theoretical possibilities. Seriously, people who spam all the threads with all the SSD buzzwords but are unable to come up with any example of how the games will be different on the upcoming consoles should just STFU or get banned. Or both. And when pushed into a corner they just go back to that Spider-Man demo which shows nothing but a faster loading time, pathetic.

So I'm asking you for, let's say, just 5 examples of how games' gameplay will change because of the SSD. Give me the "new paradigm", the "game-changer", I want to understand what everyone is so hyped about.

At this point you guys are not willing to listen.

Digital Foundry made videos explaining how SSDs are going to change gaming next generation.

NX Gamer made a video about it and so did RedGamingTech.


Their videos were posted throughout the next-generation analysis thread, but for some odd reason, you guys need someone to explain it to you again.

It seems like you guys would rather accept your tech analysis from Dealer, Colteastwood or some fake developer with an anime avatar instead of Digital Foundry.

It's not a buzzword
It's not secret sauce.

There's no need to explain it again. They explain how and why in their videos.
 

FranXico

Member
This isn't a good example. You mean back when they were using sprites? This would be a lot harder to do convincingly now, but in theory you could already load multiple characters into memory at once, so if the developers wanted to, they could do this on current-gen consoles. Not a good example.

...but please come up with 4 more gameplay revolutions, none of which are just faster start-up load times, and none of which the Xbox Series X couldn't also achieve via its SSD?
That is exactly what a fast SSD would help with. Not just loading multiple characters at once, but being able to stream in the full assets and data of any character in the roster.
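To make that concrete, here is a minimal, hypothetical sketch (the class, file layout and names are invented for illustration, not taken from any real engine) of what "stream the character you're morphing into, at the moment you morph" could look like, assuming the drive can deliver one character's worth of data within a few frames:

```cpp
#include <cstdint>
#include <fstream>
#include <iterator>
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical sketch: stream a character's assets from storage only when a
// morph/selection needs them, instead of preloading the whole roster into RAM.
struct CharacterAssets {
    std::vector<uint8_t> textures;    // compressed texture data
    std::vector<uint8_t> meshes;      // geometry
    std::vector<uint8_t> animations;  // animation clips
};

static std::vector<uint8_t> readBlob(const std::string& path) {
    std::ifstream f(path, std::ios::binary);
    return {std::istreambuf_iterator<char>(f), std::istreambuf_iterator<char>()};
}

class RosterStreamer {
public:
    // Returns the assets for `id`, loading them from storage if they are not
    // already resident. On a multi-GB/s SSD this load could complete within a
    // few frames; on an HDD it could take seconds.
    const CharacterAssets& request(const std::string& id) {
        auto it = resident_.find(id);
        if (it != resident_.end()) return it->second;
        CharacterAssets a;
        a.textures   = readBlob("chars/" + id + "/textures.bin");
        a.meshes     = readBlob("chars/" + id + "/meshes.bin");
        a.animations = readBlob("chars/" + id + "/anims.bin");
        return resident_.emplace(id, std::move(a)).first->second;
    }

    // Free the RAM once the morph ends or the character leaves the match.
    void evict(const std::string& id) { resident_.erase(id); }

private:
    std::unordered_map<std::string, CharacterAssets> resident_;
};
```

The point is only that the roster no longer has to be resident up front; how viable that is depends entirely on how quickly readBlob() can be satisfied, which is where drive speed comes in.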
 

SleepDoctor

Banned
You forgot the latest one:
PS5 GPU is even weaker, because one of the CUs gets the cache disabled for audio use (so, Tempest is just a software feature contrary to what Cerny said in his presentation).
And although audio on XSX has similar features, there it really is done in a stand-alone chip.


You forget that when you join these spin threads you're entering a twilight zone lol, where absolutely nothing makes sense and 9.2 TF > 12.1 TF.

Can't take them seriously. They all become professional engineers via the Cerny presentation 😂.
 

ZywyPL

Banned
At this point you guys are not willing to listen.

Digital Foundry made videos explaining how SSDs are going to change gaming next generation.

NX Gamer made a video about it and so did RedGamingTech.


Their videos were posted throughout the next-generation analysis thread, but for some odd reason, you guys need someone to explain it to you again.

It seems like you guys would rather accept your tech analysis from Dealer, Colteastwood or some fake developer with an anime avatar instead of Digital Foundry.

It's not a buzzword
It's not secret sauce.

There's no need to explain it again. They explain how and why in their videos.

Then list some of those examples.
 

DForce

NaughtyDog Defense Force
This isn't a good example. You mean back when they were using sprites? This would be a lot harder to do convincingly now, but in theory you could already load multiple characters into memory at once, so if the developers wanted to, they could do this on current-gen consoles. Not a good example.

...but please come up with 4 more gameplay revolutions, none of which are just faster start-up load times, and none of which the Xbox Series X couldn't also achieve via its SSD?

It can't.

You're talking about a 30+ character roster that you would need to load into memory for Shang Tsung to have access to. Current-gen consoles won't be able to handle that much data.

These characters can be streamed from the HDD into the PS4/XB1's memory, but why would they do that and break the game? Players would have to wait for the characters to load while playing competitively.
 
This isn't a good example. You mean back when they were using sprites? This would be a lot harder to do convincingly now, but in theory you could already load multiple characters into memory at once, so if the developers wanted to, they could do this on current-gen consoles. Not a good example.

Yes, but you now have to sacrifice precious RAM capacity and lower graphical details and fidelity.

Devs could make an entire game with only 13GB of RAM where they don't have to worry about streaming textures, geometry, animations, assets and data. Imagine FF7 Remake done without the constraints of streaming and with the entire game loaded into 5.5GB of RAM. That hypothetical FF7 Remake wouldn't need narrow pathways and corridors, but it would look more like Sea of Thieves than the CGI-looking game it is now.
 
It can't.

You're talking about a 30+ character roster that you would need to load into memory for Shang Tsung to have access to.

They can. But they'd have to remove so much detail it would probably look like PS3 graphics. And devs know that kind of game won't survive with today's graphical expectations.
 

Shmunter

Member
Then list some of those examples.
Games are yet to come. But if you acknowledge the basic principle of 2x speed = 2x the asset quality, or 2x as many assets in the same amount of time, then I think anyone can grasp how that can translate to a game.

It doesn't mean more polygons or more draw distance, or more pressure on the GPU. It can just mean something as simple as 100 unique trees in a forest of 1,000 vs only 50 unique trees. Or twice the detail in a car's dashboard as the camera zooms in through the car window. Not rocket science.
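As a rough back-of-the-envelope illustration of that "2x in the same amount of time" principle, using the publicly quoted raw sequential figures (roughly 5.5 GB/s and 2.4 GB/s) and ignoring compression, overhead and real-world variability:

```cpp
#include <cstdio>

int main() {
    // Publicly quoted *raw* sequential figures; real sustained throughput
    // (and the effect of compression) will differ.
    const double ps5_raw = 5.5;  // GB/s
    const double xsx_raw = 2.4;  // GB/s
    const double fps = 60.0;

    // Data that can, in principle, be brought in during one 60 fps frame.
    std::printf("PS5: ~%.0f MB per frame\n", ps5_raw * 1000.0 / fps);  // ~92 MB
    std::printf("XSX: ~%.0f MB per frame\n", xsx_raw * 1000.0 / fps);  // ~40 MB

    // Over the ~2 seconds a camera takes to swing around, that is roughly
    // 11 GB vs 4.8 GB of unique asset data that could be cycled through.
    return 0;
}
```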
 

Panajev2001a

GAF's Pleasant Genius
Games are yet to come. But if you acknowledge the basic principle of 2x speed = 2x the asset quality, or 2x as many assets in the same amount of time, then I think anyone can grasp how that can translate to a game.

It doesn't mean more polygons or more draw distance, or more pressure on the GPU. It can just mean something as simple as 100 unique trees in a forest of 1,000 vs only 50 unique trees. Or twice the detail in a car's dashboard as the camera zooms in through the car window. Not rocket science.

It is funny how you must prove, with almost full tech demos, why a very fast SSD with old-style cartridge-like access speed is a big generational change, but what 15% faster FLOPS for the GPU means is not held to a similar burden of proof.

Sure, some people will come back with the resolution-independent, generational, revolutionary, hand-wavy FLOPS argument, and around the same time have to juggle that with XSX games potentially needing to run on a 4 TFLOPS machine (1/3 of the speed) like XSS or a 6 TFLOPS machine (1/2 of the performance) like Xbox One X at the same time, along with the other talking point that this is easy/trivial scaling, while a much smaller difference with the PS5 means an impossible-to-defeat challenge.

Nice mental gymnastics: asking for a high burden of proof from others, and when the same is requested of you, offering laugh reactions to posts ;).
 
Last edited:

You forgot the latest one:
PS5 GPU is even weaker, because one of the CUs gets the cache disabled for audio use (so, Tempest is just a software feature contrary to what Cerny said in his presentation).
And although audio on XSX has similar features, there it really is done in a stand-alone chip.


That's the new thing from the discord? That's pretty pathetic.
 
Last edited:

ZywyPL

Banned
They can. But they'd have to remove so much detail it would probably look like PS3 graphics. And devs know that kind of game won't survive with today's graphical expectations.

Fighting games look like shit anyway; with just two characters on screen and a tiny arena, those games should have the absolute best graphics out there, but that's far from the truth, it's quite the opposite actually... So if the devs could introduce some new gameplay possibilities I'd say go for it, there's nothing to lose.
 

Ascend

Member
You forgot the latest one:
PS5 GPU is even weaker, because one of the CUs gets the cache disabled for audio use (so, Tempest is just a software feature contrary to what Cerny said in his presentation).
And although audio on XSX has similar features, there it really is done in a stand-alone chip.
That's interesting. I still haven't seen any confirmation on whether it is a separate CU from the GPU, or if it's integrated. So many things are vague with this reveal...
 

SleepDoctor

Banned
It is funny how you must prove, with almost full tech demos, why a very fast SSD with old-style cartridge-like access speed is a big generational change, but what 15% faster FLOPS for the GPU means is not held to a similar burden of proof.

Sure, some people will come back with the resolution-independent, generational, revolutionary, hand-wavy FLOPS argument, and around the same time have to juggle that with XSX potentially running on a 4 TFLOPS machine (1/3 of the speed) like XSS or a 6 TFLOPS machine (1/2 of the performance) like Xbox One X at the same time, and yet this being easy to do while a much smaller difference with the PS5 means an impossible-to-defeat challenge.

Nice mental gymnastics: asking for a high burden of proof from others, and when the same is requested of you, offering laugh reactions to posts ;).


9.2 isn't greater than 12, sorry kiddo. And boosted to 10.2 it still ain't. You guys also don't have either dev kit. All you guys are doing is "speculating" and spinning lol. Obviously my post struck a nerve and I wasn't even quoting you.

That's why I usually just read these threads, for the entertainment 😉
 

Panajev2001a

GAF's Pleasant Genius
9.2 isn't greater than 12, sorry kiddo. And boosted to 10.2 it still ain't.
Having fun with straw man arguments nobody is making? I did appreciate the “9.2 sometimes boosted to 10.2” pointed remark, so very subtle ;).

That's why I usually just read these threads, for the entertainment 😉
Oh, that is clear, you are trying to get a rise out of people and troll them instead of shilling for a system. It still does not make your contributions as high and mighty as you may like them to be.
 
Last edited:
9.2 isn't greater than 12, sorry kiddo. And boosted to 10.2 it still ain't. You guys also don't have either dev kit. All you guys are doing is "speculating" and spinning lol. Obviously my post struck a nerve and I wasn't even quoting you.

That's why I usually just read these threads, for the entertainment 😉
You didn't understand the first thing about GPUs haha. The 5700 XT (9.2 TF) has nothing to do with the PS5 as it's RDNA1. And the PS5 isn't boosted in the traditional sense; it is 10.12 to 10.3 TF depending on the power draw of the APU based on the game code. But you do you 😂
 
Last edited:

Shmunter

Member
That's interesting. I still haven't seen any confirmation on whether it is a separate CU from the GPU, or if it's integrated. So many things are vague with this reveal...
I'll confirm it for you right here - it's a separate hardware unit. Straight from Cerny, confirming it to the 14m people that watched the video. I appreciate it may be over some people's heads, but my word is as good as gold because I actually did listen.
 

SleepDoctor

Banned
Having fun with straw man arguments nobody is making? I did appreciate the “9.2 sometimes boosted to 10.2” pointed remark, so very subtle ;).


Oh, that is clear, you are trying to get a rise out of people and troll them instead of shilling for a system. It still does not make your contributions as high and mighty as you may like them to be.


I get a rise out of people pretending to know what they're talking about.

Shilling? Or wait... astroturfing? Nice, but you can see my post history and see I never even ran with the GitHub leaks or anything. But watching you guys move the goalposts after you didn't get your 13.3 TF is quite amusing.


I'm still getting both consoles while you guys do the "shilling" with over two pages of SSD threads 🤷🏻‍♂️
 

Ascend

Member
I'll confirm it for you right here - it's a separate hardware unit. Straight from Cerny, confirming it to the 14m people that watched the video. I appreciate it may be over some people's heads, but my word is as good as gold because I actually did listen.
Fair enough. I did watch it, but I don't remember every single detail. Not everyone has a splendid, flawless memory like you.
It makes sense that it's a separate unit though. Otherwise it would be a pain with yields.
 
https://people.csail.mit.edu/emer/papers/2004.03.prdc.cache_scrub.pdf

The way Cerny was talking about it, the coherency engines seemed to be about smart cache-line invalidation and not error recovery / data integrity, which is what ECC is for on a wide, high-frequency GDDR6 channel like the XSX has. Although yes, in the literature we do have scrubbers as a way to help with error correction.

It's an interesting distinction nonetheless. And I'd like to see how it plays out in practice tbh.
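For what it's worth, here is a purely conceptual toy model of the "invalidate only what changed" idea being described: scrub only the cache lines that overlap the address range that new data was just streamed into, instead of flushing whole GPU caches. This is an invented illustration of the concept, not how the actual silicon works:

```cpp
#include <cstdint>
#include <unordered_set>

// Toy model of range-based cache-line invalidation ("scrubbing").
// Invented purely to illustrate the concept; real GPU cache hardware
// works very differently.
class ToyCache {
public:
    static constexpr uint64_t kLineSize = 64;  // bytes per cache line

    void markCached(uint64_t address) { lines_.insert(address / kLineSize); }

    bool isCached(uint64_t address) const {
        return lines_.count(address / kLineSize) != 0;
    }

    // After new data is streamed into [base, base + size), invalidate only
    // the lines overlapping that range instead of flushing the whole cache.
    void scrubRange(uint64_t base, uint64_t size) {
        if (size == 0) return;
        const uint64_t first = base / kLineSize;
        const uint64_t last  = (base + size - 1) / kLineSize;
        for (uint64_t line = first; line <= last; ++line) lines_.erase(line);
    }

private:
    std::unordered_set<uint64_t> lines_;  // indices of resident cache lines
};
```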

actually it can, it just depends on the workload.

ECC memory tends to be more expensive, and I don't see why it'd be needed with just 16GB of RAM and non-datacenter workloads. It might simply be there since the same hardware is apparently going into servers.

Yes, ECC goes for more, but there's a reason for it. And you're also right that the main reason is because XSX silicon is also going into datacenters/server blades, but it's a side benefit the consumer console will be able to enjoy.

Probably more interesting is that despite being ECC, it's still very fast. Usually ECC memory runs slower than the non-ECC equivalent; just look at ECC vs non-ECC DDR3/DDR4 modules, for example. So essentially it seems like a decision that will benefit the system in both environments.

You know what's funny? You just quoted a post that has to do with Xbox's advantage in GPU performance, but some numbers are not confirmed.

I never said the numbers were all confirmed, and did not speculate on the basis that the numbers were all confirmed. I just touched on how NX Gamer's video mentioned things regarding the role of those parts of the GPU. Nice try, but you failed.

The numbers have not been confirmed, but you overlooked that and pointed out that that's what NX Gamer said in his video. You're judging based off of numbers alone.

...and the context of the numbers. Remember when I mentioned that earlier? Seems not. Again, I never quoted the post as confirmation of unconfirmed numbers, just the fact it brought up aspects of the GPU that are also highly influential in overall performance besides clock speeds. That was the point of me even quoting it.

Yet when Mark Cerny mentions the streaming speed of his SSD with numbers, you become skeptical. You never gave a reason why you doubt him on the numbers he provided when it comes to pure SSD speed.

Lol what? I've already mentioned my reasons MULTIPLE times across other posts in other threads. I shouldn't need to compile them again just to satiate your peace of mind. But if you'd like a basic rundown, fine, here are some of the reasons:

-NAND module bandwidth

-NAND module latency

-NAND module random access on first byte

-NAND module random access on first block

-NAND module general random access

-NAND module page size

-NAND module block size

-SRAM cache size

-long-term NAND module performance (wear-leveling)

-heat dissipation and whether high SSD power draw could factor as an impact into variable power reallocation in the system

-are the given speeds locked (consistent) or peak (goes with the other question above; could power reduction in the system cause a speed drop in the SSD operations and if so, by how much?)

....just for example. And all but the last two questions there ALSO apply to XSX's SSD, but you can keep pretending I'm only being critical of aspects of PS5's SSD if you want.

It's not hard to tell when someone is biased.

You know what, you're right. I do have a bias. It's a bias for evening out the discussion. I have a legitimate interest in both next-gen consoles, a genuine one, but there's a strong contingent of borderline toxic PS5 bias around. What does that mean? Well, it means going beyond mere preference (which is perfectly fine), and pushing into an angle to basically parade for one system by pecking away and downplaying the other. Whether that's blatantly obvious, or through subtlety, or through a given tone in persistent patterns of posting, or a mixture of the three, IMO it taints the well of discussion.

And let us be perfectly clear here; yes there's a small handful of Xbox fans around who have been doing some of the same, but this is the kicker: the number of PS/Sony fans doing this is magnitudes more, because there's this thing called scale-in-numbers, and with ratios remaining equal, since there are many more only-PS/Sony fans around, that invariably gives a larger pool of those who could be labeled toxic (i.e in the vein of doing what I mentioned above). If things were reversed, you'd have the inverse situation, but that isn't the case.

That hurts to say, too, because I was one of the guys being very optimistic/lenient in the Next-Gen speculation thread around the time the Github leak and testing data was coming out. You never saw me jump headfirst on #TeamGithub; I just kept that stuff as a possibility of being onto something. I made this thread shortly after Road to PS5 going over the two systems, and kept things as fair as I could've given when it was made (an updated version may be in order at some point). I've disagreed with people who have been insisting the SSDs are for nothing more than quicker load times, too, so....

...my bias, right? Yeah I know what you were trying to imply, and right now I might be more inclined to offer speculative clarification on XSX rather than PS5 given the state of overall discussion the past couple or so weeks throughout multiple threads, but that all ties back to evening out the discussion. If there was a massive presence of PS5 misinformation and few people actually trying to parse out the truth from the BS and cut down on FUD, I'd be doing this with PS5 instead. But they have Jason Schreier, Sony's own 1st-party devs, multiple YT tech and gaming channels, and many people on forums such as yourself, doing a fine enough job of that already.

You can try saying there's an irony to my method since such a thing could be conflated as "sticking up for the little guy", and MS is neither a person nor a "little guy" in the grand scheme of things at all, but it's not a stretch to say Xbox is the underdog in this upcoming generation. It's the one that has more to prove to win back votes of confidence, so on some level the skepticism towards what they've shown and announced is perhaps warranted. But that indirectly creates the type of scenario where I just feel a desire to try bringing back some nuance and balance to the discussion, because we've got too many other examples in other aspects of entertainment where that has fallen by the wayside and led to complete trainwrecks in discussions there because everything becomes very binary (as in exclusively-"us" or exclusively-"them") and partisan, and the fun drains away with it.

I don't want that happening with gaming any more than it already has, and certainly not when it comes to next-gen console speculation/discussion.
 
I actually agree with this and expect PS5 to reach higher utilization early on and by mid gen devs start reaching higher utilization with the XSX GPU
But also important to point out the 21% resolution delta already accounts for this

Fair enough. I don't personally think it'll take until mid-gen for devs to do so with XSX, assuming the gen will be about the same length as current-gen, given increases in architecture efficiency, dev workflow, API tool stacks etc.

But there could be an early period with PS5 being use-saturated more easily since you don't actually need to "do" anything to take advantage of faster clocks; the hardware takes care of all of that on its own.

Based on information available and common sense I expect both GPUs to be at near feature parity with both having specific customizations to make the most out of their specific APU setup.
People forget MS/Sony are working with AMD they have access to the same intellectual resources there's no magic secret optimizations that only one party has access to. They just had different priorities: Sony was content with a nextgen capable powerful but small GPU and focused on going above and beyond with I/O. MS was content with a next gen capable fast I/O and focused on getting the performance crown title (which they have) with a big and powerful GPU

I hear what you're saying, but we also need to remember that just because one can do something doesn't mean they will. In the Next-Gen speculation thread a lot of people assumed Sony must be going with a GPU of the same size as MS's, if not bigger, because MS was doing it. Sure, Sony could have done that, but they decided not to.

So honestly, outside of some baseline features that would require more effort to remove than leave in, it's a somewhat open question as to what features the two companies have decided to remove or add in. This also kind of gets back to what we discussed before regarding nomenclature standardization, which unfortunately probably won't happen because, as also discussed, it's beneficial for PR reasons to leave things as-is.

This is where you lost me again... Asynchronous compute will help XSX realize its compute advantage not surpass it
Keep in mind PS5 is RDNA2 too, all devs have to do to free enough resources to match the XSX output (including asynchronous compute) is drop resolution by 21%

Lol, we are kind of saying the same thing here. I'm not trying to say that XSX will perform beyond a 17%-21% advantage in pure numerical terms. I have just been mentioning how advances in asynchronous compute understanding, algorithms, coding and familiarity, plus architectural advances, will help squeeze a lot more out per FLOP this time around compared to last gen when it comes to the task. Again, the orange (the "delta") is smaller, but there will be a lot more juice squeezed out of it than from the bigger orange of last gen.

The practical/in-game difference will always be a resolution difference because no matter how hard the xsx is pushed you can rest assured the ps5 will be pushed just as hard if not more

This comes down to developer priorities and what the game actually demands, though. Also there will be differences in how the APIs handle some things between the systems; looking at the way that interconnects together along the pipeline when the gen goes on will be very interesting because that could help both systems get some slight improvements in certain areas, and might have some (very minuscule) impacts in others.
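For reference, the arithmetic behind that "drop resolution by roughly 20%" point, using the commonly cited peak figures; this is illustrative only and assumes cost scales linearly with pixel count, which real games only approximate:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // Commonly cited peak compute figures (TFLOPS); illustrative only.
    const double xsx_tf = 12.15;
    const double ps5_tf = 10.28;

    const double ratio = xsx_tf / ps5_tf;  // ~1.18x more peak compute
    std::printf("Compute ratio: %.2fx\n", ratio);

    // If per-pixel cost were equal, matching frame times would mean the PS5
    // renders roughly 1/ratio of the pixels.
    const double pixel_fraction = 1.0 / ratio;  // ~0.85
    std::printf("Pixel fraction: %.0f%%\n", pixel_fraction * 100.0);

    // Applied to 3840x2160, that is roughly an 8% drop on each axis.
    const double axis_scale = std::sqrt(pixel_fraction);
    std::printf("Approx. resolution: %.0f x %.0f\n",
                3840.0 * axis_scale, 2160.0 * axis_scale);
    return 0;
}
```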
 

oldergamer

Member
That is exactly what a fast SSD would help with. Not just loading multiple characters at once, but being able to stream in the full assets and data of any character in the roster.
Again, there is nothing in what you said that couldn't be achieved by the Xbox's NVMe drive. If the SSD in the PS5 is really going to change things as dramatically as you and others here are championing, then tell us something the PS5 could do in a game that the new Xbox could not. There is literally nothing.

Not only this, the CPU & audio don't get much benefit from this extra bandwidth. I have a feeling that in the real world, outside of perceivable load times, you aren't going to notice the difference in speed. Is anyone really going to notice a 3-second load vs a 6-second load? Not likely.
 

FranXico

Member
Again, there is nothing in what you said that couldn't be achieved by the Xbox's NVMe drive. If the SSD in the PS5 is really going to change things as dramatically as you and others here are championing, then tell us something the PS5 could do in a game that the new Xbox could not. There is literally nothing.
I never claimed that the XSX will not be able to achieve that. What I do claim is that current-gen consoles cannot do it (which is what I thought ZywyPL was asking for).

Not only this, the CPU & audio don't get much benefit from this extra bandwidth. I have a feeling that in the real world, outside of perceivable load times, you aren't going to notice the difference in speed. Is anyone really going to notice a 3-second load vs a 6-second load? Not likely.
I just gave you one concrete example of a noticeable benefit to gameplay which is not just a reduced load time.
Honestly, anything related to streaming data will become extremely streamlined, it should really be obvious.
 
Last edited:

Ascend

Member
Again, there is nothing in what you said that couldn't be achieved by the Xbox's NVMe drive. If the SSD in the PS5 is really going to change things as dramatically as you and others here are championing, then tell us something the PS5 could do in a game that the new Xbox could not. There is literally nothing.

Not only this, the CPU & audio don't get much benefit from this extra bandwidth. I have a feeling that in the real world, outside of perceivable load times, you aren't going to notice the difference in speed. Is anyone really going to notice a 3-second load vs a 6-second load? Not likely.
There are many people here downplaying the XSX and championing the PS5 SSD... FranXico is not one of them. I agree with your assessment, btw. If anything, having a bunch of excess speed is practically a free pass for developers to do things inefficiently. At least in the beginning.
 

SonGoku

Member
I don't personally think it'll take until mid-gen
Yeah it was a rough estimate, don't really know when
So honestly, outside of some baseline features that would require more effort to remove than leave in, it's a somewhat open question as to what features the two companies have decided to remove or add in. This also kind of gets back to what we discussed before regarding nomenclature standardization, which unfortunately probably won't happen because, as also discussed, it's beneficial for PR reasons to leave things as-is.
I don't expect core features will be removed; they are part of the architecture, and why would this be a thing now when it wasn't in the past? The only thing that could save them some die space is removing RT hardware, and even the impact of that is minuscule. I expect both consoles to be at feature parity with RDNA2 cards at launch.

The differences will come from specific customizations to make the most out of their particular setups, not silver-bullet-type features.
Again, the orange (the "delta") is smaller, but there will be a lot more juice squeezed out of it than from the bigger orange of last gen.
This is true of both consoles in general; that's why I don't think this distinction is important to make. You can get more out of each TF, but both are in the same situation: the bigger the performance, the smaller the TF difference becomes.

In practice a 21% difference will translate to a 21% resolution difference, same as current gen.
This comes down to developer priorities and what the game actually demands, though. Also there will be differences in how the APIs handle some things between the systems; looking at the way that interconnects together along the pipeline when the gen goes on will be very interesting because that could help both systems get some slight improvements in certain areas, and might have some (very minuscule) impacts in others.
Edge cases aside, it'll come down to 3 situations I think:
  1. Graphics and resolution parity with slightly better fps on the more powerful console (Destiny 1)
  2. Resolution Parity with some effects missing/toned down (think of high vs ultra type differences that you can't really notice even side by side unless highlighted)
  3. Graphics parity with lower resolution on the weaker console
1 & 2 might be present early on, but by mid-gen I expect 3 to be more established.
 
Last edited:
Games are yet to come. But if you acknowledge the basic principle of 2x speed = 2x the asset quality, or 2x as many assets in the same amount of time, then I think anyone can grasp how that can translate to a game.

It doesn't mean more polygons or more draw distance, or more pressure on the GPU. It can just mean something as simple as 100 unique trees in a forest of 1,000 vs only 50 unique trees. Or twice the detail in a car's dashboard as the camera zooms in through the car window. Not rocket science.

That has nothing to do with GPU?

Also, I just wholeheartedly disagree with your entire premise. If the SSD alone can account for 100 unique trees on PS5, it will perform similarly on XSX. Both of these SSDs can fill their entire RAM pool insanely quickly.

The horseshit is really starting to stink in here
 
Last edited:

Thirty7ven

Banned
Of course the PS5 won't allow a game to be made on it that wouldn't be possible on the XSX. Both of these systems will be able to do similar things; the only difference is that the PS5 will by nature be able to provide slightly better results.

Somehow some people have a hard time accepting that, but have no problem accepting the ridiculous notion that a 20% gap in compute will somehow allow for higher res, higher framerate and higher ray tracing at the same time.
 
This is true of both consoles in general; that's why I don't think this distinction is important to make. You can get more out of each TF, but both are in the same situation: the bigger the performance, the smaller the TF difference becomes.

That's just it tho; I've been emphasizing this as something generally beneficial to both systems the whole time! Maybe I could've done a better job of stating those as general GPGPU asynchronous compute gains that will be there for both systems, and then a more specific detail on how XSX benefits on top of that with the additional GPU headroom once graphical parity is met between both systems.

At the very least I hope we can agree that asynchronous compute advancements and improvements from the platform holders (APIs, utilities and services), devs (experience, familiarity), architecture (better functioning for such tasks), algorithms (accomplishing more in fewer cycles, lower resource overhead), etc. will be a major factor in next-gen and benefit both PS5 and XSX, even if the systems actually have good CPUs this time around.

There are many people here downplaying the XSX and championing the PS5 SSD... FranXico is not one of them. I agree with your assessment, btw. If anything, having a bunch of excess speed is practically a free pass for developers to do things inefficiently. At least in the beginning.

This is also kind of something SonGoku is suggesting when considering, say, the faster clock on PS5's GPU, which will benefit fillrates. It's essentially extra performance sitting there that kicks in automatically without a lot of developer intervention, so I can't imagine it not going to good use from the start.

One thing I will say is XSX seems like it has a bit more leeway for devs in how to use and maximize certain aspects of its setup, such as the memory configuration. So it might have a bit more of a learning curve compared to PS5 and require a bit more effort (relatively speaking) to take fuller advantage of its full potential. That doesn't mean XSX is a difficult machine to work with whatsoever, just that it seems to have a bit less automation in parts of its setup compared to PS5, from what we've seen so far.

But that also usually has the benefit of lending a system leeway to use certain aspects in ways not originally intended, and to do some great stuff with them. PS5 has leeway for that as well, but imo I think PS5's "ceiling" will probably be tapped before XSX's is. I don't see saturation of the CUs for programming tasks being that much of an issue tbh, but regardless of how "easy" it is, it still requires more effort than simply getting "free" boosts from faster GPU clocks as on PS5. That's the kind of stuff I'm trying to allude to.

This is all why these systems are so interesting, because if I wanted to compare them to older consoles, truth is you probably couldn't do any clean or simple comparisons. Like, from a hardware perspective to the 16-bit systems, the easiest comparison would be PS5 is to MegaDrive what XSX is to Super Nintendo. Or to the 5th-gen systems, it could go: PS5 is to PS1 what XSX is to SEGA Saturn. But these are obviously really simple comparisons that don't fully illustrate how they are similar or different without diving in, because in other aspects of the hardware design you can easily say PS5 is to N64 what XSX is to PS1, or PS5 is to SNES what XSX is to MegaDrive, etc.

I was thinking about trying to do this but, shit, it'd be hard. Plus it's probably better to wait until more info on the systems comes out anyway.
 
Last edited:
That's just it tho; I've been emphasizing this as something generally beneficial to both systems the whole time! Maybe I could've done a better job of stating those as general GPGPU asynchronous compute gains that will be there for both systems, and then a more specific detail on how XSX benefits on top of that with the additional GPU headroom once graphical parity is met between both systems.

At the very least I hope we can agree that asynchronous compute advancements and improvements from the platform holders (APIs, utilities and services), devs (experience, familiarity), architecture (better functioning for such tasks), algorithms (accomplishing more in fewer cycles, lower resource overhead), etc. will be a major factor in next-gen and benefit both PS5 and XSX, even if the systems actually have good CPUs this time around.



This is also kind of something SonGoku is suggesting when considering, say, the faster clock on PS5's GPU, which will benefit fillrates. It's essentially extra performance sitting there that kicks in automatically without a lot of developer intervention, so I can't imagine it not going to good use from the start.

One thing I will say is XSX seems like it has a bit more leeway for devs in how to use and maximize certain aspects of its setup, such as the memory configuration. So it might have a bit more of a learning curve compared to PS5 and require a bit more effort (relatively speaking) to take fuller advantage of its full potential. That doesn't mean XSX is a difficult machine to work with whatsoever, just that it seems to have a bit less automation in parts of its setup compared to PS5, from what we've seen so far.

But that also usually has the benefit of lending a system leeway to use certain aspects in ways not originally intended, and to do some great stuff with them. PS5 has leeway for that as well, but imo I think PS5's "ceiling" will probably be tapped before XSX's is. I don't see saturation of the CUs for programming tasks being that much of an issue tbh, but regardless of how "easy" it is, it still requires more effort than simply getting "free" boosts from faster GPU clocks as on PS5. That's the kind of stuff I'm trying to allude to.

This is all why these systems are so interesting, because if I wanted to compare them to older consoles, truth is you probably couldn't do any clean or simple comparisons. Like, from a hardware perspective to the 16-bit systems, the easiest comparison would be PS5 is to MegaDrive what XSX is to Super Nintendo. Or to the 5th-gen systems, it could go: PS5 is to PS1 what XSX is to SEGA Saturn. But these are obviously really simple comparisons that don't fully illustrate how they are similar or different without diving in, because in other aspects of the hardware design you can easily say PS5 is to N64 what XSX is to PS1, or PS5 is to SNES what XSX is to MegaDrive, etc.

I was thinking about trying to do this but, shit, it'd be hard. Plus it's probably better to wait until more info on the systems comes out anyway.

Except that once devs hit that magic 10 gigs on the XSX, they'd hit a ceiling; then there are going to be sacrifices to make, otherwise they'd have to drop to the lower bus speed, which would be slower than Sony's. There's no way it would have 13 gigs for games unless you can magically speed up the slower bus. Other than that it's difficult to say. I'd wait until the games come out and wait further down the line.
 

BadBurger

Is 'That Pure Potato'
Mark Cerny reminds me of doctor Tannis from Borderlands.

As to magic sauce or whatever, we're so far off from knowing exactly how these consoles are going to perform it almost feels pointless to discuss it now. They're both promising the moon right now without showing us, literally, shit.
 

psorcerer

Banned
Have you? Is this from the devkit? You seem to know more than us, spill the beans.

/s

This was made to sell the idea of a needed Secret Sauce Device

Obviously the internal presentation was done on a devkit.
Maybe even pre-recorded!
No need to sell the idea; judging from the amount of tech that went into the drive, they were sold on it at least 4 years ago.
 

DForce

NaughtyDog Defense Force
I never said the numbers were all confirmed, and did not speculate on the basis that the numbers were all confirmed. I just touched on how NX Gamer's video mentioned things regarding the role of those parts of the GPU. Nice try, but you failed.
You failed to read. I said you OVERLOOKED it and continued to quote his post.

If you're going to say I failed, at least put my words into context.

Lol what? I've already mentioned my reasons MULTIPLE times across other posts in other threads. I shouldn't need to compile them again just to satiate your peace of mind. But if you'd like a basic rundown, fine, here are some of the reasons:

-long-term NAND module performance (wear-leveling)

-heat dissipation and whether high SSD power draw could factor as an impact into variable power reallocation in the system

-are the given speeds locked (consistent) or peak (goes with the other question above; could power reduction in the system cause a speed drop in the SSD operations and if so, by how much?)

....just for example. And all but the last two questions there ALSO apply to XSX's SSD, but you can keep pretending I'm only being critical of aspects of PS5's SSD if you want.

There's no pretending, it's not hard to see whenever someone reads your posts.

You concluded that Mark Cerny was assuming 4GB/s figures, which means you believe he did not perform the necessary research.


-long-term NAND module performance (wear-leveling)

-heat dissipation and whether high SSD power draw could factor as an impact into variable power reallocation in the system

-are the given speeds locked (consistent) or peak (goes with the other question above; could power reduction in the system cause a speed drop in the SSD operations and if so, by how much?)

And which Mark Cerny explained in his presentation.

"We don't use the actual temperature of the die, as that would cause two types of variance between PS5s," explains Mark Cerny. "One is variance caused by differences in ambient temperature; the console could be in a hotter or cooler location in the room. The other is variance caused by the individual custom chip in the console, some chips run hotter and some chips run cooler. So instead of using the temperature of the die, we use an algorithm in which the frequency depends on CPU and GPU activity information. That keeps behaviour between PS5s consistent."

This means the temperature would not be factored into the drop in variable power. Totally different from what you're alluding to. Many things have been made clear, but your conclusion is that he's only assuming these things. A good amount of testing goes into this before releasing a consumer product.


-NAND module bandwidth

-NAND module latency

-NAND module random access on first byte

-NAND module random access on first block

-NAND module general random access

-NAND module page size

-NAND module block size

-SRAM cache size


Let's look at your post from yesterday, okay?

These are mere assumptions. We don't have an established, real-world proof of what speeds are required to stream in data to the player as they are turning, without resulting in texture pop-in or immersion-breaking. Many are assuming 5.5 GB/s or more when that may or may not be the case.

Wow, so they created an I/O system to get the fastest throughput possible, but based on your post, it was merely based on assumptions and guesses. You're going to tell everyone that Mark Cerny was guessing and did not factor in any of this while going for that 5.5GB/s target? You're reaching for the stars here.


You know what, you're right. I do have a bias. It's a bias for evening out the discussion. I have a legitimate interest in both next-gen consoles, a genuine one, but there's a strong contingent of borderline toxic PS5 bias around. What does that mean? Well, it means going beyond mere preference (which is perfectly fine), and pushing into an angle to basically parade for one system by pecking away and downplaying the other. Whether that's blatantly obvious, or through subtlety, or through a given tone in persistent patterns of posting, or a mixture of the three, IMO it taints the well of discussion.

Even though I just quoted you right above, let's not forget that you made the claim that Mark Cerny was assuming the figures on his SSD numbers.


And let us be perfectly clear here; yes there's a small handful of Xbox fans around who have been doing some of the same, but this is the kicker: the number of PS/Sony fans doing this is magnitudes more, because there's this thing called scale-in-numbers, and with ratios remaining equal, since there are many more only-PS/Sony fans around, that invariably gives a larger pool of those who could be labeled toxic (i.e in the vein of doing what I mentioned above). If things were reversed, you'd have the inverse situation, but that isn't the case.

That hurts to say, too, because I was one of the guys being very optimistic/lenient in the Next-Gen speculation thread around the time the Github leak and testing data was coming out. You never saw me jump headfirst on #TeamGithub; I just kept that stuff as a possibility of being onto something. I made this thread shortly after Road to PS5 going over the two systems, and kept things as fair as I could've given when it was made (an updated version may be in order at some point). I've disagreed with people who have been insisting the SSDs are for nothing more than quicker load times, too, so....

...my bias, right? Yeah I know what you were trying to imply, and right now I might be more inclined to offer speculative clarification on XSX rather than PS5 given the state of overall discussion the past couple or so weeks throughout multiple threads, but that all ties back to evening out the discussion. If there was a massive presence of PS5 misinformation and few people actually trying to parse out the truth from the BS and cut down on FUD, I'd be doing this with PS5 instead. But they have Jason Schreier, Sony's own 1st-party devs, multiple YT tech and gaming channels, and many people on forums such as yourself, doing a fine enough job of that already.

You can try saying there's an irony to my method since such a thing could be conflated as "sticking up for the little guy", and MS is neither a person nor a "little guy" in the grand scheme of things at all, but it's not a stretch to say Xbox is the underdog in this upcoming generation. It's the one that has more to prove to win back votes of confidence, so on some level the skepticism towards what they've shown and announced is perhaps warranted. But that indirectly creates the type of scenario where I just feel a desire to try bringing back some nuance and balance to the discussion, because we've got too many other examples in other aspects of entertainment where that has fallen by the wayside and led to complete trainwrecks in discussions there because everything becomes very binary (as in exclusively-"us" or exclusively-"them") and partisan, and the fun drains away with it.

I don't want that happening with gaming any more than it already has, and certainly not when it comes to next-gen console speculation/discussion.

Yes, bias.

But before I continue, want to see something funny?

Mar 19, 2020
The point was more to just illustrate how some people supposing the SSD will make up for differences in areas like GPU compute and volatile memory bandwidth & speed are not understanding how SSDs, particularly the NAND memory they are constructed from, actually operate.

But yesterday you accused me of lying.

DForce said:
When people tried to explain how SSDs will work, people just started saying, "It's not going to close the power gap in consoles" when that wasn't the point people on here were trying to make.

This is a lie. That is the point a good portion of posters have been trying to argue, generally by misinterpreting how SSDs and NAND actually work. It hasn't been so much some people downplaying SSDs as that anyone attempting to be a realist with regard to the SSDs is automatically viewed by some others as downplaying the SSDs.

You made this claim before, but you accused me of lying. It's not shocking that you were the one saying this.

Were you quick to say first party devs were going to speak well of the SSD after the reveal, even though Jason said third-party devs have been saying the same exact thing?


You can tell me differently, but what you have been posting tells me a different story. You may not be like some other posters on here, but my point still stands.
 
What will be the actual real-world performance advantage that the PS5 SSD will offer compared to the Series X? Is it just loading times that are 1 second faster? NVMe drives are 7 times faster than regular SSDs, but loading times are just one second faster.

With the Series X it's very easy to explain: the graphics fidelity will be the best, with the Series X having the best resolution, graphical effects and ray tracing. Can someone explain, in real-world performance terms, what the PS5 SSD will offer?

Data manipulation on a bigger scale.
 

sendit

Member
The PS5 SSD clearly has faster data transfer, but what does that mean? Because Cerny himself wasn't able to show it. I asked like a month ago what exactly this bandwidth changes, and I meant concrete examples of what that spammed-to-death "new paradigm" will be, and needless to say, I am yet to receive a single one... So I don't know, maybe you missed that post, so I'm asking you personally, because you seem/act like you have some top secret knowledge that is yet to be revealed - what will be different on my TV screen once I fire up FIFA 2022? Or CoD from 2024? Or the next Forza/GT? Examples, give me at least some theoretical possibilities. Seriously, people who spam all the threads with all the SSD buzzwords but are unable to come up with any example of how the games will be different on the upcoming consoles should just STFU or get banned. Or both. And when pushed into a corner they just go back to that Spider-Man demo which shows nothing but a faster loading time, pathetic.

So I'm asking you for, let's say, just 5 examples of how games' gameplay will change because of the SSD. Give me the "new paradigm", the "game-changer", I want to understand what everyone is so hyped about.

Agreed. Sony needs to show us what a 129% difference in SSD speed translates to in terms of gaming. Here is a real-world example of what 12 teraflops sustained means for gaming:
 
Last edited:
This isn't a good example. You mean back when they were using sprites? This would be a lot harder to do convincingly now, but in theory you could already load multiple characters into memory at once, so if the developers wanted to, they could do this on current-gen consoles. Not a good example.

...but please come up with 4 more gameplay revolutions, none of which are just faster start-up load times, and none of which the Xbox Series X couldn't also achieve via its SSD?

Actually it is a good example, you just don't understand it. When Shang Tsung changes, he becomes another character, and for that the entire sprite sheet of that character has to be present, which is a big deal memory-wise, especially for Mortal Kombat and especially on CD-based systems (such as the PlayStation), unless the resolution is reduced or a compression algorithm is used. Even if it's a 2D game, that doesn't mean it's easier than 3D. On arcade or EPROM-based systems (as long as compression doesn't cause problems) you can simply switch to the memory address: ROMs are mapped into memory, so you can access them just as you do RAM. But you cannot do that from a CD without a transfer into a memory (RAM/VRAM) cache.

Games are currently designed to cache data according to the situation. You can stream relatively fast from an HDD, but it's not that fast, and depending on the game there are gameplay and object restrictions, as well as world-design restrictions and things like data redundancy, where you need multiple copies of data so that the CD/DVD or HDD doesn't have to seek too much to get it. You basically trade speed for memory size. When you use an SSD you improve this a lot, but the game was not designed with it in mind; it was designed for other mediums, so a faster medium improves things, but the game won't use it optimally.

With an SSD in mind, and especially a very fast one with essentially zero seek time, you can take a different approach to problems. You can do things like shrink the in-RAM cache, which saves memory; a smaller cache means fewer reads and less seek time, which saves bandwidth. 3D worlds, especially open worlds, stream data in a way that changes not only what cache you require but also how you design the world. In the past this was most visible in games using megatextures, like Baldur's Gate: Dark Alliance and other games on the Snowblind engine, which stored the world on the DVD in a layout that favoured the drive's speed and seek time; if you glitched your character's speed you could see the empty blocks before they were loaded. If you use emulators such as PCSX2 you will probably remember that these games had (or have) stalls as you move: the game code tried to work as intended, and your PC and emulator couldn't provide data fast enough for what the game expected, despite your HDD being way faster than the PS2's DVD drive. A solution might be to brute-force it and store the entire ISO in RAM, but that is a lot of memory to run a game from a console that used 32 MB of RAM, and all that trouble is because of the way the game was made.

A modern game probably won't be designed with a fast SSD in mind because it also wants to target machines with standard HDDs, but on consoles you can develop for the specifics of the system. Maybe devs won't improve things too much at first, but in their next games they may try different things and improve as the generation advances. Who knows what devs will come up with, but the SSD is a big change.
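As a loose sketch of the position-driven chunk streaming being described (all names and numbers invented for illustration; real engines use far more sophisticated prioritisation, asynchronous I/O and caching), the key knob is the streaming radius, which a faster, lower-seek-time medium lets you shrink:

```cpp
#include <cmath>
#include <cstdint>
#include <cstdlib>
#include <iterator>
#include <unordered_map>
#include <vector>

// Invented illustration of position-driven world streaming: keep only the
// chunks near the player resident, and let storage speed dictate how small
// the streaming radius (and therefore the RAM cache) can be.
struct ChunkData { std::vector<unsigned char> bytes; };

class WorldStreamer {
public:
    explicit WorldStreamer(int radiusInChunks) : radius_(radiusInChunks) {}

    void update(float playerX, float playerZ) {
        const int cx = static_cast<int>(std::floor(playerX / kChunkSize));
        const int cz = static_cast<int>(std::floor(playerZ / kChunkSize));

        // Load everything inside the radius that isn't resident yet.
        for (int x = cx - radius_; x <= cx + radius_; ++x)
            for (int z = cz - radius_; z <= cz + radius_; ++z)
                if (!resident_.count(key(x, z)))
                    resident_[key(x, z)] = loadChunk(x, z);

        // Evict chunks that have fallen outside the radius to free RAM.
        for (auto it = resident_.begin(); it != resident_.end();) {
            const int x = static_cast<int32_t>(it->first >> 32);
            const int z = static_cast<int32_t>(it->first & 0xffffffffu);
            const bool outside = std::abs(x - cx) > radius_ ||
                                 std::abs(z - cz) > radius_;
            it = outside ? resident_.erase(it) : std::next(it);
        }
    }

private:
    static constexpr float kChunkSize = 64.0f;  // metres per chunk (made up)

    static uint64_t key(int x, int z) {
        return (static_cast<uint64_t>(static_cast<uint32_t>(x)) << 32) |
               static_cast<uint32_t>(z);
    }

    ChunkData loadChunk(int, int) {
        // Placeholder for a read from storage. A slow, high-seek medium forces
        // a large radius (big RAM cache, duplicated data on disc); a fast,
        // near-zero-seek SSD allows a much smaller one.
        return ChunkData{};
    }

    int radius_;
    std::unordered_map<uint64_t, ChunkData> resident_;
};
```

The logic is the same whether the backing store is an HDD or an SSD; what changes is how small the radius can safely be, which is exactly the memory-vs-speed trade described above.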
 
Agreed. Sony needs to show us what a 129% difference in SSD speed translates to in terms of gaming. Here is a real-world example of what 12 teraflops sustained means for gaming:



In your example, the frames prior to the shell opening were easier to calculate (they require fewer FLOPs). The frames after the pearl is activated as a light source also have to mix the light information into the affected pixels, which involves more calculations than before. Also notice that the framerate is stable, with no sign of an uncapped framerate at any point. That means no frame takes more or less time than another, despite the fact that some frames require fewer calculations than others; so the system is idle for part of the frametime when it's not needed, waiting for the next frame to start before it works again. That is time in which no floating-point operations are being done, and FLOPS are calculations per second.

Basically, you are not using 12 teraflops to do that scene; you are using a 12-teraflop-capable machine to run a scene of unknown complexity that varies its complexity frame by frame, with the intention of producing frames that remain on screen for the same amount of time regardless of whether they resolve before their frame budget.
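Put as back-of-the-envelope numbers (illustrative figures only): at a capped 30 fps each frame has a ~33.3 ms budget, and if a given frame's work only needs, say, 20 ms of GPU time, the GPU idles for the rest of that frame, so the throughput actually used sits well below the peak figure:

```cpp
#include <cstdio>

int main() {
    // Illustrative numbers only: a 12 TFLOPS-peak GPU running at a 30 fps cap.
    const double peak_tflops  = 12.0;
    const double frame_budget = 1.0 / 30.0;  // ~33.3 ms per frame
    const double busy_time    = 0.020;       // suppose this frame needs 20 ms

    const double utilization = busy_time / frame_budget;  // ~60%
    const double used_tflops = peak_tflops * utilization;  // ~7.2

    std::printf("GPU busy for %.0f%% of the frame\n", utilization * 100.0);
    std::printf("Effective throughput this frame: ~%.1f TFLOPS\n", used_tflops);
    // The frame still displays for the full 33.3 ms, so a capped, stable
    // framerate tells you little about how much of the peak is being used.
    return 0;
}
```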
 
Last edited:
In your example, the frames prior to the shell opening were easier to calculate (they require fewer FLOPs). The frames after the pearl is activated as a light source also have to mix the light information into the affected pixels, which involves more calculations than before. Also notice that the framerate is stable, with no sign of an uncapped framerate at any point. That means no frame takes more or less time than another, despite the fact that some frames require fewer calculations than others; so the system is idle for part of the frametime when it's not needed, waiting for the next frame to start before it works again. That is time in which no floating-point operations are being done, and FLOPS are calculations per second.

Basically, you are not using 12 teraflops to do that scene; you are using a 12-teraflop-capable machine to run a scene of unknown complexity that varies its complexity frame by frame, with the intention of producing frames that remain on screen for the same amount of time regardless of whether they resolve before their frame budget.

LOL, can you believe that I thought the compression artifacts were ray tracing :messenger_downcast_sweat:
 
Last edited:
Obviously the internal presentation was done on a devkit.
Maybe even pre-recorded!
No need to sell the idea; judging from the amount of tech that went into the drive, they were sold on it at least 4 years ago.
You do realize most console showcases are done from a PC? From trailers to demos, etc. That could have easily been done from a standard HDD: load all of the building and car assets from the HDD into RAM, and display the demo. There were only a handful of different cars and buildings. Not much data to deal with.

Could be the same thing from a devkit or a console with more RAM. The demo was not anything spectacular, or different from what we have seen several years ago. But for some reason, some people think this is something new. Maybe on console, but even then, with 5400 RPM drives, you can't expect too much.
 

semicool

Banned
First party. Rarely.
And what's the point in showing hardware tech without the hardware?
You're routinely accusing people at Sony of lying in their internal demos.
That's not how it works. This isn't a startup pitching to investors (and even there it happens pretty rarely; reputation damage is hard to repair).
How do you know a salesman is lying?

Obviously the whole thing is a sales pitch, PR. Don't expect those to be real world numbers or statements...overselling, both sides are doing it.

Would you call that "lying" or spin or overselling, marketing or is it all the same? Presenting in the best possible light? Ie spin. Not completely forthright or partially inaccurate? Etc...

The games are going to show real-world metrics; we'll see how close to the best-case, theoretical numbers they are then. Then we can revisit these posts?
 
Last edited:
You do realize most console showcases are done from a PC? From trailers to demos, etc. That could have easily been done from a standard HDD: load all of the building and car assets from the HDD into RAM, and display the demo. There were only a handful of different cars and buildings. Not much data to deal with.

Could be the same thing from a devkit or a console with more RAM. The demo was not anything spectacular, or different from what we have seen several years ago. But for some reason, some people think this is something new. Maybe on console, but even then, with 5400 RPM drives, you can't expect too much.

showcase =/= tech demo

A tech demo can be centered around a particular tech aspect of a game; some can even be artificially handicapped for test purposes (like the DX12 draw-call tech demos).

There are artificial tests that can be done in different environments, yes, but as long as the parameters match, the result is the same; that is how mathematics and physics work. If you have information that indicates the Spider-Man test was not run on a PS5 devkit, or was made in an environment that doesn't match the game and console specifications, feel free to present evidence. If not, you are speculating based on nothing and harming your own credibility.

Spider-Man uses more than buildings and cars, and even then there is variety between the cars and buildings, so you are not just cloning something you already have in memory. There is a lot of data transfer for the many props in the city and the NPCs; Spider-Man is a highly populated open-world game. The game takes a certain time to load world chunks on PS4 and much less time on PS5.

“Spidey stands in a small plaza. Cerny presses a button on the controller, initiating a fast-travel interstitial screen. When Spidey reappears in a totally different spot in Manhattan, 15 seconds have elapsed. Then Cerny does the same thing on a next-gen devkit connected to a different TV. (The devkit, an early ‘low-speed’ version, is concealed in a big silver tower, with no visible componentry.) What took 15 seconds now takes less than one: 0.8 seconds, to be exact.”

 
showcase =/= tech demo

A tech demo can be centered around a particular tech aspect of a game; some can even be artificially handicapped for test purposes (like the DX12 draw-call tech demos).

There are artificial tests that can be done in different environments, yes, but as long as the parameters match, the result is the same; that is how mathematics and physics work. If you have information that indicates the Spider-Man test was not run on a PS5 devkit, or was made in an environment that doesn't match the game and console specifications, feel free to present evidence. If not, you are speculating based on nothing and harming your own credibility.

Spider-Man uses more than buildings and cars, and even then there is variety between the cars and buildings, so you are not just cloning something you already have in memory. There is a lot of data transfer for the many props in the city and the NPCs; Spider-Man is a highly populated open-world game. The game takes a certain time to load world chunks on PS4 and much less time on PS5.

“Spidey stands in a small plaza. Cerny presses a button on the controller, initiating a fast-travel interstitial screen. When Spidey reappears in a totally different spot in Manhattan, 15 seconds have elapsed. Then Cerny does the same thing on a next-gen devkit connected to a different TV. (The devkit, an early ‘low-speed’ version, is concealed in a big silver tower, with no visible componentry.) What took 15 seconds now takes less than one: 0.8 seconds, to be exact.”

Lmfaooo. Let me ask you this. Did the demo have multiple different cars? Like in real life, or just a small subset of vehicles repeated over and over? Did each vehicle have a different set of bumpers, grills, scratch marks, dents, etc.? Why couldn't this have been done on a potato PC or a current-gen console? Because Sony wants to push this falsehood of a NEED for an SSD. There is no need to stream anything in a simple-ass demo like this. It was not an open-world demo with characters on screen and a huge variety of data; only the same few cars and same few buildings. A current-gen console could do this with ease. If you think otherwise, please provide proof.
 