
AMD Oberon PlayStation 5 SoC Die Delidded and Pictured

onQ123

Member
Source is Locuza (same as Navi 10 anyways).



Navi 10 only had 64

 

Sosokrates

Report me if I continue to console war
Are we still doing this?
Doing what? The PS5 will lower GPU frequency to reduce power in demanding games. The XSX is designed so the GPU can remain at 1825MHz.
Not that it really matters given how scalable engines are these days.
But people should not claim the PS5's compute performance is a constant; it's not. 10.28 TFLOPS is its maximum capability, not its constant. It will fluctuate between about 10 and 10.28 TFLOPS, whereas the XSX can run at 12.15 TFLOPS constantly.
 
Last edited:

skit_data

Member
Doing what? The PS5 will lower GPU frequency to reduce power in demanding games. The XSX is designed so the GPU can remain at 1825MHz.
It reduces the frequency by a few percent at most. Even a 5% reduction (I highly doubt it would downclock that much) would still keep it at over 2.1 GHz.

That is if it is supplied with a very, very high amount of work, something that would probably affect the Series X in a negative way as well.
 

onQ123

Member
Doing what? The PS5 will lower GPU frequency to reduce power in demanding games. The XSX is designed so the GPU can remain at 1825MHz.
Not that it really matters given how scalable engines are these days.
But people should not claim the PS5's compute performance is a constant; it's not. 10.28 TFLOPS is its maximum capability, not its constant. It will fluctuate between about 10 and 10.28 TFLOPS, whereas the XSX can run at 12.15 TFLOPS constantly.
Yeah you don't understand FLOPS at all.


The chance of a dev creating code that can hit the peak number of flops on PS5 or Xbox Series X is really low.
 

Sosokrates

Report me if I continue to console war
It reduces the frequency by a few percent at most. Even a 5% reduction (I highly doubt it would downclock that much) would still keep it at over 2.1 GHz.

That is if it is supplied with a very, very high amount of work, something that would probably affect the Series X in a negative way as well.
Yes indeed.
Cerny said it will run at 2230MHz most of the time, with a few percent (let's say 2%) clock reduction necessary on that worst-case game. 2% is about 45MHz, so it would be running at 2185MHz under these heavy loads, which is 10.07 TFLOPS, i.e. 2.08 TFLOPS less than the XSX, making the XSX about 20.6% stronger. So the PS5's GPU has a range of power below the XSX's; it would be more accurate to say the PS5's GPU is 18-20% weaker. It could be more depending on the boundaries of what Cerny meant by "a few percent".
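
For anyone who wants to check that arithmetic, here's a rough sketch of where the TFLOPS figures come from. It assumes the usual RDNA 2 layout of 64 shader ALUs per CU doing 2 FLOPs (one FMA) per clock; the helper below is just for illustration, nothing official.

```python
def tflops(cus: int, clock_mhz: float) -> float:
    """Peak FP32 throughput: CUs x 64 ALUs x 2 FLOPs per clock."""
    return cus * 64 * 2 * clock_mhz * 1e6 / 1e12

ps5_peak   = tflops(36, 2230)   # ~10.28 TFLOPS at the maximum clock
ps5_capped = tflops(36, 2185)   # ~10.07 TFLOPS if the clock drops ~2%
xsx        = tflops(52, 1825)   # ~12.15 TFLOPS at the fixed clock

print(ps5_peak, ps5_capped, xsx)
print(f"XSX vs capped PS5: +{(xsx / ps5_capped - 1) * 100:.1f}%")   # ~20.6%
print(f"XSX vs peak PS5:   +{(xsx / ps5_peak - 1) * 100:.1f}%")     # ~18.2%
```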
 
Last edited:

Sosokrates

Report me if I continue to console war
Yeah you don't understand FLOPS at all.


The chance of a dev creating code that can hit the peak number of flops on PS5 or Xbox Series X is really low.
Cerny said he expects the PS5 gpu to spend most of its time at or close to 2230mhz.
 

Boglin

Member
The PS5 should only throttle when the CPU and GPU overburden the 350W PSU with an intense workload. The PS5 has a variant of SmartShift, so in the scenario where the GPU needs a crazy amount of power, it can get it at the expense of the CPU.

Does anyone know what happens to a fixed-clock system like the XSX in the same scenario? It has a 315W PSU with no SmartShift, and it obviously can't create more electricity out of thin air, so does the GPU just do less work while keeping the same clock speed in this situation?

Edit: I'm genuinely curious. I'm not saying the PS5 is better than the XSX
 
Last edited:

Sosokrates

Report me if I continue to console war
I'm saying that the numbers are the "Theoretical Peak Performance" numbers & they don't change depending on if you hit that peak or not.


Even if you render a pong line the theoretical peak performance number will still be the same.

Yes, of course power draw is dependent on load. But to say the PS5's GPU is 10.28 TFLOPS is not accurate. It's the first time a console has had this scalable clock feature.
And no console will run at peak power all the time, but we acknowledge the Xbox One was 1.31TF, the PS4 was 1.84 and the Series X is 12.15. It's not the same with the PS5 because of this new scalable frequency method.
 

onQ123

Member
Yes, of course power draw is dependent on load. But to say the PS5's GPU is 10.28 TFLOPS is not accurate. It's the first time a console has had this scalable clock feature.
And no console will run at peak power all the time, but we acknowledge the Xbox One was 1.31TF, the PS4 was 1.84 and the Series X is 12.15. It's not the same with the PS5 because of this new scalable frequency method.



That number has always been the "Theoretical Peak"; to say that it's not accurate to call the PS5 GPU 10.28 TFLOPS is crazy.
 

TheAssist

Member
I thought we had this figured out last year. The point of the variable clock is to come closer, more often, to the theoretical max performance. No system is ever going to hit its theoretical max performance with any sane code in a game, but with this feature they try to run the GPU and CPU closer to their maximum possible performance, because often you don't need both to run at maximum frequency: the GPU is still waiting for the CPU to do its thing. In that moment the GPU can clock down and allow some power to go to the CPU, so that the CPU is faster and can hand its finished calculations to the GPU, and vice versa. That's the whole point of variable frequency. It's not to lower frequency because the system is running hot; it's to upclock the system beyond what would otherwise be possible with a chip like this. The variable frequency is the whole reason the PS5 GPU can run this fast. They are basically getting more out of the same design, which might otherwise only be able to run a few hundred MHz lower with a constant frequency.

Running both the CPU and GPU at fixed frequencies does not improve performance; it only wastes power and creates heat. Switching the power between CPU and GPU allows Sony to reach these high GPU frequencies to begin with and allowed them to save some silicon, since they can now operate closer to the maximum possible teraflop figure.

Whether or not this is enough to beat the XSX I don't know, but I'm pretty sure the difference between the two will be so small that absolutely no one will be able to tell during normal gameplay, only in zoomed-in freeze frames, which is why this discussion is pointless for anyone who is not deeply interested in the exact technical details and only wants to shout out teraflop numbers as if those mean anything.
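
To make the power-sharing idea concrete, here's a purely illustrative toy model. The wattages, the cubic power curve and the 10 MHz step are made-up numbers for the sake of the example, not Sony's actual figures or algorithm.

```python
GPU_MAX_MHZ, CPU_MAX_MHZ = 2230, 3500
SOC_BUDGET_W = 190          # hypothetical fixed budget shared by CPU + GPU

def gpu_power(mhz):
    # power rises much faster than linearly with clock (rough cubic assumption)
    return 130 * (mhz / GPU_MAX_MHZ) ** 3

def cpu_power(mhz):
    return 70 * (mhz / CPU_MAX_MHZ) ** 3

def share_budget(gpu_demand_mhz, cpu_demand_mhz):
    """Trim the GPU clock in small steps until the combined draw fits the budget."""
    gpu_mhz = gpu_demand_mhz
    while gpu_power(gpu_mhz) + cpu_power(cpu_demand_mhz) > SOC_BUDGET_W and gpu_mhz > 0:
        gpu_mhz -= 10
    return gpu_mhz

print(share_budget(2230, 2500))   # light CPU load: GPU stays at 2230
print(share_budget(2230, 3500))   # heavy CPU load: GPU sheds a couple of percent (~2170)
```

The point is the same as above: in this kind of scheme the clock only dips when both units ask for their worst-case power at the same moment, and even then only by a small amount.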
 

Seph-

Member
One year later and we're still having the same redundant argument. One's a 10.28TF machine, the other's a 12TF machine. Neither is ever going to run at, or even near, its peak in practice, let alone constantly, because those totals are all theoretical. Game engines don't get written that way because, well, the consoles would basically just shut off or heavily throttle. So in the end none of this matters, especially to the developers. Go play more games; honestly you'll have a way more fun time doing that than you will going in circles with this bunch of crap.
 

Sosokrates

Report me if I continue to console war
One year later and we're still having the same redundant argument. One's a 10.28TF machine, the other's a 12TF machine. Neither is ever going to run at, or even near, its peak in practice, let alone constantly, because those totals are all theoretical. Game engines don't get written that way because, well, the consoles would basically just shut off or heavily throttle. So in the end none of this matters, especially to the developers. Go play more games; honestly you'll have a way more fun time doing that than you will going in circles with this bunch of crap.

12.15tf
 
Yes indeed.
Cerny said it will run at 2230MHz most of the time, with a few percent (let's say 2%) clock reduction necessary on that worst-case game. 2% is about 45MHz, so it would be running at 2185MHz under these heavy loads, which is 10.07 TFLOPS, i.e. 2.08 TFLOPS less than the XSX, making the XSX about 20.6% stronger. So the PS5's GPU has a range of power below the XSX's; it would be more accurate to say the PS5's GPU is 18-20% weaker. It could be more depending on the boundaries of what Cerny meant by "a few percent".
TFs are only important for coding scenarios that are compute-driven, which isn't as important for games as it would be for say mining, or raw data processing on servers.

Certain things like pixel fillrate, geometry culling and triangle rasterization rate are not bound to the CUs explicitly, so the design with higher clocks tends to win out in those cases, which happens to be PS5. Texture fillrate is trickier because those are bound by CUs in RDNA2 so technically speaking the more CUs active the higher Texture and texel fillrate (and due to how RDNA 2 is designed, BVH traversal intersection rates) would be.

However, those things are still determined in some way by clock speeds and are also bound by how heavily the CUs across the GPU are saturated with work. In Series X's case you'd need 44 CUs regularly saturated with work on their TMUs (4 each) to match the texture/texel fillrate of PS5 (321.2 Gtexels/s). That means a game would need to regularly keep 8 more CUs active on Series X to match the same texture/texel fillrate throughput as PS5. So that is one of the downsides of having lower GPU clocks.
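
If you want to sanity-check those texel numbers yourself, the back-of-the-envelope maths looks like this (assuming 4 texture units per CU and one texel per unit per clock, which is the usual RDNA 2 figure; the helper function is just illustrative):

```python
def gtexels(cus, clock_mhz, tmus_per_cu=4):
    return cus * tmus_per_cu * clock_mhz * 1e6 / 1e9

ps5      = gtexels(36, 2230)      # ~321.1 Gtexels/s
xsx_peak = gtexels(52, 1825)      # ~379.6 Gtexels/s with every CU texturing

# Series X CUs that must be kept busy texturing just to match the PS5 figure
print(ps5 / gtexels(1, 1825))     # ~44
```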

Series X does make up for that in a way with larger GDDR6 memory bandwidth, but you also need to remember this is shared between the GPU (560 GB/s) and the CPU/audio (336 GB/s). So, if for a given percentage of frame time in a second (let's say 15%) the CPU and audio are using the GDDR6, then that's 15% of a second where the Series X memory is running at 336 GB/s, not 560 GB/s. So effective bandwidth in that scenario is actually closer to 560 GB/s * 0.85 = 476 GB/s. However, that's probably a more extreme scenario, since most PC CPUs equivalent to the Series X's use about 50 GB/s of DDR4 bandwidth IIRC, but the audio could potentially use another 20 GB/s on top of that (if it's around what PS5 offers with the Tempest Engine), so typical GPU bandwidth on Series X is probably around 490 GB/s in most cases.
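
Same deal for the bandwidth estimate; the 15% share and the ~50 + 20 GB/s CPU/audio draw are my own assumptions from above, not measured values:

```python
FAST_POOL_BW, SLOW_POOL_BW = 560, 336   # Series X GB/s for the 10 GB and 6 GB pools

# Scenario 1: CPU/audio own the bus for 15% of each frame, the GPU gets the rest
print(FAST_POOL_BW * 0.85)              # 476 GB/s

# Scenario 2: simply subtract an assumed ~50 GB/s CPU + ~20 GB/s audio draw
print(FAST_POOL_BW - (50 + 20))         # 490 GB/s
```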

That's still higher than PS5's memory bandwidth (448 GB/s), especially if CPU and audio usage is taken into account (which would bring PS5's GPU bandwidth closer to 383 GB/s assuming CPU usage and Tempest Audio usage), but Series X has to confine its GPU data to a 10 GB pool. Therefore if there's GPU-related data that might be sitting in the CPU/audio pool of 6 GB, and has to be moved to the 10 GB pool, hopefully that data is on the same GDDR6 module or else there'll be access latency penalty for moving the data around within the GDDR6 memory pool (which would happen anyway with just needing to move data from the 6 GB pool to the 10 GB pool; I know that this isn't hard-coded so theoretically the GPU could use the 6 GB pool for graphics data as well, but the application contextually switching from the two pools due to bandwidth differences is probably nowhere near an easy thing to maximize use of by devs I'd imagine).

With PS5 there's no need for that type of data management because its memory is fully unified and not virtually partitioned into two banks at different bandwidths. That does help to save on latency and ensure effective bandwidth is what the raw numbers state. The only thing I am curious about is what type of penalty there is for that more thorough data management on Series X. If it's within a margin of error, say a 2% penalty, that's still a potential further loss of 9.52 GB/s (of 476 GB/s) to 9.8 GB/s (of 490 GB/s) of bandwidth, bringing those figures down to between 466.48 GB/s and 480.2 GB/s of likely actual GPU GDDR6 bandwidth for Series X (under typical real-use cases where the CPU and audio are also being used).

While I could throw SSD transfers into that as well (any data going to or from the SSD on either system eats into the available memory bandwidth), that isn't too important considering they both have the same physical footprint of GDDR6 memory. However, the PS5 can decompress data at a higher rate, meaning if a game for example needs 8 GB of new texture data, that could be done quicker on PS5 (under one second) while on Series X you'd need a bit more than one second (since its peak for texture decompression is 6 GB/s). For PS5, if it's a particularly well compressed texture, that 8 GB could be delivered into system RAM in half a second (since 8 GB is less than half of what a 17 GB/s rate moves in a second). Additionally, there are things related to data decompression and caching of SSD data that PS5 handles on its own but that the Series X CPU has to do a bit of work on, so that means less effective CPU bandwidth for game-specific tasks.
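
And the streaming example is just division, using the decompressed-output peaks quoted above (~6 GB/s on Series X, ~17 GB/s best case on PS5):

```python
def seconds_to_stream(gigabytes, decompressed_rate_gb_per_s):
    return gigabytes / decompressed_rate_gb_per_s

print(seconds_to_stream(8, 17))   # ~0.47 s on PS5 for 8 GB of well-compressed textures
print(seconds_to_stream(8, 6))    # ~1.33 s on Series X for the same 8 GB
```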

In some ways the Series X therefore benefits from having lower effective rates in some areas like geometry culling, because that means it needs less CPU time to generate the commands to the GPU for creating the polygons, but that's still counterbalanced by other things on its own side such as what I just mentioned, and on PS5 such as cache scrubbers which are not present on the Series systems (which help with cutting down the amount of trips needed to system GDDR6 RAM, and avoiding the access latency penalty that comes with that), etc. So yeah, in terms of pure FLOPs the PS5 loses out if saturation is pushed on both it and Series X, but that's clearly only one fraction of the whole pie and not the most important when it comes to gaming performance, either.

I'm guessing Microsoft are envisioning a big shift in the near future to fully programmable mesh shading (which, TBF, is something the PS5 only partially has with its Primitive Shaders), and some early benchmarks have shown huge gains in throughput performance there, but that's also hinged on a pretty big design paradigm shift when it comes to the 3D pipeline, and possibly not one that benefits every type of game. Even there it's not like the PS5 is a generation behind; while most of whatever customizations Sony made there were likely based on an update AMD themselves did, there isn't a massive gulf in capability between that and Mesh Shaders, though overall it is one of the areas where Series X has an advantage (at least in terms of potential use-cases).

Hopefully that clears some things up, although I also want to stress that both PS5 and Series X are very future-proofed as far as this generation is concerned. I just don't think you're going to see any scenario where the latter is clearly blowing out the former in performance over the course of the generation. Just expect more of what we're generally seeing right now, with maybe a slight bias towards Series X depending on Mesh Shader adoption rates. But yeah, don't get any hopes up for any PS2/OG Xbox or early PS3/360 levels of performance gulfs this gen. Even PS4/XBO levels of gulfs might be pushing it.

Wonder how many people/fanboys who cry about me being an "Xbox fanboy" or writing "big passages of nothing" are gonna try saying that again after this. Those folks, and they know who they are, can go hold their L's in the corner. I'm not in this for some console war bullshit or being on "a side", especially considering I like all three and have always shown that and will continue to do so. Keep that to yourselves and kick rocks.
 
Last edited:

cormack12

Gold Member
I thought we had this figured out last year. The point of the variable clock is to come closer, more often, to the theoretical max performance. No system is ever going to hit its theoretical max performance with any sane code in a game, but with this feature they try to run the GPU and CPU closer to their maximum possible performance, because often you don't need both to run at maximum frequency: the GPU is still waiting for the CPU to do its thing. In that moment the GPU can clock down and allow some power to go to the CPU, so that the CPU is faster and can hand its finished calculations to the GPU, and vice versa. That's the whole point of variable frequency. It's not to lower frequency because the system is running hot; it's to upclock the system beyond what would otherwise be possible with a chip like this. The variable frequency is the whole reason the PS5 GPU can run this fast. They are basically getting more out of the same design, which might otherwise only be able to run a few hundred MHz lower with a constant frequency.

Running both the CPU and GPU at fixed frequencies does not improve performance; it only wastes power and creates heat. Switching the power between CPU and GPU allows Sony to reach these high GPU frequencies to begin with and allowed them to save some silicon, since they can now operate closer to the maximum possible teraflop figure.

Whether or not this is enough to beat the XSX I don't know, but I'm pretty sure the difference between the two will be so small that absolutely no one will be able to tell during normal gameplay, only in zoomed-in freeze frames, which is why this discussion is pointless for anyone who is not deeply interested in the exact technical details and only wants to shout out teraflop numbers as if those mean anything.

(GIF: Orson Welles applauding, Citizen Kane)
 

Sosokrates

Report me if I continue to console war
TFs are only important for coding scenarios that are compute-driven, which isn't as important for games as it would be for say mining, or raw data processing on servers.

Certain things like pixel fillrate, geometry culling and triangle rasterization rate are not bound to the CUs explicitly, so the design with higher clocks tends to win out in those cases, which happens to be PS5. Texture fillrate is trickier because those are bound by CUs in RDNA2 so technically speaking the more CUs active the higher Texture and texel fillrate (and due to how RDNA 2 is designed, BVH traversal intersection rates) would be.

However, those things are still determined in some way by clock speeds and are also bound by how heavily the CUs across the GPU are saturated with work. In Series X's case you'd need 44 CUs regularly saturated with work on their TMUs (4 each) to match the texture/texel fillrate of PS5 (321.2 Gtexels/s). That means a game would need to regularly keep 8 more CUs active on Series X to match the same texture/texel fillrate throughput as PS5. So that is one of the downsides of having lower GPU clocks.

Series X does make up for that in a way with larger GDDR6 memory bandwidth, but you also need to remember this is shared between the GPU (560 GB/s) and the CPU/audio (336 GB/s). So, if for a given percentage of frame time in a second (let's say 15%) the CPU and audio are using the GDDR6, then that's 15% of a second where the Series X memory is running at 336 GB/s, not 560 GB/s. So effective bandwidth in that scenario is actually closer to 560 GB/s * 0.85 = 476 GB/s. However, that's probably a more extreme scenario, since most PC CPUs equivalent to the Series X's use about 50 GB/s of DDR4 bandwidth IIRC, but the audio could potentially use another 20 GB/s on top of that (if it's around what PS5 offers with the Tempest Engine), so typical GPU bandwidth on Series X is probably around 490 GB/s in most cases.

That's still higher than PS5's memory bandwidth (448 GB/s), especially if CPU and audio usage is taken into account (which would bring PS5's GPU bandwidth closer to 383 GB/s assuming CPU usage and Tempest Audio usage), but Series X has to confine its GPU data to a 10 GB pool. Therefore if there's GPU-related data that might be sitting in the CPU/audio pool of 6 GB, and has to be moved to the 10 GB pool, hopefully that data is on the same GDDR6 module or else there'll be access latency penalty for moving the data around within the GDDR6 memory pool (which would happen anyway with just needing to move data from the 6 GB pool to the 10 GB pool; I know that this isn't hard-coded so theoretically the GPU could use the 6 GB pool for graphics data as well, but the application contextually switching from the two pools due to bandwidth differences is probably nowhere near an easy thing to maximize use of by devs I'd imagine).

With PS5 there's no need for that type of data management because its memory is fully unified and not virtually partitioned into two banks at different bandwidths. That does help to save on latency and ensure effective bandwidth is what the raw numbers state. The only thing I am curious about is what type of penalty there is for that more thorough data management on Series X. If it's within a margin of error, say a 2% penalty, that's still a potential further loss of 9.52 GB/s (of 476 GB/s) to 9.8 GB/s (of 490 GB/s) of bandwidth, bringing those figures down to between 466.48 GB/s and 480.2 GB/s of likely actual GPU GDDR6 bandwidth for Series X (under typical real-use cases where the CPU and audio are also being used).

While I could throw SSD transfers into that as well (any data going to or from the SSD on either system eats into the available memory bandwidth), that isn't too important considering they both have the same physical footprint of GDDR6 memory. However, the PS5 can decompress data at a higher rate, meaning if a game for example needs 8 GB of new texture data, that could be done quicker on PS5 (under one second) while on Series X you'd need a bit more than one second (since its peak for texture decompression is 6 GB/s). For PS5, if it's a particularly well compressed texture, that 8 GB could be delivered into system RAM in half a second (since 8 GB is less than half of what a 17 GB/s rate moves in a second). Additionally, there are things related to data decompression and caching of SSD data that PS5 handles on its own but that the Series X CPU has to do a bit of work on, so that means less effective CPU bandwidth for game-specific tasks.

In some ways the Series X therefore benefits from having lower effective rates in some areas like geometry culling, because that means it needs less CPU time to generate the commands to the GPU for creating the polygons, but that's still counterbalanced by other things on its own side such as what I just mentioned, and on PS5 such as cache scrubbers which are not present on the Series systems (which help with cutting down the amount of trips needed to system GDDR6 RAM, and avoiding the access latency penalty that comes with that), etc. So yeah, in terms of pure FLOPs the PS5 loses out if saturation is pushed on both it and Series X, but that's clearly only one fraction of the whole pie and not the most important when it comes to gaming performance, either.

I'm guessing Microsoft are envisioning a big shift in the near future to fully programmable mesh shading (which, TBF, is something the PS5 only partially has with its Primitive Shaders), and some early benchmarks have shown huge gains in throughput performance there, but that's also hinged on a pretty big design paradigm shift when it comes to the 3D pipeline, and possibly not one that benefits every type of game. Even there it's not like the PS5 is a generation behind; while most of whatever customizations Sony made there were likely based on an update AMD themselves did, there isn't a massive gulf in capability between that and Mesh Shaders, though overall it is one of the areas where Series X has an advantage (at least in terms of potential use-cases).

Hopefully that clears some things up, although I also want to stress that both PS5 and Series X are very future-proofed as far as this generation is concerned. I just don't think you're going to see any scenario where the latter is clearly blowing out the former in performance over the course of the generation. Just expect more of what we're generally seeing right now, with maybe a slight bias towards Series X depending on Mesh Shader adoption rates. But yeah, don't get any hopes up for any PS2/OG Xbox or early PS3/360 levels of performance gulfs this gen. Even PS4/XBO levels of gulfs might be pushing it.

Wonder how many people/fanboys who cry about me being an "Xbox fanboy" or writing "big passages of nothing" are gonna try saying that again after this. Those folks, and they know who they are, can go hold their L's in the corner. I'm not in this for some console war bullshit or being on "a side", especially considering I like all three and have always shown that and will continue to do so. Keep that to yourselves and kick rocks.

I was just disputing that people keep saying there's an 18% difference in compute between the XSX and PS5 GPUs; I don't think it's accurate because the PS5 has to reduce its clocks by a few percent when running certain demanding games.

Maybe a better way to explain it: if the PS5 was like the PS4 and had a static clock of 2230MHz, it would perform slightly better than the PS5 that was released, because it would not need to lower its clock in certain situations.
 

skit_data

Member
Yes indeed.
Cerny said it will run at 2230MHz most of the time, with a few percent (let's say 2%) clock reduction necessary on that worst-case game. 2% is about 45MHz, so it would be running at 2185MHz under these heavy loads, which is 10.07 TFLOPS, i.e. 2.08 TFLOPS less than the XSX, making the XSX about 20.6% stronger. So the PS5's GPU has a range of power below the XSX's; it would be more accurate to say the PS5's GPU is 18-20% weaker. It could be more depending on the boundaries of what Cerny meant by "a few percent".
That is not what he said AFAIK; you are welcome to point me to the exact quote.
He said that a few percent decrease in clock frequency will give a relatively large decrease in power consumption.
Cerny also stresses that power consumption and clock speeds don't have a linear relationship. Dropping frequency by 10 per cent reduces power consumption by around 27 per cent. "In general, a 10 per cent power reduction is just a few per cent reduction in frequency," Cerny emphasises.

I.e. it could very well be interpreted that a sub-1% reduction in frequency is enough. I guess your point is that Series X's compute power is more "reliable", but I argue the "reliability" of PS5's compute power is more or less the same.
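
One way to see where "drop the clock 10%, save ~27% power" can come from: dynamic power goes roughly as frequency times voltage squared, and voltage tends to scale with frequency near the top of the curve, so power falls off roughly with the cube of the clock. That cubic rule is a textbook approximation on my part, not something Cerny spelled out:

```python
def relative_power(clock_scale):
    # rough dynamic-power model: P ~ f * V^2, with V scaling ~ f near the top of the V/f curve
    return clock_scale ** 3

print(relative_power(0.90))   # ~0.73, so a 10% clock drop saves ~27% power
print(relative_power(0.98))   # ~0.94, so a 2% clock drop already saves ~6% power
```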
 
Last edited:

Seph-

Member
It's not about feeling better. I mean, did including the decimal places for PS5 and not XSX make you feel better?
Not really; frankly I couldn't have cared less if either had them. The .28 was just easier to remember than the .15 due to how the Xbox was marketed. I also don't necessarily think saying "a more demanding game" is more accurate than saying "a more terribly coded game", considering the latter is what would make more of a difference *see New World*. Frankly, whether the frequency drops or changes is basically impossible for us to know, considering we aren't the ones making the games and we don't know what's being demanded by the engine per scene, per frame. That makes anything we say or any assumptions we make pure speculation.
 

MikeM

Member
I can’t believe we are back on this. Forget TFs, forget SoC power consumption models, SmartShift, etc.
Series X will win most of the resolution battles. It's been proven. There may be some occurrences where PS5 comes out on top, but all I can say is that the differences between the two machines are not big enough for me, or most of the user base, to notice. It's getting tiring seeing the same shit on these boards.
 

Seph-

Member
I can’t believe we are back on this. Forget TFs, forget SoC power consumption models, SmartShift, etc.
Series X will win most of the resolution battles. It's been proven. There may be some occurrences where PS5 comes out on top, but all I can say is that the differences between the two machines are not big enough for me, or most of the user base, to notice. It's getting tiring seeing the same shit on these boards.
Nail on the head. We play games, not numbers. It's fun to know aspects of these things, but lately, or at least over the past year, it's seemed much worse than in previous years I've browsed. It's incredibly tiring, to say the least.
 

Sosokrates

Report me if I continue to console war
Not really; frankly I couldn't have cared less if either had them. The .28 was just easier to remember than the .15 due to how the Xbox was marketed. I also don't necessarily think saying "a more demanding game" is more accurate than saying "a more terribly coded game", considering the latter is what would make more of a difference *see New World*. Frankly, whether the frequency drops or changes is basically impossible for us to know, considering we aren't the ones making the games and we don't know what's being demanded by the engine per scene, per frame. That makes anything we say or any assumptions we make pure speculation.
OK, got it. Everything is speculation.
The variable frequency method was implemented in order to get more performance within the power consumption and price budget.

But that's not really the point I was trying to make. If the PS5 was made the "old way" and had a clock of 2230MHz that was not variable, it would perform better than the PS5 as released, but they didn't do that because it was not within the power consumption/heat/price budget.

I mean, if it means that much to people, call the difference between the GPUs "18%", but it's not accurate.
 

Seph-

Member
OK, got it. Everything is speculation.
The variable frequency method was implemented in order to get more performance within the power consumption and price budget.

But that's not really the point I was trying to make. If the PS5 was made the "old way" and had a clock of 2230MHz that was not variable, it would perform better than the PS5 as released, but they didn't do that because it was not within the power consumption/heat/price budget.

I mean, if it means that much to people, call the difference between the GPUs "18%", but it's not accurate.
The honest answer to that is we really wouldn't know. Every game engine uses these aspects differently; it's why you already get discrepancies between versions of games. One engine might use more compute, thus taking advantage of more CUs, while another might not. Every engine doesn't work the same, simple as that. It's also entirely likely that if they did it the old way at that clock it would cost more to produce and cool. Which, again, is just speculation because I don't work for Sony, AMD or TSMC. Frankly, putting a simple percentage on it isn't accurate regardless of the side; whether you claim it's less than 18% or more than 18%, both are wrong.

There's a good quote I tend to always remember with game development and it tends to always be pretty spot on. "No one statistic is a measure of power of a console, there are too many variables, and no one calculation to produce a result. It varies per game, per engine, per firmware, per development team, and per patch, it always has and it always will."
 

Sosokrates

Report me if I continue to console war
The honest answer to that is we really wouldn't know. Every game engine uses these aspects differently; it's why you already get discrepancies between versions of games. One engine might use more compute, thus taking advantage of more CUs, while another might not. Every engine doesn't work the same, simple as that. It's also entirely likely that if they did it the old way at that clock it would cost more to produce and cool. Which, again, is just speculation because I don't work for Sony, AMD or TSMC. Frankly, putting a simple percentage on it isn't accurate regardless of the side; whether you claim it's less than 18% or more than 18%, both are wrong.

There's a good quote I tend to always remember with game development and it tends to always be pretty spot on. "No one statistic is a measure of power of a console, there are too many variables, and no one calculation to produce a result. It varies per game, per engine, per firmware, per development team, and per patch, it always has and it always will."

We know that a higher clock rate provides more performance on the same chip.
If the GPU did not have to reduce frequency when that "worst-case game" was played, it would perform better.
The devs would design around it though; in the real world it may mean slightly higher resolution, or slightly more frames when the framerate is not locked.
 

Seph-

Member
We know that a higher clock rate provides more performance on the same chip.
If the GPU did not have to reduce frequency when that "worst-case game" was played, it would perform better.
The devs would design around it though; in the real world it may mean slightly higher resolution, or slightly more frames when the framerate is not locked.
That's assuming that every dev will catch everything though; also, again, see New World being just an awfully coded game. The reality is, yes, there are performance benefits to both models. However, as for any trade-offs, we really don't have access to definitive proof of what creates issues or what costs more unless we're the ones making those games or coding those engines. Point is, truth be told, both are great hardware for what they are; both have trade-offs, which is a given. I'm kinda burnt out already on this topic so this will likely be the last post in this thread for me, unless I decide to chime in. I just recommend at this point that people stop chasing narratives of one console being X over another. We play games, not numbers. Again, each does something better than the other, and each does things differently than the other. These aren't bad things. From a hardware standpoint there really isn't some massive gap, but narrowing it down to exactly what it is is frankly something we don't have the ability to gauge. Both in the end are great hardware that are going to give you great AND similar experiences with very little difference or trade-offs. Enjoy the games, really; there's a boatload of great ones coming to both, I'm sure.
 
I was just disputing that people keep saying there's an 18% difference in compute between the XSX and PS5 GPUs; I don't think it's accurate because the PS5 has to reduce its clocks by a few percent when running certain demanding games.

Maybe a better way to explain it: if the PS5 was like the PS4 and had a static clock of 2230MHz, it would perform slightly better than the PS5 that was released, because it would not need to lower its clock in certain situations.
I think we're actually misunderstanding what's meant by "locked clocks" here. It's not that the Series X GPU will always be at 1825 MHz (or the PS5 GPU at 2230 MHz), because there are going to be normal game scenarios that don't need clocks that high.

What Microsoft meant by sustained clocks is that when a game needs the 1825 MHz clock throughput of the GPU, it is always guaranteed that clock. That isn't the case on PS5; if certain power limits are hit (for example due to certain instructions being processed), then the GPU will drop its clock rate, by (according to Road to PS5) up to 2%, while reducing the power load by up to 10%.

Otherwise, on PS5 the CPU and GPU (and I'd assume Tempest and the I/O complex) all have to share a fixed power budget, and if one component needs more juice, the other will have to give up some of its power budget. If that can't be done, then the component that needs the extra power won't get it, and will lower its power load, causing some reduction of the clock rate. On the Series systems, both the CPU and GPU will always be able to provide whatever clock frequencies are required of them; it's just a matter of drawing more power if necessary to reach and sustain them for however long they're needed.

So while the clock frequencies on PS5 may be variable, the power budget on Series systems is variable. Tradeoffs in both cases that have their benefits and drawbacks.
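
A toy way to picture those two policies (all numbers here are invented for illustration, nothing from either spec sheet): Series X holds the clock and lets power draw follow the workload, while PS5 holds the power budget and trims the clock when the workload would exceed it.

```python
def series_x_style(load):              # fixed clock, variable power draw
    clock_mhz = 1825
    power_w = 60 + 140 * load          # hypothetical idle + load-dependent draw
    return clock_mhz, power_w

def ps5_style(load, budget_w=180):     # fixed power budget, variable clock
    demanded_w = 60 + 160 * load
    clock_mhz = 2230 if demanded_w <= budget_w else 2230 * 0.98   # shed ~2% under the heaviest loads
    return clock_mhz, min(demanded_w, budget_w)

for load in (0.5, 1.0):                # moderate frame vs worst-case frame
    print(series_x_style(load), ps5_style(load))
```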
 

rnlval

Member
That's the thing, XSX doesn't have a '20% more pure power' or 'power (as a whole)', it has 18% more compute power (along with texel fill rate) over PS5 while actually being in deficit by around 20% in other GPU 'power' metrics tied to fixed function units due to frequency difference. Thus expecting a consistent difference of 20% in resolution or FPS is illogical to begin with.

And going by Cerny's statement that "when the triangles are small it's more difficult to feed the CUs with useful work", future next-gen titles with more complex geometry can actually favor PS5's 'deep' design more. There was also a post by Matt Hargett on Twitter suggesting that optimized code (with a higher cache hit rate) will benefit PS5's faster cache subsystem more. Furthermore, I really don't think that PS5's Geometry Engine and I/O complex are even remotely close to being maxed out.
The XSX GPU has about 18% higher compute, texture, and raytracing power. CU count scales the compute, texture, and raytracing hardware. Raytracing denoise runs on compute, BVH traversal runs on compute, and raytracing intersection testing is hardware accelerated.

XSX GPU has 5 MB of L2 cache at 1825 MHz.
PS5 GPU has 4 MB of L2 cache at up to 2230 MHz.

The main purpose of the next-generation geometry pipeline is to scale with CU count!

RTX Ampere has a large increase in compute power, which benefits e.g. the next-generation geometry pipeline, PC DirectStorage decompression, and raytracing denoise. RTX also has hardware-accelerated BVH traversal.
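
That 18% figure drops straight out of CU count times clock, since the compute, texture and RT intersection hardware all sit inside the CUs:

```python
xsx_cu_clock = 52 * 1825   # CUs x MHz
ps5_cu_clock = 36 * 2230
print(f"XSX advantage: {(xsx_cu_clock / ps5_cu_clock - 1) * 100:.1f}%")   # ~18.2%
```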
 
Last edited:

Loxus

Member
The XSX GPU has about 18% higher compute, texture, and raytracing power. CU count scales the compute, texture, and raytracing hardware. Raytracing denoise runs on compute, BVH traversal runs on compute, and raytracing intersection testing is hardware accelerated.

XSX GPU has 5 MB of L2 cache at 1825 MHz.
PS5 GPU has 4 MB of L2 cache at up to 2230 MHz.

The main purpose of the next-generation geometry pipeline is to scale with CU count!

RTX Ampere has a large increase in compute power, which benefits e.g. the next-generation geometry pipeline, PC DirectStorage decompression, and raytracing denoise. RTX also has hardware-accelerated BVH traversal.
I would not compare compute and ray tracing capabilities by the number of CUs, because the CUs and TMU+Ray units are designed differently. Thus performance and efficiency may also be different.
 

Kenpachii

Member
It's not that hard, people. The Xbox Series X GPU is faster. It's that simple. Let's not pretend those CUs are hard to push in modern games. PC games do it all day long, and especially at higher resolutions those CUs are easily used.

The reality, however, is that it doesn't matter, because a developer will never make a game and not make it run at at least 30 fps on the PS5. So optimization will always be done on the most popular platform that makes them the most money, which is the PS5. If they don't, their game will be review bombed to oblivion, and it's bad business as a result; you can see this with Cyberpunk.

If they focus on the PS5 version and push, for example, 60 fps as the target, the 18% of the Xbox (if it's 18%, I'm just going with what people say here) is going to be used for minor shit that nobody really notices or cares about. It's like having an 18% faster PC GPU with a 60 fps lock: congrats, you can now push one shadow setting to a higher level, or they just leave that 18% idle or increase the resolution a little bit.

About RT and CUs: again, if the PS5 can't run RT at a quarter of the resolution, which is what they are doing right now, the game will simply not have RT to start with (Far Cry 6), in order to push parity, most likely through contracts, or they'll simply use the leftover performance for minor shit that nobody cares about.

This is why people constantly say it doesn't matter what the differences are: the differences aren't big enough to be noticeable. The 400% zoom-in to spot one quality-preset difference is simply nonsense.

It's the same as what Hardware Unboxed stated in their Far Cry 6 review: we can't detect raytracing in this picture, but the raytracing experts we put on the job could nitpick the difference if you zoom in 400x. At that point it's useless and defeats its goal.

Now, why do you sometimes see dips below 60 on the Xbox and not the PS5? The same way a 3090 with a 5950X and 64GB of RAM dips below 60 with microstutter in BF5 while the PS5 doesn't: the optimisation is dog shit or the API used has issues, nothing to do with hardware.

At the end of the day, the CPUs are great, the RAM is acceptable, the SSDs in those boxes are gigantic improvements over those shit HDDs, and the GPUs are serviceable. No matter how gigantic a difference the marketing teams tell you X makes over Y, it's all PR shit to make you buy their hardware. The boxes are practically identical in the grand scheme of things.
 
Last edited:

ToTTenTranz

Banned
The chance of a dev creating code that can hit the peak number of flops on PS5 or Xbox Series X is really low.

I remember seeing someone claiming that register occupancy is usually 60-80%, though that doesn't tell the whole story either.
Regardless, the only way to get close to 100% FLOPS is with a power virus without a framerate limiter, like FurMark. But that wouldn't pass Sony's or Microsoft's compliance tests for publication anyway.

It's not that hard, people. The Xbox Series X GPU is faster. It's that simple.
Faster at what?




In compute-limited scenarios the Series X should be up to 20% faster, but in fillrate-limited scenarios the PS5 is up to 20% faster. Which happens more often probably depends on the game, engine, scenario, etc.
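
Rough numbers behind that "up to ~20% each way" claim, assuming both GPUs carry 64 ROPs (as commonly reported), so pixel fill rate scales purely with clock while compute scales with CUs times clock:

```python
PS5_CLK, XSX_CLK = 2230, 1825   # MHz

compute_ratio  = (52 * XSX_CLK) / (36 * PS5_CLK)    # ~1.18 in Series X's favour
fillrate_ratio = (64 * PS5_CLK) / (64 * XSX_CLK)    # ~1.22 in PS5's favour

print(f"Compute: XSX +{(compute_ratio - 1) * 100:.0f}%")
print(f"Pixel fill rate: PS5 +{(fillrate_ratio - 1) * 100:.0f}%")
```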
 

Kenpachii

Member
I remember seeing someone claiming that register occupancy is usually 60-80%, though that doesn't tell the whole story either.
Regardless, the only way to get close to 100% FLOPS is with a power virus without a framerate limiter, like FurMark. But that wouldn't pass Sony's or Microsoft's compliance tests for publication anyway.


Faster at what?




In compute-limited scenarios the Series X should be up to 20% faster, but in fillrate-limited scenarios the PS5 is up to 20% faster. Which happens more often probably depends on the game, engine, scenario, etc.


Games made now, for the current consoles. I have no interest in looking at how a game functions that is built for the PS4, because why even bother buying a PS5 at that point? It's about today's games. And in today's games the CUs are utilized without effort, simply because games are running at higher resolutions, which already consumes those CUs. 36 or 52 CUs isn't that many at the end of the day, especially for 4K, when you look at the PC hardware that sits in a class above it; it's kinda lowish.

People here pretend and try to find evidence to support their narrative that CUs only matter in certain scenarios, while in reality every single game made today, especially at 4K, will use those CUs without effort. Now, will you notice the difference? That all depends on what parity the developer is going for, which brings me back to my 30 fps remark.

Nobody is going to optimize a game so that it runs at 30 fps on an Xbox Series X and 24 fps on the PS5, because they will be review bombed, so it's a 30 fps lock versus a 30 fps lock. Do those CUs matter then? Not really, as they will only be applied to higher dynamic resolutions or maybe some other useless metric that nobody besides pixel counters cares about or sees at the end of the day. Or RT, which I commented on, and which is why I also put the BF example on PC into the mix.

Edit

Now, obviously you could cap a game at 36 CUs of usage and get the performance advantage on the PS5, much like how Far Cry 6 performs better with RT on a 6800 XT than on a 3080, even though the 3080 is a ton faster than a 6800 XT at RT. At that point it's just the developers being special and having a marketing agenda to push certain solutions forward. More CUs, however, isn't something added just for marketing purposes; it's basically how GPUs push performance forward at higher resolutions. The same could be said about the PS5: higher clocks on a lower-CU GPU is an option you can go for when resolution, or RT, isn't your first priority.

So whatever that guy says in his tweet, yes, he is right; however, in modern games using 52 CUs isn't something weird or special, as people here pretend it is. It's pretty much always going on in modern titles because of resolution and even RT (RT, lol).

Still, it's all irrelevant, as developers will build around what they deem useful, so for the end user it's meaningless.

This is why I stated that these boxes are practically identical for consumers, and comparisons are meaningless other than as science.
 
Last edited:

Darius87

Member
Games made now, for the current consoles. I have no interest in looking at how a game functions that is built for the PS4, because why even bother buying a PS5 at that point? It's about today's games. And in today's games the CUs are utilized without effort, simply because games are running at higher resolutions, which already consumes those CUs. 36 or 52 CUs isn't that many at the end of the day, especially for 4K, when you look at the PC hardware that sits in a class above it; it's kinda lowish.

People here pretend and try to find evidence to support their narrative that CUs only matter in certain scenarios, while in reality every single game made today, especially at 4K, will use those CUs without effort. Now, will you notice the difference? That all depends on what parity the developer is going for, which brings me back to my 30 fps remark.
ALUs will always be the last bottleneck; even at 4K you can't fill all those ALUs with tasks in a given frame. That's why async compute was created, to better utilize the CUs; they will always be bandwidth and power starved. So CU count only matters to the extent that you can utilize/parallelize them effectively, and you have to be a pretty amazing programming ninja to utilize them at high efficiency on average.
 