
[GamingBolt] WRC Generations Dev: Xbox Series X’s Raw GPU Performance is Better Than PS5’s, but Harder to Exploit

Lysandros

Member
The same design issue that plagued the PS3 in extracting its potential. It requires workflows and processes to be executed in a more parallel manner to take advantage of the higher CU count. On top of the PS3's dual-threaded PPE core, it also had 8 additional SPE co-processors.
What is so CELL- or SPU/SIMD-like about the XSX architecture that it leads to this "hidden potential" and comparatively much harder programming?
 
That's the thing: I feel like Xbox first party is even more constricted. As I pondered in the OP, how far can GPU parallel compute development go when the lead SKU is 20 CUs? Serious question for those who have a better understanding of game development.
This is what I'm worried about, and I can't see MS ever getting around it. MS first parties will always be tethered to a significantly weaker system, while the PS5 won't be (as long as it stays current-gen only). So MS games will always be handcuffed by the S.

I don't know how damaging this will actually be to game design or graphics. Only time will tell.
 

rnlval

Member
That's a pretty basic take to be coming from an actual developer, but NDAs and such, I understand to some degree. I also find the "Xbox Series X's GPU raw performance is better" comment somewhat misleading. I think what he really means is "XSX has a higher theoretical compute ceiling/power"; "performance" alludes to final real-world throughput, which contradicts the context, and compute is only one facet of it. More so than parallelism, XSX's main problem is that its higher CU count is coupled with a slower GPU back end and front end compared to PS5, which naturally reduces its real, whole-GPU power. So I'm not buying the "XSX is more powerful but harder to exploit" narrative as the main reason behind the real-world results. The PS5/XSX situation is very far from being analogous to the PS3/X360 one.
XSX's lower front-end (geometry) and back-end (RBE) performance can be worked around with mesh shaders and the compute shader/texture unit path (AMD's async compute marketing). One of the main points of DX12U is moving front-end geometry work into compute shaders.
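To make that concrete, here's a minimal sketch of how an engine would detect the DX12U mesh shader path before routing geometry through it. The feature-query API is real D3D12; the surrounding function is just illustrative:

```cpp
#include <d3d12.h>

// Query whether the device supports DX12 Ultimate mesh shaders.
// Engines fall back to the classic vertex/geometry front end when
// MeshShaderTier reports D3D12_MESH_SHADER_TIER_NOT_SUPPORTED.
bool SupportsMeshShaders(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS7 options7 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS7,
                                           &options7, sizeof(options7))))
        return false;
    return options7.MeshShaderTier >= D3D12_MESH_SHADER_TIER_1;
}
```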

In each generation, NVIDIA's GA102 and AD102 have held the TFLOPS compute advantage over AMD's flagship counterparts. Hardware-accelerated raytracing sits on the compute shader path, with the RT cores placed next to the texture units, and raytracing denoising runs on compute shaders.

The major improvements in Navi 31 are in compute shader power, i.e. from Navi 21's ~24 TFLOPS / 320 TMUs / 80 RT cores (missing traversal hardware) / 128 ROPs to Navi 31's ~61 TFLOPS / 384 TMUs / 96 RT cores / 192 ROPs.

NVIDIA AD102 (as shipped in the RTX 4090) has 82.58 TFLOPS compute, 512 TMUs, 128 RT cores, and 176 ROPs.
NVIDIA GA102 (RTX 3090 Ti) has 40 TFLOPS compute, 336 TMUs, 84 RT cores, and 112 ROPs.
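For anyone checking the arithmetic behind those TFLOPS figures: peak FP32 is shader count × 2 FLOPs per clock (one fused multiply-add) × clock speed. The RTX 4090's AD102: 16,384 × 2 × 2.52 GHz ≈ 82.6 TFLOPS, which is where the 82.58 number comes from. The 3090 Ti's GA102: 10,752 × 2 × 1.86 GHz ≈ 40 TFLOPS.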
 

Danknugz

Member
Sometimes people hear what they want to hear. The message is very clear: PS5 is easier to exploit, Series X has a higher performance ceiling. It's up to the devs and engine developers to prioritize where they want their resources.

How you read this and come to the conclusion that it states one is better than the other is beyond me.
I think I have an idea 🤔
 

Danknugz

Member
Most of the differences have come down to a frame dropped here and there, with many more being technically different but imperceptible outside of forensic scrutiny. Anything more significant has seemed to be more the game's fault than either console's. I don't even know how DF pulls views for these comparisons when every one is a big nothingburger.
It's clearly a huge deal for so many people to get their "told ya so" sentiment in whenever their console wins the latest Special Olympics event of PS5 vs. Xbox Series X (titans of technology and computing power).
 

proandrad

Member
Going higher-end on the CPU tends to be the best play if you want smoother frame times. In PC gaming, whenever your GPU is the limiting factor, you can lower your resolution or graphics settings for higher performance. However, when a game is CPU-bound, lowering the graphics settings does very little to increase performance. Open-world games and ray tracing tend to put more load on your CPU. A good example of a CPU-bound game is Gotham Knights. In theory, since both the Series X and S have the same CPU, the Series S should have no issue running current-gen games as long as devs lower the graphics settings or resolution.
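A toy model makes the asymmetry obvious (the millisecond numbers are made up purely for illustration):

```cpp
#include <algorithm>
#include <cstdio>

// Frame time is gated by whichever processor finishes last. GPU cost
// scales roughly with pixel count; CPU cost (simulation, draw-call
// submission, physics) does not change with resolution.
double FrameMs(double cpuMs, double gpuMsAtNative, double pixelScale)
{
    return std::max(cpuMs, gpuMsAtNative * pixelScale);
}

int main()
{
    // GPU-bound: halving the pixel count nearly halves frame time.
    std::printf("GPU-bound: %.0f ms -> %.0f ms\n",
                FrameMs(8.0, 20.0, 1.0), FrameMs(8.0, 20.0, 0.5));
    // CPU-bound (the Gotham Knights case): the same change does nothing.
    std::printf("CPU-bound: %.0f ms -> %.0f ms\n",
                FrameMs(20.0, 12.0, 1.0), FrameMs(20.0, 12.0, 0.5));
}
```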
 

Crayon

Member
It's clearly a huge deal for so many people to get their "told ya so" sentiment in whenever their console wins the latest Special Olympics event of PS5 vs. Xbox Series X (titans of technology and computing power).

The gods themselves tremble before the technological might of NECKZ JEN.
 
This is pretty much what we all knew.
Sony's big push was time-to-triangle. Keeping the same CU count as the PS4 Pro and having very mature tools made it a lot easier to get more of its potential out early on.

MS is relying on a lot of tech that hasn't even been used by any devs yet, such as Mesh Shaders and Sampler Feedback Streaming.

Will be interesting to see how multiplatform games compare at the end of the gen.
 

S0ULZB0URNE

Member
[who dat? GIF]
[Robin Williams "What Year Is It?" GIF]


Gamingbolt?

Gamingbolt.
 

azertydu91

Hard to Kill
This is pretty much what we all knew.
Sony's big push was time-to-triangle. Keeping the same CU count as the PS4 Pro and having very mature tools made it a lot easier to get more of its potential out early on.

MS is relying on a lot of tech that hasn't even been used by any devs yet, such as Mesh Shaders and Sampler Feedback Streaming.

Will be interesting to see how multiplatform games compare at the end of the gen.
It's crazy how you have no idea what you're talking about. What you mentioned is neither new nor exclusive to MS; it just has different nomenclature under DX12... mesh and primitive shaders are basically the same, etc. But you never learn, no matter how much people tell you that you're wrong.
 

Corndog

Banned
https://gamingbolt.com/xbox-series-...s5s-but-harder-to-exploit-wrc-generations-dev



Neither the topic nor the sentiment is new, but it's always nice to hear input from third-party developers on platform comparisons. My question as it relates to the future is how far MS first party will be able to extract Series X GPU parallelism while also developing for the more popular Series S, which has significantly fewer CUs (roughly 60% fewer), not to mention a lower clock speed.
This has been asked and answered ad infinitum. The same way they do it for PC.
 

LordOfChaos

Member
I feel like most of this thread is talking about GPU cores as if they're CPU cores, where you have to program explicit multithreading or the extra ones just don't do work. It doesn't really work like that, and I maintain that applying the term "cores" to GPU groupings of ALUs was a mistake that continues to fool uninformed people.

GPUs are "embarrassingly parallel" machines, and GPU programming reflects that: the parallelism is inherent in their nature. You're not sitting there going, oh fuck, I have to make a thread for CU 51 now. If there's an issue scaling up to a higher CU count, something is bottlenecking it or not feeding it fast enough. The PS5's GPU gets simplified as weaker because people take shaders × clock speed = TFLOPS as everything they need to know, but the higher clock speed actually clocks other parts of the logic higher too, for example its pixel fill rate of 142 Gpixel/s vs. 116 on the XSX, plus the command processor logic feeding the Compute Units.
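(For the arithmetic: both GPUs carry the commonly reported 64 ROPs, so the fill-rate gap is purely clock speed: 64 × 2.23 GHz ≈ 142.7 Gpixel/s on PS5 vs. 64 × 1.825 GHz ≈ 116.8 Gpixel/s on XSX.)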

It sounds like maybe these bottlenecks are being worked around, gradually showing more of the XSX's higher peak shader performance, and the APIs, OS, and toolset are surely getting better as well. The XBO generation also showed their API was heavier despite being among the "low level" ones; maybe some of that is still going on and improving too. The PS5, for its part, has its own hardware leads as well. GFLOPS is the simplest baseline paper calculation; relying on it is like comparing CPUs by clock speed alone.
 

supernova8

Banned
WRC Generations looks no better than Dirt Rally 2.0, which came out in 2019.



Surely both the PS5 and Xbox Series X should be more than enough to run WRC Generations at a high resolution and a high framerate.

I mean, fucking seriously... rally games should look waaaaaay better than this considering they have no AI to deal with. They literally just need to render the environment and the one vehicle (no opponent vehicles, of course, since it's rally), do the physics, and that's it. I guess it's a budget issue.
 

rnlval

Member
I feel like most of this thread is talking about GPU cores as if they're CPU cores, where you have to program explicit multithreading or the extra ones just don't do work. It doesn't really work like that, and I maintain that applying the term "cores" to GPU groupings of ALUs was a mistake that continues to fool uninformed people.

GPUs are "embarrassingly parallel" machines, and GPU programming reflects that: the parallelism is inherent in their nature. You're not sitting there going, oh fuck, I have to make a thread for CU 51 now. If there's an issue scaling up to a higher CU count, something is bottlenecking it or not feeding it fast enough. The PS5's GPU gets simplified as weaker because people take shaders × clock speed = TFLOPS as everything they need to know, but the higher clock speed actually clocks other parts of the logic higher too, for example its pixel fill rate of 142 Gpixel/s vs. 116 on the XSX, plus the command processor logic feeding the Compute Units.

It sounds like maybe these bottlenecks are being worked around, gradually showing more of the XSX's higher peak shader performance, and the APIs, OS, and toolset are surely getting better as well. The XBO generation also showed their API was heavier despite being among the "low level" ones; maybe some of that is still going on and improving too. The PS5, for its part, has its own hardware leads as well. GFLOPS is the simplest baseline paper calculation; relying on it is like comparing CPUs by clock speed alone.
FYI, RBEs (Render Back Ends, which contain the color ROPs and Z ROPs) are not the only hardware for read/write I/O.

The programmer has the option of using the compute shader/TMU path for read/write I/O.



Even after applying delta color compression (DCC), be aware that any pixel fill-rate argument will be bound by external memory bandwidth.
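Quick back-of-envelope on why: writing 32-bit color at the PS5's peak ~142.7 Gpixel/s would need ~571 GB/s of write bandwidth alone, above the console's 448 GB/s of total GDDR6 bandwidth. That's exactly why compression and caching, not the ROP count, decide the fill rate you actually observe.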


Read http://www.humus.name/Articles/Persson_LowlevelShaderOptimization.pdf

Doom Eternal has extensive Async Compute usage.

Basic AMD Vega/RDNA GPU 101 design: [block diagram]

Your pixel fill rate argument is processed via the pixel engine I/O path.

AMD's async compute argument is processed via the compute engine I/O path, which includes the TMUs (texture mapping units), and this is a known workaround for ROP bottlenecks.
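At the API level, async compute just means a second queue. A minimal D3D12 sketch (device creation omitted; the queue type and creation call are real API):

```cpp
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// A COMPUTE-type queue lets compute work (denoising, particle sims, the
// ROP-bypassing compute/TMU writes discussed above) overlap with graphics
// work instead of serializing behind it on the DIRECT queue.
ComPtr<ID3D12CommandQueue> CreateAsyncComputeQueue(ID3D12Device* device)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type     = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    desc.Priority = D3D12_COMMAND_QUEUE_PRIORITY_NORMAL;

    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
    return queue;
}
```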
 

rofif

Banned
Most of the differences have come down to a frame dropped here and there, with many more being technically different but imperceptible outside of forensic scrutiny. Anything more significant has seemed to be more the game's fault than either console's. I don't even know how DF pulls views for these comparisons when every one is a big nothingburger.
Yeah, or dynamic res that can drop to 1721p on one console and 1823p on the other.
 

Arioco

Member
What are you going on about? This article has nothing to do with Mark Cerny or the Road to PS5 event. Benoit Jacquier is a dev who made a specific statement. You used it to create a fictitious claim of intent.


Dude, that's exactly what Cerny said in The Road to PS5. And you still say it has nothing to do with it?



Also, it's easier to fully use 36 CUs in parallel than it is to fully use 48 CUs. When triangles are small, it's much harder to fill all those CUs with useful work.



Timestamped at 32:55.

 

lh032

I cry about Xbox and hate PlayStation.
WRC Generations looks no better than Dirt Rally 2.0, which came out in 2019.



Surely both the PS5 and Xbox Series X should be more than enough to run WRC Generations at a high resolution and a high framerate.

I mean, fucking seriously... rally games should look waaaaaay better than this considering they have no AI to deal with. They literally just need to render the environment and the one vehicle (no opponent vehicles, of course, since it's rally), do the physics, and that's it. I guess it's a budget issue.
Either a budget issue, or it's a launch game and they didn't fully utilize the power of the console.
 

PJX

Member
https://gamingbolt.com/xbox-series-...s5s-but-harder-to-exploit-wrc-generations-dev



Neither the topic nor the sentiment is new, but it's always nice to hear input from third-party developers on platform comparisons. My question as it relates to the future is how far MS first party will be able to extract Series X GPU parallelism while also developing for the more popular Series S, which has significantly fewer CUs (roughly 60% fewer), not to mention a lower clock speed.
As a developer, our priority is to develop for the XSX when it comes to the Xbox platform, then do what we can with the XSS. Talking to a few other developers, that is their way of thinking also.
 

supernova8

Banned
As a developer, our priority is to develop for the XSX when it comes to the Xbox platform, then do what we can with the XSS. Talking to a few other developers, that is their way of thinking also.

What requirements are you bound to in terms of the performance differential between the Series X and S? Or are there none, and it just has to "be" on the Series S and run reasonably well-ish?
 

ChiefDada

Gold Member
This has been asked and answered ad infinitum. The same way they do it for PC.

And that's my concern. That would be awful. Alternatives such as GPGPU optimization would be much more interesting than the improved resolutions you typically see in a PC environment.

As a developer, our priority is to develop for the XSX when it comes to the Xbox platform, then do what we can with the XSS. Talking to a few other developers, that is their way of thinking also.

Interesting. Thanks for sharing. I'm assuming you're 3rd party?
 

PJX

Member
And that's my concern. That would be awful. Alternatives such as GPGPU optimization would be much more interesting than the improved resolutions you typically see in a PC environment.



Interesting. Thanks for sharing. I'm assuming you're 3rd party?

Yes
 
Even if the Series X is supposed to have a performance advantage, how "big" are people expecting it to be?

I think some people got caught up in what happened with One X vs. PS4 Pro and thought PS5 vs. Series X would be a repeat of that, but they couldn't have been more wrong.

Best case scenario for the Series X would be a 10-15% advantage over the PS5, but that's likely not going to happen, as the PS5 is faster in other areas like rasterisation, cache bandwidth, and pixel throughput.

As for features like SFS, which are exclusive to the Series X and desktop RDNA 2, these can easily be run in software on PS5; there is a small performance hit, but it's negligible. Mesh Shaders are just an API implementation of Primitive Shaders, and the underlying hardware for this feature is the same across Series X, PS5, and desktop RDNA 2.

The consoles will trade blows on a game-by-game basis; anyone expecting anything else is in for a rude awakening.
 

diffusionx

Gold Member
Sometimes people hear what they want to hear. The message is very clear: PS5 is easier to exploit, Series X has a higher performance ceiling. It's up to the devs and engine developers to prioritize where they want their resources.

How you read this and come to the conclusion that it states one is better than the other is beyond me.
Unfortunately for MS, there is no way that devs are actually going to deploy those resources for a platform with a significantly smaller user base and lower sales.
 

MarkMe2525

Gold Member
Unfortunately for MS, there is no way that devs are actually going to deploy those resources for a platform with a significantly smaller user base and lower sales.
If only there were another, larger MS platform that devs developed for. That way MS could define some standard template of graphical feature sets that game engine developers could implement, one that could also be shared "direct" with their "x"box platform. If only such a thing existed.
 

diffusionx

Gold Member
If only there were another, larger MS platform that devs developed for. That way MS could define some standard template of graphical feature sets that game engine developers could implement, one that could also be shared "direct" with their "x"box platform. If only such a thing existed.
Obviously MS has been doing that for a very long time. There’s always going to be a gap between a PC running a regular OS and a console.

Game engines don't look at platform sales and say "nah, I'm not gonna scale to this platform"

😅
I am not sure you realize this, but game engines don't code and optimize themselves. Yet.
 