
[GamingBolt] WRC Generations Dev: Xbox Series X’s Raw GPU Performance is Better Than PS5’s, but Harder to Exploit

ChiefDada

Gold Member
https://gamingbolt.com/xbox-series-...s5s-but-harder-to-exploit-wrc-generations-dev

As per KT Racing’s Benoit Jacquier, technical director on WRC Generations, the PS5’s faster clock speed (2.23 GHz, compared to the Xbox Series X’s 1.825 GHz) actually gives it a bit of an advantage, allowing developers to get performance out of it more directly and simply.

“Due to the faster clock speed, the PS5 gives us direct, simpler performance advantages over the previous generation,” Jacquier said in a recent interview with GamingBolt. “Although the Xbox Series X’s GPU raw performance is better, it’s harder to exploit – it requires better parallelism to exploit the 52 CUs. But I suppose devs could, in the long term, obtain better performance on the Xbox Series X.”

Neither the topic nor the sentiment is new, but it's always nice to hear input from 3rd party developers on platform comparisons. My question as it relates to the future is how far MS first party will be able to exploit Series X GPU parallelism while also developing for the more popular Series S, which has significantly fewer CUs (20 vs. 52, roughly 60% fewer), not to mention a lower clock speed.
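For reference, here is the napkin math behind the "raw performance" framing, using the publicly quoted CU counts and clocks; the 64-lane, 2-FLOPs-per-clock figures are the usual RDNA 2 assumptions, so treat this as a rough sketch rather than a benchmark:

```python
# Rough theoretical FP32 throughput: CUs * 64 ALU lanes * 2 FLOPs per clock (FMA) * clock.
# CU counts and clocks are the publicly quoted specs; real performance depends on far
# more than the vector ALUs, which is the whole point of the thread.
def tflops(cus, clock_ghz, lanes_per_cu=64, flops_per_lane=2):
    return cus * lanes_per_cu * flops_per_lane * clock_ghz / 1000.0

consoles = {
    "PS5":           (36, 2.23),    # variable clock, up to 2.23 GHz
    "Xbox Series X": (52, 1.825),
    "Xbox Series S": (20, 1.565),
}

for name, (cus, clock) in consoles.items():
    print(f"{name}: {cus} CUs @ {clock} GHz -> ~{tflops(cus, clock):.2f} TFLOPS")
```

That's where the usual ~12 vs ~10 TFLOPS talking point comes from, and it also shows why the Series S question matters: on paper it sits at roughly a third of the Series X's compute.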
 

DeepEnigma

Gold Member
Robin Williams What Year Is It GIF


Gamingbolt?

Gamingbolt.
 

sinnergy

Member
Yup, that’s what I have been saying for years 🤣 engines need to be more parallelized to get the optimal CU benefit out of the Series X. These are all old-gen engines.
 

MarkMe2525

Gold Member
The most backhanded way of saying PS5 for the win.
Sometimes people hear what they want to hear. The message is very clear: the PS5 is easier to exploit, and the Series X has a higher performance ceiling. It's up to the devs and engine developers to prioritize where they want their resources.

How you read this and come to the conclusion that it states one is better than the other is beyond me.
 

Mr Moose

Member
In general I like running the GPU at a higher frequency. Let me show you why.

[Slide from the Road to PS5 presentation: two example GPU configurations]


Here are two possible configurations for a GPU roughly at the level of the PlayStation 4 Pro. This is a thought experiment; don't take these configurations too seriously.

If you just calculate teraflops, you get the same number, but actually the performance is noticeably different, because teraflops is defined as the computational capability of the vector ALU.

That's just one part of the GPU; there are a lot of other units, and those other units all run faster when the GPU frequency is higher. At 33% higher frequency, rasterization goes 33% faster, processing the command buffer goes that much faster, the L2 and other caches have that much higher bandwidth, and so on.

About the only downside is that system memory is 33% further away in terms of cycles, but the large number of benefits more than counterbalance that.

As a friend of mine says, a rising tide lifts all boats.

Also, it's easier to fully use 36 CUs in parallel than it is to fully use 48 CUs; when triangles are small, it's much harder to fill all those CUs with useful work.
Welcome to 2020, Gamingbolt.
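Since the transcript above is a bit mangled, here is Cerny's thought experiment as napkin math. I'm recalling the slide's two configurations from memory, so treat the exact values as approximate; the point is that identical TFLOPS can hide a ~33% gap in everything that scales with frequency rather than CU count:

```python
# Cerny's thought experiment in numbers (slide values recalled from memory, so approximate):
# two hypothetical GPUs with identical theoretical TFLOPS but different clock/CU balances.
def tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000.0  # standard RDNA FP32 napkin math

configs = [
    ("36 CUs @ 1.00 GHz (narrow, fast)", 36, 1.00),
    ("48 CUs @ 0.75 GHz (wide, slow)",   48, 0.75),
]

for label, cus, clock in configs:
    print(f"{label}: ~{tflops(cus, clock):.2f} TFLOPS")

# Same ALU throughput on paper, but everything tied to clock rather than CU count
# (rasterization, command processing, cache bandwidth) favours the faster config:
print(f"clock advantage of the narrow/fast config: {1.00 / 0.75 - 1:.0%}")
```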
 

analog_future

Resident Crybaby
Is it harder to exploit because of its retarded little brother?

Yes. Of course it is.

Say you know nothing about game development without saying you know nothing about game development


Makes sense to me. I think we'll continue to see multiplatform games perform really similarly across both platforms.

What interests me the most is how platform exclusives are going to look on each platform in the years to come. There are some unique, distinctive advantages on each console, so it'll be cool to see how they end up being utilized.
 

ChiefDada

Gold Member
I mean, this has been known. Gamingbolt just loves these punchy engagement articles.

I agree. But I'm still amazed by the number of related debates that pop up in this forum over this. Maybe the more 3rd party positions we have on record, the fewer arguments there will be. One can dream...

Peace Sign GIF
 
This is the problem with not having big first party software: 3rd party developers aren’t gonna dedicate the resources to taking advantage of the system’s strengths.

It’s why Guerrilla and Naughty Dog can make something that looks as good as HFW or Part 2 on shitty PS4 hardware.
 

ChiefDada

Gold Member
This is the problem with not having big first party software: 3rd party developers aren’t gonna dedicate the resources to taking advantage of the system’s strengths.

That's the thing - I feel like Xbox first party is even more constricted. As I pondered in the OP, how far can GPU parallel compute development go when the lead SKU has only 20 CUs? Serious question for those who have a better understanding of game development.
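A rough way to picture the constraint: to keep a GPU busy you need enough wavefronts in flight to cover every SIMD several times over, and that number grows linearly with CU count. The sketch below assumes two 32-wide SIMDs per RDNA 2 CU and an illustrative occupancy target of 8 waves per SIMD (my assumption, not a hardware spec):

```python
# Illustrative sketch of why "filling" a wider GPU needs bigger dispatches.
# Assumes each RDNA 2 CU has two 32-wide SIMDs; the 8-waves-per-SIMD occupancy
# target is an arbitrary illustrative figure, not documented hardware behaviour.
WAVE_SIZE = 32
SIMDS_PER_CU = 2
WAVES_PER_SIMD_TARGET = 8

def threads_to_keep_busy(cus):
    return cus * SIMDS_PER_CU * WAVES_PER_SIMD_TARGET * WAVE_SIZE

for name, cus in (("Series S", 20), ("PS5", 36), ("Series X", 52)):
    print(f"{name} ({cus} CUs): ~{threads_to_keep_busy(cus):,} threads in flight")
```

So work sized to saturate a 20 CU part can leave a 52 CU part partly idle, which is basically the "harder to exploit" point, and why tuning around the smaller SKU doesn't automatically scale up.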
 

Crayon

Member
Most of the differences have been down to a frame dropped here and there, with many more being technically different but imperceptible outside of forensic scrutiny. Anything more significant seemed to be more the game's fault than either console's. I don't even know how DF pulls views for these comparisons when every one is a big nothingburger.
 

MarkMe2525

Gold Member
Mark Cerny laid it out in what we called the most boring presentation ever. He spoke specifically about the power narrative and about measuring power vs. design, which leads to harnessing said power. It's not people hearing what they want to hear; more so, it's Mark Cerny speaking factually.
What are you going on about? This article has nothing to do with Mark Cerny or the Roadmap to PS5 event. Benoit Jacquier is a dev who made a specific statement. You used it to create a fictitious claim of intent.
 

midnightAI

Member
Sometimes people hear what they want to hear. The message is very clear: the PS5 is easier to exploit, and the Series X has a higher performance ceiling. It's up to the devs and engine developers to prioritize where they want their resources.

How you read this and come to the conclusion that it states one is better than the other is beyond me.
As far as first party goes, Sony's studios have the higher performance ceiling :messenger_winking:
 

MarkMe2525

Gold Member
It's the same design issue that plagued the PS3 in extracting its potential. It requires workflows and processes to be executed in a more parallel manner to take advantage of the higher CU count. On top of the PS3's dual core CPU setup, it also had 8 additional co-processors.
This isn't a flaw; it's a forward-looking feature. Nvidia and AMD have been increasing GPU performance not just through clock speed (which has thermal and power draw limits) but by adding more compute cores. This has been, and will continue to be, the trajectory hardware manufacturers take to increase performance.

Just as with every GPU generation, game developers and engines will be tailored to take advantage of the increased computational resources. MS invested their resources in a GPU that fits this model, and it is far from a flawed design.

BTW, the PS3 had a single PowerPC CPU core and 7 SPEs.

As far as first party goes, Sony's studios have the higher performance ceiling :messenger_winking:
I would agree, but that's not on topic.
 

Lysandros

Member
That's a pretty basic take coming from a developer, actually, but NDAs and such, I understand to some degree. I also find the "Xbox Series X’s GPU raw performance is better" comment to be somewhat misleading. I think what he really means is that the XSX has a higher theoretical 'compute' ceiling; 'performance' alludes to final real-world throughput, which is contradictory to the context, and compute is only one facet of it. More so than parallelism, the XSX's main problem is that its higher CU count design is coupled with a slower GPU back end and front end compared to the PS5, which naturally reduces its real/whole GPU power. Thus I am not adhering to the "XSX is more powerful but harder to exploit" narrative as the main reason behind the real-world results. The PS5/XSX situation is very far from being analogous to the PS3/X360 one.
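To put rough numbers on the front-end/back-end point: fill rate and triangle setup scale with clock, not CU count. The sketch below assumes both consoles keep RDNA 2's 64 ROPs and a 4-primitive-per-clock geometry front end, which I believe matches the published specs but treat it as an assumption:

```python
# Back-end (ROP) and front-end (geometry) rates scale with GPU clock, not CU count.
# 64 ROPs and 4 primitives per clock for both parts is my assumption here.
ROPS = 64
PRIMS_PER_CLOCK = 4

for name, clock_ghz in (("PS5", 2.23), ("Xbox Series X", 1.825)):
    fill_gpix = ROPS * clock_ghz             # Gpixels/s
    tris_gtri = PRIMS_PER_CLOCK * clock_ghz  # Gtriangles/s
    print(f"{name}: ~{fill_gpix:.0f} Gpixels/s fill, ~{tris_gtri:.1f} Gtris/s setup")
```

Under those assumptions the PS5 comes out ahead on both fixed-function rates despite the lower CU count, which is the "slower front and back end" part of the argument.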
 

Lysandros

Member
That's the thing - I feel like Xbox first party is even more constricted. As I pondered in OP, how far can GPU parallel compute development go when the lead SKU is 20 CUs? Serious question for those who have a better understanding of game development.
Why are you approaching this from a CU-count-centric angle rather than per-CU efficiency and the whole architecture? As far as I am aware, there isn't really a method of developing a game for "X number of CUs".
 

Gaiff

Gold Member
That's a pretty basic take coming from a developer, actually, but NDAs and such, I understand to some degree. I also find the "Xbox Series X’s GPU raw performance is better" comment to be somewhat misleading. I think what he really means is that the XSX has a higher theoretical 'compute' ceiling; 'performance' alludes to final real-world throughput, which is contradictory to the context, and compute is only one facet of it. More so than parallelism, the XSX's main problem is that its higher CU count design is coupled with a slower GPU back end and front end compared to the PS5, which naturally reduces its real/whole GPU power. Thus I am not adhering to the "XSX is more powerful but harder to exploit" narrative as the main reason behind the real-world results. The PS5/XSX situation is very far from being analogous to the PS3/X360 one.
*Looks at post history*

Literally only console warring.
 