
HW| Does Xbox Series X really have 12 teraflops?

X-Wing

Member
It probably does; the issue isn't the hardware. If I had to guess, I'd say the issue is DirectX. That's also why so many of the poor PC ports seem to have better performance on Vulkan.
 

DeepEnigma

Gold Member
The real question is, "are teraflops the best way to judge a console's performance?"
 
Is it the split memory that makes Xbox harder to optimize? I think we have heard that from devs before, haven't we?

Split memory, lower pixel fillrate, and more OS overhead taxing the CPU (PS5 has dedicated hardware for offloading almost all I/O tasks from the CPU, and dedicated RAM as a cache buffer for the SSD; the Series systems have no dedicated RAM as a cache buffer and a weaker I/O subsystem, so the CPU has to do more of the heavy lifting) would be among the potential issues, if it's not simply isolated to GDK/DX12U quirks.

PS5's other advantages, like cache scrubbers and cache coherency engines, shouldn't be underestimated in how much they help the system's performance compared to Series X. But again, that's assuming it's even anything at the hardware level; DX12U itself may not be curated the way it should be for Xbox's GDK.



Enough already: TFLOPS are a theoretical maximum for crunching floating-point numbers. It generally means that a GPU with more FLOPS will have higher throughput, but it is not a guarantee.

Especially when we have this elephant in the room which is DirectX, and a situation where PC code compiles 100% as-is (without using those Xbox-specific APIs), so we have a recipe for disaster.

Devs still haven't been able to really grasp DirectX 12, so I don't expect them to fully utilize the given HW/API. On PlayStation, you simply have no other choice. It's more of a Windows vs Apple software mentality.

Yeah, Alex on Digital Foundry was referring to the bolded in a prior podcast himself. On PlayStation, you have one API solution for each potential thing you would want to do. On Xbox, you have a sea of solutions, some of which may actually be detrimental, even cause issues with a solution that seemed to work for another thing you just found the optimal solution for!

Xbox's GDK environment seems to have the problem of too many choices, due to utilizing DX12U, which by its nature has to offer a large range of solutions to account for various PC system configurations. I am surprised Microsoft have seemingly yet to make a curated development package of DX12U features, API calls etc. that are specific to optimal performance on Series X, and a similar one for Series S.
 

The Alien

Banned
Don't think it matters TBH.

But if it did matter, 12tflops is on the box & has been advertised as such. So pretty sure, instead of GAF speculation, we'd have an actual lawsuit by now if it wasn't the case.
 

M1chl

Currently Gif and Meme Champion
Split memory, lower pixel fillrate, and more OS overhead taxing the CPU (PS5 has dedicated hardware for offloading almost all I/O tasks from the CPU, and dedicated RAM as a cache buffer for the SSD; the Series systems have no dedicated RAM as a cache buffer and a weaker I/O subsystem, so the CPU has to do more of the heavy lifting) would be among the potential issues, if it's not simply isolated to GDK/DX12U quirks.

PS5's other advantages, like cache scrubbers and cache coherency engines, shouldn't be underestimated in how much they help the system's performance compared to Series X. But again, that's assuming it's even anything at the hardware level; DX12U itself may not be curated the way it should be for Xbox's GDK.



Yeah, Alex on Digital Foundry was referring to the bolded in a prior podcast himself. On PlayStation, you have one API solution for each potential thing you would want to do. On Xbox, you have a sea of solutions, some of which may actually be detrimental, even cause issues with a solution that seemed to work for another thing you just found the optimal solution for!

Xbox's GDK environment seems to have the problem of too many choices, due to utilizing DX12U, which by its nature has to offer a large range of solutions to account for various PC system configurations. I am surprised Microsoft have seemingly yet to make a curated development package of DX12U features, API calls etc. that are specific to optimal performance on Series X, and a similar one for Series S.
Those calls are there, but it isn't a separate package; it's an addition to the DX SDK with extra tooling. The problem is that it is not enforced. One thing would be enough: if the compiler would say "hey, the target HW isn't compatible with these calls". Another elephant in the room is that DX11-to-12 wrapper; despite pundits wanting this info to disappear, it is a real thing.
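A purely hypothetical sketch of what that kind of "enforcement" could look like in practice: none of the macro, define, or function names below exist in the real GDK or DirectX headers, they're invented here only to show how a compiler could flag calls that are known slow paths on a given console target.

```cpp
#include <cstddef>

// Hypothetical illustration only: the macro and function names below are invented
// for this example and are NOT part of the real GDK or DirectX SDK.
#if defined(TARGET_XBOX_SERIES)                      // imaginary console-target define
  #define WARN_SLOW_ON_SERIES(msg) [[deprecated(msg)]]
#else
  #define WARN_SLOW_ON_SERIES(msg)
#endif

// A generic "runs anywhere on PC" path, annotated so that building for the console
// target emits a compiler warning pointing the programmer at the preferred API.
WARN_SLOW_ON_SERIES("Generic upload path; prefer the console-specific allocator")
void UploadTextureGenericPath(const void* /*pixels*/, std::size_t /*byteCount*/) {}

void LoadAssets()
{
    // With TARGET_XBOX_SERIES defined, this call triggers a build-time warning --
    // effectively the compiler saying "the target HW isn't suited to this call".
    UploadTextureGenericPath(nullptr, 0);
}
```

Whether per-target annotations could realistically be bolted onto the DX12U headers is another question, but that's the mechanism the "compiler should complain" wish is pointing at.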
 

DenchDeckard

Moderated wildly
that game is completely broken.
that port was very clearly rushed out the door.
they even managed to wrongly align the light sources when RT shadows are enabled on Xbox...
they literally managed to not have VRR working in the 2 modes that are meant to be used with VRR displays...

like... that port is broken beyond belief
you can stand still and look straight up into the sky, and it will have drops below the SX's VRR window at 120hz, and half a second later it will be at 90fps, then back down to 20, back up to 70 etc.

It's almost as if Bethesda is sabotaging Microsoft 😅😆😄
 

diffusionx

Gold Member
The real question is, "are teraflops the best way to judge a console's performance?"
Nobody cared about muh flops until last gen, when it provided, for whatever reason, a useful shorthand to discuss the differences in performance between XBO and PS4. Turns out the PS4 had something like 40% more flops and also about that much more performance in cross-platform games. But there were other differences too, like the PS4 having much faster memory, and the XBO at launch having a garbage API and mandated Kinect usage that sapped away tons of resources. Also, the few times the XBO came out "ahead" were mostly in heavily CPU-dependent games, because the XBO had a slightly faster CPU (Unity was one, and there were maybe one or two more). When the One X and PS4 Pro came out, it was interesting because the gap in flops did not correspond to the gap in performance in the same way, but this mostly went unnoticed as I guess those were ultimately niche consoles.
 

ReBurn

Gold Member
The RTX 4090 Founders Edition is 83 TFLOPS according to Nvidia. Did that translate to 8x more framerate than XBS or PS5? 8x more resolution? Of course not. TFLOPS is one thing, but there are many other factors. You can't reduce console performance to just a TFLOP number.
And that piece of crap can't even run Jedi Survivor at a stable framerate. Horrible tech.

/s
 
I know it's getting late in the generation, but we really haven't seen what Series X can do, because of low sales & the software needing to be backed up by the Xbox One & PC user base.

Who is willing to put all their eggs in the Series X basket when it has 10 million owners at most & a lot of people will just play the game on Game Pass?
And the Xbox Series S....
 

onQ123

Member
The real question is, "are teraflops the best way to judge a console's performance?"
In my opinion compute is for when you don't know what's going to be important in the coming generation but once you have a good idea of what's needed you should add fixed function units or programmable logic to the hardware for the next console.

PS5 has the advantage in fixed function units because of the higher clocks & devs are not wasting time trying to come up with ways to use compute when most things just work now.


PS4 had extra compute relative to its fixed-function units, & you saw MM, Q-Games, Jonathan Blow & others pretty much waste a full generation trying to get the best out of compute, but no one is putting that type of time in this generation for a small reward.
 
Those calls are there, but it isn't a separate package; it's an addition to the DX SDK with extra tooling. The problem is that it is not enforced. One thing would be enough: if the compiler would say "hey, the target HW isn't compatible with these calls". Another elephant in the room is that DX11-to-12 wrapper; despite pundits wanting this info to disappear, it is a real thing.

Ah, okay. Yeah, 'enforcement' of the calls would be a better way to phrase it. A way programmers can be told that such-and-such a call is sub-optimal for the hardware target. Sometimes limiting options is actually a great thing.

I don't know a lot on the DX11 > DX12 wrapper. What's that about?

In my opinion compute is for when you don't know what's going to be important in the coming generation but once you have a good idea of what's needed you should add fixed function units or programmable logic to the hardware for the next console.

PS5 has the advantage in fixed function units because of the higher clocks & devs are not wasting time trying to come up with ways to use compute when most things just work now.


PS4 had extra compute relative to its fixed-function units, & you saw MM, Q-Games, Jonathan Blow & others pretty much waste a full generation trying to get the best out of compute, but no one is putting that type of time in this generation for a small reward.

This could be a big oof for Series X in the future if it holds true. So you're of the mind that the X's compute advantage won't manifest into much after all? That's one of the things I thought would work out well enough for it down the line, but if only a small handful of games leveraged PS4's compute advantage (some 1P, a couple 3P exclusives at most) in a targeted capacity, maybe that's a hint that leveraging compute for specific tasks has a high barrier with low payoff.

It's possible X's compute advantage could still manifest into something if/when mesh shading takes off...although even that isn't quite the win Xbox fans may've wanted to believe, since both systems are capable of utilizing mesh-style geometry pipelines; they just implement them differently.
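For what it's worth, on the DirectX 12 Ultimate side the mesh-shading path boils down to a feature-tier query plus a DispatchMesh call. This is only a minimal sketch of the DX12U half (Sony's primitive/mesh path goes through its own non-public API, so it isn't shown), and it assumes a pipeline state with mesh/amplification shaders has already been created and bound:

```cpp
#include <d3d12.h>

// Query whether the device exposes DX12U mesh shaders (Tier 1 is currently the only tier).
bool SupportsMeshShaders(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS7 options7 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS7,
                                           &options7, sizeof(options7))))
        return false;
    return options7.MeshShaderTier >= D3D12_MESH_SHADER_TIER_1;
}

// Assumes a PSO with amplification/mesh shaders is already bound on this command list.
// DispatchMesh replaces the classic input-assembler + vertex-shader front end.
void DrawMeshlets(ID3D12GraphicsCommandList6* cmdList, UINT meshletGroupCount)
{
    cmdList->DispatchMesh(meshletGroupCount, 1, 1);
}
```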
 

M1chl

Currently Gif and Meme Champion
Ah, okay. Yeah, 'enforcement' of the calls would be a better way to phrase it. A way programmers can be told that such-and-such a call is sub-optimal for the hardware target. Sometimes limiting options is actually a great thing.

I don't know a lot on the DX11 > DX12 wrapper. What's that about?
The wrapper is a translation layer: you keep your DX11 calls, wrap them in this thing, and it will output un-optimized DX12 calls, just so you can run on DX12-specific HW like Xbox (and with added RT, for example). It is open source, you can find it here: microsoft/D3D11On12: The Direct3D11-On-12 mapping layer (github.com)
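To make that concrete, here is a minimal sketch of how the layer gets used, assuming an already-created D3D12 device and command queue (error handling and library linkage omitted):

```cpp
#include <d3d12.h>
#include <d3d11on12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Wrap an existing D3D12 device/queue so legacy D3D11 rendering code keeps working
// on D3D12-only targets. Every D3D11 call made through the returned device/context
// is translated at runtime into D3D12 work on d3d12Queue -- convenient for porting,
// but the generated command streams are generic rather than tuned for the target GPU.
HRESULT CreateD3D11Wrapper(ID3D12Device* d3d12Device,
                           ID3D12CommandQueue* d3d12Queue,
                           ComPtr<ID3D11Device>& outDevice,
                           ComPtr<ID3D11DeviceContext>& outContext)
{
    IUnknown* queues[] = { d3d12Queue };
    D3D_FEATURE_LEVEL levels[] = { D3D_FEATURE_LEVEL_11_0 };

    return D3D11On12CreateDevice(
        d3d12Device,                        // underlying D3D12 device
        D3D11_CREATE_DEVICE_BGRA_SUPPORT,   // ordinary D3D11 creation flags
        levels, 1,
        queues, 1,
        0,                                  // node mask (single GPU)
        &outDevice, &outContext,
        nullptr);                           // chosen feature level not needed here
}
```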
 

DeepEnigma

Gold Member
The point I was making is that we haven't seen a dev focus on the specs of Series X; being that Series S has less memory, it limits what can be attempted by devs.
The whole "it uses smaller textures and lower resolution than big brother, so it won't be an issue, just scale down" is being proven a myth in the PC arena, where even 1080p is struggling on DX12 with 8GB for current-gen-only engines/builds. Look at Baldur's Gate 3's issues, which revolve entirely around design (in its case, split-screen).
 
The wrapper is a translation layer: you keep your DX11 calls, wrap them in this thing, and it will output un-optimized DX12 calls, just so you can run on DX12-specific HW like Xbox (and with added RT, for example). It is open source, you can find it here: microsoft/D3D11On12: The Direct3D11-On-12 mapping layer (github.com)

Oh, that....sounds bad. Like yes, it's an easy way to get DX11 code up and running on DX12-compliant devices. I also figure it's a good way to get XBO software programmed in DX11 to "just run" on Series X and S.

But it might also encourage some developers to continue using DX11 calls if they can just rely on the wrapper to port their calls to DX12 (even if unoptimized) and then maybe try optimizing the translated calls where they see fit. But there could also be many instances where DX12 (and Ultimate) has its own new calls that completely replace the old ones, and methods of handling certain functions with series of calls that differ significantly from DX11. In those cases the wrapper probably isn't cutting it, because those new calls and methodologies would be the more optimized ones, while the wrapper only dumps the translated DX11 calls as unoptimized variants compliant with DX12 & DX12U.

So yeah, I can see how that creates some major problems. It can be both a blessing and a curse.
 

jm89

Member
In the end, Cerny and other developers who said the same thing were absolutely right, but back then it was only a reason for ridicule, and even outlets like DF helped with it (there were even doubts about hardware RT on PS5... see it now in Ghostwire: Tokyo, better than on Xbox xDD). This was 2020 in the forums... now it's time to backpedal, since the PS5 has shut everyone up.

[image: the "PS5 FUD" list]


:pie_thinking:
Ah, the great PlayStation FUD list. IIRC it was sircaw who gave us this classic.
 

thatJohann

Member


Enough already: TFLOPS are a theoretical maximum for crunching floating-point numbers. It generally means that a GPU with more FLOPS will have higher throughput, but it is not a guarantee.

Especially when we have this elephant in the room which is DirectX, and a situation where PC code compiles 100% as-is (without using those Xbox-specific APIs), so we have a recipe for disaster.

Devs still haven't been able to really grasp DirectX 12, so I don't expect them to fully utilize the given HW/API. On PlayStation, you simply have no other choice. It's more of a Windows vs Apple software mentality.

In your analogy of Windows vs Apple software mentality, would Windows be Xbox and Playstation be Apple?
 

M1chl

Currently Gif and Meme Champion
Witcher 3 is this, right?
Cannot verify, but more or less any "next-gen update" will be this on PC/Xbox if they didn't already have a DX12 engine, so it is shady as fuck if I am being honest.

Oh, that....sounds bad. Like yes, it's an easy way to get DX11 code up and running on DX12-compliant devices. I also figure it's a good way to get XBO software programmed in DX11 to "just run" on Series X and S.

But it might also encourage some developers to continue using DX11 calls if they can just rely on the wrapper to port their calls to DX12 (even if unoptimized) and then maybe try optimizing the translated calls where they see fit. But there could also be many instances where DX12 (and Ultimate) has its own new calls that completely replace the old ones, and methods of handling certain functions with series of calls that differ significantly from DX11. In those cases the wrapper probably isn't cutting it, because those new calls and methodologies would be the more optimized ones, while the wrapper only dumps the translated DX11 calls as unoptimized variants compliant with DX12 & DX12U.

So yeah, I can see how that creates some major problems. It can be both a blessing and a curse.
And that's really fucking bad, because new cards aren't really optimized for DX11 or OpenGL anymore; a lot of what has been added in the last 10 years or so has gone under-utilized because devs keep using these old-ass APIs instead of rewriting their code. They are still doing Unreal 4 games because their middleware does not work on 5, and so on; it's a ton of things.
 

onQ123

Member
Ah, okay. Yeah, 'enforcement' of the calls would be a better way to phrase it. A way programmers can be told that such-and-such a call is sub-optimal for the hardware target. Sometimes limiting options is actually a great thing.

I don't know a lot on the DX11 > DX12 wrapper. What's that about?



This could be a big oof for Series X in the future if it holds true. So you're of the mind that the X's compute advantage won't manifest into much after all? That's one of the things I thought would work out well enough for it down the line, but if only a small handful of games leveraged PS4's compute advantage (some 1P, a couple 3P exclusives at most) in a targeted capacity, maybe that's a hint that leveraging compute for specific tasks has a high barrier with low payoff.

It's possible X's compute advantage could still manifest into something if/when mesh shading takes off...although even that isn't quite the win Xbox fans may've wanted to believe, since both systems are capable of utilizing mesh-style geometry pipelines; they just implement them differently.
That's not what I'm saying at all & I actually expect some devs to take advantage of compute but for the most part they will go with what just works
 
That's not what I'm saying at all & I actually expect some devs to take advantage of compute but for the most part they will go with what just works

My bad, then.

Cannot verify, but more or less any "next-gen update" will be this on PC/Xbox if they didn't already have a DX12 engine, so it is shady as fuck if I am being honest.


And that's really fucking bad, because new cards aren't really optimized for DX11 or OpenGL anymore; a lot of what has been added in the last 10 years or so has gone under-utilized because devs keep using these old-ass APIs instead of rewriting their code. They are still doing Unreal 4 games because their middleware does not work on 5, and so on; it's a ton of things.

Eventually as UE5 replaces UE4 we should see more devs shift away from DX11 and zero in on DX12 and leverage what modern GPUs are able to actually do. But that could still lead to a few more years of growing pains for devs who primarily rely on the DirectX side of things and are behind in adapting their engines and middleware to DX12U.

Apparently that might be the majority of XGS and Zenimax teams.
 

Pelta88

Member
I think one thing is for certain: the millions of posts on forums about specs that we had in the lead-up to the PS5/SX launch won't be repeated. Who'd have thought that all that power and TFLOP talk actually boiled down to...

Series X: Expensive ingredients.
PS5: Ingredients made with love

Resulting in two vastly different outcomes.
 

M1chl

Currently Gif and Meme Champion
My bad, then.



Eventually as UE5 replaces UE4 we should see more devs shift away from DX11 and zero in on DX12 and leverage what modern GPUs are able to actually do. But that could still lead to a few more years of growing pains for devs who primarily rely on the DirectX side of things and are behind in adapting their engines and middleware to DX12U.

Apparently that might be the majority of XGS and Zenimax teams.
Eventually, yeah. Here's some historical data:
[attached chart]


Almost 8 FUCKING YEARS
 

Crayon

Member
MS went full retard on the teraflops thing. Just had to have a higher number. They got stuck making a much larger chip than Sony, and they apparently lose $200 per console while Sony doesn't lose any money on a PS5. They got their engineering shit pushed in to have a bigger number, and now they have nothing to show for it but a fat bill.
 

onQ123

Member
The whole "it uses smaller textures and lower resolution than big brother, so it won't be an issue, just scale down" is being proven a myth in the PC arena, where even 1080p is struggling on DX12 with 8GB for current-gen-only engines/builds. Look at Baldur's Gate 3's issues, which revolve entirely around design (in its case, split-screen).
And when you bring in large data sets that can't be split up into smaller pieces, the difference in memory means everything would have to fit within the limits of Series S.
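As a rough illustration of that constraint, here are the commonly reported game-available memory budgets. OS reservations are approximate and have shifted over firmware revisions, so treat these as ballpark figures rather than official spec:

```cpp
// Ballpark, commonly reported figures -- not official numbers.
struct MemoryBudget { const char* console; double fastGB; double slowGB; };

const MemoryBudget kBudgets[] = {
    { "Series X", 10.0, 3.5 },  // ~13.5 GB for games: 10 GB @ 560 GB/s + ~3.5 GB @ 336 GB/s
    { "Series S",  8.0, 0.0 },  // ~8 GB for games @ 224 GB/s
    { "PS5",      12.5, 0.0 },  // ~12.5 GB unified for games @ 448 GB/s (commonly cited)
};
// A data set sized against Series X's ~13.5 GB has to shrink by roughly 40%
// to fit Series S, which is exactly the design pressure being described above.
```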
 

Three

Member
It's a theoretical max for a simplified, specific GPU/CPU task. So yeah, it has 12 TF, but that doesn't mean it's a good relative metric for real-world software performance as a complete system. The TFLOP figure can even be stated as double by counting rapid packed math, but again that means very little for real-world tasks/software across the entire system. Think of it like brake horsepower: just because a car's BHP is high doesn't mean it's the fastest around the track. So if a game has lower performance on the X, it could be any number of things that are not related to the theoretical teraflops.
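For reference, the headline numbers are just a straight multiplication of the publicly stated CU counts and clocks; a quick sketch of where "12 TF" and "10.3 TF" come from (PS5's clock is variable, so its figure assumes the 2.23 GHz maximum):

```cpp
#include <cstdio>

int main()
{
    const double lanesPerCU   = 64.0; // RDNA2 stream processors per compute unit
    const double flopsPerLane = 2.0;  // one fused multiply-add = 2 FLOPs per clock

    const double xsxTF = 52 * lanesPerCU * flopsPerLane * 1.825 / 1000.0; // ~12.15
    const double ps5TF = 36 * lanesPerCU * flopsPerLane * 2.23  / 1000.0; // ~10.28

    // FP16 "rapid packed math" doubles these on paper, but like the FP32 figure
    // it says nothing about real-world, whole-system throughput.
    std::printf("Series X peak FP32: %.2f TFLOPS\n", xsxTF);
    std::printf("PS5 peak FP32:      %.2f TFLOPS\n", ps5TF);
    return 0;
}
```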
 

Kar

Member
Mark Cerny said so himself in The Road to PS5:


At minute 32:55 - "Also, it's easier to fully use 36 CUs in parallel than it is to fully use 48 CUs (Series X has 52). When triangles are small, it's much harder to fill those CUs with useful work."

When he said that 3 years ago, many of us (including me) thought it was just damage control, but in reality it's Mark Cerny's world and we all just live in it.
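A quick back-of-the-envelope on the flip side of that quote: the fixed-function parts of the pipeline scale with clock rather than CU count. The 64-ROP and 4-triangle-per-clock figures below are the commonly reported ones, so treat the exact unit counts as assumptions rather than official spec:

```cpp
#include <cstdio>

int main()
{
    const double ps5Clock = 2.23;   // GHz, maximum boost
    const double xsxClock = 1.825;  // GHz, fixed

    // Pixel fill: ROPs x clock (64 ROPs commonly reported for both consoles).
    std::printf("PS5 fill: %.1f Gpix/s\n", 64 * ps5Clock);   // ~142.7
    std::printf("XSX fill: %.1f Gpix/s\n", 64 * xsxClock);   // ~116.8

    // Triangle setup: rasterized primitives per clock x clock (4/clk assumed for both).
    std::printf("PS5 tris: %.1f Gtri/s\n", 4 * ps5Clock);    // ~8.9
    std::printf("XSX tris: %.1f Gtri/s\n", 4 * xsxClock);    // ~7.3

    // The ~22 percent clock advantage also speeds up caches and command processing,
    // which is the part of Cerny's argument the TF number never captures.
    return 0;
}
```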
 

THE DUCK

voted poster of the decade by bots
The number of negative Xbox threads on here this week has me starting to wonder if Sony is paying people to start new negative threads at this point.
In reality, it performs right about where you'd expect it to given the specs. It's really the PS5 that reaches up, not the Series X that reaches down.
 

hinch7

Member
In the end, Cerny and other developers who said the same thing were absolutely right, but back then it was only a reason for ridicule, and even outlets like DF helped with it (there were even doubts about hardware RT on PS5... see it now in Ghostwire: Tokyo, better than on Xbox xDD). This was 2020 in the forums... now it's time to backpedal, since the PS5 has shut everyone up.

[image: the "PS5 FUD" list]


:pie_thinking:
Alex Battaglia joined the Xbox Era discord? Lol. I always thought he came across as a PC/Nvidia fanboy from his rants. Guess that kinda explains his disinterest in PS consoles.
 

Shifty1897

Member
This is like expecting a person who scored high on an IQ test to always have better grades in every subject over someone with a slightly lower IQ.
 
The number of negative Xbox threads on here this week has me starting to wonder if Sony is paying people to start new negative threads at this point.
In reality, it performs right about where you'd expect it to given the specs. It's really the PS5 that reaches up, not the Series X that reaches down.

The PS5 isn't just "reaching up", though; it's routinely outperforming the more powerful (on paper) console.

Just look at RT performance in Ghostwire between both platforms, or any number of other 3P multiplats. And that's before getting into the gulf in 1P visual showcase games. There's nothing from Xbox 1P, for example, at the level of HFW Burning Shores visually, or even base HFW. Its most impressive games visually (at the AAA level) are Hivebusters (which looks quite good), Forza Horizon 5 and Flight Simulator.

There's a very valid argument to be made that Series X is performing below technical expectations considering the on-paper specs, but there's also the potential that people expecting more simply misunderstood the paper specs, focused on only a couple of metrics, and failed to understand the other important factors in gaming hardware performance.
 