
HW| Does the Xbox Series X really have 12 teraflops?

Bernoulli

M2 slut
I was wondering why devs say things like this about the Series X. Shouldn't it be easier to build games for this machine, since it has more power? Or is Xbox limiting the hardware in software, not letting it hit its full 12-teraflop potential?

Especially since Microsoft uses the same tools as PC with DirectX, it should be easier, and we are now almost 3 years after launch.

Or are the 12 teraflops the maximum power, with 2 of them allocated to the UI or something?

If we have an RTX 3060 and a 3070, the second should just perform better without any additional optimisation needed, or isn't that the case?

Can any tech experts explain why?




Full specs below, in case somebody can spot something that could cause this:

PROCESSOR​

CPU. 8x Cores @ 3.8 GHz (3.66 GHz w/ SMT) Custom Zen 2 CPU
GPU. 12 TFLOPS, 52 CUs @ 1.825 GHz Custom RDNA 2 GPU
SOC Die Size. 360.45 mm²
Process. 7nm Enhanced

MEMORY & STORAGE​

Memory. 16 GB GDDR6 w/ 320-bit-wide bus
Memory Bandwidth. 10 GB @ 560 GB/s, 6 GB @ 336 GB/s
Internal Storage. 1 TB Custom NVMe SSD
I/O Throughput. 2.4 GB/s (raw), 4.8 GB/s (compressed, with custom hardware decompression block)
Expandable Storage. 1 TB Seagate Storage Expansion Card for Xbox Series X|S, matching internal storage exactly (sold separately). Support for USB 3.1 external HDD (sold separately).
 

Gaiff

Member
2 and a half years later, people are still stuck on FLOPs, huh? It's one metric. Trying to understand the entirety of a system's performance based on that alone is a fool's errand. Some Crytek engineer said that back in like 2019 (or 2020?) and got shat on. Turns out he was right.
 

Bernoulli

M2 slut
2 and a half years later, people are still stuck on FLOPs, huh? It's one metric. Trying to understand the entirety of a system's performance based on that alone is a fool's errand. Some Crytek engineer said that back in like 2019 (or 2020?) and got shat on. Turns out he was right.
Even Cerny said it's dangerous to rely on that.

But when you are on the same architecture, it helps to compare, for example, a 3060 vs a 3080.

It doesn't make sense that a 3080 needs more work, or performs worse, than the 3060.
 

dotnotbot

Member
[The O.C. GIF]
 

M1chl

Currently Gif and Meme Champion


Enough already. TFLOPS are the theoretical maximum rate of crunching floating-point numbers. Generally that means a GPU with more FLOPS will put out more performance, but it is not a guarantee.

Especially when we have this elephant in the room that is DirectX, and a situation where PC code compiles 100% as-is (without using the Xbox-specific APIs), which is a recipe for disaster.

Devs still haven't really grasped DirectX 12, so I don't expect them to fully utilize the given HW/API. On PlayStation, you simply have no other choice. It is more of a Windows vs Apple software mentality.
 

M1chl

Currently Gif and Meme Champion
Is it the split memory that makes Xbox harder to optimize? I think we have heard that from devs before, haven't we?
No, that isn't it; it is not really split in a real sense. The compiler, from what I have had the pleasure of seeing, does a really good job of separating concerns (meaning GPU/CPU). It is obviously a drawback, but not necessarily big enough to be the main reason why Xbox is lagging.
 

RoboFu

One of the green rats
Think of it as a semi-truck versus a small 6-cylinder sports car.

The truck has more power with a slower engine; the sports car has less power but is lighter with higher revs.

Xbox has more TF, but the overhead is greater and the clocks are slower. PS5 has less raw power but is more efficient, with higher clocks.
 

Topher

Gold Member
No, that isn't it; it is not really split in a real sense. The compiler, from what I have had the pleasure of seeing, does a really good job of separating concerns (meaning GPU/CPU). It is obviously a drawback, but not necessarily big enough to be the main reason why Xbox is lagging.

Gotcha. So it is more about the APIs being used like you mentioned in your first post? Folks used to talk about the tools being used. Do you think that is still an issue as well?
 

diffusionx

Gold Member
If the XSX has any issues, I bet it is related to the two different RAM speeds. I am sure MS had reasons to do this, but it's undoubtedly easier to work with PS5 where you have one RAM speed that is right in the middle between the XSX speeds.
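For what it's worth, the two figures come from bus width rather than from different chip speeds: 10 GB of the 16 GB is interleaved across all ten GDDR6 chips (the full 320-bit bus), while the remaining 6 GB sits only on the six 2 GB chips. A rough sketch of the arithmetic, assuming the commonly reported 14 Gbps GDDR6 modules:

```python
# Rough bandwidth math for the Series X memory layout.
# Assumes 14 Gbps GDDR6 across ten 32-bit chips, as commonly reported.

def bandwidth_gb_s(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits * gbps_per_pin / 8  # gigabits/s -> gigabytes/s

fast_pool = bandwidth_gb_s(320, 14)  # 10 GB striped across all ten chips
slow_pool = bandwidth_gb_s(192, 14)  # 6 GB living only on the six 2 GB chips
ps5_pool  = bandwidth_gb_s(256, 14)  # PS5's single pool, for comparison

print(fast_pool, slow_pool, ps5_pool)  # 560.0 336.0 448.0
```

So the PS5's single 448 GB/s pool really does sit between the two Series X figures, which is presumably what makes it simpler to budget for.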
 

Codes 208

Member
Even Cerny said it's dangerous to rely on that.

But when you are on the same architecture, it helps to compare, for example, a 3060 vs a 3080.

It doesn't make sense that a 3080 needs more work, or performs worse, than the 3060.
More like comparing a 3060 to a 3070, with both being custom third-party alterations rather than cards straight from the manufacturer, since even though they use the same base architecture, MS and Sony went separate ways with optimization: Sony focused on higher clock speed, MS on GPU brute force.

But the way I like to see it: remember the scene from DBZ when Trunks bulked up and couldn't hit Cell because, even though he was physically stronger, he was much slower? That seems to be the case here.

Well, that and it seems the PS5 is just easier on the coding side to optimize for; it has been simple since the PS4 days, while MS has been playing catch-up on that front.
 

Honey Bunny

Member
I haven't kept up with the Digital Foundry comparison stuff. What percentage of third-party games are better on PS5 vs Series X?
 

01011001

Banned
How is it that no one just accepts the simplest and most logical answer?

Publishers sell fewer copies on Xbox.
Publishers therefore put less time into Xbox versions.
Less time spent optimising, with less motivation to optimise = worse performance.

In a world where games exist that literally ran worse on One X than on PS4 Pro, it shouldn't be hard to see how, with the consoles now even closer in power, this would happen from time to time.

Additionally, developers simply don't care.
If a game happens to run 5% faster on one system than the other, they don't give a shit as long as it reaches what they deem acceptable.
If they run a benchmark that averages 58.5fps and their internal goal was 57fps, they won't bother to further optimise the game on one system over the other, as long as both systems are above 57fps average in their internal tests.
So to them, one system running at 58.5fps and the other at 59.8fps is good enough.

Furthermore, it's also very obvious by now that there is sometimes missing communication between the various people working on different versions of the game.
And we can use examples of that which have nothing to do with performance, just to show it's not a hardware issue.
The deadzones in the RE4 remake, to this day, are not the same on PC, Xbox and PS5, for absolutely no reason... they have even patched the deadzone on Xbox TWICE now and there is still a discrepancy... but at least the shape of the deadzone now matches the PS5, I guess 🤷

The fact is, these two consoles are closer than any pair of competing consoles before them.
Differences here are almost always down to the time spent on each version, and the developers simply being way more lenient than people think.

Both of which are shown by how quickly performance sometimes improves with post-launch patches.
Similarly, the post-launch patches for TLOU on PC showed that 8GB GPUs are just fine if the game actually uses memory correctly... nothing prevented the game from looking good on 8GB cards other than the developers initially shipping a bad port.
 

M1chl

Currently Gif and Meme Champion
Gotcha. So it is more about the APIs being used like you mentioned in your first post? Folks used to talk about the tools being used. Do you think that is still an issue as well?
Tooling is a lot of things, mostly things that have nothing to do with what the programmer does and more to do with the so-called "performance stack", which exists in these highly abstracted APIs where you can't make a direct call to the GPU/CPU; everything has to go through code written by MS themselves. So if we take the simple example of creating and allocating a variable, it isn't a system call but an API call, and you can't influence what happens behind that "wall". That makes it really important for them not to waste precious resources. A good example is for loops in Python, which are something like 400x slower than in Rust/C, because you aren't really executing them in place; you are calling into another program that manages them for you.
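To make the overhead point concrete, here is a minimal sketch of the idea (the exact factor depends on the machine and workload, so treat the 400x figure as a ballpark):

```python
# Minimal sketch of abstraction overhead: a pure-Python loop vs the same sum
# handed off to native code (NumPy). The exact ratio varies by machine; the
# point is only that every layer between you and the hardware has a cost.
import time
import numpy as np

data = np.random.rand(10_000_000)

start = time.perf_counter()
total = 0.0
for x in data:          # every iteration goes through the interpreter
    total += x
python_loop = time.perf_counter() - start

start = time.perf_counter()
total = data.sum()      # one call into optimized native code
native_sum = time.perf_counter() - start

print(f"pure Python: {python_loop:.3f}s  NumPy: {native_sum:.3f}s  "
      f"ratio: {python_loop / native_sum:.0f}x")
```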


By my own account: we were developing KC:D from 2015 to 2018, the Xbox One X was coming, and it was a fucking mess. It constantly threw errors like "memory can't be allocated", "this feature is not on target HW" and so on. From the release of the Xbox One X, it took them more or less two years to reach "full operational capability", when there was only about a year left until the XSX.

So when I said "tools", this is what I meant.

But sure, we can continue: the profiling suite for the Visual Studio GDK is broken, and that's why we so often see higher resolutions on Xbox while it cannot maintain its target FPS. The XSX devkit is more powerful than the retail unit, but the profiler gives zero fucks about that. The Series S, on the other hand, has a pretty conservative estimate, and in general its retail units perform better, which is why you often see games on XSS without drops but at eye-straining resolutions. We could go on, but I hope you get a feel for the situation.
 

kikkis

Member
Think of it as a semi-truck versus a small 6-cylinder sports car.

The truck has more power with a slower engine; the sports car has less power but is lighter with higher revs.

Xbox has more TF, but the overhead is greater and the clocks are slower. PS5 has less raw power but is more efficient, with higher clocks.
I think more-but-slower lawnmowers vs fewer-but-faster lawnmowers is a better analogy. It takes more planning to utilize the extra lawnmowers, but the job gets done faster if you do.
 

Topher

Gold Member
Tooling is a lot of things, mostly things that have nothing to do with what the programmer does and more to do with the so-called "performance stack", which exists in these highly abstracted APIs where you can't make a direct call to the GPU/CPU; everything has to go through code written by MS themselves. So if we take the simple example of creating and allocating a variable, it isn't a system call but an API call, and you can't influence what happens behind that "wall". That makes it really important for them not to waste precious resources. A good example is for loops in Python, which are something like 400x slower than in Rust/C, because you aren't really executing them in place; you are calling into another program that manages them for you.

By my own account: we were developing KC:D from 2015 to 2018, the Xbox One X was coming, and it was a fucking mess. It constantly threw errors like "memory can't be allocated", "this feature is not on target HW" and so on. From the release of the Xbox One X, it took them more or less two years to reach "full operational capability", when there was only about a year left until the XSX.

So when I said "tools", this is what I meant.

But sure, we can continue: the profiling suite for the Visual Studio GDK is broken, and that's why we so often see higher resolutions on Xbox while it cannot maintain its target FPS. The XSX devkit is more powerful than the retail unit, but the profiler gives zero fucks about that. The Series S, on the other hand, has a pretty conservative estimate, and in general its retail units perform better, which is why you often see games on XSS without drops but at eye-straining resolutions. We could go on, but I hope you get a feel for the situation.

Good stuff. As a non-gaming developer, I use Visual Studio 2022 every day. I've always been a huge fan of that IDE, but for me, VS 2022 has been a buggy mess. I'm curious if you've experienced that and do you think the profiling issue is related?
 

M1chl

Currently Gif and Meme Champion
Good stuff. As a non-gaming developer, I use Visual Studio 2022 every day. I've always been a huge fan of that IDE, but for me, VS 2022 has been a buggy mess. I'm curious if you've experienced that and do you think the profiling issue is related?
Are you doing C#/.NET? I have a lot of comments about that too, but the only thing that comes to mind is: why not Go? It is such a better language. Hell, Rust is probably easier to manage in big repos, since you don't have partial classes and all that wild abstraction, which makes C# even less readable than C++.

Anyway, if it makes a buck, who cares; this is just rambling.

Yeah, it gets worse every year, but these GDK things are plug-ins that aren't developed by the VS team, so it would be unfair to call it a VS problem; it is simply something Xbox Studios suck at.
 

DeepEnigma

Gold Member
Gotcha. So it is more about the APIs being used like you mentioned in your first post? Folks used to talk about the tools being used. Do you think that is still an issue as well?
Could be a DX12 thing and the layers needed for the "easy PC porting." Look how taxing and shit it is compared to DX11 in the PC space. Vulkan is eating its lunch in performance more often than not.
 

ToTTenTranz

Banned
I was wondering why devs say things like this about the Series X. Shouldn't it be easier to build games for this machine, since it has more power? Or is Xbox limiting the hardware in software, not letting it hit its full 12-teraflop potential?

Especially since Microsoft uses the same tools as PC with DirectX, it should be easier, and we are now almost 3 years after launch.

Or are the 12 teraflops the maximum power, with 2 of them allocated to the UI or something?

If we have an RTX 3060 and a 3070, the second should just perform better without any additional optimisation needed, or isn't that the case?

Can any tech experts explain why?




Full specs below, in case somebody can spot something that could cause this:

PROCESSOR​

CPU. 8x Cores @ 3.8 GHz (3.66 GHz w/ SMT) Custom Zen 2 CPU
GPU. 12 TFLOPS, 52 CUs @ 1.825 GHz Custom RDNA 2 GPU
SOC Die Size. 360.45 mm²
Process. 7nm Enhanced

MEMORY & STORAGE​

Memory. 16 GB GDDR6 w/ 320-bit-wide bus
Memory Bandwidth. 10 GB @ 560 GB/s, 6 GB @ 336 GB/s
Internal Storage. 1 TB Custom NVMe SSD
I/O Throughput. 2.4 GB/s (raw), 4.8 GB/s (compressed, with custom hardware decompression block)
Expandable Storage. 1 TB Seagate Storage Expansion Card for Xbox Series X|S, matching internal storage exactly (sold separately). Support for USB 3.1 external HDD (sold separately).



You're mistaking "TFLOPS" for "[gaming] performance".

The Series X has 52 RDNA2 Compute Units running at 1.825 GHz.
Each RDNA2 Compute Unit (CU) does 64 Multiply-Add (MADD) 32-bit floating point (FP32) operations per clock: 52 * 64 = 3,328 FP32 MADDs per clock.
But a MADD is actually two operations, a multiply and an add, so one MADD counts as two floating point operations: 3,328 FP32 MADDs per clock = 6,656 FP32 operations per clock.

The clock is 1.825 GHz, so 6,656 * 1,825,000,000 Hz = 12,147,200,000,000 FP32 operations per second, i.e. 12.147 TFLOPS FP32.
There's no magic here, just math.
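Written out as code, the same spec-sheet arithmetic (a sketch of the calculation above, not a measurement):

```python
# Spec-sheet math behind the Series X's 12.15 TFLOPS figure.
cus = 52                 # RDNA2 Compute Units
madd_per_cu_clock = 64   # FP32 multiply-adds issued per CU per clock
flops_per_madd = 2       # one MADD = one multiply + one add
clock_hz = 1.825e9       # 1.825 GHz

flops = cus * madd_per_cu_clock * flops_per_madd * clock_hz
print(f"{flops / 1e12:.3f} TFLOPS FP32")  # 12.147
```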


Gaming performance isn't really measured on a single global scale. It depends on the software development ecosystem and its optimization, engine optimization, game optimization, etc. The Series X has higher potential compute throughput than the PS5, but it also has a lower pixel fillrate and a lower triangle setup rate, for example.
The people expecting the Series X to simply perform >20% better than the PS5, like the folks at Digital Foundry, were simply being ignorant of how hardware specs relate to actual performance.
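As a rough illustration of the fillrate point (a sketch assuming the commonly reported 64 ROPs on each console; these are theoretical peaks, not measured throughput):

```python
# Back-of-the-envelope pixel fillrate: ROP count * clock.
# Assumes the commonly reported 64 ROPs on each console.

def gpixels_per_s(rops: int, clock_ghz: float) -> float:
    return rops * clock_ghz

print(f"Series X: {gpixels_per_s(64, 1.825):.1f} Gpixels/s")  # ~116.8
print(f"PS5:      {gpixels_per_s(64, 2.23):.1f} Gpixels/s")   # ~142.7, despite fewer TFLOPS
```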
 

Mr.Phoenix

Member
Many have said it in this thread already, and many have said it many times before in other threads...

The TF number is just one piece of the puzzle. Say, for instance, there are 7 steps a platform takes to render each frame; the TF figure is probably only involved in 3 of those steps. And even then, how efficiently it is used needs to be considered.

What we are seeing now (and some of us saw it coming three years ago), and mind you, this isn't only affecting Xbox but the PC too, is that the remaining (theoretical) 4 steps used to be either so weak on consoles that the bulk of the heavy lifting came down to how many TFs the platform had, or, on the PC side, could just be brute-forced. That is not the case anymore: those 4 other steps are now so fine-tuned and fast that they create a bottleneck on other platforms. A toy model of that bottleneck argument is sketched below.
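If only part of the frame scales with compute throughput, extra TFLOPS buy less than the raw numbers suggest. All the stage names and millisecond costs below are made up purely for illustration:

```python
# Toy model: only some frame stages scale with FP32 throughput.
# Stage names and millisecond costs are invented for illustration only.
frame_ms = {
    "geometry": 2.0, "shading": 6.0, "post": 2.0,       # scale with compute
    "raster": 3.0, "io_streaming": 2.0, "other": 1.5,   # do not
}
compute_stages = {"geometry", "shading", "post"}
speedup = 12.15 / 10.28  # ~18% more theoretical FP32 throughput

baseline = sum(frame_ms.values())
faster = sum(t / speedup if s in compute_stages else t for s, t in frame_ms.items())
print(f"{baseline:.1f} ms -> {faster:.1f} ms ({baseline / faster:.2f}x overall)")
# ~18% more compute shows up as only ~1.10x on the whole frame in this toy case.
```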

The stuff you can do with TFs still runs great, and having more in that regard is always better; it's just everything that comes before and after it that is causing the problem.
 