
Digital Foundry on the XSX teraflops advantage: It's kinda all blowing up in the face of Xbox Series X

Yeah, but I'm not surprised, given the OP trying to misrepresent DF.

On paper there is an advantage, and that's what they were looking for. Just like on paper there is a huge SSD advantage for the PS5, but in the real world it doesn't amount to much difference.
Devs have to use the SSD speed to full effect, and they won't do this for cross-platform games; they will use the slower SSD speed for both machines.
 

AngelMuffin

Member
[image]
 

Solidus_T

Member
Pretty funny, since DF helped push this whole narrative that the Series X would greatly outperform the PS5 because of its TFLOP compute capability. You can be sure they had the blessing of Microsoft and Xbox while they did it.
 
DF are behind the times; they should have moved to measuring in GameCubes years ago.

 

JimboJones

Member
Devs have to use the SSD speed to full effect, and they won't do this for cross-platform games; they will use the slower SSD speed for both machines.
Unless you're a developer working on a multi-platform game, you will have to forgive me for being very skeptical of that statement.
 

Mokus

Member
Unless you're a developer working on a multi-platform game, you will have to forgive me for being very skeptical of that statement.
I'm not a developer, but maybe you can follow my explanation. Last-gen game engines that weren't updated can't fully use the SSD speed, because the data goes through the processor by design. But the processor (even with the big clock-speed bump) can't process the data as fast as it can receive the stream from the new SSDs, unless you throw all the cores at this task alone, but then you don't have a game. That is why a specialized I/O block was created for the SSDs alone in the new consoles, to take full advantage of the increased speeds. But you need to update the memory management in the game engines, and from what I understood that is not easy for an ongoing project. For example, even the much-anticipated God of War Ragnarok still uses the old method to load data into RAM; that's why it has loading screens on the PS5.
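The CPU-as-bottleneck argument above can be sketched as a toy throughput model. All numbers here are illustrative assumptions (a PS5-class raw read speed, a guessed per-core decompression rate), not measured figures:

```python
# Toy model of the streaming argument: illustrative numbers only, not measurements.
RAW_SSD_GBPS = 5.5          # assumed compressed read speed of a PS5-class NVMe drive
CPU_CORE_DECOMP_GBPS = 0.5  # assumed decompression throughput of one CPU core
TOTAL_CORES = 8

def old_engine_stream_rate(cores_spent_on_io):
    """Legacy path: every byte is decompressed on CPU cores, so the
    effective stream rate is capped by decompression throughput."""
    return min(RAW_SSD_GBPS, CPU_CORE_DECOMP_GBPS * cores_spent_on_io)

def dedicated_io_stream_rate():
    """New-console path: a hardware I/O block decompresses inline,
    so the CPU cap disappears and the game keeps its cores."""
    return RAW_SSD_GBPS

# Spending 2 of 8 cores on I/O only sustains 1.0 GB/s under these assumptions...
print(old_engine_stream_rate(2))            # 1.0
# ...and even all 8 cores can't saturate the drive, leaving none for the game.
print(old_engine_stream_rate(TOTAL_CORES))  # 4.0
print(dedicated_io_stream_rate())           # 5.5
```

Under this (crude) model you either accept a fraction of the drive's speed or burn the whole CPU on decompression, which is the trade-off the dedicated I/O hardware removes.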
 
The GitHub leak was real and based on prototypes; right after the PS5's official specs came out, some people in the media confirmed that.

I'm actually talking about people's interpretation, not the validity of the leak.

That's where you made your mistake.
 

twilo99

Member
I'm not a developer, but maybe you can follow my explanation. Last-gen game engines that weren't updated can't fully use the SSD speed, because the data goes through the processor by design. But the processor (even with the big clock-speed bump) can't process the data as fast as it can receive the stream from the new SSDs, unless you throw all the cores at this task alone, but then you don't have a game. That is why a specialized I/O block was created for the SSDs alone in the new consoles, to take full advantage of the increased speeds. But you need to update the memory management in the game engines, and from what I understood that is not easy for an ongoing project. For example, even the much-anticipated God of War Ragnarok still uses the old method to load data into RAM; that's why it has loading screens on the PS5.

Wait, can they use the SSD in the XSS in any way to aid the RAM situation? Is that what they are trying to do with Baldur's Gate...
 

Mokus

Member
Wait, can they use the SSD in the XSS in any way to aid the RAM situation? Is that what they are trying to do with Baldur's Gate...
It's one of the possibilities, and it would be the best one. This method would help push the graphical settings even further on the PS5 and Xbox Series X, since it would "free up" some RAM (and CPU).
 

Lysandros

Member
but most of the time in GPU-bound scenarios I have seen Xbox performing a little better than PS5. At least in Digital Foundry's analysis.
Fair, but that is just your perception and interpretation based on a single source. Everyone is free to draw their own conclusions, no problem there, but this doesn't establish a fact.
 

PaintTinJr

Member
Am I seeing that right? The shadow area on the rock looks a bit more rounded instead of that hard edge on the Series consoles.
No, I don't think you are. It definitely looks like more geometry on display, or different normal mapping and texture use between the two versions in that comparative shot.

(When zoomed in) in the XSX shot, the brickwork on the far-left wall looks pretty flat, as though a low-resolution bump map has been used, whereas the same wall in the PS5 shot looks like a higher-quality texture and, at a minimum, a decent normal map that makes the bricks look like they have geometry. But given how low-polygon all the geometry looks, it is hardly playing to Nanite's strengths on PS5 and could easily be done the same on last-gen consoles, just at lower resolution.

Very unimpressive use of both consoles and UE5 tech IMO, but no surprise given the game's focus is on bagging high player counts on smartphones for MTX.
 

twilo99

Member
It's one of the possibilities, and it would be the best one. This method would help push the graphical settings even further on the PS5 and Xbox Series X, since it would "free up" some RAM (and CPU).

It sounds like a lot of work and it probably needs to be implemented early in the development process, but it does make sense.
 

Killer8

Member
It's becoming difficult to take their analysis seriously. Reminds me of the time they said they'd stop counting pixels because it's not useful in the age of image reconstruction, yet they still use it as a point of comparison.
 

twilo99

Member
It's becoming difficult to take their analysis seriously. Reminds me of the time they said they'd stop counting pixels because it's not useful in the age of image reconstruction, yet they still use it as a point of comparison.

We went through something similar in photography.
 

PaintTinJr

Member
Look, before the specs were even announced, most Sony guys here were hyping up teraflops.
But maybe that demonstrates the real problem.

PlayStations have all been very balanced in their CPU, geometry processing and pixel processing. A quick comparison of the PS2 and the OG Xbox shows the latter's GPU dropped the ball on z-buffering and fill rate: despite a two-year advantage in being newer and having similar TFLOPS to the Emotion Engine, it had less than half the pixel rate of the PS2's GS. And it's the same story with the X1X: the Pro had a better balance of TFLOPS to fill rate despite being two years older and having fewer FP32 TFLOPS, but more FP16 half-precision TFLOPS.

So PlayStation customers can sort of rely on the TF number, as it is always in balance, or, in the PS4, Pro and PS5's case, on the FP16 half-TF number. Nintendo are quite similar too; the Cube was in perfect balance. It had more FLOPS than the PS2 and OG Xbox, but at the expense of older fixed-function hardware T&L, and despite comparable real-world fill rate to the OG Xbox, it also had a hardware z-buffer, meaning it was more effective at saving fill rate from overdraw. So IMO the outlier is just Xbox and how they arrive at their specs, where the FLOPS number is never in balance with the rest of the system's specs for gaming.
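The balance argument for the current pair of machines can be checked with back-of-the-envelope RDNA 2 arithmetic, using the commonly cited spec-sheet numbers (36 CUs at 2233 MHz and 64 ROPs for PS5, 52 CUs at 1825 MHz and 64 ROPs for XSX). These are theoretical peaks, assuming 64 FP32 ALUs per CU and one pixel per ROP per clock:

```python
def fp32_tflops(cus, mhz):
    # RDNA 2 peak FP32: 64 shader ALUs per CU, 2 FLOPs per clock via fused multiply-add.
    return cus * 64 * 2 * mhz / 1e6

def pixel_fill_gpix(rops, mhz):
    # Peak pixel fill rate: one pixel per ROP per clock.
    return rops * mhz / 1e3

ps5_tf, ps5_fill = fp32_tflops(36, 2233), pixel_fill_gpix(64, 2233)
xsx_tf, xsx_fill = fp32_tflops(52, 1825), pixel_fill_gpix(64, 1825)

print(f"PS5: {ps5_tf:.2f} TF, {ps5_fill:.1f} Gpix/s")  # ~10.29 TF, ~142.9 Gpix/s
print(f"XSX: {xsx_tf:.2f} TF, {xsx_fill:.1f} Gpix/s")  # ~12.15 TF, ~116.8 Gpix/s
```

With the same ROP count on both, the clock difference means the 12 TF machine actually has the lower peak fill rate, which is the kind of imbalance the post is pointing at.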
 

SlimySnake

Flashless at the Golden Globes
So IMO the outlier is just Xbox and how they arrive at their specs, where the FLOPS number is never in balance with the rest of the system's specs for gaming.
I thought the Xbox 360's 250 GFLOPS GPU was very well balanced by its Xenon processor and 512 MB of unified VRAM, especially for a $299 console. It was the PS3 that was served a dud of a GPU, with bottlenecks everywhere and Kutaragi's ridiculous decision to split the VRAM.

The X1 was held back by Don's ridiculous push for Kinect and TV, but its 1.2 TFLOPS GPU wasn't exactly held back by the ESRAM and Jaguar CPUs. It was just a dated old system, not what I would call unbalanced.

I had no idea the OG Xbox was lacking in fill rate, because it was running games the PS2 simply couldn't run. Doom 3 and Half-Life 2 were never ported to the PS2, and by the end of the gen it was even running some games at 720p.

I think this is the first time the Xbox team has created an unbalanced console. Even the X1X got a big RAM increase to 12 GB (where the PS4 Pro stayed at 8 GB), and they made sure to give the bandwidth a massive increase as well, resulting in several games running at native 4K when the PS4 Pro had to settle for 4K CB or 1440p. The XSX is just a unique mess because of their insistence on hitting 12 TFLOPS at any cost.
 

Fafalada

Fafracer forever
It was the PS3 that was served a dud of a GPU, with bottlenecks everywhere and Kutaragi's ridiculous decision to split the VRAM.
The PS3 ended up with the RAM split after the addition of the NVIDIA GPU; originally it was unified (well, the same way the 360 and PS2 were unified).

I had no idea the OG Xbox was lacking in fill rate, because it was running games the PS2 simply couldn't run. Doom 3 and Half-Life 2 were never ported to the PS2, and by the end of the gen it was even running some games at 720p.
RAM and the HDD were the biggest differentiators there. It also had a substantially faster general-purpose CPU than anything else that gen, which helped a lot with PC ports as well. The GPU was constrained by unified memory (fill rate and all), but overall it was like a 'Pro/X' console compared to the rest of the competitors, so it had more than enough headroom to brute-force past any limitations.
 

Neo_game

Member
This is a very incomplete and basic listing, with only one GPU metric where the PS5 is shown to be ahead of the XSX, that being pixel fill rate, while omitting others relevant to game performance such as geometry throughput, culling rate, shared L1 cache amount/bandwidth available per CU, number of depth ROPs (twice as many on the PS5), ACEs/schedulers and architectural differences like the Cache Scrubbers. Also, the lower 336 GB/s VRAM bandwidth of the remaining 6 GB pool on the XSX (which is/can still be GPU memory whose access can impact the fast pool) is nowhere to be found. To be fair, even a basic knowledge of the underlying GPU architectures is enough to deduce most of it from the 2233 MHz frequency compared to 1825 MHz. It seems that you are stuck in 2020 on the matter.

Not really. The PS5 is faster in rasterization, but in other metrics the SX has the advantage. What you're talking about are some of the bottlenecks, which they both have, but the SX seems to have more, and hence its advantage is not much. I like the car analogy: the SX engine has more power, but it also weighs more. The PS5's power is a little less, but it is also lighter. The power-to-weight ratio of the two is much closer than just looking at engine power output would suggest.
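The split-pool point can be made concrete with a crude weighted-average model. This is a simplification under the commonly cited XSX figures (10 GB at 560 GB/s, 6 GB at 336 GB/s); it ignores contention between the pools, which share the same bus, so the real effective figure would be worse:

```python
# XSX memory layout (public figures): 10 GB "GPU-optimal" at 560 GB/s and
# 6 GB standard at 336 GB/s (about 2.5 GB of the slow pool is OS-reserved).
FAST_BW, SLOW_BW = 560.0, 336.0
PS5_UNIFORM_BW = 448.0  # PS5's single uniform pool, for comparison

def blended_bandwidth(fast_traffic_fraction):
    """Crude model: average bandwidth weighted by how much of the GPU's
    traffic hits the fast pool. Ignores bus contention, so it's an upper bound."""
    return fast_traffic_fraction * FAST_BW + (1 - fast_traffic_fraction) * SLOW_BW

print(blended_bandwidth(1.0))  # 560.0: everything stays in the fast pool
print(blended_bandwidth(0.8))  # 515.2: 20% slow-pool traffic already drags it down
print(PS5_UNIFORM_BW)          # 448.0
```

Even this optimistic model shows the headline 560 GB/s only holds while the GPU never touches the slow pool, which is one way the "more engine power, more weight" analogy cashes out in numbers.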
 

Gaiff

SBI’s Resident Gaslighter
Even though in the PC space, a fucking 12 TFLOPS RDNA 2 card would never get beaten by a 10 TFLOPS RDNA 2 card.
I could 100% build a system featuring a slower card that consistently beats another system featuring a faster card. You are correct in your assessment, but the Series X is seemingly incompetently designed. That split memory pool is just bizarre and reminiscent of the GTX 970.
 