
Digital Foundry on the XSX teraflops advantage: It's kinda all blowing up in the face of Xbox Series X

Lysandros

Member
I think it does matter, and not to be a contrarian, but I do think it could be the API. Something doesn't add up about the Series X. The CPU is even faster. The RAM is faster. It could be down to the RAM being split, but I'm not sure. The PS5 isn't doing anything exotic either, other than having a higher GPU clock, and I've never seen that replicated with PC cards. If Sony really did have some magic custom hardware, they would say so.
May I remind you that the PS5 GPU is ~20% faster in fixed-function throughput/pure rasterisation, has significantly more shared L1 bandwidth per shader array, and that its caches are equipped with custom scrubbers? The CPU and RAM side of things are about a match. It's not all about the mere 100 MHz of higher CPU frequency; there are other factors to consider, like I/O processing and latencies. There is nothing wrong with XSX, it's performing as it should, being essentially on par with PS5.
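For what it's worth, the paper numbers bear this out. A back-of-the-envelope in Python using the public spec figures (36 CUs at 2.23 GHz for PS5, 52 CUs at 1.825 GHz for XSX), and assuming similar counts of fixed-function units on both chips, which is the common reading:

```python
# Public spec-sheet numbers: active CUs and peak GPU clocks.
PS5_CUS, PS5_CLK = 36, 2.23e9    # Hz
XSX_CUS, XSX_CLK = 52, 1.825e9   # Hz

def tflops(cus, clk):
    # Peak FP32 = CUs * 64 lanes * 2 ops per clock (FMA) * clock
    return cus * 64 * 2 * clk / 1e12

print(f"XSX: {tflops(XSX_CUS, XSX_CLK):.2f} TF")   # ~12.15 TF
print(f"PS5: {tflops(PS5_CUS, PS5_CLK):.2f} TF")   # ~10.28 TF
print(f"XSX compute lead: {tflops(XSX_CUS, XSX_CLK) / tflops(PS5_CUS, PS5_CLK) - 1:.0%}")  # ~18%

# Fixed-function hardware (rasterisers, primitive units, ROPs) runs per clock,
# not per CU, so with similar unit counts it scales with frequency alone:
print(f"PS5 clock lead: {PS5_CLK / XSX_CLK - 1:.0%}")  # ~22%
```

So the compute lead and the fixed-function/rasterisation lead point in opposite directions, which is the whole argument here.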
 
Last edited:

Bry0

Member
It’s not that Sony have secret sauce, they just aren’t bogging down their system with layers upon layers of sub-optimal abstraction. Sony’s consoles allow easier access “to the metal” while on Xbox the whole OS is virtualised and encrypted through their highly inefficient (relatively speaking) hypervisor.
Abstraction is not always a bad thing, especially when porting between Xbox and PC, but yeah, this certainly plays a role.
 
Just having the most horsepower in your car doesn't mean you'll win every race.

 
So much backpedalling I'm reading these weeks. It has only taken three years for many to realise and admit it: Series S was a commercial mistake and a drag, PS5 is performing very well, and teraflops by themselves are definitely not the metric; you have to look at the whole. Lots of people laughed at Cerny, but he was right, and in the end DF had to eat their words even though back in the day they doubted everything he said (they even doubted the PS5 had hardware RT). I'd like to know the reason for this change of opinion right now that a PS5 Pro is being rumoured. Do tflops no longer matter? xDD

When you talk about the "revolutionary" PS5, I suppose you are referring to its entire I/O system; that is the way forward for data transfer.
 

SkylineRKR

Member
Everyone with some knowledge should know it doesn't matter. Every developer is different, and architecture working well in tandem is more important.

For such knowledgeable guys, I always found DF strange, constantly saying the PS5 does better than expected given its lower raw power. The PS5 has some edges over the Series X where it matters, like lower overhead, and it has no weaker variant to deal with either.

Generally, I favor the PS5: installation sizes seem smaller, load times a tad faster, and it's more efficient overall.
 

StereoVsn

Member
It is a bit strange though. On paper the XSX should be more powerful given its GPU and CPU advantages. Yes, there are some memory and I/O disadvantages compared to the PS5, but the advantages should outweigh them.

I have dealt with MS dev tools for years on the non-gaming side, so I can't help but think some of their complications cost efficiency and performance. And while MS virtualization techniques do offer advantages in overall OS design and security, I can't help thinking there are some performance downsides.
 

PaintTinJr

Member
Theoretical estimations of compute have always been oversold at launch. It’s a tale as old as the gaming industry itself. At one point, early on, it did matter. And overall it still does matter but only past a certain threshold. 20% variance of total compute at the high end is not going to make a noticeable difference in the vast majority of games, especially early on. As always, power differences like this will present themselves at the end of the generation. PS2 was able to produce awesome looking games like MGS3 and Shadow of the Colossus, but it had to do tricks and still ran into performance problems. Compare that to later Xbox titles like Ninja Gaiden Black, Splinter Cell: Chaos Theory. Even Halo 2. There was a magnitude of difference between PS2’s 6.2 GFLOPS and Xbox’s 20.


It’s always about how the software and hardware interact, because limitations in one will bottleneck the other. This is why Apple produces powerful and efficient products, and Sony was right to take the same approach. Make it easy and economical to develop for your platform, and don’t necessarily just go for high end potential that most games just won’t use. And by the end of this gen, unless there are no hardware boosts in updated consoles, a 20% difference means absolutely nothing.
I don't know if that 20 GFLOPS comparison figure is true without accounting for the Pentium 3's FP units, which couldn't be pushed beyond about 20-40% utilisation on an old general-purpose single core without SMT. The headline numbers that mattered for comparison were the 6.2 GFLOPS of the Emotion Engine versus the Nvidia GPU's 7.3 GFLOPS, and the comparative fill rate of the PS2's GS versus the Nvidia GPU, which favoured the PS2 by over 2:1 (2.3 gigapixels/s vs only 932 megapixels/s), and at full 32-bit precision with a z-buffer, versus a w-buffer.
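Those fill-rate figures fall straight out of pipes × clock; a quick sanity check in Python, assuming the commonly cited configurations (16 pixel pipes at 147.456 MHz for the GS, 4 pipes at 233 MHz for the NV2A):

```python
# Peak fill rate = pixel pipelines * core clock (one pixel per pipe per clock).
gs_fill   = 16 * 147.456e6   # PS2 Graphics Synthesizer
nv2a_fill = 4  * 233e6       # Xbox NV2A GPU

print(f"GS:    {gs_fill / 1e9:.2f} Gpixels/s")   # ~2.36 Gpixels/s
print(f"NV2A:  {nv2a_fill / 1e6:.0f} Mpixels/s") # 932 Mpixels/s
print(f"ratio: {gs_fill / nv2a_fill:.1f}:1")     # ~2.5:1
```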

PC ports massively favoured the Xbox (after it arrived very late to market) because of the masses of extra RAM available, plus the HDD providing read/write for a pagefile system that extended effective RAM even further. The PS2 was very good in games that suited streaming, or in games built around it like MGS2/MGS3/PES, but that gen was really defined by the RAM available for PC ports, IMO, and by HDD I/O.
 
Last edited:

Gaiff

SBI’s Resident Gaslighter
It seemed to matter a lot when PS4 had more than XB1.
Now, with PS5 getting demolished by XSX in terms of TF, suddenly it doesn't matter anymore. What a coincidence!
For one, I don't think the Xbox One had even one numerical advantage over the PS4. The PS5 has several key aspects in which it pulls ahead.

For two, the PS4 outperformed the Xbox One in pretty much every case.

I'm sure we'd be pushing the TFlops narrative if the SX were consistently beating the PS5, but it's not the case, so evidently that angle doesn't work.
 
Last edited:
Teraflops feels like marketing speak more than anything else at this point. If teraflops were the only performance metric, the Series X would consistently outperform the PS5 in every benchmark and have better-looking games, which just doesn't happen.
 

RoadHazard

Gold Member
I mean, obviously it does matter. A 20TF GPU is gonna perform better (shade more pixels per second etc) than a 10TF GPU. But other things matter too, sometimes more, especially when the raw theoretical compute difference is as small as it is between the XSX and PS5.
 
Last edited:

Lysandros

Member
It is a bit strange though. On paper the XSX should be more powerful given its GPU and CPU advantages. Yes, there are some memory and I/O disadvantages compared to the PS5, but the advantages should outweigh them.
The PS5 has its own GPU and CPU advantages, and those aren't any less significant, contrary to the general assumption fuelled by PR and DF.
 
Last edited:

shamoomoo

Member
Theoretical estimations of compute have always been oversold at launch. It’s a tale as old as the gaming industry itself. At one point, early on, it did matter. And overall it still does matter but only past a certain threshold. 20% variance of total compute at the high end is not going to make a noticeable difference in the vast majority of games, especially early on. As always, power differences like this will present themselves at the end of the generation. PS2 was able to produce awesome looking games like MGS3 and Shadow of the Colossus, but it had to do tricks and still ran into performance problems. Compare that to later Xbox titles like Ninja Gaiden Black, Splinter Cell: Chaos Theory. Even Halo 2. There was a magnitude of difference between PS2’s 6.2 GFLOPS and Xbox’s 20.


It’s always about how the software and hardware interact, because limitations in one will bottleneck the other. This is why Apple produces powerful and efficient products, and Sony was right to take the same approach. Make it easy and economical to develop for your platform, and don’t necessarily just go for high end potential that most games just won’t use. And by the end of this gen, unless there are no hardware boosts in updated consoles, a 20% difference means absolutely nothing.
Why do people forget that the PS2 was almost two years old when the OG Xbox and GameCube came out? Those consoles should've been better by default.
 
Last edited:

SkylineRKR

Member
DF almost looks sad lol. They were too focused on TF. Better luck next time.

I mean, obviously it does matter. A 20TF GPU is gonna perform better (shade more pixels per second etc) than a 10TF GPU. But other things matter too, sometimes more, especially when the raw theoretical compute difference is as small as it is between the XSX and PS5.

Exactly, a 20% TF difference isn't going to cut it if your architecture is worse. If the Series X were 20 TF then yes, it would outperform the PS5 by brute force. But that isn't the case; the difference is small, and the PS5 has some perks where it can render things faster, so the difference is offset.

MS got what they wanted with marketing, but consumers obviously don't give a shit. It's about the games anyway.
 
Last edited:
Digital Foundry admits teraflops don't matter anymore, after three years of pushing that angle themselves.
It fits with the new Pro console rumours too: don't rely on teraflops.
Pro console rumours are relying on even more subjective figures, like RAM bandwidth, so that's not saying much.
Exactly, a 20% TF difference isn't going to cut it if your architecture is worse. If the Series X were 20 TF then yes, it would outperform the PS5 by brute force. But that isn't the case; the difference is small, and the PS5 has some perks where it can render things faster, so the difference is offset.
The architecture is mostly the same; what changes is the implementation.

And the difference is very small, so they're tripping over the bottlenecks they created on the Xbox Series S and X instead of being able to capitalise on their small advantage.

Sony bet on the right things. Probably selling more out of the gate also helped, but unified memory, data compression and a narrow/fast GPU design are clearly paying off.
 
Last edited:
The bare-bones components are what they are: 99% AMD tech.
It's the APIs and tools (yes, tools) that are making the difference here.
The development environment on the PS5 is very familiar and efficient, making it easier to get the performance out.
This is, and always has been, just as important as the raw hardware.

So yes, teraflops aren't everything, and the PS5 is showing that.
 
It seemed to matter a lot when PS4 had more than XB1.
Now, with PS5 getting demolished by XSX in terms of TF, suddenly it doesn't matter anymore. What a coincidence!

This generation the PS5 was designed very well: I/O throughput is maximised, audio is processed on its own chip, and so on. Cerny said that tflops don't really matter now, and he's not wrong.

The PS5 more often than not outperforms the Series X in direct comparisons, despite the significant on-paper difference. Cerny designed a machine that makes accessing and using its power easy.
 

ergem

Member
If teraflops don't matter, what metric should we use?
The overall architecture should be considered.

An example is the ratio of CUs per SE. Those CUs in the XSX are bandwidth starved; the wide layout only inflated the teraflops number and increased the die size (and the price of the chip).

But look at AMD's own GPUs: the ratio of CUs per SE is closer to what Sony did. Cerny designed the PS5 APU to be optimal.
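To put rough numbers on the "bandwidth starved" point, here's a sketch assuming the commonly reported die layouts (4 shader arrays on both chips, 52 of 56 CUs active on XSX, 36 of 40 on PS5) and RDNA2's 128 KB of graphics L1 per shader array:

```python
# Assumed (commonly reported) layouts: 4 shader arrays per chip.
ARRAYS = 4
XSX_ACTIVE, PS5_ACTIVE = 52, 36
L1_KB_PER_ARRAY = 128  # RDNA2 graphics L1 per shader array

for name, cus in (("XSX", XSX_ACTIVE), ("PS5", PS5_ACTIVE)):
    per_array = cus / ARRAYS
    print(f"{name}: {per_array:.0f} CUs per array, "
          f"{L1_KB_PER_ARRAY / per_array:.1f} KB of shared L1 per CU")
# XSX: 13 CUs per array, ~9.8 KB of shared L1 per CU
# PS5: 9 CUs per array, ~14.2 KB of shared L1 per CU
```

More CUs contending for the same per-array cache and fixed-function resources is where the per-CU starvation argument comes from.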
 

SkylineRKR

Member
We haven't even seen the Series X pushed yet bar Flight Sim.

It won't happen. MS chose a strategy that involves developing games across two architectures, and third parties can't really get Series X versions to run significantly better than on PS5 either. It just won't happen. Besides, I think the SX is already being pushed along with the PS5; they can only go so far, and it's not possible to run last-gen software at native 4K and 60fps. If they could, they would have. I think we won't see much of a jump. Even next-gen exclusives don't look worlds better than cross-platform games.

What if PS5 Pro is real and comes out next year? Then the Series X unlocking its full potential would be moot anyway.
 

PaintTinJr

Member
I mean, obviously it does matter. A 20TF GPU is gonna perform better (shade more pixels per second etc) than a 10TF GPU. But other things matter too, sometimes more, especially when the raw theoretical compute difference is as small as it is between the XSX and PS5.
You probably mean transforming more vertices or doing more FMAs (fused multiply-adds), unless you're talking about the rarely used (on X1X/PS4 Pro) approach of shading with the CUs in compute shaders.

Even a straight comparison between 10 TF and 20 TF isn't necessarily apples to apples.

Nvidia FP16 and FP32 rates are the same on many of their GPUs, whereas the PS5 supports double-rate packed FP16 math, so its FP16 throughput is closer to double its effective FP32 teraflops figure, say if used in the Nanite pass of UE5 or some of the Lumen work like the signed-distance-field calculations for coarse RT.
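The peak-rate arithmetic behind that, for what it's worth; the doubling only applies where shaders actually use packed FP16, so treat it as a theoretical ceiling:

```python
# PS5 peak FP32 rate: 36 CUs * 64 lanes * 2 ops per clock (FMA) * 2.23 GHz.
fp32_tf = 36 * 64 * 2 * 2.23e9 / 1e12
# Packed FP16 ("rapid packed math") does two FP16 ops per FP32 lane op.
fp16_tf = 2 * fp32_tf
print(f"FP32: {fp32_tf:.2f} TF, packed FP16: {fp16_tf:.2f} TF")  # ~10.28 / ~20.55
```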
 

Zuzu

Member
I'm a complete amateur, but it seems to me that, all things being equal (or nearly so), teraflops do matter. But things are not equal between the PS5 and Series X, of course. There is a range of differences between the two, both hardware and software, so in addition to teraflops those things should also be taken into account when judging the power of a machine.
 
Last edited:

ReBurn

Gold Member
It is a bit strange though. On paper the XSX should be more powerful given its GPU and CPU advantages. Yes, there are some memory and I/O disadvantages compared to the PS5, but the advantages should outweigh them.

I have dealt with MS dev tools for years on the non-gaming side, so I can't help but think some of their complications cost efficiency and performance. And while MS virtualization techniques do offer advantages in overall OS design and security, I can't help thinking there are some performance downsides.
The tools have become a meme, but the tools matter. As time goes on, a larger and larger chunk of the performance equation comes down to how well the APIs developers are given make the hardware go, especially now that most developers can't code to the metal any more. Performance increases come from improvements to the APIs and from more efficient high-level coding.

Your MS dev tools example is a great one. Take a simple concept like collections. Performance when using things like lists has increased in recent .NET versions due to improved memory management by the framework. Developers are using lists pretty much the same way they have for years from a code perspective, yet building the same code against a newer platform target yields a better-performing app in many circumstances. As developers we like to take credit for this, but much of the time it really isn't us. I think we see the performance we do on PS5 because the combination of clocks and API efficiency probably makes the job easier for the developer. Sometimes Microsoft makes some weird decisions with their tools.
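A toy version of the same effect in Python rather than .NET: the source below never changes, yet it runs noticeably faster on CPython 3.11+ than on 3.10, because the speedup lives in the runtime, not in the application code.

```python
import sys
import timeit

# Plain interpreter-bound code: no libraries, nothing clever. Any speedup
# between Python versions comes entirely from the runtime's own optimisations.
def sum_squares(n: int) -> int:
    total = 0
    for i in range(n):
        total += i * i
    return total

elapsed = timeit.timeit(lambda: sum_squares(100_000), number=100)
print(f"Python {sys.version_info.major}.{sys.version_info.minor}: {elapsed:.3f}s")
```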

Virtualization can lead to a loss of performance, but it doesn't always. It definitely can if you're also emulating a guest OS with a different architecture than the host, things like that. But if you're running a container under a hypervisor and the software in the container targets the same architecture as the host, there's often little discernible loss in performance. The limits on the software inside are set by how much of the system's resources are allocated to the container instance, and that's what the software can use. That math is pretty simple.
 
It won't happen. MS chose a strategy that involves developing games across two architectures, and third parties can't really get Series X versions to run significantly better than on PS5 either. It just won't happen. Besides, I think the SX is already being pushed along with the PS5; they can only go so far, and it's not possible to run last-gen software at native 4K and 60fps. If they could, they would have. I think we won't see much of a jump. Even next-gen exclusives don't look worlds better than cross-platform games.

What if PS5 Pro is real and comes out next year? Then the Series X unlocking its full potential would be moot anyway.
I find most of the trouble for third-party Series X games comes from Unreal Engine 4. Otherwise it's close, or sometimes the Series X comes out on top. I'm looking forward to Forza and Hellblade really showing off the Series X, mind, just like Flight Sim. By all means make fun of me if I'm wrong.
 

shamoomoo

Member
All this talk about specs, and most games only use it to make prettier rocks. TOTK convinced me it's mostly about creativity and competency, and I'm just waiting for more games like BG3 and TOTK that push creativity to the next level.
With more processing power, the team at Nintendo could've done more and improved the image quality.
 

Tedditalk

Member
With more processing power, the team at Nintendo could've done more and improved the image quality.
Done more what? None of their vision was compromised save for the image quality and the textures. 60FPS would have been a great option though.
 
Last edited: