
After one year of Xbox Series X and PS5, here is a summary of the results of all Digital Foundry comparison videos of new-gen games

Sosokrates

Report me if I continue to console war
Nope, the XSX-PS5 GPU situation isn't exactly comparable to 5700-5700 XT as I said here:

Look beyond teraflops and CU count.

The difference in clock speed should still make some difference though, according to Cerny.
The whole conversation has been beyond the TFLOPs and CU count, so I don't know why u keep repeating that line.
 

Sosokrates

Report me if I continue to console war
Lol, the problem here is people stating things as if they were fact.

It's all very well saying "PS5 has more depth ROPs" or "PS5 has cache scrubbers".

But talk is cheap; these statements need more detail and explanation of why they would cause a performance increase over the XSX.


What part is misinformation? The new Xbox RB+ doubled the color ROPs, but not the depth ROPs.

Depth ROPs are for z/stencil operations which were not upgraded for the new Xbox RB+.
Since the PS5 has twice the number of units and they run at higher frequency, it has a ~2.5x advantage in this specific area.

For example, in this post here, it's all very well saying it, but where is a source? I googled "depth ROPs" and could not find anything.
 

SlimySnake

Flashless at the Golden Globes
Games can be bound by L1, L2 throughput too. I found this while profiling DOOM Eternal and Death Stranding (I even posted these screenshots in another thread). As you can see Eternal is predominantly CU/SM bound then memory bandwidth and then L1 and L2, which means the more SM/CU you throw at it the better it will perform, put simply it's compute-bound and explains why XSX is doing so much better in this title. Now look at Death Stranding frame, it's not behaving the same way as DOOM Eternal. It's almost always L1 throughput bound across many frames and less so by the SMs/CUs or other parts of the GPU.

PS5's GPU has higher L1/L2 cache bandwidth per FLOP than XSX's GPU. So in scenarios like Death Stranding, where the game thrashes cache bandwidth, PS5's higher clock speed might allow it to be on par with or outperform XSX. Your Control corridor-of-doom example is a good one; it might be interesting to look at which part of the GPU is being used the most in that section and what makes it different compared to other sections of the game.


[GPU profiler captures: DOOM Eternal and Death Stranding frames referenced above]
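To put rough numbers on that bandwidth-per-FLOP claim, here is a back-of-envelope sketch (an assumption-laden illustration, not a measurement: it supposes L1/L2 bandwidth scales linearly with GPU clock and that the per-clock cache width is identical on both GPUs):

```python
# Back-of-envelope: relative cache bandwidth per FLOP, PS5 vs XSX.
# Assumption (illustrative, not vendor-confirmed): L1/L2 bandwidth scales
# linearly with GPU clock and the per-clock cache width is the same on both
# parts, so only the public clocks and CU counts enter the comparison.

def tflops(cus: int, clock_ghz: float) -> float:
    """RDNA2 FP32 TFLOPs: 64 lanes per CU x 2 ops (FMA) per clock."""
    return cus * 64 * 2 * clock_ghz / 1000.0

gpus = {"PS5": (36, 2.23), "XSX": (52, 1.825)}

bw_per_flop = {}
for name, (cus, clock) in gpus.items():
    tf = tflops(cus, clock)
    bw_per_flop[name] = clock / tf  # relative cache bandwidth per FLOP
    print(f"{name}: {tf:.2f} TF, relative cache BW/FLOP = {bw_per_flop[name]:.4f}")

print(f"PS5 advantage: {bw_per_flop['PS5'] / bw_per_flop['XSX']:.2f}x")  # ~1.44x
```

Under those assumptions the PS5 ends up with roughly 44% more cache bandwidth per FLOP, which is the size of gap that would matter in an L1-throughput-bound frame like the Death Stranding capture above.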
Very intriguing stuff. I suppose we'd better get used to face-offs with wildly different results for each game.

What's interesting about Death Stranding is that the Decima engine was made for the wide and slow GPUs in the PS4 and PS4 Pro, while the X1X and base X1 always had faster clocks: 14% faster for the X1S and up to 28% faster for the X1X compared to the Pro. I wonder if Sony's internal studios will tailor their games and engines to take advantage of the higher clocks.
 

Sosokrates

Report me if I continue to console war
Very intriguing stuff. I suppose we'd better get used to face-offs with wildly different results for each game.

What's interesting about Death Stranding is that the Decima engine was made for the wide and slow GPUs in the PS4 and PS4 Pro, while the X1X and base X1 always had faster clocks: 14% faster for the X1S and up to 28% faster for the X1X compared to the Pro. I wonder if Sony's internal studios will tailor their games and engines to take advantage of the higher clocks.

Can't do it too much if they plan on releasing their games on a wide range of PC hardware.
 
Now look at the minimum framerates: 51fps on PS5 and 57fps on Series X.

I think AC: Valhalla is mostly accurate in the chart, especially if you look at DF's video on the 1.04 patch: their FPS issue was solved by being more aggressive with DRS and dropping the resolution. So in similar scenes, the PS5 will hold a higher resolution with lower FPS, while the XSX will render that same scene at a lower resolution but higher FPS. But the difference is so minimal in overall averages that it's not really a difference at all.
 

Boglin

Member
For example, in this post here, it's all very well saying it, but where is a source? I googled "depth ROPs" and could not find anything.

I don't feel like fetching you a link on Google right now, but fortunately this other person was kind enough to post a link that mentions both the number of color ROPs and depth ROPs in the PS5, at 4 and 16 respectively. The link also mentions that the color ROPs were doubled from 4 to 8 in the new Xbox RB+, but it doesn't say the depth ROPs were also doubled.

 

Elog

Member
Fairly clear picture: It is a wash so far overall.

Details (so far): The more a game is relying on last-generation engine technology, the higher the probability the XSX comes out on top. The more a game is relying on updated engine technology, the higher the probability the PS5 comes out on top.

Either way, differences are minor so far, and the whole 'most powerful console' language regarding XSX was massively misplaced.
 

onQ123

Member
Where's your source?

I provided a source explaining that one ROP unit of the XSX has about double the performance of a PS5 ROP unit.
My proof is math: the new RBE+ has 2x the color ROPs while still having the same number of depth ROPs, so when Series X uses fewer RBE+ units to reach 64 color ROPs, it ends up with fewer depth ROPs than PS5.
 

Sosokrates

Report me if I continue to console war
My proof is math: the new RBE+ has 2x the color ROPs while still having the same number of depth ROPs, so when Series X uses fewer RBE+ units to reach 64 color ROPs, it ends up with fewer depth ROPs than PS5.
Could you share where you learnt about RBE and RBE+?
I find it hard to believe that AMD would create an inferior solution for RDNA2.
 

Boglin

Member
Could you share where you learnt about RBE and RBE+?
I find it hard to believe that AMD would create an inferior solution for RDNA2.
He didn't say the RBE+ is inferior. He said the PS5 has twice the number of depth ROPs.

RBE+ = 8 color ROPs + 16 depth ROPs
RBE = 4 color ROPs + 16 depth ROPs

XSX = 8 RBE+ = 64 color ROPs + 128 depth ROPs
PS5 = 16 RBE = 64 color ROPs + 256 depth ROPs

As you can plainly see, the RBE+ is superior, yet the XSX still has fewer depth ROPs.
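Putting the public clocks (2.23GHz vs 1.825GHz) on top of those counts shows where the ~2.5x z/stencil figure quoted earlier in the thread comes from; a minimal sketch of the same arithmetic:

```python
# Totals from the RBE/RBE+ counts above, plus clock-adjusted peak z/stencil rate.

clocks_ghz = {"PS5": 2.23, "XSX": 1.825}

# (RBE count, color ROPs per RBE, depth ROPs per RBE), as laid out above
configs = {
    "PS5": (16, 4, 16),  # 16 x RBE  (RDNA1-style backend)
    "XSX": (8, 8, 16),   # 8 x RBE+  (RDNA2-style, color ROPs doubled)
}

depth_rate = {}
for name, (rbes, color, depth) in configs.items():
    total_color, total_depth = rbes * color, rbes * depth
    depth_rate[name] = total_depth * clocks_ghz[name]  # peak Gsamples/s
    print(f"{name}: {total_color} color ROPs, {total_depth} depth ROPs, "
          f"{depth_rate[name]:.0f} Gsamples/s peak z/stencil")

print(f"PS5/XSX depth ratio: {depth_rate['PS5'] / depth_rate['XSX']:.2f}x")  # ~2.44x
```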
onQ123:

Regarding depth ROPs, it seems to be the difference between RDNA1 vs RDNA2.

[diagrams: RDNA1 vs RDNA2 render backends]
Nice edit lol
 

Loxus

Member
The question is why the performance in ray tracing games like Watch Dogs is so similar but so much better in Metro and Doom. In Control, the performance advantage averaged out at 16%, which lines up with the 18% advantage mentioned above. And yet we saw FPS counts of 32 vs 33 in the infamous corridor of doom, where ray-traced reflections are at their most straining. So clearly, the Xbox is being held back by something, or the PS5's GPU advantages are making it perform better than it should. Otherwise, the advantage would be consistently at 18%.
I was wondering if the XBSX's Shader Arrays may be the issue.

None of the RDNA GPUs exceed 5 WGP per Shader Array. Even in RDNA3, it still only has 5 WGP max per Shader Array.
[diagram: WGP-per-Shader-Array counts across RDNA GPUs]


Meanwhile, the XBSX has 7 WGP per Shader Array. IMO exceeding 5 WGP may be causing efficiency issues, and I think they only exceeded 5 WGP per Shader Array in order to get to 12 TF; otherwise I can't see how they could have reached 12 TF without having another Shader Engine.
[diagram: Xbox Series X GPU layout]


Maybe this is why Cerny said this.
"Also it's easier to fully use 36CUs in parallel than it is to fully use 48CUs. When triangles are small, it's much harder to fill all those CUs with useful work."

To get 48 CUs (which actually works out to be 56 CUs with 8 disabled), the PS5 would have had 6 WGP per Shader Array, and Cerny probably meant it's harder to utilize more than 10 CUs per Shader Array efficiently.
This should apply to TMUs + RT also.

Just throwing that out there, could be wrong though.
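For what it's worth, the TFLOPs arithmetic is easy to check; a quick sketch of the configurations discussed here (the 56 CU / 1675MHz row is the all-WGPs-enabled option Anandtech reported Microsoft considered, which comes up later in the thread):

```python
# RDNA2 FP32 TFLOPs = CUs x 64 lanes x 2 ops per clock x clock (GHz) / 1000.
def tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000.0

print(f"{tflops(52, 1.825):.2f} TF")  # 12.15 - XSX as shipped (7 WGP per array)
print(f"{tflops(56, 1.675):.2f} TF")  # 12.01 - all 28 WGPs at a lower clock
print(f"{tflops(40, 1.825):.2f} TF")  # 9.34  - 5 WGP per array at XSX clocks
print(f"{tflops(36, 2.230):.2f} TF")  # 10.28 - PS5 (36 CUs at 2.23 GHz)
```

At XSX clocks, a 5-WGP-per-array layout falls well short of 12 TF, which is consistent with the reading that the wider arrays were chosen to hit the compute target.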
 

avin

Member
Try actually looking into these games instead of looking at a biased summary. The guy who made the summary even said he was biased! LOL.

If you're human, you have biases. The choice one has is whether biases are conscious, or unconscious.

The guy who made the summary did well to acknowledge his bias; his acknowledgement gives us information we could need to assess his conclusions. I may disagree with many of his assignments, but I genuinely appreciate his honesty, and wish it was more common.

avin
 

onQ123

Member
Could you share where you learnt about RBE and RBE+?
I find it hard to believe that AMD would create an inferior solution for RDNA2.

SMH, it's only inferior when you use fewer of them, like Xbox Series X did compared to PS5, which used more of the older RBEs.
 

FrankWza

Member
He is indicating the source could be biased. How do you not understand this? He is only displaying DF's results. Unless you are saying they are biased.
Right. He didn’t say he was biased. He categorized DF results. That creates the need for the bias disclaimer because they don’t say “win here win there” in subcategories for every analysis. They give statistics or performance results.
 

Sosokrates

Report me if I continue to console war
He didn't say the RBE+ is inferior. He said the PS5 has twice the number of depth ROPs.

RBE+ = 8 color ROPs + 16 depth ROPs
RBE = 4 color ROPs + 16 depth ROPs

XSX = 8 RBE+ = 64 color ROPs + 128 depth ROPs
PS5 = 16 RBE = 64 color ROPs + 256 depth ROPs

As you can plainly see, the RBE+ is superior, yet the XSX still has fewer depth ROPs.

Nice edit lol

Yeah, I counted wrong. I didn't realise that PS5 had additional RBEs behind the main cluster 😂🤪
 

Topher

Gold Member
Right. He didn’t say he was biased. He categorized DF results. That creates the need for the bias disclaimer because they don’t say “win here win there” in subcategories for every analysis. They give statistics or performance results.

Pointing at the "bias" disclaimer is a stretch. The guy is basically acknowledging that this isn't completely objective and thus, will have some bias. Does he say that bias is PS or Xbox? No. Just bias of being an imperfect human and not a machine.
 

Sosokrates

Report me if I continue to console war
SMH, it's only inferior when you use fewer of them, like Xbox Series X did compared to PS5, which used more of the older RBEs.

I've found this post from Beyond3D which is interesting.

Someone posted this:
Xbox Series X has heavily cut-down ROPs (about half the size of PS5's ROPs; PS5 has double the Z/stencil ROPs), which could explain plenty of "weird", "unexplainable" framerate drops in many games from launch to now (the latest being Little Nightmares 2, and almost all those games with DRS using different engines).

And someone else replied with this:
I've been thinking about that, and it's not that XSX ROPs are "cut down", it's that with RDNA 2 AMD has decided to use the die area elsewhere. But this doesn't mean that where the die area had previously been used (PS5 / RDNA 1) is irrelevant. If you've got it, and you can use it, there will be times where it shows.

It's a mistake IMO to think that one balance is universally better than another, and that old and new averaged balances don't have a gradual transition. RDNA 2 RBE is a best guess at the future, but it's not a "best in all cases" thing.


So it does seem that the PS5 could maybe use its additional depth ROPs to its advantage, but we don't know where that would show up, or if it is the reason for The Touryst's PS5 resolution advantage.

Also, we don't know what else AMD did to mitigate the lower number of depth ROPs. While it is a compromise, they redesigned the RBs ultimately because it would result in a higher-performance chip.
 

onQ123

Member
I've found this post from Beyond3D which is interesting.

Someone posted this:


And someone else replied with this:



So it does seem that the PS5 could maybe use its additional depth ROPs to its advantage, but we don't know where that would show up, or if it is the reason for The Touryst's PS5 resolution advantage.

So when I say it you ignore & just discount it, but someone on Beyond3D says it & a light bulb comes on?


But anyway, it will mostly be when pushing these consoles to extremes like 8K or native 4K 120fps that you will see ROP limits come into play.
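As a rough sanity check on where those extremes sit, here is a sketch comparing peak color fill against raw output demand (deliberately simplified; real frames multiply the demand by overdraw, blending and depth-only passes such as shadow maps, which is exactly where the depth-ROP gap would show):

```python
# Peak color fill rate vs the raw pixel demand of an output mode. This ignores
# overdraw, blending, MSAA and depth-only passes (shadow maps, z-prepass),
# which multiply real demand and are where depth-ROP throughput matters.

def peak_fill_gpix(rops: int, clock_ghz: float) -> float:
    return rops * clock_ghz  # Gpixels/s

def output_demand_gpix(w: int, h: int, fps: int) -> float:
    return w * h * fps / 1e9

fills = {"PS5": peak_fill_gpix(64, 2.23), "XSX": peak_fill_gpix(64, 1.825)}
modes = {"4K60": (3840, 2160, 60), "4K120": (3840, 2160, 120),
         "8K60": (7680, 4320, 60)}

for mode, (w, h, fps) in modes.items():
    demand = output_demand_gpix(w, h, fps)
    headroom = ", ".join(f"{n} {f / demand:.0f}x" for n, f in fills.items())
    print(f"{mode}: {demand:.2f} Gpix/s raw -> headroom {headroom}")
```

The raw headroom is enormous on paper; it only evaporates once overdraw and depth-heavy passes multiply the demand, which is why fill limits would surface first at those extreme output rates.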
 

Zathalus

Member
Good post. Add Immortals Fenyx Rising to the PS5 column as well. Even after the update that improved Xbox performance, the PS5 doesn't drop resolution as much as XSX.





From VG Tech. It seems in some scenes the PS5 performs better in Performance mode while Xbox performed better in other scenes, but the PS5 had a higher lowest resolution in both modes.
VGTech tested the launch version of the game, which also had a stutter issue on PS5. It received performance patches on both consoles, and as DF noted, pixel counts were identical and the PS5 stutter issue was eliminated after said updates.
 

Sosokrates

Report me if I continue to console war
So when I say it you ignore & just discount it, but someone on Beyond3D says it & a light bulb comes on?


But anyway, it will mostly be when pushing these consoles to extremes like 8K or native 4K 120fps that you will see ROP limits come into play.
Well, u weren't exactly giving me a lot of information. I had to dig up diagrams and more info by myself.

But yeah, I apologise for being so flippant with u. But you do have a habit of being rather cryptic with info.

I mean, the more I'm learning about this, the more questions it presents.

Like, if the ROPs are responsible for feeding the framebuffer with pixel, texel, Z and color information, how many ROPs are required for all compute units to be fully utilised? Were the depth ROPs overkill on RDNA1, and is that why they were reduced in RDNA2?

We don't know what the limits of the 64 color ROPs and 128 depth ROPs in the XSX are.


If you are correct and it helps at 4K120 and 8K, it could be good for VR maybe.
 

DForce

NaughtyDog Defense Force
Now look at the minimum framerates: 51fps on PS5 and 57fps on Series X.

I looked into this again.

Assassin's Creed Valhalla Update Made PS5's Performance Mode Worse​



Digital Foundry denied that this happened, but people pointed out that it made the performance worse, and VG Tech's stats show it.

Assassin's Creed Valhalla - Release Version
PS5 and Xbox Series X use a dynamic resolution with the highest native resolution found being 3840x2160 and the lowest native resolution found being approximately 2432x1368. Both consoles rarely render at a native resolution of 3840x2160. The native resolution is usually between 2432x1368 and 2880x1620 on both consoles. PS5 and Xbox Series X use a form of temporal reconstruction to increase the resolution up to 3840x2160 when rendering natively below this resolution.

Assassin's Creed Valhalla Patch 1.04

PS5 in Performance Mode uses a dynamic resolution with the highest native resolution found being 3840x2160 and the lowest native resolution found being approximately 2432x1368. PS5 in Performance Mode rarely renders at a native resolution of 3840x2160.

Xbox Series X in Performance Mode uses a dynamic resolution with the highest native resolution found being 3840x2160 and the lowest native resolution found being 1920x1080. Xbox Series X in Performance Mode rarely renders at a native resolution of 3840x2160 and drops in resolution down to 1920x1080 seem to be uncommon.



Pre 1.04 Patch Stats (PS5 / Xbox Series X / Xbox Series S)

Frame Amounts
Game Frames: 30385 / 30268 / 15205
Video Frames: 30462 / 30462 / 30461

Frame Tearing Statistics
Total Torn Frames: 865 / 4025 / 110
Lowest Torn Line: 2152 / 2123 / 2087
Frame Height: 2160 / 2160 / 2160

Frame Time Statistics
Mean Frame Time: 16.71ms / 16.77ms / 33.39ms
Median Frame Time: 16.67ms / 16.67ms / 33.33ms
Maximum Frame Time: 37.28ms / 65.21ms / 66.67ms
Minimum Frame Time: 14.69ms / 13.17ms / 30.2ms
95th Percentile Frame Time: 16.67ms / 17.44ms / 33.33ms
99th Percentile Frame Time: 18.15ms / 18.46ms / 33.33ms

Frame Rate Statistics
Mean Frame Rate: 59.85fps / 59.62fps / 29.95fps
Median Frame Rate: 60fps / 60fps / 30fps
Maximum Frame Rate: 60fps / 60fps / 30fps
Minimum Frame Rate: 50fps / 50fps / 27fps
5th Percentile Frame Rate: 59fps / 57fps / 30fps
1st Percentile Frame Rate: 55fps / 55fps / 29fps

Frame Time Counts
0ms-16.67ms: 177 (0.58%) / 496 (1.64%) / 0 (0%)
16.67ms: 29630 (97.52%) / 26419 (87.28%) / 0 (0%)
16.67ms-33.33ms: 557 (1.83%) / 3330 (11%) / 50 (0.33%)
33.33ms: 19 (0.06%) / 19 (0.06%) / 15066 (99.09%)
33.33ms-50ms: 2 (0.01%) / 3 (0.01%) / 67 (0.44%)
50ms-66.67ms: 0 (0%) / 1 (0%) / 1 (0.01%)
66.67ms: 0 (0%) / 0 (0%) / 21 (0.14%)

Other
Dropped Frames: 0 / 0 / 0
Runt Frames: 0 / 0 / 0
Runt Frame Thresholds: 20 rows / 20 rows / 20 rows

Patch 1.04 Stats (PS5 Performance / XSX Performance / XSS Performance / PS5 Quality / XSX Quality / XSS Quality)

Frame Amounts
Game Frames: 31469 / 31632 / 31114 / 15365 / 15365 / 15361
Video Frames: 31675 / 31676 / 31676 / 30774 / 30774 / 30774

Frame Tearing Statistics
Total Torn Frames: 2668 / 1107 / 6168 / 0 / 96 / 4
Lowest Torn Line: 2157 / 2123 / 2142 / - / 509 / 2055
Frame Height: 2160 / 2160 / 2160 / 2160 / 2160 / 2160

Frame Time Statistics
Mean Frame Time: 16.78ms / 16.69ms / 16.97ms / 33.38ms / 33.38ms / 33.39ms
Median Frame Time: 16.67ms / 16.67ms / 16.67ms / 33.33ms / 33.33ms / 33.33ms
Maximum Frame Time: 37.78ms / 36.78ms / 38.5ms / 66.67ms / 66.67ms / 71.81ms
Minimum Frame Time: 14.42ms / 12.79ms / 14.47ms / 33.33ms / 31.91ms / 29.14ms
95th Percentile Frame Time: 17.65ms / 16.67ms / 18.77ms / 33.33ms / 33.33ms / 33.33ms
99th Percentile Frame Time: 18.64ms / 17.1ms / 20.37ms / 33.33ms / 33.33ms / 33.33ms

Frame Rate Statistics
Mean Frame Rate: 59.61fps / 59.92fps / 58.93fps / 29.96fps / 29.96fps / 29.95fps
Median Frame Rate: 60fps / 60fps / 60fps / 30fps / 30fps / 30fps
Maximum Frame Rate: 60fps / 60fps / 60fps / 30fps / 30fps / 30fps
Minimum Frame Rate: 51fps / 57fps / 47fps / 27fps / 27fps / 27fps
5th Percentile Frame Rate: 56fps / 59fps / 53fps / 30fps / 30fps / 30fps
1st Percentile Frame Rate: 53fps / 58fps / 49fps / 29fps / 29fps / 29fps

Frame Time Counts
0ms-16.67ms: 215 (0.68%) / 445 (1.41%) / 626 (2.01%) / 0 (0%) / 0 (0%) / 0 (0%)
16.67ms: 29046 (92.3%) / 30609 (96.77%) / 24916 (80.08%) / 0 (0%) / 0 (0%) / 0 (0%)
16.67ms-33.33ms: 2187 (6.95%) / 552 (1.75%) / 5545 (17.82%) / 0 (0%) / 5 (0.03%) / 30 (0.2%)
33.33ms: 19 (0.06%) / 21 (0.07%) / 18 (0.06%) / 15343 (99.86%) / 15336 (99.81%) / 15265 (99.38%)
33.33ms-50ms: 2 (0.01%) / 5 (0.02%) / 9 (0.03%) / 0 (0%) / 2 (0.01%) / 44 (0.29%)
66.67ms: 0 (0%) / 0 (0%) / 0 (0%) / 22 (0.14%) / 22 (0.14%) / 21 (0.14%)
66.67ms-83.33ms: 0 (0%) / 0 (0%) / 0 (0%) / 0 (0%) / 0 (0%) / 1 (0.01%)

Other
Dropped Frames: 0 / 0 / 0 / 0 / 0 / 0
Runt Frames: 0 / 0 / 0 / 0 / 0 / 0
Runt Frame Thresholds: 20 rows (all platforms)
 

onQ123

Member
Well, u weren't exactly giving me a lot of information. I had to dig up diagrams and more info by myself.

But yeah, I apologise for being so flippant with u. But you do have a habit of being rather cryptic with info.

I mean, the more I'm learning about this, the more questions it presents.

Like, if the ROPs are responsible for feeding the framebuffer with pixel, texel, Z and color information, how many ROPs are required for all compute units to be fully utilised? Were the depth ROPs overkill on RDNA1, and is that why they were reduced in RDNA2?

We don't know what the limits of the 64 color ROPs and 128 depth ROPs in the XSX are.


If you are correct and it helps at 4K120 and 8K, it could be good for VR maybe.
I've been saying it for months now, & yes, it's most likely Sony went this route for VR & 8K. Xbox will still be able to do 8K.

Y'all just kept doing drive-by posts to discredit it.
 

Lysandros

Member
I looked into this again.

Assassin's Creed Valhalla Update Made PS5's Performance Mode Worse​



Digital Foundry denied that this happened, but people pointed out that it made the performance worse, and VG Tech's stats show it.

Assassin's Creed Valhalla - Release Version
PS5 and Xbox Series X use a dynamic resolution with the highest native resolution found being 3840x2160 and the lowest native resolution found being approximately 2432x1368. Both consoles rarely render at a native resolution of 3840x2160. The native resolution is usually between 2432x1368 and 2880x1620 on both consoles. PS5 and Xbox Series X use a form of temporal reconstruction to increase the resolution up to 3840x2160 when rendering natively below this resolution.

Assassin's Creed Valhalla Patch 1.04

PS5 in Performance Mode uses a dynamic resolution with the highest native resolution found being 3840x2160 and the lowest native resolution found being approximately 2432x1368. PS5 in Performance Mode rarely renders at a native resolution of 3840x2160.

Xbox Series X in Performance Mode uses a dynamic resolution with the highest native resolution found being 3840x2160 and the lowest native resolution found being 1920x1080. Xbox Series X in Performance Mode rarely renders at a native resolution of 3840x2160 and drops in resolution down to 1920x1080 seem to be uncommon.



Pre 1.04 Patch Stats


Patch 1.04 Stats
The cost of rendering up to 55% more pixels compared to XSX, I guess. But I agree on it being a poor choice; before that patch, the PS5 version had a slightly better framerate and much less tearing vs XSX.
 

Lognor

Banned
If you're human, you have biases. The choice one has is whether biases are conscious, or unconscious.

The guy who made the summary did well to acknowledge his bias; his acknowledgement gives us information we could need to assess his conclusions. I may disagree with many of his assignments, but I genuinely appreciate his honesty, and wish it was more common.

avin
Right. You have to appreciate the honesty. That way you can give him the benefit when some of these games are clearly credited to PS5 even though they should be credited to XSX.
 

Sosokrates

Report me if I continue to console war
I've been saying it for months now, & yes, it's most likely Sony went this route for VR & 8K. Xbox will still be able to do 8K.

Y'all just kept doing drive-by posts to discredit it.

It's plausible speculation, but don't get carried away like it's a certainty. There's still a lot we don't know about these performance differences.

And in my interactions with u on the subject, you don't go into a lot of detail; you can't expect everyone to just know what u are talking about.
 

ToTTenTranz

Banned
Meanwhile, the XBSX has 7 WGP per Shader Array. IMO exceeding 5 WGP may be causing efficiency issues, and I think they only exceeded 5 WGP per Shader Array in order to get to 12 TF; otherwise I can't see how they could have reached 12 TF without having another Shader Engine.

In Anandtech's piece about the Series X, they mention how important it was for Microsoft to reach 12 TFLOPs. At some point, Microsoft even considered enabling all 28 WGPs in the chip while clocking lower, at 1675MHz.

[image: Xbox Series X SoC slide from Anandtech]



This would have made it perform a bit worse in games, since the rest of the GPU would also run at lower speeds.
Perhaps the 12 TFLOPs were a goal set by the Azure team who probably helped bankroll the SoC's development.
 

avin

Member
Right. You have to appreciate the honesty. That way you can give him the benefit when some of these games are clearly credited to PS5 even though they should be credited to XSX.

My apologies, I got that wrong. It looked like a poorly constructed list, with some obviously incorrect choices. When I read your post I thought he'd admitted his bias, which would have been a plus in my book. But he didn't. Looking at the tweet, he just made some generic statement about bias, without acknowledging his own.

But, who cares what his motives are. We can correct the list, as people have done in this thread. The pattern is what I would expect, and people have suggested interesting reasons for the exceptions.

But I'm surely biased myself. I like it when things make sense.

avin
 

Lysandros

Member
In Anandtech's piece about the Series X, they mention how important it was for Microsoft to reach 12 TFLOPs. At some point, Microsoft even considered enabling all 28 WGPs in the chip while clocking lower, at 1675MHz.

[image: Xbox Series X SoC slide from Anandtech]



This would have made it perform a bit worse in games, since the rest of the GPU would also run at lower speeds.
Perhaps the 12 TFLOPs were a goal set by the Azure team who probably helped bankroll the SoC's development.
As an example of opposite philosophy from Mark Cerny: "When we design hardware, we start with the goals we want to achieve. Power in and of itself is not a goal. The question is, what that power makes possible."
 
As an example of opposite philosophy from Mark Cerny: "When we design hardware, we start with the goals we want to achieve. Power in and of itself is not a goal. The question is, what that power makes possible."
You know the first goal is "something we can sell for $500 and not lose much money", right?
 

M1chl

Currently Gif and Meme Champion
Indeed. Especially when those RDNA2-based 'custom' APUs have other differentiating features like the cache scrubbers on PS5, which are really underdiscussed in terms of game performance, I think.
I'd love to know what it actually does, since it's thrown around so much. Because I can guarantee you, unless you are writing your own implementation in ASM, building your own code executor, or writing low-level libraries, you as a dev in 99.9% of cases really don't know what this thing does, much less someone on a forum. (You can't use your own compiler; Sony is using Clang/LLVM, same as Apple for their apps, and their APIs work really similarly to Metal. It's probably the most efficient compiler there is.)

High-level code itself, without even needing to know the low-level feature set, is what matters.

Much of the difference in how the consoles perform is due to that: PS5 is running a compiler-optimized binary directly on the HW, whereas the Xbox is running the game in a virtual machine, behind the whole DirectX mess, which is going to take at least 2 years to properly optimize. And that takes its resources. You don't need buzzwords to see where the seemingly more powerful console is getting hammered by the slower one.


If you are a HW architect, then I apologize.
 

EverydayBeast

thinks Halo Infinite is a new graphical benchmark
Sony and Microsoft were aggressive in making powerful consoles; during the Covid season, getting one was big news, and now everyone is participating in Black Friday deals and online shopping. Microsoft views the Series X as a rebound from the Xbox One. Not much difference between PS4 Pro and PS5; not much to worry about if you're Sony.
 

SlimySnake

Flashless at the Golden Globes
In Anandtech's piece about the Series X, they mention how important it was for Microsoft to reach 12 TFLOPs. At some point, Microsoft even considered enabling all 28 WGPs in the chip while clocking lower, at 1675MHz.

This would have made it perform a bit worse in games, since the rest of the GPU would also run at lower speeds.
Perhaps the 12 TFLOPs were a goal set by the Azure team who probably helped bankroll the SoC's development.
lol, I am sure I have talked about this before, but that's definitely the feeling I got. They really wanted to hit that 12 TFLOPs number, which is why it kept coming up all the time in rumors as early as January 2019. They did the same with the X1X when they leaked that the X1X would have 6 TFLOPs pretty much immediately after the PS4 Pro leaks pegged it at 4.2 TFLOPs.

While the TFLOPs whore inside me admires them for aiming high, this kind of mentality just screams that the console was designed by marketing suits in a boardroom instead of engineers. I am still not sure why anyone would go for a split-RAM architecture like they have with the XSX, which was likely necessitated by them going with a more expensive APU. They have also found themselves way behind in I/O and SSD speeds, and should count their lucky stars that Sony with their cross-gen strategy has squandered any I/O advantage Cerny served his first-party studios on a silver platter.

I am sure Jimbo and other suits also gave Cerny a budget and made sure he didn't go all Kutaragi on them. It's great to see that his gamble on the I/O and higher clocks has paid off. I don't think he gets enough credit for getting 2.23GHz in a console. That's higher than the CPU clocks in the PS4 Pro.
 
The difference in clock speed should still make some difference though, according to Cerny.
The whole conversation has been beyond the TFLOPs and CU count, so I don't know why u keep repeating that line.
Clock speed alone, no. But given how it allows the PS5 to pull ahead in other areas to better "feed" its compute units, it does allow it to get more done than if it were the exact same silicon as in the Series X, running at a lower frequency to end up as a 10.x TF machine.
It's plausible speculation, but don't get carried away like it's a certainty. There's still a lot we don't know about these performance differences.

And in my interactions with u on the subject, you don't go into a lot of detail; you can't expect everyone to just know what u are talking about.
Nobody here will write a post with full references (well, rarely) to back up their argument. We don't write peer-reviewed papers.
 

Sosokrates

Report me if I continue to console war
lol, I am sure I have talked about this before, but that's definitely the feeling I got. They really wanted to hit that 12 TFLOPs number, which is why it kept coming up all the time in rumors as early as January 2019. They did the same with the X1X when they leaked that the X1X would have 6 TFLOPs pretty much immediately after the PS4 Pro leaks pegged it at 4.2 TFLOPs.

While the TFLOPs whore inside me admires them for aiming high, this kind of mentality just screams that the console was designed by marketing suits in a boardroom instead of engineers. I am still not sure why anyone would go for a split-RAM architecture like they have with the XSX, which was likely necessitated by them going with a more expensive APU. They have also found themselves way behind in I/O and SSD speeds, and should count their lucky stars that Sony with their cross-gen strategy has squandered any I/O advantage Cerny served his first-party studios on a silver platter.

I am sure Jimbo and other suits also gave Cerny a budget and made sure he didn't go all Kutaragi on them. It's great to see that his gamble on the I/O and higher clocks has paid off. I don't think he gets enough credit for getting 2.23GHz in a console. That's higher than the CPU clocks in the PS4 Pro.

It was reported that the PS5 costs Sony a little less than the XSX, which isn't surprising. While it's clear Sony intended to make a premium console with a $499 RRP, it's also clear that small cost cuts were important: Sony decided to solder the flash chips for the SSD directly onto the motherboard, the SoC is considerably smaller, the packaging is garbage, and they went with 8 memory chips instead of 10. Their cooling solution is probably a bit cheaper too.

Despite what Cerny says, TFLOPs are still the best indicator of GPU performance; it's a shame Cerny did not demonstrate his points with some benchmarks.

And anyone who says it seems like the XSX was designed by marketing suits and not engineers is a delusional fanboy. I'm sorry, but it's such a ridiculous thing to say. Having double the compute of the X1X seems perfectly logical, and all the choices in the Series consoles are smart ones; they were clearly designed for gaming.

Sony and Microsoft simply had different goals which required different approaches; both of them made great hardware.
 

Sosokrates

Report me if I continue to console war
Clock speed alone, no. But given how it allows the PS5 to pull ahead in other areas to better "feed" its compute units, it does allow it to get more done than if it were the exact same silicon as in the Series X, running at a lower frequency to end up as a 10.x TF machine.

Nobody here will write a post with full references (well, rarely) to back up their argument. We don't write peer-reviewed papers.

Lol, I never suggested peer-reviewed papers, just maybe a few more sentences and sources. If u disagree, then how can anyone expect people to just magically understand?

It would be interesting to see how the XSX performed if it was clocked at 1545MHz.
 

Ezekiel_

Banned
But TFLOPs are everything when it comes to comparing two classes of GPUs within the same family. A 2070 should never beat a 2080, and a 5700 should never beat a 5700 XT. So the fact that they are equal in some instances makes little sense. XSX performing worse in some games makes no sense.

The Zen 2 CPUs in the two consoles have the same architecture. The RDNA 2.0 GPUs have the same arch. The only difference is the PS5's I/O and the XSX's weird split memory architecture.
Both are custom APUs based on RDNA 2, and we don't know the full extent of their customization.

Considering that fact, we can't do a simple comparison where TFLOPs would be relevant.

The fact that a newer architecture with theoretically fewer TFLOPs outperforms older architectures with theoretically more TFLOPs tells us how weak a metric "TFLOPs" is.
 

ToTTenTranz

Banned
While the TFLOPs whore inside me admires them for aiming high, this kind of mentality just screams that the console was designed by marketing suits in a boardroom instead of engineers.
I don't believe the console was designed by the marketing suits, even if the 12 TFLOPs did give them some talking points to use to their advantage (which then get repeated ad nauseam by console warriors).

It's important to know that the Anaconda project (Series X SoC) was designed not only for gaming loads, but also to be used in Azure blade servers for general compute. The end result was a system that is a bit more expensive to make (larger chip, more memory channels), but so far it's at least as capable in gaming loads as Sony's Oberon project.

It's easier to push those 12 TFLOPs on predictable and fairly sequential compute loads than it is in gaming, so it makes sense that the Azure Silicon Architecture Team (the ones who presented the Series X SoC at last year's Hot Chips) pushed for as much compute throughput as they could.

It's a bit like asking whether Vega 10 was a failure or not. As a competitor to Pascal GP104 / GTX 1080, it was a failure in cost/performance. But as a single chip that was used successfully in gaming and compute GPUs with pretty good performance (Vega 10 cards competed with the P100 at some point), and then spawned the CDNA architecture that ended up winning all those exascale contracts, it was probably a win for AMD. Especially considering how tight their GPU R&D budget was in 2015-2017 (while Zen was in development).



I am still not sure why anyone would go for a split ram architecture like they have with the xsx.
IMO not going with 20GB of GDDR6 on the same 10 channels was just a lost opportunity for Microsoft, even if the console had to cost some $50 more.
Not only could they have mitigated the slower I/O by having more RAM to cache data, they'd also have avoided the memory contention issues they're apparently getting.

It could be that the decision not to go with 20GB of GDDR6 was related to supply, though.
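For reference, the split-pool arithmetic behind that contention point, sketched from the public XSX memory configuration (the uniform 20GB alternative is the hypothetical described above, not a real SKU):

```python
# XSX split-pool arithmetic from the public memory configuration:
# ten 32-bit GDDR6 channels at 14 Gbps per pin; six 2 GB chips + four 1 GB
# chips, so only the first GB of every chip interleaves across all ten
# channels, and the extra 6 GB sits on just six of them.

GBPS_PER_PIN = 14
BITS_PER_CHANNEL = 32

def bandwidth_gbs(channels: int) -> float:
    return channels * BITS_PER_CHANNEL * GBPS_PER_PIN / 8

print(bandwidth_gbs(10))  # 560.0 GB/s - the 10 GB "GPU optimal" pool
print(bandwidth_gbs(6))   # 336.0 GB/s - the remaining 6 GB pool
# A hypothetical ten-chip, 2 GB-per-chip board would keep the full 560 GB/s
# flat across all 20 GB, removing the fast-pool/slow-pool contention.
```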


They have also found themselves way behind in I/O and SSD speeds, and should count their lucky stars that Sony with their cross gen strategy has squandered any IO advantage Cerny served his first party studios on a silver platter.
I agree that the cross-gen strategy (which was probably fueled by Sony's inability to ramp up the PS5's production) is the major factor in not having new-gen I/O and geometry engines on God of War Ragnarok and Horizon Forbidden West, at least. Hopefully, Insomniac's Spider-Man 2 and Wolverine will be free of these shackles, as Ratchet & Clank doesn't suffer from these problems.
 

Bramble

Member
The reflections are the same. The only difference other than resolution is the very slight distant geometry. Neither NXGamer, DF, nor VGTech picked up any other differences. If you can't tell the difference with an up-to-33% resolution gap, you are not going to pick up the difference in the geometry.

Hell, if you can't tell the difference between 1080p and 4K, then for you the Series S version of Far Cry 6 may as well be the same as the PS5 version in your eyes.

Lastly, El Analista is a bit of a joke when it comes to technical analysis. The only thing that is fine is the FPS counter, and they even got that wrong.

I've seen side-by-sides showing better reflections on the PS5 version. Apart from that, the game is broken with the infinite respawns, so I wouldn't recommend it anyway.

I can tell the difference when I don't move my character and pay attention to it. It doesn't have a single impact on gameplay though. And during gameplay, I move. That's why 60fps is preferred.
 

DeepEnigma

Gold Member
Indeed. Especially when those RDNA2-based 'custom' APUs have other differentiating features like the cache scrubbers on PS5, which are really underdiscussed in terms of game performance, I think.
They're underdiscussed because places like DF don't get corporate handout PR sheets describing them, and they don't feel like doing the due diligence of learning about them or asking the developers of said games about them.

Same with the I/O setup... but just wait until DirectStorage becomes mainstream on the PC; then it will be all the rage. Just as I said it would be, and was excited for, after learning about the PS5's I/O. "PC is gonna benefit big time in the not so distant future, CAN'T WAIT!"
 