
Is Xbox Series X graphics chip significantly more powerful than PS5?

Is PS5 GPU very weak compared to Series X?


  • Total voters
    102
Status
Not open for further replies.
12TF delivered all day, every day, every second VS a theoretical 10TF that most of the time runs below that, depending heavily on the ambient temperature of the room.

So Xbox is about 20% to 30% more powerful, and it might be even more if the PS5 user lives in a hot climate.
But if Lockhart exists it won't matter, since both PS5 and Series X games will effectively be upscaled 4TF games, aside from a few exclusives.
Reported.
 

ToadMan

Member
1) Tflops are just CUs multiplied by clock speed (quick sketch at the end of this post). A console is much more than those two things - even the GPU component is more than just those two numbers.

2) Tflops are a useful predictor of performance when a single component is being compared with an identical system around it - CPU, Memory, Storage, I/O. That’s why it’s useful for PC comparisons. It’s not useful when the whole system is custom as with PS5/Xsex.

3) So far, we haven’t seen one pixel output by an actual Xsex.

4) PS5 has 15% less tflops. Or one might say Xsex has 18% more tflops. Those are the numbers. Other numbers should be backed by an adequate explanation of where they come from and why they’re significant for comparison.

5) Halo looked bad because of nothing to do with Xsex and everything to do with poor planning and management.
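
For anyone who wants the arithmetic behind points 1 and 4, here is a quick back-of-the-envelope sketch. It assumes the publicly quoted configurations (36 CUs at up to 2.23 GHz for PS5, 52 CUs at 1.825 GHz for XSX) and the usual RDNA formula of 64 shaders per CU doing 2 FLOPs per clock; nothing more than napkin math.

```python
# Napkin math: peak FP32 TFLOPS = CUs * 64 shaders * 2 FLOPs/clock * clock (GHz) / 1000
def tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000

ps5 = tflops(36, 2.230)   # ~10.28 TF (peak, variable clock)
xsx = tflops(52, 1.825)   # ~12.15 TF (fixed clock)

print(f"PS5 ~{ps5:.2f} TF, XSX ~{xsx:.2f} TF")
print(f"PS5 has {(1 - ps5 / xsx) * 100:.0f}% fewer TFLOPS")  # ~15%
print(f"XSX has {(xsx / ps5 - 1) * 100:.0f}% more TFLOPS")   # ~18%
```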
 

Armorian

Banned
What do you think a performance summary is? The Series X GPU has already been benchmarked with a UE4 game; it's a 2080-equivalent GPU. The Series X and PS5 share the same RDNA2 GPU architecture, no different than how the 2080 Ti and 2080 share the same architecture. The PS5 GPU has 18-20% fewer teraflops than the Series X GPU. Common sense.

You say yourself that it's 2080 level, so I don't get why you bring up the Ti version. Here we go, folks:

PS5 is 2070 level
XSX is 2080 level

~20% difference between the two

[Relative GPU performance chart at 2560x1440]
 

Ghatazhak

Neo Member
12TF delivered all day, every day, every second VS a theoretical 10TF that most of the time runs below that, depending heavily on the ambient temperature of the room.

So Xbox is about 20% to 30% more powerful, and it might be even more if the PS5 user lives in a hot climate.
But if Lockhart exists it won't matter, since both PS5 and Series X games will effectively be upscaled 4TF games, aside from a few exclusives.
So now we come to it being even more than 30%? What's next, 50%? It will be 18% the vast majority of the time.
 

Zannegan

Member
There will be a difference. PS warriors are betting on efficiency and secret sauce to close the gap. MS warriors use the PS5's variable clocks to exaggerate it. Most people won't notice and even fewer will care.

The facts are these: Halo had a bad showing. Horizon looked great. Flops matter. So does studio talent. Water is wet.
 
In terms of resolution I expect roughly the same difference as between PS4 and XO, but since we'll be looking at near 4K instead of near 1080p differences, it'll be less obvious even side by side.

The XSX may also have the framerate and effects advantage, but that's harder to call because of their architectural differences.

We're basically comparing a very powerful PC in a small box to a powerful and more exotically designed console.

Until we see actual like-for-like comparisons, running side by side natively, this is all conjecture; it's impossible to predict accurately from things like teraflop differences alone.
 

silent head

Member
Close to the difference between a 2080 vs 2080Ti.


RTX 2080 (reference): 10.07 TFLOPS
RTX 2080 Ti (reference): 13.45 TFLOPS
----------------------
ASUS ROG Strix RTX 2080: 11.48 TFLOPS
ASUS ROG Strix RTX 2080 Ti: 16.18 TFLOPS
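
For context, those figures just follow from shader count x 2 FLOPs per clock x boost clock. The reference numbers line up with NVIDIA's official boost clocks; the Strix figures only come out that high if you plug in observed boost clocks of roughly 1950/1860 MHz, which is my assumption about how they were calculated.

```python
# TFLOPS = CUDA cores * 2 FLOPs/clock * clock (MHz) / 1e6
def tflops(cores, clock_mhz):
    return cores * 2 * clock_mhz / 1e6

print(tflops(2944, 1710))  # RTX 2080, official boost     -> ~10.07 TF
print(tflops(4352, 1545))  # RTX 2080 Ti, official boost  -> ~13.45 TF
print(tflops(2944, 1950))  # Strix 2080, ~1950 MHz observed boost (assumed)    -> ~11.48 TF
print(tflops(4352, 1860))  # Strix 2080 Ti, ~1860 MHz observed boost (assumed) -> ~16.19 TF
```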
 

silent head

Member
Teraflops vs. Performance

RTX 2070 Super (Overclocked) vs RTX 2080 (Stock) - Test in 20 Games - 1440p
MSI RTX 2070 SUPER GAMING X (OC) 2100MHz/2070MHz =10.75/10.59 TFLOPS +1200 Memory Clock
MSI RTX 2080 VENTUS 8G OC (Stock) 1950MHz/1905MHz = 11.48/11.21 TFLOPS



[Benchmark result screenshots from the video]
 

Hobbygaming

has been asked to post in 'Grounded' mode.
Still sneaking that 10% in there?

Let's agree on 18% minimum, which is still pretty significant.

Enough to give Craig some hair.
18% is nothing in real-world results, especially when rendering methods will be even more improved and resolutions are too high for the difference to be noticeable.

The difference between the PS4 and Xbox One was 40%, and that didn't even show much in multiplatform titles.
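
That 40% figure checks out if you use the commonly cited launch GPU numbers (1.84 TF for PS4, 1.31 TF for Xbox One):

```python
ps4, xbox_one = 1.84, 1.31  # commonly cited launch-console GPU TFLOPS
print(f"PS4 advantage: {(ps4 / xbox_one - 1) * 100:.0f}%")  # ~40%
```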
 

anothertech

Member
It's such a small difference, DF will have trouble finding the drama this time around.

Xbone vs PS4 was way more entertaining than next gen will be. All those 720 vs 1080 reveals. Damn, that was some good drama.
 

Hobbygaming

has been asked to post in 'Grounded' mode.
12TF delivered all day, every day, every second VS a theoretical 10TF that most of the time runs below that, depending heavily on the ambient temperature of the room.

So Xbox is about 20% to 30% more powerful, and it might be even more if the PS5 user lives in a hot climate.
But if Lockhart exists it won't matter, since both PS5 and Series X games will effectively be upscaled 4TF games, aside from a few exclusives.
With faster clocks, the PS5 will reach its peak performance more often than the XSX.
 

Zathalus

Member
The difference in raw TFLOP numbers between the two is roughly 18%. In raw numbers (not benchmarks), that is about the difference between a stock 2070 and a stock 2070 Super. However, pure TFLOPS is not the only metric by which a GPU's performance is measured. In pixel fillrate the PS5 is superior to the Xbox Series X, and its caches run faster. Furthermore, we don't even have a clear understanding of the architecture customizations between the two, which could further skew the results one way or the other.

One thing that gets overlooked on this forum is that a higher number of CUs is more difficult to fully take advantage of than a lower number of CUs. Mark Cerny mentioned as much in his presentation. This is known as Amdahl's law and can clearly be seen in the difference in performance between a 2080 and a 2080 Ti. Despite the 2080 Ti having almost 50% more SM units than the 2080 (as well as much higher bandwidth), the 2080 Ti is only around 18% faster on average.

Based on all of the above, I believe the gap between the GPUs is going to be smaller than the 18% the raw TFLOP metric might indicate.
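
On the fillrate point, here is the rough math, assuming the commonly reported 64 ROPs on each console (an assumption, not an official spec sheet): peak pixel fillrate is just ROPs times clock, so the PS5's higher clock puts it ahead despite the lower CU count.

```python
# Peak pixel fillrate = ROPs * clock (Gpixels/s); 64 ROPs assumed for both consoles
ps5_fill = 64 * 2.230   # ~142.7 Gpixels/s at PS5's max GPU clock
xsx_fill = 64 * 1.825   # ~116.8 Gpixels/s at XSX's fixed GPU clock
print(f"PS5 {ps5_fill:.1f} vs XSX {xsx_fill:.1f} Gpixels/s "
      f"(~{(ps5_fill / xsx_fill - 1) * 100:.0f}% higher on PS5)")
```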
 

TGO

Hype Train conductor. Works harder than it steams.
I mean on paper it is and should be in reality.
But from what we've seen.... you wouldn't think so
 

DForce

NaughtyDog Defense Force
No. PS5 clocks are variable while Xbox clocks are static. Meaning that the XSX operates at 100% performance all the time, while on PS5 it depends on multiple factors like the temperature of the room, how clean your PS5 is, how graphically intensive the game is, and much more.

This is a lie.

You've been in the next-gen discussion thread and it's been stated over a million times that the PlayStation 5 doesn't drop frequency based on temperature.

If devs want to use 10TF of GPU power, they can, no problem. It only becomes a factor if devs want to use MAX CPU performance, in which case they would have to dial back the GPU frequency.

Mark Cerny said that if the worst-case scenario happens (GPU and CPU at max), there would be a small reduction in power, only reducing the GPU and CPU by a couple of percent.
 

Hobbygaming

has been asked to post in 'Grounded' mode.
No. PS5 clocks are variable while Xbox clocks are static. Meaning that the XSX operates at 100% performance all the time, while on PS5 it depends on multiple factors like the temperature of the room, how clean your PS5 is, how graphically intensive the game is, and much more.
This is incorrect. The XSX's CUs aren't all in use unless every one of them has a workload, and that's difficult to do with 52 CUs, since CUs need GPU resources to run code. Sony went with faster clocks to benefit multiple parts of the GPU; rasterization, for example, is faster because of the higher clock.
 

Dontero

Banned
In pixel fillrate the PS5 is superior to the Xbox Series X, and its caches run faster.

Source on pixel fillrate? XSX has more CUs, which would only mean it has a higher number of ROPs.
If cache and latency were important to GPUs, then they would be called CPUs.

One thing that gets overlooked on this forum is that a higher number of CUs is more difficult to fully take advantage of than a lower number of CUs. Mark Cerny mentioned as much in his presentation.

If GPUs had a problem with that they wouldn't be called GPUs but CPUs. GPU code is inherently n-threaded. There are some inefficiencies in going for more CUs, but not in any way, shape or form resulting in huge power differences. CU is just a generalized term for a core-complex management unit. Modern GPUs run thousands of "cores", so adding 2000-3000 more is not that huge a difference.

This is known as Amdahl's law and can clearly be seen in the difference in performance between a 2080 and a 2080 Ti. Despite the 2080 Ti having almost 50% more SM units than the 2080 (as well as much higher bandwidth), the 2080 Ti is only around 18% faster on average.

Because you are looking at an imbalance. Just because you can push X amount of vertices doesn't mean your shaders can keep up with that; same with geometry and 1000 other things. Nor can software always catch up to hardware power: shadows that worked well with X amount of geometry could crap themselves when you multiply it by 2 or more.


The reason why Sony went with a lower count of CUs is simple: it is much cheaper. All other explanations are just smoke and mirrors.

This is incorrect. The XSX's CUs aren't all in use unless every one of them has a workload, and that's difficult to do with 52 CUs, since CUs need GPU resources to run code. Sony went with faster clocks to benefit multiple parts of the GPU; rasterization, for example, is faster because of the higher clock.

Every GPU task is by definition N-threaded, so there are no cases where a GPU can't fill its CUs with work. If that were not the case then GPUs wouldn't exist. You can underfill texture units etc., but not CUs in general. Moreover, if it were such a hard task to scale a GPU with CUs, you would see games that do not benefit from higher-CU GPUs. No such case exists, and all games' performance rises with CU count.

The only difference between a CPU and a GPU is that the GPU works on tasks that are multithreaded by nature, while the CPU has to deal with tasks that are not easily threaded. If not for those two types of problems we would all be running everything on GPUs.
 

YoodlePro

Member
12TF delivered all day, every day, every second VS a theoretical 10TF that most of the time runs below that, depending heavily on the ambient temperature of the room.

So Xbox is about 20% to 30% more powerful, and it might be even more if the PS5 user lives in a hot climate.
But if Lockhart exists it won't matter, since both PS5 and Series X games will effectively be upscaled 4TF games, aside from a few exclusives.
it has been confirmed MANY MANY MANY MAAANY times that it does not depend on the temperature of the room.... So much hate man.
 

FranXico

Member
Let's call it 20%. It has the difference of a whole PS4, but bigger, because more modern tech, innit?
Complains about "shite maths" when someone rounds the difference down.

Unceremoniously uses "variable maximum" for theoretical maximum throughput numbers and cites "PS4 equivalent" difference, while rounding the difference up.

Not disingenuous at all.
 

FranXico

Member
It is around 18~30% better than ps5. Not too shabby launching at the same time and probably same prices.

It is up to developers to unleash that 30% potential
Are you adding up percentages? Why not go up to 40%, then? I guarantee that people who understand how all of this works will take you just as seriously.
 

Ozzie666

Member
Based on the games we've seen so far, it's difficult to tell. On paper, the PS5 is weaker. In real-world performance, it should still be weaker. It appears it may punch above its weight, closing the gap. We'll see.

The Xbox superiority may not be evident until the middle or end of this upcoming generation. However, I will say I rather enjoyed my Pro and couldn't really distinguish any Xbox One X advantages.
 
So I came across this video by Austin Evans



In this video he says the Xbox Series X GPU is significantly more powerful than the PS5's, and in the past he has continued to downplay PS5 tech and always comes across as biased in favor of Xbox. Dude even says Halo Infinite has higher-fidelity textures on Series X. He continues to misinform his audience. What do you think, GAF: is the Series X GPU that much more powerful than the PS5's?

Chime in your thoughts, let the bloodbath begin!

It's a fact that it is more powerful, so I'm not understanding what a fake poll accomplishes.
 

Hobbygaming

has been asked to post in 'Grounded' mode.
Source on pixel fillrate? XSX has more CUs, which would only mean it has a higher number of ROPs.
If cache and latency were important to GPUs, then they would be called CPUs.



If GPUs had a problem with that they wouldn't be called GPUs but CPUs. GPU code is inherently n-threaded. There are some inefficiencies in going for more CUs, but not in any way, shape or form resulting in huge power differences. CU is just a generalized term for a core-complex management unit. Modern GPUs run thousands of "cores", so adding 2000-3000 more is not that huge a difference.



Because you are looking at an imbalance. Just because you can push X amount of vertices doesn't mean your shaders can keep up with that; same with geometry and 1000 other things. Nor can software always catch up to hardware power: shadows that worked well with X amount of geometry could crap themselves when you multiply it by 2 or more.


The reason why Sony went with a lower count of CUs is simple: it is much cheaper. All other explanations are just smoke and mirrors.



Every GPU task is by definition N-threaded, so there are no cases where a GPU can't fill its CUs with work. If that were not the case then GPUs wouldn't exist. You can underfill texture units etc., but not CUs in general. Moreover, if it were such a hard task to scale a GPU with CUs, you would see games that do not benefit from higher-CU GPUs. No such case exists, and all games' performance rises with CU count.

The only difference between a CPU and a GPU is that the GPU works on tasks that are multithreaded by nature, while the CPU has to deal with tasks that are not easily threaded. If not for those two types of problems we would all be running everything on GPUs.
No offense but this is so incorrect

CUs perform the processing and are only one section out of 20 on the GPU. Memory bottlenecks still exist, and the 12 Tflops is still theoretical in the best of situations. Things like 4K and split RAM pools already bottleneck GPUs, and a lot of things will have to fit into the XSX's fast pool.
 
Based on the games we've seen so far, it's difficult to tell. On paper, the PS5 is weaker. In real-world performance, it should still be weaker. It appears it may punch above its weight, closing the gap. We'll see.

The Xbox superiority may not be evident until the middle or end of this upcoming generation. However, I will say I rather enjoyed my Pro and couldn't really distinguish any Xbox One X advantages.
There were night-and-day differences between the Pro and the X. Just because you chose not to notice them doesn't make them irrelevant.
 
There will be a difference. PS warriors are betting on efficiency and secret sauce to close the gap. MS warriors use the PS5's variable clocks to exaggerate it. Most people won't notice and even fewer will care.

The facts are these: Halo had a bad showing. Horizon looked great. Flops matter. So does studio talent. Water is wet.
Horizon was CGI, so that does not count.
 

tryDEATH

Member
So we really are just going to start making threads with sign-offs that incite console warring? That's just lovely.
 
So now we come to it being even more than 30%? What's next, 50%? It will be 18% the vast majority of the time.
He even got the rest of his post wrong

"12TF delivered all day avery day every second VS theoretical 10TF that most of the time works below that"

Cerny actually said the opposite of that. He said "with this new paradigm (AMD Smart Shift) we're able to run WAY over that (2.23GHz) in fact we have to cap the GPU frequency at 2.23GHz so that we can guarantee the on-chip logic operates properly" "we expect the GPU to run at 2.23GHz MOST of the time at or close to that frequency and performance"





 

kraspkibble

Permabanned.
Not significantly, but considerably.

9-10TF vs 12TF.

What it'll mean is that if a game can do native 4K and a solid 60fps on XSX, then on PS5 it will manage 1800p with 45-60fps unlocked, or if it's closer to 30fps they might just lock it at 30.

Otherwise the games will look more or less the same. PS5 might have faster loading times, but the SSD won't affect performance much, if at all, during gameplay.
 
Source on pixel fillrate? XSX has more CUs, which would only mean it has a higher number of ROPs.
If cache and latency were important to GPUs, then they would be called CPUs.



If GPUs had a problem with that they wouldn't be called GPUs but CPUs. GPU code is inherently n-threaded. There are some inefficiencies in going for more CUs, but not in any way, shape or form resulting in huge power differences. CU is just a generalized term for a core-complex management unit. Modern GPUs run thousands of "cores", so adding 2000-3000 more is not that huge a difference.



Because you are looking at an imbalance. Just because you can push X amount of vertices doesn't mean your shaders can keep up with that; same with geometry and 1000 other things. Nor can software always catch up to hardware power: shadows that worked well with X amount of geometry could crap themselves when you multiply it by 2 or more.


The reason why Sony went with a lower count of CUs is simple: it is much cheaper. All other explanations are just smoke and mirrors.



Every GPU task is by definition N-threaded, so there are no cases where a GPU can't fill its CUs with work. If that were not the case then GPUs wouldn't exist. You can underfill texture units etc., but not CUs in general. Moreover, if it were such a hard task to scale a GPU with CUs, you would see games that do not benefit from higher-CU GPUs. No such case exists, and all games' performance rises with CU count.

The only difference between a CPU and a GPU is that the GPU works on tasks that are multithreaded by nature, while the CPU has to deal with tasks that are not easily threaded. If not for those two types of problems we would all be running everything on GPUs.

Can you just address where you read that PS5 performance is affected by ambient temp, FUDtero?
 
Did you not watch the PS5 conference with Mark Cerny?

Find me the timestamp, I'll eat crow if you're right.

Not significantly, but considerably.

9-10TF vs 12TF.

What it'll mean is that if a game can do native 4K and a solid 60fps on XSX, then on PS5 it will manage 1800p with 45-60fps unlocked, or if it's closer to 30fps they might just lock it at 30.

Otherwise the games will look more or less the same. PS5 might have faster loading times, but the SSD won't affect performance much, if at all, during gameplay.

That is significant. It's also wrong.

In your head an 18% difference = 4K 60 on Series X and 1800p 30-60fps possibly on PS5? Are you serious?
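
For what it's worth, the pixel math backs that up. A rough sketch, assuming resolution scales linearly with available compute (which real games rarely do): an 18% compute deficit corresponds to roughly a 2000p target, not 1800p.

```python
# How much resolution would an ~18% compute gap cost, if it all went into pixels?
xsx_pixels = 3840 * 2160                  # native 4K: ~8.3M pixels
ps5_pixels = xsx_pixels / 1.18            # ~18% fewer pixels
scale = (ps5_pixels / xsx_pixels) ** 0.5  # per-axis scale factor
print(f"{3840 * scale:.0f} x {2160 * scale:.0f}")          # ~3535 x 1988, i.e. "2000p"
print(f"1800p is {3200 * 1800 / xsx_pixels:.0%} of 4K")    # ~69%, a ~31% pixel cut
```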
 
Find me the timestamp, I'll eat crow if you're right.



That is significant. It's also wrong.

In your head an 18% difference = 4K 60 on Series X and 1800p 30-60fps possibly on PS5? Are you serious?
His colors are showing and that's ok lol

Seems like hopes and dreams. I actually posted a very important segment from the Cerny tech dive above so people can stop grabbing facts from their buttocks.

I'm agreeing with you
 
His colors are showing and that's ok lol

Seems like hopes and dreams. I actually posted a very important segment from the Cerny tech dive above so people can stop grabbing facts from their buttocks.

I'm agreeing with you

They say this stuff like nobody will call them out on it. He said to watch Road to PS5, which is what he should have done.
 