
DF: Control PS5 Vs Xbox Series X Raytracing Benchmark

Don't be so defensive, nobody is attacking your Box, mate, relax. I was just asking. I'm simply waiting for the VGTech face-off.
I do not need to accept anything unless it comes from a reliable source, and don't ever tell me what I should accept or not if you cannot even understand what people say in their posts.
I'll tell you when you're wrong. You and others can't accept the Series X performing better, that's all it is. Now move along.
 

Caio

Member
I'll tell you when you're wrong. You and others can't accept the Series X performing better, that's all it is. Now move along.

You still keep quoting me without even understanding what people say. And it looks like PS5 was performing better in many other face-offs...

edited.
 
Last edited:

Panajev2001a

GAF's Pleasant Genius
This is almost verbatim what has been stated in this thread. People are in denial.
Not really, it says that while the methodology is sound (something people did give Battaglia grief over, sure), the conclusion about which GPU is inherently better is not what has been stated in this thread.


It is a scenario (“unoptimised old generation engine”, as he put it) that XSX wins... and it is not unimportant: XOX successfully positioned itself as not requiring developers to work too hard beyond brute-forcing their way to 4K resolutions and improved effects, which helps some third-party developers quite a bit (not all of them have time to get to know your customised architecture well enough, especially early on... PS4 Pro suffered from this quite a bit).

Again, it's wrong. I'll give you an example: PS2 has 4 MB of eDRAM and typically uses 2 MB as a texture buffer, but it can update this buffer many times during a frame. Let's say it updates 10 times per frame: would you say it has 22 MB of VRAM? If your answer was yes, you are wrong, because if you require more VRAM or more bandwidth for another particular effect or for redrawing (something PS2 uses a lot), then you don't have as much texture memory available as before, or as if you had 20 MB more. Just because you can update data fast doesn't mean you have more RAM; they are two different things.
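To put numbers on the quoted example, here is a minimal sketch using the post's own hypothetical figures (illustration only, not measured PS2 behaviour):

```python
# Illustration of the PS2 eDRAM point above: re-uploading a texture buffer
# many times per frame raises throughput, not addressable memory.

EDRAM_TOTAL_MB    = 4    # PS2 eDRAM pool (per the post)
TEXTURE_BUFFER_MB = 2    # portion used as a texture buffer
UPLOADS_PER_FRAME = 10   # hypothetical re-uploads per frame

streamed_per_frame = TEXTURE_BUFFER_MB * UPLOADS_PER_FRAME  # 20 MB moved
resident_at_once   = TEXTURE_BUFFER_MB                      # still only 2 MB

print(f"Texture data streamed per frame: {streamed_per_frame} MB")
print(f"Texture memory addressable at any instant: {resident_at_once} MB")
# Any extra effect or redraw pass competes for the same 2 MB window,
# which is why fast updates are not equivalent to 20 MB more VRAM.
```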


I think you exaggerate with SFS, as PRT is its base and does most of its work and provides most of its benefits (there are many tiers, so I am talking generally). SFS helps with a special filter that ensures there is a minimal mip map available, so you don't have to search and cause a dip. I think PS5 has the upper hand in this regard.

Correct and well laid out on both points. PS2's approach added some complexity, as you had to do all the mip-map level selection and uploading manually in software, but you could texture as if you had a larger portion of memory dedicated to texture data (you would try to batch by texture to maximise reuse and simplify streaming new data in and out).
SFS taking ~1/3rd the memory and bandwidth of a best-in-class PRT-based virtual texturing implementation is another case where MS gave an easy-to-repeat sound bite with just enough grounding not to be called on it too quickly, but it is still misleading (people thinking it can amplify bandwidth by 2-3x over “regular” virtual texturing on competing consoles).
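For anyone who hasn't followed the PRT/SFS distinction being drawn here, a toy residency model may help; every number below is assumed purely for illustration:

```python
# Toy model of tile-based texture residency (the PRT idea that SFS refines
# with sampler feedback plus a fallback filter). All figures are invented.

TILE_KB          = 64    # common hardware tile size
TEXTURE_TILES    = 1024  # tiles in a large texture's top mip
VISIBLE_FRACTION = 0.15  # assumed fraction actually sampled this frame

full_mip_mb = TEXTURE_TILES * TILE_KB / 1024  # load everything
prt_mb      = full_mip_mb * VISIBLE_FRACTION  # load sampled tiles only

print(f"Resident if the whole mip is loaded: {full_mip_mb:.1f} MB")
print(f"Resident with tile streaming:        {prt_mb:.1f} MB")
# The bulk of the saving comes from PRT-style residency itself; per the
# discussion above, SFS adds more accurate feedback and a filter that
# guarantees a lower mip is present, rather than multiplying bandwidth.
```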
 
Last edited:
You are just showing how sad you are, and you still keep quoting me without even understanding what people say. And it looks like PS5 was performing better in many other face-offs. Move on, kid.
So you say.....
I didn't know that, and things really get interesting now. When should we get the VGTech face-off?
What are you waiting for the VGTech face-off for? Only to question how accurate the DF analysis is, I presume? The performance of both consoles is great, but this is as close to a GPU test as we have got up till now, and in that area X has an advantage. I'm not getting into an argument with you, mate; the name-calling is embarrassing, by the way.
 

kuncol02

Banned
Not really, it says that while the methodology is sound (something people did give Battaglia grief over, sure), the conclusion about which GPU is inherently better is not what has been stated in this thread.


It is a scenario (“unoptimised old generation engine”, as he put it) that XSX wins... and it is not unimportant: XOX successfully positioned itself as not requiring developers to work too hard beyond brute-forcing their way to 4K resolutions and improved effects, which helps some third-party developers quite a bit (not all of them have time to get to know your customised architecture well enough, especially early on... PS4 Pro suffered from this quite a bit).



Correct and well laid out on both points. PS2's approach added some complexity, as you had to do all the mip-map level selection and uploading manually in software, but you could texture as if you had a larger portion of memory dedicated to texture data (you would try to batch by texture to maximise reuse and simplify streaming new data in and out).
SFS taking ~1/3rd the memory and bandwidth of a best-in-class PRT-based virtual texturing implementation is another case where MS gave an easy-to-repeat sound bite with just enough grounding not to be called on it too quickly, but it is still misleading (people thinking it can amplify bandwidth by 2-3x over “regular” virtual texturing on competing consoles).
For a year people have been spreading bullshit that optimization for a wider GPU is super hard, and now suddenly unoptimized titles run better on it because they are unoptimized? Can you guys pick one narrative and not jump between them whenever you like?
 

CrustyBritches

Gold Member
Not really, it says that while the methodology is sound (something people did give Battaglia grief over, sure), the conclusion about which GPU is inherently better is not what has been stated in this thread.

2 days ago...
Something to remember, too, is that certain engines and settings configs will favor narrow/fast or wide/slower designs. We should see a back-and-forth like the later-gen HD twin PS3 vs 360 multiplats.
Even within the same game on PC you'll see Red vs Green flip-flop in performance advantage based on the settings and resolution used.
It is a scenario (“unoptimised old generation engine”, as he put it) that XSX wins...
This is not the conclusion he reached, as made clear by "and vice versa" to show that it goes both ways, and "looking forward to more "benchmarkable" scenarios across engines" meaning there is no conclusion.
 

MonarchJT

Banned
Not really, it says that while the methodology is sound (something people did give Battaglia grief over, sure), the conclusion about which GPU is inherently better is not what has been stated in this thread.


It is a scenario (“unoptimised old generation engine”, as he put it) that XSX wins... and it is not unimportant: XOX successfully positioned itself as not requiring developers to work too hard beyond brute-forcing their way to 4K resolutions and improved effects, which helps some third-party developers quite a bit (not all of them have time to get to know your customised architecture well enough, especially early on... PS4 Pro suffered from this quite a bit).



Correct and well laid out on both points. PS2's approach added some complexity, as you had to do all the mip-map level selection and uploading manually in software, but you could texture as if you had a larger portion of memory dedicated to texture data (you would try to batch by texture to maximise reuse and simplify streaming new data in and out).
SFS taking ~1/3rd the memory and bandwidth of a best-in-class PRT-based virtual texturing implementation is another case where MS gave an easy-to-repeat sound bite with just enough grounding not to be called on it too quickly, but it is still misleading (people thinking it can amplify bandwidth by 2-3x over “regular” virtual texturing on competing consoles).
It can amplify for sure... we have yet to see by how much in real life, but still the point is that 10 GB will in MOST cases be enough.
 

sinnergy

Member
It can amplify for sure... we have yet to see by how much in real life, but still the point is that 10 GB will in MOST cases be enough.
Most engines in the coming 2 years will be last-gen... none really show what both machines can do when utilizing most features, like VRS 2.0 / mesh shading / SFS / geometry engine / caches etc.
 

DForce

NaughtyDog Defense Force
For a year people have been spreading bullshit that optimization for a wider GPU is super hard, and now suddenly unoptimized titles run better on it because they are unoptimized? Can you guys pick one narrative and not jump between them whenever you like?

Mark Cerny said in his deep dive video that it's easier to get more out of a GPU with a narrow vs wider approach. Developers have confirmed that this is true.



We're learning little by little. At the very least, we're getting info from people who work in the industry rather than some self-proclaimed tech guy like Dealer.
 

clintar

Member
I just wonder why they didn't try to find a spot where fps drops below 30 during gameplay on XSX and go into photo mode to compare. Probably too hard to get a like-for-like scenario. Otherwise, this is a pretty good benchmark. I'd just like to know what causes XSX to drop lower sometimes.
 
Last edited:

phil_t98

#SonyToo
Mark Cerny said in his deep dive video that it's easier to get more out of a GPU with a narrow vs wider approach. Developers have confirmed that this is true.



We're learning little by little. At the very least, we're getting info from people who work in the industry rather than some self-proclaimed tech guy like Dealer.

We'll see if that's the best approach later in the gen, when all the new features of the new chipsets come into force.
 

Bitmap Frogs

Mr. Community
We'll see if that's the best approach later in the gen, when all the new features of the new chipsets come into force.

Used to have a buddy who worked in GPU-accelerated commercial B2B simulation and modeling software, and I remember he was way more interested in the MHz count of the PS5 than the CUs in the Xbox.

I know, apples and oranges
 
Used to have a buddy who worked in GPU-accelerated commercial B2B simulation and modeling software, and I remember he was way more interested in the MHz count of the PS5 than the CUs in the Xbox.

I know, apples and oranges
An RTX 2080 has a good bit higher base clock speed than an RTX 2080 Ti, for instance. But the 2080 Ti is a much better GPU. Same with the 3070, 3080, and 3090. The larger GPUs generally have lower clock speeds than the smaller ones, but they have more cores.

This is why I believe Xbox will more than likely have the better performance in multiplats going forward.
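Rough spec-sheet math makes the point: theoretical throughput is 2 ops (FMA) × cores × clock, so the Ti's extra cores more than offset its lower clock. Real performance also depends on bandwidth and occupancy, so treat this as a sketch:

```python
# Theoretical FP32 TFLOPS = 2 (FMA ops/cycle) x shader cores x clock (GHz).
def tflops(cores: int, clock_ghz: float) -> float:
    return 2 * cores * clock_ghz / 1000.0

print(f"RTX 2080    (2944 cores @ ~1.71 GHz boost): {tflops(2944, 1.710):.1f} TF")
print(f"RTX 2080 Ti (4352 cores @ ~1.55 GHz boost): {tflops(4352, 1.545):.1f} TF")
# ~48% more cores beats a ~10% clock deficit: roughly 13.4 TF vs 10.1 TF.
```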
 

skit_data

Member
An RTX 2080 has a good bit higher base clock speed than an RTX 2080 Ti, for instance. But the 2080 Ti is a much better GPU. Same with the 3070, 3080, and 3090. The larger GPUs generally have lower clock speeds than the smaller ones, but they have more cores.

This is why I believe Xbox will more than likely have the better performance in multiplats going forward.
Isn't it possible that higher clocks would have been preferred in an optimal scenario, but they had to go with more CUs for cooling to be sufficient?
 

phil_t98

#SonyToo
Used to have a buddy who worked in GPU-accelerated commercial B2B simulation and modeling software, and I remember he was way more interested in the MHz count of the PS5 than the CUs in the Xbox.

I know, apples and oranges

Different approach, so different results. Plus the bigger GPU may make a difference, so we will see.
 
Isn't it possible that higher clocks would have been preferred in an optimal scenario, but they had to go with more CUs for cooling to be sufficient?
Going with more CUs for cooling to be sufficient? Far from it. You'll almost always be able to get higher clocks with fewer CUs, not the other way around. Higher clock speeds didn't really improve RDNA2 GPUs, as 3 GHz only drew more power/heat and didn't add much to overall performance. PS5 may have an advantage there, but it won't be enough compared to a larger GPU with more bandwidth, especially when it comes to raytracing and 4K titles. You need as much bandwidth as possible.
 

Caio

Member
So you say.....

What are you waiting for the VGTech face-off for? Only to question how accurate the DF analysis is, I presume? The performance of both consoles is great, but this is as close to a GPU test as we have got up till now, and in that area X has an advantage. I'm not getting into an argument with you, mate; the name-calling is embarrassing, by the way.

What's wrong with waiting for the VGTech face-off? And why did you get so defensive, telling people to accept things? My concern/curiosity had nothing to do with ""accepting"" or ""not accepting"" your GPU superiority. Furthermore, you should know that the PS5 GPU has a 21% advantage over XSX in triangle rasterization and triangle culling and a 22% advantage in pixel fillrate, while XSX has an advantage in texture fillrate and raytracing (18%), so your raw and cold statement about the XSX GPU being superior in absolute terms is not even true, actually very wrong.
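Those percentages fall straight out of the published clocks and unit counts; a quick check (both GPUs have 64 ROPs, so pixel fillrate scales purely with clock):

```python
PS5_CLK, XSX_CLK = 2.23, 1.825  # GHz: PS5 max boost vs XSX fixed clock
PS5_TMU, XSX_TMU = 144, 208     # texture units: 36 CUs x 4 vs 52 CUs x 4

pixel_adv   = PS5_CLK / XSX_CLK - 1                          # PS5 ahead
texture_adv = (XSX_TMU * XSX_CLK) / (PS5_TMU * PS5_CLK) - 1  # XSX ahead

print(f"PS5 pixel fillrate advantage:   {pixel_adv:.0%}")   # ~22%
print(f"XSX texture fillrate advantage: {texture_adv:.0%}") # ~18%
```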

Sorry if I called you a kid, but here people are free to ask whatever they want, and I can't find anything wrong if some would like to wait and see the VGTech face-off. Some of you should relax: nothing is happening, there is no war, all is fine. Let people ask and be curious about things and debate without getting so defensive. You jumped on my post first, telling me to accept things, ignoring the reason/point of my post.

My curiosity and concern were about this (read below), and you totally missed it, jumping on my post like a hurricane.

""
VGTech is not saying DF is making up results but that the tools are lacking.

That is something Battaglia already confirmed when he said that he had to manually count the framerate in Valhalla due to the tools having issues with screen tearing.
-----------------------------------------------------------------------------------------------------------------------------
VGTech is not getting the same framerate results as Battaglia... he questioned him on Beyond3D.

Seems like the DF tools interpret torn frames as unique and so inflate the reported fps. ""


Nobody here is debating which console is more capable at raytracing; we know the answer.
But really, you should learn how to read posts and understand what people say.
 
Last edited:

Topher

Gold Member
An RTX 2080 has a good bit higher base clock speed than an RTX 2080 Ti, for instance. But the 2080 Ti is a much better GPU. Same with the 3070, 3080, and 3090. The larger GPUs generally have lower clock speeds than the smaller ones, but they have more cores.

This is why I believe Xbox will more than likely have the better performance in multiplats going forward.

Generally, I agree with you. However, we have to consider that those GPUs all have variable frequencies like PS5, whereas XSX is fixed. This is why I've hypothesized that PS5 has performed better on multiplats that utilize dynamic resolution, such as AC: Valhalla, whereas XSX shines in games with native resolutions, like Hitman 3, and here, where the GPU stands alone in photo mode with Control. So if I'm right, then comparative results may very well depend on the tech used in the game.

Now that I've laid that "theory" of mine out......watch XSX shit all over PS5 in the next game featuring DRS.
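For anyone unclear what DRS actually does frame to frame, here is a minimal sketch of the control loop; the thresholds and step sizes are invented for illustration, and real engines use filtered GPU timings and finer steps:

```python
TARGET_MS = 16.6  # frame budget for 60 fps

def next_scale(scale: float, gpu_ms: float) -> float:
    """Nudge the resolution scale toward the frame-time budget."""
    if gpu_ms > TARGET_MS * 0.95:  # near or over budget: back off
        return max(0.6, scale - 0.05)
    if gpu_ms < TARGET_MS * 0.80:  # plenty of headroom: push up
        return min(1.0, scale + 0.05)
    return scale

scale = 1.0
for gpu_ms in [15.0, 16.4, 17.2, 18.0, 15.5, 12.9]:  # fake frame timings
    scale = next_scale(scale, gpu_ms)
    print(f"gpu {gpu_ms:4.1f} ms -> render scale {scale:.2f}")
# A GPU with more per-frame headroom simply spends more frames at scale 1.0,
# which is how DRS can mask (or reveal) a raw GPU advantage.
```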

 

MonarchJT

Banned
If you are on a budget, building a GPU with fewer CUs and less silicon is what you have to do, especially if you have already invested a large part of your budget in I/O. Having said that, Cerny's design is certainly optimal for the GPU he had. Raising the clock and making it variable allows you to fully squeeze the GPU, but the idea that it goes and behaves beyond its maximum specifications is nonsense as large as a house: its ceiling is at 10.28 TF, and we all know that teraflops are a theoretical number reached only when you squeeze EVERYTHING. It is certainly easier to manage a higher-clocked GPU with fewer CUs than the other way around, so hats off for how Cerny squeezed his GPU, but I expect over the course of the gen, when developers know how to squeeze both consoles, that games will more clearly mirror the specs of the two GPUs. It is unavoidable unless there are hardware design problems, which I rule out almost 100%.
 
Last edited:
AC: Valhalla, whereas XSX shines in games with native resolutions, like Hitman 3, and here, where the GPU stands alone in photo mode with Control.

Isn't Hitman 3 an updated BC game? The PS4 Pro version for PS5 and the X1X version for XSX. And of course, the X1X version of Hitman 3 provides better results than the PS4 Pro version.
 

Topher

Gold Member
Isn't Hitman 3 an updated BC game? The PS4 Pro version for PS5 and the X1X version for XSX. And of course, the X1X version of Hitman 3 provides better results than the PS4 Pro version.

It was released for PS5/XSX at the same time as PS4 Pro/X1X, though. And it isn't like PS5 is running at the same resolution as PS4 Pro, so why would it be the PS4 Pro version?
 
It was released for PS5/XSX at the same time as PS4 Pro/X1X, though. And it isn't like PS5 is running at the same resolution as PS4 Pro, so why would it be the PS4 Pro version?

Outside the framerate, resolution increase and some improved lighting, the rest is the same as on last-gen consoles.
Well, you can look at the Division 2 update then, which is basically an updated BC game from PS4 Pro or X1X (for PS5 and XSX).
 
Last edited:

ToTTenTranz

Banned
Mark Cerny said in his deep dive video that it's easier to get more out of a GPU with a narrow vs wider approach. Developers have confirmed that this is true.
We saw that already with GCN4/5 (wider + lower clocks) vs. Pascal (narrower + higher clocks).


But Matt Hargett's latest tweet seems to indicate that future engines will make better use of the PS5's faster+narrower architecture, which seems a bit counterintuitive to me.
I would have assumed that future engines would be more optimized for wider architectures since we weren't expecting GPU clocks to start ballooning anytime soon.
 

MonarchJT

Banned
We saw that already with GCN4/5 (wider + lower clocks) vs. Pascal (narrower + higher clocks).


But Matt Hargett's latest tweet seems to indicate that future engines will make better use of the PS5's faster+narrower architecture, which seems a bit counterintuitive to me.
I would have assumed that future engines would be more optimized for wider architectures since we weren't expecting GPU clocks to start ballooning anytime soon.
Honestly, apart from the fact that he's an ex-PlayStation employee, I don't see other valid reasons that would lead us to a reality where fewer CUs with a higher clock are preferred over the opposite. HW manufacturers prove it GPU after GPU. Period.

He is probably talking about the in-house PlayStation studios' engines.
 
Last edited:
Because it has been proven numerous times that VGTech's fps measurements are more precise. VGTech is very assiduous about details. This is why VGTech provides far more informative fps results than Digital Foundry.
DF speak directly to developers, VGTech do not. There are always going to be variations when numbers are concerned. DF go far deeper than numbers, and have made developers aware of problems with their games.
 

Shmunter

Member
Honestly, apart from the fact that he's an ex-PlayStation employee, I don't see other valid reasons that would lead us to a reality where fewer CUs with a higher clock are preferred over the opposite. HW manufacturers prove it GPU after GPU. Period.

He is probably talking about the in-house PlayStation studios' engines.
More CUs are definitely better, but that's when the comparison is made at an equivalent clock.

A faster clock offers benefits like faster cache, faster rasterisation - basic facts.

These are all part of the rendering pipeline in any game. The tunnel of doom is proof positive that once the renderer stresses the entire GPU workload, efficiency on balance starts to converge.
 
Last edited:
Uh, if they don't actually know, how are they supposed to explain it?

They can at least use their technical knowledge to guess. I've seen them do that before with other things. It just frustrates me when they don't even try to produce a theory to help explain something. I wouldn't take it as fact until it's proven, but it would give us an idea of what the issue might be.
 

MonarchJT

Banned
More CUs are definitely better, but that's when the comparison is made at an equivalent clock.

A faster clock offers benefits like faster cache, faster rasterisation - basic facts.

These are all part of the rendering pipeline in any game. The tunnel of doom is proof positive that once the renderer stresses the entire GPU workload, efficiency on balance starts to converge.
It's the opposite... with the same number of CUs it is (clearly) better to have a higher clock. But unless you go very, VERY weirdly slow with the clock speed, it is always better to go with a higher count of CUs nowadays.

Of course higher clock speeds have the advantages you mentioned.
 
Last edited:

ToTTenTranz

Banned
AFAIK, DigitalFoundry provides QA testing services to developers and publishers, and their association with Eurogamer as recognized journalists facilitates their approach towards developers.
I think there's no question that DF has closer proximity to developers than any of the other YouTube channels doing similar comparisons.
Which is mostly a good thing.



Honestly, apart from the fact that he's an ex-PlayStation employee, I don't see other valid reasons that would lead us to a reality where fewer CUs with a higher clock are preferred over the opposite. HW manufacturers prove it GPU after GPU. Period.
This is not what we're seeing with RDNA2 vs. Ampere on rasterization performance, though.
Navi 21 is a narrower architecture than GA102 and it's competitive with a fraction of the (theoretical) execution resources.
 

MonarchJT

Banned
You can laugh as many times as you want.
It's not bullshit. Even VGTech measured resolutions that DF couldn't.
If that makes you happier; probably because it favors your favorite console.
But whether VGTech has more precise measurements than DF remains to be tested.
 
Last edited:
It's the opposite... with the same number of CUs it is (clearly) better to have a higher clock. But unless you go very, VERY weirdly slow with the clock speed, it is always better to go with a higher count of CUs nowadays.

Of course higher clock speeds have the advantages you mentioned.

Makes me wonder why Sony made the mistake of going with a fast and narrow design. If you think about it, Mark isn't dumb enough not to realize what benefit additional CUs would bring. This makes me believe that Sony's focus was elsewhere. Only time will tell if they made the right decision or not.
 

MonarchJT

Banned
AFAIK, DigitalFoundry provides QA testing services to developers and publishers, and their association with Eurogamer as recognized journalists facilitates their approach towards developers.
I think there's no question that DF has closer proximity to developers than any of the other YouTube channels doing similar comparisons.
Which is mostly a good thing.


This is not what we're seeing with RDNA2 vs. Ampere on rasterization performance, though.
Navi 21 is a narrower architecture than GA102 and it's competitive with a fraction of the (theoretical) execution resources.
Rumored AMD big Navi 31 points at a dual-chiplet 80 CU fixed-clock GPU.
Very, very distant and different from the 36 CU variable-clock Cerny design. I would say a diametrically opposite design.
 
Last edited:
Makes me wonder why Sony made the mistake of going with a fast and narrow design. If you think about it, Mark isn't dumb enough not to realize what benefit additional CUs would bring. This makes me believe that Sony's focus was elsewhere. Only time will tell if they made the right decision or not.
Sony, like MS, made all their decisions based around cost and profitability.
 
Sony, like MS, made all their decisions based around cost and profitability.

But both systems are priced the same. It's not like they couldn't have had a wider GPU and priced the system the same as the competition. If you look at the Road to PS5, Sony seemed to place a lot of bets on the I/O.
 
Last edited:

ToTTenTranz

Banned
What's the point of speaking to developers when VGTech's analysis is more precise?
Does VGTech have testing equipment for HDMI 2.1 sources?
This is something that's been holding DF back quite a bit IMO, as every time they want to test 120Hz modes they need to fall back to 1080p.


Makes me wonder why Sony made the mistake of going with a fast and narrow design.
So far Sony's console is achieving equal performance results in actual games, with a smaller and cheaper SoC, a cheaper 8-channel memory subsystem and similar power consumption levels.
I would hardly call that a mistake.
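The cost trade-off is easy to see from bus width × data rate (public GDDR6 figures, 14 Gbps per pin):

```python
def gb_per_s(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits / 8 * gbps_per_pin

print(f"PS5 (256-bit, one uniform pool): {gb_per_s(256, 14):.0f} GB/s")
print(f"XSX (320-bit, 10 GB fast pool):  {gb_per_s(320, 14):.0f} GB/s")
print(f"XSX (6 GB slow pool, 192-bit):   {gb_per_s(192, 14):.0f} GB/s")
# PS5's 8 x 2 GB chips give one flat 448 GB/s pool; XSX's mixed chip sizes
# buy a wider bus but split the memory map into fast and slow regions.
```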


Rumored AMD big Navi 31 points at a dual-chiplet 80 CU fixed-clock GPU.
Fixed clock? Are you sure about this? Why would they go back almost 10 years on that?
 

MonarchJT

Banned
Makes me wonder why Sony made the mistake of going with a fast and narrow design. If you think about it, Mark isn't dumb enough not to realize what benefit additional CUs would bring. This makes me believe that Sony's focus was elsewhere. Only time will tell if they made the right decision or not.
Guys, it's simpler than you think: their budget... taking into account all the other expenses and customizations, that was the silicon investment allowed for that type of GPU. Do you think that if they had given Cerny double the budget he wouldn't have increased the number of CUs? The genius of his design lies in being able to squeeze the GPU almost to the maximum, which doesn't happen often, and we're seeing it initially on the Series X: even being undoubtedly a more powerful GPU, it is more complicated to squeeze up to the last teraflop.
 

Topher

Gold Member
But both systems are priced the same. It's not like they couldn't have had a wider GPU and priced the system the same as the competition. If you look at the Road to PS5, Sony seemed to place a lot of bets on the I/O.

But it isn't all about the APU. You've got the faster SSD, DualSense and audio improvements as well, and those innovations cost money. Yeah, Cerny and Sony could have matched XSX's 12 TF right off the bat, but then they would have had to either eliminate the other improvements or jack the price up quite a bit.

I'm happy with the end result of PS5, personally, even if it means sacrificing 16% power from the GPU.
 

MonarchJT

Banned
Does VGTech have testing equipment for HDMI 2.1 sources?
This is something that's been holding DF back quite a bit IMO, as every time they want to test 120Hz modes they need to fall back to 1080p.



So far Sony's console is achieving equal performance results in actual games, with a smaller and cheaper SoC, a cheaper 8-channel memory subsystem and similar power consumption levels.
I would hardly call that a mistake.



Fixed clock? Are you sure about this? Why would they go back almost 10 years on that?
I'm not saying it won't have a boost clock; I'm saying it won't be variable like PS5's.
 
But both systems are priced the same. It's not like they couldn't have had a wider GPU and priced the system the same as the competition. If you look at the Road to PS5, Sony seemed to place a lot of bets on the I/O.
Let's put it this way: you and I each have $50 to go to the grocery store to make a nice, tasty, healthy meal. Are we both going to leave with the exact same things?
 
Guys, it's simpler than you think: their budget... taking into account all the other expenses and customizations, that was the silicon investment allowed for that type of GPU. Do you think that if they had given Cerny double the budget he wouldn't have increased the number of CUs? The genius of his design lies in being able to squeeze the GPU almost to the maximum, which doesn't happen often, and we're seeing it initially on the Series X: even being undoubtedly a more powerful GPU, it is more complicated to squeeze up to the last teraflop.

All I can think of is that Cerny wanted to include other things besides just more CUs in the system. I wish we had a die shot, but it appears there's a lot of custom hardware for the I/O in it. It seems like they are making a big bet on that. Hopefully it doesn't ruin the system later in the gen because it lacks compute power.
 
Let's put it this way: you and I each have $50 to go to the grocery store to make a nice, tasty, healthy meal. Are we both going to leave with the exact same things?

But having additional CUs was an option for both, and Sony didn't take it. I mean, the XSX did end up costing the same as the PS5. Why did Sony think it was best to leave out GPU power?

That's my question.
 