
Ampere estimates: PS5 sold 21 million, Xbox Series sold 13.8 million.

Sosokrates

Report me if I continue to console war
Because Xbox exclusive games need to be designed to work with that spec and scale up. If anything, it is extra work for a developer to develop, test, and support over time compared to having a single target to optimise for. The more differences you add between the two (less RAM, lower GPU clockspeed, a lot less compute power, etc.; what would games that target 1440p or less and use DLSS-like reconstruction to reach 4K on the bigger consoles do with the XSS target if designed this way?), the more work it is if you want to really maximise the game on both hardware platforms, coming closer and closer to making two games the more they differ.

Considering the discussion over the years, I feel it is a bit of a disingenuous question, and more bait for a “gotcha” attempt down the line ;). Sosokrates is already there and broke formation despite the implicit “Hoold it! Hooold it!” :p.
Your ad hominem commentary just shows your lack of willingness to discuss the subject matter, probably because you're incorrect on the matter.

The XSS is not holding XSX back, not to any worthwhile degree.

What's the “hoold it! Hooold it” you are referring to?
 

Panajev2001a

GAF's Pleasant Genius
The XSS is not holding XSX back, not to any worthwhile degree.
This is “willingness to be open to discussion” 101. I will agree to disagree with you. I am arguing my part; there has been tons of discussion here already; your stance is basically “no it is not. Period.”, so 🤷‍♂️
 

Panajev2001a

GAF's Pleasant Genius
so you're now worried about Xbox game studios having to do some extra work? bless you
No, not worried. They will not do the extra work :), and they do not own all devs, so there is more than XGS out there, for now at least.

But sure, we can all look at how devs took the Xbox One X and PS4 Pro and made them target platforms soon after they were out, instead of putting interns on brute-forcing patches out, or the equivalent of that, right? Right?
 

Sosokrates

Report me if I continue to console war
This is “willingness to be open to discussion” 101. I will agree to disagree with you. I am arguing my part; there has been tons of discussion here already; your stance is basically “no it is not. Period.”, so 🤷‍♂️

This is what you do?

At least my opinion has something to back it up, like the Matrix demo and just how game engines work and how scalable they are.

Even a Sony exclusive like Returnal can run on the Steam Deck.

There's my data to back up my opinion; where's yours?
 

Panajev2001a

GAF's Pleasant Genius
At least my opinion has something to back it up, like the Matrix demo and just how game engines work and how scalable they are.
Engines being able to scale is one thing; making a game out of them can be a totally different thing. You chose an interesting example, where a UE-expert first-party studio had to optimise/adapt the demo to improve its Xbox performance. Again, that is not what generally happens when you develop software and have a wide variety of targets your single software SKU needs to support. Back to the id engine devs' “min specs matter” comment, and also common sense when you develop/debug/support and are given more versions to support (ask iOS devs how well they optimise for a single device and/or OS if they support, say, the last three OS revisions).

An Android app vs an iOS app is two revenue streams; an iPhone 13 Pro vs an iPhone 11 Pro is not.

Even a Sony exclusive like Returnal can run on the Steam Deck.
Given time, sacrifices, and effort it can be converted and sold again. This is not the XSX and XSS scenario.
 

Wohc

Banned
I am not sure of that; they have intelligence but not clairvoyance, and these things are set in motion quite a while earlier. Sony had NEVER made two SKUs at launch, and their only console without physical media, the PSP Go, was a disaster. I do not think they expected that… like they did not expect the PS4 having 8 GB of GDDR5 and those specs at launch for $399… with all their Giant Corp resources they should have been able to predict PS4's launch too, and Sony should have been able to predict the Xbox 360 game plan…

Overreaction to the Xbox One launch issues and a pincer-movement strategy make much more sense to me. It would have worked super well to sell the XSX if Sony only had the $499 PS5 while Xbox had both the cheapest console by far at $299 and the fastest by far on paper at $499. The $399 PS5 threw a wrench in this.
Sure, they don't know every single bit, but I bet something like a second console wouldn't stay secret for long. Too many people involved.
I think MS planned the S from the beginning and Sony made the DE after they heard about it. It's easier to remove a drive than to make a completely new console. That would also explain why there are so few DEs compared to the normal PS5. Sony doesn't like losing money, but they wanted to close the 200-dollar gap. I also think Sony raised the PS5's TF after hearing about the X's TF, because the cooling solution seemed a bit exaggerated and they quickly changed it after release. But we'll never know for sure who did what and why. We have to see how it works out, but seeing all big 3 doing well is great.
 

Panajev2001a

GAF's Pleasant Genius
I also think Sony raised the PS5's TF after hearing about the X's TF
Disagree. I mean, they could be lying and this could be a giant smokescreen, but their clockspeed target was meant to be high from the beginning. They literally designed everything for a very high clockspeed, and for launch they wanted to make sure they had no heat-related failures, which would have been disastrous PR.

Their model relied on the HW performing the exact same way regardless of ambient temperature, based on a predetermined power-consumption model driven by the workload. They overspecced the cooling to ensure a smooth launch and then revised it. Last-minute major clock increases seem to exist only in forum speculation.
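To picture it, here is a toy sketch of a deterministic, workload-driven clock governor (all names and numbers invented for illustration; this is not Sony's actual model):

```python
# Hypothetical sketch: clocks derive from estimated activity, never from
# temperature, so every console behaves identically for the same workload.
POWER_BUDGET_W = 200.0   # fixed SoC power budget (invented figure)
BASE_CLOCK_MHZ = 2230.0  # nominal GPU clock

def estimated_power(activity: float, clock_mhz: float) -> float:
    """Crude power model: proportional to workload activity and clock."""
    return activity * (clock_mhz / BASE_CLOCK_MHZ) * POWER_BUDGET_W

def governed_clock(activity: float) -> float:
    """Step the clock down just enough to stay inside the fixed budget."""
    clock = BASE_CLOCK_MHZ
    while estimated_power(activity, clock) > POWER_BUDGET_W and clock > 0:
        clock -= 10.0  # small deterministic steps
    return clock

print(governed_clock(0.8))  # light workload -> full 2230 MHz
print(governed_clock(1.1))  # heavy workload -> modest, deterministic downclock
```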
 

Sosokrates

Report me if I continue to console war
Engines being able to scale is one thing; making a game out of them can be a totally different thing. You chose an interesting example, where a UE-expert first-party studio had to optimise/adapt the demo to improve its Xbox performance. Again, that is not what generally happens when you develop software and have a wide variety of targets your single software SKU needs to support. An Android app vs an iOS app is two revenue streams; an iPhone 13 Pro vs an iPhone 11 Pro is not.
There are many examples of this: Horizon Forbidden West, GT7, Halo Infinite. So what you are saying is incorrect; here are real-world examples where devs have to optimise for multiple hardware platforms.

Also UE5 was built with scalability in mind.
Given time, sacrifices, and effort it can be converted and sold again. This is not the XSX and XSS scenario.
So, going by this logic, given more time and cutbacks, a game can work on a less powerful platform without compromising the experience on the more powerful platform...
I agree with this; of course developing for multiple platforms will take more time. This seems to be the direction the industry is going; even Sony is going this direction by bringing PS games to PC.
 

Panajev2001a

GAF's Pleasant Genius
There are many examples of this: Horizon Forbidden West, GT7, Halo Infinite. So what you are saying is incorrect; here are real-world examples where devs have to optimise for multiple hardware platforms.
They mostly are not a demonstration that the high-end platform, XSX or PS5, was used to its full potential. I am not sure what you are saying proves anything.

Also UE5 was built with scalability in mind.
Platitudes; in and of itself that means everything and nothing. Scalability is a tradeoff too, the same as the abstractions that let you forget about hardware differences across PC GPUs. Nothing is free.

The more you become aware of those differences in order to take advantage of each of them, the more work you need to put in.

So, going by this logic, given more time and cutbacks, a game can work on a less powerful platform without compromising the experience on the more powerful platform...
To some extent, yes.

I agree with this; of course developing for multiple platforms will take more time. This seems to be the direction the industry is going; even Sony is going this direction by bringing PS games to PC.
Or reduce scope, complexity, and ambition so as not to increase costs.
 

octiny

Banned
? Like what? Not disputing; it just seems like we sometimes take some estimates over others based on not much more than preference.

I had something partially written out (forgot it was there) on my last post but decided not to finish it, so as not to ruffle the feathers of a few here. In the end, it won't change those who are already set in their ways. All second-hand bullshit estimates regardless.

But damn, you must've looked at my post literally 5 seconds after it was posted, 'cause that's how long it took me to edit out that unfinished part 😛
 

Sosokrates

Report me if I continue to console war
Disagree. I mean, they could be lying and this could be a giant smokescreen, but their clockspeed target was meant to be high from the beginning. They literally designed everything for a very high clockspeed, and for launch they wanted to make sure they had no heat-related failures, which would have been disastrous PR.

Their model relied on the HW performing the exact same way regardless of ambient temperature, based on a predetermined power-consumption model driven by the workload. They overspecced the cooling to ensure a smooth launch and then revised it. Last-minute major clock increases seem to exist only in forum speculation.
While the decisions made about the PS5's design will likely always be secret, I do agree with what you're saying here, because I believe the plan was always to go with 36 active compute units to enable their BC method, and a high clock rate was necessary to get the performance they wanted. Whether 2230 MHz was always the target, who knows; early rumours did point at 2000 MHz, which is still high, but those were just early rumours, so who knows.

Early rumours had the codenamed XSX chip clocked lower too, at like 1565 MHz or something; I forget the codename, Artemis or something?
 

Sosokrates

Report me if I continue to console war
They mostly are not a demonstration that the high-end platform, XSX or PS5, was used to its full potential. I am not sure what you are saying proves anything.
Well, you said the Matrix demo was different from normal game development because The Coalition optimised it for the Series S, and I gave examples of devs optimising for multiple platforms on real games. And yes, they are cross-gen games; the only current-gen games we have seen this on are Flight Sim and Deathloop.
Or reduce scope, complexity, and ambition so as not to increase costs.
This is certainly possible, but what do you think this would entail?

I mean, from a game-design perspective, devs last gen were more constrained by budget than by hardware power.

It's why, imo, RDR2 is more “next gen” to me than Forspoken, despite the latter being made on hardware a generation ahead.
I don't think an open-world game will top RDR2, besides some visual aspects, until GTA6, because of the insane budgets these types of games require.


But why stop with the Series S holding stuff back? I mean, the XSX and PS5 only have 13.5 GB of RAM for games and 10-12 TF GPUs with OK RT; imagine how much they are holding games back compared to a console with 24 GB of RAM, an RDNA4 20 TFLOPS GPU, etc. etc.
 

Loxus

Member
Sure, they don't know every single bit, but I bet something like a second console wouldn't stay secret for long. Too many people involved.
I think MS planned the S from the beginning and Sony made the DE after they heard about it. It's easier to remove a drive than to make a completely new console. That would also explain why there are so few DEs compared to the normal PS5. Sony doesn't like losing money, but they wanted to close the 200-dollar gap. I also think Sony raised the PS5's TF after hearing about the X's TF, because the cooling solution seemed a bit exaggerated and they quickly changed it after release. But we'll never know for sure who did what and why. We have to see how it works out, but seeing all big 3 doing well is great.
So, why can't it be the opposite?

I don't think Sony cares what Microsoft is doing, Sony is doing their own thing.

We see Sony is more interested in SSD I/O and VR than a more powerful console.

They could have gone with a more powerful console if they wanted to but decided not to, as they had a 48 CU PS5 in mind.
 

Wohc

Banned
Disagree. I mean, they could be lying and this could be a giant smokescreen, but their clockspeed target was meant to be high from the beginning. They literally designed everything for a very high clockspeed, and for launch they wanted to make sure they had no heat-related failures, which would have been disastrous PR.

Their model relied on the HW performing the exact same way regardless of ambient temperature, based on a predetermined power-consumption model driven by the workload. They overspecced the cooling to ensure a smooth launch and then revised it. Last-minute major clock increases seem to exist only in forum speculation.
Like I said, we'll never know for sure. It just makes the most sense to me, because you don't have to overspec and almost immediately revise something if you have tested it for many months and those consoles have been in the works for years. But that's just our gut feeling, and it actually doesn't matter if they did or did not. Nothing changes.
So, why can't it be the opposite?
It can be; it just makes more sense to me that Microsoft desperately wanted a cheaper console after the disaster last gen, and because there are so few PS5 DEs.
 

Sosokrates

Report me if I continue to console war
So, why can't it be the opposite?

I don't think Sony cares what Microsoft is doing, Sony is doing their own thing.

We see Sony is more interested in SSD I/O and VR than a more powerful console.

They could have gone with a more powerful console if they wanted to but decided not to, as they had a 48 CU PS5 in mind.
They went with 36 CUs and a high clock rate because it's cheaper than going with more CUs. They could have gone with 44 active CUs @ 1855 MHz; it would have given similar performance, but the chip would cost more, and the bigger cooling system of the current PS5 is probably not much more expensive than a smaller one.
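For reference, the raw math behind that claim, using the standard peak-FLOPS formula (the 44 CU configuration is purely his hypothetical):

```python
# Peak FP32 throughput: CUs x 64 lanes x 2 ops/cycle (FMA) x clock.
def tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000

print(tflops(36, 2.23))   # PS5 as shipped          -> ~10.3 TF
print(tflops(44, 1.855))  # hypothetical wider chip -> ~10.4 TF
```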
 

phil_t98

#SonyToo
No, not worried. They will not do the extra work :), and they do not own all devs, so there is more than XGS out there, for now at least.

But sure, we can all look at how devs took the Xbox One X and PS4 Pro and made them target platforms soon after they were out, instead of putting interns on brute-forcing patches out, or the equivalent of that, right? Right?

Yeah, but they make the game and then scale it down; it isn't that much work. If it was, we might see a price hike like PS5 games have had, or the upgrade path from Xbox One to Series consoles costing money. It's free at the moment, btw.
 

Panajev2001a

GAF's Pleasant Genius
They went with 36 CUs and a high clock rate because it's cheaper than going with more CUs. They could have gone with 44 active CUs @ 1855 MHz; it would have given similar performance, but the chip would cost more, and the bigger cooling system of the current PS5 is probably not much more expensive than a smaller one.
I think there are also benefits to a higher clock rate, unless they made architectural sacrifices to get there (think Pentium 4). RDNA2 and RDNA3 both seem to be intentionally clockspeed-optimised designs, and they seem to do well without many compromises (RDNA3 is just rumours and leaks right now). The XSX|S and PS5 GPUs use the same fundamental RDNA2 architecture, so you do not have a design on PS5 that is fundamentally less efficient because it trades efficiency for higher clocks... if you have that tradeoff, you have it on both consoles, but one console went with a wider array of DCUs instead of raising the clock as much as the architecture was designed for. It has pros and cons (same as the number of DCUs per Shader Array and the shared cache each has).

There is shared HW, the same amount of logic across the two GPUs, that benefits from a higher clock rate, and there is also work the same CU might iterate over many times, where clock rate helps too (dynamic code with branches, complex logic doing many steps over the same bucket of data, etc.).
 

Panajev2001a

GAF's Pleasant Genius
Yeah, but they make the game and then scale it down
I do not think that is the best direction, or the direction devs generally take, unless they can afford to sell a second SKU and dedicate, say, a different team to making the downport.

Scaling up, or choosing a target in the middle of the two consoles, is far more efficient. Take a PS4-optimised game at 30 FPS and 1080p… now push native 4K and double the framerate to 60 FPS, increase the resolution of shadows and other effects, use larger, higher-quality textures, and if you have time add effects like ray tracing for AO/shadows or to replace some cube-map reflections. People on the higher-end console still get the best-looking version.
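As a toy illustration of that scale-up approach (platform names aside, every setting name and value here is hypothetical):

```python
# Baseline tuned for the weakest target; stronger targets only turn knobs up.
BASELINE = {"resolution": (1920, 1080), "fps_cap": 30,
            "shadow_res": 1024, "rt_effects": []}

SCALE_UP = {
    "PS4":     {},                                  # ships the baseline as-is
    "PS4 Pro": {"resolution": (2560, 1440)},
    "PS5":     {"resolution": (3840, 2160), "fps_cap": 60,
                "shadow_res": 4096, "rt_effects": ["ao", "shadows"]},
}

def settings_for(platform: str) -> dict:
    # Merge: platform overrides win over the baseline defaults.
    return {**BASELINE, **SCALE_UP[platform]}

print(settings_for("PS5"))
```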

You do not start by designing a CPU-stressful 30 FPS game with a 1440p or 1080p resolution target and TAA/fancy scaling to reach 4K, heavy RT use, effects designed for a 30 FPS frame time on PS5, and a streaming engine optimised around the PS5's SSD speed, and then quickly port it down to a base PS4.

Same thing in mobile apps: nobody starts with an iOS 16 target on an iPhone 13 Pro Max as the baseline and then adds iOS 13 and 14 support and checks how it performs and how the UI looks on an iPhone SE.
 

Panajev2001a

GAF's Pleasant Genius
I had something partially written out (forgot it was there) on my last post but decided not to finish it, so as not to ruffle the feathers of a few here. In the end, it won't change those who are already set in their ways. All second-hand bullshit estimates regardless.

But damn, you must've looked at my post literally 5 seconds after it was posted, 'cause that's how long it took me to edit out that unfinished part 😛
Hehe, I was already replying and it popped up :).
 

phil_t98

#SonyToo
I do not think that is the best direction, or the direction devs generally take, unless they can afford to sell a second SKU and dedicate, say, a different team to making the downport.

Scaling up, or choosing a target in the middle of the two consoles, is far more efficient. Take a PS4-optimised game at 30 FPS and 1080p… now push native 4K and double the framerate to 60 FPS, increase the resolution of shadows and other effects, use larger, higher-quality textures, and if you have time add effects like ray tracing for AO/shadows or to replace some cube-map reflections. People on the higher-end console still get the best-looking version.

You do not start by designing a CPU-stressful 30 FPS game with a 1440p or 1080p resolution target and TAA/fancy scaling to reach 4K, heavy RT use, effects designed for a 30 FPS frame time on PS5, and a streaming engine optimised around the PS5's SSD speed, and then quickly port it down to a base PS4.

Same thing in mobile apps: nobody starts with an iOS 16 target on an iPhone 13 Pro Max as the baseline and then adds iOS 13 and 14 support and checks how it performs and how the UI looks on an iPhone SE.

Programming games for consoles these days is pretty much the same as for PCs.

So you're saying building an Xbox game requires more work than a PlayStation game, right?
 

Panajev2001a

GAF's Pleasant Genius
Like I said, we'll never know for sure. It just makes the most sense to me, because you don't have to overspec and almost immediately revise something
With a mass-market product, that is exactly what you do, both in SW (hence why OS requirements go down, not up, over the console's lifetime) and especially in HW, where fundamental issues in the console design can escape testing. And since the approach Sony took was not exactly mainstream, they were charting new waters (it is very dangerous to pioneer HW changes, even as an evolution of existing approaches).

It is also the same strategy they have built all of their consoles with for over 25 years: high spec, then cost-reduce.
 

Panajev2001a

GAF's Pleasant Genius
Programming games for consoles these days is pretty much the same as for PCs.
It depends. How far down the rabbit hole do you go? How much do you optimise for the HW? How much money do you expect to make back (ROI)? Etc…
So you're saying building an Xbox game requires more work than a PlayStation game, right?
It depends. Maybe you use Unity and do not optimise almost anything for any HW, just tweak some sliders, or leave those sliders for people to tweak their own game with. Again, it depends.

If you want the best out of any given HW and you have a great variety of HW, you will spend more money to do so and to support it… unless you are at C-level and happy selling snake oil ;)… then promise that using Electron to make efficient and well-integrated apps on Windows, macOS, and Linux is the answer ;).
 

Panajev2001a

GAF's Pleasant Genius
But why stop with the Series S holding stuff back? I mean, the XSX and PS5 only have 13.5 GB of RAM for games and 10-12 TF GPUs with OK RT; imagine how much they are holding games back compared to a console with 24 GB of RAM, an RDNA4 20 TFLOPS GPU, etc. etc.
Well, I will imagine my outrage at this imaginary console not being catered for properly ;).
 

phil_t98

#SonyToo
It depends. How far down the rabbit hole do you go? How much do you optimise for the HW? How much money do you expect to make back (ROI)? Etc…

It depends. Maybe you use Unity and do not optimise almost anything for any HW, just tweak some sliders, or leave those sliders for people to tweak their own game with. Again, it depends.

If you want the best out of any given HW and you have a great variety of HW, you will spend more money to do so and to support it… unless you are at C-level and happy selling snake oil ;)… then promise that using Electron to make efficient and well-integrated apps on Windows, macOS, and Linux is the answer ;).

So you avoided answering the question. Do you see it being easier to program games for PS5 than for Xbox Series consoles, given the latter has 2 variants?
 

Loxus

Member
I think there are also benefits to a higher clock rate, unless they made architectural sacrifices to get there (think Pentium 4). RDNA2 and RDNA3 both seem to be intentionally clockspeed-optimised designs, and they seem to do well without many compromises (RDNA3 is just rumours and leaks right now). The XSX|S and PS5 GPUs use the same fundamental RDNA2 architecture, so you do not have a design on PS5 that is fundamentally less efficient because it trades efficiency for higher clocks... if you have that tradeoff, you have it on both consoles, but one console went with a wider array of DCUs instead of raising the clock as much as the architecture was designed for. It has pros and cons (same as the number of DCUs per Shader Array and the shared cache each has).

There is shared HW, the same amount of logic across the two GPUs, that benefits from a higher clock rate, and there is also work the same CU might iterate over many times, where clock rate helps too (dynamic code with branches, complex logic doing many steps over the same bucket of data, etc.).
I also think the I/O complex is the reason for the high clocks.
Being able to transfer data at high speed would be wasted if the GPU isn't fast enough to use that data, resulting in a bottleneck.
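For a rough feel of the numbers, using the published 5.5 GB/s raw SSD rate (the per-frame framing is just illustrative):

```python
# Per-frame streaming budget at the PS5's published 5.5 GB/s raw SSD rate;
# compressed rates are higher still.
RAW_SSD_MBPS = 5.5 * 1024  # MB/s

for fps in (30, 60, 120):
    print(f"{fps} fps -> ~{RAW_SSD_MBPS / fps:.0f} MB of fresh data per frame")
```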
 

Panajev2001a

GAF's Pleasant Genius
So you avoided answering the question. Do you see it being easier to program games for PS5 than for Xbox Series consoles, given the latter has 2 variants?
I am not avoiding answering the question, Miles Edgeworth sir; you are trying to bait things, and I am just saying that if you only want to hear what you want to hear, there is little point to it. You can say the same back and we go our merry way, huh?

I said it depends on more than just having different HW to code for. If you have a single console to work on, you will have a simpler time (read: cheaper and easier to support and exploit throughout the entire generation, unless the HW is really difficult to work with) than if you add another console to the mix. If you add an Xbox One version, you are adding another revenue stream; if you add a PS4 Pro version, you are not, and how PS4 Pro was given brute-force token support kind of shows that.
 

phil_t98

#SonyToo
I am not avoiding answering the question, Miles Edgeworth sir; you are trying to bait things, and I am just saying that if you only want to hear what you want to hear, there is little point to it. You can say the same back and we go our merry way, huh?

I said it depends on more than just having different HW to code for. If you have a single console to work on, you will have a simpler time (read: cheaper and easier to support and exploit throughout the entire generation, unless the HW is really difficult to work with) than if you add another console to the mix. If you add an Xbox One version, you are adding another revenue stream; if you add a PS4 Pro version, you are not, and how PS4 Pro was given brute-force token support kind of shows that.


So basically you won't back up what you said earlier in the thread about it being more work because Xbox has two consoles. lol, carry on being you and being concerned about Xbox.
 
If you want to know how much the Xbox Series has sold, you only need its sales total from the US; that accounts for about 70% of its total sales. I would say 15m is probably the ceiling right now, or thereabouts: 10m in the US and 5m in the rest of the world.
 

Loxus

Member
So basically you won't back up what you said earlier in the thread about it being more work because Xbox has two consoles. lol, carry on being you and being concerned about Xbox.
Technically it's more work, especially when scaling down.

Developers have to tune each console's performance. Sometimes publishers even outsource different game versions/ports to other studios.
 

Panajev2001a

GAF's Pleasant Genius
So basically you won't back up what you said earlier in the thread about it being more work because Xbox has two consoles. lol, carry on being you and being concerned about Xbox.
You keep not reading, then give up on your big gotcha because others do not play along? That is the attitude you are showing here.

You are trying to gotcha other posters with complex questions requiring simple responses, only to drive a narrative. Fill your boots, mate ;).
 

Helghan

Member
Technically it's more work, especially when scaling down.

Developers have to tune each console's performance. Sometimes publishers even outsource different game versions/ports to other studios.
Aren't most games being developed for PC anyway?
 

phil_t98

#SonyToo
You keep not reading, then give up on your big gotcha because others do not play along? That is the attitude you are showing here.

You are trying to gotcha other posters with complex questions requiring simple responses, only to drive a narrative. Fill your boots, mate ;).


There was no gotcha; you claimed it would be harder or require more work but didn't back it up, as usual.
 

phil_t98

#SonyToo
Technically it's more work, especially when scaling down.

Developers have to tune each console's performance. Sometimes publishers even outsource different game versions/ports to other studios.


Now, didn't Xbox devs come out and say it would require little work, as the dev kits did most of it for them?
 

Sosokrates

Report me if I continue to console war
I think there are also benefits to a higher clock rate, unless they made architectural sacrifices to get there (think Pentium 4). RDNA2 and RDNA3 both seem to be intentionally clockspeed-optimised designs, and they seem to do well without many compromises (RDNA3 is just rumours and leaks right now). The XSX|S and PS5 GPUs use the same fundamental RDNA2 architecture, so you do not have a design on PS5 that is fundamentally less efficient because it trades efficiency for higher clocks... if you have that tradeoff, you have it on both consoles, but one console went with a wider array of DCUs instead of raising the clock as much as the architecture was designed for. It has pros and cons (same as the number of DCUs per Shader Array and the shared cache each has).

There is shared HW, the same amount of logic across the two GPUs, that benefits from a higher clock rate, and there is also work the same CU might iterate over many times, where clock rate helps too (dynamic code with branches, complex logic doing many steps over the same bucket of data, etc.).

At the end of the day, the number of calculations per second will be the most beneficial aspect, because when you break it down that's what matters most; it's all just 0s and 1s at the end of the day.

But I will only listen to direct, specific quotes from developers with experience on how much a higher clock rate affects performance.
Until then, it's just people choosing the answer they want to be correct.

My opinion on the subject of clocks is that I don't know, and neither does anyone else unless they have hands-on experience with the hardware.

I know that in the PC space clockspeed does not seem to affect performance beyond what the TFLOP number suggests, so....🤷

I'm sure there are better examples, but the 5700 and 5700 XT used here have the same memory amount, bandwidth, and overall architecture.

 

Panajev2001a

GAF's Pleasant Genius
There was no gotcha; you claimed it would be harder or require more work but didn't back it up, as usual.
Considering the proof and work you provided without reading what others say…

An explanation was provided; you offer a thin burden of proof yourself yet want accounting books from others. Have it your way ;)… as usual.

Your points/proofs were “engines scale” (:rolleyes: as if others argued they do not), that cross-generation games exist and do not have to look like crap (not the point people argue), and that UE5 can run on XSS.

Cost, expected revenue, development, and especially support and maintenance got the usual “do not care how the sausage is made” treatment. I gave you the example of a very, very scalable platform, mobile (iOS for example), with the frameworks and tools teams use to make apps, and something as “simple” as multi-device optimisation and/or multi-OS support, and how the lowest common denominator applies there (aka “holding back” or “playing the game”; from a user PoV it is the same thing). The result is skipping it all and going back on the attack on something else… 🤷‍♂️.
 

Panajev2001a

GAF's Pleasant Genius
At the end of the day, the number of calculations per second will be the most beneficial aspect.
The Pentium 4 back then had a tremendous theoretical number of calculations per second; so did CELL… yet you find people hating both ;).
But I will only listen to direct, specific quotes from developers with experience on how much a higher clock rate affects performance.

Xbox Series X architects:
Andrew Goossen
Just like our friends we're based on the Sea Islands family. We've made quite a number of changes in different parts of the areas. The biggest thing in terms of the number of compute units, that's been something that's been very easy to focus on. It's like, hey, let's count up the number of CUs, count up the gigaflops and declare the winner based on that. My take on it is that when you buy a graphics card, do you go by the specs or do you actually run some benchmarks? Firstly though, we don't have any games out. You can't see the games. When you see the games you'll be saying, "What is the performance difference between them?" The games are the benchmarks. We've had the opportunity with the Xbox One to go and check a lot of our balance. The balance is really key to making good performance on a games console. You don't want one of your bottlenecks being the main bottleneck that slows you down.
Balance is so key to real effective performance. It's been really nice on Xbox One with Nick and his team and the system design folks have built a system where we've had the opportunity to check our balances on the system and make tweaks accordingly. Did we do a good job when we did all of our analysis a couple of years ago and simulations and guessing where games would be in terms of utilisation? Did we make the right balance decisions back then? And so raising the GPU clock is the result of going in and tweaking our balance. Every one of the Xbox One dev kits actually has 14 CUs on the silicon. Two of those CUs are reserved for redundancy in manufacturing. But we could go and do the experiment - if we were actually at 14 CUs what kind of performance benefit would we get versus 12? And if we raised the GPU clock what sort of performance advantage would we get? And we actually saw on the launch titles - we looked at a lot of titles in a lot of depth - we found that going to 14 CUs wasn't as effective as the 6.6 per cent clock upgrade that we did. Now everybody knows from the internet that going to 14 CUs should have given us almost 17 per cent more performance but in terms of actual measured games - what actually, ultimately counts - is that it was a better engineering decision to raise the clock. There are various bottlenecks you have in the pipeline that [can] cause you not to get the performance you want [if your design is out of balance].


Nick Baker
Increasing the frequency impacts the whole of the GPU whereas adding CUs beefs up shaders and ALU.
Andrew Goossen
Right. By fixing the clock, not only do we increase our ALU performance, we also increase our vertex rate, we increase our pixel rate and ironically increase our ESRAM bandwidth. But we also increase the performance in areas surrounding bottlenecks like the drawcalls flowing through the pipeline, the performance of reading GPRs out of the GPR pool, etc. GPUs are giantly complex. There's gazillions of areas in the pipeline that can be your bottleneck in addition to just ALU and fetch performance.
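A quick back-of-the-envelope of the tradeoff Goossen describes, using only the figures from the quote (theoretical ALU math, i.e. exactly the “internet” number he pushes back on):

```python
# Theoretical ALU math only. Goossen's point is that measured games favoured
# the clock bump, because clock also lifts vertex rate, pixel rate, and ESRAM
# bandwidth, not just the shader ALUs.
def gflops(cus: int, clock_mhz: float) -> float:
    return cus * 64 * 2 * clock_mhz / 1000  # 64 lanes x 2 ops (FMA) per CU

base = gflops(12, 800)                      # original Xbox One target
print(f"{gflops(14, 800) / base - 1:.1%}")  # +16.7% ALU from two extra CUs
print(f"{gflops(12, 853) / base - 1:.1%}")  # +6.6%, but applied chip-wide
```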


My opinion on the subject of clocks is that I don't know, and neither does anyone else unless they have hands-on experience with the hardware.

I know that in the PC space clockspeed does not seem to affect performance beyond what the TFLOP number suggests, so....🤷

I'm sure there are better examples, but the 5700 and 5700 XT used here have the same memory amount, bandwidth, and overall architecture.


Again, you are comparing different architectures, different software stacks, etc… Claiming a higher clockspeed is always better is like claiming more CUs is always better, and then we get lost in NVFLOPS vs RDNA FLOPS vs GCN FLOPS, etc., instead of asking why the theoretical peak was not reached or why it did not move the needle in real-world software.

With clocks it is the same thing… a clock increase can cost you tradeoffs that destroy performance (see the Pentium 4 in real-world use) or can improve the performance of the whole chip. It is true that, say, the XSX and PS5 GPUs likely have the same or a similar amount of shared HW logic outside of the CU count, given what both have said (like MS's Hot Chips presentation) and what we know of RDNA architecture configurations from AMD; where the per-clock output is the same, clockspeed matters. What is less easy to gauge from the outside is the impact of that clockspeed advantage: it might not move the needle much, due to other bottlenecks, or it might.
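To make the “shared HW scales with clock” point concrete, a sketch using commonly cited figures (assuming 64 ROPs and a 4-primitive/clock front end for both GPUs; outside estimates, not official spec sheets):

```python
# Fixed-function throughput scales with clock alone, regardless of CU count.
def frontend(clock_ghz: float, rops: int = 64, prims_per_clock: int = 4):
    return {"gpixels_per_s": rops * clock_ghz,          # peak fillrate
            "gprims_per_s": prims_per_clock * clock_ghz} # peak primitive rate

print("PS5:", frontend(2.23))   # fewer CUs, higher clock
print("XSX:", frontend(1.825))  # more CUs, lower clock
```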
 

Sosokrates

Report me if I continue to console war
Considering the proof and work you provided without reading what others say…

An explanation was provided; you offer a thin burden of proof yourself yet want accounting books from others. Have it your way ;)… as usual.

Your points/proofs were “engines scale” (:rolleyes: as if others argued they do not), that cross-generation games exist and do not have to look like crap (not the point people argue), and that UE5 can run on XSS.

Cost, expected revenue, development, and especially support and maintenance got the usual “do not care how the sausage is made” treatment. I gave you the example of a very, very scalable platform, mobile (iOS for example), with the frameworks and tools teams use to make apps, and something as “simple” as multi-device optimisation and/or multi-OS support, and how the lowest common denominator applies there (aka “holding back” or “playing the game”; from a user PoV it is the same thing). The result is skipping it all and going back on the attack on something else… 🤷‍♂️.

What kind of cutbacks do you expect to see in games because of the XSS?
 

Panajev2001a

GAF's Pleasant Genius
What kind of cutbacks do you expect to see in games because of the XSS?
How do you expect the XSX to be fully maxed out thanks to the XSS?

What cutbacks do you expect in PS5 titles because of the PS4? That is not an argument that softens the cost of multiple-HW support; it is a “well, they succeeded despite it, not because of it” argument at best.
 

Woopah

Member
It's all guessing anyway: NPD, Ampere, VGChartz. Has anyone ever explained their process for getting the figures? For all we know they might have like 1% actual WW data and just extrapolate from that 🤷‍♂️
VGChartz - mostly guesses based on some contacts with a few retailers around the world.
NPD - a professional tracking firm that receives data directly from retailers and publishers for the US, then extrapolates to cover the retailers it doesn't track.
Ampere - receives data from professional trackers like NPD and Media Create, then makes estimates for countries that don't have trackers.
Why do you think that Twitter account is more reliable than Ampere, a firm with industry connections and sales data?

Zhuge and Welfare are known Xbox cheerleaders; they don't have access to better data. They look at Amazon sales charts and shit.
Zhuge is a professional industry analyst who works for a firm with industry connections and sales data. He's not an “Xbox cheerleader”.
 

Sosokrates

Report me if I continue to console war
The Pentium 4 back then had a tremendous theoretical number of calculations per second; so did CELL… yet you find people hating both ;).


Xbox Series X architects:




Again, you are comparing different architectures, different software stacks, etc… Claiming a higher clockspeed is always better is like claiming more CUs is always better, and then we get lost in NVFLOPS vs RDNA FLOPS vs GCN FLOPS, etc., instead of asking why the theoretical peak was not reached or why it did not move the needle in real-world software.

With clocks it is the same thing… a clock increase can cost you tradeoffs that destroy performance (see the Pentium 4 in real-world use) or can improve the performance of the whole chip. It is true that, say, the XSX and PS5 GPUs likely have the same or a similar amount of shared HW logic outside of the CU count, given what both have said (like MS's Hot Chips presentation) and what we know of RDNA architecture configurations from AMD; where the per-clock output is the same, clockspeed matters. What is less easy to gauge from the outside is the impact of that clockspeed advantage: it might not move the needle much, due to other bottlenecks, or it might.

Interesting stuff, but that still does not really give us an idea of how the PS5's higher clock will benefit it.

If higher clocks were so beneficial, why aren't all consoles made with this mentality?
Why did the PS4 go with 18 CUs at 800 MHz? If higher clocks are better, why not go with 14 CUs at 1100 MHz? Same with the Pro, 1X, XSS, XSX…

Anyway, I don't think higher clocks will give considerable performance advantages over lower clocks and more CUs, unless you go to extremes (500 CUs @ 50 MHz).

A 5700 XT can give similar performance to a PS5, so the advantage of the higher clock rate can't be that drastic.
 

clampzyn

Member
How do you expect the XSX to be fully maxed out thanks to the XSS? Hand-waving ensues while accusing others of doing the same, but hey, keep living in the land where how software is fundamentally written is not your concern and we will both be happy.
Hell nah, the XSS will not hold the XSX back this gen. If I were a dev company, I would give the best performance and visuals to the best platform I am making the game for, which on Xbox is the XSX, and then downscale for the XSS to at least decent performance, at least if the devs are not first party. People are thinking about it the other way around; do you think next-gen games will target the XSS first and then be upscaled for the XSX? It will be XSX first, then XSS optimization.

Just like this game:

Made for current-gen consoles only; the Series S didn't hold back the Series X here, and it performed well even though it doesn't have RT.
 

Panajev2001a

GAF's Pleasant Genius
Hell nah, the XSS will not hold the XSX back this gen. If I were a dev company, I would give the best performance and visuals to the best platform I am making the game for, which on Xbox is the XSX, and then downscale for the XSS to at least decent performance, at least if the devs are not first party. People are thinking about it the other way around; do you think next-gen games will target the XSS first and then be upscaled for the XSX? It will be XSX first, then XSS optimization.

Just like this game:

Made for current-gen consoles only; the Series S didn't hold back the Series X here, and it performed well even though it doesn't have RT.

Yeah, unless you want a world of pain, you will not start with the max performance specs and RAM and then figure out how to make it work on the lower-end HW, unless you have a dedicated team to do what is essentially a port (see what Sony is using Nixxes, and the extra time between the console release and the PC one, for).
 

Mr Moose

Member
Interesting stuff, but that still does not really give us an idea of how the PS5's higher clock will benefit it.

If higher clocks were so beneficial, why aren't all consoles made with this mentality?
Why did the PS4 go with 18 CUs at 800 MHz? If higher clocks are better, why not go with 14 CUs at 1100 MHz? Same with the Pro, 1X, XSS, XSX…

Anyway, I don't think higher clocks will give considerable performance advantages over lower clocks and more CUs, unless you go to extremes (500 CUs @ 50 MHz).

A 5700 XT can give similar performance to a PS5, so the advantage of the higher clock rate can't be that drastic.
Higher clocks, more heat. Did you want the PS4 to sound even worse than it did?
(image: F100 jet engine from an F-15)
 

clampzyn

Member
Yeah, unless you want a world of pain, you will not start with the max performance specs and RAM and then figure out how to make it work on the lower-end HW, unless you have a dedicated team to do what is essentially a port (see what Sony is using Nixxes, and the extra time between the console release and the PC one, for).
That's why MS is shifting to a much more scalable GDK. Sony needs more time to port their games to PC because of their SDK, while MS is leaning towards a GDK that targets both console and PC. I guess upcoming Xbox games will be built to scale, as that is the right path to take, since we are going to see much more powerful consoles next gen.
 

Panajev2001a

GAF's Pleasant Genius
Interesting stuff, but that still does not really give us an idea of how the PS5's higher clock will benefit it.

If higher clocks were so beneficial, why aren't all consoles made with this mentality?
Why did the PS4 go with 18 CUs at 800 MHz? If higher clocks are better, why not go with 14 CUs at 1100 MHz? Same with the Pro, 1X, XSS, XSX…

Anyway, I don't think higher clocks will give considerable performance advantages over lower clocks and more CUs, unless you go to extremes (500 CUs @ 50 MHz).

A 5700 XT can give similar performance to a PS5, so the advantage of the higher clock rate can't be that drastic.
It is not an easy thing to know from the outside. Maybe Sony needed the extra clocks only to make up for the smaller SoC, but then they invested the area in the SSD I/O… and maybe the lack of some features forced them to push the shared Geometry Engine harder, and thus they were able to claw back performance against the XSX that way.
Maybe the XSX has higher numbers but its CU arrangement is perhaps less efficient (more CUs feeding off the same shared Shader Array cache), and Sony thought they could do better by improving clocks…

PS4 vs Xbox One, the clock supremacy did not flip the tables, but is that because the clock speed bump did not matter? Was it because the number of CUs was below the threshold it needed to be? Was it because of other architectural differences and bottlenecks?

Three systems his team (Cerny's) designed, and three very easy to develop for, relatively bottleneck-free, strong pieces of HW. It does not mean the PS5 is perfect, but it does give a bit of faith in the choices and constraints he went with.
 

Panajev2001a

GAF's Pleasant Genius
That's why MS is shifting to a much more scalable GDK. Sony needs more time to port their games to PC because of their SDK, while MS is leaning towards a GDK that targets both console and PC. I guess upcoming Xbox games will be built to scale, as that is the right path to take, since we are going to see much more powerful consoles next gen.
The GDK is not magical. You can take a lead platform and port to the others, or design for scalability, and the best way to do that is to stay as close as possible to your minimum common denominator and give yourself levers and knobs to twist when you scale up.
On mobile you have the mother of all scalable SDKs, and it does not make this easy.
 

clampzyn

Member
A 5700 XT can give similar performance to a PS5, so the advantage of the higher clock rate can't be that drastic.
Why is it a pain to read that people still think the 5700 XT, an RDNA1 GPU, can give similar performance to the PS5? A lot of people are vastly underestimating PS5 / Series X hardware, really, LOL. A 5700 XT or 2070S will perform the same on a game that doesn't need much performance; for example, if a game runs at 4K60 on a 5700 XT and also at 4K60 on PS5, does that mean the PS5 is only equal to a 5700 XT? Hell no; we don't know how much performance headroom is left on the consoles, which was shown when Spider-Man Remastered was ported to PS5: it was locked at 4K30 (no RT) in fidelity mode, and a later patch lets it run at 4K 45-55 FPS with RT.
 