You are correct: technically speaking, no modern GPU runs at fixed clocks all the time. I was only thinking of demanding gaming scenarios where the GPU needs to run at max clocks for an extended period. The XSX will certainly offer a sustained level of performance in such a scenario.
That's the idea. It still comes down to how good the cooling system is, but based on the One X I'm confident they can handle it.
It would be ironic if the Series X turned out to be the louder of the two systems, even if only slightly. But again, it just comes down to how good the cooling is, and MS already nailed that with the One X. Sony is the one who needs to prove they have a very solid cooling system this time that'll keep things quiet; I'm fairly confident they will, though.
Dat PS5 is a big boy xD.
Digital Foundry vs. the Xbox One architects
"There's a lot of misinformation out there and a lot of people who don't get it. We're actually extremely proud of our design."
Article by Richard Leadbetter, Technology Editor, Digital Foundry
Updated on 24 September 2013
Microsoft says that game performance doesn't scale linearly with the number of compute units you have.
"Every one of the Xbox One dev kits actually has 14 CUs on the silicon. Two of those CUs are reserved for redundancy in manufacturing, but we could go and do the experiment - if we were actually at 14 CUs what kind of performance benefit would we get versus 12? And if we raised the GPU clock what sort of performance advantage would we get? And we actually saw on the launch titles -
we looked at a lot of titles in a lot of depth - we found that going to 14 CUs wasn't as effective as the 6.6 per cent clock upgrade that we did."
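The trade-off in that quote is easy to sanity-check on paper. Here's a quick back-of-the-envelope comparison using the publicly known Xbox One figures (12 CUs at the upclocked 853 MHz versus a hypothetical 14 CUs at the original 800 MHz, 64 shaders per GCN CU); the numbers are my own assumptions, not from the article:

```python
# Rough theoretical-FLOPs comparison of the two options MS describes.
# Figures (12 CUs @ 853 MHz vs 14 CUs @ 800 MHz, 64 shaders per GCN CU)
# are the publicly known Xbox One numbers, not taken from the quote itself.
SHADERS_PER_CU = 64
FLOPS_PER_SHADER_PER_CLOCK = 2  # fused multiply-add counts as 2 ops

def tflops(cus, mhz):
    return cus * SHADERS_PER_CU * FLOPS_PER_SHADER_PER_CLOCK * mhz * 1e6 / 1e12

shipped = tflops(12, 853)   # upclocked retail config
alt     = tflops(14, 800)   # hypothetical 14-CU config at the old clock

print(f"12 CUs @ 853 MHz: {shipped:.3f} TFLOPs")  # 1.310 TFLOPs
print(f"14 CUs @ 800 MHz: {alt:.3f} TFLOPs")      # 1.434 TFLOPs
```

On paper the 14-CU option wins by roughly 9 per cent, yet MS measured the 6.6 per cent clock bump as more effective, presumably because a higher clock speeds up the whole GPU (front end, ROPs, caches), not just the shader array.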
But now clock doesn't matter, only the number of CUs.
Both matter.
Also keep in mind CU saturation on 1st-gen GCN was... kinda terrible. That's part of the reason Sony made the customizations they made in PS4's GPU, and probably why MS upped the clocks on XBO. Both were valid choices for the respective designs, but RDNA1's (let alone RDNA2's) CU utilization absolutely shits all over GCN's.
I honestly wasn't sure when I read your post, since you seemed to add customisations on top of the TFLOP delta between the two machines, which implied that the PS5 has none.
Happy to get the clarification, though.
As to the actual customisations to the PS5: you list several that, just like you, I would consider sophisticated BS right now.
I expect a significantly different GE, since the GE that comes with the current RDNA2 design out of the box cannot do culling/prioritisation in the way Cerny talked about in his speech. I believe this comes with both pros and cons. For developers utilising it, it will give significant advantages to the entire rendering pipeline; however, I have question marks over how a standard PC-centric engine will manage it (i.e. it might be a disadvantage in multi-platform titles, with some serious eye candy in first-party titles). How well Sony has developed the API will determine how it fares in multi-platform titles.
Secondly, a lot points towards customisations on the cache/memory side of things. Early information hinted at some unusual soldering of memory chips to the board. This is one of the things that intrigues me the most, but also one of the things I am most uncertain about. The person with dev-kit access was rather specific, though.
Thirdly, there is the RT. Sony has been very tight-lipped here and hinted that they did not go with the standard AMD approach. Some even interpreted that as the PS5 not having RT at all. Now that we know it has RT, the question is what the silicon looks like. I assume the original information is correct and it is not the standard AMD approach. That begs the question, though: what is it, if not that? Of course, that early information might be wrong and they may be sitting on the bog-standard RDNA2 RT set-up.
And then comes the API.
Hopefully we will get to know more soon.
Yeah, the GE customizations Sony have made have already been pretty much confirmed. A while ago I figured from the Matt engineer guy's statements that Sony may have made some of those GE customizations to move some aspects of VRS further ahead in the graphics pipeline. That seems to be the extent of it, but again I say "VRS" loosely, as that's MS's implementation of such a technique. In Sony's case it would be something more related to foveated rendering, and these customizations might have been done for the next-generation PSVR while also benefiting non-VR games.
As far as the cache and memory stuff is concerned, I don't think there's anything wild there we don't already know. They're using GDDR6. I know they have a patent for stacked RAM, but that could've been for a hypothetical system design using HBM, which we know PS5 is not using; and while there are some prototype stacked GDDR6 designs around, there's nothing in a commercial sense. If Sony took such an approach it's likely for cooling purposes, because if it gave a massive performance advantage I think they'd have mentioned it in Road to PS5, even if they didn't spend a great deal of time on the GPU in that presentation. There's also the SRAM cache on the I/O block, the cache scrubbers, etc., but we already knew of this stuff by March. There isn't much else of high probability left for them to reveal in this area that would be a big surprise or anything outside of the conventional, except maybe some increase to GPU or CPU cache sizes, I guess.
Ray-tracing, I feel, is kind of similar. Where did Sony hint that their approach isn't AMD's standard? I must've missed it. Truth is, though, we don't even know exactly how AMD's RT works! We just know it's based somewhat on the CUs, which is where some of the figures come from. The way MS have described some of their DXR RT also seems to indicate they might've made some alterations there beyond whatever the standard is, but to what extent is up for debate, as usual.
Sony should be doing a teardown either later this month or in early September, going by rumors, so yeah, I hope we get a (further) deep dive into the architecture around then, including exactly how the variable frequency works in real-world gameplay scenarios. And of course there's still the Hot Chips presentation for MS on the 17th, so not long to go on that end.
Well, 1st: integrated graphics... lol, kidding. But as for that, we know that in the past they kept the frequency more or less fixed and let the power fluctuate. We also, for the most part, know that you can in fact change the frequency multiple times within a scene. I personally don't think PC is the right comparison, and it's also worth noting that since this is relatively new and different, it's all theory until it's in front of us. However, based on what we know, they have said they keep the power value at a set amount and don't let it vary, and went with a varying frequency based on load (i.e. what the scene requires).
So I admit it's certainly theoretical until we have hands-on experience, but I don't think it's improbable. As I said, I see it as them allowing devs to use what their engine needs while trying to get as close as possible to that theoretical number everyone loves throwing around. It's just new, and all we can really do is theorize. It's just, well, different.
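The fixed-power, variable-frequency idea described above can be sketched in a few lines. This is purely my illustration of the concept, not Sony's actual logic; the power model, wattage, and clock floor are made-up assumptions (only the 2230 MHz cap is a published figure):

```python
# Toy sketch of a fixed-power-budget frequency governor: power is capped
# and the clock varies with load, as the variable-frequency approach is
# described. All numbers and the cubic power model are illustrative.
POWER_BUDGET_W = 200.0
F_MAX_MHZ = 2230.0  # PS5's advertised GPU frequency cap
F_MIN_MHZ = 1500.0  # hypothetical floor, chosen for the example

def estimated_power(freq_mhz, activity):
    # Dynamic power scales roughly with f * V^2; with voltage tracking
    # frequency, a cubic-ish curve is a common approximation. 'activity'
    # (0..1) models how hard the current workload drives the chip.
    return activity * 220.0 * (freq_mhz / F_MAX_MHZ) ** 3

def pick_frequency(activity):
    # Run at the cap unless the workload would blow the power budget;
    # otherwise back off just enough to stay inside it.
    freq = F_MAX_MHZ
    while freq > F_MIN_MHZ and estimated_power(freq, activity) > POWER_BUDGET_W:
        freq -= 10.0  # small downclock step
    return freq

print(pick_frequency(0.5))  # light scene: stays at the cap
print(pick_frequency(1.0))  # worst-case scene: drops a few per cent
```

The point of the sketch is that light scenes sit at max clock while only pathological workloads shave a few per cent off, which matches how the approach has been described.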
I do apologize if I've left some stuff out; today is a busy one and I'm just trying to chime in during my free time. Or perhaps I just misunderstood something you asked.
Fair points, and yeah tell me about IG xD. Upgrades are imminent once I settle on a build worth pursuing.
There's one very critical thing with Sony's approach I don't think has been answered yet: just how sophisticated is the monitoring software, and how does it figure out the power load of game code to determine what components get what amount of power? Is it fully reactive, waiting for the code to get crunched by the processor components, drawing the power, and then using some kind of flag exceptions or whatever to know the power budget is being exceeded?
Dunno; it seems like they would need some kind of custom microcontroller and current sensors constantly noting power draw, with some fixed logic to regulate the PSU, gating the amount of current sent through the system, or something like that :S. Like I said, this isn't an area I'm well-versed in, so I'm spitballing.
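For what it's worth, the Road to PS5 talk suggested it isn't sensor-based at all: the chip estimates power from workload activity rather than measuring it physically, which is what makes the behaviour deterministic across consoles. A toy sketch of that idea; the counter names and weights here are entirely made up:

```python
# Toy sketch of model-based power estimation from activity counters, the
# approach described in Road to PS5 (no physical current sensors, so the
# same workload behaves identically on every console). The counter names
# and per-counter wattage weights below are invented for illustration.
ACTIVITY_WEIGHTS_W = {
    "alu_ops": 60.0,      # shader ALU activity
    "mem_traffic": 45.0,  # GDDR6 bus activity
    "rops": 25.0,         # raster back-end activity
    "idle_base": 70.0,    # static/base power
}

def estimate_power(counters):
    """Estimate watts from normalised (0..1) activity counters.

    Because the estimate comes from counters rather than sensors, it is
    deterministic: no dependence on chip quality or ambient temperature.
    """
    watts = ACTIVITY_WEIGHTS_W["idle_base"]
    for name in ("alu_ops", "mem_traffic", "rops"):
        watts += ACTIVITY_WEIGHTS_W[name] * counters.get(name, 0.0)
    return watts

# A heavy scene saturating the ALUs and memory bus:
print(estimate_power({"alu_ops": 1.0, "mem_traffic": 0.9, "rops": 0.7}))
```

A reactive, sensor-based design would instead have to chase the measured draw after the fact, which is exactly the variability between individual consoles this model-based approach avoids.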
VRS isn't an MS-branded term; Nvidia used that crap first.
A few nVidia GPUs also use ExecuteIndirect, but that is an MS-derived technology.
Nvidia GPUs were first to market and able to leverage the tech because they built the required hardware to implement the feature, but that doesn't mean nVidia created that feature set.