
Xbox June GDK: More dev memory for Series S and more.

Hoddi

Member
Zen 2 is not that great tho, and the Zen 2 CPUs in the consoles are laptop grade ones.

no game should ever struggle to hit 60fps on a Zen 2 CPU tho, so for that they'll be fine
I think people tend to overstate these differences. Even a full 33% performance difference might sound like a lot, but it's only the difference between 60fps and 80fps.
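To put rough numbers on that, here's a quick back-of-the-envelope sketch (the 33% figure is just the hypothetical from above, not a benchmark):

```python
# Convert an fps difference into the frame-time difference the player
# actually experiences. The same 33% uplift shaves fewer milliseconds
# per frame the higher the baseline framerate already is.

def frame_time_ms(fps):
    """Milliseconds available to produce one frame at the given fps."""
    return 1000.0 / fps

for base_fps in (30, 60):
    faster_fps = base_fps * 1.33              # hypothetical 33% uplift
    saved = frame_time_ms(base_fps) - frame_time_ms(faster_fps)
    print(f"{base_fps}fps -> {faster_fps:.0f}fps saves {saved:.1f}ms per frame")
```

At 30fps the uplift is worth about 8ms per frame; at 60fps only about 4ms, which is part of why the same percentage feels smaller the faster you already run.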
 
Zen 2 is not that great tho, and the Zen 2 CPUs in the consoles are laptop grade ones.

no game should ever struggle to hit 60fps on a Zen 2 CPU tho, so for that they'll be fine
why did you argue with me before when I said Zen 2 wasn't great, though? Zen 2 is bad for gaming, especially when ray tracing is in use
 
I disagree with you; we see how CPU-intensive some of these games are, like Spider-Man
Well yeah, there will always be CPU-intensive games. All "CPU intensive" means is that games are pushing the CPU closer to its limits; that has nothing to do with power, because the moment you add more power (see Xbox One vs Series X) devs immediately saturate it. What matters is the relative power of the component, and Zen 2 is great for what it is: considering the PS4 has shitty Jaguar cores and the Switch is stuck with mobile ARM cores from 2015, Zen 2, a 2019 product, is a blessing. The RAM situation and RT situation are pretty bad on the new consoles though. No matter how you slice it, going from 12GB on the One X to 16GB on the Series X is an anemic increase when consoles went from less than 1GB on the PS3/X360 to 8GB in the PS4 gen.
 
Well yeah, there will always be CPU-intensive games. All "CPU intensive" means is that games are pushing the CPU closer to its limits; that has nothing to do with power, because the moment you add more power (see Xbox One vs Series X) devs immediately saturate it. What matters is the relative power of the component, and Zen 2 is great for what it is: considering the PS4 has shitty Jaguar cores and the Switch is stuck with mobile ARM cores from 2015, Zen 2, a 2019 product, is a blessing. The RAM situation and RT situation are pretty bad on the new consoles though. No matter how you slice it, going from 12GB on the One X to 16GB on the Series X is an anemic increase when consoles went from less than 1GB on the PS3/X360 to 8GB in the PS4 gen.
this is still underselling the problem by a lot
 

Astral Dog

Member
Well yeah, there will always be CPU-intensive games. All "CPU intensive" means is that games are pushing the CPU closer to its limits; that has nothing to do with power, because the moment you add more power (see Xbox One vs Series X) devs immediately saturate it. What matters is the relative power of the component, and Zen 2 is great for what it is: considering the PS4 has shitty Jaguar cores and the Switch is stuck with mobile ARM cores from 2015, Zen 2, a 2019 product, is a blessing. The RAM situation and RT situation are pretty bad on the new consoles though. No matter how you slice it, going from 12GB on the One X to 16GB on the Series X is an anemic increase when consoles went from less than 1GB on the PS3/X360 to 8GB in the PS4 gen.
These days RAM is not something that limits game development in the same way; the potential benefit vs. cost is unlike the PS2 or PS3 days, so business-wise it's just not a priority anymore. We aren't going to see such a big jump again because the limitations come from other components, like the CPU or slow hard drives, which they tried to improve this generation for developers

Sure, more RAM to max out 4K textures is very different from game developers asking for more megabytes(!) to fit simple animations, complex level design, playable characters or higher-quality environments; now they are asking first-party companies to replace the CPU or hard drive instead.

would be nice if developers weren't tied to the Xbox One S configuration and could take advantage of those 13-16GB in the new models, but still...

Last generation, devs weren't really RAM-starved; they were CPU-starved
 
These days RAM is not something that limits game development in the same way; the potential benefit vs. cost is unlike the PS2 or PS3 days, so business-wise it's just not a priority anymore. We aren't going to see such a big jump again because the limitations come from other components, like the CPU or slow hard drives, which they tried to improve this generation for developers

Sure, more RAM to max out 4K textures is very different from game developers asking for more megabytes(!) to fit simple animations, complex level design, playable characters or higher-quality environments; now they are asking first-party companies to replace the CPU or hard drive instead.

would be nice if developers weren't tied to the Xbox One S configuration and could take advantage of those 13-16GB in the new models, but still...

Last generation, devs weren't really RAM-starved; they were CPU-starved
Lol nope. The only reason we didn't get more RAM is that the price became so expensive. If you want more stuff in memory (and you will always need more stuff if you want games to expand) then you will need more RAM. SSDs are incredibly slow compared to RAM; they do not directly replace it. If enhanced consoles come out, and when next gen arrives, you will look back at your comment and laugh.
 
I think people tend to overstate these differences. Even a full 33% performance difference might sound like a lot, but it's only the difference between 60fps and 80fps.
Yeah, and the difference between 30fps and 21... I think something like a 3800X is far more powerful than the crippled Renoir chips in the consoles. Not only does the 3800X have more resources like cache, but it also runs at much higher clocks while gaming than the PS5's maximum 3.5GHz. That said, Zen 2 is great compared to the shit Jaguar cores in the PS4, which were awful even in 2012, before the PS4 came out.
 

REDRZA MWS

Member
Currently running an i9 9900K OC'd, an EVGA 2080 Ti, and 32GB of G.Skill Ripjaws RAM. When Nvidia's 40-series cards hit I'm getting the 4090 and jumping up to 64GB of RAM. Can't wait.
 

Hoddi

Member
Yeah, and the difference between 30fps and 21... I think something like a 3800X is far more powerful than the crippled Renoir chips in the consoles. Not only does the 3800X have more resources like cache, but it also runs at much higher clocks while gaming than the PS5's maximum 3.5GHz. That said, Zen 2 is great compared to the shit Jaguar cores in the PS4, which were awful even in 2012, before the PS4 came out.
Fair enough. But my point was that a game running at 21fps won’t look very different from one that runs at 30fps. The difference is too small to be meaningful.

Remember that 33% is also the difference between the PS4 and PS4 Pro CPUs. It never amounted to much other than slightly improved draw distances, and the difference between Zen 2 and Zen 3 would likely have been smaller than that.
 
Fair enough. But my point was that a game running at 21fps won’t look very different from one that runs at 30fps. The difference is too small to be meaningful.

Remember that 33% is also the difference between the PS4 and PS4 Pro CPUs. It never amounted to much other than slightly improved draw distances, and the difference between Zen 2 and Zen 3 would likely have been smaller than that.
Well, 30fps is the industry's standard for minimum acceptable motion fidelity. Anything less than 30fps and the stuttering becomes too much for many.

As for the PS4 vs Pro argument, you're missing one key component. Last-gen games were designed for the base consoles as the lowest common denominator, so any additional power in the Pro and One X would only translate to higher fps (most commonly a 30fps cap, where drops below it would be less frequent on the One X, etc.). The reason is that the work the CPU does (world simulation, physics, AI, etc.) can't scale with stronger hardware, because changing those means fundamentally changing the game (imagine a Mario game having different physics on more powerful hardware). So the easiest thing to scale is fps, which is what you saw last gen with the One X. This gen will be the same, but if all three consoles had Zen 3 instead of Zen 2, the kind of world sim, physics and AI we would see in games would improve, because games are being made for that Series S minimum spec.
 

Astral Dog

Member
Lol nope. The only reason we didn't get more RAM is that the price became so expensive. If you want more stuff in memory (and you will always need more stuff if you want games to expand) then you will need more RAM. SSDs are incredibly slow compared to RAM; they do not directly replace it. If enhanced consoles come out, and when next gen arrives, you will look back at your comment and laugh.
Yeah, that's what I mean; the price is no longer worth that much more memory

But developers also aren't demanding more of it in comparison to the SSD and CPU, so it was cut first; it's just not as essential as before
 

Hoddi

Member
Well, 30fps is the industry's standard for minimum acceptable motion fidelity. Anything less than 30fps and the stuttering becomes too much for many.

As for the PS4 vs Pro argument, you're missing one key component. Last-gen games were designed for the base consoles as the lowest common denominator, so any additional power in the Pro and One X would only translate to higher fps (most commonly a 30fps cap, where drops below it would be less frequent on the One X, etc.). The reason is that the work the CPU does (world simulation, physics, AI, etc.) can't scale with stronger hardware, because changing those means fundamentally changing the game (imagine a Mario game having different physics on more powerful hardware). So the easiest thing to scale is fps, which is what you saw last gen with the One X. This gen will be the same, but if all three consoles had Zen 3 instead of Zen 2, the kind of world sim, physics and AI we would see in games would improve, because games are being made for that Series S minimum spec.
I don't really follow. It doesn't matter if Zen 2 or Zen 3 is the baseline and it has little to do with Series S being the min spec. The question is whether these consoles would have seen meaningful improvements from Zen 3. And my argument is that they wouldn't because the difference is too small.

And this applies twofold when you factor in that most games are going to be GPU bound anyway.
 
I don't really follow. It doesn't matter if Zen 2 or Zen 3 is the baseline and it has little to do with Series S being the min spec. The question is whether these consoles would have seen meaningful improvements from Zen 3. And my argument is that they wouldn't because the difference is too small.

And this applies twofold when you factor in that most games are going to be GPU bound anyway.
This is what you said: "But my point was that a game running at 21fps won’t look very different from one that runs at 30fps. The difference is too small to be meaningful.

Remember that 33% is also the difference between the PS4 and PS4 Pro CPUs. It never amounted to much other than slightly improved draw distances, and the difference between Zen 2 and Zen 3 would likely have been smaller than that."

1st point: No, there is actually a big difference between 30fps and 21fps; 21fps is below the industry standard of acceptable fps.

2nd point: A 33% faster CPU is a big deal. Those gains are so good that you don't expect new CPUs to be that much faster than the CPUs they are replacing. Think of it simply as getting your console to do 33% more work in the same amount of time.

3rd point: You say the Pro's CPU gains didn't amount to much over the base PS4. To which I replied: "you're missing one key component. Last-gen games were designed for the base consoles as the lowest common denominator, so any additional power in the Pro and One X would only translate to higher fps". So, unlike with GPUs, you can't just scale CPU tasks to do more because there's a more powerful CPU under the hood all of a sudden. Why? "The reason is that the work the CPU does (world simulation, physics, AI, etc.) can't scale with stronger hardware, because changing those means fundamentally changing the game (imagine a Mario game having different physics on more powerful hardware). So the easiest thing to scale is fps, which is what you saw last gen with the One X."

The work the CPU does can't scale anywhere near as simply as the work the GPU does. Higher resolutions, more particle effects, higher-quality ray tracing, better anti-aliasing, higher-resolution textures, higher-quality shadows: all of these scale easily and do not affect gameplay. But scaling the AI, the physics or the simulation of the game world would change the fundamental gameplay of a game. This is why, even if you have a stronger CPU running the same game, most of those things can't be changed or improved. Some things can, like the number of pedestrians, but most are too complicated to scale with a stronger CPU.
 
Yeah, that's what I mean; the price is no longer worth that much more memory

But developers also aren't demanding more of it in comparison to the SSD and CPU, so it was cut first; it's just not as essential as before
The CPU was so awful before that devs were indeed thirsty for better, but keep in mind consoles traditionally have powerful CPUs; the PS4 and Xbox One were the outliers. SSD is an easy one too; PCs have had them since 2006 on the high end. By the time the PS4 launched, even my modest PC had an SSD; the leap in performance between an HDD and an SSD is massive.

That said, if you were to upgrade the consoles today, a faster SSD would be a super low priority due to diminishing returns; the CPU is strong as well, although it can always be better. RAM and RT performance are the biggest weaknesses of the current-gen consoles. If the price of RAM becomes cheaper than it was in 2019-2020, then expect significantly more RAM in enhanced current-gen consoles, should they launch.
 

Hoddi

Member
This is what you said: "But my point was that a game running at 21fps won’t look very different from one that runs at 30fps. The difference is too small to be meaningful.

Remember that 33% is also the difference between the PS4 and PS4 Pro CPUs. It never amounted to much other than slightly improved draw distances, and the difference between Zen 2 and Zen 3 would likely have been smaller than that."

1st point: No, there is actually a big difference between 30fps and 21fps; 21fps is below the industry standard of acceptable fps.

2nd point: A 33% faster CPU is a big deal. Those gains are so good that you don't expect new CPUs to be that much faster than the CPUs they are replacing. Think of it simply as getting your console to do 33% more work in the same amount of time.

3rd point: You say the Pro's CPU gains didn't amount to much over the base PS4. To which I replied: "you're missing one key component. Last-gen games were designed for the base consoles as the lowest common denominator, so any additional power in the Pro and One X would only translate to higher fps". So, unlike with GPUs, you can't just scale CPU tasks to do more because there's a more powerful CPU under the hood all of a sudden. Why? "The reason is that the work the CPU does (world simulation, physics, AI, etc.) can't scale with stronger hardware, because changing those means fundamentally changing the game (imagine a Mario game having different physics on more powerful hardware). So the easiest thing to scale is fps, which is what you saw last gen with the One X."

The work the CPU does can't scale anywhere near as simply as the work the GPU does. Higher resolutions, more particle effects, higher-quality ray tracing, better anti-aliasing, higher-resolution textures, higher-quality shadows: all of these scale easily and do not affect gameplay. But scaling the AI, the physics or the simulation of the game world would change the fundamental gameplay of a game. This is why, even if you have a stronger CPU running the same game, most of those things can't be changed or improved. Some things can, like the number of pedestrians, but most are too complicated to scale with a stronger CPU.
I'm not sure where the confusion is, but I was only drawing a hypothetical example. Using fps as a measure was supposed to make it simpler, and the difference wouldn't be 33% to start with. I was exaggerating it in order to drive the point home.

In actual practical terms, 33% would mean the difference between having 50k draw calls per frame and 66k draw calls per frame. Would you be able to tell?
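Another way to look at that hypothetical (the 50k/66k figures are the made-up example above, not profiling data):

```python
# At a fixed 30fps target the CPU has ~33.3ms per frame. More draw
# calls just squeeze the budget available to issue each one; the
# raw count is invisible to the player either way.

FRAME_BUDGET_MS = 1000.0 / 30                 # 30fps frame budget

for draw_calls in (50_000, 66_000):
    per_call_us = FRAME_BUDGET_MS * 1000 / draw_calls
    print(f"{draw_calls:,} draw calls -> {per_call_us:.2f}us of budget each")
```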
 

Duchess

Member
You know, one of the things that devs could do with the extra memory that the SDK is about to offer is ... nothing. Don't try to use it for extra textures, geometry, sound, etc.; just ignore it. Leaving that memory just sitting there, available, should increase performance due to there being more contiguous memory available.

The system won't have to work with memory that's as heavily fragmented, and so can respond quicker.

(that's all as I understand it, though it may no longer hold true)
 

kikkis

Member
You know, one of the things that devs could do with the extra memory that the SDK is about to offer is ... nothing. Don't try to use it for extra textures, geometry, sound, etc.; just ignore it. Leaving that memory just sitting there, available, should increase performance due to there being more contiguous memory available.

The system won't have to work with memory that's as heavily fragmented, and so can respond quicker.

(that's all as I understand it, though it may no longer hold true)
Games don't usually do memory allocations per frame; they just get the pages they need up front and handle memory manually themselves within that "virtual memory"
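A toy sketch of that pattern (a bump/arena allocator; the class name and sizes are made up for illustration): the game reserves its pages once, then hands out ranges by hand and resets them wholesale instead of calling the system allocator per frame.

```python
# Minimal bump/arena allocator: reserve one big block up front (the
# "pages"), hand out slices of it manually, and release everything
# in O(1) by resetting the offset instead of freeing per allocation.

class FrameArena:
    def __init__(self, size):
        self.buffer = bytearray(size)    # stands in for reserved pages
        self.offset = 0

    def alloc(self, nbytes):
        if self.offset + nbytes > len(self.buffer):
            raise MemoryError("arena exhausted")
        start = self.offset
        self.offset += nbytes
        return memoryview(self.buffer)[start:start + nbytes]

    def reset(self):
        # Called once per frame: "frees" every allocation at once.
        self.offset = 0

arena = FrameArena(1 << 20)              # 1 MiB arena
positions = arena.alloc(256)
indices = arena.alloc(1024)
print(arena.offset)                      # 1280 bytes handed out
arena.reset()
print(arena.offset)                      # 0: whole frame's memory reclaimed
```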
 
I'm not sure where the confusion is, but I was only drawing a hypothetical example. Using fps as a measure was supposed to make it simpler, and the difference wouldn't be 33% to start with. I was exaggerating it in order to drive the point home.

In actual practical terms, 33% would mean the difference between having 50k draw calls per frame and 66k draw calls per frame. Would you be able to tell?
Why make an example if it's wrong? All of this to say that a CPU that's 33% faster wouldn't be significant; 33% more work done in the same timeframe is not insignificant. Your premise is incorrect, most of your examples are incorrect, and you lacked the knowledge to even understand why. Anyway, this discussion has reached its EOL. Good day!
 

Hoddi

Member
Why make an example if it's wrong? All of this to say that a CPU that's 33% faster wouldn't be significant; 33% more work done in the same timeframe is not insignificant. Your premise is incorrect, most of your examples are incorrect, and you lacked the knowledge to even understand why. Anyway, this discussion has reached its EOL. Good day!
Okay but you’re the one who brought it up.

Fun fact, though: AC Origins has a ~60% higher CPU load than AC Unity at the same framerate. It also pushes ~60% more draw calls per frame, and nobody even noticed.

The difference between Zen 2 and Zen 3 IPC is less than half of that.
 

clampzyn

Member
Okay but you’re the one who brought it up.

Fun fact, though: AC Origins has a ~60% higher CPU load than AC Unity at the same framerate. It also pushes ~60% more draw calls per frame, and nobody even noticed.

The difference between Zen 2 and Zen 3 IPC is less than half of that.
why are you comparing Ubisoft's sh*ttily optimized games? :messenger_grinning_squinting:
 

Sosokrates

Member
Well yeah, there will always be CPU-intensive games. All "CPU intensive" means is that games are pushing the CPU closer to its limits; that has nothing to do with power, because the moment you add more power (see Xbox One vs Series X) devs immediately saturate it. What matters is the relative power of the component, and Zen 2 is great for what it is: considering the PS4 has shitty Jaguar cores and the Switch is stuck with mobile ARM cores from 2015, Zen 2, a 2019 product, is a blessing. The RAM situation and RT situation are pretty bad on the new consoles though. No matter how you slice it, going from 12GB on the One X to 16GB on the Series X is an anemic increase when consoles went from less than 1GB on the PS3/X360 to 8GB in the PS4 gen.

The good thing is that quite a few assets that were required to sit in RAM last gen can now be streamed directly from the SSD. So these consoles are more like 24-32GB of RAM paired with an HDD.
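Rough per-frame arithmetic behind that idea (peak throughput figures as commonly quoted for these parts, assumed here purely for illustration):

```python
# How much data each tier can deliver within one 33ms frame at 30fps.
# The SSD is nowhere near RAM bandwidth, but it can re-deliver assets
# fast enough that they no longer need to sit in RAM permanently.

FRAME_S = 1 / 30

tiers_gb_per_s = {
    "Series X GDDR6 (peak)": 560.0,
    "PS5 SSD (raw, peak)":     5.5,
    "Last-gen HDD (ballpark)": 0.1,
}

for name, gbps in tiers_gb_per_s.items():
    per_frame_mb = gbps * 1024 * FRAME_S
    print(f"{name}: ~{per_frame_mb:,.0f} MB per frame")
```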
 

winjer

Member
zen2 has no link to raytracing, it's handled by the GPU ... go home, you're drunk

On the consoles it might have to.
RDNA2 only has hardware to accelerate ray/box and ray/triangle intersection tests.
But it has no accelerators to create or traverse the BVH structure, so it has to do that either on the shaders or on the CPU.
Besides, generating reflections requires more draw calls on the CPU.
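For a sense of what "traversing the BVH" involves, here's a minimal sketch of the slab test performed against each node's bounding box; the hardware can accelerate this single intersection test, but the loop deciding which node to visit next still runs as ordinary shader (or CPU) code. Names and numbers are illustrative only.

```python
# Ray-vs-AABB "slab" test: the core operation repeated at every node
# while walking a BVH. inv_dir is 1/direction, precomputed per ray.

def ray_hits_aabb(origin, inv_dir, box_min, box_max):
    tmin, tmax = 0.0, float("inf")
    for o, inv, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t1 = (lo - o) * inv
        t2 = (hi - o) * inv
        tmin = max(tmin, min(t1, t2))
        tmax = min(tmax, max(t1, t2))
    return tmin <= tmax

# Ray from the origin along +x vs a box spanning x in [1, 3]:
inv = (1 / 1.0, 1e9, 1e9)                # near-zero y/z direction
print(ray_hits_aabb((0, 0, 0), inv, (1, -1, -1), (3, 1, 1)))   # True
```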
 