
Digital Foundry - Playstation 5 Pro specs analysis, also new information

foamdino

Member
Pro gpu benches will be completely useless with that cpu
I'm willing to wait and see instead of jumping to conclusions.

You have a very simplistic view of hardware. It shows through in every comment you make in these threads.

Memory, bandwidth and other architectural differences in the design have a big impact on how code runs.

For example, the code I'm currently working on exhibits massive performance drops if one compute thread is scheduled on a different NUMA node (on the same CPU). We get a huge uplift by pinning our threads to cores and ensuring data isn't bouncing between the caches of different cores.

You can get a lot of performance out of quite terrible CPUs if you program sympathetically to how they are designed.

You lose loads of performance if you don't...
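To make the "program sympathetically" point concrete, here's a minimal Linux-only sketch of the kind of thread pinning being described; the core IDs and thread count are arbitrary placeholders, not anything from an actual codebase.

```c
/* Minimal sketch of pinning worker threads to cores on Linux, so the
 * scheduler can't migrate them across cores (or NUMA nodes) and their hot
 * data stays in the right caches. Core IDs are arbitrary placeholders.
 * Compile with: gcc -pthread pin.c */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *worker(void *arg)
{
    int core = *(int *)arg;

    /* Build an affinity mask containing only the chosen core and apply it
     * to the calling thread. */
    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(core, &mask);
    if (pthread_setaffinity_np(pthread_self(), sizeof mask, &mask) != 0)
        fprintf(stderr, "failed to pin thread to core %d\n", core);

    /* ... compute work goes here; it now always runs on `core` ... */
    return NULL;
}

int main(void)
{
    pthread_t threads[4];
    int cores[4] = {0, 1, 2, 3};   /* placeholder core IDs, one worker each */

    for (int i = 0; i < 4; i++)
        pthread_create(&threads[i], NULL, worker, &cores[i]);
    for (int i = 0; i < 4; i++)
        pthread_join(threads[i], NULL);
    return 0;
}
```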
 

Bernardougf

Gold Member
I haven't heard anything new, but your numbers are spot on with what's being mentioned

I bet we hear much more about this right before the PS5 Pro launches, just to plant doubts in anyone thinking of buying the Pro for the best GTA VI experience

Like James Sawyer Ford said, it's likely going to be expensive

I know I am by far in the minority, but I will 100% drop even $1000 IF MS can launch the nextbox close to GTA VI. But if they don't, I will pass on anything else Xbox has to offer moving forward.
Thanks as usual for the reply bro. And the info.
 

FireFly

Member
While some of you contemplate the CPU's performance, I'm still trying to figure out the GPU clocks.

I'm thinking the PS5 Pro GPU clock speed may be higher than the PS5's after all.

The Tempest Engine in the PS5 runs at the GPU's frequency of 2.23GHz.
PlayStation 5 uncovered: the Mark Cerny tech deep dive
The Tempest engine itself is, as Cerny explained in his presentation, a revamped AMD compute unit, which runs at the GPU's frequency and delivers 64 flops per cycle. Peak performance from the engine is therefore in the region of 100 gigaflops, in the ballpark of the entire eight-core Jaguar CPU cluster used in PlayStation 4. While based on GPU architecture, utilisation is very, very different.

Tom Henderson's article states the ACV runs at a higher clock speed on PS5 Pro than on PS5.

Audio

The ACV in the PlayStation 5 Pro runs at a higher clock speed over the standard PlayStation 5, resulting in the ACM library having 35% more performance.
  • More convolution reverbs can be processed
  • More FFT or IFFT can be processed


Assuming this ACV is the Tempest Engine.
This implies the PS5 Pro GPU is running at a higher frequency than 2.23GHz.

Assuming frequency and performance scale 1:1:
2.23GHz × 1.35 ≈ 3.01GHz
Which is insane even for Mark Cerny.

I'm starting to think MLiD and Tom Henderson tweaked the numbers in order to protect their sources.
The issue with the clock speed in regular operation is fitting within the ~230W power budget. That doesn't apply to a single CU (or other decoupled element) so it could clock higher.
 
If you consider their most successful gen was born from this strategy (360 releasing 1 year early in the US vs PS3 and only 4 years after the original Xbox) they might as well try it...
The difference is they launched a year earlier and were always cheaper when the competition did come out, had a lot more variety in first-party exclusives, and Sony did just about everything wrong for the first few years, including that insane price point at the time. Now you have a console that will have to cost significantly more than anything else, likely won't have its power really tapped much because it won't be worth the investment for developers to push it when it'll be the console with the fewest owners for years, and we also have people owning games digitally, which sort of ties them to the platform they are with now. They really shot themselves in the foot with that focus on Kinect and the lack of first-party output for the final few years of the 360's lifespan and the early part of the Xbox One generation.
 

Mr.Phoenix

Member
While their reluctance to embrace the Pro last year made no sense for a tech oriented channel, their disappointment at the specs makes far more sense considering this is just a 45% upgrade for the GPU and 10% upgrade for the CPU. Way worse than the already paltry 30% upgrade for the PS4 Pro CPU and a large 125% upgrade to the GPU. And thats before the 25% IPC gains that we are NOT getting this gen. The PS4 Pro GPU was actually 5.2 GCN 1.1 tflops that put it at a near 200% or 3x GPU performance advantage over the base PS4.

The only reason why it didnt perform like that was because Cerny's genius idea was to bottleneck it with a mere 30% increase in vram bandwidth. A mistake he's repeating here with an even smaller 28% vram bandwidth increase. Lets not forget that the X1X was only 41% more powerful but ran A LOT of games at a full native 4k while PS4 Pro had to rely on poorly implemented 4kcb ports like RDR2. X1X with its 41% increase in GPU performance delivered a 100% increase in pixels because it was not bottlenecked by vram.

There is a lot to like here and they were very positive on the ray tracing and PSSR upgrade. If they were meh on those things which they werent, id understand your complaints. But I dont blame them for not being blown away by 45% GPU, 10% CPU and 28% vram bandwidth upgrades since we know most of them caused issues last gen.
This is where my issue with them comes into play.

I consider myself an above-average technical person, but I accept that my technical prowess is nowhere near as good as that of the people who would work and publish for something like DF. So I find it puzzling if or when the people I consider to be the "experts" lack the objectivity or foresight to make informed analyses or conclusions.

To elaborate on expectations and reality: I understand a lot of people's disappointment, however, I can't sympathize with them if that disappointment stems from unrealistic expectations. Remember, I prefixed this by saying I am an above-average technical type. With that above-average technical knowledge, I could predict that the PS5pro would be a 16-17TF console (you can check my post history). I could predict that the CPU would mostly be unchanged, with nothing more than a 20-30% speed bump if anything at all. I could predict that RAM would remain the same, and even predict that they'd free up more of that RAM for use by maybe adding some DRAM to handle background tasks. I could predict that the emphasis would be on reconstruction, just like how with the PS4pro it was on checkerboarding. I could literally say that the PS5pro was going to be a 1440p console that would allow reconstruction to 4k.

All before we had concrete leaks. How is it possible that DF... ALL OF THEM... couldn't have seen this coming? How can I, with my slightly above-average technical knowledge, get so much right, while the people that cover this stuff didn't see it coming?

But more importantly, how did I arrive at those predictions? Technical "realistic" common sense.

- 16-17TF? Look at what AMD has released; it's easy to predict what Sony will use from looking at AMD's GPU line, and more importantly, the die sizes of those GPUs. You work with the premise that the PS5pro APU will be no bigger than 320-340mm2 and not have a power draw higher than 250-280W. Once you do that, your options quickly get limited to a GPU in the 54-64CU range.

- CPU mostly unchanged? First off, they did the exact same thing with the PS4pro. But more importantly, I looked at it from a system design perspective. If you can only improve one thing, it's better to improve the one thing that has more far-reaching benefits than something that is only really beneficial for the 10% of cases that may need it. And that's not even the real kicker; the real kicker is that the PS5 does not need a better CPU. Sounds controversial, I know, but hear me out. I have a half-decent CPU, as I am sure most on here do. I implore anyone curious to run one of those games that seems to be suffering from a CPU bottleneck on their systems; you will quickly find that what we are calling CPU bottlenecks are actually CPU underutilization.

And this is something I have been saying for a while too: a CPU bottleneck is not the same thing as CPU underutilization. Sony will not... will NEVER... build a machine that prioritizes fixing a problem due to CPU underutilization. That's like building a machine to accommodate the incompetence of most devs. And make no mistake, Sony has data that they can compile from hundreds of games on their platform, with very specific performance analysis, and the conclusion they have probably seen is that not a single game running on the PS5 is using more than 50-60% CPU utilization. Having a situation where one of 14 threads is at 90% and the other 13 are under 25%, as is the case with all these games that we refer to as CPU bottlenecked, is not a CPU bottleneck, it's CPU underutilization.


I could go on, but this has become an essay lol. And I am sure you get my point and stance... because If i could figure this shit out without confirmed specs, I would expect DF to be able to do the same, especially after they at least have specs.
 
Last edited:

Loxus

Member
The issue with the clock speed in regular operation is fitting within the ~230W power budget. That doesn't apply to a single CU (or other decoupled element) so it could clock higher.
Decoupled clocks don't explain why different frequencies are needed to get 67 TFLOPS and 300 TOPS.

The TOPS and FLOPS figures should come from the same frequency.
 

Panajev2001a

GAF's Pleasant Genius
One X was more ambitious it had a vapor chamber for God's sake.
1 year later and $100 more, 1 year later and $100 more… no… s&&£ :p.
Anyway you're right that tech is more expensive now but that applies to everybody, some companies like AMD, Nvidia and Intel continue to move forward as they did back in the PS4 Pro days, others like Sony are being more conservative in R&D and spending hence why the PS5 Pro is what it is.
Nobody is moving like they did back in those days, at those days' prices.
 

Mr.Phoenix

Member
One X was more ambitious it had a vapor chamber for God's sake. Anyway you're right that tech is more expensive now but that applies to everybody, some companies like AMD, Nvidia and Intel continue to move forward as they did back in the PS4 Pro days, others like Sony are being more conservative in R&D and spending hence why the PS5 Pro is what it is.
This couldn't be more wrong if you tried.

Too lazy to type, but some context...

1080ti, most powerful GPU in its time, cost $700. 4090, most powerful GPU in its time, cost $1700.

In the 5 years between both GPUs, the price has increased by 2.5x.

PS4 launch price, 2013, $400. PS4pro launch price, 2016, $400. PS5 (digital) launch price, 2020, $400. PS5pro launch price, 2024... $500?
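Putting the two trajectories side by side, using the prices quoted above (and treating $500 for the Pro as the speculation it is):

\[
\frac{\$1700}{\$700} \approx 2.4\times \quad \text{(flagship GPU, 1080 Ti} \to \text{4090)}, \qquad
\frac{\$500}{\$400} = 1.25\times \quad \text{(PS4} \to \text{PS5pro, if \$500)}
\]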

Get it?
 
Last edited:
Not even remotely true (we can only wish even the Pro model comes near a 5800X), and the current PS5 GPU is not 6800 level, either. Do not rely on those bottleneck calculators, just like the majority of the YouTube benchmark channels, which give completely nonsensical numbers.

As far as CPU bottlenecks, it completely depends on the game. There are games where, at 4K, my old 4790K could keep my 6900 XT fed, and yet other games at the same resolution where my 11900K is majorly bottlenecking my 6900 XT.

Sony really needed to slap a better CPU in this. Regardless, the extra GPU horsepower will help keep the resolution up for a cleaner image which is very much welcomed.
The PS5 does perform like a 6800 in some games, but it doesn't have the raw hardware grunt of a 6800.
 

Crayon

Member
If this thing can't run pretty much all games at 60FPS then it's pointless and I'll just wait for the PS6 instead.

How much is pretty much? We're not seeing many games stuck at 30fps and we are about halfway through the gen. That's a big x-factor here. Does that number start increasing faster, or stay similar? And then of course, how many of those are because of the CPU? Whatever number of games are stuck at 30 on the Pro (and I'm sure there will be some), it will be less than on the base PS5.
 
and you're doing reverse cherry picking. here's how

- due to the console jump being smaller, and the pandemic, cross-gen lasted longer than it was supposed to. even games like hogwarts legacy eventually landed on PS4. returnal could be ported to PS4 and the only limitation would be the HDD (it runs very well on stuff like a gtx 1050ti). this is one of the biggest reasons most games still kept having 60 FPS modes
- majority of 60 fps games are crossgen titles
- the actual "nextgen" games have just begun, and most of them exhibit CPU boundness problems. trying to paint a good picture by showcasing games like god of war ragnarok, spiderman, etc. is reverse cherry picking because they're not representative

technically the ps4 can run the entire ps2 and ps3 library at 60 fps, heck maybe 120 FPS. does that make the ps4 a 60/120 fps console? the ps5 being able to brute force ps4 or ps4-gen-worthy games at 60 fps is not that impressive to me, and I cannot take that as proof that "this console's cpu is capable of 60 fps".

reality is, jedi survivor (ray tracing) and dragon dogma 2 (probably ray tracing) are the real representatives of how third party games going forward will run on console CPUs with ray tracing involved. at least for me. they may not be for you. in that case, we would have to agree to disagree
All facts very informative posts man really appreciate it
 

Mr.Phoenix

Member
If this thing can't run pretty much all games at 60FPS then it's pointless and I'll just wait for the PS6 instead.
You might as well just start waiting for the PS6 then. Because this is not going to run everything at 60fps. There will be somewhere between 5-15% of games out there that this just wouldn't do anything for. For any number of reasons.

In the same way that even the PS4 Pro didn't mean that everything that was released after it had a 60fps mode.

I can tell you this much though, everything that has a performance mode on the PS5, will have a locked 60fps performance mode on the PS5pro that looks on par or at least close to the fidelity mode on PS5. That is what your $100 more over the PS5 will get you for the PS5pro.
 

Ashamam

Member
I can tell you this much though, everything that has a performance mode on the PS5, will have a locked 60fps performance mode on the PS5pro that looks on par or at least close to the fidelity mode on PS5. That is what your $100 more over the PS5 will get you for the PS5pro.
I reckon you could just about close this thread on that statement lol. Pretty much sums up what the Pro brings to the table. At least based on everything that has been leaked so far.

Then there are the occasional hints that the Pro might outperform expectations. I haven't been active on NeoGAF long enough to gauge whether a particular user carries more weight than usual, though.

Those of us who care, and may even be hearing all the good stuff about this from industry folk, will continue to be excited about this. And to be clear, that's what I got from dev opinion pre-PS5 launch as well as for the PS5 Pro. It's very good and a step in the right direction.
 

FireFly

Member
Decoupled clocks doesn't explain why the TOPs and FLOPs use different frequencies to get 67 FLOPs and 300 TOPs.
Neither does positing a different but higher clock speed. In any case, we don't know how many INT8 OPS the PS5 Pro GPU can perform per CU, or whether there is any more dedicated hardware.
 

ChiefDada

Gold Member
While their reluctance to embrace the Pro last year made no sense for a tech oriented channel, their disappointment at the specs makes far more sense considering this is just a 45% upgrade for the GPU and 10% upgrade for the CPU. Way worse than the already paltry 30% upgrade for the PS4 Pro CPU and a large 125% upgrade to the GPU. And thats before the 25% IPC gains that we are NOT getting this gen. The PS4 Pro GPU was actually 5.2 GCN 1.1 tflops that put it at a near 200% or 3x GPU performance advantage over the base PS4.

Ok so should we just ignore the PS5 Pro ML and RT hardware in our comparison to the prior generation simply because they don't manifest in the TF number, while still being the most crucial aspects of the GPU upgrade? Would you have rather had ML and RT baked into TF number by using raw compute as opposed to dedicated hardware? Because the TF would be more like 50TF RDNA2. Wow big number yay!
 

SlimySnake

Flashless at the Golden Globes
Ok so should we just ignore the PS5 Pro ML and RT hardware in our comparison to the prior generation simply because they don't manifest in the TF number, while still being the most crucial aspects of the GPU upgrade? Would you have rather had ML and RT baked into TF number by using raw compute as opposed to dedicated hardware? Because the TF would be more like 50TF RDNA2. Wow big number yay!
Luckily for you, neither DF nor I ignored the PS5 Pro ML or RT hardware. Including in the very post you quoted.

There is a lot to like here and they were very positive on the ray tracing and PSSR upgrade. If they were meh on those things which they werent, id understand your complaints. But I dont blame them for not being blown away by 45% GPU, 10% CPU and 28% vram bandwidth upgrades since we know most of them caused issues last gen.
 
Last edited:

Mr.Phoenix

Member
Ok so should we just ignore the PS5 Pro ML and RT hardware in our comparison to the prior generation simply because they don't manifest in the TF number, while still being the most crucial aspects of the GPU upgrade? Would you have rather had ML and RT baked into TF number by using raw compute as opposed to dedicated hardware? Because the TF would be more like 50TF RDNA2. Wow big number yay!
At this point I have become wary of all this. It's one thing having keyboard engineers; it's another when so many seem so confidently wrong, or, for whatever reason, regardless of all the tech analyses and talk over God knows how many years now, always inevitably come to the most basic, ignorant conclusions.

We talk CPU bottlenecks like this is our first console rodeo. Or as if the choices made couldn't be seen coming a mile away, or like why those choices were made isn't obvious for anyone to see.

And there is the suspension of common sense, where it's expected that Sony builds a console that breaks every single rule governing consoles since, well... consoles. Or that somehow consoles and PCs exist in the same vein, while conveniently forgetting that this console will cost at most $600.

Or we are right back to talking about TFs, as if Nvidia hasn't shown time and time again that TFs do not matter. Or we have forgotten the whole PS5 10TF vs XSX 12TF debacle that still to this day puzzles DF. Or like we have not seen this exact same shift before, where an OEM focuses their efforts on everything else in the GPU besides pushing for just more TFs, and adds in new tech and features to make an all-around better system. But yet... here we are.

Its honestly embarrassing when you think of it.
 

PUNKem733

Member
If this thing can't run pretty much all games at 60FPS then it's pointless and I'll just wait for the PS6 instead.
If you think there won't be 30 FPS games on the sixxer I got some sand to sell you in the sahara desert. Better to wait for the PS7....no, no the PS8 just to be safe.
 

SlimySnake

Flashless at the Golden Globes
This is where my issue with them comes into play.

I consider myself an above-average technical person, but I accept that my technical prowess is nowhere near as good as the people that would work and publish for something like DF. So I find it puzzling if or when the people I consider to be the "experts", lack the objectivity or foresight to make informed analyses or conclusions.

To elaborate on expectations and reality: I understand a lot of people's disappointment, however, I can't sympathize with them if that disappointment stems from unrealistic expectations. Remember, I prefixed this by saying I am an above-average technical type. With that above-average technical knowledge, I could predict that the PS5pro would be a 16-17TF console (you can check my post history). I could predict that the CPU would mostly be unchanged, with nothing more than a 20-30% speed bump if anything at all. I could predict that RAM would remain the same, and even predict that they'd free up more of that RAM for use by maybe adding some DRAM to handle background tasks. I could predict that the emphasis would be on reconstruction, just like how with the PS4pro it was on checkerboarding. I could literally say that the PS5pro was going to be a 1440p console that would allow reconstruction to 4k.

All before we had concrete leaks, how is it possible that DF... ALL OF THEM... couldn't have seen this coming? How can me with my slightly above average technical knowledge get so much right, then the people that cover this stuff didnt see it coming?

But more importantly, its how did I arrive at those predictions? Technical "realistic" common sense.

- 16-17TF? Look at what AMD has released, its easy to predict what Sony will use from looking at AMD GPU line. And more importantly, the size of the die of those GPUs, you work with the premise that the PS5pro APU will be no bigger than 320-340mm2. And not have a power draw higher than 250W-280W. Once you do that, your options quickly gets limited to a GPU in the 54-64CU range.

- CPU be mostly unchanged? First off, they did the exact same thing with the PS4pro. But more importantly, I looked at it from a system design perspective. If you can only improve one thing, its better to improve the one thing that has more overreaching benefits than something that is only really beneficial for the 10% of cases that may need it. And that's not even the real kicker, the real kicker is that the PS5 does not need a better CPU. Sounds controversial I know, but hear me out, I have a half-decent CPU, as I am sure most on here do. I implore anyone curious, to run one of those games that seems to be suffering from a CPU bottleneck on their systems, you will quickly find that what we are calling CPU bottlenecks, are actually CPU underutilization.

And this is something I have been saying for a while too, CPU bottlenecks is not the same thing as CPU underutilization. Sony will not.... NEVER build a machine that prioritizes fixing a problem due to CPU underutilization. That's like building a machine to accommodate the incompetence of most devs. And make no mistake, Sony has data that they can compile from hundreds of games on their platform, with very specific performance analysis, and the conclusion they have probably seen is that not a single game running on the PS5, is using more than 50-60% CPU utilization. Having a situation where one of 14 threads is at 90% and the other 13 are under 25%, as is the case with all these games that we refer to as CPU bottlenecked, is not a CPU bottleneck, its CPU underutilization.


I could go on, but this has become an essay lol. And I am sure you get my point and stance... because If i could figure this shit out without confirmed specs, I would expect DF to be able to do the same, especially after they at least have specs.
In regards to expectations, you and I both had very similar expectations based on leaks. We both expected 17 tflops, but i did not expect that to translate to 45% or 14.5 tflops in reality. I also expected RDNA4 IPC gains that would help it perform like a 20 tflops or 100% more powerful console. All my posts are there for you to see. Ive been consistent so its not like im making this up. I dont know what their expectations were but I gather they did not expect 45% GPU increase and similar increases for the CPU and vram which they had outlined last gen as bottlenecks.

In regards to the CPU expectations, again, I knew not to expect a major redesign but up until DF confirmed the 10% leak, i was expecting a 30% increase not because im an asshole who wants the world but because i knew 30% is what the PS4 Pro CPU received and i figured that would get us to 4.55 Ghz and solve at least the single threaded games which is pretty much every UE4 and UE5 game.

So it's not like our expectations are outlandish or wildly optimistic. We were applying PS4 Pro level estimates here after the leaks last year and Sony didnt even meet those. No wonder they are disappointed. I was prepared and even i was disappointed.

At the end of the day, this is just a spec discussion. When the games come out and we see performance gains, and those performance gains amount to 2x more performance, i dont think DF or I would be disappointed. Thats what I wanted from a PS5 Pro and if thats what PSSR and RT upgrades allow them to do then great.

Lastly, I dont know if I agree with the assertion that Sony's profiling would indicate the games are not CPU bound. I have listed over a dozen RT games with no RT in their 60 fps modes. I have also shown proof that on PC those 60 fps modes are CPU bottlenecked. So if Sony has the same data we do then they surely wouldve noticed all these 60 fps modes skipping RT. Again, if these games get RT in their 60 fps modes once this thing releases, or all games on the Pro feature RT in 60 fps modes, then I agree we can rule out the CPU being the bottleneck.
 

yamaci17

Member
2-4x increase in ray tracing is worth more than another 25% increase in rasterzation in my books.
I agree here, 3070 levels of ray tracing performance gets me 840p (balanced) upscaled to 1440p at 30 fps with PATH tracing, which looks fantastic given DLSS's ability to do well at such resolutions

example 1440p dlss balanced path tracing in alan wake 2



it would fall in line with targeting 30 fps on the CPU as well. But I hope they push more than 10% on the CPU for the edge cases where locked 30 is out of reach. it wouldn't be needed in the case of alan wake 2, but it is sorely needed in the case of jedi survivor and dragon dogma 2..
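For context on that 840p figure: DLSS Balanced renders at roughly 58% of the output resolution per axis, so

\[
1440 \times 0.58 \approx 835\text{p} \quad (\text{commonly rounded up to } 840\text{p})
\]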
 
Last edited:

Crayon

Member
2-4x increase in ray tracing is worth more than another 25% increase in rasterzation in my books.

It would be for me too, but I'm not too high on the state of rt atm. I can only play cyberpunk so many times.... If I could count on the next monster hunter or whatever big game having an impressive rt implementation, this would be a huge thing. As of now, I'm mostly expecting pro to fix up tomorrow's blurry 60fps modes and get some portion of 30fps-bound games a 60fps mode.
 

SlimySnake

Flashless at the Golden Globes
2-4x increase in ray tracing is worth more than another 25% increase in rasterzation in my books.
If the 2-4x RT upgrade helps them run the native 4k 30 fps Spider-Man 2 RT mode at native 4k 60 fps then great. I wanted to play Horizon FW at native 4k 60 fps. If this 45% gpu and 10% cpu increase helps them double the ps5 performance then yay. Let’s go cerny! I will bend the knee and call him king.

But we all know that’s not going to happen and they will use pssr to upscale to 4k. Likely dlss performance equivalent and I’m just not a fan of that. I stick with quality. Good for everyone suffering through 720p fsr2 60 fps modes though.
 

Loxus

Member
Neither does positing a different but higher clock speed. In any case we don't know how many INT8 OPS the PS5 Pro GPU can perform per CU and if there is any more dedicated hardware.
It's not that hard.

The PS5 Pro's rumored clock speed is 2.18GHz, using 60 CUs.

60CU × 4 SIMD32 × 32 × 2 × 2.18GHz = 33.5 TFLOPS.

  • 16-bit floating point (FP16) = FLOPs
  • 8-bit floating point (FP8) = FLOPs
  • 8-bit integer (INT8) = TOPs
  • 4-bit integer (INT4) = TOPs
The leak from Tom Henderson states AI Accelerators.
  • AI Accelerator, supporting 300 TOPS of 8 bit computation / 67 TFLOPS of 16-bit floating point

60CU × 2 AI Accelerators × 256 × 2.18GHz ≈ 67 TFLOPS (FP16)

60CU × 2 AI Accelerators × 512 × 2.18GHz = 134 TOPs (INT8)

RDNA4 now supports Sparsity, which doubles performance.
Examining AMD’s RDNA 4 Changes in LLVM
RDNA 4 introduces new SWMMAC (Sparse Wave Matrix Multiply Accumulate) instructions to take advantage of sparsity.


60CU × 2 AI Accelerators × 1024 × 2.18GHz ≈ 268 TOPs. But this number is still not the 300 TOPs number. Which is where the problem with clock speed starts.

Kepler uses a clock speed of 2.45GHz to get that 300 TOPs number.
60CU × 2 AI Accelerators × 1024 × 2.45GHz ≈ 301 TOPs


but if that's the clock, the TFLOPs would be too high.
60CU × 4 SIMD32 × 32 × 2 × 2.45GHz = 37.6 TFLOPS


The only way it all makes sense is this and the leaks tweaked stuff to protect their sources.

Normal Mode
  • CPU = 3.5GHz
  • GPU = 2.23GHz

High CPU Frequency Mode / Performance Mode
  • CPU = 3.85GHz
  • GPU = 2.18GHz

High GPU Frequency Mode / Fidelity Mode
  • CPU = 3.43GHz
  • GPU = 2.45GHz

Obviously, this is just me speculating but I find it strange that no one noticed this.
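For anyone who wants to poke at the arithmetic, here's a small sketch of the peak-rate products being compared; the 60 CUs, the per-CU rates and the two candidate clocks are the rumoured figures from this thread, not confirmed PS5 Pro specs.

```c
/* Peak-rate arithmetic from the rumoured figures in this thread:
 * 60 CUs, dual-issue SIMD32 FP16, 2 "AI accelerators" per CU at 256 FP16
 * ops/clock each, 2x that rate for INT8, and 2x again with sparsity.
 * None of these are confirmed specs. */
#include <stdio.h>

int main(void)
{
    const double cus = 60.0;
    const double clocks_ghz[] = {2.18, 2.45};   /* the two clocks discussed above */

    for (int i = 0; i < 2; i++) {
        double ghz = clocks_ghz[i];
        double simd_fp16   = cus * 4 * 32 * 2 * ghz / 1000.0;  /* shader FP16, TFLOPS */
        double ai_fp16     = cus * 2 * 256 * ghz / 1000.0;     /* AI accel FP16, TFLOPS */
        double int8_dense  = cus * 2 * 512 * ghz / 1000.0;     /* dense INT8, TOPS */
        double int8_sparse = int8_dense * 2.0;                 /* with sparsity, TOPS */
        printf("%.2f GHz: %.1f TFLOPS FP16 (SIMD), %.1f TFLOPS FP16 (AI), "
               "%.1f TOPS INT8, %.1f TOPS INT8 sparse\n",
               ghz, simd_fp16, ai_fp16, int8_dense, int8_sparse);
    }
    return 0;
}
```

At 2.18GHz the FP16 figure lines up with the leak but sparse INT8 falls short of 300 TOPS; at 2.45GHz it's the other way around, which is exactly the mismatch being pointed out.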
 
Last edited:

Mr.Phoenix

Member
In regards to expectations, you and I both had very similar expectations based on leaks. We both expected 17 tflops, but i did not expect that to translate to 45% or 14.5 tflops in reality. I also expected RDNA4 IPC gains that would help it perform like a 20 tflops or 100% more powerful console. All my posts are there for you to see. Ive been consistent so its not like im making this up. I dont know what their expectations were but I gather they did not expect 45% GPU increase and similar increases for the CPU and vram which they had outlined last gen as bottlenecks.

In regards to the CPU expectations, again, I knew not to expect a major redesign but up until DF confirmed the 10% leak, i was expecting a 30% increase not because im an asshole who wants the world but because i knew 30% is what the PS4 Pro CPU received and i figured that would get us to 4.55 Ghz and solve at least the single threaded games which is pretty much every UE4 and UE5 game.

So it's not like our expectations are outlandish or wildly optimistic. We were applying PS4 Pro level estimates here after the leaks last year and Sony didnt even meet those. No wonder they are disappointed. I was prepared and even i was disappointed.

At the end of the day, this is just a spec discussion. When the games come out and we see performance gains, and those performance gains amount to 2x more performance, i dont think DF or I would be disappointed. Thats what I wanted from a PS5 Pro and if thats what PSSR and RT upgrades allow them to do then great.
Agreed with everything above, I also expected a possible 30% CPU bump based on the same PS4pro assumption.

I am still puzzled by the whole 1.45x GPU gain though; it's the one thing in all of this that makes no sense to me, given that we have a 1.67x raw TF increase and even a decent bandwidth bump.
Lastly, I dont know if I agree with the assertion that Sony's profiling would indicate the games are not CPU bound. I have listed over a dozen RT games with no RT in their 60 fps modes. I have also shown proof that on PC those 60 fps modes are cpu botttlenecked. So if sony has the same data we do then they surely wouldve noticed all these 60 fps modes skipping RT. Again, if these games get RT in their 60 fps modes once this thing releases or all games on the Pro feature RT in 60 fps modes then I agree we can rule out CPU being the bottleneck.
My assumption, and from my own testing on PC, is that these CPU bottlenecks are mostly due to poor optimization. And by that, I am specifically talking about those single- or double-threaded games you have mentioned too. This is something we have for the most part accepted, and that has warranted brute forcing past it by getting CPUs with the highest clock speeds we can manage. But it shouldn't be that way. And it's that kind of analysis I am sure Sony has done.

E.g. if you take a 7C/14T CPU like what is available on the PS5, in a PC environment, what you would notice is that 2 threads average around 90% and the other 12 around 25%. If you average that all out, what you are really ending up with is around 34% utilization. I have no doubt in my mind that this is the same thing happening on the consoles. Because if their CPU loads were properly optimized on the consoles, then the PC would benefit too.
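As a trivial check of that averaging (the 90%/25% figures are the illustrative numbers from the paragraph above, not measurements):

```c
/* Average per-thread utilisation: two threads at ~90%, twelve at ~25%
 * on a 7C/14T CPU -- illustrative numbers only. */
#include <stdio.h>

int main(void)
{
    double util[14] = {90, 90, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25};
    double sum = 0.0;
    for (int i = 0; i < 14; i++)
        sum += util[i];
    printf("overall CPU utilisation: %.1f%%\n", sum / 14.0);  /* prints ~34.3% */
    return 0;
}
```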

And this is how we have to understand how Sony (or Cerny) approached this. When they're looking at performance statistics from hundreds of games, they are not going to be looking at the fps numbers, as those could be affected by any number of factors. For the CPU, they would look at CPU utilization. And they will see that, just like I described above, most games will be averaging under 50% utilization, which typically is a result of them being poorly optimized for proper multithreaded support. Sony is going to come off that thinking to themselves, ok, the CPU is fine. It doesn't help third parties that Sony first party does a great job of having their own games well optimized, which is why Sony first party doesn't struggle with all this 60fps stuff. And I dont think most RT games skipping RT is due to a CPU bottleneck; I believe it's the same reason it's skipped on the Series S too. Taking RT out is just an easy way to free up GPU resources, especially when you are trying to hit 60fps.
 
Last edited:
It's not that hard.

The PS5 Pro's rumored clock speed is 2.18GHz, using 60 CUs.

60CU × 4 SIMD32 × 32 × 2 × 2.18GHz = 33.5 TFLOPS.

  • 16-bit floating point (FP16) = FLOPs
  • 8-bit floating point (FP8) = FLOPs
  • 8-bit integer (INT8) = TOPs
  • 4-bit integer (INT4) = TOPs
The leak from Tom Henderson states AI Accelerators.
  • AI Accelerator, supporting 300 TOPS of 8 bit computation / 67 TFLOPS of 16-bit floating point

60CU × 2 AI Accelerators × 256 × 2.18GHz ≈ 67 TFLOPS (FP16)

60CU × 2 AI Accelerators × 512 × 2.18GHz = 134 TOPs (INT8)

RDNA4 now supports Sparsity, which doubles performance.
Examining AMD’s RDNA 4 Changes in LLVM
RDNA 4 introduces new SWMMAC (Sparse Wave Matrix Multiply Accumulate) instructions to take advantage of sparsity.


60CU × 2 AI Accelerators × 1024 × 2.18GHz ≈ 268 TOPs. But this number is still not the 300 TOPs number. Which is where the problem with clock speed starts.

Kepler uses a clock speed of 2.45GHz to get that 300 TOPs number.
60CU × 2 AI Accelerators × 1024 × 2.45GHz ≈ 301 TOPs


but if that's the clock, the TFLOPs would be too high.
60CU × 4 SIMD32 × 32 × 2 × 2.45GHz = 37.6 TFLOPS


The only way it all makes sense is this and the leaks tweaked stuff to protect their sources.

Normal Mode
  • CPU = 3.5GHz
  • GPU = 2.23GHz

High CPU Frequency Mode / Performance Mode
  • CPU = 3.85GHz
  • GPU = 2.18GHz

High GPU Frequency Mode / Fidelity Mode
  • CPU = 3.43GHz
  • GPU = 2.45GHz

Obviously, this is just me speculating but I find it strange that no one noticed this.

A hypothetical High GPU Frequency Mode that has the CPU clock running at 3.43GHz which is below base PS5 (3.5GHz) is not happening.
 
Last edited:

Mr.Phoenix

Member
It's not that hard.

The PS5 Pro's rumored clock speed is 2.18GHz, using 60 CUs.

60CU × 4 SIMD32 × 32 × 2 × 2.18GHz = 33.5 TFLOPS.

  • 16-bit floating point (FP16) = FLOPs
  • 8-bit floating point (FP8) = FLOPs
  • 8-bit integer (INT8) = TOPs
  • 4-bit integer (INT4) = TOPs
The leak from Tom Henderson states AI Accelerators.
  • AI Accelerator, supporting 300 TOPS of 8 bit computation / 67 TFLOPS of 16-bit floating point

60CU × 2 AI Accelerators × 256 × 2.18GHz ≈ 67 TFLOPS (FP16)

60CU × 2 AI Accelerators × 512 × 2.18GHz = 134 TOPs (INT8)

RDNA4 now supports Sparsity, which doubles performance.
Examining AMD’s RDNA 4 Changes in LLVM
RDNA 4 introduces new SWMMAC (Sparse Wave Matrix Multiply Accumulate) instructions to take advantage of sparsity.


60CU × 2 AI Accelerators × 1024 × 2.18GHz ≈ 268 TOPs. But this number is still not the 300 TOPs number. Which is where the problem with clock speed starts.

Kepler uses a clock speed of 2.45GHz to get that 300 TOPs number.
60CU × 2 AI Accelerators × 1024 × 2.45GHz ≈ 301 TOPs


but if that's the clock, the TFLOPs would be too high.
60CU × 4 SIMD32 × 32 × 2 × 2.45GHz = 37.6 TFLOPS


The only way it all makes sense is this and the leaks tweaked stuff to protect their sources.

Normal Mode
  • CPU = 3.5GHz
  • GPU = 2.23GHz

High CPU Frequency Mode / Performance Mode
  • CPU = 3.85GHz
  • GPU = 2.18GHz

High GPU Frequency Mode / Fidelity Mode
  • CPU = 3.43GHz
  • GPU = 2.45GHz

Obviously, this is just me speculating but I find it strange that no one noticed this.

I think you may be onto something. I get some stuff has leaked, but I kinda have this feeling we still do not have the full picture. Because something I have always wondered: surely, if there is a mode where power is shunted from the GPU to the CPU, there must be one where it's shunted from the CPU to the GPU.

A hypothetical High GPU Frequency Mode that has the CPU clock running at 3.43GHz which is below base PS5 (3.5GHz) is not happening.
And how do you know there aren't instances where the base PS5 CPU runs at lower clocks? Wasn't that one of the highlights of Cerny's Road to PS5 thing? He literally talked about shifting power from one to the other.
 
Last edited:

Loxus

Member
A hypothetical High GPU Frequency Mode that has the CPU clock running at 3.43GHz which is below base PS5 (3.5GHz) is not happening.
You sure about that?
From road to PS5.
"We supply a generous amount of electrical power and then increase the frequency of GPU and CPU until they reach the capabilities of the system's cooling solution."

"It's a completely different paradigm. Rather than running at constant frequency and letting power vary based on the workload. We run at essentially constant power and let the frequency band vary based on the workload."

"That doesn't mean all games will be running in 2.23 GHz and 3.5 GHz. When that worst case game arrives, it will run at a lower clock speed but not too much lower. To reduce power by 10% it only takes a couple of percent reduction in frequency."



Both the GPU and CPU have a variable frequency and can drop clocks.
 
Last edited:

Audiophile

Member
I wonder if the source only provided one FLOP/TOP FP/INT performance metric and then they extrapolated the rest of the numbers from it, without any consideration for decoupling, continuous boost or other possibilities we're not aware of yet.
 
You sure about that?
From road to PS5.
"We supply a generous amount of electrical power and then increase the frequency of GPU and CPU until they reach the capabilities of the system's cooling solution."

"It's a completely different paradigm. Rather than running at constant frequency and letting power vary based on the workload. We run at essentially constant power and let the frequency band vary based on the workload."

"That doesn't mean all games will be running in 2.23 GHz and 3.5 GHz. When that worst case game arrives, it will run at a lower clock speed but not too much lower. To reduce power by 10% it only takes a couple of percent reduction in frequency."



Both the GPU and CPU have a variable frequency and can drop clocks.
Edit: never mind. This wouldn't interfere with BC mode, so yeah, this mode is a possibility.
 
Last edited:
If this 45% gpu and 10% cpu increase helps them double the ps5 performance then yay. Let’s go cerny! I will bend the knee and call him king.

But we all know that’s not going to happen and they will use pssr to upscale to 4k. Likely dlss performance equivalent and I’m just not a fan of that. I stick with quality. Good for everyone suffering through 720p fsr2 60 fps modes though.

Why does it matter what the internal resolution is before it's upscaled if it yields approximately the same results?

Isn't that the point of adding in PSSR?

There you go again, worrying about trivialities of numbers (bigger numbers = good, better, whether that's TFLOPs or pixels), or cross-gen vs. next-gen. It's always nothing beyond superficial with your takes.
 
Last edited:

Loxus

Member
Yeah, but you can't cap the Pro's processor speed to a speed below the base PS5 speed cap because you'll end up having inconsistencies on the BC mode.
The frequency varies. That fixed frequency doesn't apply to PS5 games. It's the same for the GPU as well.

I would understand for PS4 and PS4 Pro games, as the PS5 has to match their clocks in BC mode.
 

ReBurn

Gold Member
I wonder if the source only provided one FLOP/TOP FP/INT performance metric and then they extrapolated the rest of the numbers from it. Without any consideration for decoupling, continuous boost or other possibilities we're not aware of yet.

FLOPS is just a unit of measure of computational throughput, so the math to convert peak throughput from one format to another is fairly straightforward if you have one number for the processor you're measuring.

The number of floating-point or integer calculations per time period pretty much never directly represents the actual performance of software, though. The higher the number the better, of course, but that number alone just represents a theoretical max capacity. The efficiency of application code and the throughput of the other components in the pipeline also have to be taken into account to measure true performance.
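As an example of how straightforward that conversion is, using the per-CU ratios quoted earlier in the thread (INT8 at twice the FP16 rate, doubled again with sparsity):

\[
\text{TOPS}_{\text{INT8, dense}} = 2 \times \text{TFLOPS}_{\text{FP16}}, \qquad
\text{TOPS}_{\text{INT8, sparse}} = 2 \times \text{TOPS}_{\text{INT8, dense}}
\]

So a 67 TFLOPS FP16 figure converts to 134 TOPS dense / roughly 268 TOPS sparse INT8 at the same clock, which is exactly why the 300 TOPS figure upthread looks like it implies a different clock.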
 

lh032

I cry about Xbox and hate PlayStation.
If they 2-4x rt upgrade helps them run the native 4k 30 fps Spider-Man 2 rt mode at native 4k 60 fps then great. I wanted to play Horizon fw at native 4k 60 fps. If this 45% gpu and 10% cpu increase helps them double the ps5 performance then yay. Let’s go cerny! I will bend the knee and call him king.

But we all know that’s not going to happen and they will use pssr to upscale to 4k. Likely dlss performance equivalent and I’m just not a fan of that. I stick with quality. Good for everyone suffering through 720p fsr2 60 fps modes though.
but that's the point of the Pro.
You can't expect a console's performance to match a high-end PC; that makes no sense.
Why do people keep comparing Pro performance to high-end PCs?
 

Radical_3d

Member
If you consider their most successful gen was born from this strategy (360 releasing 1 year early in the US vs PS3 and only 4 years after the original Xbox) they might as well try it...
Well, the situation is more similar to that scenario than most people think. The financial reports were casting some light on the Series X situation. From Take-Two's we know that at the close of the last financial year the current-generation console install base was 77 million. From Sony themselves we know that of those 77M, 54.7 million were PS5. So that's roughly 22 million Series consoles sold worldwide. And from the documents Microsoft uploaded themselves in the trial for Activision Blizzard we know that as of 2022 the split between Series S and Series X was 75:25. So there are roughly 6 million Series X sold. An original Xbox-esque situation with the market, because the PS5 is outselling the Series X by roughly 9 or 10 to 1. And that's worldwide. In Europe it must be shattering.

So, knowing all that and bearing in mind that the Series X is positioned as the more powerful machine, will an early start to the generation benefit Xbox like it did in the HD generation? Are the power shifts even comparable, when back then we were moving from a PS2 with no programmable shaders to programmable shaders and HD TVs, while this time it's just more of the same? What time is it? Those are the questions…
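Spelled out, the arithmetic behind those estimates (assuming the 75:25 split still held at the time of the Take-Two figure):

\[
77 - 54.7 = 22.3\text{M Series}, \qquad 0.25 \times 22.3 \approx 5.6\text{M Series X}, \qquad \frac{54.7}{5.6} \approx 9.8
\]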
 

Mr.Phoenix

Member
it would fall in line with targeting 30 fps on CPU as well. But I hope they push more than %10 on the CPU for the edge cases where locked 30 is out of reach. it wouldn't be needed in the case of alan wake 2 but it is sorely needed in the case of jedi survivor and dragon dogma 2..
I am sure I am sounding like a broken record now. But we really need to change where we point the blame when it comes to these things. Gonna give you an example of two games here. Both are on PC.

Jedi survivor



and Horizon forbidden west


For Jedi Survivor, the Owen guy was trying to explain how it's CPU bottlenecked, in response to people like me who were making the point that what he describes as a CPU bottleneck isn't a bottleneck but rather underutilization. You can take a look at the video yourself and see what I mean. In JS, on the 8C/16T CPU he was running the game with, the highest-loaded thread was at ~60%, the lowest was at 21%, and the rest were anywhere in between, with most under 40%. And this was on a 7800X3D with a 4090. The GPU was idling at 40%.

That is not a bottleneck. It's what we, for whatever reason, call a CPU bottleneck, but it's not. It's bad game design. It's poor optimization. And should be critiqued the same way we would critique devs for using bad textures or having obvious bugs.

And then we have HFW. And I used that because it does two things, as you will see in the video. First, it blows Owen's argument out of the water about how difficult it is to properly parallelize CPU tasks so everything has to run on one thread at the end of the day. Second, it shows what a properly CPU-optimized game does. In that test, they ran it on a Ryzen 5 3600, 6C/12T, and clocked the CPU to around 3.7GHz to mimic the PS5 CPU. And paired it with a 2070. But what is more important is that in this test, the lowest utilized thread was at 50%, and the highest was at 89%, with everything in between in the high 70s. They even commended the game for this.

Then they ran it at 1080p with DLSS so they could push the GPU as far as they can, and they averaged 85fps. That is what proper optimization and CPU utilization looks like. And we should be calling devs out more often for their lazy approach to this, instead of just choosing to put all their code on one or two threads and expecting people to just have fast CPUs.

I dont know when or why people just accepted this poor CPU utilization as the norm, and instead chose to brute force our way through it by getting the fastest CPU we can afford, but it's not normal.

Oh one more....


That's Alan Wake 2, tested across a myriad of GPUs. Guess what its overall CPU utilization was averaging at all times? Across all the GPUs, from a 1070 all the way to a 4090..... under 24%. At some points, it was even as low as 11%.

That right there is the problem, imagine having a 16-thread CPU, and your overall CPU utilization is under 20% because the game engine is running on practically just one of your 16 threads. Then we come and say its CPU bottlenecked.
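For anyone who wants to check this kind of per-thread utilization themselves on Linux (tools like pidstat -t or top -H show the same data), here's a rough sketch that samples each thread of a running process over one second; the PID is whatever game or test process you point it at, and it's illustrative only.

```c
/* Rough sketch: per-thread CPU usage of a running process on Linux,
 * sampled over one second from /proc/<pid>/task/<tid>/stat.
 * utime and stime are the 14th and 15th fields, in clock ticks. */
#include <dirent.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define MAX_THREADS 256

/* Return utime+stime in clock ticks for one thread, or -1 on error. */
static long thread_ticks(const char *pid, const char *tid)
{
    char path[128], buf[1024];
    long utime, stime;
    snprintf(path, sizeof path, "/proc/%s/task/%s/stat", pid, tid);
    FILE *f = fopen(path, "r");
    if (!f) return -1;
    size_t n = fread(buf, 1, sizeof buf - 1, f);
    fclose(f);
    buf[n] = '\0';
    char *p = strrchr(buf, ')');   /* skip "pid (comm)"; comm may contain spaces */
    if (!p) return -1;
    /* After ')': state ppid pgrp session tty tpgid flags minflt cminflt majflt cmajflt utime stime */
    if (sscanf(p + 1, " %*s %*s %*s %*s %*s %*s %*s %*s %*s %*s %*s %ld %ld",
               &utime, &stime) != 2)
        return -1;
    return utime + stime;
}

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s <pid>\n", argv[0]); return 1; }
    const char *pid = argv[1];
    char dirpath[64], tids[MAX_THREADS][32];
    long before[MAX_THREADS];
    int n = 0;

    snprintf(dirpath, sizeof dirpath, "/proc/%s/task", pid);
    DIR *d = opendir(dirpath);
    if (!d) { perror("opendir"); return 1; }
    struct dirent *e;
    while ((e = readdir(d)) != NULL && n < MAX_THREADS) {
        if (e->d_name[0] == '.') continue;          /* skip "." and ".." */
        snprintf(tids[n], sizeof tids[n], "%s", e->d_name);
        before[n] = thread_ticks(pid, tids[n]);
        n++;
    }
    closedir(d);

    sleep(1);                                        /* one-second sampling window */

    long hz = sysconf(_SC_CLK_TCK);
    for (int i = 0; i < n; i++) {
        long after = thread_ticks(pid, tids[i]);
        if (before[i] < 0 || after < 0) continue;
        printf("tid %s: ~%ld%% of one core\n", tids[i], (after - before[i]) * 100 / hz);
    }
    return 0;
}
```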
 
Last edited:

Crayon

Member
I am sure I am sounding like a broken record now. But we really need to change where we point the blame when it comes to these things. Gonna give you an example of two games here. Both are on PC.

Jedi survivor



and Horizon forbidden west


For Jedi survivor, the Owen guy was trying to explain how its CPU bottlenecked, in response to people like me who were making the point, that what he describes as a CPU bottleneck isn't a bottleneck but rather underutilization. You can take a look at the video yourself and see what I mean. In JS, and the 8C/16t CPU he was running the game with, the highest clocked thread was at ~60%. The lowest was at 21%, and the rest anywhere in between that with most under 40%. And this was on a 7800X3D and with a 4090. The GPU was idling at 40%

That is not a bottleneck. It's what we, for whatever reason, call a CPU bottleneck, but it's not. It's bad game design. It's poor optimization. And should be critiqued the same way we would critique devs for using bad textures or having obvious bugs.

And then we have HFW. And I used that because it does two things, as you will see in the video. First, it blows Owens argument out of the water about how difficult it is to properly parallelize CPU tasks so everything has to run on one thread at the end of the day. Second, its shows what a properly CPU-optimized game does. In that test, they ran it on a Zen 5 3600. 6C/12t. And clocked the CPU to around 3.7Ghz to mimic the PS5 CPU. And paired it to a 4070. But what is more important, is that in this test, the lowest utilized thread was at 50%, and the highest was at 89%. With everything in between in the high 70s. They even commended the game for this.

Then they ran it in 1080p DLSS that way they can push the GPU as far as they can. And they averaged 85fps. That is what proper optimization and CPU utilization looks like. And we should be calling devs out more often for their lazy approach to this, and instead just choosing to put all; their code on one or two threads and expect ppl to just have fast CPUs.

I dont know when or why people just accepted this poor CPU utilization as the norm, and instead chose to brute force our way through it by getting the fastest CPU we can afford, but it's not normal.


 

yamaci17

Member
I have no power / control over what devs do / how devs code, and the vast majority of people keep buying these games. you don't need to lecture me about CPU utilization, I'm fully aware of it, but I still classify single-thread limitations as CPU bottlenecks because I, as an end user, have no control over it. it is a trend that will likely never stop, and me or you having a "this is not a cpu bottleneck though!" stance won't change anything. the solution / remedy will be the same: better IPC/single-thread performance, higher clocks, more cache etc. if i go out there and buy a cheap i3 cpu that has half the cores of my crappy 8-core zen+ CPU, i will get insanely better performance. and that is literally what a CPU bottleneck is for most casuals

it is just semantics in the end, and won't change what is required in real life scenarios

That is not a bottleneck. It's what we, for whatever reason, call a CPU bottleneck, but it's not. It's bad game design. It's poor optimization.

I'm not denying it is bad game design and poor optimization. but both can be true at the same time. it is a CPU bottleneck because of poor optimization. in the end, it is both poor optimization and a CPU bottleneck. being single thread limited still means that your main performance limiter, ON THE HARDWARE level, is the CPU. this is the "lazy dev" narrative. and I agree with you.

it is not even something you can fight against, not even against developers. literally the pc gaming userbase will fuel the fire. go on forums and say you get poor CPU performance in any random port with a zen+ or zen 2 CPU while the CPU is at 30% usage. even those PC folks will shame you and tell you to get a better CPU. it is just what it is. I too wish the world was different, but it is clear that not every dev is capable of making their game multithreaded.

again though, Cerny or whoever is responsible for consoles can do whatever they want. in the end most 3rd party titles by the end of the gen ran at unstable 30 fps on the ps4 pro as well. 1st party games being super multithreaded and running at extremely rock solid 30 fps does not interest me when games like control, avengers and guardians of the galaxy were dropping frames left and right on lastgen machines.

if you want happiness, I acknowledge that some games do it while not making it worthwhile. if thats gonna make ye happy.
 
Last edited:

Radical_3d

Member
The thing is: those of you who'd put silicon budget on the CPU over the GPU for a handful of games would be wasting resources. Those of you who'd make a more expensive Pro would be reaching a smaller market. And finally, those of you who'd make a more powerful Pro and take the hit on the profits would end up with a long and uncomfortable conversation with Totoki.
 

Mr.Phoenix

Member
I have no power / control over what devs do / how devs code. and vast majority of people keep buying these games. you don't need to lecture me about CPU utilization, I'm fully aware of it but I still classify single thread limitations as CPU bottlenecks because I, as an end user, have no control over it. it is a trend that is likely will never stop, and me or you having a "this is not a cpu bottleneck though!" stance won't change anything. the solution / remedy will be the same, better IPC/single thread performance, higher clocks, more cache etc.

it is just semantics in the end, and won't change what is required in real life scenarios



again it is semantics. I'm not denying it is bad game design and poor optimization. but both can be true at the same time. it is a CPU bottleneck because of poor optimization. in the end, it is both poor optimization and a CPU bottleneck.

it is not even something you can fight against, not even against developers. literally pc gaming userbase will fuel the fire. go on forums and say you get poor CPU performance in any random port with a zen+ or zen 2 CPU while CPU is at %30 usage. even those PC folks will shame you and tell you to get a better CPU. it is just what it is. I too wish world was different but it is clear that not every dev is capable of making their game multithreaded.

again though, Cerny or whoever responsible for consoles can do whatever they want. in the end most 3rd party titles by the end of the gen run at unstable 30 fps on the ps4 pro as well. 1st party games being super multihreaded and running at extremely rock solid 30 fps does not interest me when games like control, avengers, guardians of galaxy was dropping frames left and right on lastgen machines
I am sorry, but I can do something about it. And I have been doing something about it: I simply do not support devs that do it. And because we allow them to keep doing it, and no one... especially DF, or even people like you... ever calls them out for it, they keep doing it. And it keeps inflating what our system specs are supposed to be. Also, I get that the PC folk will always lean towards just getting a faster CPU, and that is honestly what I find irritating about bringing that kind of mindset into a console thread. Because consoles simply are not designed that way.

There is something I used to say, that if you gave devs a 10Ghz 16 core CPU and a GPU equivalent to two 4090s.... they would still find a way to make a 30fps game.

And I cant remember if it was you that was making the argument that all these 60fps games we have now are mostly cross-gen games... but something I wanted to ask on that.

Has it occurred to anyone that, because of how piss-poor the Jaguar CPUs were, devs actually had to do a better job optimizing their games for them? I mean, doing so was the only way to get decent performance out of them. If you ask me, the fact that the current-gen consoles' CPUs are so much better now is why a lot of devs do not bother with CPU optimization anymore. And I will never understand, nor subscribe to, this mindset of, after seeing what is possible from first-party devs, giving third-party devs a pass because "they have too much work to do". If they are collecting the same money from me, I will hold them to the same standards.

And I am sorry, but I do not believe we cannot do anything about it. If devs notice we give them a lot of stick and pushback for not including performance modes, or realize that games that lack a performance mode get shit on before they're even released, or notice that once we see a trailer for a game the first thing we ask is whether it has a performance mode... trust me... they will make sure there is a performance mode. It's worked so far.
 

yamaci17

Member
I simply do not support devs that do it

I personally have my "I'm not supporting this" moments, and it has never worked for me. Here's a personal experience: NBA 2K on PC. This series has massive input delay in online gameplay no matter how good your connection is. People just accept it and play the game as is. Massive problem, people complain about it all the time, keep buying the game, still complain. Me? I stopped buying after 2K16. Every year I look at the forums, see the same "laggy online" complaints and move on. Occasionally I jump into the discussion and "call out" the devs. What good did it do me? Nothing. It just won't change. It needs more than me; it needs the entire userbase to somehow muster a boycott of the game, and that is just not happening.

especially DF, or even people like you... ever calls them out for it, they keep doing it

This is just false. I've called out poor CPU optimization in games many times, and I've been doing the same for VRAM. Most of the time what I get back is "just upgrade haha", and I got tired of that and gave up. "Never" is a bold word when I was one of the most fierce critics of poor CPU optimization.

and that is honestly what I find irritating on bringing that kinda mindset into a console thread. Because consoles simply are not designed that way.

You say this, but then you should remember how they bumped the PS4 Pro CPU from 1.6GHz to 2.1GHz. That resulted in third-party PS4 games becoming even more unstable (below 25fps in the last 2-3 years of its useful lifetime) while the PS4 Pro barely kept those titles at 30fps. Consoles are very much designed in a similar way; it was clear devs had had enough of the 1.6GHz Jaguar.
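Just to put a rough number on that bump (assuming performance scales linearly with clock, which it rarely does exactly): 2.1GHz / 1.6GHz ≈ 1.31, so that was roughly a 31% clock uplift, which is where the usual "~30% more CPU" figure for the PS4 Pro comes from.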

"Has it occurred to anyone, that because of how piss-poor the Jaguar CPUs were, devs actually had to work on doing a better job optimizing their games for it? I mean, doing so was the only way you get decent performance out of it. If you ask me, I feel the fact that the current-gen consoles CPUs are so much better now, its why a lot of devs do not bother with CPU optimization anymore. And I will never understand, nor subscribe to this mindset, after seeing what is possible from first-party devs, give third-party devs a pass because "they have too much work to do". If they are collecting the same money from me, I will hold them to the same standards."

Problem is, they keep collecting the same money, if not more.

If it is any solace, I didn't buy Dragon's Dogma, nor will I be. I played Jedi Survivor through EA Play Pro (the local monthly subscription price was around 5 bucks). In the end I tend to avoid buying broken games at full price.

"If devs notice we give them a lot of stick and pushback on not including performance modes, "

That's the problem: it is not happening. Some people are at the point where they will tank a game's Steam review score to 30% but still keep the game and keep playing it. Devs keep earning their money. My stance changes nothing, just like with NBA 2K. I wish it did.

Also, the fact that I'm insisting on keeping my 2700X in 2024 should tell you something about my mentality on hardware upgrades. But I wouldn't really wish this on other people, or on console folks, or on the Pro, which is why asking for Zen 3 in a console should not be seen as a bad thing. I would even get a Zen 3 upgrade myself, but I see it as a hassle, though I could see myself doing it at some point. It is not like I'm asking the console to get a massive Zen 4 upgrade or something. As it stands, even the PS4 Pro got a decent 30% CPU bump. Going from Zen 2 to Zen 3 can lead to performance increases of up to 50% in weirdly single-threaded titles such as Flight Simulator. That is just too good of an IPC jump to ignore. But it is just my opinion in the end.
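And to translate those percentages into frame rates (purely illustrative numbers, assuming a game that is completely CPU-bound): a 30% CPU uplift takes a 45fps CPU limit to roughly 58fps (45 × 1.3 ≈ 58), while a 50% uplift takes a 40fps limit to a clean 60fps (40 × 1.5 = 60). That can be the difference between "almost 60" and an actual locked performance mode.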

I myself have called out DF, devs, etc. over VRAM-related problems in Ratchet & Clank and Spider-Man, and no one seemed to care. At times like that I just feel like an old man yelling at clouds.
 

Mr.Phoenix

Member
This is just false. I've called out poor CPU optimization in games many times, and I've been doing the same for VRAM. Most of the time what I get back is "just upgrade haha", and I got tired of that and gave up. "Never" is a bold word when I was one of the most fierce critics of poor CPU optimization.
My apologies, shouldn't have made the assumption. Having a different perspective doesn't mean you are part of the problem.
You say this, but then you should remember how they bumped the PS4 Pro CPU from 1.6GHz to 2.1GHz. That resulted in third-party PS4 games becoming even more unstable (below 25fps in the last 2-3 years of its useful lifetime) while the PS4 Pro barely kept those titles at 30fps. Consoles are very much designed in a similar way; it was clear devs had had enough of the 1.6GHz Jaguar.
The Jaguar CPU clearly needed it; that thing was hamstrung from the jump. I still stand by, and firmly believe, that when it comes to the PS5, for better or for worse, when Sony analyzes the performance data across every game released for the platform so far, they are going to be seeing sub-50% total CPU utilization. You can't make a case for a better CPU when that is the data they are looking at. As far as Sony is concerned: we have given them a great CPU, our devs have confirmed it's a great CPU and are doing wonders with it, so problem solved.

And the reason you will never see any third party complain about the CPU not having enough power is that they know that isn't true. They know they are not utilizing it properly.
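And just to spell out what a "total utilization" number like that means (hypothetical figures, since obviously none of us have Sony's telemetry): on the PS5's 16-thread CPU, a game whose main thread is pegged at 100% while the other 15 threads sit around 45% still averages out to (100 + 15 × 45) / 16 ≈ 48% total utilization. A sub-50% figure with a saturated main thread is exactly the "not utilizing it properly" problem, not proof that the silicon is tapped out.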
"Has it occurred to anyone, that because of how piss-poor the Jaguar CPUs were, devs actually had to work on doing a better job optimizing their games for it? I mean, doing so was the only way you get decent performance out of it. If you ask me, I feel the fact that the current-gen consoles CPUs are so much better now, its why a lot of devs do not bother with CPU optimization anymore. And I will never understand, nor subscribe to this mindset, after seeing what is possible from first-party devs, give third-party devs a pass because "they have too much work to do". If they are collecting the same money from me, I will hold them to the same standards."

Problem is, they keep collecting the same money, if not more.
Not from me though.
If it is any solace, I didn't buy Dragon's Dogma, nor will I be. I played Jedi Survivor through EA Play Pro (the local monthly subscription price was around 5 bucks). In the end I tend to avoid buying broken games at full price.
I haven't bought it either and won't until it is fixed. And I do not buy any game that is obviously broken, has glaring issues, or lacks a performance mode. What's funny is that I sometimes don't even use the performance mode, but I refuse on principle to buy any game without one.
 

Panajev2001a

GAF's Pleasant Genius


It is weird how the lyrics to that song adapt to the NUMA optimisation point you were making. The data really does not want to be migrated (“you want to leave code, but you are not going to take me with you [not going to let you take me]”) :D. Sorry, being silly ;).
 