
PS5 Pro Specs Leak Is Real, Releasing Holiday 2024 (Insider Gaming)

Bojji

Member
It uses a 200mm^2 GCD and 4x MCDs from Navi 31.


Yep, I thought anything below the top chips was non-chiplet, but actually only the 7600 is monolithic (and it doesn't support dual issue either).

[attached image: GPU spec comparison list]
 

winjer

Gold Member
Yep, I thought anything below the top chips was non-chiplet, but actually only the 7600 is monolithic (and it doesn't support dual issue either).

[attached image: GPU spec comparison list]

That is incorrect. The 7600 has VOPD.
AMD even lists its single-precision throughput at 21.75 TFLOPs, not the 10.8 in that list.
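For reference, here's where both figures come from. A quick back-of-envelope sketch in Python, assuming AMD's published 7600 specs (32 CUs, ~2.655 GHz boost clock); treat the numbers as approximate:

```python
# RX 7600 peak FP32 math (spec figures from AMD's product page; approximate).
cus = 32                   # compute units
lanes_per_cu = 64          # stream processors per CU
flops_per_fma = 2          # a fused multiply-add counts as 2 FLOPs
boost_ghz = 2.655          # advertised boost clock

single_issue = cus * lanes_per_cu * flops_per_fma * boost_ghz / 1000  # TFLOPs
dual_issue = single_issue * 2  # VOPD lets each lane issue a second FP32 op

print(f"single-issue: {single_issue:.2f} TFLOPs")  # ~10.87 (the '10.8' in that list)
print(f"dual-issue:   {dual_issue:.2f} TFLOPs")    # ~21.75 (AMD's quoted figure)
```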
 

Mr.Phoenix

Member
We'll see, but as I said, 45% sounds insanely low for two generations and a 4-year gap. I know PSSR and better RT are the big selling points, but still, you need a strong baseline to work with.
That 45% number is just misleading. I'd liken this PS5-to-PS5 Pro jump to what we saw going from the 1080 to the 2080. The raw raster or render performance jump wasn't anything to be proud of, but the generational advancements and use of new technologies set those two cards completely apart.

I believe that is going to be the case here with the PS5 Pro vs. the PS5.
 
That 45% number is just misleading. I'd liken this PS5-to-PS5 Pro jump to what we saw going from the 1080 to the 2080. The raw raster or render performance jump wasn't anything to be proud of, but the generational advancements and use of new technologies set those two cards completely apart.

I believe that is going to be the case here with the PS5 Pro vs. the PS5.
Ironic that you picked those two cards, because IMO the 1080 was a great card and the 2080 was an upgrade easily skipped.
 
We'll see, but as I said, 45% sounds insanely low for two generations and a 4-year gap. I know PSSR and better RT are the big selling points, but still, you need a strong baseline to work with.


I have little doubt this will be the case. 3080/4070 performance is pretty much exactly how the PS5 was positioned relative to the 2070S/1080 Ti.

I simply think that using precedents and looking at the market is more revelatory than spec sheets. Perhaps Sony has figured out something about dual-issue? Perhaps there is some major bottleneck in most DX12 games preventing compute from scaling more efficiently? It could be anything. Whatever the case, Sony is satisfied with those specs, so either they're great to begin with or PSSR is doing some major heavy lifting. I just don't see how they'd squeeze only 45% better performance out of those parts.

As I said though, we'll see. I still believe 4070/3080 level in rasterization and a similar level in ray tracing. Perhaps not quite as strong as the 4070 but maybe better than the 3080 in ray tracing.
That 45% is just an honest engineering average (think Cerny's 14 + 4 abstract concept) for the expected increase in resolution if nothing else is added. But that's not what's going to matter most here. If they use PSSR they won't even need to increase the resolution; they could actually lower it, get a better final image, and add RT on top.
 
That 45% is just an honest engineering average (think Cerny's 14 + 4 abstract concept) for the expected increase in resolution if nothing else is added. But that's not what's going to matter most here. If they use PSSR they won't even need to increase the resolution; they could actually lower it, get a better final image, and add RT on top.

Let's assume PSSR is around the quality of Intel's deep-learning upsampler. Along with a 45% raster improvement, that would make a huge difference vs. the baseline PS5 with commonly used FSR2.
 

truth411

Member
That 45% is just an honest engineering average (think Cerny's 14 + 4 abstract concept) for the expected increase in resolution if nothing else is added. But that's not what's going to matter most here. If they use PSSR they won't even need to increase the resolution; they could actually lower it, get a better final image, and add RT on top.
Exactly.
 

SlimySnake

Flashless at the Golden Globes
18 TFLOPs simply doesn't reconcile with the 45% more performance figure straight from Sony, especially with the clock speed increase. The XSX partially suffers from lower clock speeds, but 2.35 GHz is plenty.

Maybe we don't know the full picture despite these major leaks. Maybe the 60 CU GPU needs Infinity Cache even on consoles.

But if we go by 18 TFLOPs alone, that should put it closer to 4070 and 3080 performance even in standard rasterization, which would be pretty impressive and what I had wanted from a PS5 Pro.

But the proof is in the pudding, and if we are getting 80% more performance in line with the TFLOPs increase, then I would expect Spider-Man 2 to run at native 4K 60 fps with dynamic res doing what it is doing today, and Avatar running at 4K DLSS Performance (1080p internal) instead of 1440p DLSS Performance (720p internal) like it is running today. Same goes for HFW, Ratchet, Demon's Souls, and all other native 4K 30 fps games from Sony.
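As a sanity check on the numbers being argued here, a minimal sketch taking the leaked figures at face value (PS5 at ~10.28 TFLOPs, Pro at ~18 TFLOPs single-issue; both are assumptions from the leak, not confirmed specs):

```python
# Leaked figures taken at face value; assumptions, not confirmed specs.
ps5_tflops = 10.28   # 36 CUs at 2.23 GHz
pro_tflops = 18.0    # leaked single-issue figure

paper_uplift = pro_tflops / ps5_tflops - 1
print(f"on-paper compute uplift: {paper_uplift:.0%}")  # ~75%

sony_figure = 0.45   # Sony's quoted rendering improvement
print(f"Sony's quoted figure:    {sony_figure:.0%}")
# The gap between ~75% on paper and 45% quoted is exactly what's in dispute.
```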
 

Zathalus

Member
18 TFLOPs simply doesn't reconcile with the 45% more performance figure straight from Sony, especially with the clock speed increase. The XSX partially suffers from lower clock speeds, but 2.35 GHz is plenty.

Maybe we don't know the full picture despite these major leaks. Maybe the 60 CU GPU needs Infinity Cache even on consoles.

But if we go by 18 TFLOPs alone, that should put it closer to 4070 and 3080 performance even in standard rasterization, which would be pretty impressive and what I had wanted from a PS5 Pro.

But the proof is in the pudding, and if we are getting 80% more performance in line with the TFLOPs increase, then I would expect Spider-Man 2 to run at native 4K 60 fps with dynamic res doing what it is doing today, and Avatar running at 4K DLSS Performance (1080p internal) instead of 1440p DLSS Performance (720p internal) like it is running today. Same goes for HFW, Ratchet, Demon's Souls, and all other native 4K 30 fps games from Sony.
The raw performance uplift isn't that unusual. The 7700 XT is 17.5 TFLOPs if you don't count dual-issue, and it averages 42% faster than the 11 TFLOPs 6700.

Those are average numbers; optimized games will of course show a much bigger difference.
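Worked out, that comparison also hints at why 45% isn't crazy. A small sketch using the figures quoted above (the ~75% paper uplift for the Pro at the end is a hypothetical from the leaks):

```python
# 7700 XT vs 6700, using the TFLOPs and review figures quoted above.
xt_tflops, r6700_tflops = 17.5, 11.0
paper = xt_tflops / r6700_tflops - 1   # ~59% more compute on paper
measured = 0.42                        # ~42% faster on average, per the post

efficiency = measured / paper          # ~0.71 of the paper gap shows up in games
print(f"paper: {paper:.0%}, measured: {measured:.0%}, efficiency: {efficiency:.2f}")

# Same efficiency applied to a hypothetical ~75% paper uplift for the Pro:
print(f"implied real-world uplift: {0.75 * efficiency:.0%}")  # ~53%, near Sony's 45%
```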
 

nemiroff

Gold Member
Do you stare at the screen or the console when gaming?
No need to spew acid at those who only respectfully replied to me.

And yes, that's what I do with the consoles at home: mostly just look at them. It's my son(s) who play them.

But yes, I do have some general resistance against gamer aesthetics in our "open spaces" at home; it's definitely not just the PS5. As a PC gamer I don't even have a gamer chair. Anyway, the XSX and Switch are easy to "hide"; the PS5, not so much.

Anyway, I'm looking forward to adding the Pro to the stack; maybe I'll even sneak in some time on it.
 

Fafalada

Fafracer forever
So it does hide cache and memory latency by getting the data before it has to execute it.
Memory latencies are at least 10x bigger than L2 cache latency; OOOE just doesn't meaningfully help with those.

performance in line with the tflops increase then i would expect spiderman 2 to run at native 4k 60 fps with dynamic res doing what it is doing today
Leaked documents mentioned at least one game doing exactly that (we just don't know if it's Spider-Man or TLOU or GoW ...).
Mind you - there's a lot more that goes into a frame than just 'TFlop counts' - the linear scaling doesn't apply to all parts of the pipeline, some costs are largely static or barely scale with bigger GPUs, some others are resolution invariant, etc.
If that 45% was an attempt to capture all these variables, it may very well be conservative for a lot of cases.
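Fafalada's scaling point can be put in Amdahl's-law terms. A minimal sketch; the 30/70 split between fixed and compute-bound frame costs is purely illustrative:

```python
# Amdahl-style frame-time model; the 30/70 split is purely illustrative.
fixed = 0.30            # per-frame costs that don't scale with a bigger GPU
scaled = 0.70           # costs that scale ~linearly with compute
compute_ratio = 1.75    # hypothetical on-paper TFLOPs ratio (Pro vs PS5)

new_frame_time = fixed + scaled / compute_ratio
print(f"effective speedup: {1 / new_frame_time:.2f}x")
# ~1.43x despite 1.75x the compute -- a '45%' style figure can simply be
# an average over frames with different fixed-cost fractions.
```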
 

winjer

Gold Member
Memory latencies are at least 10x bigger than L2 cache latency; OOOE just doesn't meaningfully help with those.

Of course it helps hide latency. If you can get data into the caches before it is needed, instead of fetching from memory when it is requested, the memory latency is hidden.
This is basic computing knowledge.
 

Xyphie

Member
I think it's reasonable to assume that the Pro GPU will underperform in pure raster workloads relative to e.g. Navi 32 if it's only 2 Shader Engines. It may still only have 64 ROPs, on top of the wonky shader configuration with the shader arrays having uneven ALU counts. There hasn't been any indication that there's an L3 cache either, so the effective bandwidth would be less than e.g. the 4070, 3080, 7800 XT, 6800 XT et al.
 

Panajev2001a

GAF's Pleasant Genius
Of course it helps hide latency. If you can get data into the caches before it is needed, instead of fetching from memory when it is requested, the memory latency is hidden.
This is basic computing knowledge.
So, to the point that L2 and especially L3 cache misses are not covered by even obscene out-of-order ROB windows (Apple's A series can only look ahead ~600 instructions, and that is the window of opportunity to find non-dependent instructions to execute [224 on Zen 2]… and even then, with branches you may fall into replay traps and other hazards anyway), the comment is "well, not a problem, if you have the data in the cache before it is needed…"? Well… sure, but I think you are both saying the same thing :p.

OOOE engines are designed to maximise parallelism by finding non-dependent work that can be scheduled, and thus indirectly cover mostly L1 misses and, depending on L3 availability and latency, L2 misses too. If you are talking about covering misses beyond that, I think we are exaggerating. I need to look into more studies, but there was an older one that actually linked overly aggressive memory-instruction reordering and wide OOOE windows to higher miss rates and lower efficiency… so not trivial: https://citeseerx.ist.psu.edu/docum...&doi=8ffeda1abde50055e9b2308cc8c05c17e7dac2dc
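To put rough numbers on the window argument: a sketch with ballpark latencies and widths (none of these are measured figures; the DRAM latency in particular varies a lot):

```python
# Ballpark figures only -- can an OoO window cover a DRAM miss?
rob_entries = 224           # Zen 2 reorder buffer, per the post
issue_width = 4             # instructions retired per cycle, roughly
l2_hit_cycles = 12          # typical L2 hit latency
dram_cycles = 400           # ~100 ns at ~4 GHz, order of magnitude

window_cycles = rob_entries / issue_width
print(f"ROB buffers ~{window_cycles:.0f} cycles of independent work")  # ~56
print(f"L2 hit: {l2_hit_cycles} cycles, DRAM: ~{dram_cycles} cycles")
# ~56 cycles covers L1/L2-level latencies easily but falls far short of a
# full DRAM round trip -- which is the point both posters are circling.
```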
 

winjer

Gold Member
So, to the point that L2 and especially L3 cache misses are not covered by even obscene out-of-order ROB windows (Apple's A series can only look ahead ~600 instructions, and that is the window of opportunity to find non-dependent instructions to execute [224 on Zen 2]… and even then, with branches you may fall into replay traps and other hazards anyway), the comment is "well, not a problem, if you have the data in the cache before it is needed…"? Well… sure, but I think you are both saying the same thing :p.

OOOE engines are designed to maximise parallelism by finding non-dependent work that can be scheduled, and thus indirectly cover mostly L1 misses and, depending on L3 availability and latency, L2 misses too. If you are talking about covering misses beyond that, I think we are exaggerating. I need to look into more studies, but there was an older one that actually linked overly aggressive memory-instruction reordering and wide OOOE windows to higher miss rates and lower efficiency… so not trivial: https://citeseerx.ist.psu.edu/docum...&doi=8ffeda1abde50055e9b2308cc8c05c17e7dac2dc

According to Jim Keller, the effectiveness of modern branch prediction in CPUs is well above 95%, so pipeline stalls, or even a full pipeline flush, are relatively rare.
Even more so if we then add the effectiveness of modern software compilers.
OoO is not about maximizing parallelism. It's about reordering instructions so as to avoid pipeline stalls, by executing instructions whose dependencies are already resolved ahead of those still waiting.
For this, the CPU keeps an instruction queue to assess what data it will need next; if that data is missing, it reorders those instructions.
Of course, in modern superscalar processors, parallelism is almost a given. But having a good prefetcher, branch predictor, and out-of-order execution is vital to keep all those pipelines fed, with as few stalls as possible.

So although there are issues with OoO, the advantages far outstrip the disadvantages. And that is why all modern CPU cores are OoO. Even ARM's low-power cores are now OoO.
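The "95%+" figure translates into a small average cost per branch. A toy expected-value calculation (the 15-cycle flush penalty and 1-in-5 branch frequency are typical ballparks, not quoted specs):

```python
# Toy expected-cost calculation; penalty and branch frequency are ballparks.
accuracy = 0.95            # "well above 95%", using the floor
flush_penalty = 15         # cycles lost on a mispredict, typical modern core
branch_freq = 0.20         # roughly 1 in 5 instructions is a branch

per_branch = (1 - accuracy) * flush_penalty        # ~0.75 cycles per branch
per_instruction = per_branch * branch_freq         # ~0.15 cycles per instruction
print(f"{per_branch:.2f} cycles/branch, {per_instruction:.2f} cycles/instruction")
# Tiny average cost: this is why flushes are "relatively rare" in practice
# and why prediction plus OoO keeps the pipelines fed.
```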
 

Bojji

Member
lol they are doing the whole peak clocks will never be hit thing again. DF really can’t help themselves.

Yeah, but they said Sony states that this GPU is power-limited most of the time.

The situation with the PS5 was different: Cerny said that only some heavy operations on the CPU (like AVX) can drain power from the GPU and shift it to the CPU (SmartShift). Other than that, the console was locked at 2.23 GHz.
 

Imtjnotu

Member
Yeah, but they said Sony states that this GPU is power-limited most of the time.

The situation with the PS5 was different: Cerny said that only some heavy operations on the CPU (like AVX) can drain power from the GPU and shift it to the CPU (SmartShift). Other than that, the console was locked at 2.23 GHz.
But the PS5 isn't doing AVX during gaming. When Cerny talked about the PS5, it was also said the machine would never be fully running at 2.23 GHz, remember.
 

Lysandros

Member
Did Alex really say that PS5 doesn't support VRS? There are plenty of titles using VRS on PS5.
 

Bojji

Member
But the PS5 isn't doing AVX during gaming. When Cerny talked about the PS5, it was also said the machine would never be fully running at 2.23 GHz, remember.


Put simply, with race to idle out of the equation and both CPU and GPU fully used, the boost clock system should still see both components running near to or at peak frequency most of the time.

Did Alex really say that PS5 doesn't support VRS?

It doesn't have the VRS from the RDNA 2 feature set.
 

Lysandros

Member
Alex is literally the worst person to divulge and comment on technical info about the PS5 Pro. They are already distorting the leaks to fit their usual narratives about 'RDNA 1' and primitive/mesh shaders on PS5/XSX. Same old, same old. Take all this with a grain of salt.
 
Did Alex really say that PS5 doesn't support VRS? There are plenty of titles using VRS on PS5.

We went through all this when the PS5 and Series specs leaked. The PS5 doesn't support DX12 and neither does the PS5 Pro; VRS is a DirectX feature. Sony had a VRS-like feature before Microsoft, and it's referenced in the VRS patent that Microsoft filed; go look it up. Saying it's software or hardware is nonsense: there is no hardware that accelerates VRS. It's an algorithmic rendering process, and the GPU either has optimisations in software for it or it doesn't.

The PS5 and PS5 Pro have optimisations for Sony's flavour of variable rate shading. Sony's API has more optimisations than DirectX and is a better graphics API; it works at a lower level with fewer overheads. It's why PlayStation outperforms Xbox and why it always will. DirectX needs to be taken out to pasture and shot.
 

Zathalus

Member
Alex is literally the worst person to divulge and comment on technical info about the PS5 Pro. They are already distorting the leaks to fit their usual narratives about 'RDNA 1' and primitive/mesh shaders on PS5/XSX. Same old, same old. Take all this with a grain of salt.
Take information directly from Sony with a grain of salt?
 
lol they are doing the whole peak clocks will never be hit thing again. DF really can’t help themselves.
The good old days of static clocks vs. 8-9 TFLOPs are back. Next step: PS5 Pro doesn't have true RDNAX features, and they'll focus on dynamic clocks, low TFLOPs, and the weak CPU for the whole rest of the generation, like the good Microsoft propagandists/activists they always were.

Take information directly from Sony with a grain of salt?
You know what they are doing: the same thing they did with the PS5. Focus on a small part of the specs and build a narrative from there while ignoring the rest (the meat) of the specs.
 

SlimySnake

Flashless at the Golden Globes
Yeah but they said Sony tells that this GPU is power limited most of the time.

Situation with PS5 was different, Cerny said that only some hardcore operations on CPU (like AVX) can drain power from the GPU and get it to the CPU (smart shift). Other than that console was 2.23GHz locked.
I think that could be conjecture from DF. They pushed the peak-clock BS hard back then, and that was after Cerny did the tedious Road to PS5 show and a subsequent interview with Richard promising this would never happen.

Now they are reading from a doc with no guidance from Cerny and are likely arriving at the wrong conclusion. I honestly don't know why Sony would hit a peak of 36 TFLOPs and advertise only 33 TFLOPs. No marketing guy would ever allow that.
 

Zathalus

Member
You know what they are doing: the same thing they did with the PS5. Focus on a small part of the specs and build a narrative from there while ignoring the rest (the meat) of the specs.
What narrative exactly? That the Pro GPU is power-constrained and cannot hit max clocks in all games? That is directly from Sony themselves. That the Pro supports Tier 2+ VRS, which the normal PS5 doesn't? Again, directly from Sony themselves. That the Pro has full mesh shader support instead of primitive shaders (a minor difference really; as Alex said, it's simply programmatic)? Bingo, directly from Sony again.
 
What narrative exactly? That the Pro GPU is power-constrained and cannot hit max clocks in all games? That is directly from Sony themselves. That the Pro supports Tier 2+ VRS, which the normal PS5 doesn't? Again, directly from Sony themselves. That the Pro has full mesh shader support instead of primitive shaders (a minor difference really; as Alex said, it's simply programmatic)? Bingo, directly from Sony again.

Yeah, people here always forget that power and thermals on consoles are limited compared to a PC, where they put a 1000 W PSU in there with no conscience at all.

We don't even know the actual node that will be used for the PS5 Pro.
 
Now they are reading from a doc with no guidance from Cerny and are likely arriving at the wrong conclusion. I honestly don't know why Sony would hit a peak of 36 TFLOPs and advertise only 33 TFLOPs. No marketing guy would ever allow that.

What marketing?

As far as Sony is concerned, this thing doesn't exist yet. We are using the dev portal as the source.
 

S0ULZB0URNE

Member
What narrative exactly? That the Pro GPU is power-constrained and cannot hit max clocks in all games? That is directly from Sony themselves. That the Pro supports Tier 2+ VRS, which the normal PS5 doesn't? Again, directly from Sony themselves. That the Pro has full mesh shader support instead of primitive shaders (a minor difference really; as Alex said, it's simply programmatic)? Bingo, directly from Sony again.
Where did Sony say this?
 