
Digital Foundry - Playstation 5 Pro specs analysis, also new information

SlimySnake

Flashless at the Golden Globes
I am willing to bet it will. For whatever reason, that seems to be an area that devs always cheap out on. I say devs because there is no reason even on the PS5 as is that should be the case. Have you seen the frametime impact of enabling it on the PC? Practically non-existent.
Someone in the Dragon's Dogma 2 thread pointed out CPU utilization jumping up when AF was turned on in game.

That could be the reason why.
 

Gaiff

SBI’s Resident Gaslighter
I am willing to bet it will. For whatever reason, that seems to be an area that devs always cheap out on. I say devs because there is no reason even on the PS5 as is that should be the case. Have you seen the frametime impact of enabling it on the PC? Practically non-existent.

My guess is that devs don't bother with that because they feel that from the average viewing distance of a console gamer, it's not something they'd notice.
It’s probably a fair bit more complicated than that. In HFW, the AF is variable but never approaches 16x (which doesn’t even look much better than 8x anyway). It fluctuates between 4X and 8X.

In Rift Apart, it’s 4X in Performance Mode but 8X in Quality Mode.

Those are first-party titles too. AF on consoles is still presumably very costly as 16X is non-existent and 8X is rarely seen but 4X is common. I thought this would go away this gen but nah. That bandwidth still murders consoles whereas PCs don’t care.
 

Mr.Phoenix

Member
Someone in the Dragon's Dogma 2 thread pointed out CPU utilization jumping up when AF was turned on in game.

That could be the reason why.
So CPU hit...
It’s probably a fair bit more complicated than that. In HFW, the AF is variable but never approaches 16x (which doesn’t even look much better than 8x anyway). It fluctuates between 4X and 8X.

In Rift Apart, it’s 4X in Performance Mode but 8X in Quality Mode.

Those are first-party titles too. AF on consoles is still presumably very costly as 16X is non-existent and 8X is rarely seen but 4X is common. I thought this would go away this gen but nah. That bandwidth still murders consoles whereas PCs don’t care.
And bandwidth...

You guys are most likely both correct. Either way, it's something that I expect to still be an issue even on the PS5 Pro. Or at the very least, it wouldn't surprise me if it was.
 

Romulus

Member
Curious about PSVR2 improvements. I would guess many developers will take the easy road and just remove reprojection, but considering how GPU-taxing VR is, it could be glorious for some games.
 

SlimySnake

Flashless at the Golden Globes
AMD did say 50% improvement per CU.
Interesting? The difference in PT of the RX 7800 XT vs the RX 6800, and the RX 7600 vs the RX 6600, is above 50%.
7800xt should be compared with the 6800xt. 6800 is only around 16 tflops while the 7800xt is over 19 tflops and the 6800xt is over 21 tflops in game clocks.

7800xt is 25-30% faster than the 6800 which is a bit over the 20% tflops difference. It could be due to the faster memory bandwidth.

And while the 7800xt has better ray tracing performance than the 6800xt, it's only by around 9-10%. Granted, it has fewer CUs, so AMD is probably not lying here, but the 50% IPC gain is not translating 1:1 with flops. Maybe things change with RDNA4.

 

Radical_3d

Member
Hell, even the Apple you mentioned got it right on their first attempt too.
Technically, they'd already been doing hardware AI acceleration for years prior to their first ML core, in the part that manages the camera of the iPhones. So they had some experience. But now that I think about it, the same can be said about Sony as a broader company that also makes digital cameras… 🤔
 

Loxus

Member

System Memory

Standard PlayStation 5 – 448 GB/s (14 GT/s)
PlayStation 5 Pro – 576 GB/s (18 GT/s) – A 28% increase over the standard console.

Also outlined is that the PlayStation 5 Pro’s system memory is more efficient than the standard console, so the bandwidth gain may increase by over 28%.



Anyone know what the bolded is suggesting?
448GB/s + 28% = 573GB/s
or
576GB/s + 28% = 737GB/s

If it's suggesting (576 GB/s + 28%), that 737 GB/s may be effective bandwidth (576 + 161 = 737), meaning the PS5 Pro may be using Infinity Cache.

Depending on the hit-rate bandwidth, it could have between 8-48 MB of Infinity Cache.
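
For what it's worth, the raw figures are easy to sanity-check. A minimal Python sketch of the arithmetic, assuming the Pro keeps the base PS5's 256-bit GDDR6 bus (that bus width is an assumption, not something stated in the leak):

```python
# Rough bandwidth math for the figures above (256-bit bus is assumed).
def gddr6_bandwidth_gbps(transfer_rate_gtps, bus_width_bits=256):
    """Peak bandwidth in GB/s from transfer rate (GT/s) and bus width (bits)."""
    return transfer_rate_gtps * bus_width_bits / 8  # bits per transfer -> bytes

ps5 = gddr6_bandwidth_gbps(14)   # 448.0 GB/s
pro = gddr6_bandwidth_gbps(18)   # 576.0 GB/s
print(pro / ps5 - 1)             # ~0.286 -> the quoted "28% increase"

# The two possible readings of "may increase by over 28%":
print(448 * 1.28)                # ~573 GB/s (28% on top of the base PS5)
print(576 * 1.28)                # ~737 GB/s (28% on top of the Pro's raw figure,
                                 # i.e. an *effective* bandwidth, which is where
                                 # the Infinity Cache speculation comes in)
```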

 

Loxus

Member
7800xt should be compared with the 6800xt. 6800 is only around 16 tflops while the 7800xt is over 19 tflops and the 6800xt is over 21 tflops in game clocks.

7800xt is 25-30% faster than the 6800 which is a bit over the 20% tflops difference. It could be due to the faster memory bandwidth.

And while the 7800xt has better ray tracing performance than the 6800xt, it's only by around 9-10%. Granted, it has fewer CUs, so AMD is probably not lying here, but the 50% IPC gain is not translating 1:1 with flops. Maybe things change with RDNA4.

Not really,
It's about RT, so matching the number of RT cores is what should be done, not the tflops. Especially if it's Path Tracing.

Both the 7800xt and 6800 have the same number of RT cores.

Most of RDNA3's performance gain comes from running at a higher clock speed than RDNA2, though.
 

SlimySnake

Flashless at the Golden Globes

System Memory

Standard PlayStation 5 – 448 GB/s (14 GT/s)
PlayStation 5 Pro – 576 GB/s (18 GT/s) – A 28% increase over the standard console.

Also outlined is that the PlayStation 5 Pro’s system memory is more efficient than the standard console, so the bandwidth gain may increase by over 28%.



Anyone know what the bolded is suggesting?
448GB/s + 28% = 573GB/s
or
576GB/s + 28% = 737GB/s

If it's suggesting (576 GB/s + 28%), that 737 GB/s may be effective bandwidth (576 + 161 = 737), meaning the PS5 Pro may be using Infinity Cache.

Depending on the hit-rate bandwidth, it could have between 8-48 MB of Infinity Cache.

it would be amazing if they included infinity cache, but the fact that the 65% tflops increase is only giving them 45% more performance tells me that they are having the same bottlenecks as the XSX, which also didn't have infinity cache and couldn't properly utilize its 52 CUs.
 

paolo11

Member
I can’t wait for PSSR for performance mode. How likely is it that Square Enix will add PSSR to the FF7 trilogy remake and FF16?

I’m not going to expect fps improvements on quality mode. Or should I?
 

onQ123

Member
it would be amazing if they included infinity cache, but the fact that the 65% tflops increase is only giving them 45% more performance tells me that they are having the same bottlenecks as the XSX, which also didn't have infinity cache and couldn't properly utilize its 52 CUs.
That 45% increase is about the normal rendering pipeline, like the number of triangles & so on.

Much like when Xbox only claimed 4-6X the GPU power of Xbox One for Xbox Series X while having 9X the compute.


 

Loxus

Member
it would be amazing if they included infinity cache, but the fact that the 65% tflops increase is only giving them 45% more performance tells me that they are having the same bottlenecks as the XSX, which also didn't have infinity cache and couldn't properly utilize its 52 CUs.
Depends on how you look at it.
For example, using 54 CUs @ 2.23GHz.

PS5 Pro
54 × 2 × 32 × 2 × 2.23GHz = 15.41TF

PS5
36 × 2 × 32 × 2 × 2.23GHz = 10.28TF

10.28 + 45% = 14.9TF

Teraflops and performance don't scale linearly, so that 45% more rendering performance seems legit to me.


It matches the RT performance increase as well.

PS5 Pro (all CUs on)
54 × 8 × 2.23GHz = 963.36G/s ray-box
54 × 2 × 2.23GHz = 240.84G/s ray-tri
= 3× performance

PS5 Pro BC mode
36 × 8 × 2.23GHz = 642.24G/s ray-box
36 × 2 × 2.23GHz = 160.56G/s ray-tri
= 2× performance

PS5
36 × 4 × 2.23GHz = 321.12G/s ray-box
36 × 1 × 2.23GHz = 80.28G/s ray-tri


4× in some cases could be a best-case scenario.
Maybe with a possible High GPU Frequency Mode.

Obviously this is my speculation as it's 60/64 CUs, but 54/60 just works so well in nearly every aspect.
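
The arithmetic above does check out, for what it's worth. Here it is as a small Python sketch; the 54/36 CU split, the per-CU ray rates, and the 2.23GHz clock are all taken from the rumors discussed in this thread, not confirmed figures:

```python
CLOCK_GHZ = 2.23  # assumed Pro clock, same as the base PS5

def tflops(cus, clock_ghz=CLOCK_GHZ):
    # CUs x 2 SIMD32s x 32 lanes x 2 ops per clock (FMA), in TFLOPs
    return cus * 2 * 32 * 2 * clock_ghz / 1000

def ray_rates(cus, box_per_clock, tri_per_clock, clock_ghz=CLOCK_GHZ):
    # (ray-box, ray-triangle) intersection rates in G/s
    return cus * box_per_clock * clock_ghz, cus * tri_per_clock * clock_ghz

print(tflops(54))           # ~15.41 TF (speculated Pro, 54 CUs)
print(tflops(36))           # ~10.28 TF (PS5)
print(tflops(36) * 1.45)    # ~14.9 TF -> lines up with the "45% more rendering" figure

print(ray_rates(36, 4, 1))  # PS5:                 (321.12, 80.28)
print(ray_rates(36, 8, 2))  # Pro BC mode, 36 CUs: (642.24, 160.56) -> 2x PS5
print(ray_rates(54, 8, 2))  # Pro, all 54 CUs:     (963.36, 240.84) -> 3x PS5
```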
 
That 45% increase is about the normal rendering pipeline, like the number of triangles & so on.

Much like when Xbox only claimed 4-6X the GPU power of Xbox One for Xbox Series X while having 9X the compute.


Finally, the last piece of the puzzle! 45% is likely about the rendering of polygons and raster performance, not compute. 96 ROPs at 2.18 GHz are 45% more performant than 64 ROPs at 2.23 GHz!

Well done!

EDIT: But it doesn't work with 2 shader engines! Annoyingly.
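
Quick sanity check on that ratio (the 96 ROPs and 2.18 GHz come from the leak and aren't confirmed); it lands a touch above 45% but in the same ballpark:

```python
# Peak pixel fillrate scales with ROPs x clock (Gpixels/s).
ps5_fillrate = 64 * 2.23   # ~142.7
pro_fillrate = 96 * 2.18   # ~209.3
print(pro_fillrate / ps5_fillrate - 1)  # ~0.47, roughly the quoted 45% uplift
```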
 

Loxus

Member
Finally, the last piece of the puzzle! 45% is likely about the rendering of polygons and raster performance, not compute. 96 ROPs at 2.18 GHz are 45% more performant than 64 ROPs at 2.23 GHz!

Well done!
It's impossible to get 96 ROPs with 2 Shader Engines though.

This is the 7900xtx die shot.
You can see 32 ROPs per Shader Engine with RB+.


If the PS5 Pro has 2 Shader Engines, it's either 64 ROPs or 128 ROPs.
 
It's impossible to get 96 ROPs with 2 Shader Engines though.

This is the 7900xtx die shot.
You can see 32 ROPs per Shader Engine with RB+.


If the PS5 Pro has 2 Shader Engines, it's either 64 ROPs or 128 ROPs.

This was in reply to a leak which claimed the ROP count was 96.



The SE configuration talk usually goes way above my head anyway.
 

onQ123

Member
Finally, the last piece of the puzzle! 45% is likely about the rendering of polygons and raster performance, not compute. 96 ROPs at 2.18 GHz are 45% more performant than 64 ROPs at 2.23 GHz!

Well done!

Also, for the people saying you can't get 60fps from 30fps PS5 games, this is something to think about.



If a game is in-between 30fps & 60fps unlocked, like ~42fps, devs will choose a locked 30fps, but 45% more rendering power would give you 60fps.

If it is bottlenecked by the rendering pipeline, that is.
 

Tqaulity

Member
I’m not going to expect fps improvements on quality mode. Or should I?
You absolutely should as that would be the primary motivation for the Pro: to take base PS5 fidelity modes and push to 60fps. The combination of the GPU perf increase and PSSR should absolutely be able to take a 2160p/30 title and push to 4K/60fps.

The leaked documents already alluded to this as the "Game1" use case mentions fidelity mode quality at 60fps.
 

Bojji

Member
You absolutely should as that would be the primary motivation for the Pro: to take base PS5 fidelity modes and push to 60fps. The combination of the GPU perf increase and PSSR should absolutely be able to take a 2160p/30 title and push to 4K/60fps.

The leaked documents already alluded to this as the "Game1" use case mentions fidelity mode quality at 60fps.

But it won't be native 4K after that, "something x something" resolution reconstructed to 4K via PSSR.
 
2.18GHz is from the "2 Shader Engines with 60/64 CUs" rumor.

Anything with clocks lower than the PS5's 2.23GHz just doesn't seem right to me.

Despite the leaks, there's just too much we don't know yet. We need a real presentation by Cerny.

It could take a while though.
 

shamoomoo

Member
7800xt should be compared with the 6800xt. 6800 is only around 16 tflops while the 7800xt is over 19 tflops and the 6800xt is over 21 tflops in game clocks.

7800xt is 25-30% faster than the 6800 which is a bit over the 20% tflops difference. It could be due to the faster memory bandwidth.

And while the 7800xt has better ray tracing performance than the 6800xt, it's only by around 9-10%. Granted, it has fewer CUs, so AMD is probably not lying here, but the 50% IPC gain is not translating 1:1 with flops. Maybe things change with RDNA4.

But the 7800 XT and 6800 non-XT both have 60 CUs and ditto for the 6600XT/50 vs the 7600. The difference would be in architectural improvements from different generations.
 

SlimySnake

Flashless at the Golden Globes
But the 7800 XT and 6800 non-XT both have 60 CUs and ditto for the 6600XT/50 vs the 7600. The difference would be in architectural improvements from different generations.
If the clocks were the same, yes, but 7800xt clocks are around 2.45 GHz while the 6800 tops out around 2.2 GHz, hence the tflops difference.
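
Put differently, at the same 60 CUs the tflops gap is basically just the clock gap (ignoring RDNA3's dual-issue FP32, which none of these figures count). A rough sketch using the clocks quoted here; actual sustained game clocks vary by card and workload:

```python
def rdna_tflops(cus, clock_ghz):
    # 64 shaders per CU x 2 ops per clock (FMA), single-issue figures
    return cus * 64 * 2 * clock_ghz / 1000

print(rdna_tflops(60, 2.45))  # ~18.8 TF (7800 XT at ~2.45 GHz)
print(rdna_tflops(60, 2.2))   # ~16.9 TF (6800 at ~2.2 GHz)
```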
 

shamoomoo

Member
If the clocks were the same, yes, but 7800xt clocks are around 2.45 GHz while the 6800 tops out around 2.2 GHz, hence the tflops difference.
I just looked it up, but the clock difference between the 7600 vs 6600xt is marginal, and I believe the cache amount is the same, and there still is a difference in PT.
 

Bojji

Member
I just looked it up, but the clock difference between the 7600 vs 6600xt is marginal, and I believe the cache amount is the same, and there still is a difference in PT.

Yeah, PT shows big differences (it's abysmal on RDNA2) but standard RT? Not so much.
 

SlimySnake

Flashless at the Golden Globes
I can’t wait for PSSR for performance mode. How likely is it that Square Enix will add PSSR to the FF7 trilogy remake and FF16?

I’m not going to expect fps improvements on quality mode. Or should I?
This is what Sony said about FPS improvements in 30 fps modes. FF7 has a native 4k mode and a 1440p mode.

Game 1 mostly fits the bill, except FF7 has a native 4k mode and a 1440p mode instead of 1800p and 1080p. The 1440p mode sits around 1152p most of the time, which is why it looks so rough.

 

Tqaulity

Member
But it won't be native 4K after that, "something x something" resolution reconstructed to 4K via PSSR.
It doesn't matter if it's native or not. What matters is whether you'll be able to tell the difference when playing. Isn't that what DLSS, FSR, TSR, etc. are all about? Anytime you're playing with DLSS on, it's technically not native, but the differences are negligible in most cases and quality even exceeds native in some cases.

This is the world we live in and will be the future going forward. We CANNOT get the perf gains we've been used to with just pure HW alone. We're literally hitting the limits of what we can do in terms of transistor size shrinks, thermal envelopes, and cost/transistor (which is actually increasing now over time). Using SW and AI will be staples going forward for performance improvements in gaming. For better and worse...
 

shamoomoo

Member
What's the CU count on the 7600?

I was talking about the 7800xt vs 6800.
It's 32 vs 32 with the same amount of cache; the difference between each GPU is clock frequency. Though, the difference is too small to make such a difference in PT.

I bring up the lower-end RDNA2/3 GPUs because they are a closer overall comparison than the 7800 XT vs the 6800 non-XT.
 

SlimySnake

Flashless at the Golden Globes
It's 32 vs 32 with the same amount of cache; the difference between each GPU is clock frequency. Though, the difference is too small to make such a difference in PT.

I bring up the lower-end RDNA2/3 GPUs because they are a closer overall comparison than the 7800 XT vs the 6800 non-XT.
6800 has 60 CUs.

7600 has 32 CUs. Your information is incorrect.

They are two completely different classes of cards.
 

shamoomoo

Member
6800 has 60 CUs.

7600 has 32 CUs. Your information is incorrect.

They are two completely different classes of cards.
I mentioned both GPUs because there is a corresponding RDNA3 version. The 7800 XT vs 6800 non-XT both have 60 CUs, and the 7600 vs 6600/50 XT both have 32 CUs and the same amount of cache; the biggest difference is clock frequency, along with the 7600's memory bandwidth being about 3% faster.

In PT, the 7600 is 58% faster than the 6650.
 

paolo11

Member
You absolutely should as that would be the primary motivation for the Pro: to take base PS5 fidelity modes and push to 60fps. The combination of the GPU perf increase and PSSR should absolutely be able to take a 2160p/30 title and push to 4K/60fps.

The leaked documents already alluded to this as the "Game1" use case mentions fidelity mode quality at 60fps.
Nice. Can I see that leak?
 

paolo11

Member
This is what Sony said about FPS improvements in 30 fps modes. FF7 has a native 4k mode and a 1440p mode.

Game 1 mostly fits the bill, except FF7 has a native 4k mode and a 1440p mode instead of 1800p and 1080p. The 1440p mode sits around 1152p most of the time, which is why it looks so rough.

So per the leak, game 1 on PS5 Pro is 1440p at 60fps using PSSR. Is that the main target, to make it close to 1800p 60fps?
 

Bojji

Member
1440p (native res) @ 60 FPS using PSSR to upscale to 1800p/4K will most likely look better than Fidelity Mode 1800p @ 30 FPS.

Obviously.

IF PSSR quality is close to DLSS, then even 1080p -> 4K 60FPS will look better than 4K 30FPS, maybe slightly worse in image quality alone (in still shots) but in motion it will be better.
 

Fafalada

Fafracer forever
EDIT: But it doesn't work with 2 shader engines! Annoyingly
128 ROPs @ 2GHz would never reach full utilisation with that bandwidth - effective performance would likely be no different.
PS4 Pro doubled the ROPs too, and real-world it was more like a 33% improvement in pixel-fill (barring some very special edge cases).
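
To put rough numbers on that (a simplified sketch; the 4 bytes per pixel written, no blending, and ignoring delta color compression are all my assumptions):

```python
# Color write traffic implied by peak ROP throughput vs available bandwidth.
rop_rate_gpix = 128 * 2.0           # 256 Gpixels/s of peak fill
writes_gbps = rop_rate_gpix * 4     # ~1024 GB/s just for 32-bit color writes
print(writes_gbps, "GB/s needed vs ~576 GB/s of memory bandwidth")
```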

So per the leak, game 1 on PS5 Pro is 1440p at 60fps using PSSR. Is that the main target, to make it close to 1800p 60fps?
If the upscale quality is actually good - it would likely exceed the 1800p@30 result as far as IQ goes.
Fun part is - the same kind of study on PS4 Pro had 2 first-party games reach 1800p using CBR, and one stuck at 1440p. We're talking almost exactly the same kind of scaling here based on that leak.
 
128 ROPs @ 2GHz would never reach full utilisation with that bandwidth - effective performance would likely be no different.
PS4 Pro doubled the ROPs too, and real-world it was more like a 33% improvement in pixel-fill (barring some very special edge cases).


If the upscale quality is actually good - it would likely exceed the 1800p@30 result as far as IQ goes.
Fun part is - the same kind of study on PS4 Pro had 2 first-party games reach 1800p using CBR, and one stuck at 1440p. We're talking almost exactly the same kind of scaling here based on that leak.
Right. But what do you make of the 45% better rendering claim? What are they talking about?
 

SlimySnake

Flashless at the Golden Globes
Obviously.

IF PSSR quality is close to DLSS, then even 1080p -> 4K 60FPS will look better than 4K 30FPS, maybe slightly worse in image quality alone (in still shots) but in motion it will be better.
I don't know if I agree. I think native 4k 30 fps with good AA looks better than even DLSS Quality, let alone Performance, which is way softer in comparison.

I like DLSS but it's no substitute for native 4k rendering. In motion or while standing still.
 

Gaiff

SBI’s Resident Gaslighter
I don't know if I agree. I think native 4k 30 fps with good AA looks better than even DLSS Quality, let alone Performance, which is way softer in comparison.

I like DLSS but it's no substitute for native 4k rendering. In motion or while standing still.
Depends on the game. Sometimes, 4K DLSS Quality has less shimmering and breakup than even native with TAA. It also resolves finer details better 9/10 times. I wouldn’t call it unequivocally superior but it can get close and do some things better.

This is highly dependent on the game though.
 

Mr.Phoenix

Member
Right. But what do you make of the 45% better rendering claim? What are they talking about?
If taken literally (which we shouldn't, but to keep things simple), it means, for game X:

- PS5 fidelity mode, 4K@30fps = PS5 Pro fidelity mode, 4K@44fps.
- PS5 performance mode, 1080p@60fps = PS5 Pro performance mode, 1080p@87fps.

That's a very basic way of describing it, but it should give you a general idea (quick math sketched below). So what would a PS5 Pro do?

- PS5 Pro fidelity mode, 1296p-1440p + PSSR > 4K@60fps (should get you something that looks close to, similar to, or even better than the base PS5 fidelity mode)
- PS5 Pro performance mode, 720p-900p + PSSR > 1440p@80fps+/unlocked.
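
The math behind those example numbers, for anyone who wants to plug in their own framerates (a simplification; real games rarely scale this cleanly with rendering throughput):

```python
# Scaling a locked framerate by the claimed 45% rendering uplift.
for base_fps in (30, 42, 60):
    print(f"{base_fps} fps -> {base_fps * 1.45:.1f} fps")
# 30 -> 43.5 (rounds to the ~44 above), 42 -> 60.9, 60 -> 87.0
```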
I don't know if I agree. I think native 4k 30 fps with good AA looks better than even DLSS Quality, let alone Performance, which is way softer in comparison.

I like DLSS but it's no substitute for native 4k rendering. In motion or while standing still.
It varies from game to game, but one thing that's for certain is that you are getting a better-performing game overall that doesn't have to make the kind of cuts they currently make to get 60fps modes running.

Obviously.

IF PSSR quality is close to DLSS, then even 1080p -> 4K 60FPS will look better than 4K 30FPS, maybe slightly worse in image quality alone (in still shots) but in motion it will be better.
Isn't that supposed to be the other way around? It will look as good or even better in still shots, but in motion, artifacts may start becoming apparent. Unless you are talking about "in motion" as in having higher framerates.
 

sendit

Member
Obviously.

IF PSSR quality is close to DLSS, then even 1080p -> 4K 60FPS will look better than 4K 30FPS, maybe slightly worse in image quality alone (in still shots) but in motion it will be better.

Obviously. 30 FPS has fewer frames per second than 60 FPS.
 

sendit

Member
I don't know if I agree. I think native 4k 30 fps with good AA looks better than even DLSS Quality, let alone Performance, which is way softer in comparison.

I like DLSS but it's no substitute for native 4k rendering. In motion or while standing still.
 