
Digital Foundry - Playstation 5 Pro specs analysis, also new information

This should help you with Ray Tracing performance.


Ray tracing performance

This is based on the Hot Chips XSX info. I assume the RT used in XSX and PS5 is the same RDNA2 RT.

The XSX specs are:

Either 4 texture or 4 ray ops per CU per clock. Ray intersection unit is in the texture sampler.

4 × 52 × 1.825GHz = 380G/sec ray-box theoretical peak performance.

1 × 52 × 1.825GHz = 95G/sec ray-triangle ops peak, i.e. 1 ray-triangle intersection per CU per clock.

Applying that to PS5:

4 × 36 × 2.23GHz = 321G/s ray-box

1 × 36 × 2.23GHz = 80G/s ray-triangle


Edit:
Assuming RDNA4 is now 8 Ray/Box and 2 Ray/Triangle.

PS5 Pro Ray Tracing performance should look like this.
For example, if using 54CUs and +10% of PS5 clocks.
(2.23GHz + 10% = 2.45GHz)

8 Ray/Box × 54CU × 2.45GHz = 1,058.4G/sec ray-box
2 Ray/Tri × 54CU × 2.45GHz = 264.6G/sec ray-triangle
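
A minimal Python sketch of the arithmetic above, assuming (as this post does) that peak rate is simply intersection ops per CU per clock × CU count × clock speed; real-world throughput will be far lower:

```python
def peak_gops(ops_per_cu_per_clock, cus, clock_ghz):
    """Theoretical peak intersection tests, in billions per second."""
    return ops_per_cu_per_clock * cus * clock_ghz

# RDNA2-style rates: 4 ray-box, 1 ray-triangle per CU per clock
print(peak_gops(4, 52, 1.825))  # XSX ray-box: ~380 G/s
print(peak_gops(1, 52, 1.825))  # XSX ray-tri: ~95 G/s
print(peak_gops(4, 36, 2.23))   # PS5 ray-box: ~321 G/s
print(peak_gops(1, 36, 2.23))   # PS5 ray-tri: ~80 G/s

# Speculative RDNA4-style rates: 8 ray-box, 2 ray-triangle per CU per clock
box = peak_gops(8, 54, 2.45)    # 1058.4 G/s ray-box
tri = peak_gops(2, 54, 2.45)    # 264.6 G/s ray-triangle

# Per-frame budget at 60 fps (relevant to the per-frame question further down)
print(box / 60)                 # ~17.6 G ray-box tests per frame
```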

Does this scale linearly or exponentially?
 

Mr.Phoenix

Member
I don't know much about Ray Tracing; it's complicated. Hopefully someone else will chime in, but here's my understanding.

Boxes and triangles are what the geometry (objects) in games is made of.



Within a RT unit is an intersection engine, which can calculate the intersection of rays (light) with boxes and triangles.


The numbers earlier are basically how fast the GPU can calculate these intersections.


BVH is more of a technique.
Ray Tracing
Bounding Volume Hierarchy (BVH) is a popular ray tracing acceleration technique that uses a tree-based “acceleration structure” that contains multiple hierarchically-arranged bounding boxes (bounding volumes) that encompass or surround different amounts of scene geometry or primitives. Testing each ray against every primitive intersection in the scene is inefficient and computationally expensive, and BVH is one of many techniques and optimizations that can be used to accelerate it.

The BVH can be organized in different types of tree structures and each ray only needs to be tested against the BVH using a depth-first tree traversal process instead of against every primitive in the scene. Prior to rendering a scene for the first time, a BVH structure must be created (called BVH building) from source geometry. The next frame will require either a new BVH build operation or a BVH refitting based on scene changes.
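
To make the "depth-first tree traversal" in that description concrete, here is a toy Python sketch of what a single ray does against a BVH. Purely illustrative: the Node layout is invented for the example, and real hardware runs the box/triangle tests in the intersection engines described above.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    lo: tuple                                      # AABB min corner (x, y, z)
    hi: tuple                                      # AABB max corner (x, y, z)
    children: list = field(default_factory=list)   # inner nodes
    triangles: list = field(default_factory=list)  # only leaves hold geometry

def ray_hits_box(origin, inv_dir, lo, hi):
    """Slab test: the ray-box intersection the RT units accelerate.
    inv_dir holds 1/direction per axis."""
    tmin, tmax = 0.0, float("inf")
    for o, d, l, h in zip(origin, inv_dir, lo, hi):
        t1, t2 = (l - o) * d, (h - o) * d
        tmin = max(tmin, min(t1, t2))
        tmax = min(tmax, max(t1, t2))
    return tmin <= tmax

def traverse(origin, inv_dir, node, out_tris):
    """Depth-first traversal: one box test can cull an entire subtree."""
    if not ray_hits_box(origin, inv_dir, node.lo, node.hi):
        return
    if node.triangles:                  # leaf: these go on to ray-triangle tests
        out_tris.extend(node.triangles)
        return
    for child in node.children:
        traverse(origin, inv_dir, child, out_tris)
```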


Thanks for this.

So in the example you gave for the PS5 Pro RT: 1058G/sec would actually be more like 17G/frame in a 60fps game, right?
 

winjer

Gold Member
Does this scale linearly or exponentially?

Neither. The main issue with RT is instruction coherency and being able to keep instructions in flight.
This is why, despite all the improvements to RDNA3, performance is almost identical to RDNA2. At least on the small RDNA3 and RDNA2.
The problem is the instruction cache. RDNA2 and small RDNA3 all have only 128KB of instruction cache that the RT units can use. This limits the number of instructions that can be kept in flight.
So most RT units and shaders end up underutilized, close to 30-40%.
The big RDNA3 has 192KB, so it can keep more instructions in flight and achieve better utilization of the hardware.

The question is how AMD is going to improve this in RDNA4. Maybe with more instruction cache, or maybe with a better scheduler for RT.
Nvidia already had some success with SER. Intel has something similar to improve scheduling.
But even with AMD and Nvidia GPUs, there are still many problems with hardware underutilization.
Having more RT units improves performance almost linearly. But it still means a lot of units sit idle.
 
Last edited:

Ivan

Member
DF really deserves to be called out. And that "strictly from a high end PC user perspective" analysis sounds completely amateurish, even if they would like to be perceived as the most knowledgeable people on the planet.

It's like a few beyond3d geeks had all the power in the world and got too much attention. I'm glad they're being called out more and more.

Especially with their stupid chuckling while doing all that shit standard youtube shills do. Alex - the pretty boy deserves all the flak he gets for attitude only.

I don't know if this was posted, but it seems interesting IF TRUE: this guy mentions some EA developers from Respawn were commenting on PSSR, and they allowed him to say that they achieved 4K 120 via PSSR in Jedi Survivor easily:

It is timestamped:



It is one game under certain conditions of course, but it could be telling for a lot of titles. It would mean that their solution is pretty good.
 
Last edited:

HeisenbergFX4

Gold Member
DF really deserves to be called out. And that "strictly from a high end PC user perspective" analysis sounds completely amateurish, even if they would like to be perceived as the most knowledgeable people on the planet.

It's like a few beyond3d geeks had all the power in the world and got too much attention. I'm glad they're being called out more and more.

Especially with their stupid chuckling while doing all that shit standard youtube shills do. Alex - the pretty boy deserves all the flak he gets for attitude only.

I don't know if this was posted, but it seems interesting IF TRUE: this guy mentions some EA developers were commenting on PSSR, and they allowed him to say that they achieved 4K 120 via PSSR in Jedi Survivor easily:

It is timestamped:



It is one game under certain conditions of course, but it could be telling for a lot of titles. It would mean that their solution is pretty good.

Tom Henderson commented on that Jedi rumor so who knows who is right.

 

Fafalada

Fafracer forever
So in the example you gave for the PS5 Pro RT: 1058G/sec would actually be more like 17G/frame in a 60fps game, right?
Sure, but the metric itself just tells you how fast intersections are computed; it's not about counting the totals.
You will never reach those peak-totals anyway - intersection tests are either/or (you obviously don't do both tri and box tests every cycle), tests are only run when reaching a new node (that's where 'traversal costs' come into play), and a chunk of the GPU frame will be consumed by BVH updates (contrary to popularly held beliefs - even in this thread - the majority of RT BVH updating work* is done by the GPU - not the CPU).

And that's 'just' the costs for raw RT processing - obviously GPU will also execute the rest of the work during a frame - so the % of frame-time you'll get for RT will be some fraction of total frame (even in Path tracing - you share time with shader-compute at least).
But what the number above does tell you is that any given intersection-test costs 'x' nano-seconds, so you know how fast they'll go when they do run.

*I don't know where we are with dedicated ASICs for this, afaik it's still done on GPU compute today in most cases, but I know some vendors have been looking at it (especially Intel) so maybe there is something out on the market already.

Anyway above word salad aside - where conversation on RT performance gets interesting is accelerating things outside of intersection testing. Node traversal, scheduling tests/compute, BVH updates etc. So saying RT is up to 4x faster by definition means more than simply doing 4x more intersection tests.

Does this scale linearly or exponentially?
If you're referring to 'total number of intersection tests' per scene - then neither. It's O(logN) - that's the primary purpose for having the acceleration structure in the first place.
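
To put a number on that, a back-of-envelope sketch assuming an idealized balanced binary BVH (real builders only approximate this): a ray through a million-triangle scene needs on the order of 20 box tests instead of a million triangle tests.

```python
import math

triangles = 1_000_000
brute_force = triangles                      # test the ray against every primitive
bvh_depth = math.ceil(math.log2(triangles))  # box tests down a balanced binary tree
print(brute_force, bvh_depth)                # 1000000 vs 20
```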
 

Radical_3d

Member
DF really deserves to be called out. And that "strictly from a high end PC user perspective" analysis sounds completely amateurish, even if they would like to be perceived as the most knowledgeable people on the planet.

It's like a few beyond3d geeks had all the power in the world and got too much attention. I'm glad they're being called out more and more.

Especially with their stupid chuckling while doing all that shit standard youtube shills do. Alex - the pretty boy deserves all the flak he gets for attitude only.

I don't know if this was posted, but it seems interesting IF TRUE: this guy mentions some EA developers from Respawn were commenting on PSSR, and they allowed him to say that they achieved 4K 120 via PSSR in Jedi Survivor easily:

It is timestamped:



It is one game under certain conditions of course, but it could be telling for a lot of titles. It would mean that their solution is pretty good.

120fps? But… but… but GAF told me the CPU wasn’t fast enough U__U I can’t believe I’ve been foiled by the internet.
 

Ivan

Member
We're going to have to see if that rumor is 🐖 :messenger_poop: first :messenger_grinning_sweat:.

But I have a feeling that ~120 via PSSR will be there in games that have 60 fps performance modes now. Without that easily marketable 120 fps differentiator I really don't see a way for them to make people buy the Pro. Just "better RT, more stable frame rate, higher resolution" isn't going to cut it. They've got to have SOMETHING meaningful, like 4K was on PS4 Pro.
 
Last edited:

Loxus

Member
Sure, but the metric itself just tells you how fast intersections are computed; it's not about counting the totals.
You will never reach those peak-totals anyway - intersection tests are either/or (you obviously don't do both tri and box tests every cycle), tests are only run when reaching a new node (that's where 'traversal costs' come into play), and a chunk of the GPU frame will be consumed by BVH updates (contrary to popularly held beliefs - even in this thread - the majority of RT BVH updating work* is done by the GPU - not the CPU).
Are you sure?
There was big talk that the CPU manages the BVH when Spiderman first released on PC, which led many to believe this is how the PS5 handles RT.



You can even see the CPU utilization go up with RT enabled.
[CPU utilization screenshots]


I can't remember if this was ever resolved.
 
Last edited:

winjer

Gold Member
Are you sure?
There was big talk that the CPU manages the BVH when Spiderman first released on PC, which led many to believe this is how the PS5 handles RT.



I can't remember if this was ever resolved.


CPUs are great for BVH traversal, because they are very good at branching and dealing with dependencies.
On a console, because the CPU is right next to the GPU, it's easy to pass data between the two. So it makes sense.
But on PC, the data has to go through the PCIe bus, and that adds latency and can put pressure on PCIe bandwidth on older gens.
 

Gaiff

SBI’s Resident Gaslighter
Are you sure?
There was big talk that the CPU manages the BVH when Spiderman first released on PC, which led many to believe this is how the PS5 handles RT.



You can even see the CPU utilization go up with RT enabled.
[CPU utilization screenshots]


I can't remember if this was ever resolved.

It can be done by either. In Frontiers of Pandora, it's done on the GPU on consoles but on the CPU on PC. It's the same for Spider-Man on PC at least. Dunno about the PS5 version.
 
Last edited:

Radical_3d

Member
Not funny and you know it. The general market doesn't think like that. We have 4K consoles now in their view (and it's not far from the truth, in a way).
The general market isn't gonna spend one penny above what is needed to get a console. It's just us, you know.
 

Mr.Phoenix

Member
Sure, but the metric itself just tells you how fast intersections are computed; it's not about counting the totals.
You will never reach those peak-totals anyway - intersection tests are either/or (you obviously don't do both tri and box tests every cycle), tests are only run when reaching a new node (that's where 'traversal costs' come into play), and a chunk of the GPU frame will be consumed by BVH updates (contrary to popularly held beliefs - even in this thread - the majority of RT BVH updating work* is done by the GPU - not the CPU).

And that's 'just' the costs for raw RT processing - obviously GPU will also execute the rest of the work during a frame - so the % of frame-time you'll get for RT will be some fraction of total frame (even in Path tracing - you share time with shader-compute at least).
But what the number above does tell you is that any given intersection-test costs 'x' nano-seconds, so you know how fast they'll go when they do run.

*I don't know where we are with dedicated ASICs for this, afaik it's still done on GPU compute today in most cases, but I know some vendors have been looking at it (especially Intel) so maybe there is something out on the market already.

Anyway above word salad aside - where conversation on RT performance gets interesting is accelerating things outside of intersection testing. Node traversal, scheduling tests/compute, BVH updates etc. So saying RT is up to 4x faster by definition means more than simply doing 4x more intersection tests.


If you're referring to 'total number of intersection tests' per scene - then neither. It's O(logN) - that's the primary purpose for having the acceleration structure in the first place.
Damn... RT is complicated even to explain in layman's terms. Thanks though, you guys have made me get a better overall grasp of it.

Are you sure?
There was big talk that the CPU manages the BVH when Spiderman first released on PC, which led many to believe this is how the PS5 handles RT.



You can even see the CPU utilization go up with RT enabled.
[CPU utilization screenshots]


I can't remember if this was ever resolved.

Wouldn't the CPU be needed more for BVH stuff on the PS5, particularly because the GPU isn't accelerating the BVH? If the PS5 Pro has BVH acceleration on the GPU, that will free up a LOT of CPU load, as seen in the examples you gave.

We're going to have to see if that rumor is 🐖 :messenger_poop: first :messenger_grinning_sweat:.

But I have a feeling that ~120 via PSSR will be there in games that have 60 fps performance modes now. Without that easily marketable 120 fps differentiator I really don't see a way for them to make people buy the Pro. Just "better RT, more stable frame rate, higher resolution" isn't going to cut it. They've got to have SOMETHING meaningful, like 4K was on PS4 Pro.
While I generally agree with you, I also feel that a lot of people will buy it simply because it's the "best" PS5 to buy. Consumers can be funny like that.

Basically, if this thing goes on to sell like 2M a year, make no mistake, the people buying it because they understand the benefits of what its tech brings to the table will make up no more than 10% of those yearly sales. The rest are just going to be people buying it because it's the new and best PS5.
 

Loxus

Member
Wouldn't the CPU be needed more for BVH stuff on the PS5, particularly because the GPU isn't accelerating the BVH? If the PS5 Pro has BVH acceleration on the GPU, that will free up a LOT of CPU load, as seen in the examples you gave.
Well AMD says the BVH stuff is done on the CUs.

Traversal of the BVH and shading of ray results is handled by shader code running on the Compute Units


Maybe when Cerny said this in Road to PS5, he meant BVH isn't done on the GPU, but on the CPU.
"While the Intersection Engine is processing the requested ray triangle or ray box intersections the shaders are free to do other work."

On PS5, it may be better to do BVH on the CPU, since the GPU is already the weakest part even without Ray Tracing.


I'm just hoping AMD really implements the Traversal Engine on RDNA4.


This should free up all the BVH stuff from both the CPU and GPU.
 

SlimySnake

Flashless at the Golden Globes
No 60FPS in DD2 on PS5 Pro



At least not in the city.
I have a CPU 2x more powerful than the PS5 CPU and I can't get a locked 30 fps with proper frame pacing in the city. Framerates drop from 70 outside to 30 fps in the city, and it's extremely bad: the city can be 55 fps one second, then drop to 30, then go up to 44, then drop to 28 fps. It is awful and not indicative of any bottlenecks other than the bottlenecks in the brains of these incompetent developers.
 
Especially with their stupid chuckling while doing all that shit standard youtube shills do. Alex - the pretty boy deserves all the flak he gets for attitude only.
I find their inane chortling at each other's non-funny comments one of the most irritating things about them.

Why do they do it? They come across like giggling schoolboys sharing secrets at breaktime.
 

Radical_3d

Member

Timestamped. John brings a little sense to the table, arguing what we all know: CPUs this generation are fine. Of course, 4 years in there is better stuff, but consoles are not even high end for their launch period. He asked a few devs and… what do you know! They said the same thing. Someone on the DD2 performance thread quoted a guy who had access to the debug build of Jedi Survivor, and it was just a mess of bad practices. Again, nothing that we don't already know. But some DF staff and some people here like to pretend that we didn't have massive crowd or physics simulation a couple of generations ago.

And also states again what we all know: the 60fps modes have a GPU problem and the Pro is going to help with that.
 
Last edited:

ChiefDada

Gold Member
Timestamped. John brings a little sense to the table, arguing what we all know: CPUs this generation are fine.

Important to note that John is quoting a developer here. In other words, they could not come to this obvious conclusion on their own. DF were fearmongering just last week about how the PS5 Pro CPU would be problematic. It's pathetic that they had to go and ask developers for answers they should already know, particularly as so-called "tech experts".
 

HeisenbergFX4

Gold Member

Timestamped. John brings a little sense to the table, arguing what we all know: CPUs this generation are fine. Of course, 4 years in there is better stuff, but consoles are not even high end for their launch period. He asked a few devs and… what do you know! They said the same thing. Someone on the DD2 performance thread quoted a guy who had access to the debug build of Jedi Survivor, and it was just a mess of bad practices. Again, nothing that we don't already know. But some DF staff and some people here like to pretend that we didn't have massive crowd or physics simulation a couple of generations ago.

And also states again what we all know: the 60fps modes have a GPU problem and the Pro is going to help with that.

Hate to keep saying it, but I told people quite a while back not to read much into the paper specs; wait to see this thing run.
 

mansoor1980

Gold Member
Important to note that John is quoting a developer here. In other words, they could not come to this obvious conclusion on their own. DF were fearmongering just last week about how the PS5 Pro CPU would be problematic. It's pathetic that they had to go and ask developers for answers they should already know, particularly as so-called "tech experts".
Richard and Alex are still not satisfied with the developers' viewpoint, these guys are something else.
 

SkylineRKR

Member
Yeah no thanks. CPU is going to bottleneck this like Pro did last gen. I'll sit this one out. PS5 is fine for me.

PS4 Pro wasn't enough for me in the end. Except for the 4K TV support. PS5 has this from the get-go.
 

Radical_3d

Member
Except for the 4K TV support. PS5 has this from the get-go.
Really? Mine must be a faulty model. Let's check which games in my library run at 4K in their 60fps mode:
- Demons: nope.
- Dirt 5: niet.
- FFVII remake and rebirth: no.
- Helldivers: not even on the 30fps mode.
- Resident Evil 4 and village: nada.
- Armored Core: sad face.
- Octopath Traveler II: YES!

Ok so it’s true!! I just need to aim at games like OTII in my 10TF machine so I can have those sweet sweet 4K pixels.
 
Last edited:

SkylineRKR

Member
Really? Mine must be a faulty model. Let's check which games in my library run at 4K in their 60fps mode:
- Demons: nope.
- Dirt 5: niet.
- FFVII remake and rebirth: no.
- Helldivers: not even on the 30fps mode.
- Resident Evil 4 and village: nada.
- Armored Core: sad face.
- Octopath Traveler II: YES!

Ok so it’s true!! I just need to aim at games like OTII in my 10TF machine so I can have those sweet sweet 4K pixels.

What does it matter? Dirt 5 is not going to be a good game at native 4K/60, which the PS5 Pro likely won't hit either.

Helldivers will never be a looker. AC too. Rebirth won't be either, with its terrible texture work and UE fuckery. A few more pixels isn't going to change this.

What PS4 Pro did was offer support for the growing 4K TV market, going beyond 1080p limits. This upgrade made sense.
 

Radical_3d

Member
What does it matter? Dirt 5 is not going to be a good game at native 4K/60, which the PS5 Pro likely won't hit either.

Helldivers will never be a looker. AC too. Rebirth won't be either, with its terrible texture work and UE fuckery. A few more pixels isn't going to change this.

What PS4 Pro did was offer support for the growing 4K TV market, going beyond 1080p limits. This upgrade made sense.
So your argument is “I don’t like thing therefore 4K in thing is not important”. Ok.
 

Fafalada

Fafracer forever
Are you sure?
There was big talk that the CPU manages the BVH when Spiderman first released on PC, which led many to believe this is how the PS5 handles RT.
Updating a BVH properly involves bucketing data down to the triangle level - using a CPU for that would get prohibitively expensive - very fast.
To put it another way - one of the big reasons why, to date, physics interaction in game worlds is still limited to basically static worlds (and that's pretty much in everything, even in the supposedly 'interactive' games with lots of flying objects - you still just get a static world with some objects layered on top) - is that physics uses very similar kinds of acceleration structures under the hood, and updating them at runtime - on the CPU - just gets too expensive really fast.
CPU can and does interface with the high-level BVH updates (at least on some graphics APIs), analogous to what is done for regular render-scene updates (it's moving/updating top-node matrices around, nothing too excessive), but the heavy lifting of updating the underlying tree-structure is typically done by GPUs.

Spiderman example - the game streams large amounts of data constantly, so I'd expect the BVH is in constant flux, hence the big increase in PCI utilization. Doesn't mean the CPU is doing anything computationally relevant - likely just data being copied around. After all, Spiderman is another world that is still basically static, so data doesn't really change that much inside it.
And on console that copying-data-around problem doesn't exist since there's no memory segmentation, so it's likely far less CPU/memory intensive than on PC.

Well AMD says the BVH stuff is done on the CUs.
We're talking about two separate things. What AMD is describing is BVH traversal - which happens for every Ray that gets traced in the scene.
This is entirely GPU driven (either by CUs or other specialized hardware).

The other - which is what usually gets associated to CPU issues, is BVH updates (where geometry in the scene changes - so BVH structure must change to reflect it before you can render with it again). This is a costly process that 'may' involve some CPU, but as I note above - is typically still GPU driven (but indeed - console will be more flexible here due to shared memory).
To say this process is 'CPU limited' is akin to saying draw-calls are 'CPU limited'. It can 'technically' happen - but it's far more likely not, especially on consoles.
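
For the curious, here is a toy Python sketch of the cheaper of those two update paths, a refit: leaf boxes are recomputed from the moved triangles and parent boxes are merged bottom-up, with the tree topology left untouched. Illustrative only (it reuses the invented Node shape from the traversal sketch earlier in the thread); as described above, on real hardware this pass typically runs as GPU compute.

```python
def triangle_bounds(tri):
    """AABB of one triangle, given as three (x, y, z) vertices."""
    lo = tuple(min(v[i] for v in tri) for i in range(3))
    hi = tuple(max(v[i] for v in tri) for i in range(3))
    return lo, hi

def merge(a, b):
    """Union of two AABBs, each an (lo, hi) pair."""
    return (tuple(map(min, a[0], b[0])), tuple(map(max, a[1], b[1])))

def refit(node):
    """Bottom-up AABB refresh after geometry moved; topology is unchanged.
    A full rebuild (re-sorting triangles into new nodes) costs far more."""
    if node.triangles:                   # leaf: re-bound the (moved) triangles
        boxes = [triangle_bounds(t) for t in node.triangles]
    else:                                # inner node: bound the refitted children
        boxes = [refit(c) for c in node.children]
    box = boxes[0]
    for b in boxes[1:]:
        box = merge(box, b)
    node.lo, node.hi = box
    return box
```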
 
Last edited:

ChiefDada

Gold Member
They are still confused about PS5 vs Series X performance differences, so how can we expect them to competently analyze what PS5 Pro may or may not have to offer? People say I/we unfairly criticize them, but is it wrong to expect a higher level of technical discussion and understanding from a platform such as theirs?

 

Wooxsvan

Member
They are still confused about PS5 vs Series X performance differences, so how can we expect them to competently analyze what PS5 Pro may or may not have to offer? People say I/we unfairly criticize them, but is it wrong to expect a higher level of technical discussion and understanding from a platform such as theirs?


Ya, that part was off to me. It came across as if they were almost disappointed that Series X wasn't showing its power advantage over PS5 more often...
It does show they still don't understand the quote and explanation Cerny gave, "a rising tide lifts all boats", with regard to faster clocks.

I mean, look at these comparisons. It's never been just about TF:


GPU - Triangle Rasterization (Billion/s): 8.92 vs 7.3 - ~22% (PS5 Advantage)
GPU - Culling Rate (Billion/s): 17.84 vs 14.6 - ~22% (PS5 Advantage)
GPU - Pixel Fill Rate (Gpixels/s): 142.72 vs 116.8 - ~22% (PS5 Advantage)
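
Those figures fall straight out of clock speed times fixed-function unit counts. A quick sketch that reproduces them, assuming both GPUs rasterize 4 triangles/clock, cull 8 primitives/clock, and have 64 ROPs (unit counts inferred from the numbers, not confirmed specs):

```python
# ops/clock × clock in GHz = billions of ops per second
for name, clock_ghz in [("PS5", 2.23), ("XSX", 1.825)]:
    raster = 4 * clock_ghz   # 4 triangles rasterized per clock
    cull   = 8 * clock_ghz   # 8 primitives culled per clock
    fill   = 64 * clock_ghz  # 64 ROPs, one pixel each per clock
    print(f"{name}: {raster:.2f} Gtri/s, {cull:.2f} Gprim/s, {fill:.2f} Gpix/s")
# PS5: 8.92 Gtri/s, 17.84 Gprim/s, 142.72 Gpix/s
# XSX: 7.30 Gtri/s, 14.60 Gprim/s, 116.80 Gpix/s
```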
 
Last edited:

onQ123

Member
People worrying about the CPU in a Pro console is a little strange, because you will be playing the same games, so a more powerful CPU will mostly go to waste.
They are still confused about PS5 vs Series X performance differences, so how can we expect them to competently analyze what PS5 Pro may or may not have to offer? People say I/we unfairly criticize them, but is it wrong to expect a higher level of technical discussion and understanding from a platform such as theirs?




They will probably be befuddled at how close PS5 Pro GPU will actually be to a CPU & be lost when things are being offloaded to compute & ML lol
 

SonGoku

Member
To put it another way - one of the big reasons why, to date, physics interaction in game worlds is still limited to basically static worlds (and that's pretty much in everything, even in the supposedly 'interactive' games with lots of flying objects - you still just get a static world with some objects layered on top) - is that physics uses very similar kinds of acceleration structures under the hood, and updating them at runtime - on the CPU - just gets too expensive really fast.
Interesting. So why don't they run physics interactions on GPUs to get more complex interactive worlds? Or, if GPUs aren't capable, why not make dedicated hardware that can handle it? Physics interactions in games would be even more game-changing than ray tracing, and easier to sell.
 

Loxus

Member
Updating a BVH properly involves bucketing data down to the triangle level - using a CPU for that would get prohibitively expensive - very fast.
To put it another way - one of the big reasons why, to date, physics interaction in game worlds is still limited to basically static worlds (and that's pretty much in everything, even in the supposedly 'interactive' games with lots of flying objects - you still just get a static world with some objects layered on top) - is that physics uses very similar kinds of acceleration structures under the hood, and updating them at runtime - on the CPU - just gets too expensive really fast.
CPU can and does interface with the high-level BVH updates (at least on some graphics APIs), analogous to what is done for regular render-scene updates (it's moving/updating top-node matrices around, nothing too excessive), but the heavy lifting of updating the underlying tree-structure is typically done by GPUs.

Spiderman example - the game streams large amounts of data constantly, so I'd expect the BVH is in constant flux, hence the big increase in PCI utilization. Doesn't mean the CPU is doing anything computationally relevant - likely just data being copied around. After all, Spiderman is another world that is still basically static, so data doesn't really change that much inside it.
And on console that copying-data-around problem doesn't exist since there's no memory segmentation, so it's likely far less CPU/memory intensive than on PC.


We're talking about two separate things. What AMD is describing is BVH traversal - which happens for every Ray that gets traced in the scene.
This is entirely GPU driven (either by CUs or other specialized hardware).

The other - which is what usually gets associated to CPU issues, is BVH updates (where geometry in the scene changes - so BVH structure must change to reflect it before you can render with it again). This is a costly process that 'may' involve some CPU, but as I note above - is typically still GPU driven (but indeed - console will be more flexible here due to shared memory).
To say this process is 'CPU limited' is akin to saying draw-calls are 'CPU limited'. It can 'technically' happen - but it's far more likely not, especially on consoles.
My bad, I probably misunderstood this.
I thought Sony was using a similar approach on PS5.

Ray Tracing In Vulkan
Deferred Host Operations to build a complex Acceleration Structure using CPU cores to offload the work from GPU for faster, smoother framerates.

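
The pattern that Vulkan blurb describes, a host-side acceleration-structure build that several CPU threads can join while the GPU does other work, looks roughly like this conceptual Python sketch. To be clear, this is not the Vulkan API, just the work-splitting idea; build_chunk is a made-up placeholder for real node-building work.

```python
from concurrent.futures import ThreadPoolExecutor

def build_chunk(prims, start, end):
    """Placeholder for one slice of host-side BVH build work."""
    return sorted(prims[start:end])  # real code would partition and emit nodes

def deferred_host_build(prims, workers=4):
    """Split an acceleration-structure build across CPU cores, mirroring how
    a deferred host operation lets multiple threads join the same build."""
    n = len(prims)
    bounds = [(i * n // workers, (i + 1) * n // workers) for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(build_chunk, prims, s, e) for s, e in bounds]
        parts = [f.result() for f in futures]
    return parts  # real code would merge the chunks into one tree

# Example: "build" over a list of 16 primitive ids on 4 threads
print(deferred_host_build(list(range(16))))
```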
 
I really don't get the concern trolling over the CPU upgrade being too small... CPU bound games are very few and far between.

Why do you say that? While it's true that very few games are so CPU bound that having a 60 fps mode is off the table, a big part of why people wanted the Pro was to finally not have to settle for playing Quality modes at 30 fps. That is where games will still be CPU limited on a CPU-bottlenecked Pro.

It's why I've been playing so many games on PS5/SX at 30 fps despite them having a performance mode! Forbidden West, Ratchet, Spiderman 2, Alan Wake 2, FF16, FF7 Rebirth, Hogwarts, Jedi Survivor, Plague Tale Requiem, etc.

Basically, in all the best looking games you have to choose the lesser of two evils, and if you want next gen visuals and RT you still have to choose 30 fps (or 40 if you're lucky). So now a PS5 Pro comes along with only a 10% boost and nothing's going to change! Upcoming games will have greater requirements and I'll still have to settle for 30.
 

shamoomoo

Member
People worrying about the CPU in a Pro console is a little strange, because you will be playing the same games, so a more powerful CPU will mostly go to waste.



They will probably be befuddled at how close PS5 Pro GPU will actually be to a CPU & be lost when things are being offloaded to compute & ML lol
Yeah. As much of a spokesperson as Alex is for Nvidia, it's weird that the tensor cores' main use is for image improvement and not other aspects of games.
 