
PS5 Pro devkits arrive at third-party studios, Sony expects Pro specs to leak

darrylgorn

Member
Animated GIF
 

paolo11

Member
You won't see any huge jump gen by gen moving forward. Moore's law is dead, look at the GPU market and the performance gains gen by gen. And that's ignoring the fact that newer chips are becoming more and more expensive to make.

So, yes, you'll play the same game with slightly better graphics/higher resolution and you'll be happy (according to the WEF)
So from PS5 to PS6 the jump won't be huge anymore? What can we expect from next gen, then? I mean, that's fine and all, because this gen and the last are beautiful anyway.
 

dano1

A Sheep
The Pro wouldn’t change that. People don’t care about higher frame rates. They want games and nothing will make them forget this.

Xbox tried for years but it never worked.

Who are these people you speak of? Probably the wrong site to say that...
 

Zathalus

Member
You're wrong then; see David Shapiro, who already predicted everything that's happened with AI today back in 2020-2021.
LLM generative AI is simply not the way forward to AGI. I'd love to be proven wrong but I doubt generative AI even has a strong business case with how high costs are right now.
 

PaintTinJr

Member

....at least they are trying to understand the patent is the conclusion I should draw - so well done to DF for making a video I could watch without looking at the patent itself - rather than pointing out the things they should, and would, have gleaned more from if they were actual developers and "technical".

The first is the material ID they mention. If you go back to DF's coverage of the old Wipeout on the Vita, IIRC the shadow maps at the time used a polygon-group ID in the stencil buffer (an 8-bit channel) to allow a discrete comparison in the lighting pass: if a fragment's ID was present in the shadow map, it was fully lit - resulting in robust shadow edges, assuming well-chosen polygon groups - but that would fully light a partially visible group of polygons, and at the cost of limiting the number of polygon groups to 2^8 (256). The advantage was much smaller ID shadow maps that did integer testing, rather than floating-point comparisons against magnitudes more data that still suffer erratic false-positive and false-negative shadow tests. I noted it at the time because I was coming off the back of a bottomless pit of research into shadow techniques for my own hobby stuff.
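To make that concrete, here's a toy sketch of the ID-based shadow test as I read it (the array names, the "0 = empty texel" convention and the scene are just made up for illustration, not taken from the actual Wipeout renderer):

```python
import numpy as np

# Hypothetical ID shadow map: each texel holds the 8-bit polygon-group ID of the
# closest occluder as seen from the light (0 = nothing rendered into that texel).
id_shadow_map = np.zeros((256, 256), dtype=np.uint8)
id_shadow_map[64:192, 64:192] = 17   # pretend polygon group 17 covers this region

def lit_by_id(shadow_texel_id: int, fragment_group_id: int) -> bool:
    """Integer test: the fragment is lit if the light sees its own group (or nothing).
    Cheap and binary, but it fully lights a partially visible group."""
    return shadow_texel_id == 0 or shadow_texel_id == fragment_group_id

def lit_by_depth(shadow_depth: float, fragment_depth: float, bias: float = 1e-3) -> bool:
    """Classic float comparison against a depth shadow map - needs far more data per
    texel and a carefully tuned bias to avoid false positives/negatives."""
    return fragment_depth <= shadow_depth + bias

# A fragment of group 17 projected to texel (100, 100) is lit...
print(lit_by_id(int(id_shadow_map[100, 100]), 17))   # True
# ...while a fragment from another group behind it at the same texel is shadowed.
print(lit_by_id(int(id_shadow_map[100, 100]), 3))    # False
# The depth version needs per-texel float depth plus bias tuning to say the same thing.
print(lit_by_depth(0.5, 0.7))                        # False
```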

Anyway, in regard to the video: a rendered image that has holes, but still keeps (renders) the ID of what would need to be rendered in each hole, allows for a quick, effective way to look for discontinuities. If a hole pixel belongs to the same geometry as its neighbours, you can cheaply apply an anisotropic or Gaussian filter to impute a result consistent with those neighbours, using the material ID and texture coordinates of the hole. The 3x3 grid (filter) in the video implies Gaussian.
IMO this all suggests being able to handle lots of hole pixels in low-frequency lighting almost as a 2D-accelerated filtered fill command, potentially saving tens of shader ops per pixel.
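Something like this is what I have in mind for the cheap path - a 3x3 Gaussian-weighted blend gated on matching material IDs, with non-matching holes left for the expensive inference path (the names, the single-channel colour and the fallback rule are all just for illustration; the patent may do it very differently):

```python
import numpy as np

# 3x3 Gaussian kernel, like the filter implied by the video.
KERNEL = np.array([[1, 2, 1],
                   [2, 4, 2],
                   [1, 2, 1]], dtype=np.float32) / 16.0

def fill_holes(colour, material_id, hole_mask):
    """Impute hole pixels from same-material neighbours; leave a hole untouched
    (for the expensive inference path) if no neighbour shares its material ID."""
    out = colour.copy()
    h, w = hole_mask.shape
    for y, x in zip(*np.nonzero(hole_mask)):
        acc = weight = 0.0
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if (0 <= ny < h and 0 <= nx < w and not hole_mask[ny, nx]
                        and material_id[ny, nx] == material_id[y, x]):
                    k = KERNEL[dy + 1, dx + 1]
                    acc += k * colour[ny, nx]
                    weight += k
        if weight > 0:
            out[y, x] = acc / weight   # cheap 2D filtered fill
    return out

# Tiny demo: one hole pixel surrounded by same-material neighbours gets imputed.
colour = np.full((5, 5), 0.8, dtype=np.float32)
material_id = np.ones((5, 5), dtype=np.uint8)
hole = np.zeros((5, 5), dtype=bool)
hole[2, 2], colour[2, 2] = True, 0.0
print(fill_holes(colour, material_id, hole)[2, 2])   # ~0.8, imputed from neighbours
```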

The variable hole-size reference in the patent sounds like a rework of their camera technology that uses variable-frequency sampling - in sports like tennis - to sample only the changes, rather than sampling at a constant frequency that then lacks samples where the signal is high frequency, giving poorer reconstructions. IMO this is what the patent is trying to bring to AI rendering, so that the expensive inference is reserved for fixing holes in the high-frequency and variable-frequency lit areas of the scenes being upscaled.
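A tiny sketch of the difference between constant-frequency and change-driven sampling, which is how I read the camera analogy (the signal and the threshold are invented for the example):

```python
import math

# Fixed-rate sampling spends samples evenly; change-driven sampling only emits a
# sample when the signal has moved by more than a threshold, so samples cluster
# where the signal is changing quickly.
def fixed_rate(signal, period, duration=100):
    return [(t, signal(t)) for t in range(0, duration, period)]

def change_driven(signal, threshold, duration=100):
    samples = [(0, signal(0))]
    for t in range(1, duration):
        if abs(signal(t) - samples[-1][1]) >= threshold:
            samples.append((t, signal(t)))
    return samples

signal = lambda t: math.sin(t * t / 200.0)   # slow at first, "high frequency" later

print(len(fixed_rate(signal, 5)), "fixed-rate samples, spread evenly")
print(len(change_driven(signal, 0.2)), "change-driven samples, mostly where it moves fast")
```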

I'm not sure the patent is getting used either, but the information DF discussed all sounds like a really great solution that is super efficient at eliminating redundant processing and gives high-quality results, and it would possibly leverage Sony's world-class signal-processing knowledge/tech.
 
LLM generative AI is simply not the way forward to AGI. I'd love to be proven wrong but I doubt generative AI even has a strong business case with how high costs are right now.
I specifically said a form of generative AI will be here before then, not LLMs specifically.
 

PaintTinJr

Member
Looking at the upscaling patent and the hardware patent with 4 GPUs, I can't help but feel that core gamers' desire for 60fps and higher has put developers to the sword since the days of PS3/360 and their mostly 30fps deferred-rendered games, and that these patents are all part of trying to provide a route by which complex deferred rendering schemes can be re-introduced alongside complex RT and AI inference "hole filling".

Paraphrasing, the Epic engineer in the UE5 tech talk pointed to the need for Nanite and Lumen to each be a big, complex, single-call compute shader that brute-force processes the data in a homogeneous way, as a two-pass deferred system. IMO this highlighted that the ability to run and gather many deferred render targets efficiently - as the classic PS3 first-party games did at 30fps - and still render at 60fps this gen was inhibited by the inherent latency cost of switching workloads on a single GPU. With the 4-GPU patent - which I suspect uses a 4-lane, 4-token ringbus, with each token set one GPU-to-GPU journey behind the previous token - I wonder if that would be a good way to avoid the deferred rendering latency by not having to switch render tasks on the same GPU that provides pre-render data for the others.
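To illustrate the staggering I mean, here's a toy scheduling sketch of frames flowing around a 4-GPU ring with one fixed deferred stage per GPU - the stage names and the one-hop-per-tick timing are my own assumptions, not anything from the patent:

```python
from collections import deque

# Pure scheduling toy: each GPU keeps one deferred-rendering stage and frames are
# handed around the ring each tick, so no GPU ever switches workloads mid-frame.
STAGES = ["geometry/G-buffer", "lighting", "RT/post", "upscale + hole fill"]
NUM_GPUS = len(STAGES)

def simulate(num_frames: int) -> None:
    ring = [None] * NUM_GPUS            # ring[i] = frame currently held by GPU i
    todo = deque(range(num_frames))
    tick = 0
    while todo or any(f is not None for f in ring):
        if ring[0] is None and todo:    # feed the next frame into the first GPU
            ring[0] = todo.popleft()
        status = ", ".join(
            f"GPU{i}: " + ("idle" if f is None else f"frame {f} {STAGES[i]}")
            for i, f in enumerate(ring))
        print(f"tick {tick:2d} | {status}")
        done = ring[-1]                 # the last GPU finishes its frame this tick
        for i in range(NUM_GPUS - 1, 0, -1):
            ring[i] = ring[i - 1]       # everyone hands its frame to the next GPU
        ring[0] = None
        if done is not None:
            print(f"         -> frame {done} presented")
        tick += 1

simulate(6)   # after a 4-tick fill, one frame completes every tick
```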

That's where the hole-filling patent came into my thinking: much of that task can probably save work, but it also reads as quite latency-heavy on a single GPU, which wouldn't fit with 90fps+ VR.
 

Loxus

Member
Looking at the upscaling patent and the hardware patent with 4 GPUs, I can't help but feel that core gamers' desire for 60fps and higher has put developers to the sword since the days of PS3/360 and their mostly 30fps deferred-rendered games, and that these patents are all part of trying to provide a route by which complex deferred rendering schemes can be re-introduced alongside complex RT and AI inference "hole filling".

Paraphrasing, the Epic engineer in the UE5 tech talk pointed to the need for Nanite and Lumen to each be a big, complex, single-call compute shader that brute-force processes the data in a homogeneous way, as a two-pass deferred system. IMO this highlighted that the ability to run and gather many deferred render targets efficiently - as the classic PS3 first-party games did at 30fps - and still render at 60fps this gen was inhibited by the inherent latency cost of switching workloads on a single GPU. With the 4-GPU patent - which I suspect uses a 4-lane, 4-token ringbus, with each token set one GPU-to-GPU journey behind the previous token - I wonder if that would be a good way to avoid the deferred rendering latency by not having to switch render tasks on the same GPU that provides pre-render data for the others.

That's where the hole-filling patent came into my thinking: much of that task can probably save work, but it also reads as quite latency-heavy on a single GPU, which wouldn't fit with 90fps+ VR.
In terms of multi-GPU:
This AMD patent describes having 3 base dies, with each base die housing a command processor.
DIE STACKING FOR MODULAR PARALLEL PROCESSORS

Having 3 command processors makes me think this is a chiplet and multi-gpu patent.

Looking at the MI300, we can get an insight into how AMD may partition a multi-GPU chip.



I would love to see Sony do this with the PS5 Pro, with 2 base dies (AIDs), in preparation for 4 base dies (AIDs) on the PS6.


Someone did a render of what RDNA4/5 could look like with chiplets.
 
In terms of multi-GPU:
This AMD patent describes having 3 base dies, with each base die housing a command processor.
DIE STACKING FOR MODULAR PARALLEL PROCESSORS

Having 3 command processors makes me think this is a chiplet and multi-gpu patent.

Looking at the MI300, we can get an insight into how AMD may partition a multi-GPU chip.



I would love to see Sony do this with the PS5 Pro, with 2 base dies (AIDs), in preparation for 4 base dies (AIDs) on the PS6.


Someone did a render of what RDNA4/5 could look like with chiplets.
Also saves die space for the CPU.
 

PaintTinJr

Member
In terms of multi-GPU:
This AMD patent describes having 3 base dies, with each base die housing a command processor.
DIE STACKING FOR MODULAR PARALLEL PROCESSORS

Having 3 command processors makes me think this is a chiplet and multi-gpu patent.

Looking at the MI300, we can get an insight into how AMD may partition a multi-GPU chip.



I would love to see Sony do this with the PS5 Pro, with 2 base dies (AIDs), in preparation for 4 base dies (AIDs) on the PS6.


Someone did a render of what RDNA4/5 could look like with chiplets.
My fear would be that Infinity Fabric has more in common with the PCIe and CrossFire interfaces than with a multi-lane ringbus solution, and would therefore fail to handle the latency problems of game rendering at high frame rates. The partitioning in the AMD chip probably impacts processing latency more when configured with fewer partitions, and I can't see a scenario where it would suit PlayStation to configure with fewer than 4 partitions while still needing unified access to all memory.

On the AMD MI300 chips, can they partition the GPU processing and memory differently? Or does 4 partitions constrain the config so that each partition gets 1/4 of the memory, too?
 

FireFly

Member
Tesla cars had insufficient computation onboard. In any case they still drive quite decently. But for self driving to be legal they need 99.9999% superhuman driving skills, and they'll achieve that soon.

What you have to familiarize yourself with is the knee of the curve.
What's in question is precisely where we are on the knee of the curve. You don't get to assume that.
 

Loxus

Member
My fear would be that Infinity Fabric has more in common with the PCIe and CrossFire interfaces than with a multi-lane ringbus solution, and would therefore fail to handle the latency problems of game rendering at high frame rates. The partitioning in the AMD chip probably impacts processing latency more when configured with fewer partitions, and I can't see a scenario where it would suit PlayStation to configure with fewer than 4 partitions while still needing unified access to all memory.

On the AMD MI300 chips, can they partition the GPU processing and memory differently? Or does 4 partitions constrain the config so that each partition gets 1/4 of the memory, too?
The image from the Mark Cerny patent is just showing that each GPU and the CPU can communicate with each other, not that it will have lanes laid out like that.

From my understanding, it'll look more like the Infinity Fabric AP Interconnect (shown in purple in AMD's MI300 diagram).


That interconnect is high-bandwidth and low-latency, so I wouldn't worry about latency being an issue.



The Infinity Cache is shared by the whole chip.


And used for sharing data between the chiplets.
 
What's in question is precisely where we are on the knee of the curve. You don't get to assume that.
From decades ago, AGI was expected before 2030 by several of the leading minds. Now many researchers are saying the same thing: gold medals in mathematics, Turing test passing, AGI within years.

Every couple of months, sometimes just weeks, a new breakthrough in AI is announced: Sora, Claude 3, Devin, etc. You'd better start believing we're at the knee of the curve...
 

PaintTinJr

Member
The image from the Mark Cerny patent is just showing that each GPU and the CPU can communicate with each other, not that it will have lanes laid out like that.

From my understanding, it'll look more like the Infinity Fabric AP Interconnect (shown in purple in AMD's MI300 diagram).


That interconnect is high-bandwidth and low-latency, so I wouldn't worry about latency being an issue.



The Infinity Cache is shared by the whole chip.


And used for sharing data between the chiplets.
But looking at the way the numbers are used, Infinity Fabric just looks like a different protocol running over PCIe 5 lanes, which is likely to share its latency characteristics - and PCIe isn't exactly low latency IMO, from when I looked into the actual protocol after the consoles launched and the Velocity Architecture and Nvidia's DirectStorage acceleration were being compared to the IO complex. Given that SLI and CrossFire had special cables to bypass the PCIe southbridge on PCs and still produced stutter in games that used them, I still think PlayStation would need something slightly less off the peg.

The real question is whether the latency will be less than 1/4 of the latency of the PS5's northbridge. I doubt it will be that low using Infinity Fabric, because just looking at the MI300's external memory bandwidth specs, IIRC they are only double the GDDR bandwidth of the PS5, meaning the balance of the system would be trending in a negative direction if it had 4 GPUs but only double the northbridge bandwidth to feed and coordinate them.
 

FireFly

Member
From decades ago, AGI was expected before 2030 by several of the leading minds. Now many researchers are saying the same thing: gold medals in mathematics, Turing test passing, AGI within years.

Every couple of months, sometimes just weeks, a new breakthrough in AI is announced: Sora, Claude 3, Devin, etc. You'd better start believing we're at the knee of the curve...

There is no consensus amongst AI experts as to when AGI will arrive.


Part of the reason for this is that we don't actually know what is required to create an AGI. So the latest advancements could be taking us almost there, or we could be a long way off still. This comes back to my point about not being able to assume we are at a particular point in the curve.
 
There is no consensus amongst AI experts as to when AGI will arrive.


Part of the reason for this is that we don't actually know what is required to create an AGI. So the latest advancements could be taking us almost there, or we could be a long way off still. This comes back to my point about not being able to assume we are at a particular point in the curve.
In the past, if you described what some of the latest LLMs can do, they'd have said you have AGI.

Vernor Vinge, Ray Kurzweil, and Elon Musk have, IIRC, suggested human-level intelligence before 2030. Hans Moravec's computational estimates for human-level performance in robots have already been attained.

The number of connections in models will soon match or exceed that of human brains.

It is true that not all AI experts agree. But IIRC two of the top minds are already suggesting it is near. IIRC a recent book had an old quote of Sam Altman saying 5 to 10 years for AGI, and Dario has also suggested it may be near.

 

Loxus

Member
But looking at the way the numbers are used, Infinity Fabric just looks like a different protocol running over PCIe 5 lanes, which is likely to share its latency characteristics - and PCIe isn't exactly low latency IMO, from when I looked into the actual protocol after the consoles launched and the Velocity Architecture and Nvidia's DirectStorage acceleration were being compared to the IO complex. Given that SLI and CrossFire had special cables to bypass the PCIe southbridge on PCs and still produced stutter in games that used them, I still think PlayStation would need something slightly less off the peg.

The real question is whether the latency will be less than 1/4 of the latency of the PS5's northbridge. I doubt it will be that low using Infinity Fabric, because just looking at the MI300's external memory bandwidth specs, IIRC they are only double the GDDR bandwidth of the PS5, meaning the balance of the system would be trending in a negative direction if it had 4 GPUs but only double the northbridge bandwidth to feed and coordinate them.
I've looked everywhere and couldn't find anything that suggests latency will be a problem.

This is all I found about latency.
AMD unveils Instinct MI300X GPU and MI300A APU, claims up to 1.6X lead over Nvidia’s competing GPUs
AMD has also added 256MB of total Infinity Cache capacity, spread across all four of the I/O Dies, to cache data traffic via a prefetcher, thus increasing hit rates and power efficiency while reducing bus contention and latency. This adds a new level of caching for the CPUs (conceptually a shared L4) while providing a shared L3 cache for the GPUs. An Infinity Fabric NoC (network on chip), dubbed the AMD Infinity Fabric AP (Advanced Package) Interconnect, connects the HBM, I/O subsystems, and compute.


The only thing I found that you may be thinking of is bandwidth, which is a non-issue.
AMD’s CDNA 3 Compute Architecture
This is similar to desktop Ryzen 7000 parts where one CCD can’t take full advantage of DDR5 bandwidth due to Infinity Fabric limits. However this is likely to be a non-issue on MI300X because the bandwidth demands will be highest with all dies in play. In that case, each die will consume about 1.3 TB/s of bandwidth and getting 3/4 of that over cross-die links won’t be a problem.

1.3 TB/s is much more bandwidth than PCIe 5 with 32 lanes @ 128 GB/s.
PCI Express 5 (PCIe 5.0): Here's everything you need to know about the new standard
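For scale, the back-of-the-envelope arithmetic behind that comparison (nominal PCIe 5.0 figures, one direction, encoding/protocol overhead ignored; the 1.3 TB/s per-die number is the one quoted from the CDNA 3 article above):

```python
# Rough bandwidth arithmetic: PCIe 5.0 runs at 32 GT/s per lane, i.e. roughly 4 GB/s
# per lane per direction before overhead, versus the ~1.3 TB/s cross-die figure.
PCIE5_GT_PER_LANE = 32           # giga-transfers per second per lane
BYTES_PER_TRANSFER = 1 / 8       # 1 bit per transfer -> bytes

def pcie5_gbs(lanes: int) -> float:
    return PCIE5_GT_PER_LANE * BYTES_PER_TRANSFER * lanes

print(f"PCIe 5.0 x16: ~{pcie5_gbs(16):.0f} GB/s")    # ~64 GB/s
print(f"PCIe 5.0 x32: ~{pcie5_gbs(32):.0f} GB/s")    # ~128 GB/s
print(f"Quoted cross-die figure: ~{1300 / pcie5_gbs(32):.0f}x a PCIe 5.0 x32 link")
```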



I would say it's more in line with the Infinity Links used in RDNA3 than with PCIe 5, and those have better latency than RDNA2's.
 

Poordevil

Member
Seems like everybody on the World Wide Web is talking and speculating about this PS5 Pro... except Sony. I get news feeds on my phone, gaming websites, YouTube content creators - you name it, if they're affiliated with gaming they have chimed in with their take on this PS5 Pro. Yet Sony is silent. Not a peep out of them. Very weird, especially with it going on as long as it has.

I'm sitting tight with my PS4 Pro. I have a lot of good PS4 games I have started but not completed, so I'm not all that anxious about needing a PS5 Pro. The PS4 Pro runs PS4 games quite well too. But I would upgrade in a heartbeat if the PS5 Pro arrived.
 

Perrott

Member
Seems like everybody on the World Wide Web is talking and speculating about this PS5 Pro... except Sony. I get news feeds on my phone, gaming websites, YouTube content creators - you name it, if they're affiliated with gaming they have chimed in with their take on this PS5 Pro. Yet Sony is silent. Not a peep out of them. Very weird, especially with it going on as long as it has.

I'm sitting tight with my PS4 Pro. I have a lot of good PS4 games I have started but not completed, so I'm not all that anxious about needing a PS5 Pro. The PS4 Pro runs PS4 games quite well too. But I would upgrade in a heartbeat if the PS5 Pro arrived.
I get the feeling that the reason we're all so anxious to learn some additional information is that we heard about this thing way ahead of time compared to the PS4 Pro last generation.

Worst case scenario, we'll just learn more about it (and officially) at the PlayStation Showcase in May, which is 10 weeks away.
 

Pelta88

Member
Is there any info from a credible source that this thing actually exists?

Note: Credible source does not include social media types pushing a podcast and telling you to subscribe for updates.

Preferably a dev studio working with said hardware or an "insider" with a proven track record. I'll settle for Shinobi or Imran Khan as very few others qualify.
 

jm89

Member
Is there any info from a credible source that this thing actually exists?

Note: Credible source does not include social media types pushing a podcast and telling you to subscribe for updates.

Preferably a dev studio working with said hardware or an "insider" with a proven track record. I'll settle for Shinobi or Imran Khan as very few others qualify.
Tom Henderson is probably the most credible PlayStation leaker right now; he's confirmed it exists and devkits are going out.
 

ChiefDada

Gold Member
Note: Credible source does not include social media types pushing a podcast and telling you to subscribe for updates.

It doesn't get much better than Tom Henderson. He's been 100% accurate at least with PS news iirc.

Preferably a dev studio working with said hardware

Not a chance at this early stage.

or an "insider" with a proven track record. I'll settle for Shinobi or Imran Khan as very few others qualify.

See above on Tom Henderson.
 

Pelta88

Member
Yes. I've mentioned in this thread earlier what the specs are from said leakers.

It doesn't get much better than Tom Henderson. He's been 100% accurate at least with PS news iirc.

I get the impression that this will end up as a "plans changed" and/or "despite SIE pouring millions into it, they decided not to release it on a whim" story, with him embracing the blowback because his sub count exploded as a result of pushing this narrative.

With the PS4 Pro we had dev confirmation and SIE documentation explaining the roadmap. With this... it's the hardware version of vapour. I'll get one day 1, if true. It's the lack of evidence beyond those pushing podcasts and asking for subs that has me sceptical.
 
I get the impression that this will end up as a "plans changed" and/or "despite SIE pouring millions into it, they decided not to release it on a whim" story, with him embracing the blowback because his sub count exploded as a result of pushing this narrative.

With the PS4 Pro we had dev confirmation and SIE documentation explaining the roadmap. With this... it's the hardware version of vapour. I'll get one day 1, if true. It's the lack of evidence beyond those pushing podcasts and asking for subs that has me sceptical.

Just be patient. There were people on GAF claiming that the Slim didn't exist... right up until it was announced.
 

jm89

Member
I get the impression that this will end up as a "plans changed" and/or "despite SIE pouring millions into it, they decided not to release it on a whim" story, with him embracing the blowback because his sub count exploded as a result of pushing this narrative.

With the PS4 Pro we had dev confirmation and SIE documentation explaining the roadmap. With this... it's the hardware version of vapour. I'll get one day 1, if true. It's the lack of evidence beyond those pushing podcasts and asking for subs that has me sceptical.
If devkits have gone out, I'd be surprised if they pulled out at this point.

We've been hearing that devkits went out to first parties last year and are going out to more third parties at the end of this year. They are definitely committed.
 

Pelta88

Member
Just be patient. There were people on GAF claiming that the Slim didn't exist... right up until it was announced.

A Slim is baked into the hardware roadmap; a Pro is/was an anomaly. It's possible that the Pro does exist - I'm just questioning the verification.
 

FireFly

Member
In the past, if you described what some of the latest LLMs can do, they'd have said you have AGI.

Vernor Vinge, Ray Kurzweil, and Elon Musk have, IIRC, suggested human-level intelligence before 2030. Hans Moravec's computational estimates for human-level performance in robots have already been attained.

The number of connections in models will soon match or exceed that of human brains.

It is true that not all AI experts agree. But IIRC two of the top minds are already suggesting it is near. IIRC a recent book had an old quote of Sam Altman saying 5 to 10 years for AGI, and Dario has also suggested it may be near.


Dario has suggested that we may be 2-3 years off AI models being able to do a wide range of professional jobs.



So that may include substantial automation of video game development. But the bit that is captured by AGI is the ability to create conceptual frameworks for understanding the world and progressively evolve those frameworks to "discover" new things. That's not something LLMs are well equipped to do, since they don't seem to have the capacity to develop generalised reasoning skills:


LLMs struggle at basic logic puzzles that children are able to solve. Even Sam Altman believes the solution to this is not simply more data.

"I think we need another breakthrough. We can push on large language models quite a lot, and we should, and we will do that. We can take our current hill that we're on and keep climbing it, and the peak of that is still pretty far away. But within reason, if you push that super far, maybe all this other stuff emerged. But within reason, I don't think that will do something that I view as critical to a general intelligence," Altman said.


So then it becomes a question of what that breakthrough involves and therefore when it can conceivably happen. But we don't have the answer to this! So all these "expert" predictions about AGI are really just guesses. That's why there is no consensus.

But I take the point that there are lots of things that we may have thought we need AGI for that we actually don't. A highly trained model may be able to create game content without really "knowing" what it is doing, in the sense of having an evolving sense of principles for game design. It's an open question as to whether games produced by such a model will be "good".
 