
Sony's custom RDNA2 has its own VRS and mesh shading - Cerny and Naughty Dog

Dolomite

Member
What games do you think will employ deep learning algorithms in a 16 ms frame time?

Humour.
Humour yourself, chief, I don't work here, LMAO. I'm here to sit back and enjoy the ocean-breeze salt that's been spraying the air in this thread. Also, pretending that deep learning can't be useful outside of mocap, or that ML can't aid in things aside from TexUprez, is just corny. Downplaying a feature because it's better implemented in the architecture of one plastic box over the other is childish... but then again, children make the best tears, so I'll allow it 😂😂
 

geordiemp

Member
Dolomite said:
Humour yourself, chief, I don't work here, LMAO. I'm here to sit back and enjoy the ocean-breeze salt that's been spraying the air in this thread. Also, pretending that deep learning can't be useful outside of mocap, or that ML can't aid in things aside from TexUprez, is just corny. Downplaying a feature because it's better implemented in the architecture of one plastic box over the other is childish... but then again, children make the best tears, so I'll allow it 😂😂

I am sure DL on PCs will have many applications, but for gaming in a 16 ms frame time I need to be convinced why Sony would need to put additional ML cores over and above what they have in the PS5. That's all.
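
To put rough numbers on that 16 ms point, here's a quick back-of-the-envelope sketch in Python. The model cost and sustained-TOPS figures are made-up illustrative assumptions, not specs from either console:

Code:
# Rough frame-budget math for ML inside a 16 ms (60 fps) frame.
# The workload numbers are illustrative assumptions, not measurements
# from any real console title.

FRAME_BUDGET_MS = 1000.0 / 60.0  # ~16.7 ms per frame at 60 fps

def inference_time_ms(model_gops, effective_tops):
    # Time for a network costing `model_gops` giga-ops on hardware
    # sustaining `effective_tops` tera-ops/s. Since
    # GOPS / (TOPS * 1000) seconds = GOPS / TOPS milliseconds,
    # the expression simplifies to a single division.
    return model_gops / effective_tops

# Hypothetical per-frame upscaling network: ~10 giga-ops, run on shader
# cores sustaining ~20 INT8 TOPS (assumed, not a published figure).
cost_ms = inference_time_ms(10.0, 20.0)
print(f"frame budget: {FRAME_BUDGET_MS:.2f} ms")
print(f"ML pass: {cost_ms:.2f} ms ({100 * cost_ms / FRAME_BUDGET_MS:.1f}% of frame)")

Even a fraction of a millisecond per frame matters when rasterisation, RT and game logic are all fighting for the same 16 ms, which is the trade-off being argued here.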
 
Last edited:

Dolomite

Member
geordiemp said:
I am sure DL on PCs will have many applications, but for gaming in a 16 ms frame time I need to be convinced why Sony would need to put additional ML cores over and above what they have in the PS5. That's all.
I'd ask the engineers who amended the original specs of the console to more closely match Scarlett, but we're too hasty to wait for full RDNA2 implementation... or are we still holding out hope for that RDNA3 secret sauce?
 

geordiemp

Member
Dolomite said:
I'd ask the engineers who amended the original specs of the console to more closely match Scarlett, but we're too hasty to wait for full RDNA2 implementation... or are we still holding out hope for that RDNA3 secret sauce?

ML was in GCN - it's not new, it's called choices.

Full RDNA2 implementation for what exactly? The DX12 API that the PS5 does not use?

Or hardware such as:

Fully fine-gated frequency control of the DCUs? Infinity Cache? Fast caches, so > 2.2 GHz?

Shorter shader arrays at 5 DCUs or fewer for faster propagation?

Don't worry, it was hinted at by the CU slide in the reveal, and it will be explained more fully in the RDNA2 white paper.

I can guarantee you the XSX is nothing close to 100% of the actual hardware in RDNA2, and neither is the PS5.
 
Last edited:

Lethal01

Member


Oh, I can agree the evidence points to the XBSX having the advantage in ML performance.
But it's far too soon to say the PS5 doesn't have tech that performs similar functions to SFS or VRS.

You seem to be claiming that most people are trying to argue that the PS5 and XBSX are exactly the same and would be salty if they aren't.
I think most people are just arguing that we really don't know whose solution for most things is better or worse. Mesh shaders and primitive shaders are literally different things; however, it's yet to be seen whether the advantages of mesh shaders were achieved by different methods, using custom tech + primitive shaders.
It's just weird seeing all the celebration and boasting when a neutral statement like "PS5 uses custom RDNA 2" is made.
 

rnlval

Member
geordiemp said:
ML was in GCN - it's not new, it's called choices.

Full RDNA2 implementation for what exactly? The DX12 API that the PS5 does not use?

Or hardware such as:

Fully fine-gated frequency control of the DCUs? Infinity Cache? Fast caches, so > 2.2 GHz?

Shorter shader arrays at 5 DCUs or fewer for faster propagation?

Don't worry, it was hinted at by the CU slide in the reveal, and it will be explained more fully in the RDNA2 white paper.

I can guarantee you the XSX is nothing close to 100% of the actual hardware in RDNA2, and neither is the PS5.

"RX Vega" doesn't include 8x rate INT4 support.

[Image from https://blog.inten.to/hardware-for-deep-learning-part-3-gpu-8906c1644664]

From https://wccftech.com/amd-radeon-instinct-mi60-first-7nm-vega-20-gpu-official/

"Vega 20" which as Vega II and MI60 supports 8x rate INT4 and 4X rate INT8.


For RDNA, 8X rate INT4 and 4X rate INT8 are optional features.

MS has confirmed the 8X rate INT4 and 4X rate INT8 features for the XSX GPU.
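
As a sanity check on what those rate multipliers buy, here's the peak-rate arithmetic in Python. The 12.15 TFLOPS FP32 figure is MS's published XSX number; treating INT8/INT4 as a clean 4x/8x of the FP32 rate is the simplifying assumption (peak, not sustained, throughput):

Code:
# Peak-rate arithmetic for the XSX's 4x INT8 / 8x INT4 packed math.
# 12.15 TFLOPS FP32 is the published figure; the clean 4x/8x scaling
# assumes ideal packing of the low-precision ops.

XSX_FP32_TFLOPS = 12.15

int8_tops = XSX_FP32_TFLOPS * 4  # 4x rate INT8
int4_tops = XSX_FP32_TFLOPS * 8  # 8x rate INT4

print(f"FP32: {XSX_FP32_TFLOPS:.2f} TFLOPS")
print(f"INT8: {int8_tops:.1f} TOPS peak")  # ~48.6 TOPS
print(f"INT4: {int4_tops:.1f} TOPS peak")  # ~97.2 TOPS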
 
Last edited:

rnlval

Member
😂

Cache ≠ Infinity Cache.

No signs of Infinity Cache with 128 MB until AFTER he leaks it? That's your proof? Just let it go, man... I'm sorry it bothers you that much.

The size of the 128 MB Infinity Cache (an L3 cache based on Zen 2's L3 cache IP) was carefully selected to be four times XBO's 32 MB eSRAM, which can handle a 1600x900 framebuffer without delta color compression (DCC) and without tiling.

The 128 MB Infinity Cache can handle a 4K framebuffer with DCC.

Both the PS5 and XSX GPUs are less optimized for 4K when compared to the RX 6800.

MS knows memory bandwidth is very important for framebuffers.
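
To put rough numbers on those framebuffer claims, here's a quick footprint sketch in Python. The 8-bytes-per-pixel figure (one 32-bit colour target plus a 32-bit depth buffer, uncompressed) is an illustrative assumption; real pipelines with multiple render targets and DCC differ a lot:

Code:
# Uncompressed framebuffer footprints vs. on-die buffer sizes.
# Assumes one 32-bit colour target + a 32-bit depth buffer (8 B/pixel);
# real render pipelines and DCC change these numbers substantially.

ESRAM_MB = 32            # XBO eSRAM
INFINITY_CACHE_MB = 128  # RX 6800's Infinity Cache

def framebuffer_mib(width, height, bytes_per_pixel=8):
    # MiB for colour + depth at the assumed bytes-per-pixel.
    return width * height * bytes_per_pixel / (1024 ** 2)

for name, w, h in [("900p", 1600, 900), ("4K", 3840, 2160)]:
    mib = framebuffer_mib(w, h)
    print(f"{name}: {mib:5.1f} MiB | fits eSRAM: {mib <= ESRAM_MB} | "
          f"fits Infinity Cache: {mib <= INFINITY_CACHE_MB}")

900p lands around 11 MiB and 4K around 63 MiB under these assumptions, which is why 32 MB was workable at 900p while a 4K setup with multiple targets leans on the 128 MB cache plus DCC.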
 
Last edited: