
Microsoft Game Stack VRS update (Series X|S) - Doom Eternal, Gears 5 and UE5 - 33% boost to Nanite Performance - cut deferred lighting time in half

winjer

Member
Cerny never stated the PS5 has support for DP4A. So you trying to use his name to defend your position seems out of order.

The devs who stated that the PS5 has fewer capabilities in ML have already been shown to you, several times.

Have you noticed how, when someone talks about the Series S/X ML capabilities, they can point to MS official statements and documentation?
But when you make claims about the PS5's capabilities, you always use the PC slides for RDNA2. It's a wonder you haven't claimed that the PS5 has Infinity Cache.
 

Loxus

Member
Mark Cerny's talk in Road to PS5 is strictly about customizations done to the hardware. No customizations were done to the mixed-precision operations, so they weren't talked about.

DP4A (Signed Integer Dot-Product of 4 Elements and Accumulate) instructions multiply four pairs of 8-bit integers (one byte each, INT8) and accumulate the products into one 32-bit integer, and they run on a GPU's ALUs.
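As a rough illustration of what a single DP4A operation computes, here is a sketch in plain Python (not GPU code; the function name and the signed 32-bit wraparound are illustrative assumptions about typical accumulator behavior):

```python
# Illustrative sketch: what one DP4A instruction computes.
# Four signed 8-bit products are summed and accumulated into a 32-bit integer.

def dp4a(a: list[int], b: list[int], acc: int) -> int:
    """Emulate DP4A: acc += sum(a[i] * b[i]) over four INT8 lanes."""
    assert len(a) == 4 and len(b) == 4
    assert all(-128 <= x <= 127 for x in a + b), "operands must fit in INT8"
    acc += sum(x * y for x, y in zip(a, b))
    # Wrap to a signed 32-bit value, as a hardware accumulator would
    acc &= 0xFFFFFFFF
    return acc - 0x100000000 if acc >= 0x80000000 else acc

# One instruction does the work of 4 multiplies and 4 adds:
print(dp4a([1, 2, 3, 4], [10, 20, 30, 40], 0))  # 300
```

The point of packing four INT8 lanes per 32-bit register is throughput: ML inference kernels (e.g. super-resolution) trade precision for four 8-bit operations per lane per cycle.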

An RDNA 2 Compute Unit supports mixed precision operations (INT8/4) for tensor math.

PS5 is confirmed to have RDNA 2 Compute Units, which means it supports mixed precision operations (INT8/4) for tensor math.

It's basic understanding skills.

I'm putting you on ignore before you get me banned.
 

winjer

Member

You are just quoting the PC blurb.
And proving nothing.
 

Hobbygaming

has been asked to post in 'Grounded' mode.
Seeing all three or a combination of these features is going to be a big performance win for Xbox.
 

onesvenus

Member
There is no hardware dedicated to VRS if that's what you're asking
What I'm asking, and what I've been asking all this time, is a source about this.

As I told you before, Xbox claims to have added "new capabilities" to their GPU about VRS. Seeing how software VRS can be done via a compute shader and nothing else, and it could be already done in the PS4/One generation, I understand that as them adding some hardware (physical, no API/driver talk) to do something better.
You say that's not the case and I asked for a source.
 

Riky

My little VRR pleasure pearl goes vrrrooommm.
"

Tile size​

The app can query an API to know the supported VRS tile size for its device.

Tiles are square, and the size refers to the tile’s width or height in texels.

If the hardware does not support Tier 2 variable rate shading, the capability query for the tile size will yield 0.

If the hardware does support Tier 2 variable rate shading, the tile size is one of

  • 8
  • 16"
Only certain hardware supports Tier 2; it's in the Microsoft DX12 white papers, so yes, you need specific hardware support.
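The capability check the quoted doc describes can be sketched like this (plain Python mirroring the documented behavior, not actual D3D12 code; on a real device you would call ID3D12Device::CheckFeatureSupport with D3D12_FEATURE_D3D12_OPTIONS6 and read the ShadingRateImageTileSize field):

```python
# Sketch of the documented DX12 query behavior:
# Tier 2 hardware reports a tile size of 8 or 16 texels; anything else reports 0.

def shading_rate_image_tile_size(vrs_tier: int, hw_tile_size: int = 16) -> int:
    """Mirror the capability check: the query yields 0 unless the device is Tier 2."""
    if vrs_tier < 2:
        return 0  # no Tier 2 variable rate shading: tile size query yields 0
    assert hw_tile_size in (8, 16), "Tier 2 tile size is 8 or 16 texels"
    return hw_tile_size

print(shading_rate_image_tile_size(1))  # 0  (Tier 1 device)
print(shading_rate_image_tile_size(2))  # 16 (Tier 2 device)
```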
 

Riky

My little VRR pleasure pearl goes vrrrooommm.


"Geometry Engine" is a generic term, you can see it here on the Xbox Series Hotchips presentation.
 

John Wick

Member
Someone disagrees with you and that makes them a fanboy. Grow up.
Besides, I'm not the one that has the tag "Playstation fanclub"



Every game that implements it.
You asked me what new features and techniques this generation has that will make a difference. Mesh and primitive shaders are among them.
Considering that Mesh Shaders are the standard in DX12U, both on PC and Xbox, and that primitive shaders are on PS5, most games will probably use them in the future.



What bandwidth? It's like everything to you is bandwidth. Sorry to disappoint you, but it's not.
RT on consoles is limited primarily by shader throughput.
They don't have dedicated hardware for ray traversal; they only have acceleration for the BVH intersection tests.




Once again, it seems to me you have no idea what you are talking about. So when you have to explain any technical stuff, the only word you know is bandwidth.
And the funny thing is that RDNA2 has a tile-based rendering architecture, meaning it's less dependent on memory bandwidth than previous generations.


DLSS 1.9, it was Control. Before the patch for DLSS 2.0
XESS is in development along with Intel's Alchemist. But it is already implemented and shown in Hitman 3 and The Riftbreaker. It has also been shown in UE5, and an SDK will be released soon after their cards.
TAAU can be used in every game that uses UE4.19 or later. This version of UE was released in 2018. I've used it in several games already.
If you watch DF videos regularly, you will find that several games on consoles are already using TAAU.



Plenty of RT reflections though. Impressive upgrades, for what are essentially PS4 games.
And don't forget all the other games I pointed out.



No. Not all games that render fur have that level of detail. But please, prove me wrong and show me a game on the PS4 that has fur rendered as well as R&C on the PS5.



Strange that when a game doesn't fit your narrative, it instantly becomes a bad game.
Metro Exodus Enhanced received lots of praise for its RTGI implementation. I played it and was very impressed with the result.
So were a lot of gamers, and Digital Foundry made a great video showcasing this tech.



Once again with the fanboy insults, just because someone disagrees with you.
Fortunately, you said you would not answer any more in this thread.
Are you an actual developer? Or work in the games industry?
 

John Wick

Member
Forget trying to prove a point to Riky.
He completely ignores everything about the PS5's hardware.

He thinks you can only achieve hardware VRS with RB+ ROPs, which Nvidia doesn't have. So I guess Nvidia has been using software VRS all this time?

He also ignores that the PS5 Geometry Engine is customized to do Foveated Rendering.

He also ignores that the gains between software VRS and hardware VRS on a 2D screen (not VR) are small.
pRiky knows more than actual game developers because he happens to read some articles. It makes him an expert.
 

Riky

My little VRR pleasure pearl goes vrrrooommm.
As DF said, it's just a term for primitive shaders; AMD did them a while ago.
The Series version has to do with Mesh Shaders, which are the next iteration.
 

John Wick

Member
How mature, adding a p to my username; that's really the best you could do? When you've run out of arguments, I suppose it's all you've got left 🤣
pRiky, it's not my fault you live up to your name now, is it?
Also it's not your fault you're getting excited about these new features that MS seem to be shouting from the rooftops about.
Only problem is Nvidia have all of them already and some time ago too. VRS isn't something new. Nor ML. Just because Sony don't do technical breakdowns doesn't mean they are missing RDNA2 features. They might have other features which aren't part of DX12. It's just Sony don't need to advertise them because it's a waste of time because the developers have the info already.
 

Riky

My little VRR pleasure pearl goes vrrrooommm.

You're going to persist with your toddler-like name-calling, I see; no point entering into any discussion with you then.
 
"Through close collaboration and partnership between Xbox and AMD, not only have we delivered on this promise, we have gone even further introducing additional next-generation innovation such as hardware accelerated Machine Learning capabilities for better NPC intelligence, more lifelike animation, and improved visual quality via techniques such as ML powered super resolution."

By the way, I wonder where this extra ML hardware went to hide... Since then we have never heard anything about the existence of this hardware, let alone any developer talking about using it.
But it's only the other guy that lies (about things he didn't even say).


You are just quoting the PC blurb.
And proving nothing.

PC RDNA2 CUs support it.
PS5 CU = RDNA2 CU.
So, does the PS5 support it...? 2+2? Hum?

Even if the PS5 GPU is "customized" and not exactly the same as a PC RDNA2, it's understood that those changes happened in other parts of the GPU (just like the SeX GPU was "customized" to accommodate more CUs per SE); the CUs were left alone. The discussed feature happens on the CUs.
So the PS5's RDNA2 CUs were not changed, and the PC RDNA2 CU supports it.
So... does the PS5 support it? Hum?
HUUUUUUUUMM???!
 

Riky

My little VRR pleasure pearl goes vrrrooommm.
Jason Ronald talked about it being used for Auto HDR actually.
 

Three

Member
I'm not sure why you're hooked on that marketing article, but as I said before, "new innovative capabilities" refers to VRS itself, not any particular dedicated VRS hardware, because VRS is relatively new in general. Raytracing was advertised as a new capability of RTX cards in the example I gave, where it was still possible and later added to old GTX cards; that doesn't mean the RTX cards weren't better at it. If tomorrow Nvidia makes some small change in their new cards, I doubt it will be advertised as "new innovative raytracing capabilities". They did that to point out VRS as a new technique itself, compared to most of what we got last gen, not to point out specifics about hardware efficiency.

This does not mean there aren't things that make VRS efficiency better, but this is related to general things like mixed-precision or lower-precision options and extra things exposed in the API that benefit forward renderers when doing VRS. There isn't hardware dedicated to VRS, though, like you were suggesting earlier, like some separate processor doing asynchronous calculations to free up compute shaders or something. Worse still, something that makes VRS possible on one console and impossible on anything else, as people still seem to be suggesting. The funny thing is that the Gears "Tier 2 VRS" that people think is impossible is using compute shaders too.

"

Tile size​

The app can query an API to know the supported VRS tile size for its device.

Tiles are square, and the size refers to the tile’s width or height in texels.

If the hardware does not support Tier 2 variable rate shading, the capability query for the tile size will yield 0.

If the hardware does support Tier 2 variable rate shading, the tile size is one of

  • 8
  • 16"
Only certain hardware supports Tier 2, it's in the Microsoft DX12 white papers, therefore yes you need certain hardware support.
What a breakthrough! Why don't you contact Guerrilla Games and get an explanation as to how on earth they did VRS on Horizon Forbidden West, after they did a tile query in DX12 on a PS5 and it returned 0-sized tiles, because obviously they can't be doing "Tier 2 VRS" without the "hardware support".

 

Riky

My little VRR pleasure pearl goes vrrrooommm.
I don't need to; they used an inferior software version that gave bad results, as also seen with Metro Exodus. Nobody is saying that you can't mimic it poorly in software; that's been around since last gen.
Tier 2 is just much improved and supported in certain hardware; id said it themselves. If they say they wish every platform had it, and only one didn't with the next-gen patch, then it doesn't take Columbo to work it out.
 

winjer

Member
What a breakthrough! Why don't you contact Guerrilla Games and get an explanation as to how on earth they did VRS on Horizon Forbidden West, after they did a tile query in DX12 on a PS5 and it returned 0-sized tiles, because obviously they can't be doing "Tier 2 VRS" without the "hardware support".

They probably did it by manipulating the MSAA functions in the GPU, plus a shader program to identify and group fragments that can use the same shading.
 

Three

Member
They probably did it by manipulating the MSAA functions in the GPU, plus a shader program to identify and group fragments that can use the same shading.
Um... why would you do that when it's deferred and you have primitive shaders? You have a visibility buffer. You're thinking of Call of Duty's forward rendering VRS on a PS4.

The method is very similar to this:


And here is a tidbit for our resident believer of all things secret sauce and "inferior" everything else

It’s a very nice win. The API is very easy to enable, and in many cases you can get better performance without any other work. If it looks the same, but is faster, then of course you should do it. But there are ways we can improve the technique by doing it ourselves, in software.

As mentioned previously, we can use the same sample positions as the 1x reference image, so that our image converges to the non-VRS result. Also, Visibility VRS is able to solve some of the performance inefficiencies with Hardware VRS. What inefficiencies does Hardware VRS have? Well, if you thought we were done talking about quad utilization then I have some bad news for you.

The Coalition's VRS in UE5 uses something similar to what you see in Decima and HFW.
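The common idea behind both the hardware Tier 2 path and these software approaches is building a screen-space "rate image" that coarsens shading where a tile has little detail to lose. A toy sketch (this is NOT the Coalition's or Guerrilla's actual code; the tile size, contrast metric, and threshold are made up for illustration):

```python
# Toy sketch of shading-rate selection: flag low-contrast tiles for coarse
# (2x2) shading, keep high-contrast tiles at full (1x1) rate.

def rate_image(luma, tile=8, threshold=0.05):
    """For each tile x tile block of luminance values, pick '1x1' or '2x2'."""
    h, w = len(luma), len(luma[0])
    rates = []
    for ty in range(0, h, tile):
        row = []
        for tx in range(0, w, tile):
            block = [luma[y][x] for y in range(ty, min(ty + tile, h))
                                for x in range(tx, min(tx + tile, w))]
            contrast = max(block) - min(block)
            # Flat tiles can be shaded once per 2x2 pixels with little visible loss
            row.append("2x2" if contrast < threshold else "1x1")
        rates.append(row)
    return rates

flat = [[0.5] * 16 for _ in range(8)]                   # low-contrast region
edge = [[x / 15 for x in range(16)] for _ in range(8)]  # high-contrast gradient
print(rate_image(flat))   # every tile coarsened to "2x2"
print(rate_image(edge))   # every tile kept at "1x1"
```

Real implementations typically derive this from the previous frame's luminance (as in Doom Eternal's software VRS) or from the visibility buffer, and the hardware Tier 2 path consumes an equivalent rate image per 8x8 or 16x16 tile.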
 

Sega Orphan

Banned
You do know the PS5 is confirmed to be RDNA 2, with RDNA 2 Compute Units since Road to PS5 right?






If you didn't know,
INT4 and INT8 operations are done via the ALUs (Stream Processors) within a CU.

There is no extra hardware on the XBSX for that; INT4/8 are done by the CUs as well.

Read RDNA Whitepaper for better understanding.
Neither XSX nor PS5 are full RDNA2. They are a mash-up of both RDNA 1 and 2. You could call both of them RDNA 1.5, really.
For instance, the PS5 has RDNA 1 ROPs, while the XSX has RDNA 2 ROPs. It's the ROPs where hardware VRS is contained, which explains why the XSX has it and not the PS5.
Locuza on Twitter put a lot of this info out a while ago.
His quote about the ROPs was
"The new Render Backend+ is a lot smaller. Instead of 4 Color ROPs + 16 Z/Stencil ROPs, the new RB+ has 8 Color ROPs + 16 Z/Stencil ROPs. Xbox Series/RDNA2 GPUs have half the amount of Z/Stencil ROPs per SE vs. PS5/RDNA1. PS5 pays a lot more for the Render Backend, area wise."

The front ends and back ends are also different, with the PS5 WGP layout being the same as RDNA 1.

This is why it's not accurate to say they are both RDNA 2 and so have the same tech. Both aren't fully RDNA 2, and they have different bits and pieces to each other from RDNA 1 and 2.
 

Loxus

Member
Check this out.

Even with RDNA Display Engine, Media Engine, Rasterizer and ROPs, Renoir is still considered to be Vega.

You know why?
Because it has Vega Compute Units.
PS5 is confirmed to have RDNA 2 Compute Units.

 

Riky

My little VRR pleasure pearl goes vrrrooommm.
And here is a tidbit for our resident believer of all things secret sauce and "inferior" everything else

We've already discussed many times the tile size that Tier 2 gives you compared to software VRS, and the extra pass the latter needs; it's in the original Coalition document.
Your tidbit does say something very important though I agree, twice in fact,

"Hardware VRS".......all that was needed.
 

Sega Orphan

Banned
Just because it has RDNA 2 CUs does not mean it has INT4 and INT8. MS also has RDNA 2 CUs but said they added the ability for INT4 and INT8 over the stock CU. We have the Italian Sony engineer who said the PS5 didn't have the ML additions. We also have David Cage, who said in an interview that the XSX was more suited for ML than the PS5 because of the additions in its shader cores.

Nowhere has Sony said the PS5 has lower-precision additions, so until they do, the evidence is that it doesn't.

It must be said that you can do ML outside of INT4 and INT8 anyway, so of course the PS5 can do ML.
 
Didn't know where else to post this, but I just rewatched this...




...and jump to the part starting @ 14:25. It speaks about the Primitive Shaders for PS5. It seems like they do work notably differently to Mesh Shaders because they're intended for generating new geometry in real-time.

What I didn't realize is that they are also used for adjusting level of detail in rendering depending on areas of focus, i.e. areas that don't need as much detail have rendering pulled back, while those of more focus have additional detail generated with new on-the-fly geometry.

That is essentially a form of VRS, and it actually lines up with PS5 Software Engineer Matt Hargett's tweets from a couple years ago. Remember when they teased about VRS being useful, but possibly even earlier in the graphics pipeline? I think this is what they meant.

So basically, both systems have VRS; they just implement it differently. Microsoft's is implemented later in the pipeline for the framebuffer; Sony's is done earlier using the Primitive Shaders (which also lines up with Matt's tweets). Another way of looking at it is that Sony's solution rolls dynamic geometry generation and variable levels of framebuffer detail (VRS) into the Primitive Shaders (I'm sure the Geometry Engine has a part in this as well; IIRC the Primitive Shaders are in the GE), while Microsoft's solution splits them up into two distinct things: Mesh Shaders (which aren't for dynamic real-time geometry generation, but for controlling modification of batches of vertices more efficiently; normally vertex shading is done on each single vertex at a time) and Variable Rate Shading (VRS).

For as much as both systems have in common, I'm even more intrigued now with where their differences lie.

EDIT: I say "differences" but technically both systems can use both techniques in a more generic way I suppose. Just going from some other things mentioned in the same video. Each one is still optimized for a particular approach but I suppose that doesn't mean the other is unavailable for usage in a more "unoptimized" fashion.
 

Rea

Member
Nice avatar, by the way.
 

Riky

My little VRR pleasure pearl goes vrrrooommm.

That guy seemed to have no clue how the Xbox backwards compatibility works, and while he kept going back to SSD throughput for every comparison, he spent literally seconds on SFS and didn't factor it in at all. He also had no clue how similar Mesh Shaders and Primitive Shaders are; the main difference is pipeline control. He didn't mention hardware support, just describing them both as RDNA2.
Not worth the time it took him to make.
 

dcmk7

Member

Pretty interesting post / video (y)
 
That guy seemed to have no clue how the Xbox backwards compatibility works, and while he kept going back to SSD throughput for every comparison, he spent literally seconds on SFS and didn't factor it in at all. He also had no clue how similar Mesh Shaders and Primitive Shaders are; the main difference is pipeline control. He didn't mention hardware support, just describing them both as RDNA2.
Not worth the time it took him to make.

Keep in mind, the video's from 2020, right around the end of March or maybe April that year. I don't think as much about SFS was known at that time compared to today (and actually some things, like the "100 GB instantly accessible" stuff, are still cloudy IMO).

I think they did a good job discussing the big differences between Primitive Shaders (FWIW, Sony's Primitive Shaders are based on an updated spec that never got shipped with the Vega GPUs) and Mesh Shaders; yes, both systems can use both methods since, as you said, it comes down to pipeline control, but each is optimized for one of those two particular approaches, which I think is still the unique distinction.

And that's fine, because both systems have different ideas on how to boost performance. Again, they can both use both methods, but at the hardware level they are optimized for a specific implementation. The real reason I posted the video link though was because I think it puts to rest the idea PS5 doesn't have a VRS equivalent; it may not be called VRS for marketing reasons, but the system does have VRS capabilities and they're handled through the GE and Primitive Shaders at an earlier step of the graphics pipeline, that's all.

It's pretty much exactly as Matt Hargett said almost two years ago; I'm just personally coming to the realization in agreement now since it was something I skimmed over back then and didn't really care to look back upon for a long time.
 

Shmunter

Gold Member
Nobody says software VRS isn't possible, but hardware support isn't there, just accept it. Several sources have confirmed this several times well past that video.
Yes, because the PS5 has a more advanced system for in-screen dynamic detail rendering.

The tech is the cornerstone of PSVR2 with gaze tracking, where the scene in focus pumps up the detail and everything else gets toned down because you won't notice. Rendering efficiency increases three- or fourfold, according to devs disclosing their experience post-GDC 2022.

You don't waste silicon on something like VRS when you've leapfrogged it already. Is there anything to argue here, or are we going to pretend not to understand any of this?
 

In theory it's an excellent idea, BUT! In practice? Will it be used?
Maybe some internal studios with their custom engines can explore this, but everyone else? They'll rely on VRS Tier 2, which is becoming an industry-wide standard, with full support and understanding by everyone.
 
Last edited:

Riky

My little VRR pleasure pearl goes vrrrooommm.
Yes, because PS5 has a more advanced system for on-screen dynamic detail rendering.

The tech is the cornerstone of PSVR2 with gaze tracking, where the scene in focus gets pumped-up detail and everything else gets toned down because you won't notice. Rendering efficiency increases by 3- or 4-fold according to devs disclosing their experience post-GDC 2022.

You don't waste silicon on something like VRS when you've leapfrogged it already. Is there anything to argue here, or are we going to pretend not to understand any of this?
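Some back-of-envelope math on that "3 or 4 fold" claim (the region fractions here are illustrative guesses, not disclosed numbers): if the foveal region is shaded at full rate, a mid band at quarter rate (2x2 blocks), and the periphery at 1/16 rate (4x4 blocks), the relative shading cost does land in roughly that range.

```python
def shading_cost(fovea_frac, mid_frac):
    """Relative per-pixel shading cost vs. shading the whole frame at full rate."""
    periphery_frac = 1.0 - fovea_frac - mid_frac
    # full rate = 1.0, 2x2 coarse = 1/4, 4x4 coarse = 1/16
    return fovea_frac * 1.0 + mid_frac * 0.25 + periphery_frac * 0.0625

# Illustrative split: 15% fovea, 35% mid band, 50% periphery.
cost = shading_cost(0.15, 0.35)
print(round(1.0 / cost, 2))  # ~3.72x fewer shader invocations
```

The actual gain depends on how small the foveal region can be, which is exactly why eye tracking matters.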

Series consoles already have the pre-pipeline Mesh Shaders, then SFS. Tier 2 VRS comes later in the pipeline; it's a different ball game.
 
Nobody says software VRS isn't possible, but hardware support isn't there, just accept it. Several sources have confirmed this several times well past that video.

You mean Digital Foundry? Well, if a game doesn't need VRS, you're not going to see it in the game. Also if VRS on PS5 works the way it seems to (utilizing the Primitive Shaders at a different part of the graphics pipeline), then I don't think you're going to get a reduction of raster output or texture quality for parts not of focus. Those objects would just have lower levels of geometry detail, which would affect the rasterized result differently.

If the Primitive Shaders implement control of detail for generated objects at later parts of the pipeline, and the Primitive Shaders are themselves hardware then...isn't that essentially hardware support? If it's more about VRS the way the Series consoles do it then I'd agree PS5 doesn't have hardware support for that implementation of the technique, but it obviously has hardware support for an equivalent using the Primitive Shaders.

For Xbox Series X|S

VRS+FSR 2.0 = 120 FPS

amirite? :messenger_grimmacing_

Would've seemed that way at first, but Alex at DF himself came out not long ago saying FSR 2.0 is basically a nothingburger for consoles. Seems like it might be a bigger deal for PC though.

Reason being that console devs have been able to implement their own FSR-style solutions for a while now, and multiple game engines have their own, like UE with TSR. But there might be specific nuances to FSR 2.0 I'm missing that make it a notable jump over 1.0, and maybe something WRT the Series systems lets them leverage it better?

I think you'll see a lot more 120 FPS games for Series systems once devs are able to leverage Mesh Shaders more. When they can start doing that is the real question.

In theory it's an excellent idea, BUT! In practice? Will it be used?
Maybe some internal studios with their custom engines can explore this, but everyone else? They'll rely on VRS Tier 2, which is becoming an industry-wide standard, with full support and understanding by everyone.

Well the good news is, technically both systems can use Mesh & Primitive Shaders. It's just that each one's pipeline is more optimized for one approach over the other, so each would be less efficient at the approach it isn't optimized for.

So PS5 can still use Mesh Shading, it'll just be a bit less efficient at it than Series X. And Series X can use Primitive Shaders, just less efficiently than PS5. But it's also worth noting that any customizations on their GPUs were also probably made in part to optimize for the technique they are specifically tuned for.
 
Last edited:
Primitive shaders are not VRS. Also has any game ever used primitive shaders?
 
Primitive shaders are not VRS. Also has any game ever used primitive shaders?

Not saying Primitive Shaders are VRS, just that they implement a technique similar to VRS at a different part of the graphics pipeline. They can both do effectively the same thing: reduce detail on parts of the image not in focal view, in part to help boost performance per frame.

You just have them being done at different parts of the pipeline using different hardware components. But yes, the real purpose of Primitive Shaders isn't a VRS-like technique; it's dynamic real-time geometry generation, particle generation and such. It's aimed at boosting fidelity, mainly.

No commercial 3P games use Primitive Shaders, because the earlier form on Vega was borked and AMD moved on to Mesh Shaders anyway for RDNA2. Sony is using an updated form of those earlier Primitive Shaders which they've fixed up alongside AMD for PS5, and I wouldn't be surprised if some 1P games like Horizon Forbidden West are already utilizing the Primitive Shaders in some ways.

Primitive/mesh shaders and VRS are completely different things. They can coexist to render the final image.

I know that. What I'm saying is that Primitive Shaders allow for controlled culling of detail at an early part of the graphics pipeline which can help reduce attention to detail for those elements at the rasterization stage.

In that way it operates similar to VRS because the intent in that usage case is the same, though I know Primitive Shaders have main uses much different than that. As for Primitives & Meshes, IIRC the difference is Meshes combine the geometry generation and vertex shading into a single stage and allow batched vertex shading. It's aimed mainly at efficiency while Primitives are aimed at increasing fidelity.

Yes they can both be used in tandem, but Sony & MS have made some design choices in their GPUs where one or the other technique will perform better on their hardware. That should lead to some interesting results as time goes on.
 
Last edited:
Yet again we have here a display of ignorance and the never-ending spread of misinformation. The same people who are still pushing this Uber Geometry Engine nonsense are the same people who said that Horizon FW and the PS5 2020 demo were only possible on the PS5, due only to PS5's special and unique god-like Geometry Engine/SSD. This has been thoroughly debunked, and will be again with the Matrix Awakens demo coming out in 4 days on PC. These people will be nowhere to be found, or they will be spreading more fabricated nonsense, like that the demo requires 9,000 GB of RAM on PC, just like they did with the Valley of the Ancient.

VRS has NOTHING to do with primitive shader and mesh shader.
And saying PS5 has primitive shaders so it doesn't need VRS is like saying PS5 games have shadows so they don't need reflections.

Primitive Shaders and Mesh Shaders are also different, and it's not just the name.
Primitive shaders just replace the Vertex, Domain and Geometry Shaders.
Mesh Shaders and Amplification Shaders, on the other hand, do that AND MORE!

"In 2017, to accommodate developers’ increasing appetite for migrating geometry work to compute shaders, AMD introduced a more programmable geometry pipeline stage in their Vega GPU that ran a new type of shader called a primitive shader. According to AMD corporate fellow Mike Mantor, primitive shaders have “the same access that a compute shader has to coordinate how you bring work into the shader.” Mantor said that primitive shaders would give developers access to all the data they need to effectively process geometry, as well. Primitive shaders led to task shaders, and that led to mesh shaders."



Here is what the traditional geometry pipeline looks like:


Here it is after primitive shader implementation, which replaces the vertex shader, domain shader and geometry shader (3 stages):

Here it is after mesh shader implementation: it completely replaces the input assembler, tessellation, vertex shader, hull shader, domain shader and geometry shader stages (6 stages). It's a complete redesign of the geometry pipeline.
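To make the "3 stages vs. 6 stages" comparison easy to eyeball, here are the D3D-style stage names from the post expressed as data (the lists reflect the post's description, not an authoritative spec):

```python
# D3D-style geometry pipeline stages named in the post, and which ones
# each newer shader type is described as replacing.

TRADITIONAL_PIPELINE = [
    "input assembler", "vertex shader", "hull shader", "tessellator",
    "domain shader", "geometry shader", "rasterizer", "pixel shader",
]

REPLACED_BY_PRIMITIVE_SHADERS = [
    "vertex shader", "domain shader", "geometry shader",
]

REPLACED_BY_MESH_SHADERS = [
    "input assembler", "tessellator", "vertex shader",
    "hull shader", "domain shader", "geometry shader",
]

print(len(REPLACED_BY_PRIMITIVE_SHADERS), len(REPLACED_BY_MESH_SHADERS))  # 3 6
```

Note the mesh shader list is a strict superset of the primitive shader one, which is the "AND MORE!" part.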

 
Last edited:

Fafalada

Fafracer forever
Here it is after mesh shader implementation:
I know you're arguing about something completely different - but all I see in that image is that it took us 22 years to get back to the PS2 pipeline.

Anyway - one thing of more relevance though
VRS has NOTHING to do with primitive shader and mesh shader.
That depends on the implementation.
There are literally patents that describe rendering with variable screen outputs (which includes VRS) that work via the Primitive Assembly stage, which sits in the 'Geometry Engine' box in AMD GPUs, at least.
 
staticshock You realize PS5's Primitive Shaders aren't 1:1 the same as the Vega ones, right? AMD updated the Primitive Shader spec, and the original form was never enabled in further Vega GPUs (to my knowledge). I'm not saying that to imply their Primitive Shaders are exactly like Mesh Shaders (there are obvious differences), but there's also nothing you can bring up refuting Matt Hargett's posts on the topic of VRS-like techniques on PS5, can you? ;)

Also chill on that Matrix demo already, that's got nothing to do with the topic 😂
 
VRS has NOTHING to do with primitive shader and mesh shader.

We have our very own Alex Battaglia in this forum. Many of them, actually.

What people are saying when they compare VRS with the potential of Primitive Shaders is that both can be used for the same end: render less of the scene, saving performance.
What they mean is that a game making heavy and proper use of this capability of Primitive Shaders would get a bigger boost to performance than a game not using it and relying on VRS instead for the same purpose.


Personally, I think these people are wrong about VRS, because it can also be used on parts of the scene that are fully visible right in front of the player's camera. Will Primitive Shaders touch those? VRS also covers those normally visible parts that don't carry much detail and don't need the highest-precision pixel math.

Also consider that geometry and fill rate are handled by different parts of the hardware, and each console has different levels of performance in both.
In the end the best software team will win, which we all know is Sony's anyway. :pie_eyeroll:
 

Salty Pickle

Neo Member
You do know that UE5 (matrix demo) uses prim/mesh shaders right? Yet they perform very similarly in UE5 at least. Why then do you argue that one is better than the other?
 
Is there really talk that foveated rendering for VR is applicable to regular TV games and is the superior technology to VRS? Not saying VRS is superior, but like they’re two different use cases of optimization and one requires an eye tracker lol

I haven't seen anyone doing that xD. I'd be interested to see whether foveated rendering could work with a PS Eye camera though, and whether detecting eye movement would bring any efficiency benefits for traditional rendering to a television.

But I don't think that would do anything on its own; the camera wouldn't be able to tell what specific pixel area the user is looking at, so you'd get FR applied sloppily across a big chunk of the frame as it tries to make a guess.
 

Shmunter

Gold Member
Is there really talk that foveated rendering for VR is applicable to regular TV games and is the superior technology to VRS? Not saying VRS is superior, but like they’re two different use cases of optimization and one requires an eye tracker lol
Foveated rendering is a focus-related solution. But for it to be implemented, the underlying engine needs the ability to render different levels of detail within a frame.

In a 2D game you wouldn't need a foveated technique, but you could exploit that same variable-detail-per-frame ability in a way that makes sense for a flat screen.
 