
Half Life Alyx on PSVR2 via PC. I never thought I’d see the day.

A.Romero

Member
I guess I'm just not understanding the Venn diagram of people who:

- Have a PC capable of running Alyx well
- Are clearly interested in VR
- Are clearly interested in Alyx
- Somehow haven't already played it on a headset nearly half the price years ago

But obviously that's going to apply to a few people that only just got into VR, I guess.

- Have PC
- Have Oculus Rift
- Interested in Alyx
- Haven't played it

Interested in PSVR2 because I'm not planning to support Meta and their devices anymore.
 

nemiroff

Gold Member
Yes, it does. You move your eyes around and anything outside of the center would be worse.

That's not FFR per se, that's the lens properties and how they affect light rays on different parts of the lens. That's why, e.g., the Oculus DK1/DK2 had horrendous image quality at the periphery and tiny sweet spots even before FFR was introduced. With the correct use of a foveation map, FFR doesn't have to worsen image quality (though as a developer you can make it lossy if you want). Hence the diagram I posted earlier.

If you look at actual foveation maps, they are mapped to the individual lens properties of each headset. Which is also why I wrote that headsets with pancake lenses will benefit more from eye tracking than headsets with Fresnel lenses.
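To make that concrete, here's a minimal sketch (not any vendor's actual SDK, just an illustration with a made-up falloff curve) of how a fixed foveation map can be derived from how much detail the lens can actually resolve at each angle from its center - render density only drops where the lens couldn't show the extra pixels anyway:

```python
def lens_resolvable_fraction(eccentricity_deg, sweet_spot_deg=15.0):
    """Assumed lens falloff: fraction of full detail the lens can still
    resolve at a given angle from its optical center (illustrative only)."""
    if eccentricity_deg <= sweet_spot_deg:
        return 1.0
    # Gentle roll-off outside the sweet spot, clamped to a floor.
    return max(0.25, 1.0 - 0.02 * (eccentricity_deg - sweet_spot_deg))

def fixed_foveation_map(half_fov_deg=55.0, rings=6):
    """Radial map of render-density multipliers, one value per ring,
    matched to the (assumed) lens falloff above."""
    ring_width = half_fov_deg / rings
    return [round(lens_resolvable_fraction((i + 0.5) * ring_width), 2)
            for i in range(rings)]

# Density stays at 1.0 inside the sweet spot and falls off toward the lens edge,
# so the reduced shading there doesn't cost any visible image quality.
print(fixed_foveation_map())
```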
 

Buggy Loop

Member
Quest 3 doesn't have eye tracking, so no foveated rendering, and it doesn't have a dedicated cable either. I'm not saying it's shit, but it's not really the next-gen Quest you would hope for.
IMO, they should have focused on variable focus and eye tracking.

It has fixed foveated rendering, of course. You barely save any performance beyond that. Someone wanting to play Half-Life: Alyx, a game effectively designed back when a GTX 970 passed the VR requirement test, isn't worried about foveated rendering or about buying a $500+ headset out of fear that their <$500 PC won't handle the game.

The foveated rendering won't work on PC unless Sony officially supports PCVR to begin with, so I'm not sure who keeps insisting on a feature when what we have right now is a hack, not the whole software stack Sony has.

Quest 3 has more pixel density, pancake lenses, a smaller form factor, a dedicated chipset that packs a punch, a battery, is lighter, still has Air Link (which will even improve with AV1 decoding), and a Link cable also with AV1... and of course full support and patching from Meta rather than a hack, unless Sony steps in and supports it officially. Not to mention that the Oculus game studios are pretty fucking good, on top of PCVR support.

Just why? Who would buy that when it's just a neat hack as of now? If Sony adds official support, then we can have a discussion. For now, the features you're hyping aren't there, they just aren't.
 

R6Rider

Gold Member
That's not FFR per se, that's the lens properties and how they affect light rays on different parts of the lens. That's why, e.g., the Oculus DK1/DK2 had horrendous image quality at the periphery and tiny sweet spots even before FFR was introduced. With the correct use of a foveation map, FFR doesn't have to worsen image quality (though as a developer you can make it lossy if you want). Hence the diagram I posted earlier.

If you look at actual foveation maps, they are mapped to the individual lens properties of each headset. Which is also why I wrote that headsets with pancake lenses will benefit more from eye tracking than headsets with Fresnel lenses.
Foveated rendering means rendering the image at higher detail at specific points. In a game this is typically the center of the display, where a player is most often looking. There are tons of articles on this online with examples.

Lenses matter, but Fixed Foveated Rendering is far worse than Eye-tracked.

Here's an article from UploadVR for those interested:

So back to your earlier comment about it not making sense, yes, it does.
 

nemiroff

Gold Member
Foveated rendering means rendering the image at higher detail at specific points. In a game this is typically the center of the display, where a player is most often looking. There are tons of articles on this online with examples.

Lenses matter, but Fixed Foveated Rendering is far worse than Eye-tracked.

Here's an article from UploadVR for those interested:

So back to your earlier comment about it not making sense, yes, it does.
What the fuck is going on... Are you for real..? That's the article I posted the image from earlier! (the 5%-9% performance difference with eye-tracked FR)

How do you not know that lenses, especially Fresnel lenses, are inherently blurry away from the center? Which is exactly why FFR was introduced (to reduce resolution/performance without affecting image quality).
 

R6Rider

Gold Member
What the fuck is going on... Are you for real..? I've been doing this for twenty years. How do you not know that lenses, especially Fresnel lenses, are inherently blurry away from the center? Which is exactly why FFR was introduced.
You literally claimed above that it doesn't make any sense how the edges would look worse when looking around in a game with Fixed Foveated Rendering.

They DO look worse outside of the detailed render area. That's a fact.

20 Years? Are YOU for real. Foveated rendering is not hard to understand, and it's even easier to see how it works with videos showcasing it in action.
 

nemiroff

Gold Member
You literally claimed above that it doesn't make any sense how the edges would look worse when looking around in a game with Fixed Foveated Rendering.

They DO look worse outside of the detailed render area. That's a fact.

20 Years? Are YOU for real. Foveated rendering is not hard to understand, and it's even easier to see how it works with videos showcasing it in action.
You'll feel like a schmuck when you realize..

I'll just go ahead and put you on ignore.
 

Fafalada

Fafracer forever
Analysis of the Quest Pro. FFR vs ETFR:
Is there any test methodology data provided for this?
That chart in and of itself says absolutely nothing - we're talking about methods designed to save on pixel compute, and I have no idea what they were measuring there when they say 'GPU', or under what conditions.
Implementation matters as well - if someone does this the naive way, e.g. using VRS, you can and will run into diminishing returns all over the place depending on the scene topology - and that has nothing to do with actual workload gains, just limitations of VRS itself.

Also the statement that 'more aggressive FOV map' means 'more lossy' makes the entire comparison pointless if it's true.
The comparison only works if the quality metric is fixed (and assessing that measure objectively between the two is difficult - given that the entire point of these methods is minimizing the amount of pixel work done while maintaining perceptual quality) - if it's not, what's to stop someone from being 'extra aggressive' and just downsampling to arbitrarily nonsensical numbers?

It'll come, but it's not a feature on PCVR even worth discussing. PCs have had fixed foveated rendering for the longest time if performance is a concern.
PCVR was entirely brute-forcing everything for the longest time because hw support for variable pixel distribution in PC GPUs was a fragmented shit-show until 2019 or thereabouts. Performance was always a concern - but the most viable solution for end users was to buy a bigger GPU.
Even today, the one standard that does have broad support (VRS) is substantially limited - though it's better than the situation before, at least.
 

nemiroff

Gold Member
Is there any test methodology data provided for this?
That chart in and of itself says absolutely nothing - we're talking about methods designed to save on pixel compute, and I have no idea what they were measuring there when they say 'GPU', or under what conditions.
Implementation matters as well - if someone does this the naive way, e.g. using VRS, you can and will run into diminishing returns all over the place depending on the scene topology - and that has nothing to do with actual workload gains, just limitations of VRS itself.

Also the statement that 'more aggressive FOV map' means 'more lossy' makes the entire comparison pointless if it's true.
The comparison only works if the quality metric is fixed (and assessing that measure objectively between the two is difficult - given that the entire point of these methods is minimizing the amount of pixel work done while maintaining perceptual quality) - if it's not, what's to stop someone from being 'extra aggressive' and just downsampling to arbitrarily nonsensical numbers?


PCVR was entirely brute-forcing everything for the longest time because hw support for variable pixel distribution in PC GPUs was a fragmented shit-show until 2019 or thereabouts. Performance was always a concern - but the most viable solution for end users was to buy a bigger GPU.
Even today, the one standard that does have broad support (VRS) is substantially limited - though it's better than the situation before, at least.

It's not pointless, it's even described in detail in the Meta SDKs. I can't believe after all these years I'd have this type of discussion just because Sony released a headset. It's astonishing.
 

Fafalada

Fafracer forever
You literally claimed above that it doesn't make any sense how the edges would look worse when looking around in a game with Fixed Foveated Rendering.
They DO look worse outside of the detailed render area. That's a fact.
Terminology has been abused to hell - but what Oculus refers to as 'FFR' is really intended to be just lens-matching. What the other poster is talking about - the geometry of the lens - means that you can efficiently redistribute pixels to mimic the distortion of the lens (aka FFR) and have the exact same perceptual quality as if you rendered it brute-force. This is how the majority of console and mobile VR has operated to date.

The reason people equate 'loss in quality' with FFR is that a lot of applications, instead of matching the lens, decide to be more 'aggressive' and cull the pixel resolution lower. But that's not - in and of itself - a property of the approach, it's how developers decide to apply it.
The same applies to eye tracking (it's supposed to be perceptually lossless - but it's entirely possible to make it lossy and still sort of get away with it).

It's not pointless, it's even described in detail in the Meta SDKs.
I have no idea what you're even replying to here?
 

nemiroff

Gold Member
Terminology has been abused to hell - but what Oculus refers to as 'FFR' is really intended to be just lens-matching. What the other poster is talking about - the geometry of the lens - means that you can efficiently redistribute pixels to mimic the distortion of the lens (aka FFR) and have the exact same perceptual quality as if you rendered it brute-force. This is how the majority of console and mobile VR has operated to date.

The reason people equate 'loss in quality' with FFR is that a lot of applications, instead of matching the lens, decide to be more 'aggressive' and cull the pixel resolution lower. But that's not - in and of itself - a property of the approach, it's how developers decide to apply it.
The same applies to eye tracking (it's supposed to be perceptually lossless - but it's entirely possible to make it lossy and still sort of get away with it).


I have no idea what you're even replying to here?
Maybe I misunderstood you.

I don't see how it would be difficult to access the API, toggle eye tracking/FFR on and off, and measure the performance between them at different map resolution levels.
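Something along these lines would do it - the two stubs here (set_foveation, gpu_frame_time_ms) are hypothetical placeholders for whatever the headset runtime actually exposes, not real SDK calls, but the shape of the experiment is just this:

```python
import statistics

# Hypothetical stubs - wire these up to the actual runtime/API of the headset.
def set_foveation(mode: str, level: int) -> None:
    """mode: 'fixed' or 'eyetracked'; level: how aggressive the foveation map is."""
    ...

def gpu_frame_time_ms() -> float:
    """GPU time of the last rendered frame, in milliseconds."""
    ...

def average_frame_time(mode: str, level: int, frames: int = 2000) -> float:
    set_foveation(mode, level)
    return statistics.mean(gpu_frame_time_ms() for _ in range(frames))

def compare(levels=(1, 2, 3)) -> None:
    for level in levels:
        ffr = average_frame_time("fixed", level)
        etfr = average_frame_time("eyetracked", level)
        diff = (ffr - etfr) / ffr * 100
        print(f"level {level}: FFR {ffr:.2f} ms vs ETFR {etfr:.2f} ms ({diff:+.1f}%)")

# Call compare() once the two stubs above are hooked up to the real runtime.
```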

Regarding the numbnuts I tried to talk to earlier, I forgot to mention FOV distortion, which, like lens distortion itself, is a contributing factor to why FFR is so beneficial even without eye tracking.

 

Buggy Loop

Member
It's not pointless, it's even described in detail in the Meta SDKs. I can't believe after all these years I'd have this type of discussion just because Sony released a headset. It's astonishing.

It's new for them, thus they think they're always at the edge of technology.

There's a reason headset suppliers ditched OLED, and there's a reason why everyone has had a solution for eye-tracked foveated rendering for YEARS yet doesn't ship it (it will make more sense with AR).

But no, Sony knows all.
 

rofif

Can’t Git Gud
Meryl Streep Doubt GIF


Not sure why anyone on PCVR would pick this over the upcoming Quest 3
Yep. It will not work well. Doubt they will get eye tracking and stuff to work.
Besides, PSVR2 is kinda shitty. The HDR screens are good, but the sweet spot is some of the smallest I've ever seen. Fresnel lenses also suck ass. And the grainy pattern on the OLED was bad.
Ultimately I returned it due to bad comfort + sweet spot. I constantly had to move it around. Halo straps are the worst. I never had to move my Rift CV1 around on my head; I could do a full 7-hour play session without any adjusting. You can never do that with halo straps. They will always move around and creep up the back of your head.
Quest 3 with pancake lenses will rock.
 

Fafalada

Fafracer forever
I don't see how it would be difficult to access the API, toggle eye tracking/FFR on and off, and measure the performance between them at different map resolution levels.
My point was that without being explicit about what you're measuring (GPU metrics aren't one number - not even close - and none of this optimizes for 'GPU' as a whole), and exactly how the two are configured (i.e. the perceptual quality target should be the same between the two runs - else you're comparing apples to coconuts), I have no idea what the graph is telling me.

Second bit is that - as I alluded to in another post - implementing a non-linear distribution of pixel/sample coverage can have great variance in what it does for GPU performance as well - and this is completely orthogonal to whether you use eye tracking with it or not. If I pick a geometry-limited method, and my scene happens to use a lot of geometry processing, my gains will be proportionally poor. And vice versa - it's *easy* to set up demo scenes that prove just about anything I want them to prove.
Basically, without the context of a broader statistical sample (and accounting for the combination of the above variables), making blanket statements about how X is Y% different from Z is meaningless.
I said exactly the same thing when the Unity demos were first shown with PSVR2 and the various gains they achieved - none of those multipliers meant anything in isolation - but people were all too keen to take the highest or lowest number (depending on what they were trying to prove) and run with it.
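To put a rough number on the pixel-limited vs geometry-limited point: if an optimization only shrinks pixel work, the frame-level gain is bounded by how much of the frame was pixel work to begin with. Quick illustration (the workload splits below are made up for the example, not measurements of anything):

```python
def frame_speedup(pixel_fraction: float, pixel_work_saved: float) -> float:
    """Amdahl-style estimate: foveation only shrinks the pixel-bound part of the
    frame; geometry processing, submission etc. are left untouched."""
    remaining = (1.0 - pixel_fraction) + pixel_fraction * (1.0 - pixel_work_saved)
    return 1.0 / remaining

# The same 40% cut in shaded pixels gives very different frame-level results:
print(frame_speedup(pixel_fraction=0.8, pixel_work_saved=0.4))  # ~1.47x, pixel-heavy scene
print(frame_speedup(pixel_fraction=0.3, pixel_work_saved=0.4))  # ~1.14x, geometry-heavy scene
```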

Regarding the numbnuts I tried to talk to earlier, I forgot to mention FOV distortion, which, like lens distortion itself, is a contributing factor to why FFR is so beneficial even without eye tracking.
Lens and FOV distortion is *the* reason why FFR exists at all. If GPUs natively supported non-linear projection rendering, we wouldn't even be having a discussion about it - everyone would just plug the lens equation into the camera matrix and be done with it - but we don't have that.
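For reference, 'the lens equation' here is typically a radial polynomial along these lines (the coefficients are per-lens; the form below is just the common textbook model, shown for illustration):

```latex
% Common radial distortion model (Brown-Conrady style), illustrative only.
% r  : distance of a point from the optical center in the ideal image
% r' : where the lens actually places that point
r' = r\,(1 + k_1 r^2 + k_2 r^4)
```

Because r appears at higher powers, that mapping can't be folded into a linear 4x4 projection matrix - which is why everything gets rendered to a regular grid first and the distortion (and any foveation matched to it) has to be handled separately.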
 

nemiroff

Gold Member
My point was that without being explicit about what you're measuring (GPU metrics aren't one number - not even close - and none of this optimizes for 'GPU' as a whole), and exactly how the two are configured (i.e. the perceptual quality target should be the same between the two runs - else you're comparing apples to coconuts), I have no idea what the graph is telling me.

Second bit is that - as I alluded to in another post - implementing a non-linear distribution of pixel/sample coverage can have great variance in what it does for GPU performance as well - and this is completely orthogonal to whether you use eye tracking with it or not. If I pick a geometry-limited method, and my scene happens to use a lot of geometry processing, my gains will be proportionally poor. And vice versa - it's *easy* to set up demo scenes that prove just about anything I want them to prove.
Basically, without the context of a broader statistical sample (and accounting for the combination of the above variables), making blanket statements about how X is Y% different from Z is meaningless.
I said exactly the same thing when the Unity demos were first shown with PSVR2 and the various gains they achieved - none of those multipliers meant anything in isolation - but people were all too keen to take the highest or lowest number (depending on what they were trying to prove) and run with it.


Lens and FOV distortion is *the* reason why FFR exists at all. If GPUs natively supported non-linear projection rendering, we wouldn't even be having a discussion about it - everyone would just plug the lens equation into the camera matrix and be done with it - but we don't have that.

You're free to change the direction of the discussion to be more about how to accurately measure performance, and to question methods. That's legit. But my journey into this topic started as a comment against the perceived notion that ETFR is an exclusive holy grail of performance boosting for VR. That's all.

My two simple points were:
1. It kinda isn't
2. It's contextual. Hence my reference to lens tech, lens properties (and later FOV warping). It's also important to take into consideration the balance between performance and image quality, which is all in the hands of the developer.

The graph itself was taken from a talk Meta held about optimizing performance for the Quest headsets. In fact the segment it was taken from was clearly more of a promotion of the newly implemented ETFR for the Quest Pro in their SDK rather than FFR per se. That's why, in my opinion, it's pretty damn safe to assume there's no particular bias in the performance indication. They were using a bespoke test application to test the performance difference, and thus I see no reason why it shouldn't be used as a rough estimate.

The difference seemed to me to be around 5-9%. But if it would make people happy, we could even raise it to 15-20% on a whim; in the big picture it still wouldn't be that much of a difference relative to the original notion.
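Just to translate those percentages into actual frame time - taking a 90 Hz frame budget of ~11.1 ms purely as an example number, nothing headset-specific:

```python
def headroom_ms(gpu_frame_ms: float, saving_pct: float) -> float:
    """Milliseconds reclaimed if ETFR shaves saving_pct percent off the GPU frame."""
    return gpu_frame_ms * saving_pct / 100.0

budget = 1000.0 / 90.0  # ~11.1 ms per frame at 90 Hz (example budget)
for pct in (5, 9, 15, 20):
    print(f"{pct:>2}% of an {budget:.1f} ms GPU frame = {headroom_ms(budget, pct):.2f} ms")
# 5-9% works out to roughly 0.5-1.0 ms of headroom; even 15-20% is only ~1.7-2.2 ms.
```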
 

K2D

Banned
I guess I'm just not understanding the Venn diagram of people who:

- Have a PC capable of running Alyx well
- Are clearly interested in VR
- Are clearly interested in Alyx
- Somehow haven't already played it on a headset nearly half the price years ago

But obviously that's going to apply to a few people that only just got into VR, I guess.

Basically all late adopters?

I had PSVR, and now I want the best performance/value headset.

I have a PS5, and I could upgrade my GPU down the line.
 

Buggy Loop

Member
Sony should just fucking post official PC drivers already. If they really want to push VR as a platform, they need to not be so restrictive with their own share of it.

Don't they lose money on each unit? Without the game store sales, supporting PCVR must not be financially interesting for them.
 

cyberheater

PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 Xbone PS4 PS4
Don't they lose money on each unit? Without the game store sales, supporting PCVR must not be financially interesting for them.
I've done a Google search and I can't find any articles that state whether Sony makes or loses money on PSVR2.
 

Fafalada

Fafracer forever
You're free to change the direction of the discussion to be more about how to accurately measure performance, and to question methods. That's legit. But my journey into this topic started as a comment against the perceived notion that ETFR is an exclusive holy grail of performance boosting for VR. That's all.
It's hardly exclusive, given there are multiple headsets that support it (and one released like 4 years ago, IIRC).
It is borderline useless on PC - but that applies to most forms of similar optimization on PC, including 'FFR', due to the aforementioned hardware fragmentation. Closed boxes are the primary beneficiaries here, so consoles and Quest, pretty much.

1. It kinda isn't
2. It's contextual.
I appreciate the reasoning for the first point - but your 2nd one is what my argument was all about. I don't think blanket statements in either direction are meaningful without context. And said context for the Oculus benchmarks is pretty specific to their hw/sw combination, at a given point in time.

Hence my reference to lens tech, lens properties (and later FOV warping). It's also important to take into consideration the balance between performance and image quality, which is all in the hands of the developer.
We obviously agree here - that's what I was referencing in the first post. When a benchmark alludes to variable quality settings it already sets off alarms for me, because I have no idea how they're measuring that. I.e. if we're gonna make statements about the benefits of different optimization techniques, we either need to fix quality or fix performance and observe the other metric in isolation to get a sense of the trade-offs. Moving both just makes the whole thing a jumble of noise.

The graph itself was taken from a talk Meta held about optimizing performance for the Quest headsets. In fact the segment it was taken from was clearly more of a promotion of the newly implemented ETFR for the Quest Pro in their SDK rather than FFR per se.
That helps as context, but also makes my point for why it's not useful for broader comparisons across different hw/sw stacks.
To give a specific example of a similar thing - Sony had benchmarks of 4-5 different FFR/lens-matching optimizations for PSVR back in 2016 (The reason there were multiple approaches is that PS4 hw had no direct way to do variable render-target resolution, so each approach had different trade-offs. PS4Pro added hw for it - but that would only benefit 5-10% of your userbase).
Now - the best case for optimization on PSVR lens (assuming we are targeting no-quality loss compared to brute-force/naive render) was somewhere around 2.2x. The different techniques landed all over the place (on the same demo-level scenario) from 1.2 to close to 2.0. All of them were doing the same kind of optimization - ie. FFR, but savings were variable for the specific test-case.
On a retail title I worked on, we used one of said techniques, as it fit reasonably well with the rendering pipeline we were working with. Contrary to Sony's own benchmark (which put it second to last in terms of raw performance increase), we were getting approximately 1.8-2.x performance on average, so very different results. Not because there were attempts at misleading - the types of content rendered, combined with the specifics of the rendering pipeline, simply yielded different returns.
TL;DR - sweeping generalizations about graphics optimizations are more often wrong than not - we're really looking at a statistical spectrum with any of these.

My personal view on ETFR itself is that we're still in very immature stages when it comes to production codebases (and that's where the impact is actually measured for the end user), and we're up against 30 years of rendering pipeline evolution that went in a different direction - and that's just the software bit. And frankly, we still struggle with many of the basics in VR pipelines, so I never expected we'd be getting big returns early on with this either.
And on the hw front there may well be issues as well - admittedly I've not really followed closely what current hw is actually capable of vs. the theoretical limits of where we want it to be.
But to the point - academic research precedent shows the theoretical best case is orders of magnitude removed from just rendering the scene statically - but getting the software and hardware to that point may well be far in the future.
 

nemiroff

Gold Member
It's hardly exclusive, given there are multiple headsets that support it (and one released like 4 years ago, IIRC).
It is borderline useless on PC - but that applies to most forms of similar optimization on PC, including 'FFR', due to the aforementioned hardware fragmentation. Closed boxes are the primary beneficiaries here, so consoles and Quest, pretty much.


I appreciate the reasoning for the first point - but your 2nd one is what my argument was all about. I don't think blanket statements in either direction are meaningful without context. And said context for the Oculus benchmarks is pretty specific to their hw/sw combination, at a given point in time.


We obviously agree here - that's what I was referencing in the first post. When a benchmark alludes to variable quality settings it already sets off alarms for me, because I have no idea how they're measuring that. I.e. if we're gonna make statements about the benefits of different optimization techniques, we either need to fix quality or fix performance and observe the other metric in isolation to get a sense of the trade-offs. Moving both just makes the whole thing a jumble of noise.


That helps as context, but also makes my point for why it's not useful for broader comparisons across different hw/sw stacks.
To give a specific example of a similar thing - Sony had benchmarks of 4-5 different FFR/lens-matching optimizations for PSVR back in 2016 (The reason there were multiple approaches is that PS4 hw had no direct way to do variable render-target resolution, so each approach had different trade-offs. PS4Pro added hw for it - but that would only benefit 5-10% of your userbase).
Now - the best case for optimization on PSVR lens (assuming we are targeting no-quality loss compared to brute-force/naive render) was somewhere around 2.2x. The different techniques landed all over the place (on the same demo-level scenario) from 1.2 to close to 2.0. All of them were doing the same kind of optimization - ie. FFR, but savings were variable for the specific test-case.
On a retail title I worked on, we used one of said techniques, as it fit reasonably well with the rendering pipeline we were working with. Contrary to Sony's own benchmark (which put it second to last in terms of raw performance increase), we were getting approximately 1.8-2.x performance on average, so very different results. Not because there were attempts at misleading - the types of content rendered, combined with the specifics of the rendering pipeline, simply yielded different returns.
TL;DR - sweeping generalizations about graphics optimizations are more often wrong than not - we're really looking at a statistical spectrum with any of these.

My personal view on ETFR itself is that we're still in very immature stages when it comes to production codebases (and that's where the impact is actually measured for the end user), and we're up against 30 years of rendering pipeline evolution that went in a different direction - and that's just the software bit. And frankly, we still struggle with many of the basics in VR pipelines, so I never expected we'd be getting big returns early on with this either.
And on the hw front there may well be issues as well - admittedly I've not really followed closely what current hw is actually capable of vs. the theoretical limits of where we want it to be.
But to the point - academic research precedent shows the theoretical best case is orders of magnitude removed from just rendering the scene statically - but getting the software and hardware to that point may well be far in the future.

I appreciate your input. And I agree with everything you say within that particular silo. Where my interest lies is more in a general or average space, so to speak. All I wanted to say is a couple of pretty simple facts:

1. The performance gain of FFR and ETFR "as an average across headsets": relatively high.
2. The performance difference between FFR and ETFR "as an average across headsets": relatively low.

What sparked this "discussion" for me was that there seemed to be a notion that headsets without ETFR are useless. Which is an utterly ridiculous thing to say. Of course Sony needed ET to utilize every single bit of performance, and there's absolutely nothing wrong with that; they did good. The PSVR2 is actually a very good and solid headset.

Bespoke configurations are another matter, and you've addressed that nicely.

Oh and btw; I didn't mean "exclusive" per se. I meant the notion that ETFR is exclusively the only way you can gain performance.

And, Meta did take image quality into consideration. This is mentioned in detail in their SDK documentation.


Edit: I just realized this went off topic.. I'm just gonna see myself out of this thread now.. Sorry.
 