
Next-Gen PS5 & XSX |OT| Console tEch threaD


IntentionalPun

Ask me about my wife's perfect butthole
If you remove triangles, you're removing information, as well as detail. How can you not be?

They are talking about it in reference to displaying the image at 4K resolution, or at any number of pixels at any size, as the displayed object gets closer to or farther from the player. There are only ~8 million pixels at 4K; a model with 1 billion tris is still going to display with ~8 million pixels at 4K. They are scaling the model itself to those resolutions automatically (likely to the 20 million polygon model at first on import into the engine, then to smaller models depending on the actual display resolution plus distance from the player... because you CAN'T store a bunch of billion-poly models reasonably for a game, and it would be pointless to do so). The only actual DATA we see on the screen are the lit-up pixels. So from the standpoint of presentation, there is no loss in detail.

edit: to be clear, it generates >4K assets for disk storage, I think 8K, along with 8K textures, which it can also scale.
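A back-of-the-envelope version of what I mean, purely illustrative (the numbers and function names are made up, not Epic's actual heuristic):

#include <algorithm>
#include <cstdio>

// Purely illustrative: pick a triangle budget for a model so the drawn
// triangle count roughly tracks the pixels the model covers on screen.
// Not Epic's actual heuristic, just the arithmetic of the argument.
long long TargetTriangles(double coveredPixels, long long sourceTris)
{
    // Aim for ~1 triangle per covered pixel; never exceed the source mesh.
    return std::min<long long>(sourceTris, (long long)coveredPixels);
}

int main()
{
    const long long sourceTris = 1'000'000'000;   // "billion poly" source asset
    const double screenPixels  = 3840.0 * 2160.0; // ~8.3M pixels at 4K

    // A model filling the whole screen vs. a distant one covering 10k pixels.
    std::printf("full screen: %lld tris\n", TargetTriangles(screenPixels, sourceTris));
    std::printf("distant:     %lld tris\n", TargetTriangles(10'000, sourceTris));
}

At full-screen coverage that budget tops out around the ~8 million pixels of a 4K frame, which is why the billion-triangle source never needs to be drawn in full.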
 
IntentionalPun, TheThreadsThatBindUs: I think we can blame Epic for improperly using the term lossless in this context. It was not said in reference to compression AT ALL.

When Epic said "Nanite crunches down billions of polygons worth of source geometry losslessly to 20m drawn triangles," they are saying the end result is lossless. Like if I handed you two images, one with the source material, one from Nanite, you'd say it's the same fucking image.

I agree it was not proper use of the term, but it couldn't mean anything else; no other explanation makes sense. Of course Epic isn't altering the source asset each time a LOD is created for a frame; as TheThreadsThatBindUs just said, it is the reference from which all LODs of that asset are produced. That wouldn't be called lossless either, but non-destructive editing, leaving the source intact. That's why I'm 99% sure they are just referring to the end result Nanite crunches being lossless, i.e. identical to the full-resolution asset.

I agree with you.

I think you guys both understand this and are just arguing semantics.

Not sure I can agree this is true, however.
 
They are talking about it in reference to displaying the image at 4K resolution, or at any number of pixels at any size, as the displayed object gets closer to or farther from the player. There are only ~8 million pixels at 4K; a model with 1 billion tris is still going to display with ~8 million pixels at 4K. They are scaling the model itself to those resolutions automatically. The only actual DATA we see on the screen are the lit-up pixels. So from the standpoint of presentation, there is no loss in detail.
On this point I agree. Had you said this all along, there would have been no argument.
 
Why can't you two take your fight to PMs? Trust me, no one gives a shit. I don't even know how to @ someone here. lol fml
I don't understand when people say this. It's a next-gen speculation thread, therefore next-gen speculation is what will be discussed, and there will be disagreements, just as there have been since this thread was created. Nothing new.

I also agree that this thread has been dry for months now, and several people on this thread always seem to discuss off-topic and extremely trivial issues, but no one seems to mind.
 

IntentionalPun

Ask me about my wife's perfect butthole
On this point I agree. Had you said this all along, there would have been no argument.
I referred to the concept multiple times in my responses to you, including the one just before this, and in my first response to you after I felt you were being insulting.

I said this: "it then scales that geometry to as close to the rendering resolution as possible, without any loss of detail, by scaling to a number of micro-polygons as close to the number of pixels as possible."

And your response was "You're parroting the quote from Epic without even thinking critically about what you're saying."

You were quoting my posts without thinking critically about what I was saying. You never bothered to ask what I meant by referring to the number of pixels, which I did several times, and I repeated the word "detail" several times. It's difficult to want to elaborate when someone is just insulting you and calling your ideas "weird" and "absurdly false" and saying "you don't know what you are talking about."

edit: Anyways, back on topic:

I'M EATING FISH FOR DINNER.

HA-HA.. it's funny cuz there's a poster with a fish avatar. sircaw ;)

But seriously, I'm having salmon.
 

DJ12

Member
Apparently there's a glitch in Control's Photo Mode on the PS5 that unlocks the FPS... Has anyone noticed it here? 👀
No, but I highly recommend you ask someone with a framerate analyser to make a video immediately, before the bug gets fixed.

My guess is a steady 33~35 FPS or 70 in performance mode.
 

IntentionalPun

Ask me about my wife's perfect butthole
No, but I highly recommend you ask someone with a framerate analyser to make a video immediately, before the bug gets fixed.

My guess is a steady 33~35 FPS or 70 in performance mode.
Isn't photo mode with the game on "pause" anyways?

Or is there another mode where you just hide HUD elements?
 

sircaw

Banned
I referred to the concept multiple times in my responses to you, including the one just before this, and in my first response to you after I felt you were being insulting.

I said this: "it then scales that geometry to as close to the rendering resolution as possible, without any loss of detail, by scaling to a number of micro-polygons as close to the number of pixels as possible."

And your response was "You're parroting the quote from Epic without even thinking critically about what you're saying."

You were quoting my posts without thinking critically about what I was saying. You never bothered to ask what I meant by referring to the number of pixels, which I did several times, and I repeated the word "detail" several times. It's difficult to want to elaborate when someone is just insulting you and calling your ideas "weird" and "absurdly false" and saying "you don't know what you are talking about."

edit: Anyways, back on topic:

I'M EATING FISH FOR DINNER.

HA-HA.. it's funny cuz there's a poster with a fish avatar. sircaw ;)

But seriously, I'm having salmon.
Dad never came home today, you bastard. :lollipop_disappointed:
 

PaintTinJr

Member
So this culling discussion...

I had originally written more, but I want to focus on two points: the lossless argument, and whether Nanite is considered culling.

From Epic: "Nanite crunches down billions of polygons worth of source geometry losslessly to 20m drawn triangles."

So Nanite is able to generate, on the fly, an infinite number of LODs for every object based on what the viewport requires for that specific frame. If an object is far enough from the camera that small details couldn't be seen, then they aren't drawn. THAT is what Epic means by lossless: the final frame wouldn't look any different if it were drawn with the full-quality assets instead of what Nanite crunched down, because the polygons are already as small as a pixel. The source geometry is also unchanged (although why would it be?), so I guess it's lossless in that sense too. In my world this is just called working non-destructively, where you preserve the original asset or image.

If you want to argue that what Nanite is doing is also considered culling, then I do kinda see the point. In a broad sense it's reducing geometric complexity to increase performance; same goals. However, what Nanite is doing is not replacing traditional frustum culling but rather working in concert with it. Nanite first crunches down the scene to determine the necessary polygons for the frame, then the traditional culling methods are applied once the scene is built. Nanite has to do its work first, creating all the unique LODs to determine the polygons for that frame, before the culling pass.

So by a broad definition, Nanite is culling the original assets on the fly. It is not, however, replacing or even performing traditional culling of back-facing, obstructed, or off-screen geometry.
It is certainly intriguing IMHO, given how magnificent the UE5 demo looked.

What if, rather than 1 frustum and many pixels, they are using 1 frustum for every 3x3 pixels, and transforming the frusta (frustums) into model space instead of models into world space? Would something like that work? Theoretically, I suspect, you could prepare the model data for being intersected/indexed/culled by such frusta, and then the 3x3 pixels would be blended to provide 1 pixel at each screen pixel, averaged from the 8 boundary pixels and the centre pixel, but even that doesn't sound like their ~4 polygons per pixel claim. (There's a rough sketch of what I mean below.)

I don't know; it is probably something really simple and complementary to existing rendering, but visually it looked so good that I'm still sort of expecting something really different in how the pipeline works.
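Something like this is what I'm picturing for the 3x3 blend, as pure speculation on my part rather than anything Epic has confirmed:

#include <array>
#include <cstdio>

// Pure speculation, not Epic's pipeline: one tiny frustum per screen pixel,
// each covering a 3x3 block of samples that overlaps its neighbours by one
// sample. The 9 samples are then blended down to a single screen pixel.
struct Color { float r, g, b; };

Color BlendTile(const std::array<Color, 9>& tile)
{
    // Average the centre sample with its 8 boundary samples.
    Color out{0, 0, 0};
    for (const Color& c : tile) {
        out.r += c.r; out.g += c.g; out.b += c.b;
    }
    out.r /= 9; out.g /= 9; out.b /= 9;
    return out;
}

int main()
{
    std::array<Color, 9> tile;
    tile.fill(Color{0.5f, 0.25f, 0.75f});
    tile[4] = Color{1.0f, 1.0f, 1.0f}; // centre sample
    Color px = BlendTile(tile);
    std::printf("blended pixel: %.3f %.3f %.3f\n", px.r, px.g, px.b);
}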
 

Elog

Member
On the culling and lossless discussion: we lack detail from Epic on exactly how the new UE works, but assuming we take their word as gospel, they state the following:

- Normally, culling is done before rasterization, i.e. the engine makes assumptions about which objects are hidden and which objects are far away, in order to save rendering capacity. These assumptions are of course for the most part correct, but there will be mathematical errors in this calculation, so some objects that should have been rendered are not. In other words, there is a loss in perfect image quality from this process.

- With Nanite, the engine seems to have some sort of pre-rasterization step over only the geometry, so the engine actually knows, with no assumptions, which objects will not be shown at all in the final rasterization step and which objects are so small they will be represented by single pixels. This results in lossless culling, since the match is 100% compared to the traditional culling step. This is also one of the core philosophies behind the GE hardware in the PS5 (so it makes perfect sense why Sony and Epic have collaborated on this).

I believe this is what 'lossless' refers to in this context. It is not about the loss of geometry, but about the loss of mismatches between the culling and the final rasterized image.
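In code, my reading of that pre-rasterization step would be something like this minimal sketch (an assumption on my part, not confirmed by Epic; the cluster/buffer names are hypothetical):

#include <cstdint>
#include <cstdio>
#include <unordered_set>
#include <vector>

// Hypothetical sketch: a geometry-only pre-pass records which cluster of
// triangles actually wins the depth test at each pixel. The shading pass
// then only touches clusters that provably contribute to the final image,
// so the culling matches the final frame 100% instead of relying on
// conservative estimates.
struct PixelSample { uint32_t clusterId; float depth; };

std::unordered_set<uint32_t> VisibleClusters(const std::vector<PixelSample>& idBuffer)
{
    std::unordered_set<uint32_t> visible;
    for (const PixelSample& s : idBuffer)
        visible.insert(s.clusterId); // only clusters that won a pixel survive
    return visible;
}

int main()
{
    // Toy 4-pixel "frame": clusters 7 and 42 won pixels; anything else is culled.
    std::vector<PixelSample> idBuffer = {{7, 0.1f}, {7, 0.2f}, {42, 0.5f}, {42, 0.9f}};
    std::printf("visible clusters: %zu\n", VisibleClusters(idBuffer).size());
}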
 
On the culling and lossless discussion: we lack detail from Epic on exactly how the new UE works, but assuming we take their word as gospel, they state the following:

- Normally, culling is done before rasterization, i.e. the engine makes assumptions about which objects are hidden and which objects are far away, in order to save rendering capacity. These assumptions are of course for the most part correct, but there will be mathematical errors in this calculation, so some objects that should have been rendered are not. In other words, there is a loss in perfect image quality from this process.

- With Nanite, the engine seems to have some sort of pre-rasterization step over only the geometry, so the engine actually knows, with no assumptions, which objects will not be shown at all in the final rasterization step and which objects are so small they will be represented by single pixels. This results in lossless culling, since the match is 100% compared to the traditional culling step. This is also one of the core philosophies behind the GE hardware in the PS5 (so it makes perfect sense why Sony and Epic have collaborated on this).

I believe this is what 'lossless' refers to in this context. It is not about the loss of geometry, but about the loss of mismatches between the culling and the final rasterized image.

This makes a lot of sense. Thanks for the clarification and the insight.
 

Zoro7

Banned
I don't understand when people say this. It's a next-gen speculation thread, therefore next-gen speculation is what will be discussed, and there will be disagreements, just as there have been since this thread was created. Nothing new.

I also agree that this thread has been dry for months now, and several people on this thread always seem to discuss off-topic and extremely trivial issues, but no one seems to mind.
Calm down mate it might not happen.
 

LucidFlux

Member
It is certainly intriguing IMHO, given how magnificent the UE5 demo looked.

What if, rather than 1 frustum and many pixels, they are using 1 frustum for every 3x3 pixels, and transforming the frusta (frustums) into model space instead of models into world space? Would something like that work? Theoretically, I suspect, you could prepare the model data for being intersected/indexed/culled by such frusta, and then the 3x3 pixels would be blended to provide 1 pixel at each screen pixel, averaged from the 8 boundary pixels and the centre pixel, but even that doesn't sound like their ~4 polygons per pixel claim.

I don't know; it is probably something really simple and complementary to existing rendering, but visually it looked so good that I'm still sort of expecting something really different in how the pipeline works.

I'm not sure I follow. I don't believe having multiple frustums even makes sense. Now, it's been almost a decade since I worked in any game engine, but the frustum is simply what is currently viewable on screen, as dictated by the scene's "camera". Nanite's algorithm would make its calculations from this single frustum, because that's the point of view the player sees these assets from.
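For anyone following along, a single camera frustum in code is just six planes pulled out of the combined view-projection matrix, via the standard Gribb/Hartmann extraction (nothing Nanite-specific; the matrix convention is stated in the comments):

#include <array>
#include <cstdio>

// Standard Gribb/Hartmann plane extraction. Assumes m is the combined
// view-projection matrix, stored row-major as m[row][col] and applied to
// column vectors (clip = M * v). Each plane is (a, b, c, d).
struct Plane { float a, b, c, d; };

std::array<Plane, 6> ExtractFrustumPlanes(const float m[4][4])
{
    auto plane = [&](int row, float sign) {
        return Plane{ m[3][0] + sign * m[row][0],
                      m[3][1] + sign * m[row][1],
                      m[3][2] + sign * m[row][2],
                      m[3][3] + sign * m[row][3] };
    };
    return {{ plane(0, +1.0f), plane(0, -1.0f),    // left, right
              plane(1, +1.0f), plane(1, -1.0f),    // bottom, top
              plane(2, +1.0f), plane(2, -1.0f) }}; // near, far
}

int main()
{
    const float identity[4][4] = {{1,0,0,0},{0,1,0,0},{0,0,1,0},{0,0,0,1}};
    auto planes = ExtractFrustumPlanes(identity);
    std::printf("left plane: %.0f %.0f %.0f %.0f\n",
                planes[0].a, planes[0].b, planes[0].c, planes[0].d);
}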
 
You should tell that to them, although I doubt they'll even listen. They seem to think that the fridge meme has some sort of significance now, so they're just pushing things even further when it's not even necessary.

Not all fridge memes are positive though.

feel fridge GIF


I think it's getting a bit old at this point. They should try something else. Heck, even Craig is starting to wear thin.
 

LucidFlux

Member
On the culling and lossless discussion: we lack detail from Epic on exactly how the new UE works, but assuming we take their word as gospel, they state the following:

- Normally, culling is done before rasterization, i.e. the engine makes assumptions about which objects are hidden and which objects are far away, in order to save rendering capacity. These assumptions are of course for the most part correct, but there will be mathematical errors in this calculation, so some objects that should have been rendered are not. In other words, there is a loss in perfect image quality from this process.

- With Nanite, the engine seems to have some sort of pre-rasterization step over only the geometry, so the engine actually knows, with no assumptions, which objects will not be shown at all in the final rasterization step and which objects are so small they will be represented by single pixels. This results in lossless culling, since the match is 100% compared to the traditional culling step. This is also one of the core philosophies behind the GE hardware in the PS5 (so it makes perfect sense why Sony and Epic have collaborated on this).

I believe this is what 'lossless' refers to in this context. It is not about the loss of geometry, but about the loss of mismatches between the culling and the final rasterized image.

Nailed it. But wait, there's more: in addition to geometry, Epic claims Nanite can do this with textures as well (according to the demo). This goes a step beyond DX partially resident textures, where textures are chunked up and the higher-resolution portions are only called into specific tiles as needed.

But Nanite can essentially draw LODs for textures from the full-resolution source texture; I imagine this is why the memory footprint can be so light. You're literally only loading the necessary texture data, down to the pixel.
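A rough sketch of that general idea, as virtual texturing is commonly described rather than Epic's actual code (mip selection only; tile residency bookkeeping omitted):

#include <algorithm>
#include <cmath>
#include <cstdio>

// Common virtual-texturing idea, not Epic's code: pick the mip level from
// how many texels a pixel actually covers, so only the tiles touched at
// that mip ever need to be resident in memory.
int MipForPixel(double texelsPerPixel, int mipCount)
{
    int mip = (int)std::floor(std::log2(std::max(1.0, texelsPerPixel)));
    return std::clamp(mip, 0, mipCount - 1);
}

int main()
{
    // Up close (1 texel per pixel) -> mip 0; far away (64 texels) -> mip 6.
    std::printf("near: mip %d\n", MipForPixel(1.0, 14));  // 8K texture ~ 14 mips
    std::printf("far:  mip %d\n", MipForPixel(64.0, 14));
}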
 
Nailed it. But wait, there's more: in addition to geometry, Epic claims Nanite can do this with textures as well (according to the demo). This goes a step beyond DX partially resident textures, where textures are chunked up and the higher-resolution portions are only called into specific tiles as needed.

But Nanite can essentially draw LODs for textures from the full-resolution source texture; I imagine this is why the memory footprint can be so light. You're literally only loading the necessary texture data, down to the pixel.
This is admittedly pretty darn rad.

I'm curious to understand exactly how they're doing this.
 

PaintTinJr

Member
I'm not sure I follow. I don't believe having multiple frustums even makes sense. Now, it's been almost a decade since I worked in any game engine, but the frustum is simply what is currently viewable on screen, as dictated by the scene's "camera". Nanite's algorithm would make its calculations from this single frustum, because that's the point of view the player sees these assets from.
I mean that you represent that single frustum as many partly overlapping frusta (one per screen pixel, but each occupying 3x3 pixels, so each has a 1-pixel overlap with each of the 8 surrounding frusta).

I was then thinking they could maybe flip the process, because I suspect that the total number of model bounding boxes tested per pixel, after space-partitioning acceleration, is a far smaller problem than millions of polygons per model being redundantly processed at such polygon, model, and resolution pixel counts.

So, rather than having 1 frustum working with many models, with increasing redundant processing, you could reverse the process and move the clipping frusta into each model's own space (there's a rough sketch of the plane transform at the end of this post). That then allows each frustum to bound just the needed geometry and project to a tiny fixed-resolution buffer, which would automatically infer a polygon LOD level that maintains 4 polygons per pixel. By contrast, a single model LOD level across all the pixels an instance renders to forces a trade-off when some of those pixels are close and some are far: split the workload, or render everything at the higher or the lower LOD level, AFAIK.

But obviously this is just me trying to think through how they did it, and it is likely flawed in some obvious ways.

Edit:
AFAIK, both Carmack and Sweeney talked in the 2000s about graphics programming returning to being more generalised, like it was when they started out writing software renderers that ran on the CPU. And given the low rendering cost Epic claimed for the UE5 demo, it seems like async compute and/or a versatile geometry engine, along with the IO complex doing low-latency data lookup/streaming, was used heavily for Nanite, instead of a traditional fixed-path vertex/rasterization pipeline with lots of static data in VRAM. And where my hypothetical frusta idea, instead of a single frustum, wouldn't work in the traditional model, because the state changes and draw call count would multiply by the frustum count, on async compute or a more general-purpose geometry engine this type of workload could presumably be just as applicable as any other, so long as the model data could be accessed with low latency at a granular level.
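To make the "frusta into model space" part concrete, here is a minimal sketch of the plane transform, assuming points map model-to-world as p_w = M * p_m; again, this is just my speculation made concrete, not Epic's implementation:

#include <cstdio>

// Speculative sketch: rather than transforming millions of vertices into
// world space to test against the frustum, transform the six frustum planes
// once into the model's own space and cull there. If points map
// model->world as p_w = M * p_m, a world-space plane n maps to model space
// as transpose(M) * n.
struct Plane { float a, b, c, d; };

Plane WorldPlaneToModelSpace(const Plane& n, const float M[4][4])
{
    return Plane{
        M[0][0]*n.a + M[1][0]*n.b + M[2][0]*n.c + M[3][0]*n.d,
        M[0][1]*n.a + M[1][1]*n.b + M[2][1]*n.c + M[3][1]*n.d,
        M[0][2]*n.a + M[1][2]*n.b + M[2][2]*n.c + M[3][2]*n.d,
        M[0][3]*n.a + M[1][3]*n.b + M[2][3]*n.c + M[3][3]*n.d };
}

int main()
{
    const float identity[4][4] = {{1,0,0,0},{0,1,0,0},{0,0,1,0},{0,0,0,1}};
    Plane left{1, 0, 0, 5}; // world-space "left" plane
    Plane m = WorldPlaneToModelSpace(left, identity);
    std::printf("model-space plane: %.1f %.1f %.1f %.1f\n", m.a, m.b, m.c, m.d);
}

The win, if it worked, is that the plane transform happens once per model per frustum instead of once per vertex.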
 

DJ12

Member
No, but I highly recommend you ask someone with a framerate analyser to make a video immediately, before the bug gets fixed.

My guess is a steady 33~35 FPS or 70 in performance mode.
Turns out the "Dictator" has known about this for half a day and not posted any further about it.

My guess: it upsets his PC performance schtick, so he's waiting for it to be patched out so he can say it doesn't exist.
 

Tchu-Espresso

likes mayo on everthing and can't dance
No, but I highly recommend you ask someone with a framerate analyser to make a video immediately, before the bug gets fixed.

My guess is a steady 33~35 FPS or 70 in performance mode.
It looks quite a bit smoother than that in quality mode to me, although it depends on the scene.
 

DeepEnigma

Gold Member
Don't get me wrong, I enjoyed Hellblade, but it was most impressive for its audio. The gameplay wasn't exactly on ND's level. I never played Enslaved, though. DmC gets crap, but I thought the gameplay was good.
Enslaved was really good. Alex Garland was involved, Andy Serkis, etc. If you ever get a chance to check it out, I recommend it. It had some Uncharted inspiration as well.
 

SlimySnake

Flashless at the Golden Globes
They were on a good path with Enslaved.
Nah, Enslaved was a massive downgrade from Heavenly Sword. Maybe the characters and story were better, but the combat was bare-bones and the setpieces were a big step down from the awesome cinematic boss fights in Heavenly Sword.

It was a great first entry in the franchise, and it sucks that Sony didn't give them a second chance like they did Uncharted.
 

SlimySnake

Flashless at the Golden Globes
Q3 earnings announcement at 2am EST.

I hope we get some numbers. I have a feeling they haven't outsold the PS4's 4.1 million sell-through by January. I wouldn't be surprised if those Bloomberg articles were right and the reason for the shortages is bad yields.

P.S. Sony only hides numbers if they are bad, so let's hope we see them tomorrow.
 