lukilladog
Member
That is not how rendering and AI "fill-in" works.....
There are other ways to use AI other than fill-in.
There's innuendo here somewhere.
Soo, ray tracing they said. Let's talk about "realism", shall we? (And I don't mean Pixar stuff rendered by a freaking render farm, but NV's tech demos.) Corners:
If you need to be reminded what they look like, welp, here's a real photo:
You can read what's going on in this wonderful blog post:
Now, let's move to "full RT", shall we? Let's be generous: Quake.
It takes a 2060 about 20 seconds to generate a decent-quality frame.
So how do they render it in a fraction of a second? Meet Green RT Fakery:
1) Temporal denoiser + blur
This is based on previous-frame data. Textures are turned off here, so the only image you're seeing is what's actually raytraced. The top image was taken within a few frames of me moving the camera; the bottom image was the desired final result, which took 3-5 seconds to 'fade' in as the temporal denoiser had more previous frames to work from. Since you are usually moving when you're actually playing a game, the typical image quality of the entire experience is this 'dark smear': a laggy, splotchy mess that visibly runs at a fraction of your framerate. It's genuinely amazing how close to a useful image it's generating in under half a second, but we're still a couple of orders of magnitude too slow to replace baked shadowmaps for full GI.
2) Resolution hacks and intelligent sampling zones to draw your eye to shiny things at the cost of detail accuracy (think of it as a crude VRS for DXR)
Here's an image from the same room, zoomed in a lot, and the part of the image I took it from for reference:
A - rendered at 1/4 resolution
B - transparency; this is a reflection on water, an old-school 1995 DirectX 3.0 dither hack rather than real transparency calculations
C - the actual resolution of traced rays. Each bright dot in region C is a ray that has been traced in just 4-bit chroma, and all the dark space is essentially guesswork/temporal patterns tiled and rotated based on the frequency of those ray hits. If you go and look at a poorly-lit corner of the room you can clearly see the repeated tiling of these 'best guess' dot patterns, and they have nothing to do with the noisier, more random bright specks that are the individual ray samples.
So, combine those two things together. First, we have very low ray density, which is used as the basis for region definitions that can then be approximated per frame using a library of tile-based approximations. Those aren't real raytracing, just more fakery that's stamped out as a best guess based on the very low ray coverage for that geometry region. If I had to pick a rough ballpark figure, I'd probably say that 3% of the frame data in that last image is raytraced samples and 97% of it is faked interpolation between regions and potato-stamped to fill in the gaps with an approximation. This works fine as long as you just want an approximation, because the human brain does great work filling in the gaps, especially when it's all in motion. Anyway, once it's tile-stamped a best-guess frame together out of those few ray samples, each of those barely-raytraced frames is blurred together in a buffer over the course of several hundred frames. There will be visual artifacts like in my first point anywhere you have new data on screen, because temporal filtering of on-screen data means that anything that has appeared from offscreen is a very low-resolution, mostly fake mess for the first few dozen frames.
By Crispy
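For anyone who wants to poke at the two mechanisms Crispy describes, here's a toy sketch (Python/NumPy; the scene, ray budget, and blend factor are made-up illustration values, not anything from an actual RTX denoiser): trace only ~3% of pixels per frame, fill the gaps with a crude guess, and blend frames together with an exponential moving average. The error only falls off over many accumulated frames, which is the 'fade-in' behaviour described above.

```python
import numpy as np

rng = np.random.default_rng(0)

H, W = 64, 64
RAY_BUDGET = int(H * W * 0.03)   # ~3% of pixels get a real traced sample per frame
ALPHA = 0.1                      # EMA blend: 10% new frame, 90% history

def ground_truth():
    """Stand-in for a fully converged render (a smooth gradient 'scene')."""
    y, x = np.mgrid[0:H, 0:W]
    return (np.sin(x * 0.2) * np.cos(y * 0.2) + 1) / 2

def render_frame(history, truth):
    # 1) sparse sampling: trace only a random subset of pixels this frame
    idx = rng.choice(H * W, RAY_BUDGET, replace=False)
    sparse = np.full(H * W, np.nan)
    sparse[idx] = truth.ravel()[idx]
    sparse = sparse.reshape(H, W)
    # 2) gap fill: a crude "best guess" (frame mean) where no ray landed;
    #    real denoisers use spatial filters / tiled patterns instead
    guess = np.where(np.isnan(sparse), np.nanmean(sparse), sparse)
    # 3) temporal accumulation: blend the guess into the history buffer
    return (1 - ALPHA) * history + ALPHA * guess

truth = ground_truth()
history = np.zeros((H, W))       # a freshly revealed region starts from nothing
errors = []
for _ in range(100):
    history = render_frame(history, truth)
    errors.append(float(np.abs(history - truth).mean()))
# errors[] shrinks frame over frame: the image 'fades in' as history accumulates
```

Note the toy converges to a blurry approximation rather than the ground truth, which is exactly the complaint: the gap fill is a guess, and only the few traced pixels ever carry real information into the blend.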
And what exactly is your point? Nvidia was always upfront about how they do it and what compromises are made to make it possible in real time. You basically wasted your time here repeating Nvidia's initial RTX presentation, adding your personal dramatization and not much else.... The point stands that Nvidia's denoised RT is still a hell of a lot more realistic than anything else in classic real-time rendering.
Yes, you are right. I was explaining the effect in UE4 and modern engines which render to texture.
The big thing (for me) is from an implementation point of view: it's far easier to reason about a ray tracer than about the whole rasterization pipeline. It's also kind of the canonical way one would naively think about rendering, because it's close to our light model(s) in physics (except for the big difference that the viewpoint shoots the rays and not the light sources, I guess). I just find the theory of how ray tracing works much more elegant.
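That "viewpoint shoots the rays" model is small enough to write down in full. Below is a minimal backward ray tracer sketch in plain Python (one hard-coded sphere, one light, Lambert shading; every name and number is invented for illustration): rays start at the eye and travel through each pixel, the exact opposite of how photons travel, which is the inversion mentioned above.

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def ray_sphere(origin, direction, center, radius):
    """Return distance t to the nearest hit along the ray, or None on a miss."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * c              # a == 1 since the direction is normalized
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2    # nearest intersection
    return t if t > 0 else None

WIDTH = HEIGHT = 32
CENTER, RADIUS = (0.0, 0.0, -3.0), 1.0
LIGHT_DIR = normalize([1.0, 1.0, 1.0])    # direction towards the light

image = []
for py in range(HEIGHT):
    row = []
    for px in range(WIDTH):
        # backward tracing: the ray starts at the eye, not at the light source
        x = 2 * (px + 0.5) / WIDTH - 1
        y = 1 - 2 * (py + 0.5) / HEIGHT
        direction = normalize([x, y, -1.0])
        t = ray_sphere((0.0, 0.0, 0.0), direction, CENTER, RADIUS)
        if t is None:
            row.append(0.0)               # background: the ray hit nothing
        else:
            hit = [t * d for d in direction]
            normal = normalize([h - c for h, c in zip(hit, CENTER)])
            # Lambert: brightness ~ cosine of angle between normal and light
            row.append(max(0.0, sum(n * l for n, l in zip(normal, LIGHT_DIR))))
    image.append(row)
```

Everything a "real" tracer adds (shadow rays, bounces, materials) hangs off this same loop, which is why the mental model stays so clean compared to a rasterizer.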
I understand way too little of the tech. From what I gather, apparently the whole tech is still in early stages and the big difference should come when it starts being used more widely for global illumination or something.
The way it is now though, mostly focusing on reflections, it just seems like a way to tank performance for negligible upgrades.
"Of course not, it's fake. But like other techniques, fakery can improve too."
As RT improves and fake solutions become more robust, that gap closes. Fully path-traced stuff is probably still a ways off for modern games, but you can do a lot with the kind of power we see in Ampere.
I just don't see the cost-value in it. I would rather have an improved, inexpensive fake solution than real RT.
First game to offer "true" portals was probably Prey (2006), another game from 3D Realms that was decent and didn't get enough recognition.
Leave it to Disney to explain things to people like me who are into learning how things work but need it laid out simply enough that even a kid could understand...
Whereas, we're talking about familiar techniques of fakery doing so much impressive work already in simulating the things a game wants to show us, and those are all cool, but... I mean, I don't get them at all? Three gens in, I still don't really know much about "Deferred Rendering" or "Forward Rendering", or could explain what a "Cube-Map Reflection" is, or get at all how the heck we call something a "Grass Shader" or "Fur Shader" (isn't shade just an absence of light? why is everything now called shaders? how does it make sense that there's such a thing as a "sound shader"? why are bump-maps a "texture" but fur is a "shader"? I'm forever confused...)
With Raytracing, I pretty much instantly get it. It's light, the way light works.
But also, I get why it's so hard, because its complexity, and its need to resolve thousands and millions and billions of bouncing beams to "replicate" reality, seems really big for even really powerful computers. And I get that maybe I won't be that satisfied, or even impressed, by RT this gen. Or next gen, or who knows what it'll take to make raytracing worth its while. Maybe we'll get great things this gen, but then again, from the demos so far, maybe I should temper my expectations. So we'll see what the big deal of raytracing amounts to over the course of the generation, but I at least get why it's the direction to go as games evolve.
Yeah, it's been a problem that the showcases have largely been very loud demonstrations of relatively negligible things. "In our game, we use RAYTRACING ... just look at the dome on this robot, you can see the street lamp reflected in realtime on his dome - AMAZING!" "With our next-gen engine equipped with RAYTRACING, when you walk past a mirror in stage 3.1 and also stage 6.3, you can see your character accurately reflected back at you - NEXT-GEN!!" "In this neon-lit world, when the day turns to night, and the streets get kind of wet for some reason, you can see streetsigns reflected on the roads, even when you're going 185MPH and can't read them - forget about every game that's come before, because we can do RAYTRACING!"
...All that stuff can be really cool if you're one to geek out over it, but it's not going to sell you on hardware or software. And it's not going to be worth a tanked framerate if that's how it goes down. Developers are interested in over-demonstrating raytracing so that you can see it in action, but in the end, raytracing and GI and other new visual techniques will serve a more subtle purpose.
Ok so I see this now and I understand with the Disney video. I guess what I'm coming to is that this technology for video games still seems to be in early, early stages - like we're still years and years away from path tracing being the go-to method of doing things in a game with already-high-fidelity visuals - basically, the true game-changing benefits of it are not gonna be felt for a while. Am I correct in assuming that?
"And what exactly is your point?"
Note this part in particular: "to pick a rough ballpark figure, I'd probably say that 3% of the frame data in that last image is raytraced samples and 97% of it is faked interpolation between regions and potato-stamped to fill in the gaps with an approximation."
"Nvidia was always upfront about how they do it"
No, they haven't. E.g. the marble demo is presented as "true, full-frame RT".
Heck, I remember when PlayStation 2 was coming out, and there was talk of it using raytracing and NURBS and all these other technical elements, just like Toy Story in realtime! (There were some similarly outlandish mentions of tech like this when the "Ultra 64" was being shown off with industrial SGI demo videos.) Of course, every console eventually finds its level, and the early tech talk broke down to a much more reasonable reality, but sometimes the strange tech talk really does become reality later.
So, we'll see what PS5/Series X raytracing actually amounts to. It's not that this is technology in the early, early stages (because obviously, RT has been around, and long before it was realtime, it was a big part of how Pixar made CGI worthy of big-screen entertainment). The situation with in-game RT, I believe, is more that the tools to harness it and the chip features to make use of it are still not mature enough to really know what you can and can't do, even if developers understand the math involved and have hardware built with it in mind. (Remember that Epic Games built Unreal Engine 4 around something called Sparse Voxel Octree Global Illumination, which is... well, I can't actually explain GI well, but raytracing/pathtracing can be involved to make for realtime global illumination. Up until very late in promoting UE4, realtime SVOGI was an anticipated feature, yet it was dropped in favor of more reasonable precomputed techniques; solutions for CryEngine and UE and other engines did eventually bring GI through in some capacity, and now even the Nintendo Switch version of Crysis has SVOGI in a limited capacity.)
You don't have to think of it as a pot of gold at the end of a distant rainbow, however. Raytracing is being integrated into shipping products playable come November. What it is and what it is not on next-gen consoles remains to be proved, but it will be in there, and it already plays out in some capacity on a box you own right now. What game, if any, will be the "AH HA!!" demonstration of this technology? Who knows. But raytracing is not just the next-gen hype term for some marbles you can never play with.
The new Nvidia marbles demo should be a good eye opener for anyone wanting to see ray tracing at its best. It looks so close to real it’s quite incredible.
"Note this part in particular: 'to pick a rough ballpark figure, I'd probably say that 3% of the frame data in that last image is raytraced samples and 97% of it is faked interpolation between regions and potato-stamped to fill in the gaps with an approximation.' And this is Quake. Now think about what that marble demo really is."
Much more accurate than anything normal real-time rendering could ever produce without needing to manually fake... everything... that's what it is.
"No, they haven't. E.g. the marble demo is presented as 'true, full-frame RT'. But it doesn't even matter what they are saying. The message here is that RTRT is lots of fakery one has to fiddle with to get acceptable results, and hence, there goes the 'realism' argument together with the 'ease of development' argument."
Since the denoising and the variable ray count are one package, nothing about that argument changes... the efforts that have to be invested are absolutely incomparable, and the realism is far above classic tech.
"The GPU/CPU simply needs a denoiser even in offline rendering."
Thanks for that bit of info. I'd have thought the offline render farms could actually do without one.
And yes, we are still a long way from having that in hardware at reasonable framerates.
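For context on why denoising shows up even on render farms: a path tracer's per-pixel noise is Monte Carlo variance, and it only falls with the square root of the sample count, so the last bit of cleanup is far cheaper to filter away than to brute-force. A toy sketch (pure Python; the "light path" estimator is a made-up stand-in, not a real renderer):

```python
import random
import statistics

random.seed(42)

def one_sample():
    """Stand-in for a single light-path contribution to a pixel (made up)."""
    return random.random() ** 2          # a noisy estimator with true mean 1/3

def pixel_estimate(n_rays):
    """Average n_rays samples, like a path tracer accumulating one pixel."""
    return sum(one_sample() for _ in range(n_rays)) / n_rays

def noise_level(n_rays, repeats=1000):
    """Std-dev of the pixel estimate across repeated renders."""
    return statistics.pstdev(pixel_estimate(n_rays) for _ in range(repeats))

noise_16 = noise_level(16)
noise_1024 = noise_level(1024)
# 64x more rays buys only ~8x less noise (1/sqrt(N) convergence):
ratio = noise_16 / noise_1024
```

Chasing the last bit of noise with raw rays is quadratically expensive, which is why even offline renderers bolt a denoiser onto the end instead of just tracing more.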
"Much more accurate than anything normal real time rendering could ever produce without needing to manually fake ...everything..., that's what it is."
Either you didn't read it, or didn't understand it; at the end of the day, 97% of Quake RTX is faked.
"...the efforts that have to be invested are absolutely incomparable..."
Citation needed.
"Either you didn't read it, or didn't understand it; at the end of the day, 97% of Quake RTX is faked."
Either you didn't read or you didn't understand.
"Besides actual hardware only being capable of producing wild noise, there are other problems, such as the actual effort needed to get reasonably good results not being static. Not only object complexity, but just changing the POV might get a game developer into trouble. And then, there goes that ethereal 'ease of development'."
Comparatively... no.
"the result is leaps and bounds beyond the classic alternative"
And we are comparing RT to 2 decades old techniques, because...?
"Comparatively... no."
Not sure what you mean.
"And we are comparing RT to 2 decades old techniques, because...?"
This thread is about real-time RT in games and what it brings to the table (compared to what was before)... what did you think it was about?
"Not sure what you mean."
Ever tried building a game level using your own assets without a supercomputer in your backyard? The hoops you have to jump through for every single object just to get anywhere near a somewhat believable light situation that can be rendered in real time are ridiculous.
I also call BS on NV's statement about the marble demo being "true RT, no fakes" rendering.
"Ever tried building a game level using your own assets without a supercomputer in your backyard? The hoops you have to jump through for every single object just to get anywhere near a somewhat believable light situation that can be rendered in real time are ridiculous."
I'm sure developers at Naughty Dog are still waking up screaming "that angle of light doesn't fit".......
"Someone explain me this: ok, RTX is easy, faster, and money-saving for devs, wonderful. But... aren't devs forced to still create their best fake lights for the majority of people without RTX GPUs? I mean, they have to create 2 completely different methods for lights/shadows/reflections, so even if one method is easy and fast, at the end of the day it's still more work for them compared to only doing the rasterized fake method... Am I missing something here??"
No you aren't. Right now that's exactly the situation at hand. All RTX implementations we see are in partnership with Nvidia, as in "implement this to show off and we pay you for this because it's advertisement for our product".
But since the next-gen consoles, the quasi game-development hardware standard, will have RT hardware on board, this is about to change.
2-3 years into this gen, most PC ports will have RT-capable hardware and an SSD in their min requirements.
"This thread is about real time rt in games and what this brings to the table (compared to what was before)......what did you think it was about?"
About comparing today's RT tech to today's "other techniques" that do not require RT hardware.
"I'm also not sure why you bring the UE5 demo into this, which uses a conglomerate of techniques including a cut-down 'one bounce tracing' form of software RT....."
Just as an alternative to Quake's "non-RT" techniques. Oh, and also because it was a UE5 demo, by the company that added RT support to even UE4, on hardware that supports RT, but it wasn't using it "for some reason". I thought figuring out that reason was very important in the context of this thread.
"Ever tried building a game level using your own assets without a supercomputer in your backyard?"
Nope. Although I'd expect game engines to address it somewhat.
It's nothing special. Shiny puddles of water and metallic surfaces.
I'd be more interested in advanced physics or AI than ray tracing.
"About comparing today's RT tech to today's 'other techniques' that do not require RT hardware."
That's what we've been doing the whole time......
"Just as an alternative to Quake's 'non-RT' techniques. Oh, and also because it was a UE5 demo, by the company that added RT support to even UE4, on hardware that supports RT, but it wasn't using it 'for some reason'. I thought figuring out that reason was very important in the context of this thread."
Performance, and alternatives that save performance by making compromises. Riddle solved.....
"Nope. Although I'd expect game engines to address it somewhat."
Unfortunately, no.
"Or maybe I don't understand what you are saying..."
This seems to be the case.
"UE5 isn't released yet, if you haven't noticed. And if you check how Lumen works in detail (as far as we know), you should realize why you don't have an argument there either."
I think it's your take on "RT" that is off.
"Performance, and alternatives that save performance by making compromises."
When even the major engine developer does not use it, why would game developers?
"I think it's your take on 'RT' that is off."
Do I seriously have to explain the difference between a fixed tech demo and a game? The performance difference between all-purpose silicon and specialized hardware? ....really? A single look at an RT benchmark with a 1080 Ti should seriously make you question why you ever thought this nonsense would make sense here.
Which reminds me of this demo, done without hardware RT, which runs just fine even on dated GPUs:
"When even the major engine developer does not use it, why would game developers?"
Why would someone like Epic, with a game made for absolute minimum requirements, add features for 0.1% of its user base if it wasn't for monetary incentives (enter Nvidia with their Ampere marketing budget)?
But the majority of people are still gonna have non-RT-capable GPUs in their PCs... and consoles will probably have a shoddy RT implementation; I'm not even sure that a PS6 or even a PS7 will sustain fully raytraced illumination in their games...
I was talking about an imaginary point in the future where EVERYONE has a powerful RT-capable GPU and devs can completely forget rasterized lights. Are we really that close to that moment? Because to me it looks far, far away...
Or maybe I don't understand what you are saying...
I think, for me, the global illumination part of ray tracing is what's interesting.
"A single look at an RT benchmark with a 1080 Ti..."
A single look at which RT benchmark?
"Why would someone like Epic, with a game made for absolute minimum requirements, add features for 0.1% of its user base..."
Because "now that next-gen consoles are coming, game developers will be using RT left and right", I was told. So the tech demo of something that comes to those consoles would absolutely need to include it, as countless game devs are interested in using it.
"State your claim, because the reason for denoising isn't a fake."
The claim that "denoising" (in reality there is a bunch of steps, if filling up a ~97% share of the screen can even be referred to as such) "is RT" is mind-boggling.
"Short answer no. We have a few generations. I'm not sure consoles will be around next go around."
So until that moment arrives, devs still have to create 2 different methods for lights, RTX and rasterized, correct?
That's a "you" problem.
"A single look at which RT benchmark?"
Whichever one you like that includes non-RTX hardware.... please don't start playing stupid now.
"Because 'now that next-gen consoles are coming, game developers will be using RT left and right', I was told. So the tech demo of something that comes to those consoles would absolutely need to include it, as countless game devs are interested in using it."
As already stated, one look at how Lumen operates immediately invalidates this "argument" of yours.... RT (even heavily denoised) will still be the most accurate option.
"The claim that 'denoising' (in reality there is a bunch of steps, if filling up a ~97% share of the screen can even be referred to as such) 'is RT' is mind-boggling."
Welcome to reality, where things have definitions that aren't up for discussion.