
I don’t get the ray tracing thing

llien

Member
Soo, ray tracing they said. Let's talk about "realism", shall we? (And I don't mean Pixar stuff rendered by a freaking render farm, but NV's tech demo.) Corners:

hCo0iv7.png


if you need to be reminded of how they look, welp, a real photo:

JxAYkuJ.png


you can read what's going on in this wonderful blog post:



Now, let's move to "full RT", shall we? Let's be generous: Quake.

it takes a 2060 about 20 seconds to generate a decent-quality frame.
So how do they render it in a fraction of a second? Meet Green RT Fakery:
1) Temporal denoiser + blur
This is based on previous-frame data, so here the textures are turned off and the only image you're seeing is what's raytraced. The top image was taken within a few frames of me moving the camera; the bottom image is the desired final result, which took 3-5 seconds to 'fade' in as the temporal denoiser had more previous frames to work from. Since you are usually moving when you're actually playing a game, the typical image quality of the entire experience is this 'dark smear': a laggy, splotchy mess that visibly runs at a fraction of your framerate. It's genuinely amazing how close to a useful image it's generating in under half a second, but we're still a couple of orders of magnitude too slow to replace baked shadowmaps for full GI.
1xgEUDU.png

2) Resolution hacks and intelligent sampling zones to draw your eye to shiny things at the cost of detail accuracy (think of it as a crude VRS for DXR)
Here's an image from the same room, zoomed in a lot, and the part of the image I took it from for reference:
A - rendered at 1/4 resolution
B - transparency; this is a reflection on water, an old-school 1995 DirectX 3.0 dither hack rather than real transparency calculations
C - the actual resolution of traced rays. Each bright dot in region C is a ray that has been traced in just 4-bit chroma, and all the dark space is essentially guesswork: temporal patterns tiled and rotated based on the frequency of those ray hits. If you go and look at a poorly-lit corner of the room, you can clearly see the repeated tiling of these 'best guess' dot patterns, and they have nothing to do with the noisier, more random bright specks that are the individual ray samples.

85KG1Xo.png

r4LJppH.png
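(A toy sketch of the "intelligent sampling zones" idea from point 2 above - my own illustration, not the actual Quake II RTX scheduler: give bright/important tiles a larger per-frame ray budget, and leave dark tiles with a handful of rays whose gaps the denoiser has to fill.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in per-tile "importance" map, e.g. the luminance of last frame's
# result: shiny highlights score high, dark corners score low.
importance = rng.random((8, 8))

TOTAL_RAYS = 4096  # per-frame ray budget to spread across all tiles
budget = np.maximum(1, (TOTAL_RAYS * importance / importance.sum()).astype(int))

print(budget)        # bright tiles get hundreds of rays per frame...
print(budget.min())  # ...dark tiles may get only a few, so their pixels
                     # are mostly denoiser guesswork, as described above
```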


So, combine those two things. First, we have a very low ray density, which is used as the basis for region definitions that can then be approximated per frame using a library of tile-based approximations: not real raytracing, just more fakery that's stamped out as a best guess based on the very low ray coverage for that geometry region. If I had to pick a rough ballpark figure, I'd probably say that 3% of the frame data in that last image is raytraced samples and 97% of it is faked interpolation between regions, potato-stamped to fill in the gaps with an approximation. This works fine as long as you just want an approximation, because the human brain does great work filling in the gaps, especially when it's all in motion. Anyway, once it's tile-stamped a best-guess frame together out of those few ray samples, each of those barely-raytraced frames is blurred together in a buffer over the course of several hundred frames. There will be visual artifacts like in my first point anywhere you have new data on screen, because the temporal filter only has on-screen data to work with, so anything that has appeared from offscreen is a very low-resolution, mostly fake mess for the first few dozen frames.

By Crispy
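(To make Crispy's temporal-accumulation point concrete, here's a minimal sketch in Python - a toy illustration of the principle, not Quake II RTX's actual denoiser. Each new, extremely noisy frame is blended into a history buffer with an exponential moving average, which is why a still camera converges over a few seconds while freshly revealed pixels start out as a low-confidence smear.)

```python
import numpy as np

def accumulate(history, noisy_frame, alpha=0.05):
    # Exponential moving average: small alpha leans on past frames
    # (smooth but laggy), large alpha is responsive but noisy.
    return (1.0 - alpha) * history + alpha * noisy_frame

rng = np.random.default_rng(0)
truth = np.full((64, 64), 0.5)   # constant "ground truth" radiance
history = np.zeros_like(truth)   # fresh buffer, e.g. right after a camera cut

for frame in range(300):
    # One-ish sample per pixel => heavy per-frame Monte Carlo noise.
    noisy = truth + rng.normal(0.0, 0.3, truth.shape)
    history = accumulate(history, noisy)

print(abs(history - truth).mean())  # error shrinks as frames accumulate
```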
 

nemiroff

Gold Member
Soo, ray tracing they said. Let's talk about "realism", shall we? (And I don't mean Pixar stuff rendered by a freaking render farm, but NV's tech demo.) [...]

By Crispy

Jesus.. Are you ok..?
 
Soo, ray tracing they said. Let's talk about "realism", shall we? (And I don't mean Pixar stuff rendered by a freaking render farm, but NV's tech demo.) [...]

By Crispy
And what exactly is your point? Nvidia was always upfront about how they do it and what compromises are made to make it possible in real time. You basically wasted your time here repeating Nvidia's initial RTX presentation, adding your personal dramatization and not much else... The point stands that Nvidia's denoised RT is still a hell of a lot more realistic than anything else in classic real-time rendering.
 

CamHostage

Member
The big thing (for me) is from an implementation point of view: it's far easier to reason about a ray tracer than about the whole rasterization pipeline. It's also kind of the canonical way one would naively think about rendering, because it's close to our light model(s) in physics (except for the big difference that the viewpoint shoots the rays, not the light sources, I guess). I just find the theory of how ray tracing works much more elegant.

Leave it to Disney to explain things to people like me who are into learning how things work but need it laid out simply enough that even a kid could understand...


Whereas, we're talking about familiar techniques of fakery doing so much impressive work already in simulating the things a game wants to show us, and those are all cool, but... I mean, I don't understand them, like, at all? Three gens in, I still don't really know much about "Deferred Rendering" or "Forward Rendering", couldn't explain what a "Cube-Map Reflection" is, and don't get at all how the heck we call something a "Grass Shader" or "Fur Shader" (isn't shade just an absence of light? Why is everything now called a shader? How does it make sense that there's such a thing as a 'sound shader'? Why are bump maps a "texture" but fur a "shader"? I'm forever confused...)

With Raytracing, I pretty much instantly get it. It's light, the way light works.
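(For anyone who wants the "it's light, the way light works" idea in code: a minimal sketch of the core loop - a generic illustration, not any engine's implementation. Shoot one ray per pixel from the eye, intersect it with the scene, mark a hit.)

```python
import math

def ray_sphere(origin, direction, center, radius):
    # Return distance t to the nearest hit, or None if the ray misses.
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c  # direction is normalized, so the 'a' term is 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

# One ray per pixel, from the eye through an image plane at z = -1.
width, height = 40, 20
center, radius = (0.0, 0.0, -3.0), 1.0
for j in range(height):
    row = ""
    for i in range(width):
        x = (i + 0.5) / width * 2 - 1
        y = 1 - (j + 0.5) / height * 2
        inv_len = 1.0 / math.sqrt(x * x + y * y + 1.0)
        d = (x * inv_len, y * inv_len, -inv_len)  # normalized ray direction
        row += "#" if ray_sphere((0.0, 0.0, 0.0), d, center, radius) else "."
    print(row)
```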

But also, I get why it's so hard, because its complexity, and its need to resolve thousands and millions and billions of bouncing beams to "replicate" reality, seems really big for even really powerful computers. And I get that maybe I won't be that satisfied, or even impressed, by RT this gen. Or next gen; who knows what it'll take to make raytracing worth its while. Maybe we'll get great things this gen, but then again, from the demos so far, maybe I should temper my expectations. So we'll see what the big deal of raytracing amounts to over the course of the generation, but I at least get why it's the direction to go as games evolve.

I understand way too little of the tech. From what I gather, apparently the whole tech is still in early stages and the big difference should come when it starts being used more widely for global illumination or something.

The way it is now though, mostly focusing on reflections, it just seems like a way to tank performance for negligible upgrades.

Yeah, it's been a problem that the showcases have largely been very loud demonstrations of relatively negligible things. "In our game, we use RAYTRACING ... just look at the dome on this robot, you can see the street lamp reflected in realtime on his dome - AMAZING!" "With our next-gen engine equipped with RAYTRACING, when you walk past a mirror in stage 3.1 and also stage 6.3, you can see your character accurately reflected back at you - NEXT-GEN!!" "In this neon-lit world, when the day turns to night, and the streets get kind of wet for some reason, you can see streetsigns reflected on the roads, even when you're going 185MPH and can't read them - forget about every game that's come before, because we can do RAYTRACING!"

...All that stuff can be really cool if you're one to geek out over it, but it's not going to sell you on hardware or software. And it's not going to be worth a tanked framerate if that's how it goes down. Developers are interested in over-demonstrating raytracing so that you can see it in action, but in the end, raytracing and GI and other new visual techniques will serve a more subtle purpose.
 

SF Kosmo

Al Jazeera Special Reporter
Of course not, it's fake. But like other techniques, fakery can improve too.

I just don't see the cost-to-value in it. I would rather have an improved, inexpensive fake solution than real RT.
As RT improves and fake solutions become more robust, that gap closes. Fully path-traced stuff is still probably a ways off for modern games, but you can do a lot with the kind of power we see in Ampere.
 
The new Nvidia marbles demo should be a good eye opener for anyone wanting to see ray tracing at its best. It looks so close to real it’s quite incredible.
 

Keihart

Member
If the game has enough production behind it, RT does very little for the full presentation, since most effects are already achieved in cheaper ways, performance-wise. You can get better accuracy for shadows, multi-bounce dynamic lights, or reflections, but a lower-fidelity version of those is already achievable with tricks.
 
Leave it to Disney to explain things to people like me who are into learning how things work [...] raytracing and GI and other new visual techniques will serve a more subtle purpose.

Ok so I see this now, and I understand it with the Disney video. I guess what I'm coming to is that this technology for video games still seems to be in the early, early stages - like we're still years and years away from path tracing being the go-to method in games with already-high-fidelity visuals - basically, the true game-changing benefits of it aren't gonna be felt for a while. Am I correct in assuming that? I guess that's why it's confusing to me how much we talk about it, when it seems like we're not really that close to developing a scene the way the Disney video was showing.
 

CamHostage

Member
Ok so I see this now, and I understand it with the Disney video. [...] Am I correct in assuming that?

Heck, I remember when PlayStation 2 was coming out, and there was talk of it using raytracing and NURBS and all these other technical elements, just like Toy Story in realtime! (There were some similarly outlandish mentions of tech like this when the "Ultra 64" was being shown off with industrial SGI demo videos.) Of course, every console eventually finds the level where it will float, and the early tech talk broke down to a much more reasonable reality - but, technically, some of those strange things actually are reality now.



So, we'll see what PS5/Series X raytracing actually amounts to. It's not that this is technology in the early, early stages (obviously, RT has been around, and long before it was realtime it was a big part of how Pixar made CGI worthy of big-screen entertainment.) The situation with in-game RT, I believe, is more that the tools to harness it and the chip features to make use of it are still not mature enough to really know what you can and can't do, even if developers understand the math involved and have hardware built with it in mind. (Remember that Epic built Unreal Engine 4 around something called Sparse Voxel Octree Global Illumination, which is... well, I can't actually explain GI well, but raytracing/pathtracing can be involved in producing realtime global illumination. Up until very late in promoting UE4, realtime SVOGI was an anticipated feature, yet it was dropped in favor of more reasonable precomputed techniques; solutions for CryEngine and UE and other engines did eventually bring GI through in some capacity, and now even the Nintendo Switch version of Crysis has SVOGI in a limited capacity.)

You don't have to think of it as a pot of gold at the end of a distant rainbow, however. Raytracing is being integrated into shipping products playable come November. What it is and what it is not on next-gen consoles remains to be proven, but it will be in there, and it already runs in some capacity on a box you can own right now. What game, if any, will be the "AH HA!!" demonstration of this technology? Who knows. But raytracing is not just a next-gen hype term for some marbles you can never play with.
 

llien

Member
And what exactly is your point?
Note this part in particular: "to pick a rough ballpark figure, I'd probably say that 3% of the frame data in that last image is raytraced samples and 97% of it is faked interpolation between regions, potato-stamped to fill in the gaps with an approximation."
And this is Quake. Now think about what that marble demo really is.

Nvidia was always upfront about how they do it
No, they haven't. E.g. the marble demo is presented as "true, full-frame RT".
But it doesn't even matter what they are saying.
The message here is that RTRT is lots of fakery one has to fiddle with to get acceptable results, and hence there goes the "realism" argument, together with the "ease of development" argument.
 
Heck, I remember when PlayStation 2 was coming out, and there was talk of it using raytracing and NURBS and all these other technical elements, just like Toy Story in realtime! [...] But raytracing is not just a next-gen hype term for some marbles you can never play with.

But am I correct in assuming we won't see high-fidelity games with full-on path tracing this next generation? Or is that actually a possibility?

EDIT: by that I mean the entire thing being path traced, rather than it being used selectively for certain features.
 

CrysisFreak

Banned
Attention OP.
This is your last warning.
My name is Alex Battalion from Digital Foundry.
Ray Tracing is the future. Unlike PS5 it is glorious and needs to be worshipped.
Crysis with Ray Tracing makes me very horny and forces me to upload nude pics to instagram.
Your thread is blasphemy and heresy.
Apologise for your transgressions or feel the might of my baseball bat.
Indeed I am the masterrace.
 

VFXVeteran

Banned
Soo, ray tracing they said. Let's talk about "realism", shall we? (And I don't mean Pixar stuff rendered by a freaking render farm, but NV's tech demo.) [...]

By Crispy

And you didn't even get into multiple importance sampling... that's an added layer on top of ray-tracing (i.e. path tracing). And yes, we are still a long way from having that in hardware at reasonable framerates.
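(For reference, this is the standard multi-sample MIS estimator from Veach's thesis - general background, not something specific to any demo discussed here. Several sampling strategies, e.g. light sampling and BRDF sampling, are combined, with the balance heuristic weighting each sample by how likely every strategy was to generate it:)

```latex
F = \sum_{i} \frac{1}{n_i} \sum_{j=1}^{n_i}
      w_i(X_{i,j}) \, \frac{f(X_{i,j})}{p_i(X_{i,j})},
\qquad
w_i(x) = \frac{n_i \, p_i(x)}{\sum_k n_k \, p_k(x)} \quad \text{(balance heuristic)}
```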
 
Note this part in particular: "to pick a rough ballpark figure, I'd probably say that 3% of the frame data in that last image is raytraced samples and 97% of it is faked interpolation between regions, potato-stamped to fill in the gaps with an approximation."
And this is Quake. Now think about what that marble demo really is.
Much more accurate than anything normal real-time rendering could ever produce without needing to manually fake... everything... that's what it is.

No, they haven't. E.g. the marble demo is presented as "true, full-frame RT".
But it doesn't even matter what they are saying.
The message here is that RTRT is lots of fakery one has to fiddle with to get acceptable results, and hence there goes the "realism" argument, together with the "ease of development" argument.
Since the denoising and the variable ray count are one package, nothing about that argument changes. The efforts that have to be invested are absolutely incomparable, and the realism is far above classic tech.
 

VFXVeteran

Banned
No, they haven't. E.g. the marble demo is presented as "true, full-frame RT".
But it doesn't even matter what they are saying.
The message here is that RTRT is lots of fakery one has to fiddle with to get acceptable results, and hence there goes the "realism" argument, together with the "ease of development" argument.

It is full-frame RT. It might be very sparse, and Tensor Cores may be used to approximate the image, but shit, we started using denoisers a while back (all film companies did). It's just way too expensive to try brute-forcing rays (even using importance sampling), especially indoors, where not enough secondary bounce gets into shadowed areas. The sampling frequency would have to be too high for very little gain in getting rid of the Monte Carlo noise. And render a 4-lobe specular Marschner hair model on a character indoors and you've got a nightmare. That's why path tracers like Redshift are very popular now in lighting production. The GPU/CPU simply needs a denoiser, even in offline rendering.
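(The "way too expensive to brute-force" point is just Monte Carlo error shrinking as 1/sqrt(N) - a quick generic sketch, not tied to any particular renderer. Halving the noise costs four times the rays, which is exactly why everyone denoises instead:)

```python
import math, random

def estimate(n_samples):
    # Monte Carlo estimate of the integral of sin(x) over [0, pi] (exact: 2).
    total = sum(math.sin(random.uniform(0.0, math.pi)) for _ in range(n_samples))
    return math.pi * total / n_samples

random.seed(1)
for n in (16, 64, 256, 1024):
    runs = [estimate(n) for _ in range(200)]
    mean = sum(runs) / len(runs)
    sd = math.sqrt(sum((r - mean) ** 2 for r in runs) / len(runs))
    print(f"{n:5d} samples: noise ~ {sd:.3f}")  # 4x samples => ~2x less noise
```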
 

llien

Member
And yes, we are still a long way from having that in hardware at reasonable framerates.
The GPU/CPU simply needs a denoiser, even in offline rendering.

The points are:
1) "Hey, see how the heck the ACTUAL ray-traced stuff looks, to realize what kind of monster gap we are talking about"
2) The countless tricks used to address it, even in Quake
3) Today's RT delivers neither "realism" nor "ease of development", which were the core promises

Much more accurate than anything normal real-time rendering could ever produce without needing to manually fake... everything... that's what it is.
Either you didn't read it, or didn't understand it: at the end of the day, 97% of Quake RTX is faked.

I also call BS on NV's statement about the marble demo being "true RT, no fakes" rendering.


...the efforts that have to be invested are absolutely incomparable...
Citation needed.
RT algos are known for being easy to code ("elegant"; note that it's a group of methods, not a single one). Certain effects are available out of the box (heck, that is why it's considered to begin with). Note that there are some effects that are hard to do with RT alone. You need this for photorealism:
https://en.wikipedia.org/wiki/Rendering_equation (don't even dream)
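(That link is the rendering equation; writing it out, since it's short. Every physically based renderer approximates this, and the integral over all incoming directions at every shading point is exactly what makes solving it in real time hopeless:)

```latex
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o) \, L_i(x, \omega_i) \,
    (\omega_i \cdot n) \, \mathrm{d}\omega_i
```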
Besides actual hardware only being capable of producing wild noise, there are other problems, such as the actual effort needed to get reasonably good results not being static: not only object complexity, but just changing the POV might get a game developer into trouble.
And then, there goes that ethereal "ease of development".

It does work at Pixar (it is easier to develop for them), because Pixar is:
1) Doing it offline
2) Using a freaking render farm to do it
 

Data Ghost

Member
From what I have seen so far (and admittedly it's very little), ray tracing will be used sparingly and be confined to small elements here and there in next-gen console games.

So, for example, you might walk into a room with a highly reflective floor at some point; a statue might be shiny and reflective; a car may have ray-traced reflections on the side closest to you and faked reflections elsewhere, as ray tracing is expensive.

I'd go as far as to say that ray tracing on the next gen consoles might be a bit overhyped but it will be nice to see what implementations they come up with depending on how console devs are able to harness it.
 
Either you didn't read it, or didn't understand it: at the end of the day, 97% of Quake RTX is faked.
Either you didn't read, or you didn't understand.
It simply doesn't matter when the result is leaps and bounds beyond the classic alternative!
We always knew it was a denoised picture from a low sample set. Nvidia never claimed anything else; they even proudly presented their denoiser tech, because it truly is a feat one can be proud of.
This is the same as DLSS... if the results are great, it simply doesn't matter how you got there. Who the fuck cares about the basic data when you have tech to extrapolate an excellent result from it?

Besides actual hardware only being capable of producing wild noise, there are other problems, such as the actual effort needed to get reasonably good results not being static: not only object complexity, but just changing the POV might get a game developer into trouble.
And then, there goes that ethereal "ease of development".
Comparatively... no.
 

Arun1910

Member
Ray tracing can add a lot to the overall IQ, give competitive advantages, etc.

Once you experience something like Control with ray tracing, or even play an MP game like BFV, where light from fires/gunfire bounces off windows, puddles of water, cars, it really does look pretty cool.

Obvious downside at the moment is that the current 20-series cards have a hard time with it; not sure how consoles will pull it off, to be honest.

That said, it is pretty cool in my opinion; once you experience it and it's taken away, you'll miss it.
 
And we are comparing RT to 2-decades-old techniques, because...?
This thread is about real-time RT in games and what it brings to the table (compared to what was before). What did you think it was about?
I'm also not sure why you bring the UE5 demo into this, which uses a conglomerate of techniques, including a cut-down "one bounce tracing" form of software RT...

Not sure what you mean.
Ever tried building a game level using your own assets without a supercomputer in your backyard? The hoops you have to jump through for every single object, just to get anywhere near a somewhat believable light situation that can be rendered in real time, are ridiculous.
I'm sure developers at Naughty Dog are still waking up screaming "that angle light doesn't fit"...
 

VFXVeteran

Banned
Ever tried building a game level using your own assets without a supercomputer in your backyard? The hoops you have to jump through for every single object, just to get anywhere near a somewhat believable light situation that can be rendered in real time, are ridiculous.
I'm sure developers at Naughty Dog are still waking up screaming "that angle light doesn't fit"...

This is very true. Ever since we moved from rasterization, or even regular ray tracing, to path tracing with no baked shadow maps or ambient occlusion, production in film has become extremely easy as far as workflow goes. We still do stupid lighting passes for comp, but that's never going away.
 

GymWolf

Member
Someone explain this to me: OK, RTX is easy, faster and money-saving for devs, wonderful.

But... aren't devs still forced to create their best fake lights for the majority of people without RTX GPUs?

I mean, they have to create two completely different methods for lights/shadows/reflections, so even if one method is easy and fast, at the end of the day it's still more work for them compared to only doing the rasterized fake method...

How many years before everyone possesses a powerful RTX GPU and devs don't have to make fake lights anymore? 10 years? 15? We know how cheap the majority of PC gamers are...

Am I missing something here??
 
Someone explain this to me: OK, RTX is easy, faster and money-saving for devs, wonderful. [...] Am I missing something here??
No, you aren't. Right now that's exactly the situation at hand. All the RTX implementations we see are partnerships with Nvidia, as in "implement this to show off, and we'll pay you because it's advertisement for our product".
But since the next-gen consoles, the quasi-standard game-development hardware, will have RT hardware on board, this is about to change.
2-3 years into this gen, most PC ports will have RT-capable hardware and an SSD in their min requirements.
 

BluRayHiDef

Banned
Ray tracing seems like a big deal based on the segment in the video below that begins at 2:50 and ends at about 3:35. The increases in image quality and "wow" factor are immense.

 

GymWolf

Member
No, you aren't. Right now that's exactly the situation at hand. All the RTX implementations we see are partnerships with Nvidia, as in "implement this to show off, and we'll pay you because it's advertisement for our product".
But since the next-gen consoles, the quasi-standard game-development hardware, will have RT hardware on board, this is about to change.
2-3 years into this gen, most PC ports will have RT-capable hardware and an SSD in their min requirements.
But the majority of people are still gonna have non-RTX-capable GPUs in their PCs... and consoles will probably have a shitty RTX implementation; I'm not even sure a PS6 or even a PS7 will sustain totally ray-traced illumination in its games...

I was talking about an imaginary point in the future where EVERYONE has a powerful RTX GPU and devs can completely forget rasterized lights. Are we really that close to that moment? Because to me it looks far, far away...

Or maybe I don't understand what you are saying...
 

llien

Member
This thread is about real-time RT in games and what it brings to the table (compared to what was before). What did you think it was about?
About comparing today's RT tech to today's "other techniques" that do not require RT hardware.
Why would we compare it to 2-decades-old stuff? Why stop there and not go back 50 years, then?

I'm also not sure why you bring the UE5 demo into this, which uses a conglomerate of techniques, including a cut-down "one bounce tracing" form of software RT...
Just as an alternative to Quake's "non-RT" techniques. Oh, and also because it was a UE5 demo, by the company that added RT support even to UE4, on hardware that supports RT, and yet it wasn't using it "for some reason". I thought figuring out that reason was very important in the context of this thread.

Ever tried building a game level using your own assets without a supercomputer in your backyard?
Nope. Although I'd expect game engines to address it somewhat.
I was told ND is actually doing "Sony's exclusive game studio game engine".
 
About comparing today's RT tech to today's "other techniques" that do not require RT hardware.
That's what we've been doing the whole time...
UE5 isn't released yet, if you haven't noticed. And if you check how Lumen works in detail (as far as we know), you should realize why you don't have an argument there either.

Just as an alternative to Quake's "non-RT" techniques. Oh, and also because it was a UE5 demo, by the company that added RT support even to UE4, on hardware that supports RT, and yet it wasn't using it "for some reason". I thought figuring out that reason was very important in the context of this thread.
Performance, and alternatives that save performance by making compromises. Riddle solved...

Nope. Although I'd expect game engines to address it somewhat.
Unfortunately, no.
A lot of this is still manual labor, especially optimization and detail work. The amount of time that goes into fine-tuning environments like in TLoU or GoW is massive. And after aaaaall that tedious work, dynamic objects still look out of place, because you can't do this detail work in real time without applying something like RT (or Lumen's cut-down version of it).
 
Or maybe I don't understand what you are saying...
This seems to be the case.
We definitely don't have the performance reserves in the consoles to make RT the per-se standard for anything. We'll only see partial implementations: one game might use it for GI, another for shadows and reflections, another might even use it for sound enhancement. The point is that the hardware offers possibilities that will be used. And since I highly doubt that developers will always create a 2nd version for PC requiring more than minimal effort, I doubt that versions which don't require hardware comparable to what's in the consoles will be a thing beyond mid-next-gen.
Min requirements for PC ports have always adapted with every new console gen; this one will be no different.
 

llien

Member
UE5 isn't released yet, if you haven't noticed. And if you check how Lumen works in detail (as far as we know), you should realize why you don't have an argument there either.
I think it's your take on "RT" that is off.
Which reminds me of this demo, done without using hardware RT, which runs just fine even on dated GPUs:



Performance, and alternatives that save performance by making compromises.
When even a major engine developer does not use it, why would game developers?
 
I think it's your take on "RT" that is off.
Which reminds me of this demo, done without using hardware RT, which runs just fine even on dated GPUs:
Do I seriously have to explain the difference between a fixed tech demo and a game? The performance difference between all-purpose silicon and specialized hardware? ...Really? A single look at an RT benchmark with a 1080 Ti should seriously make you question why you ever thought this nonsense would make sense here.
This "argument" of yours came in kicking and screaming, sinking its nails into the doorframe as you dragged it in by its hair...

When even a major engine developer does not use it, why would game developers?
Why would someone like Epic, with a game made for absolute min requirements, add features for 0.1% of its user base if it wasn't for monetary incentives (enter Nvidia with their Ampere marketing budget)?
RT hardware hasn't been a standard so far; it's been an early-adopter feature...
...which will change in November, when it will suddenly be inside the standard development platforms.
 

VFXVeteran

Banned
But the majority of people are still gonna have non-RTX-capable GPUs in their PCs... [...] Or maybe I don't understand what you are saying...

Short answer: no. We have a few generations to go. I'm not sure consoles will be around next go-around.
 

VFXVeteran

Banned
I think it's your take on "RT" that is off. [...] When even a major engine developer does not use it, why would game developers?


I don't understand your beef here. It just sounds like you are crying about something. RT reflections are very easy to implement: they require only one ray, cast along the eye vector reflected about the surface normal. They're also the most unrealistic form of ray tracing there is; only a very few objects are shiny mirrors.
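(Concretely, the one-ray version of RT reflections - a generic sketch of the idea, not any particular engine's code: reflect the incoming view direction about the surface normal and trace a single ray that way.)

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(d, n):
    # Reflect direction d about unit normal n: r = d - 2(d.n)n
    k = 2.0 * dot(d, n)
    return tuple(di - k * ni for di, ni in zip(d, n))

# At a shading point on a mirror-like floor: reflect the eye ray and
# trace one ray in that direction to find what the surface shows.
view_dir = (0.0, -0.7071, -0.7071)  # eye ray hitting the floor at 45 degrees
normal = (0.0, 1.0, 0.0)            # floor normal, pointing up
print(reflect(view_dir, normal))    # -> (0.0, 0.7071, -0.7071): bounces up
```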
 

llien

Member
A single look at an RT benchmark with a 1080 Ti...
A single look at which RT benchmark?

Why would someone like Epic, with a game made for absolute min requirements, add features for 0.1% of its user base...
Because "now that next-gen consoles are coming, game developers will be using RT left and right", I was told. So the tech demo of something that comes to those consoles would absolutely need to include it, as countless game devs are interested in using it.

State your claim, because the reason for denoising isn't fakery.
The claim that "denoising" (in reality a bunch of steps; if filling in the ~97% of the screen could be referred to as such) "is RT" is mind-boggling.
 
A single look at which RT benchmark?
Whichever one you like that includes non-RTX hardware... please don't start playing stupid now.
Let me google this for you...

Because "now that next-gen consoles are coming, game developers will be using RT left and right", I was told. So the tech demo of something that comes to those consoles would absolutely need to include it, as countless game devs are interested in using it.
As already stated: one look at how Lumen operates immediately invalidates this "argument" of yours. RT (even heavily denoised) will still be the most accurate option.

The claim that "denoising" (in reality a bunch of steps; if filling in the ~97% of the screen could be referred to as such) "is RT" is mind-boggling.
Welcome to reality, where things have definitions that aren't up for discussion.


Also: I still don't get your issue... at all. What do you even want?
 