
Everything that you never wanted to know about ray tracing

psorcerer

Banned
There is a lot of confusion and cargo-cult storytelling surrounding ray tracing; here I will try to explain what is actually going on.

TL;DR DXR/RTX is not really "ray tracing"

What is "ray tracing"?
The classic definition comes from Turner Whitted's 1980 paper "An Improved Illumination Model for Shaded Display".
In simplified form: the global illumination information that affects the intensity of each pixel is stored in a tree of "rays" extending from the viewer to the first surface encountered, and from there to other surfaces and to the light sources.
Thus classic ray tracing is a recursive process in which some unspecified number of rays are traced from the camera through the scene, spawning new rays at each bounce, until they reach a light source.
Ironically, it's also called "backward ray tracing" in the literature, because in reality light travels from the light source into the eye, and here we trace in the opposite direction.
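To make that structure concrete, here is a tiny, self-contained sketch of the idea in Python: one sphere and one point light, with a shadow ray and a reflection ray cast at every hit, recursing a few bounces. Everything here (the scene, the shading weights, the helper names) is made up for illustration; it is not a real renderer.

```python
import math

# Whitted-style (backward) ray tracing sketch: one sphere, one point light.
# Every hit casts a shadow ray toward the light and a reflection ray,
# and recurses a few bounces. Plain tuples as vectors; purely illustrative.

def add(a, b): return (a[0] + b[0], a[1] + b[1], a[2] + b[2])
def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def mul(a, s): return (a[0] * s, a[1] * s, a[2] * s)
def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def norm(a):   return mul(a, 1.0 / math.sqrt(dot(a, a)))

CENTER, RADIUS, LIGHT = (0.0, 0.0, -3.0), 1.0, (2.0, 2.0, 0.0)

def hit_sphere(origin, direction):
    """Distance to the closest sphere intersection along a unit ray, or None."""
    oc = sub(origin, CENTER)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - RADIUS * RADIUS
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    for t in ((-b - math.sqrt(disc)) / 2.0, (-b + math.sqrt(disc)) / 2.0):
        if t > 1e-4:
            return t
    return None

def trace(origin, direction, depth=0):
    t = hit_sphere(origin, direction)
    if t is None or depth > 2:
        return 0.1                                  # "sky": end of this branch
    point = add(origin, mul(direction, t))
    normal = norm(sub(point, CENTER))
    # Shadow ray: is the light visible from the hit point?
    to_light = norm(sub(LIGHT, point))
    lit = hit_sphere(point, to_light) is None
    direct = max(dot(normal, to_light), 0.0) if lit else 0.0
    # Reflection ray: recurse along the mirror direction (the "tree of rays").
    reflected = sub(direction, mul(normal, 2.0 * dot(direction, normal)))
    return 0.7 * direct + 0.3 * trace(point, reflected, depth + 1)

# One camera ray per pixel; here just a single pixel looking straight ahead.
print(trace((0.0, 0.0, 1.0), (0.0, 0.0, -1.0)))
```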

Is it photorealistic?
No. The full light-transport equation is too complex to solve analytically even for simple cases like a few planes in space, not to mention complex objects.
But it can look pretty good for certain effects, for example reflections and shadows.
If we add correct refraction for transparent surfaces, things get pretty complicated.
We then need to cast at least three rays at each intersection: a reflection ray, a shadow ray, and a refraction ray. And we have to do the same at every subsequent intersection.
The ray count multiplies so quickly that more than a couple of bounces cannot be traced in a sane amount of time.
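A rough back-of-the-envelope count (ignoring early termination): with three new rays spawned per hit, a single camera ray traced to depth d generates on the order of

N = 1 + 3 + 3^2 + ... + 3^d = (3^(d+1) - 1) / 2

rays, which is already roughly 88,000 rays for one pixel at d = 10.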

Over the years, people have tried to work around the abysmal speed of the classic ray tracer with cleverer methods:

Path tracing.
Instead of tracing three new rays at each bounce, trace one random ray.
Really, just shoot a ray in a random direction. It works because, if you shoot enough rays and your random distribution is uniform enough, the whole scene will be covered in random paths that can be averaged to fill in the "blank" spaces.
Of course it doesn't work well in all cases: what if we have a mirror? We cannot shoot randomly; a mirror reflects in exactly one direction. What if the medium is not entirely opaque (subsurface scattering, caustics)? We cannot trace things purely at random there either.
So, other hacks are incoming...
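Before moving on, here is the core Monte Carlo idea in isolation, as a small self-contained Python sketch: instead of branching into many rays, shoot one random ray per sample over the hemisphere and average. It estimates the irradiance on a surface under a uniform "sky" of radiance 1.0 (the exact answer is pi), so you can watch the random average converge. The setup and names are my own invention for illustration.

```python
import math, random

# Monte Carlo estimate of diffuse irradiance from a uniform sky:
# shoot one random hemisphere ray per sample and average the results.

def random_hemisphere_dir():
    """Uniformly distributed direction on the upper hemisphere (z >= 0)."""
    z = random.random()                      # cos(theta), uniform in [0, 1)
    phi = 2.0 * math.pi * random.random()
    r = math.sqrt(max(0.0, 1.0 - z * z))
    return (r * math.cos(phi), r * math.sin(phi), z)

def estimate_irradiance(num_samples):
    pdf = 1.0 / (2.0 * math.pi)              # density of uniform hemisphere sampling
    total = 0.0
    for _ in range(num_samples):
        d = random_hemisphere_dir()
        sky_radiance = 1.0                   # what this random ray "hit"
        cos_theta = d[2]                     # surface normal is +z
        total += sky_radiance * cos_theta / pdf
    return total / num_samples

for n in (16, 256, 4096):
    print(n, estimate_irradiance(n))         # converges toward pi ~= 3.14159
```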

Photon mapping.
We trace paths from both the eye and the lights. First we trace rays from the lights, and wherever the light hits the scene a special structure, the "photon map", is built to cache the results.
Then rays from the eye are traced, and when an intersection is found, the photon map is sampled around that point.
The photon map itself is stored in a special structure called a k-d tree, which can quickly find the photons closest to a particular point in space (the eye-ray intersection we just found).
Photon mapping can produce a lot of realistic effects if the map is detailed enough.
And it can use a hierarchy of maps to keep effects that need insanely detailed maps (like caustics) local and small.
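Here is a minimal, self-contained Python sketch of the lookup side of that: photons as points in a k-d tree, plus a nearest-photon query for an eye-ray hit point. A real photon map stores energy per photon and gathers the k nearest photons; this only shows the data-structure idea, with made-up data.

```python
import random

# Toy k-d tree over "photon" positions, with a nearest-1 query.

def build_kdtree(points, depth=0):
    if not points:
        return None
    axis = depth % 3
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"point": points[mid], "axis": axis,
            "left": build_kdtree(points[:mid], depth + 1),
            "right": build_kdtree(points[mid + 1:], depth + 1)}

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def nearest(node, query, best=None):
    if node is None:
        return best
    if best is None or dist2(node["point"], query) < dist2(best, query):
        best = node["point"]
    delta = query[node["axis"]] - node["point"][node["axis"]]
    near, far = (node["left"], node["right"]) if delta < 0 else (node["right"], node["left"])
    best = nearest(near, query, best)
    # Visit the far side only if the splitting plane is closer than the best
    # photon found so far (the classic k-d tree pruning test).
    if delta * delta < dist2(best, query):
        best = nearest(far, query, best)
    return best

photons = [(random.random(), random.random(), random.random()) for _ in range(1000)]
tree = build_kdtree(photons)
hit_point = (0.5, 0.5, 0.5)                   # an eye-ray intersection
print("nearest photon:", nearest(tree, hit_point))
```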

Metropolis light transport.
This algorithm tries to reduce the "randomness" of the rays by using a specific sampling scheme: the Metropolis-Hastings algorithm.
It's all statistical black magic, but the idea is simple: use what we know about the materials at the ray intersection points to cast the "random" rays not uniformly, but in the most probable (highest-contribution) directions.
It also reduces the probability of casting rays into the "void": places where there are no objects in the scene.
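For flavor, here is a tiny self-contained Python sketch of the Metropolis-Hastings accept/reject rule that MLT is built on, applied to a toy 1D "contribution" function instead of real light paths. The function and all the numbers are invented for illustration only.

```python
import math, random

def f(x):
    # Stand-in for "how much light this path contributes": two bright spots,
    # a brighter one at x = 1 and a dimmer one at x = -2.
    return math.exp(-(x - 1.0) ** 2) + 0.5 * math.exp(-(x + 2.0) ** 2)

def metropolis(num_steps, mutation_size=0.5):
    x, samples = 0.0, []
    for _ in range(num_steps):
        x_new = x + random.gauss(0.0, mutation_size)    # small random mutation
        # Accept with probability min(1, f(new) / f(old)): mutations toward
        # brighter "paths" are kept more often, so samples concentrate where
        # the contribution is high instead of being spread uniformly.
        if random.random() < f(x_new) / f(x):
            x = x_new
        samples.append(x)
    return samples

samples = metropolis(50000)
share = sum(1 for s in samples if s > 0.0) / len(samples)
print("time spent around the brighter spot:", share)    # well over half
```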

There are a lot more hacks, and some of them are pretty new, like VCM (vertex connection and merging, published in 2012), which is essentially a big bag of small tricks for casting rays more efficiently and then merging the results.

Now to the main reason you opened this thread: what about DXR, RTX and the next-gen consoles?
Can we use all of the above? Is it all hardware accelerated? When will we get photorealistic games?

The last question is easy to answer: you can get photorealism any time: just pre-bake everything and boom!
That's what happened in the latest NV Marbles demo: a lot of pre-baked assets.

As for the other questions, they are harder to answer.
If we go by what's available in DXR, the future is pretty grim. It offers a very simplistic BVH (called an AS, for acceleration structure): a two-level tree with the geometry exclusively on the second level.
You cannot build a photon map with that, or any other BVH that is more efficient for a particular scene/game.
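To picture what that fixed two-level tree looks like, here is a rough Python sketch of its shape: a bottom level holding raw triangles, and a top level holding instances that reference a bottom-level structure plus a transform. The class and field names are illustrative, not the real DXR API.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Shape of a DXR-style acceleration structure: BLAS = geometry, TLAS = instances.
# The driver builds and traverses this for you; you cannot reshape it into a
# photon map, a k-d tree, or any custom per-game structure.

Vec3 = Tuple[float, float, float]

@dataclass
class BottomLevelAS:                      # one per mesh: just triangles
    triangles: List[Tuple[Vec3, Vec3, Vec3]]

@dataclass
class Instance:                           # top-level entry: BLAS + transform
    blas: BottomLevelAS
    transform: List[List[float]]          # 3x4 object-to-world matrix

@dataclass
class TopLevelAS:                         # the whole "scene" the rays search
    instances: List[Instance] = field(default_factory=list)

unit_tri = ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
mesh = BottomLevelAS(triangles=[unit_tri])
scene = TopLevelAS(instances=[
    Instance(blas=mesh, transform=[[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, -3]]),
    Instance(blas=mesh, transform=[[1, 0, 0, 2], [0, 1, 0, 0], [0, 0, 1, -5]]),
])
print(len(scene.instances), "instances over", len(mesh.triangles), "triangle(s)")
```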
DXR should really be called "programmable rasterization" or "rasterization shaders", because that's what it is. But because it has that fixed-path, hardware-accelerated search through the AS, they called it "ray tracing".
In reality, DXR runs compute-style shaders everywhere: ray generation, closest hit, miss, any hit, intersection. The only really hardware-accelerated part is the ray search; everything before and after it is done by regular compute shaders.
More than that, the result cannot even be written to a render target and needs to be copied out of a compute-shader output (they could not bypass the ROPs, it seems).
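Here is a toy Python mock of that flow (emphatically not the real API): the ray-generation, closest-hit and miss stages are ordinary programmable code, and only the traversal call in the middle stands in for the fixed-function hardware ray search. The stubbed traversal just pretends there is a floor below the camera.

```python
# Toy mock of a DXR-style dispatch: programmable stages around a fixed search.

def traverse_acceleration_structure(ray_origin, ray_dir):
    """Stand-in for the hardware ray search through the AS."""
    if ray_dir[1] < 0.0:                          # pretend there's a floor below
        t = -ray_origin[1] / ray_dir[1]
        return {"t": t, "normal": (0.0, 1.0, 0.0)}
    return None                                   # nothing hit

def ray_generation(x, y, width, height):          # programmable stage
    origin = (0.0, 1.0, 0.0)
    direction = (x / width - 0.5, -(y / height), -1.0)
    return origin, direction

def closest_hit(hit):                             # programmable stage
    return 0.5 + 0.5 * hit["normal"][1]           # some shading

def miss(ray_dir):                                # programmable stage
    return 0.1                                    # sky color

def dispatch_rays(width, height):
    image = []
    for y in range(height):
        row = []
        for x in range(width):
            origin, direction = ray_generation(x, y, width, height)
            hit = traverse_acceleration_structure(origin, direction)
            row.append(closest_hit(hit) if hit else miss(direction))
        image.append(row)
    return image

print(dispatch_rays(4, 2))
```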

Is it really that bad?
No, it's not. It's only a losing game from a marketing point of view: RT is slow and will remain slow.
But if we really look at it as "raster shaders", it's awesome!
You can use all of the tricks I've described above, in moderation, in the specific places where they're needed.
You can even use them in screen space!
Photon mapping caustics in screen space? I'll take two!
Off-screen reflections in mirrors? Bring it on!
Path trace only the closest objects? Yeah!
Soft shadows? Take my money!
 

psorcerer

Banned
It's not really about graphics tech, more about the state of RT in the new hardware.
The exposition was too long, I concur.
 
Last edited:

Myths

Member
It seems more like a primer on RT overall, though. Still, I think this would be a great offshoot thread topic, considering it's a term that's expected to get thrown around a lot this upcoming gen.
 

VFXVeteran

Banned
Over the years, people have tried to work around the abysmal speed of the classic ray tracer with cleverer methods:

Path tracing.
Instead of tracing three new rays at each bounce, trace one random ray.
Really, just shoot a ray in a random direction. It works because, if you shoot enough rays and your random distribution is uniform enough, the whole scene will be covered in random paths that can be averaged to fill in the "blank" spaces.
Of course it doesn't work well in all cases: what if we have a mirror? We cannot shoot randomly; a mirror reflects in exactly one direction. What if the medium is not entirely opaque (subsurface scattering, caustics)? We cannot trace things purely at random there either.
So, other hacks are incoming...

No company ever uses purely random Monte Carlo sampling to shoot rays; it produces way too much noise. ALL studios use a technique called multiple importance sampling, whereby the "random" ray that gets fired is actually drawn according to probability distributions defined both by the surface and by the light. This is where RT ray tracing "wants" to go.
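As a concrete illustration of the weighting idea, here is a tiny self-contained Python sketch of the balance heuristic used in multiple importance sampling. The pdf values and radiance are made-up numbers, not taken from any particular renderer.

```python
# Balance heuristic: when a direction could have been produced either by
# sampling the BSDF or by sampling the light, weight each sample by its own
# pdf relative to the sum of pdfs, so neither strategy dominates the noise.

def balance_heuristic(pdf_this, pdf_other):
    return pdf_this / (pdf_this + pdf_other)

# Example: a glossy bounce toward a small bright light (made-up numbers).
pdf_bsdf, pdf_light = 0.3, 4.0            # per-steradian densities
radiance = 5.0                            # light carried along this direction

est_from_bsdf_sample  = balance_heuristic(pdf_bsdf, pdf_light) * radiance / pdf_bsdf
est_from_light_sample = balance_heuristic(pdf_light, pdf_bsdf) * radiance / pdf_light

# With the balance heuristic both weighted contributions collapse to
# radiance / (pdf_bsdf + pdf_light), which is why the combination stays stable.
print(est_from_bsdf_sample, est_from_light_sample)
```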


DXR should really be called "programmable rasterization" or "rasterization shaders", because that's what it is. But because it has that fixed-path, hardware-accelerated search through the AS, they called it "ray tracing".

They call it ray tracing because it does indeed shoot rays in world space, test for intersections, and then let you recurse from that point on.

In reality, DXR runs compute-style shaders everywhere: ray generation, closest hit, miss, any hit, intersection. The only really hardware-accelerated part is the ray search; everything before and after it is done by regular compute shaders.

Yep. But the ray search is the most expensive part, so it's a wise choice.
 

Lethal01

Member
But can you raytrace caustics?

If someone says it can't, you can safely disregard anything else that comes out of their mouth on the subject.
Right now, ray tracing is usually a necessity to get accurate caustics. It's also super expensive; we do things to speed it up, but doing things to improve ray tracing does not mean it's no longer ray tracing.
 
Last edited:
If someone says it can't, you can safely disregard anything else that comes out of their mouth on the subject.
Right now, ray tracing is usually a necessity to get accurate caustics. It's also super expensive; we do things to speed it up, but doing things to improve ray tracing does not mean it's no longer ray tracing.

Except that ray tracing is not path tracing.

You indeed cannot do proper caustics with pure ray tracing; it's not made to replicate such physical phenomena. You can do it with path tracing, however, but it is still a notoriously difficult phenomenon to reproduce with classic unidirectional path tracing, and it necessitates some form of bidirectional path tracing in order to sample the caustics correctly.

No company ever uses purely random Monte Carlo sampling to shoot rays; it produces way too much noise. ALL studios use a technique called multiple importance sampling, whereby the "random" ray that gets fired is actually drawn according to probability distributions defined both by the surface and by the light. This is where RT ray tracing "wants" to go.

Purely uniform sampling is indeed unusable except for simple Lambertian materials. Start introducing a specular lobe and microfacets and you are doomed to wait forever to get a clean result. To solve that problem, multiple importance sampling is indeed part of the solution, but not all of it. It is especially useful for letting the renderer know what matters more when sampling a shading point (i.e. the BSDF sampling or the NEE sampling).

One very important area for better sampling is how you generate the (pseudo) random numbers that are used when, for example, sampling a ray direction. Many techniques can be used to generate these numbers, such as stratified sampling and Sobol or Halton sequences. The idea is always to get as far as possible from the unstructured (and thus very much random) nature of white noise, while avoiding introducing so much bias that visible artifacts and noise patterns appear after a certain number of iterations of the render. Hence the "pseudo" random numbers :)
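To make that effect visible, here is a small self-contained Python comparison of white-noise vs. stratified (jittered) sampling on a toy 1D integral. It only shows why structured sample generation reduces error for the same sample count; everything else about it is made up.

```python
import random

# Estimate the integral of x^2 over [0, 1] (exact value 1/3) two ways:
# plain white-noise samples vs. one jittered sample per stratum.

def estimate_white_noise(n):
    return sum(random.random() ** 2 for _ in range(n)) / n

def estimate_stratified(n):
    total = 0.0
    for i in range(n):
        x = (i + random.random()) / n        # one jittered sample per cell
        total += x ** 2
    return total / n

exact = 1.0 / 3.0
for n in (16, 64, 256):
    print(n,
          abs(estimate_white_noise(n) - exact),   # typically larger error
          abs(estimate_stratified(n) - exact))    # typically much smaller
```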
 

Ar¢tos

Member
I feel like good GI plus baked lighting would be more suitable (and less demanding) for most games, and we are going to have a gen of chasing the RT rabbit and never really catching it: never really pleased and often disappointed.
 

Lethal01

Member
Except that ray tracing is not path tracing.

You indeed cannot do proper caustics with pure ray tracing; it's not made to replicate such physical phenomena. You can do it with path tracing, however, but it is still a notoriously difficult phenomenon to reproduce with classic unidirectional path tracing, and it necessitates some form of bidirectional path tracing in order to sample the caustics correctly.



Purely uniform sampling is indeed unusable except for simple Lambertian materials. Start introducing a specular lobe and microfacets and you are doomed to wait forever to get a clean result. To solve that problem, multiple importance sampling is indeed part of the solution, but not all of it. It is especially useful for letting the renderer know what matters more when sampling a shading point (i.e. the BSDF sampling or the NEE sampling).

One very important area for better sampling is how you generate the (pseudo) random numbers that are used when, for example, sampling a ray direction. Many techniques can be used to generate these numbers, such as stratified sampling and Sobol or Halton sequences. The idea is always to get as far as possible from the unstructured (and thus very much random) nature of white noise, while avoiding introducing so much bias that visible artifacts and noise patterns appear after a certain number of iterations of the render. Hence the "pseudo" random numbers :)

Except path tracing is a form of ray tracing. You cannot path trace without ray tracing. You cannot do caustics without ray tracing. You don't stop ray tracing when you start path tracing.

It's like saying "I'm not moving, I'm dancing" :messenger_grinning_smiling:
 
Last edited:

deriks

4-Time GIF/Meme God
So it's a thing that is still expensive as hell, in money and hardware, and is far from being what it's made out to be.

Oh, boy
 

psorcerer

Banned
Except path tracing is a form of ray tracing. You cannot path trace without ray tracing. You cannot do caustics without ray tracing. You don't stop ray tracing when you start path tracing.

It's like saying "I'm not moving, I'm dancing" :messenger_grinning_smiling:

It's a tautological argument just to have that "gotcha" moment.
Everything is "ray tracing" then. Rasterization is also a form of "ray tracing", you trace rays from each triangle to each pixel, using z-buffer as a "closest hit" shader.

P.s. parallax mapping, SSGI, AO, etc. all are "ray tracing"!
 
Last edited:

Lethal01

Member
P.s. parallax mapping, SSGI, AO, etc. all are "ray tracing"!

No shit, this isn't an attempt at a gotcha, just an attempt to stop the spread of misinformation. If we are talking about what's literally correct, path tracing is ray tracing. If we accept that the word is used loosely, nothing changes either, because people still don't say path tracing isn't ray tracing.

But hey let's put that behind us, Where is your source that the RTX marbles demo was pre-baked?
Surely you wouldn't just pull shit out of nowhere.
 
Last edited:

psorcerer

Banned
attempt to stop spreading misinformation

It's infeasible to do caustics with any non-specialized renderer.
You need a special high-detail renderer for caustics, or some way to fake it.

Where is your source that the RTX marbles demo was pre-baked?

I look at it and see pre-baked assets. You can easily distinguish dynamic vs. static stuff in the video by comparing the quality.
Screenshots unfortunately don't show it, because they do AA accumulation there and the results are almost indistinguishable from an offline render (which it essentially is: the accumulation happens over a lot of frames).
So, bottom line: that's what I think it is.
 