You'll have to imagine only, as it will never happen
Why couldn't it? I'm sure we would all rather have that than reflections.
You'll have to imagine only, as it will never happen
Vulkan is open source and paid for by both NVIDIA and AMD (and loads of other companies).
In that sense, whatever differences there are must be down to hardware, and nothing to do with NVIDIA rigging the fight.
I think we all know NVIDIA has the far better ray-tracing solution right now.
Considering AMD has just started ray tracing, I would expect improvements with RDNA 3, but of course NVIDIA is not going to sit around doing nothing.
Where did you see that it was paid for by NVIDIA?
Nah man, these benchmarks are real. I actually thought in the beginning that AMD would be able to compete with NVIDIA when it came to ray-tracing performance. I was pretty naive about this, not knowing about the years of R&D NVIDIA had put in before all this to get to where they are now with real-time ray tracing, but then I read the Ampere white paper and learned just how much of the RT process is hardware-accelerated.
They have HW acceleration for BVH traversal, ray/triangle and bounding-box intersections, and instance transformation (someone correct me if I'm wrong, but I'm guessing that last one is HW acceleration for rapidly updating asset changes, like breaking glass, where every shard of shattered glass is updated in the BVH and ray-traced in real time instead of the object being removed from the BVH entirely). The Ampere cards are level 3 cards in ray tracing; there are six different levels of RT, and achieving FULL HW-accelerated RT takes more time and research (and of course, more custom hardware).
The Levels of Ray Tracing - RTRT MONITOR (rtrtmonitor.com)
There are six, says Imagination Technologies. With ray tracing becoming increasingly important for a wide range of graphics applications, Imagination Technologies has developed a Ray Tracing Level System to give developers and OEMs an insight into the capability of solutions for ray tracing...
AMD's RDNA 2 cards are level 2 cards in ray tracing, since they only have custom hardware acceleration for ray/triangle and ray/bounding-box intersection tests; that's pretty much it.
The part highlighted in red in the chart is what the RX 6000 series cards (a level 2 RT solution) have HW acceleration for, versus what NVIDIA has HW acceleration for (a level 3 RT solution).
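For a rough sense of what that split means in practice, here is a minimal CPU-side sketch in C++, assuming a simple binary BVH of axis-aligned boxes (the structs and names are illustrative, not any vendor's actual API). On a level 2 part like RDNA 2, only the intersection tests are hardware instructions and the traversal loop runs as shader code; on a level 3 part like Ampere, the whole loop runs inside the RT core and the shader just receives the hit.

#include <algorithm>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };
struct Ray  { Vec3 o, d; };  // origin, direction

// Slab test: does the ray hit this axis-aligned box before tMax?
// This is one of the two tests RDNA 2 accelerates in hardware.
// (Leans on IEEE infinity rules for axis-parallel rays.)
bool rayAabb(const Ray& r, const Vec3& lo, const Vec3& hi, float tMax) {
    float t0 = 0.0f, t1 = tMax;
    const float* o = &r.o.x; const float* d = &r.d.x;
    const float* a = &lo.x;  const float* b = &hi.x;
    for (int i = 0; i < 3; ++i) {
        float tn = (a[i] - o[i]) / d[i];
        float tf = (b[i] - o[i]) / d[i];
        if (tn > tf) std::swap(tn, tf);
        t0 = std::max(t0, tn);
        t1 = std::min(t1, tf);
        if (t0 > t1) return false;
    }
    return true;
}

struct BvhNode {
    Vec3 lo, hi;
    int  left, right;      // -1/-1 marks a leaf
    int  firstTri, triCount;
};

// The traversal loop. On a level 2 part this loop is shader code that
// calls the hardware intersection instructions; on a level 3 part the
// whole loop runs in the RT core.
int traverse(const std::vector<BvhNode>& nodes, const Ray& r, float tMax) {
    int stack[64]; int sp = 0; int hit = -1;
    stack[sp++] = 0;  // start at the root
    while (sp > 0) {
        const BvhNode& n = nodes[stack[--sp]];
        if (!rayAabb(r, n.lo, n.hi, tMax)) continue;
        if (n.left < 0) {
            // Leaf: the ray/triangle tests (also HW instructions on
            // both vendors) would run here; stubbed for brevity.
            hit = n.firstTri;
        } else {
            stack[sp++] = n.left;
            stack[sp++] = n.right;
        }
    }
    return hit;
}

int main() {
    // One leaf node wrapping a unit box around the origin.
    std::vector<BvhNode> nodes = { { {-1,-1,-1}, {1,1,1}, -1, -1, 0, 1 } };
    Ray r = { {0, 0, -5}, {0, 0, 1} };
    std::printf("hit tri: %d\n", traverse(nodes, r, 100.0f));
}

The practical upshot, presumably, is that on a level 2 part every iteration of that while-loop competes with shading work for the same compute units, so the gap widens as ray counts go up.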
NVIDIA’s so ahead they even moved on to ray-traced MOTION BLUR.
I think AMD will improve RT performance a lot with RDNA 3 and 4; I don't think it's fair to just shit on them, since this is their first attempt at it, and it'll only get better from here. But I think NVIDIA will probably achieve level 5 by the time the RTX 50 or 60 series cards are out, because of how early they started the R&D. I was actually planning to get the 3080, but I might as well wait for the 3070 Ti next year.
It's not that simple. When GPU or processor manufacturers get involved in the development of games and demos, they will naturally avoid or minimize the use of functions that don't work well on their hardware while favouring the stuff they are good at, so in the end we get very biased results. When independent game devs start to familiarize themselves with AMD hardware, we will see what it is capable of. At this point this is just free marketing for NVIDIA.
We are at 8 bit levels of RayTracing support, and at 0 levels of real time path tracing.
Diffuse cubemap sampling to speed up denoising in "offline rendering", or for "previsualizing" a frame, is not a comprehensive solution to path-traced rendering; otherwise render farms would not still require minutes to hours to render a frame at 1024 bits.
While it may speed up basic geometry/light passes for artist-curated frames in particular, that is hardly cause for celebration when citing the shift in focus at Nvidia to a real-time path-tracing solution; which is pertinent if you consider that Nvidia just had a conference stating their focus moving forward is not ray tracing but a fully functional real-time path-tracing solution for gaming at 60 FPS.
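To put rough numbers on why a fully functional real-time path tracer is still far off, here is a back-of-the-envelope sketch in C++; the sample and bounce counts are assumptions for illustration, not measurements from any renderer.

#include <cstdio>

int main() {
    const double pixels  = 3840.0 * 2160.0;  // 4K frame
    const double fps     = 60.0;
    const double bounces = 4.0;              // assumed average path length

    // Assumed samples per pixel: an offline film render vs. a heavily
    // denoised real-time demo. Both figures are illustrative.
    const double sppFilm = 1024.0;
    const double sppDemo = 2.0;

    std::printf("film-style: %.0f Grays/s\n", pixels * sppFilm * bounces * fps / 1e9);
    std::printf("demo-style: %.1f Grays/s\n", pixels * sppDemo * bounces * fps / 1e9);
    // ~2038 vs ~4 Gigarays per second. For scale, Turing's flagship was
    // marketed at about 10 Gigarays/s, so the demo budget is feasible
    // while the film budget is still a couple of orders of magnitude out.
}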
It's telling just how inefficient and underdeveloped real-time ray tracing is when the best examples for it are Minecraft and Quake: games that are decades old or otherwise graphically unimpressive. This tech really needs another generation before it's ready. Prebaked lighting runs better and looks 90% the same.
And yet Nvidia's path-traced Marbles demo looks better than some of the stuff in the latest Hollywood films, such as Avengers: Infinity War.
That isn't a legitimate path-tracing solution either; it's a ray-tracing solution with 3 low passes of path tracing. Watch Nvidia's latest GTC, where they say outright that actual path tracing is too computationally demanding. The issue I've raised isn't that Nvidia aims to offer a path-tracing solution for gamers; it's that Nvidia on one hand blatantly leads the consumer to believe demos such as Marbles and Quake RTX have solved the path-tracing problem, while on the other hand insisting an actual path-tracing solution is too computationally demanding. Meanwhile, people like me have to point out that nothing shown has actually implemented meaningful real-time path tracing.
How far behind do you think Intel will be when they finally release their discrete GPUs/graphics cards? Do you think they'll even have dedicated ray-tracing hardware?
I'm pretty sure they confirmed back in August that the upcoming GPU they're working on does have hardware-accelerated ray tracing, but I'm not expecting them to surpass NVIDIA anytime soon on their first try; my guess is it's probably going to be something similar to AMD's current solution in RDNA 2.
There's a world of difference in visual sharpness and clarity between DLSS on and native. 1440p DLSS means sub-1080p internally (see the quick arithmetic after the link below). It also gimps RT effects like reflections, so you get to enjoy 960p RT reflections instead of 1440p. So let's not kid ourselves about how good it is; it might still be worth using for the performance, but you're nowhere near native.
Cyberpunk 2077 benchmark test: analysis of ray tracing and DLSS (www.computerbase.de)
Cyberpunk 2077 review: analysis of ray tracing and DLSS / Ray tracing in Cyberpunk 2077 / Ray tracing is currently only available on Nvidia.
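For reference on the sub-1080p point above: a quick check in C++ of DLSS 2.x internal render resolutions, assuming the commonly cited per-axis scale factors (approximations, not official constants).

#include <cstdio>

int main() {
    const int outW = 2560, outH = 1440;  // 1440p output

    // Commonly cited per-axis render scales for DLSS 2.x modes
    // (approximate; treat them as assumptions).
    struct Mode { const char* name; double scale; };
    const Mode modes[] = {
        { "Quality",     2.0 / 3.0 },
        { "Balanced",    0.58 },
        { "Performance", 0.5 },
    };

    for (const Mode& m : modes) {
        std::printf("%-12s -> internal %.0fx%.0f\n",
                    m.name, outW * m.scale, outH * m.scale);
    }
    // Quality at 1440p renders internally at ~1707x960: the sub-1080p
    // image (and the 960p reflections) mentioned above.
}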
I ain't mad at them on that one, they're making sure you have 2 of their cards to run their stuff. That's straight up protecting an investment.
You wouldn't need two nVidia cards... just the one.
Good to see that at least someone sees it for what it is.
You guys bully Ascend into this.
Your link sucks.
He probably accepts this incident or turns a blind eye to it.
EGMR | AMDGate: AMD Deny KitGuru Fury X Review Sample Based On "Negative Content" (techspy.com)
A look at the controversy surrounding AMD and their withdrawal of a Radeon Fury X review sample from KitGuru, based on KitGuru's publishing of 'negative content' around AMD products.
That's why we say AMD is not your friend.
Neither Nvidia nor AMD is my friend; they're corporations that make cards. I give my money to either when it makes the most sense, period. YouTubers don't deserve your simping either, ffs, especially not for a freebie.
Holy shit, these AMD vs NVIDIA arguments are worse than freakin' console wars. Jesus.
Indeed they are. Can't possibly imagine why...
Limits apply to both AMD and nVidia cards. That's called efficient development. Or would you want your performance to tank with zero visual difference, like Tessellation x64?
BVH is the most taxing aspect of RT to implement, so it's fine. Both AMD and nVidia tailored their hardware to their needs.
Quoting myself: "They only have a BVH intersection accelerator block. That's it. The rest relies on shaders for every decision. There's only so much optimization that can be squeezed out of that. It's the bare minimum above plain software ray tracing."
The one game that confirms that nVidia is faster at RT is The Riftbreaker. AMD-sponsored, yet nVidia is still faster.
I went back and compared it. It's honestly better than I thought. With some ReShade thrown on, the difference is tiny. Do you see drastic differences here that would make you turn off all ray tracing and go native without it? I think you have to backtrack on your "nowhere near native" claim.
Both are 1440p; one is native, the other DLSS Quality + ReShade. It's actually rather easy to tell which one uses DLSS, because DLSS recovers more detail than native.
This is the first time I've ever seen ReShade have so little impact on colors and contrast. Is A better?
No it doesn't. Cyberpunk looks incredible with RT. You will never get any prebaked lighting solution to mimic what you see.
Lol. Puddle reflection technology, right? Not worth it, my man. Call me next generation when the hardware can handle RT effectively.
Call me when cars are fully autonomous. Or when battery efficiency is at least quadruple what we currently have, with higher power efficiency and less heat dissipation. Or when machines can replace humans completely. The list can go on and on. But it's stupid to dismiss technology until it's fully there, because you need to take the baby steps to get there. I'll take half-assed ray tracing, which lowers framerate, over no ray tracing at all. Yeah, it has its cons, but I'm all for progression rather than being stagnant.
That's why Nvidia is killing it, regardless of all the AMD cultists/naysayers/religionists. They have put in so much time and research into it, and are constantly improving between hardware iterations, with software alone.
I wish someone could wake me up when it doesn't require DLSS to get the performance, or when GPUs have enough bandwidth that RT won't tank performance. That would be fucking sick. But for now, I'll take any solution that maximizes visual quality, as upping the resolution alone won't change much, compared to when we were at 720p as the maximum.
I can agree with most of this.
If you want to do a comparison you need to capture PNG & upload to Flickr, otherwise imgur compression ruins what's generally the native advantage. There's a clear sheen & sharpness (and I'm not talking about things like CAS) to a native 4K image that simply gets softened out of existence with DLSS (or other temporal-injection TAA variants). I'd also never skip CAS in a game with TAA; it just makes no sense to leave it out.
Seeing as you guys are discussing DLSS: I saw this video recently by Gamers Nexus, which I thought was a really fair analysis using Cyberpunk.
It goes through some of the pros and cons of DLSS in a pretty fair and unbiased way. I'd recommend it as a good watch for both the DLSS die-hards and the haters.
I don't have the time right now to watch the video, but is he aware that the 1.04 update added a forced sharpening filter to "native"? Did he remove that filter, or add it to the DLSS output, to make a fair comparison?
I don't think he messed with any sharpening filters at all from what I remember.
AMD has already released recommendations for developers with best practices for ray tracing on RDNA 2.
Too many active ray query objects in scope at any time?
Too large a threadgroup size?
Not limiting group shared memory enough?
To me these sound mostly the same as DXR's recommendations, or, for the memory structure: gimp your RT like the consoles do, because it would collapse our GPU. Not some secret sauce that will close a 2x performance gap in path tracing. "We're compliant with DXR, but please avoid this, and that..."
They only have a BVH intersection accelerator block. That's it. The rest relies on shaders for every decision. There's only so much optimization that can be squeezed out of that. It's the bare minimum above plain software ray tracing.
I guess 3DMark's full path tracing benchmark does not count either? The one that welcomed AMD into the RT arena so there's no longer an Nvidia monopoly? Feel free to @ me when there's an independent full path tracing game where you feel AMD was not left on the sidelines.
Oh man... things looking bad for AMD.... who could have predicted that?
The 6800 still looks like great value for anyone who doesn't care about the Nvidia features, especially if they have an X570 board and a Ryzen 5000 processor.
Too expensive; the card would have been great if it were $399.
They should definitely have tried to match the 3070 price tag at least. Either way, they have so little stock that they are selling out everything easily. We'll see if they do a price change once things settle down.