
RDNA 3 GPU Compute Units To Deliver Enhanced Ray Tracing Capabilities & Even Higher Clock Speeds

winjer

Gold Member


There are a lot of things that were overlooked during AMD's Financial Analyst Day 2022, such as AMD confirming that its Zen 4 3D V-Cache Ryzen CPUs are coming later this year. Another key comment that was missed came from AMD's Senior Vice President of Engineering at Radeon Technologies Group, David Wang, who confirmed that AMD will deliver enhanced ray-tracing capabilities with its next-gen RDNA 3 GPUs.

It (RDNA 3) is also our first gaming GPU architecture that will leverage the enhanced 5nm process and an advanced chip packaging technology. Other innovations include re-architected compute units with enhanced ray-tracing capabilities and an optimized graphics pipeline with even faster clock speeds and improved power efficiency.

And to bring more photorealistic effects into the domain of real-time gaming, we are developing hybrid approaches that take the performance of rasterization combined with the visual fidelity of ray tracing, to deliver the best real-time immersive experiences without compromising performance.

Lastly, with our next-generation multimedia engine we will support advanced video codecs such as AV1 to deliver high-quality video streaming with reduced latencies and bitrates. We will also improve our display capabilities with the new DisplayPort 2.0 standard to support upcoming HDR displays with high resolutions and refresh rates.

David Wang, AMD's SVP of Engineering at Radeon Technologies Group

Some of the key features of the RDNA 3 GPUs highlighted by AMD will include:

  • 5nm Process Node
  • Advanced Chiplet Packaging
  • Rearchitected Compute Unit
  • Optimized Graphics Pipeline
  • Next-Gen AMD Infinity Cache
  • >50% Perf/Watt vs RDNA 2
 
If my 750 watt psu can handle it, I may get the 7800xt. Waiting to see how much vram the 4060ti/70 has, and power usage.

My cpu is the 5800x3D so not a power hungry 12900k... Maybe my power supply can cope.
 

GreatnessRD

Member
Jack Nicholson Reaction GIF
 

twilo99

Member
If my 750 watt psu can handle it, I may get the 7800xt. Waiting to see how much vram the 4060ti/70 has, and power usage.

My cpu is the 5800x3D so not a power hungry 12900k... Maybe my power supply can cope.

I think 750w should be enough. How are you liking that x3d?

I'm thinking about going for the exact same combo when RDNA3 comes out, and I have an 850w supply.
 

winjer

Gold Member
If my 750 watt psu can handle it, I may get the 7800xt. Waiting to see how much vram the 4060ti/70 has, and power usage.

My cpu is the 5800x3D so not a power hungry 12900k... Maybe my power supply can cope.

It's not the wattage of the power supply that matters so much as the quality and efficiency.
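For anyone doing the same mental math, here is a rough back-of-the-envelope sketch (every figure is a ballpark assumption, and it still doesn't capture the quality/transient point above, which is where cheaper units actually fall over):

```python
# Rough PSU headroom check -- all figures are ballpark assumptions, not measured values.
cpu_w = 120        # a 5800X3D under full load is usually well below this
gpu_w = 300        # placeholder for a hypothetical "7800 XT"-class board power
rest_w = 75        # motherboard, RAM, SSDs, fans, USB devices
transient_factor = 1.3   # modern GPUs can spike well above their average board power

steady_state = cpu_w + gpu_w + rest_w
worst_case = cpu_w + gpu_w * transient_factor + rest_w

psu_w = 750
print(f"steady state: {steady_state} W ({steady_state / psu_w:.0%} of PSU)")
print(f"with GPU transients: {worst_case:.0f} W ({worst_case / psu_w:.0%} of PSU)")
```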
 

Trogdor1123

Member
If my 750 watt psu can handle it, I may get the 7800xt. Waiting to see how much vram the 4060ti/70 has, and power usage.

My cpu is the 5800x3D so not a power hungry 12900k... Maybe my power supply can cope.
My uncle bought the 3d to replace his 5800. Says it runs about 10 degrees cooler and has better performance
 

winjer

Gold Member
The question is whether this improvement in Ray-tracing is sufficient to catch up to nvidia, because Ada Lovelace is sure to also have some Ray-tracing improvements.
This next Fall is going to be very hot in the GPU market.
 

Crayon

Member
I hope it's up to some acceptable level. It's abysmal right now. I can live without dlss, but I really like some implementations of ray tracing, and with RDNA 2 stuff it's hard to justify turning it on.

Look at cyberpunk on the new consoles. You slash your frame rate in half just for contact shadows. That's harsh.
 
I think 750w should be enough. How are you liking that x3d?

I am think about going for the exact same combo when RDNA3 comes out and I have a 850w supply
It's an absolute beast, I posted performance difference on like page 10 or so on the 2022 PC thread :)

Metro Exodus EE is getting choked hard by my gpu, can't hit full utilization even at 1080p dlss (720p internal) targeting 120.
 
The question is whether this improvement in Ray-tracing is sufficient to catch up to nvidia, because Ada Lovelace is sure to also have some Ray-tracing improvements.
This next Fall is going to be very hot in the GPU market.

I think Nvidia will pull ahead with ray-tracing performance.

I expect RDNA 3 to have more dedicated accelerators for ray-tracing when compared to RDNA 2 which means it will be a substantial improvement. Likely Ampere levels of performance or maybe even ahead, just speculation on my part though.
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
Greater than 50% perf per watt uplift?
We can basically gauge how powerful these chips are based on TDPs... effectively that could mean a 200W RDNA3 card could walk an RX 6950 XT, which in pure raster trades blows with the RTX 3090 Ti.
At 200W... shiiiiiiieeeeeeeeet, that's gonna be glorious if their FSR and RT solutions are actually good.
RTX 3090 Ti+ performance for sub $500 while staying in the 200W range sounds like madness. Honestly, that RX 7700 XT might be the best bang-for-buck card next generation, especially for my EU brothers struggling with electricity prices.
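Napkin math on that, treating the +50% as a floor and assuming (wrongly, but fine for a napkin) that performance scales linearly with power:

```python
# Napkin math on the ">50% perf/watt" claim -- every number is an illustrative assumption.
rx6950xt_power = 335        # W, roughly the 6950 XT's board power
perf_per_watt_gain = 1.5    # AMD's ">50% perf/watt vs RDNA 2", taken at its floor
rdna3_power = 200           # W, the hypothetical midrange card discussed above

# Pretend performance scales linearly with power (it doesn't, but napkin math):
relative_perf = (rdna3_power * perf_per_watt_gain) / rx6950xt_power
print(f"~{relative_perf:.0%} of a 6950 XT at {rdna3_power} W")  # ~90% at the floor of the claim
```

Anything meaningfully past +50% closes the rest of that gap.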

The 4K gaming dream might actually be upon us.

But let's be real, I don't actually see RDNA3 making a big dent in Nvidia's chokehold.
We can dream though, cuz Zen made Intel get off their asses; maybe RDNA3 will have Nvidia realizing they need to get frikken serious.



Zen4 with AVX-512 is sounding pretty amazing.
A 35% improvement over Zen3 is actually quite large.
Raptor Lake isn't expected to be that high, more in the ~20% range.
The CPU war will be looking mighty tasty come end of year.

They also confirmed that Samsung's mobile chips are gonna have RDNA inside, can't wait to see how that turns out.
Samsung are all but ready for their N-Gage moment.

 

01011001

Banned
Well, all that matters in the end is whether RDNA3 can compete with RTX 40 in RT performance. If not, then this will be another generation where most people will ignore AMD.

The current generation cannot repeat itself. We are currently seeing the RTX 3060 Ti outperform the RX 6900 XT as soon as a game makes use of raytracing.
 

Panajev2001a

GAF's Pleasant Genius
The question is whether this improvement in Ray-tracing is sufficient to catch up to nvidia, because Ada Lovelace is sure to also have some Ray-tracing improvements.
This next Fall is going to be very hot in the GPU market.
I wonder how long they can delay having to introduce the equivalent of tensor cores in their SoCs. Shaders are fast, but dedicated HW is still an order of magnitude faster / more power efficient.
 

hlm666

Member
I want to see those TDP numbers compared to the monsters that Nvidia is likely to drop.
It's not likely to be a massive difference between the companies at equivalent performance tiers. Nvidia had to push the 3000 series hard because Samsung's 8nm was not as efficient as the TSMC 7nm AMD were on; that changes this time round. Even that kopite guy said something about Nvidia having better efficiency not long ago.

"oh oh an OEM little bird just told me that, from latest IHV marketing material, AD104 has better claimed efficiency than Navi 33. I know, it's the opposite of what the leakers are saying but my source is solid. Don't ask for more, that's all I have for you today...."

 

sendit

Member
Amazing! This is great news! Enhanced ray tracing and higher clock speeds. Truly unexpected from AMD's next generation of GPUs.
 

Crayon

Member
I wonder for how long they can delay having to introduce the equivalent of tensor cores in their SoC’s. Shaders are fast but dedicated HW is still a magnitude faster / more power efficient.

Oh hey, it's been a while.

When you say equivalent, do you mean any hardware to accelerate RT, or a thing that specifically does the matrix... thing? I don't understand how that really works. I'm wondering if AMD needs something just like tensor cores to compete in RT.
 

Diogodx

Neo Member
It's likely to not be a massive difference between companies at equivalent performance tiers. Nvidia had to push the 3000 series hard because the samsung 8nm was not as efficient as tsmc 7nm AMD were on, that changes this time round. Even that kopilite guy said something about nvidia having better efficiency not long ago.

"oh oh an OEM little bird just told me that, from latest IHV marketing material, AD104 has better claimed efficiency than Navi 33. I know, it's the opposite of what the leakers are saying but my source is solid. Don't ask for more, that's all I have for you today...."

Navi 33 is rumored to be on 6nm, which is basically the same 7nm with minor tweaks. It would be terrible if Nvidia could not beat that on 5nm.
 

Three

Member
I wonder for how long they can delay having to introduce the equivalent of tensor cores in their SoC’s. Shaders are fast but dedicated HW is still a magnitude faster / more power efficient.
I think their line of thinking is that games will target their console SoCs anyway so I wouldn't expect them to significantly change their RT architecture until maybe next gen and even that's a maybe.
 

Panajev2001a

GAF's Pleasant Genius
Oh hey, it's been a while.

When you say equivalent, do you mean any hardware to accelerate RT, or a thing that specifically does the matrix... thing? I don't understand how that really works. I'm wondering if AMD needs something just like tensor cores to compete in RT.

I was thinking about ML in general / tensor cores, which is NVIDIA's name for their ML accelerator cores. DLSS 2.x is one of the uses they made of them, but another example is image denoising, which can be used to reduce the need to increase the number of rays you trace every frame (look at Quake 2 RTX before and after they added denoising).
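To make the "fewer rays + denoise" idea concrete, here's a toy sketch. It uses a flat grey test image and a crude box filter, so it is nothing like NVIDIA's actual ML denoisers, but it shows how averaging neighbouring pixels buys back most of the noise you would otherwise need many more rays per pixel to remove:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy illustration only: a flat grey "wall" rendered with 1 noisy lighting sample per pixel.
true_value = 0.5
noisy_1spp = true_value + rng.normal(0.0, 0.2, size=(64, 64))

# "Brute force": trace 16x more rays per pixel and average them.
more_rays = true_value + rng.normal(0.0, 0.2, size=(16, 64, 64))
avg_16spp = more_rays.mean(axis=0)

# "Denoise": keep 1 sample per pixel, but average each pixel with its neighbours
# (a crude 5x5 box filter standing in for a real spatial/temporal denoiser).
k = 5
pad = k // 2
padded = np.pad(noisy_1spp, pad, mode="edge")
denoised = np.zeros_like(noisy_1spp)
for y in range(64):
    for x in range(64):
        denoised[y, x] = padded[y:y + k, x:x + k].mean()

for name, img in [("1 spp", noisy_1spp), ("16 spp", avg_16spp), ("1 spp + denoise", denoised)]:
    print(f"{name:16s} RMS error: {np.sqrt(((img - true_value) ** 2).mean()):.3f}")
```

On this (unrealistically flat) image the denoised 1-spp result lands in the same error ballpark as the 16-spp one; real denoisers have to do far more work to preserve edges and detail, which is where the ML approaches come in.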
 

Tams

Gold Member
I hope it's up to some acceptable level. It's abysmal, right now. I can live without dlss, but I really like some implementations of ray tracing and with our DNA to stuff it's hard to justify turning it on.

Look at cyberpunk on the new consoles. You slash your frame rate in half just for contact shadows. That's harsh.
brb, just going to remove all my DNA.

Mad Mr Bean GIF by Working Title
wes craven GIF
 

Buggy Loop

Member
I was thinking about ML in general / tensor cores, which is NVIDIA's name for their ML accelerator cores. DLSS 2.x is one of the uses they made of them, but another example is image denoising, which can be used to reduce the need to increase the number of rays you trace every frame (look at Quake 2 RTX before and after they added denoising).

For games they denoise on the shader pipeline sadly, not the tensor cores. For non-real-time applications they do use tensor cores in OptiX/Blender.

They are clearly working on it though, and I think the Marbles demo used it, but I don't think a single published game features this yet.

In fact they might have moved on to some other solutions completely; they've published a lot of papers recently.

Don't know if it will be only for offline renderers, but if they manage all that in gaming with a new architecture that is no longer on Turing's baseline, and they make a new ASIC model for the above methods? …

boom smile GIF
 
If my 750 watt psu can handle it, I may get the 7800xt. Waiting to see how much vram the 4060ti/70 has, and power usage.

My cpu is the 5800x3D so not a power hungry 12900k... Maybe my power supply can cope.
8/10 GB for the 4060 Ti and 4070, and you get 300 W for the latter.
I can see it coming already.
 

supernova8

Banned
For games they denoise on the shader pipeline sadly, not the tensor cores. For non real time applications they do use tensor cores in Optix/Blender.

They are clearly working on it though and I think the marble demo used it, but I don’t think a single published game features this yet.

In fact they might have moved away to some other solutions completely, they published a lot of papers recently.

Don’t know if it will be only for offline renderes, but if they manage all that in gaming with a new architecture that is no longer on Turing’s baseline and they make a new ASIC model for the above methods? …

boom smile GIF

deer fallow skalerzzz
 

SolidQ

Member
I think Nvidia will pull ahead with ray-tracing performance.

I expect RDNA 3 to have more dedicated accelerators for ray-tracing when compared to RDNA 2 which means it will be a substantial improvement. Likely Ampere levels of performance or maybe even ahead, just speculation on my part though.
Ampere is on average 50-60% faster in RT than RDNA2.

Most leaks are saying RDNA3 will be 3.5x+ faster than RDNA2.
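Taking both of those numbers at face value (a big if, since one is a rumour), the implied RT comparison against Ampere is just a division:

```python
# Purely illustrative -- combining an average benchmark figure with an unconfirmed leak.
ampere_vs_rdna2 = 1.55   # "Ampere avg 50-60% faster in RT than RDNA2", midpoint
rdna3_vs_rdna2 = 3.5     # "most leaks say 3.5x+ RDNA2"

rdna3_vs_ampere = rdna3_vs_rdna2 / ampere_vs_rdna2
print(f"implied RDNA3 vs Ampere in RT: ~{rdna3_vs_ampere:.1f}x")  # ~2.3x, before Lovelace moves the target
```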
 

draliko

Member
Now that the prices of the 3080 have fallen I'm so tempted to buy one... on the other hand I'm way too curious to see what AMD will do with the 7XXX series... for sure any mid/high card you buy now will suffice for the entire generation. (I still think the 3080/4070 will be the way to go from here till January.)
 
I haven't bought an AMD GPU in years, and even then I only bought one because I had an old PC and was limited to AGP graphics cards at a time when most new GPUs were moving to PCI-e. I did try two 5870s in CrossFire for about a year, but found that, like NVIDIA's SLI (which I tried with two 680s), it was so poorly supported that it ultimately just wasn't worth the hassle.

I now only buy high-end NVIDIA GPUs because, at this point, they are so far ahead of AMD's in terms of features and performance that I find myself being unimpressed by anything new that AMD release. NVIDIA pushed ray-tracing and now AMD are playing catch-up, on top of years of lacklustre driver support (their DirectX 11 and OpenGL drivers, for example). NVIDIA also have DLSS, which was shaky at launch but has now developed into something truly worth using, especially with ray-tracing. AMD again are behind here, having had only a (poor) software upscaling solution for their cards until the recent release of FSR 2.0, and even then its image quality cannot compare to the hardware-supported DLSS on NVIDIA's cards.

I recently upgraded my 4.5-year-old GTX 1080 Ti to an RTX 3080 when I finally found a Palit one selling at its MSRP instead of the ridiculously overinflated prices that we have had for the last 2 (or is it 3?) years. I am very happy with it. I would like to see AMD become more competitive and innovative in the GPU market, like they were against Intel in the CPU market. However, I just don't think they will ever offer a graphics card that beats NVIDIA in terms of features and performance, as they are just too far behind at this point.
 

01011001

Banned
That comparison video is awful, they aren't even using the same specs between the two PC's.

yeah but still, a 6900xt is roughly competing with the 3060ti as soon as you turn on RT.
Someone mentioned in one of the FSR2.0 threads that he gets ~30fps in Cyberpunk when RT is on on a 6900XT at 1440p, and that's literally what I get on my 3060ti

and, then there's comparing the RX6000 series to the GTX10 series... and it becomes a bit sad

this is Watch Dogs 3 running with RT max on a 1080ti



and here it is on a 6900xt, both are 1080p


you can see... the 6900xt is about 65%-68% faster... than a card with no RT acceleration.
I mean that is something I guess, but damn...

also... that 1080ti really still powering on eh? that thing can compete with the damn consoles still!
too bad that less and less games with RT actually support the GTX10 series cards... that 1080ti could still be used for RT on console level if they did for many games.
especially Doom Eternal is a bummer, I really wonder how that would run on the 1080ti given that it is extremely well optimized
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
now that the prices of 3080 have fallen i'm so tempeted to buy one... on the other hand i'm way too curious to sse what amd will do with 7XXX series... for sure everything mid/high card you buy now will suffice for the entire generation. (still thinks 3080 /4070 will be the way to go from here till january)
The 4070 and 3080 should have similarish performance while drawing similarish power.
So either will do the job.
But assuming MSRP hasn't skyrocketed because Nvidia gonna Nvidia, you could wait 6 months for the 4070.


I recently upgraded my 4.5 year old GTX 1080 Ti to a RTX 3080 finally when I found a Palit one selling at its MSRP instead of the ridiculous overinflated prices that we have had for the last 2 (or is it 3?) years. I am very happy with it. I would like to see AMD becoming more competitive and innovative in the GPU market like they were with Intel in the CPU market. However, I just don't think they will ever offer a graphics card that beats NVIDIA in terms of features and performance as they are just too far behind at this point.
Palit brothers.👊
Due to their lack of availability in the US they don't get much attention.
But Palits cards are some of the best out there.
Their build quality rivals pretty much all the "top dogs".
And they don't fluff their power limits... next gen I'm almost certainly getting a Palit. I don't need any higher power limits, because seriously, who is overclocking in this day and age?

A 3080 should easily last you the generation assuming you are playing at UW 1440p (like me).
If you are at 4K and want Native + All the bells, you might need an upgrade.
And realistically, for an RTX 3080 owner the only real upgrade is the 4090.
The 3080 being on GA102 will give it legs for days, methinks.

Being real, gen-on-gen upgrading doesn't make sense anymore.
Your 3080 will be fine until the 5080 shows up.
also... that 1080ti really still powering on eh? that thing can compete with the damn consoles still!
too bad that less and less games with RT actually support the GTX10 series cards... that 1080ti could still be used for RT on console level if they did for many games.
especially Doom Eternal is a bummer, I really wonder how that would run on the 1080ti given that it is extremely well optimized
Games that don't use the DXR backend seemingly don't let Nvidia's DXR override work.
Pretty much every game that has RT is using DXR though, so the 10 series should work.
Doom Eternal is Vulkan.
 

draliko

Member
The 4070 and 3080 should have similarish performance while drawing similarish power.
So either will do the job.
But assuming MSRP hasnt skyrocketed because Nvidia gonna Nvidia, you could wait 6 months for the 4070.
Yep, that's true. In fact I'm more curious about AMD's offering on the new node than Nvidia's. I'm at 1440p too, and a 3080 will be more than enough for the whole gen (considering what the consoles are packing...). I'm still trying to sell my 5700 XT, so for now I'm still waiting. In the last week prices have come down by about €100 here (a 3080 can be found at €750), but as soon as I find a 3080 for €700 I'll bite.
 

01011001

Banned
Games that dont use DXR backend seemingly dont let Nvidias DXR override work.
Pretty much every game that has RT is using DXR though so the 10 series should work.
Doom Eternal is Vulkan.

Doom Eternal has a DX12 renderer though, right? And I'm pretty sure it doesn't work either way.
Also pretty sure it's at the mercy of the developers to flag GTX 10 cards for RTX compatibility.

What does Quake 2 RTX run with? In game it simply says "RTX" and OpenGL... Quake 2 RTX works on 10 series cards.
edit: Quake 2 RTX is Vulkan
 

Buggy Loop

Member
yeah but still, a 6900xt is roughly competing with the 3060ti as soon as you turn on RT.
Someone mentioned in one of the FSR2.0 threads that he gets ~30fps in Cyberpunk when RT is on on a 6900XT at 1440p, and that's literally what I get on my 3060ti

and, then there's comparing the RX6000 series to the GTX10 series... and it becomes a bit sad

this is Watch Dogs 3 running with RT max on a 1080ti



and here it is on a 6900xt, both are 1080p


you can see... the 6900xt is about 65%-68% faster... than a card with no RT acceleration.
I mean that is something I guess, but damn...

also... that 1080ti really still powering on eh? that thing can compete with the damn consoles still!
too bad that less and less games with RT actually support the GTX10 series cards... that 1080ti could still be used for RT on console level if they did for many games.
especially Doom Eternal is a bummer, I really wonder how that would run on the 1080ti given that it is extremely well optimized


Yea, somehow AMD was part of the consortium with Nvidia and Microsoft for DXR for years and saw Nvidia be the first at implementing it with Turing… and even 2 years later, after seeing someone’s homework, they managed to make a version that is less efficient on a raytracing/CU basis than Turing.

I hope they are faster, and not just because there’s more CUs but that they changed the pipeline.

Although, according to /r/AMD, nothing is worth turning ray tracing on for ;)
 

Sanepar

Member
The 4070 and 3080 should have similarish performance while drawing similarish power.
So either will do the job.
But assuming MSRP hasnt skyrocketed because Nvidia gonna Nvidia, you could wait 6 months for the 4070.



Palit brothers.👊
Due to their availability in the US they dont get much attention.
But Palits cards are some of the best out there.
Their build quality rivals pretty much all the "top dogs".
And they dont fluff their powerlimits....nextgen im almost certainly getting a Palit, I dont need any higher powerlimits because serious who is overclocking in this day and age?

A 3080 should easily last you the generation assuming you are playing at UW 1440p (like me).
If you are at 4K and want Native + All the bells, you might need an upgrade.
And realistically for RTX 3080 owner the only real upgrade is the 4090.
Putting the 3080 on GA102 will give it legs for days me thinks.

Being real gen on gen upgrading doesnt make sense anymore.
Your 3080 will be fine fine until the 5080 shows up.

Games that dont use DXR backend seemingly dont let Nvidias DXR override work.
Pretty much every game that has RT is using DXR though so the 10 series should work.
Doom Eternal is Vulkan.
I believe 4070 will be between a 3090 and 3090 ti.
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
Doom Eternal has a DX12 renderer tho right? and I'm pretty sure it doesn't work either way.
also pretty sure it's also at the mercy of the developers to flag GTX10 cards for RTX compatibility.

what does Quake 2 RTX run with? in game it simply says "RTX" and Open GL... Quake 2 RTX works on 10 series cards
edit: Quake 2 RTX is Vulkan
Quake 2 RTX uses VKRay for raytracing.
VKRay is Nvidia's solution.

Maybe id are using some other Vulkan raytracing solution that doesn't recognize the GTX 10 series as RT capable?
 

01011001

Banned
Quake 2 RTX uses VKRay for Raytracing.
VKRay is Nvidias solution.

Maybe id are using some other Vulkan Raytracing solution that doesnt recognize the GTX10 series as RT capable?

Doom has a DX12 renderer tho, wouldn't that mean it uses DXR? at least in that mode?
 

FireFly

Member
yeah but still, a 6900xt is roughly competing with the 3060ti as soon as you turn on RT.
Someone mentioned in one of the FSR2.0 threads that he gets ~30fps in Cyberpunk when RT is on on a 6900XT at 1440p, and that's literally what I get on my 3060ti

and, then there's comparing the RX6000 series to the GTX10 series... and it becomes a bit sad

this is Watch Dogs 3 running with RT max on a 1080ti



and here it is on a 6900xt, both are 1080p


you can see... the 6900xt is about 65%-68% faster... than a card with no RT acceleration.
I mean that is something I guess, but damn...

also... that 1080ti really still powering on eh? that thing can compete with the damn consoles still!
too bad that less and less games with RT actually support the GTX10 series cards... that 1080ti could still be used for RT on console level if they did for many games.
especially Doom Eternal is a bummer, I really wonder how that would run on the 1080ti given that it is extremely well optimized

I don't think it's just "turning on" ray tracing, since AMD often do OK when just using ray-traced shadows. In "lightly" ray-traced titles like Metro, they can deliver 3070/2080 Ti level performance. The issue is that performance tanks as more ray tracing effects are added. Cyberpunk is basically a worst case because it has RT reflections + shadows + GI + diffuse lighting.
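One way to see why stacking effects "tanks" performance: the costs add up in milliseconds, so the fps number collapses faster and faster even if each individual effect looks cheap on its own. A toy sketch with made-up per-effect costs:

```python
# Why stacking RT effects hurts so much: costs add in milliseconds, not in fps.
# All per-effect costs below are made-up illustrative numbers.
base_ms = 8.0                                  # raster-only frame time (125 fps)
effects = {"shadows": 2.0, "reflections": 4.0, "GI": 5.0, "diffuse lighting": 3.0}

frame_ms = base_ms
print(f"raster only: {1000 / frame_ms:.0f} fps")
for name, cost_ms in effects.items():
    frame_ms += cost_ms
    print(f"+ RT {name:16s} -> {1000 / frame_ms:.0f} fps")
```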
 