
AMD RDNA 3 GPUs To Be More Power Efficient Than NVIDIA Ada Lovelace GPUs, Navi 31 & Navi 33 Tape Out Later This Year

hlm666

Member
Some of you need to remember Nvidia is on a worse node than AMD. They will both be on TSMC 5nm next round, so AMD won't have the free performance advantage, although Nvidia loses its price advantage, so expect prices to go up from both companies. AMD should have sunk the boot in while they had the advantage, but somehow the pandemic worked out for Nvidia on supply; some bad luck for AMD there.

"Quickly recapping, Samsung’s node is 50% worse in density than TSMC’s, performance is 15% worse on average, and yet NVIDIA has managed to include 28.3 billion transistors and increase the Boost frequency to 1, 7 GHz , but also and this is a key point, it has managed to increase the energy efficiency of each chip by 1.9x compared to Turing."

 

ZywyPL

Banned
Nvidia: laughs in DLSS

Rasterization in RDNA2 is already better than any NV card, so what? The battlefield has changed while AMD seems to be still stuck in the past. Hopefully RDNA3 will implement the ML features from the XSX, and FSR will be tweaked to use them in a similar fashion to how DLSS works.

The AMD cycle:

- rabid, hyperbolic speculation about how much ass product++ is going to kick

- it launches and falls short of the hype

- “Nvidia/Intel might be better right now, but just wait until [better drivers/games are optimized for AMD because consoles use the same architecture/games start to benefit from all that VRAM]”

- competitor releases a new product that dominates the market

- “it’s not fair to compare the other guy’s next gen product with AMD’s current one. Just wait for product++”

- repeat

Yeah, I'm still waiting to see the 3080 tank in performance due to its 10GB VRAM while RX 6000 cards "age like a fine wine" ;)
 

Kenpachii

Member
Nvidia: laughs in DLSS

Rasterization in RDNA2 is already better than any NV card, so what? The battlefield has changed while AMD seems to be still stuck in the past. Hopefully RDNA3 will implement the ML features from the XSX, and FSR will be tweaked to use them in a similar fashion to how DLSS works.



Yeah, I'm still waiting to see the 3080 tank in performance due to its 10GB VRAM while RX 6000 cards "age like a fine wine" ;)

U mean, a card that is still Nvidia's flagship on the market, that sponsors every game under the sun, is not having issues with 10GB yet in games that are all made around 3GB PS4 games? No way.
 
Some of you need to remember Nvidia is on a worse node than AMD. They will both be on TSMC 5nm next round, so AMD won't have the free performance advantage, although Nvidia loses its price advantage, so expect prices to go up from both companies. AMD should have sunk the boot in while they had the advantage, but somehow the pandemic worked out for Nvidia on supply; some bad luck for AMD there.

"Quickly recapping, Samsung’s node is 50% worse in density than TSMC’s, performance is 15% worse on average, and yet NVIDIA has managed to include 28.3 billion transistors and increase the Boost frequency to 1, 7 GHz , but also and this is a key point, it has managed to increase the energy efficiency of each chip by 1.9x compared to Turing."

How do you figure Nvidia will be able to get any product from TSMC next time around?
I see consoles needing AMD chips, I see RDNA3, and now I see the Steam Deck needing parts from AMD as well. Chances are Nvidia stays with Samsung, or at least majority Samsung, next year too.
 

Md Ray

Member
Power consumption has never been the deciding factor in buying product A over B; performance always has been and always will be for me. If AMD can top Nvidia, I'll go with them. But it can't just be winning in rasterization; it has to win in ray tracing as well.

I'm not saying it can't happen, but look at RDNA2 hype/expectations compared to reality. Rasterization was great, and even power consumption was good. But DLSS and ray tracing were a night-and-day difference, which turned me off and led me to Ampere instead.
Same here. On top of that, NVIDIA was way cheaper than AMD GPUs for me, as is usually the case in my country. For instance, the 5700 XT was going for like $100+ more than the RTX 3070 FE at the time of my purchase. :messenger_tears_of_joy:
 

nekrik

Member
AMD is full of #%^* as always. I still remember the "poor Volta" memes from them, and don't forget the RDNA 2.0 RX 6xxx series, which was allegedly going to be an "Ampere killer".
 

RoboFu

One of the green rats
U mean, a card that is still Nvidia's flagship on the market, that sponsors every game under the sun, is not having issues with 10GB yet in games that are all made around 3GB PS4 games? No way.
It won’t tank but it won’t push ultra textures in taxing games. The big pro it has going for it is that most games are still not totally new and still spec to last gen consoles and gpus.
 

SantaC

Member
Nvidia: laughs in DLSS

Rasterization in RDNA2 is already better than any NV card, so what? The battlefield has changed while AMD seems to be still stuck in the past. Hopefully RDNA3 will implement the ML features from the XSX, and FSR will be tweaked to use them in a similar fashion to how DLSS works.



Yeah, I'm still waiting to see the 3080 tank in performance due to its 10GB VRAM while RX 6000 cards "age like a fine wine" ;)
Rasterization is the foundation of all games.

Current PC games with ray tracing that's actually worth it? That's like 2 games.
 

Armorian

Banned
U mean, a card that is still Nvidia's flagship on the market, that sponsors every game under the sun, is not having issues with 10GB yet in games that are all made around 3GB PS4 games? No way.

There are more AMD-partnered games right now, I would argue, and among them there is one Ubisoft title that can't even use an Nvidia card properly - AC Valhalla (it maxes out at ~70% of the card's TDP). You can blame console makers for making machines with 13GB of available RAM, and that's for both GPU and CPU, so the 10GB 3080 will be fine for a long time, I guess. Plus there is the XSS, so games will have to run under ~8GB of memory anyway; 8GB cards will be fine for the entire gen, just not on ultra settings.
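A rough back-of-the-envelope version of that memory-budget argument, as a sketch (the OS-reserved and CPU-side splits below are illustrative assumptions, not official platform figures):

```python
# Rough console memory-budget arithmetic behind the VRAM argument above.
# The splits are illustrative assumptions, not official platform figures.

total_ram_gb = 16.0    # PS5 / Series X total GDDR6
os_reserved_gb = 3.0   # leaves ~13 GB available to games, per the post
cpu_side_gb = 3.5      # assumed share for game code/sim data (illustrative)

gpu_budget_gb = total_ram_gb - os_reserved_gb - cpu_side_gb
print(f"~{gpu_budget_gb:.1f} GB left for GPU-style data")  # ~9.5 GB, under 10 GB

# The Series S pushes the multiplatform floor lower still (~8 GB usable),
# which is the basis for the "8GB cards will be fine" claim above.
```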
 

FireFly

Member
I hope we'll see such massive jumps from both makers.

If you believe the rumors, they have Hopper set for '24 on their roadmap, but knowing how extremely rare it is to see any rumor from wccftech come to fruition, yeah, it's safe to ignore that as well.
kopite7kimi has arguably the best track record for leaks and he seems to be hinting that Nvidia will jump straight to Hopper if Lovelace is not enough.


Some of you need to remember Nvidia is on a worse node than AMD. They will both be on TSMC 5nm next round, so AMD won't have the free performance advantage, although Nvidia loses its price advantage, so expect prices to go up from both companies. AMD should have sunk the boot in while they had the advantage, but somehow the pandemic worked out for Nvidia on supply; some bad luck for AMD there.

"Quickly recapping, Samsung’s node is 50% worse in density than TSMC’s, performance is 15% worse on average, and yet NVIDIA has managed to include 28.3 billion transistors and increase the Boost frequency to 1, 7 GHz , but also and this is a key point, it has managed to increase the energy efficiency of each chip by 1.9x compared to Turing."

The 1.9x is basically Nvidia estimating the power consumption of a 3090 clocked to deliver 2080 performance. Obviously, if you take a huge chip and clock it really low, you can get great performance per watt, because power scaling is not linear. But comparing product to product, the 3080 is only ~18% more power efficient at 4K than the 2080/2080 Ti.


I wonder how much of that power efficiency improvement is coming simply from the half node shrink, given that Samsung were advertising 35% better power efficiency than their 14 nm process, for RF chips at least.
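For what it's worth, that product-to-product efficiency figure can be sanity-checked with some quick arithmetic; a minimal sketch, where the board powers and the 4K performance ratio are ballpark assumptions rather than measurements:

```python
# Back-of-the-envelope check of the ~18% perf-per-watt claim above.
# perf_ratio and the board-power figures are rough illustrative values.

def efficiency_gain(perf_ratio: float, power_new: float, power_old: float) -> float:
    """Relative perf/W: (perf_new / perf_old) divided by (power_new / power_old)."""
    return perf_ratio / (power_new / power_old)

# Assumed: RTX 3080 ~1.65x an RTX 2080 at 4K, at ~320 W vs ~225 W board power.
gain = efficiency_gain(perf_ratio=1.65, power_new=320.0, power_old=225.0)
print(f"~{(gain - 1) * 100:.0f}% better perf/W")  # ~+16%, near the ~18% figure
```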
 

twilo99

Member
RDNA3 was always "the endgame" for the RDNA family in general, so yes, this is going to be a very potent/capable architecture. Looking forward to it.

RDNA2 is really important, and it will remain so for the next 4-6 years for obvious reasons, but RDNA3 is going to be amazing.
 

Turk1993

GAFs #1 source for car graphic comparisons
AMD has got nothing on Nvidia, not even rasterization performance. Some people are trippin', and with DLSS and RT performance on top of that, they are, as always, behind.
[attached benchmark charts: 4K relative performance, AMD vs Nvidia]
 

Kenpachii

Member
It won’t tank but it won’t push ultra textures in taxing games. The big pro it has going for it is that most games are still not totally new and still spec to last gen consoles and gpus.

There are more AMD-partnered games right now, I would argue, and among them there is one Ubisoft title that can't even use an Nvidia card properly - AC Valhalla (it maxes out at ~70% of the card's TDP). You can blame console makers for making machines with 13GB of available RAM, and that's for both GPU and CPU, so the 10GB 3080 will be fine for a long time, I guess. Plus there is the XSS, so games will have to run under ~8GB of memory anyway; 8GB cards will be fine for the entire gen, just not on ultra settings.

AC is a mixed bag on Nvidia and AMD cards. In most big juggernaut PC titles Nvidia is present in, AMD is nowhere to be seen. AC Valhalla is maybe the exception.

Anyway

The thing is, we don't know where next gen is going to take us until we actually get software that really pushes those boxes, and first-gen software is probably not going to do that.

But still, about VRAM.

VRAM use can easily snowball: RTX IO reserving a chunk, console games moving back into 900p-with-low-settings territory (drops while targeting 1080p or 1440p), PS5 ports that don't hit Xbox, higher minimum requirements, devs choosing higher VRAM amounts as the minimum because the market has moved on, or simply dirty PC ports.

It could also go the other way around: DLSS, the PS5 using more memory than we think it does for swapping data (i.e. reserving a large chunk of memory), a high-resolution RT focus = easier to run on lower-VRAM cards.

However, 16GB over 10GB is a big difference, and frankly I wouldn't want to risk 10GB on a card if I kept it around for longer. The 3080 has been out now for, what, 9 months or so? Soon it'll be a year old, with still no next-gen game on the market that was built specifically to push the consoles to the edge and also has a PC port. Third parties are still on old gen, which helps the 3080 stay relevant as movement forward is slower. But the moment those games do hit, it could be over fast; you have that risk.

And the reason I know that risk is because the 580 was top-end when I bought it and stayed that way for a long while, until a gen shift happened and suddenly it couldn't run anything anymore, which had been considered impossible before then, yet it happened, and the reason was VRAM. Never again.

What will happen, however, is all just speculation at this point, but I'd rather be prepared than not when I upgrade.
 

rnlval

Member
AMD hit the ball out of the park when they introduced chiplet designs with Zen 2, and likewise we could see them having the same "Zen 2" moment with the chiplet-design RDNA 3. The top card is rumoured to have around 160 CUs, which is insane. Besides rasterisation (which I think AMD will edge Nvidia out on), I'm curious to see how it'll stack up against Lovelace in terms of ray tracing performance.
3DMark's Mesh Shader benchmark:

RX 6800 XT scored 523.60 fps
[benchmark chart]

For ray tracing and mesh shaders, AMD would need 3X improvements to compete against NVIDIA's Ada.

Without mesh shaders (i.e. using shaders for the geometry workload), the RX 6800 XT/6900 XT's legacy geometry hardware is substantially inferior.
 

rnlval

Member
Nvidia: laughs in DLSS

Rasterization in RDNA2 is already better than any NV card, so what? The battlefield has changed while AMD seems to be still stuck in the past. Hopefully RDNA3 will implement the ML features from the XSX, and FSR will be tweaked to use them in a similar fashion to how DLSS works.



Yeah, I'm still waiting to see the 3080 tank in performance due to its 10GB VRAM while RX 6000 cards "age like a fine wine" ;)
RTX Ampere GPUs have excess TFLOPS/TIOPS for GPGPU decompression with the PC version of MS DirectStorage.
 
The AMD cycle:

- rabid, hyperbolic speculation about how much ass product++ is going to kick

- it launches and falls short of the hype

- “Nvidia/Intel might be better right now, but just wait until [better drivers/games are optimized for AMD because consoles use the same architecture/games start to benefit from all that VRAM]”

- competitor releases a new product that dominates the market

- “it’s not fair to compare the other guy’s next gen product with AMD’s current one. Just wait for product++”

- repeat

That's what people said about Ryzen, but look where we are now.

AMD has been competing with two companies that are each 10 times their size and has managed to do a pretty fucking good job.

3DMark's Mesh Shader benchmark:

RX 6800 XT scored 523.60 fps
[benchmark chart]

For ray tracing and mesh shaders, AMD would need 3X improvements to compete against NVIDIA's Ada.

Without mesh shaders (i.e. using shaders for the geometry workload), the RX 6800 XT/6900 XT's legacy geometry hardware is substantially inferior.


That's an interesting benchmark, but how does that impact actual GPU performance in games?
 

rnlval

Member
That's what people said about Ryzen, but look where we are now.

AMD has been competing with two companies that are each 10 times their size and has managed to do a pretty fucking good job.



That's an interesting benchmark, but how does that impact actual GPU performance in games?
Game development is in transition towards next-generation game console hardware.
 

nerdface

Banned
What other argument can you make when you are losing in both performance and features?

I’m a big fan of power efficiency, but their marketing team is just going back to the well.

Sell laptop chips. Survive.
 

99Luffy

Banned
The AMD cycle:

- rabid, hyperbolic speculation about how much ass product++ is going to kick

- it launches and falls short of the hype

- “Nvidia/Intel might be better right now, but just wait until [better drivers/games are optimized for AMD because consoles use the same architecture/games start to benefit from all that VRAM]”

- competitor releases a new product that dominates the market

- “it’s not fair to compare the other guy’s next gen product with AMD’s current one. Just wait for product++”

- repeat
RDNA2 already broke that cycle.
Which is why we now hear "doesn't matter since AMD doesn't have DLSS." And we'll keep hearing that for a while...

fake edit: Too late.
 

99Luffy

Banned
Some of you need to remember nvidia is on a worse node than AMD, they will both be on tsmc 5nm next round and AMD wont have the free advantage in performance although nvidia loses it's price advantage so expect prices going up from both companies. AMD should have sunk the boot while they had the advantage but somehow the pandemic worked out for nvidia with supply, some bad luck for amd there.

"Quickly recapping, Samsung’s node is 50% worse in density than TSMC’s, performance is 15% worse on average, and yet NVIDIA has managed to include 28.3 billion transistors and increase the Boost frequency to 1, 7 GHz , but also and this is a key point, it has managed to increase the energy efficiency of each chip by 1.9x compared to Turing."

Nvidia having a much bigger chip with more transistors was also an advantage.
 
The Toyota Prius is super efficient too, but you're not winning the quarter mile with it. I'm glad AMD is still out there swinging, but nothing comes close to NVIDIA. I really don't care how power efficient the 3090 Ti will be, I just want those sweet FPS.
 

Dream-Knife

Banned
The Toyota Prius is super efficient too, but you're not winning the quarter mile with it. I'm glad AMD is still out there swinging, but nothing comes close to NVIDIA. I really don't care how power efficient the 3090 Ti will be, I just want those sweet FPS.
RDNA2 is pretty good, man. I'm getting higher FPS with a 6800 than with a 3080 in Insurgency Sandstorm. That game is programmed poorly, though. It also asks for 13GB of VRAM...
 
RDNA2 is pretty good, man. I'm getting higher FPS with a 6800 than with a 3080 in Insurgency Sandstorm. That game is programmed poorly, though. It also asks for 13GB of VRAM...
Certainly there are corner cases in which AMD excels, but they are too few and far between, unfortunately, which has given Nvidia the ability to set any price they want for their flagships.
 

hlm666

Member
How do you figure Nvidia will be able to get any product from TSMC next time around?
I see consoles needing AMD chips, I see RDNA3, and now I see the Steam Deck needing parts from AMD as well. Chances are Nvidia stays with Samsung, or at least majority Samsung, next year too.
The consoles will stay on 7nm; not sure what the Steam Deck is on, but probably 7nm as well. They won't impact Nvidia's and AMD's 5nm products. Apple should be moving ahead, leaving behind a big chunk of 5nm capacity at TSMC for AMD and Nvidia. There were also reports of Nvidia booking TSMC 5nm capacity late last year.
 
The consoles will stay on 7nm; not sure what the Steam Deck is on, but probably 7nm as well. They won't impact Nvidia's and AMD's 5nm products. Apple should be moving ahead, leaving behind a big chunk of 5nm capacity at TSMC for AMD and Nvidia. There were also reports of Nvidia booking TSMC 5nm capacity late last year.
The consoles will not stay 7nm; there are already reports of the PS5 going 6nm, if I'm not misremembering. Regardless, that takes production capacity out of TSMC; as in, there are only so many chips they can produce per month.
I'm not saying you're wrong, though; there might be some capacity and Nvidia could get a share of it, but I think it's quite likely it will either be very limited or none at all. They might even only get capacity for the 4090/80 and leave the lower-end models in Samsung's hands. Or only the professional cards get that fabrication process. It's too early to know...
 

Darius87

Member
AMD has got nothing on Nvidia, not even rasterization performance. Some people are trippin', and with DLSS and RT performance on top of that, they are, as always, behind.
[attached benchmark charts: 4K relative performance, AMD vs Nvidia]
No one is tripping; TFLOP for TFLOP, AMD > Nvidia at rasterization and compute.
The 6900 XT is just 23 TFLOPs and in 2nd place, while the RTX 3090 is 35 TFLOPs in 1st place.
 

Armorian

Banned
I dunno, the last game I looked at was Resident Evil Village. Apparently the 6800 XT outperforms the 3090.

No one expected this. So yeah, broken. AMD flops = Nvidia flops

This game is designed to run well on AMD with RT. It's a very light implementation; the resolution of reflections is like 640x480 (or 320x240, LOL) and GI works in screen space. The engine itself also runs better on AMD, so it's no surprise it's performing better on RDNA even with RT; the PC port is just straight console code and it shipped with performance-breaking DRM...

No one is tripping; TFLOP for TFLOP, AMD > Nvidia at rasterization and compute.
The 6900 XT is just 23 TFLOPs and in 2nd place, while the RTX 3090 is 35 TFLOPs in 1st place.

Ampere teraflops can only be compared to Ampere teraflops; these numbers became meaningless when Nvidia started counting them differently:

2080Ti 13.5TF
3070 20TF

^ Both have almost identical performance
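Put as arithmetic, here's the per-TFLOP gap those two numbers imply; a small sketch that simply assumes the two cards land at the same real-world performance, as the post says:

```python
# Perf-per-TFLOP using the figures quoted above, assuming ~equal real-world
# performance for both cards (per the post).

cards = {
    "RTX 2080 Ti (Turing)": 13.5,  # paper TFLOPS
    "RTX 3070 (Ampere)":    20.0,
}
rel_perf = 1.0  # assumed identical real-world performance for both

for name, tflops in cards.items():
    print(f"{name}: {rel_perf / tflops:.3f} perf units per TFLOP")

# 20 / 13.5 ~= 1.48: Turing extracts ~48% more game performance per paper
# TFLOP, because Ampere's doubled FP32 datapath is shared with INT32 work
# and the rest of the chip (fill rate, geometry) didn't double.
```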
 

hlm666

Member
The consoles will not stay 7nm; there are already reports of the PS5 going 6nm, if I'm not misremembering. Regardless, that takes production capacity out of TSMC; as in, there are only so many chips they can produce per month.
I'm not saying you're wrong, though; there might be some capacity and Nvidia could get a share of it, but I think it's quite likely it will either be very limited or none at all. They might even only get capacity for the 4090/80 and leave the lower-end models in Samsung's hands. Or only the professional cards get that fabrication process. It's too early to know...
So it's too early to say Nvidia will be using TSMC 5nm when they booked capacity over 6 months ago, but saying the PS5 will be using 6nm based on some rumor that's got more holes in it than a fly screen is fine.

Consoles are not changing node until you get a new model like a Pro or a Slim; you can't just take the 7nm design and use a different process. Apple are moving to 3nm, leaving behind a whole lot of the 5nm capacity they used this year - Apple had 80% of TSMC's 5nm capacity. Pretty safe to assume AMD and Nvidia won't need all the capacity Apple are leaving there; if you think they do, buy shares now.
 
So it's too early to say Nvidia will be using TSMC 5nm when they booked capacity over 6 months ago, but saying the PS5 will be using 6nm based on some rumor that's got more holes in it than a fly screen is fine.

Consoles are not changing node until you get a new model like a Pro or a Slim; you can't just take the 7nm design and use a different process. Apple are moving to 3nm, leaving behind a whole lot of the 5nm capacity they used this year - Apple had 80% of TSMC's 5nm capacity. Pretty safe to assume AMD and Nvidia won't need all the capacity Apple are leaving there; if you think they do, buy shares now.
The 6nm node is irrelevant, whether they are or aren't moving to it. TSMC can only produce a certain number of chips, and there are contracts in place that will last years and years. If Apple moves to 3nm, those chips will still be TSMC-made. Unless they are expanding their production capacity, there's only so much they can do.
We're already having massive shortages as it is, and that's with Samsung in the mix. Remove them from the equation and we'll be in an even worse place come next year.

My prediction is Nvidia will still keep Samsung, probably for their 4060/4070, next time around. They will try to get some contract going for the 4090 under TSMC, with the lowest-performing chips sold as 4080s.

We'll see next year if I'm wrong.
 

Darius87

Member
Ampere teraflops can only be compared to Ampere teraflops; these numbers became meaningless when Nvidia started counting them differently:

2080Ti 13.5TF
3070 20TF

^ Both have almost identical performance
What do you mean, counting them differently? The RTX 3090 is 35.58 TFLOPs (10496 * 2 * 1695), the same way AMD's 6900 XT is 23.04 TFLOPs (5120 * 2 * 2250); it looks the same to me.
The 2080 Ti compensates with extra bandwidth and memory; that's why it performs the same as the 3070.
AMD cards performing better with fewer TFLOPs just shows RDNA2 has a better arch than Ampere for rasterization in games.
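For reference, this is the arithmetic being applied; a minimal sketch of the standard FP32 formula (shader count × 2 ops per clock for a fused multiply-add × boost clock), using the shader counts and clocks quoted above:

```python
# Theoretical FP32 throughput: shaders x 2 (one fused multiply-add per
# clock counts as two floating-point ops) x boost clock.

def fp32_tflops(shaders: int, boost_mhz: int) -> float:
    return shaders * 2 * boost_mhz / 1e6  # (ops/clock) * MHz -> TFLOPS

print(f"RTX 3090:   {fp32_tflops(10496, 1695):.2f} TFLOPS")  # ~35.58
print(f"RX 6900 XT: {fp32_tflops(5120, 2250):.2f} TFLOPS")   # ~23.04
```

The formula is indeed the same for both vendors; the dispute in the thread is over how much real game performance each paper TFLOP buys.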
 

SantaC

Member
AMD has got nothing on Nvidia, not even rasterization performance. Some people are trippin', and with DLSS and RT performance on top of that, they are, as always, behind.
[attached benchmark charts: 4K relative performance, AMD vs Nvidia]
Lol, 4K only, when a lot of people still have 1440p monitors, where the 6900 XT is better.
 

hlm666

Member
The 6nm node is irrelevant, whether they are or aren't moving to it. TSMC can only produce a certain number of chips, and there are contracts in place that will last years and years. If Apple moves to 3nm, those chips will still be TSMC-made. Unless they are expanding their production capacity, there's only so much they can do.
We're already having massive shortages as it is, and that's with Samsung in the mix. Remove them from the equation and we'll be in an even worse place come next year.

My prediction is Nvidia will still keep Samsung, probably for their 4060/4070, next time around. They will try to get some contract going for the 4090 under TSMC, with the lowest-performing chips sold as 4080s.

We'll see next year if I'm wrong.
Errr, yeah, they are expanding production capacity; they don't just tear down a 5nm fab and build a 3nm fab in its place. So at the moment TSMC's 7nm capacity is being hammered by everyone that's not Apple, and Apple are moving to 3nm, which is a new fab that's been worked on for years.

 

Armorian

Banned
What do you mean, counting them differently? The RTX 3090 is 35.58 TFLOPs (10496 * 2 * 1695), the same way AMD's 6900 XT is 23.04 TFLOPs (5120 * 2 * 2250); it looks the same to me.
The 2080 Ti compensates with extra bandwidth and memory; that's why it performs the same as the 3070.
AMD cards performing better with fewer TFLOPs just shows RDNA2 has a better arch than Ampere for rasterization in games.


Same memory, almost the same bandwidth, and almost the same performance.

Now: the 3060 Ti is 16.20TF and the 2080S is 11.15TF...

 

Darius87

Member

Same memory, almost the same bandwith and almost the same performance.

Now: 3060ti is 16.20TF and 2080S is 11.15TF...

The 2080S has more bandwidth and 10 more CUs. It doesn't look like that big of a difference, but the smaller the TFLOPs gap between two cards, the less extra bandwidth, CUs and memory the card with the worse arch needs to compensate.
We can't go on comparing forever, but you can't deny that RDNA2 performs better than the Ampere arch at a similar spec; a big advantage for RDNA2 is the L3 cache. Nvidia invested its transistors into CU count and AMD into L3 cache size.
 

KungFucius

King Snowflake
These rumors are stupid. FFS, the rumors about RDNA2 weeks before the reveal were just bullshit hype. Why the fuck would anyone take a rumor about how a GPU that is a year away compares to another that is a year away, when rumors about GPUs that are already in boxes and have drivers ready are fucking wrong?

I am almost at the point of ignoring AMD. They were supposed to have something better than Nvidia every generation for the last 8 years or so and have consistently failed to deliver. Don't get me wrong, I'd love to see both companies plus Intel being competitive, because that means better prices post-shortage, but any information now is useless.
 

FireFly

Member
What do you mean, counting them differently? The RTX 3090 is 35.58 TFLOPs (10496 * 2 * 1695), the same way AMD's 6900 XT is 23.04 TFLOPs (5120 * 2 * 2250); it looks the same to me.
The 2080 Ti compensates with extra bandwidth and memory; that's why it performs the same as the 3070.
AMD cards performing better with fewer TFLOPs just shows RDNA2 has a better arch than Ampere for rasterization in games.
TFLOPS are used to estimate performance on a given architecture, because there is generally a fixed ratio between compute, fill rate, and texture rate. With Ampere, Nvidia doubled compute performance, but kept everything else the same. So we shouldn't expect performance to double as well, and there is no need to attribute any perceived shortfalls to a lack of memory bandwidth or capacity. The 3070 Ti has 36% more memory bandwidth than the 3070, but is only 5% faster at 1440p. So clearly the 3070 is not being held back by a lack of memory bandwidth.
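One way to see why doubling only the FP32 rate can't double frame rates is an Amdahl-style estimate; a sketch, where the compute-bound fractions are purely illustrative assumptions:

```python
# Amdahl-style sketch: if only the FP32-bound fraction of a frame speeds up
# 2x (Ampere's doubled compute) and everything else (fill rate, geometry,
# bandwidth) stays the same, the overall gain stays well short of 2x.

def overall_speedup(compute_fraction: float, compute_boost: float = 2.0) -> float:
    return 1.0 / ((1.0 - compute_fraction) + compute_fraction / compute_boost)

for f in (0.3, 0.5, 0.7):  # illustrative compute-bound fractions
    print(f"{f:.0%} compute-bound -> {overall_speedup(f):.2f}x overall")
# 30% -> 1.18x, 50% -> 1.33x, 70% -> 1.54x
```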
 

The Skull

Member
Here we go again, the never-ending trend of superior Radeon GPU rumors that never come to life
Rumors fly around for both vendors that never come true. RDNA2 was barely supposed to be as good as a 2080 Ti, yet it's pretty much equal to its Nvidia counterparts.


I'll call out unbelievable or downright bullshit AMD fanboyism, but a rumour that just says they'll be more efficient? Not hard to believe, considering RDNA2 is more efficient than Ampere.
 

Darius87

Member
TFLOPS are used to estimate performance on a given architecture, because there is generally a fixed ratio between compute, fill rate, and texture rate. With Ampere, Nvidia doubled compute performance, but kept everything else the same. So we shouldn't expect performance to double as well, and there is no need to attribute any perceived shortfalls to a lack of memory bandwidth or capacity. The 3070 Ti has 36% more memory bandwidth than the 3070, but is only 5% faster at 1440p. So clearly the 3070 is not being held back by a lack of memory bandwidth.
Of course CUs don't scale performance linearly, but they still improve it, especially at higher resolutions: more pixels require more parallelisation, and the same goes for bandwidth. So I guess that's the reason the 3070 Ti is only 5% faster than the 3070 at 1440p: the 3070 doesn't lack bandwidth, it has enough of it.
I was comparing the 3090 with the 6900 XT at 4K res, btw.
 
3DMark's Mesh Shader benchmark:

RX 6800 XT scored 523.60 fps
[benchmark chart]

For ray tracing and mesh shaders, AMD would need 3X improvements to compete against NVIDIA's Ada.

Without mesh shaders (i.e. using shaders for the geometry workload), the RX 6800 XT/6900 XT's legacy geometry hardware is substantially inferior.


I’m already familiar with the benchmarks but they’re also synthetic so they don’t mean much as of yet. Also no graphics engines are currently using mesh/primitive shaders.

RDNA 3 is a completely different architecture from RDNA 2, so much so, in fact, that AMD even consider RDNA 3 and 4 part of the new GFX11 family, and they only do such things when big changes happen.

Back to mesh/primitive shader performance: AMD have a number of patents for geometry handling and workload distribution, all filed in the past year or two and all in line for RDNA 3. So the geometry performance of RDNA 3 should be something to look out for, since AMD have made zero changes to their Geometry Engine since Vega.

EDIT: also forgot to add, a few leakers did mention early on that RDNA 3 would bring multiple improvements with it, one of which being "drastically improved geometry handling".

But this is all just speculation for now lol
 
Rumors fly around for both vendors that never come true. RDNA2 was barely supposed to be as good as a 2080 Ti, yet it's pretty much equal to its Nvidia counterparts.


I'll call out unbelievable or downright bullshit AMD fanboyism, but a rumour that just says they'll be more efficient? Not hard to believe, considering RDNA2 is more efficient than Ampere.
Well, more efficient does not necessarily mean better, so you have a point indeed. But it's not equal to Ampere when its ray tracing performance is so bad.

RDNA2 was still revolutionary compared to Nvidia's Ampere. RDNA3 may close the gap with the chiplet design.
Nah. It was OK. Revolutionary it was not.
 
U mean, a card that is still Nvidia's flagship on the market, that sponsors every game under the sun, is not having issues with 10GB yet in games that are all made around 3GB PS4 games? No way.

The way I see it, consoles will be the benchmark, and they don't have 10 gigabytes dedicated to the GPU.
 