
AMD RDNA 3 GPUs To Be More Power Efficient Than NVIDIA Ada Lovelace GPUs, Navi 31 & Navi 33 Tape Out Later This Year

Holy shit! People still insisting that the PS5/RDNA2 "doesn't support Mesh Shaders", as if the different implementation makes any difference in practice here.
The Coalition finally released their UE5 GDC demo for the SeX with its "true and only RDNA2" GPU, and their demo uses 100 million polygons. Somehow the PS5, with its "broken and fake" Primitive Shaders, was showing billions of polygons in Epic's demo.
 

Darius87

Member
And the PS5's Land of Nanite demo runs at 1440p, while the XSX runs between 1080p and 1440p native res. With more flops that's weird; usually it should have the higher res.
 

rnlval

Member
What does this have to do with anything? Consoles use RDNA2 and Zen 2 now....?

That benchmark shows the 3070 beating the 6900XT, but in actual games the 6900XT is substantially faster. Synthetics are never representative of anything.
Current games do not use Mesh Shaders, and Navi 21 does not have the TFLOPS high ground.
 

rnlval

Member
Holy shit! People still insisting that the PS5/RDNA2 "doesn't support Mesh Shaders", as if the different implementation makes any difference in practice here.
The Coalition finally released their UE5 GDC demo for the SeX with its "true and only RDNA2" GPU, and their demo uses 100 million polygons. Somehow the PS5, with its "broken and fake" Primitive Shaders, was showing billions of polygons in Epic's demo.
Both the AMD RX 5700 XT (via Primitive Shaders) and NVIDIA Turing (via Mesh Shaders) introduced Next Generation Geometry Pipeline (NGGP) programming models. Microsoft rejected the RX 5700 XT's Primitive Shader NGGP.

The main point of Primitive Shaders is shader-based geometry culling that can scale with increased TFLOPS compute power, while rendered geometry density shouldn't exceed the resolution's pixel count.

Differences between the AMD RX 5700 XT (via Primitive Shaders) and NVIDIA Turing (via Mesh Shaders) NGGP programming models:


PS: Vega's NGGP is broken.

PC and XSS/XSX RDNA 2 follow NVIDIA's NGGP model.

PS5 RDNA has RDNA 2's high clock speed improvements.

NVIDIA's Mesh Shader NGGP model exists for the DirectX12U and Vulkan APIs. The Mesh Shader NGGP model is NVIDIA's creation, NOT MS's.
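To put rough numbers on the "geometry density shouldn't exceed resolution pixel count" point above, here's a minimal back-of-the-envelope sketch (the resolutions are just common render targets picked for illustration, not figures from this thread):

    # Culling down to roughly one visible triangle per pixel caps the useful
    # on-screen geometry at the render resolution's pixel count.
    resolutions = {"1440p": (2560, 1440), "4K": (3840, 2160)}
    for name, (w, h) in resolutions.items():
        pixels = w * h
        print(f"{name}: {pixels / 1e6:.2f}M pixels -> ~{pixels / 1e6:.2f}M visible triangles worth shading")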
 
Last edited:

Marlenus

Member
That would be a very tough sell, when we usually see a 25-50% uplift for Nvidia in games with RT, and that's without the magical alien tech that is DLSS.




Oh, and RE Village has been patched and is now smooth sailing.



I would be interested in Guru3D retesting RE, because they were the outlier, so I figure the higher-than-8GB memory usage was scene specific. Wonder if that stays with the latest patch.
 

rnlval

Member
Looks like I may make the switch to AMD next generation. 9900K is still holding up. But my 2080Ti is not.

What does this have to do with anything? Consoles use RDNA2 and Zen 2 now....?

That benchmark shows the 3070 beating the 6900XT, but in actual games the 6900XT is substantially faster. Synthetics are never representative of anything.
For the green team, a 2X scale from 35 TFLOPS FP32 (e.g. RTX 3080 Ti) yields 70 TFLOPS.

For the red team, a 3X scale from ~23 TFLOPS FP32 (e.g. RX 6900 XT) yields 69 TFLOPS.

The RTX 3080 Ti has excess TFLOPS compute power for its given raster hardware.

Since Mesh Shader throughput scales with raw TFLOPS compute power, the RTX 3080 Ti/RTX 3090 beat the RX 6900 XT there.

The RX 6900 XT's raster improvements are good, but it's fighting the last war, just as Bulldozer was AMD's take on the Pentium 4's high clock speed and very long pipeline, but with "more cores". LOL

The RX 6900 XT does not have the TFLOPS high ground when workloads are not bound by the classic raster graphics pipeline.
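To make that napkin math explicit, a minimal sketch (the base TFLOPS figures and scale factors are just the rough ones quoted above):

    # Rumored next-gen scaling applied to rough current flagship FP32 throughput.
    current_tflops = {"RTX 3080 Ti (green)": 35.0, "RX 6900 XT (red)": 23.0}
    rumored_scale = {"RTX 3080 Ti (green)": 2.0, "RX 6900 XT (red)": 3.0}
    for card, tflops in current_tflops.items():
        print(f"{card}: {tflops:.0f} x {rumored_scale[card]:.0f} = {tflops * rumored_scale[card]:.0f} TFLOPS")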

 
Last edited:

Kenpachii

Member
The way I see it, consoles will be the benchmark, and they don't have 10 gigabytes of dedicated GPU memory.

They actually do. Just check the Xbox Series X; that's the entire reason they split the memory like that.


People thought Navi 21 being 2x Navi 10 performance was BS, and it was not.

If AMD are gunning for 2.5x performance over the 6900XT, then it may be possible with a wide MCM design that has a 500W TDP. This would be the equivalent of a 295X2-type card, only it would look like a single GPU and work without requiring game support. In fact, 66% more TDP + 50% more perf/watt gets you 2.5x performance.

Is it doable? Maybe, and AMD have done 500W cards in the past, so it is not out of their scope, but I am not sure you could compare it directly to the 6900XT; it would probably sit in an entirely new tier if they hit that target.

This is the only reason the 3090 chip beats the 6900 XT: Nvidia just went all-in on the power budget to counter it. Hell, I would not be surprised if the cards were originally designed for even 100 watts more power consumption, just to be sure of higher performance than RDNA2, as those cards have no trouble handling it.

That was probably Nvidia's strategy in the end: a super power-hungry card that AMD wouldn't expect Nvidia to launch at a time like this. But they did, and beat them because of it.

[Image: RTX 3090 pictured alongside an RTX 2080 Ti]

Drop that wattage to 6900 XT levels and that 3090 will perform totally differently.

Hell, if I were AMD, for RDNA3 I would just create a 500W card just to piss off Nvidia and drop every other card in their lineup to half that output, so Nvidia will properly be shitting bricks, especially now that FSR is a thing. The only thing they have to work on is RT performance, but that's about it.
 
Last edited:
It's just as bad as the console warriors that also bleed in here.
Don't get me started on people that fight between games on the same system. But let's be honest, sometimes they are half the fun (or annoyance, in some cases) of threads. At least here we can express ourselves with mild moderation, unlike most places these days.
 
Last edited:

Irobot82

Member
Don't get me started on people that fight between games on the same system. But let's be honest, sometimes they are half the fun (or annoyance, in some cases) of threads. At least here we can express ourselves with mild moderation, unlike most places these days.
Meanwhile, in the real world, I would be happy to have either the 6800 XT or the 3080 at MSRP. But those don't exist in the real world.
 
Meanwhile, in the real world, I would be happy to have either the 6800 XT or the 3080 at MSRP. But those don't exist in the real world.
Yeah, and here we are discussing what's yet to come. That's why forums like ours represent just 1%-2% of gamers. We live in an echo chamber for the most part.
 

Kenpachii

Member
I've known people who didn't even play games but just bought the hottest of the hottest hardware for the simple reason of bragging about it with pictures :D

Hell, a friend of mine has a 5950X / 3090 / 128GB of RAM, only plays WoW TBC and Ragnarok, and the moment the 4090 comes out he will get it.

 
Last edited:

Irobot82

Member
Yeah, and here we are discussing what's yet to come. That's why forums like ours represent just 1%-2% of gamers. We live in an echo chamber for the most part.
For sure. I really hope supply will be better by the next gen of cards. Ethereum moving to PoS will help. I'm still on my 1080 and it hurts: moving several settings down to medium and barely getting around 70-90fps @ 1440p.
 

Marlenus

Member
Hell, if I were AMD, for RDNA3 I would just create a 500W card just to piss off Nvidia and drop every other card in their lineup to half that output, so Nvidia will properly be shitting bricks, especially now that FSR is a thing. The only thing they have to work on is RT performance, but that's about it.
I have a feeling that the top end part is going to be a 500W monster. Maybe less if they can get more than a 1.5x perf/watt improvement from the node shrink.

The maffs:
1.66x the TDP x 1.5x the perf/watt = 2.5x the performance.
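Spelled out as a minimal sketch, taking the 6900 XT's 300W board power as the baseline (my assumption) and treating those two multipliers as the whole story:

    # To first order, performance ~ power budget x efficiency (perf/watt).
    tdp_scale = 500 / 300        # ~1.66x the power budget
    perf_per_watt_scale = 1.5    # rumored efficiency gain
    print(f"~{tdp_scale * perf_per_watt_scale:.2f}x the performance")  # ~2.50x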
 

Buggy Loop

Member
This is the only reason the 3090 chip beats the 6900 XT: Nvidia just went all-in on the power budget to counter it. Hell, I would not be surprised if the cards were originally designed for even 100 watts more power consumption, just to be sure of higher performance than RDNA2, as those cards have no trouble handling it.

[Image: RTX 3090 pictured alongside an RTX 2080 Ti]

Drop that wattage to 6900 XT levels and that 3090 will perform totally differently.

364W vs 322W? Really?

That gap is covered by a mere reasonable undervolt. Optimum Tech gets 100W less for the same performance.

These arguments would not even be brought up if Nvidia had picked a different default volt/Hz curve.
 

twilo99

Member
Hell, if I were AMD, for RDNA3 I would just create a 500W card just to piss off Nvidia and drop every other card in their lineup to half that output, so Nvidia will properly be shitting bricks, especially now that FSR is a thing. The only thing they have to work on is RT performance, but that's about it.

Nah, they seem to take too much pride in efficiency with the RDNA family. Otherwise yeah, Nvidia's approach this gen was more brute force for sure.

This is an interesting take on what they did with Ampere:

[Chart: performance per watt across GPU generations]


 
Last edited:
Did that dumbass 'forget' the GTX 680 somehow? Why does he go from the 580 to the 780 but name the graph "ppw between gen"? I'm certain it's just an unlucky accident and not an attempt to show inflated results across two generations and claim it's a one-gen increase.

Why do only forum warriors seem to care about Nvidia using a few more watts? Wonder why, and which side those guys prefer, hmm? The difference in power usage between, say, an RTX 3080 and a 6800 XT is so minuscule that regular users would never even notice.
 
Last edited:

martino

Member
For the green team, a 2X scale from 35 TFLOPS FP32 (e.g. RTX 3080 Ti) yields 70 TFLOPS.

For the red team, a 3X scale from ~23 TFLOPS FP32 (e.g. RX 6900 XT) yields 69 TFLOPS.

The RTX 3080 Ti has excess TFLOPS compute power for its given raster hardware.

Since Mesh Shader throughput scales with raw TFLOPS compute power, the RTX 3080 Ti/RTX 3090 beat the RX 6900 XT there.

The RX 6900 XT's raster improvements are good, but it's fighting the last war, just as Bulldozer was AMD's take on the Pentium 4's high clock speed and very long pipeline, but with "more cores". LOL

The RX 6900 XT does not have the TFLOPS high ground when workloads are not bound by the classic raster graphics pipeline.


I will wait for the content to exist to confirm this.
But on paper, that's the theory.
 
Nah, they seem to take too much pride in efficiency with the RDNA family. Otherwise yeah, Nvidia's approach this gen was more brute force for sure.

This is an interesting take on what they did with Ampere:

[Chart: performance per watt across GPU generations]


Why not include 2080 to 3080 = 64% increase in perf? Even if you include power usage, which nobody really cares about: 2080 ~250W, RTX 3080 ~300W = 20% more power, and 1.64 / 1.20 ≈ 1.37, so roughly a 37% perf-per-watt increase.

Oh, but that wouldn't fit the fake narrative.
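For what it's worth, perf-per-watt gains come from dividing ratios rather than subtracting percentages; a quick check with the wattages assumed above:

    # perf/watt gain = (relative performance) / (relative power) - 1
    perf_ratio = 1.64        # claimed 2080 -> 3080 performance uplift
    power_ratio = 300 / 250  # the ~250W vs ~300W figures used above
    print(f"perf/watt gain: ~{(perf_ratio / power_ratio - 1) * 100:.0f}%")  # ~37%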
 
For the green team, 2X scale from 35 TFLOPS FP32 (e.g. RTX 3080 Ti) has 70 TFLOPS.

For the red team, 3X scale from ~23 TFLOPS FP32 (e.g. RX 6900XT) has 69 TFLOPS.

RTX 3080 Ti has excess TFLOPS compute power for its given raster hardware.

Since Mesh Shader is related with raw TFLOPS compute power e.g. RTX 3080 Ti/RTX 3090 beats RX 6900 XT.

RX 6900 XT's raster improvements are good, but it's fighting in the last war, just as Bulldozer was AMD's Pentium IV high clock speed with a very long pipeline, but with "more cores". LOL

RX 6900 XT does not have TFLOPS high ground when workloads are not bound by the classic raster graphics pipeline.



Games are more than just geometry.
 

FireFly

Member
Why not include 2080 to 3080 = 64% increase in perf? Even if you include power usage, which nobody really cares about: 2080 ~250W, RTX 3080 ~300W = 20% more power, and 1.64 / 1.20 ≈ 1.37, so roughly a 37% perf-per-watt increase.

Oh, but that wouldn't fit the fake narrative.
In terms of TDP, it's 320W vs 225W. Techpowerup had the 3080 as 18% more efficient than the 2080 at 4K.

 
Last edited:
In terms of TDP, it's 320W vs 225W. Techpowerup had the 3080 as 18% more efficient than the 2080 at 4K.

You might be right for a stock FE card. I just quickly looked around for power draw and mine sits at ~240-290W at 0.875V under full load, but regardless, let me ask you a question.

How many guys have you met in real life who choose the graphics card for their main gaming rig by looking at how much power it draws, versus deciding what card to get by looking at gaming benchmarks, frame rates, frame times, etc.?

I have never heard someone say: "Dude, I'd rather pay $1000 for this cool GTX 750 because I can run it off the PCI slot. It's so awesome how little power it uses, right, riiight!? Your RTX 3080 Ti is way too power hungry. Imagine how many dozens more dollars you'll need to pay for electricity per year, bro!!! Get this cool GTX 750, trust me!" Wait, I can actually imagine a salesman somewhere trying this.
 

Buggy Loop

Member
You might be right for a stock FE card. I just quickly looked around for power draw and mine sits at ~240-290W at 0.875V under full load, but regardless, let me ask you a question.

How many guys have you met in real life who choose the graphics card for their main gaming rig by looking at how much power it draws, versus deciding what card to get by looking at gaming benchmarks, frame rates, frame times, etc.?

I have never heard someone say: "Dude, I'd rather pay $1000 for this cool GTX 750 because I can run it off the PCI slot. It's so awesome how little power it uses, right, riiight!? Your RTX 3080 Ti is way too power hungry. Imagine how many dozens more dollars you'll need to pay for electricity per year, bro!!! Get this cool GTX 750, trust me!" Wait, I can actually imagine a salesman somewhere trying this.

Nobody cares, ultimately. SFF fans will undervolt, which is more than enough. In fact, everyone should undervolt a bit: same performance, lower or disappearing coil whine, lower power/heat. It's the best goddamn trick on PC.
 

Turk1993

GAFs #1 source for car graphic comparisons
You might be right for a stock FE card. I just quickly looked around for power draw and mine sits at ~240-290W at 0.875V under full load, but regardless, let me ask you a question.

How many guys have you met in real life who choose the graphics card for their main gaming rig by looking at how much power it draws, versus deciding what card to get by looking at gaming benchmarks, frame rates, frame times, etc.?

I have never heard someone say: "Dude, I'd rather pay $1000 for this cool GTX 750 because I can run it off the PCI slot. It's so awesome how little power it uses, right, riiight!? Your RTX 3080 Ti is way too power hungry. Imagine how many dozens more dollars you'll need to pay for electricity per year, bro!!! Get this cool GTX 750, trust me!" Wait, I can actually imagine a salesman somewhere trying this.
Literally this: spending 2-4k on a rig to get the highest settings, frame rate, resolution, ... and then crying over at most $50 a year more on electricity bills lol.
 

Kenpachii

Member
You might be right for a stock FE card. I just quickly looked around for power draw and mine sits at ~240-290W at 0.875V under full load, but regardless, let me ask you a question.

How many guys have you met in real life who choose the graphics card for their main gaming rig by looking at how much power it draws, versus deciding what card to get by looking at gaming benchmarks, frame rates, frame times, etc.?

I have never heard someone say: "Dude, I'd rather pay $1000 for this cool GTX 750 because I can run it off the PCI slot. It's so awesome how little power it uses, right, riiight!? Your RTX 3080 Ti is way too power hungry. Imagine how many dozens more dollars you'll need to pay for electricity per year, bro!!! Get this cool GTX 750, trust me!" Wait, I can actually imagine a salesman somewhere trying this.

It's not about electricity cost, it's about heat.

364W vs 322W? Really?

That gap is covered by a mere reasonable undervolt. Optimum Tech gets 100W less for the same performance.

These arguments would not even be brought up if Nvidia had picked a different default volt/Hz curve.

I straight up see 3090s moving to 500+ watts, yet I have to see any 6900 XT hitting that.

Now I am not saying the GPU magically becomes faster; however, I do feel like the 3090 isn't their top end. I do think a 3090 Ti could be made, or was even in the works, but was eventually canned when they saw what AMD was up to.
 
Last edited:

Buggy Loop

Member
I straight up see 3090s moving to 500+ watts, yet I have to see any 6900 XT hitting that.

Now I am not saying the GPU magically becomes faster; however, I do feel like the 3090 isn't their top end. I do think a 3090 Ti could be made, or was even in the works, but was eventually canned when they saw what AMD was up to.

You're talking about sub-1ms transients? Like... these don't matter at all; it's such a short burst that thermals don't even have time to be affected, and modern PSUs can take these easily. If our power systems couldn't take transients, every fucking device would fry.

Igorslab made a nice article with a good explanation of it.

As for surviving the Nvidia « killer », I'm sure Nvidia shit their pants over a card that has no competition in the market it was proposed for (creators, professionals). This card being that common in gaming rigs is a symptom of the current market, not Nvidia's proposition of a logical product for gamers.
 
Weird, I would think the 2080Ti would eat that for breakfast.
Yeah, that seems weird to me too. I was running FFXIV on a 1080 Ti at 120fps on maximum at 3440 x 1440, so I think it's weird that a 2080 Ti would have any issues with it. All I can say for sure is that the 3090 doesn't even break a sweat haha.
 

rnlval

Member
They actually do. Just check the Xbox Series X; that's the entire reason they split the memory like that.

This is the only reason the 3090 chip beats the 6900 XT: Nvidia just went all-in on the power budget to counter it. Hell, I would not be surprised if the cards were originally designed for even 100 watts more power consumption, just to be sure of higher performance than RDNA2, as those cards have no trouble handling it.

That was probably Nvidia's strategy in the end: a super power-hungry card that AMD wouldn't expect Nvidia to launch at a time like this. But they did, and beat them because of it.

[Image: RTX 3090 pictured alongside an RTX 2080 Ti]

Drop that wattage to 6900 XT levels and that 3090 will perform totally differently.

Hell, if I were AMD, for RDNA3 I would just create a 500W card just to piss off Nvidia and drop every other card in their lineup to half that output, so Nvidia will properly be shitting bricks, especially now that FSR is a thing. The only thing they have to work on is RT performance, but that's about it.
The sizes of the MSI RTX 2080 Ti Gaming X Trio and the MSI RTX 3080 Ti Gaming X Trio are similar.

MSI RTX 2080 Ti Gaming X Trio: 327 x 140 x 55.6 mm

MSI RTX 3080 Ti Gaming X Trio: 324 x 140 x 56 mm

Both are AIB OC'ed cards.
 

FireFly

Member
You might be right for a stock FE card. I just quickly looked around for power draw and mine sits at ~240-290 at 0,875v on full load, but regardless let me ask you a question.

How many guys you met in real life who chooses graphics card for main gaming rig by looking at how much power it draws versus deciding what card to get by looking at gaming benchmarks and frame rate, frame times etc ?

I have never heard someone say: "Dude I'd rather pay 1000$ for this cool GTX 750 because I can run it of pci slot, it's so awesome how little power it uses, right, riiight !? Your RTX 3080 Ti is way too power hungry. Imagine how many dozens more $ you'll need to pay for electricity per year bro !!! Get this cool GTX 750 trust me !" Wait I can actually imagine a salesman somewhere trying this.
1.) For end users I think it's mainly a tiebreaker, assuming the GPU fits within the thermal/power constraints of the system. I have a 3080 stuck in an SFF case that ships with a glass front door, which would normally cause the GPU to throttle. I didn't want to have to undervolt it, so I bought a custom-made mesh panel, which took weeks to ship. I only did all that because I wanted the best ray tracing performance, but if performance were equal and AMD had a cooler and less power-hungry card, I would have gone for that instead.
2.) From an architectural perspective, GPUs are mainly power limited at the high end, so better performance per watt means better performance. For example, if Nvidia could have found an extra 20% performance per watt at 330W, then the 3080 would have been double a 2080, not ~64% faster (see the quick check below).
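A minimal check of that last sentence, reusing the ~64% figure above (illustrative numbers only):

    # At a fixed power budget, performance moves 1:1 with perf/watt.
    actual_uplift = 1.64      # 2080 -> 3080 in roughly the same power class
    extra_efficiency = 1.20   # the hypothetical extra 20% perf/watt
    print(f"hypothetical uplift: ~{actual_uplift * extra_efficiency:.2f}x")  # ~1.97x, i.e. roughly double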
 

Kenpachii

Member
You're talking about sub-1ms transients? Like... these don't matter at all; it's such a short burst that thermals don't even have time to be affected, and modern PSUs can take these easily. If our power systems couldn't take transients, every fucking device would fry.

Igorslab made a nice article with a good explanation of it.

As for surviving the Nvidia « killer », I'm sure Nvidia shit their pants over a card that has no competition in the market it was proposed for (creators, professionals). This card being that common in gaming rigs is a symptom of the current market, not Nvidia's proposition of a logical product for gamers.

The sizes of the MSI RTX 2080 Ti Gaming X Trio and the MSI RTX 3080 Ti Gaming X Trio are similar.

MSI RTX 2080 Ti Gaming X Trio: 327 x 140 x 55.6 mm

MSI RTX 3080 Ti Gaming X Trio: 324 x 140 x 56 mm

Both are AIB OC'ed cards.

I ain't talking about spikes, bro; I am talking about constant watt usage.



Naming means nothing to Nvidia; they rename and scramble names all day long in their favor. Calling their new 3080 a Ti is laughable at best. It doesn't even come close to being one. Hell, I wouldn't even call the 3090 a Ti over the 3080. The 3080 Ti could have been an actual 3080 Ti if the 3080 had been based around the 104 die, which is the 3070 Ti. It's all naming to sell their cards to idiots, aka casuals. You honestly have people believing a 3080 Ti is an actual Ti while it shares nothing of what a Ti stood for over the last 3 generations, if not 4.

Anyway, let's move on to the next point I made.




2080 Ti models getting rid of the watt limitation.




The 3090 with just a BIOS watt unlock. (It goes as high as 570W from what I saw; almost a 600W card basically, which is in another universe from what the 6000 series was designed around, as I can't find a single card pushing anywhere near that.)



Sorry, I have experienced a lot of cards, and frankly, unless all this information is completely made up (which it isn't), I couldn't believe my eyes when I saw that 3090 pushing over 450 watts, let alone 500W and hitting close to 600W. And that's not peak; it straight up sits there and heats the entire fucking room like nothing else at those wattages without breaking a sweat, as that thing can handle the cooling perfectly fine, on air on top of it. Go push that on a 6900 XT and see how fast it melts and dies.

You honestly think that Nvidia, which got praised for its sleek design, good cooling, and somewhat decent power output over the last 2 or 3 generations, now suddenly decides to make a space heater the size of a spaceship, exactly what outlets slammed AMD for all those years, one that fits in no small-build case, for absolutely no reason?

Then about your "the 3090 is a professional card" nonsense. You do realize it's the same die, right? It's a 102. Why did they release the 3080 on the same die as the 3090 and not just go for their 104 solution like they always do?



"nvidia killer"

That's why, because of AMD's RDNA2.

They had to release the 102 for their 3080 model, a cut-down 3090, and the 3090 they released as the FE edition is still a cut-down version of what could have been, which you see when outlets like EVGA push the card and its design to its full potential besides the die.

Sorry mate, Nvidia only got the higher ground because they developed a 500W monster of a card to counter RDNA2. That 3090 was never designed as a 350W card; I don't believe it for a second. I 100% believe they have another chip lying around that is the actual full 3090. And why do I assume this? Because the 3080 exists, which is a cut-down 3090. Why would Nvidia not release it, though? Who the fuck wants a 500W card? Nobody.

Which brings me to my original post.

With AMD pushing Nvidia this hard on their first "high-end card attempt", I can only imagine what RDNA3 can bring as a 500W halo card. If I were NVIDIA, I would be worried.
 
Last edited:
NAVI 21 was beaten on raytracing and geometry.
Yeah so? Games are more than just geometry and raytraced lighting and shadows.
Also, Navi 21 was beaten by GA102, a GPU with more than twice the vector ALUs.
Navi 21 also completely annihilates GA104 and below, in spite of their theoretical mesh shader performance.

If you can recall, GCN historically used to have far more compute power than Nvidia's architectures - it didn't really help much in terms of performance because those ALUs were not being fully utilised.
Ampere has the same problem. The 3080 has 2x the compute of the 2080 Ti, but is barely 40% faster. Those ALUs only really get utilised at high resolutions, hence the strange performance scaling with resolution.

Maybe in a few years time when console games use mesh shaders we'll see a change, but those games will be optimised for RDNA2, not RDNA3 or Ampere or Lovelace or Hopper or whatever. In that sense it will be restricted. Except maybe in certain Nvidia sponsored titles.

Of course realistically speaking, it will never really make that much of a difference, because Ampere whilst having huge compute, is bottlenecked in other areas.
 

rnlval

Member
Yeah so? Games are more than just geometry and raytraced lighting and shadows.
Also, Navi 21 was beaten by GA102, a GPU with more than twice the vector ALUs.
Navi 21 also completely annihilates GA104 and below, in spite of their theoretical mesh shader performance.

If you can recall, GCN historically used to have far more compute power than Nvidia's architectures - it didn't really help much in terms of performance because those ALUs were not being fully utilised.
Ampere has the same problem. The 3080 has 2x the compute of the 2080 Ti, but is barely 40% faster. Those ALUs only really get utilised at high resolutions, hence the strange performance scaling with resolution.

Maybe in a few years time when console games use mesh shaders we'll see a change, but those games will be optimised for RDNA2, not RDNA3 or Ampere or Lovelace or Hopper or whatever. In that sense it will be restricted. Except maybe in certain Nvidia sponsored titles.

Of course realistically speaking, it will never really make that much of a difference, because Ampere whilst having huge compute, is bottlenecked in other areas.
Not correct.


[Chart: average FPS at 3840x2160]


RTX Ampere was designed to have excess TFLOPS relative to its rasterization hardware for other workloads, such as
1. Mesh shaders (compute)
2. DirectStorage GPGPU decompression (compute)
3. DirectML (compute)

Regarding the Turing SM vs Ampere SM difference: each Turing integer CUDA core was turned into an integer/floating-point CUDA core in Ampere (rough peak-FP32 numbers are sketched after the lists below).
Turing SM:
64 integer CUDA cores
64 floating-point CUDA cores

Ampere SM:
64 integer/floating-point CUDA cores
64 floating-point CUDA cores
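Rough peak FP32 implied by that SM layout; the SM counts and boost clocks below are the public reference specs for a 2080 Ti and a 3080, added here purely for illustration:

    # Peak FP32 TFLOPS = SMs x FP32 lanes per SM x 2 (FMA) x boost clock (GHz) / 1000
    cards = {
        "RTX 2080 Ti (Turing)": (68, 64, 1.545),   # 64 dedicated FP32 lanes per SM
        "RTX 3080 (Ampere)": (68, 128, 1.71),      # the 64 INT32 lanes can also run FP32
    }
    for name, (sms, fp32_lanes, ghz) in cards.items():
        tflops = sms * fp32_lanes * 2 * ghz / 1000
        print(f"{name}: ~{tflops:.1f} TFLOPS peak FP32")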

Try again.
 