
Nvidia to use 5nm TSMC for 4000 series - No MCM design

SantaC

Member

NVIDIA Next-Gen Gaming GPUs, GeForce RTX 40 ‘Ada Lovelace’ Series, Launching in 2022 & Will Utilize TSMC’s 5nm Process Node

NVIDIA won't be using an MCM design on its Ada Lovelace GPUs so they will keep the traditional monolithic design

Going with the traditional monolithic design could hurt them pretty badly if AMD's MCM design is a beast.
 

Buggy Loop

Member
Ah yea, the new hope

Nvidia, like nearly all silicon chip designers, has had MCM in R&D for years now. Their architecture optimization work under monolithic vs MCM presumably looked like this:

[Calculating Oh No GIF]

And yet they took the worse decision? It's fucking amateur engineering to think it's that simple, that MCM bulldozes monolithic at this point in time, when foundries keep optimizing their nodes and the two vendors run different architectures.

AMD fans right now:

[I Hope Please GIF]
 

tusharngf

Member
Maybe they are confident enough that they can beat their competitors with this design. 80-90 TF on a single chip? Sure thing, why not! Nvidia always shows up with performance numbers.
 

IntentionalPun

Ask me about my wife's perfect butthole
Ah yea, the new hope

Nvidia, like nearly all silicon chip designers, has had MCM in R&D for years now. Their architecture optimization work under monolithic vs MCM presumably looked like this:

[Calculating Oh No GIF]

And yet they took the worse decision? It's fucking amateur engineering to think it's that simple, that MCM bulldozes monolithic at this point in time, when foundries keep optimizing their nodes and the two vendors run different architectures.

AMD fans right now:

[I Hope Please GIF]
You do know that.. anyone.. can buy.. any graphics card? They aren't some ecosystem you are locked into.. they don't require some special motherboard you have to replace..

They are just, a thing, you can buy... whether you previously had some other brand's card doesn't matter lol
 

Going with the traditional monolithic design could hurt them pretty badly if AMD's MCM design is a beast.
Lol, how is MCM better than monolithic?
MCM may be cheaper because you don't have to throw out the whole chip if part of it is broken, for example. But it's not 'better' performance-wise or anything.
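For what it's worth, the "cheaper" half of that is easy to put rough numbers on. Below is a minimal sketch using the textbook Poisson die-yield model; the defect density and die sizes are made-up illustration values, not anything AMD or Nvidia has published.

import math

def poisson_yield(area_mm2: float, defect_density_per_cm2: float) -> float:
    """Classic Poisson model: fraction of defect-free dies, Y = exp(-A * D0)."""
    return math.exp(-(area_mm2 / 100.0) * defect_density_per_cm2)

D0 = 0.1  # assumed defect density in defects/cm^2 (hypothetical, illustration only)

y_mono    = poisson_yield(600, D0)   # one big 600 mm^2 monolithic die
y_chiplet = poisson_yield(300, D0)   # one 300 mm^2 chiplet, tested before packaging

silicon_per_good_mono = 600 / y_mono         # wafer area spent per good monolithic GPU
silicon_per_good_mcm  = 2 * 300 / y_chiplet  # two known-good chiplets; a bad die only wastes 300 mm^2

print(f"monolithic: {silicon_per_good_mono:.0f} mm^2 of wafer per good chip")
print(f"MCM:        {silicon_per_good_mcm:.0f} mm^2 of wafer per good package")

With these made-up numbers the chiplet route spends roughly a quarter less silicon per good product, before even counting salvage SKUs with disabled units, which is exactly the "don't throw out the whole chip" argument.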
 

winjer

Gold Member
I hope AMD's use of MCM designs improves yields. Advances in performance are great, but it would be nice if I didn't have to get selected in a retailer draft and then overpay by 400% to get a GPU.

It will. A bit like on Zen CPUs, we can have two dies, each with a disabled core, glued together to make a more powerful chip.
It's also possible to mix dies from different process nodes. For example, on Zen 3 the CPU cores are on 7nm, but the IO die is on 12nm.
 

Kenpachii

Member
Lol, how is MCM better than monolithic?
MCM may be cheaper because you don't have to throw out the whole chip if part of it is broken, for example. But it's not 'better' performance-wise or anything.

But what if the chips aren't broken and you drop two entire chips into one design? You get a massive performance gain.
 

Kenpachii

Member
You can do one chip with more cores.
The question of design (mono vs MCM) is not a question of performance. It's anything but performance, I would say.

It's all about performance.

This is what NVIDIA had to say about MCM.

Historically, improvements in GPU-based high performance computing have been tightly coupled to transistor scaling. As Moore’s law slows down, and the number of transistors per die no longer grows at historical rates, the performance curve of single monolithic GPUs will ultimately plateau. However, the need for higher performing GPUs continues to exist in many domains. To address this need, in this paper we demonstrate that package-level integration of multiple GPU modules to build larger logical GPUs can enable continuous performance scaling beyond Moore’s law. Specifically, we propose partitioning GPUs into easily manufacturable basic GPU Modules (GPMs), and integrating them on package using high bandwidth and power efficient signaling technologies. We lay out the details and evaluate the feasibility of a basic Multi-Chip-Module GPU (MCM-GPU) design. We then propose three architectural optimizations that significantly improve GPM data locality and minimize the sensitivity on inter-GPM bandwidth. Our evaluation shows that the optimized MCM-GPU achieves 22.8% speedup and 5x inter-GPM bandwidth reduction when compared to the basic MCM-GPU architecture. Most importantly, the optimized MCM-GPU design is 45.5% faster than the largest implementable monolithic GPU, and performs within 10% of a hypothetical (and unbuildable) monolithic GPU. Lastly we show that our optimized MCM-GPU is 26.8% faster than an equally equipped Multi-GPU system with the same total number of SMs and DRAM bandwidth.
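To make those percentages easier to compare, here's a quick back-of-the-envelope that just chains the quoted figures together, normalised to the largest buildable monolithic GPU (my arithmetic, not a table from the paper; the "within 10%" line is read as the optimized design reaching at least 90% of the unbuildable chip):

# Relative performance, normalised to the largest implementable monolithic GPU = 1.00,
# using only the percentages quoted in the abstract above.
mono_largest      = 1.00
mcm_optimized     = mono_largest * 1.455     # "45.5% faster" than the largest buildable monolithic GPU
mcm_basic         = mcm_optimized / 1.228    # optimized gives a "22.8% speedup" over the basic MCM-GPU
multi_gpu         = mcm_optimized / 1.268    # optimized is "26.8% faster" than the equally equipped multi-GPU
mono_hypothetical = mcm_optimized / 0.90     # optimized "performs within 10%" of the unbuildable monolithic GPU

for name, perf in [("largest buildable monolithic", mono_largest),
                   ("multi-GPU (same SMs / DRAM BW)", multi_gpu),
                   ("basic MCM-GPU", mcm_basic),
                   ("optimized MCM-GPU", mcm_optimized),
                   ("hypothetical monolithic (approx.)", mono_hypothetical)]:
    print(f"{name:34s} {perf:.2f}x")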

Here's also a good indication of how mono vs MCM stack up. For scale: a 1080 Ti has 28 SMs, a 3090 has 82.

In the chart below:
Linear = Moore's-law scaling, which is no longer achievable
High parallelism = MCM
Limited parallelism = monolithic

[Chart: GPU performance scaling vs. SM count, comparing the linear, high-parallelism and limited-parallelism cases]
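The shape of those curves can be reproduced with a toy Amdahl-style model. To be clear, this is my own illustration with assumed parallel fractions, not data from the paper or the chart above.

def scaling(sms: int, parallel_fraction: float, baseline_sms: int = 28) -> float:
    """Amdahl-style speedup of an sms-wide GPU relative to a 28-SM (1080 Ti-class) part."""
    def amdahl(n: int) -> float:
        return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n)
    return amdahl(sms) / amdahl(baseline_sms)

for sms in (28, 82, 164):  # 1080 Ti-class, 3090-class, a hypothetical doubled MCM part
    high = scaling(sms, 0.995)  # highly parallel workload keeps scaling with more SMs
    low  = scaling(sms, 0.95)   # workload with serial/sync overhead flattens out early
    print(f"{sms:3d} SMs -> high parallelism: {high:.2f}x, limited parallelism: {low:.2f}x")

Whether the extra SMs come from one huge die or from multiple GPMs on a package, the workload's parallelism decides how much of them you actually see; the paper's point is that MCM makes those extra SMs buildable at all.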


Bandwidth:

[Chart: bandwidth comparison]


This is why AMD is moving to MCM for its next GPU. No clue why Nvidia isn't, however. Maybe Hopper is simply not ready yet.
 
This is it. This will be my last GPU: an RTX 4090, coming from a 1080 Ti. This card lasted me 5 years and I imagine the 4090 will last me even longer. I really think the 4090 will be my final graphics card. I don't like the direction computing is heading, with these chiplets and the brick wall on transistor shrinking.
 

KungFucius

King Snowflake
I don't give a fuck what it is. I am going to try for a few weeks to get one and settle for keeping my 3090 if I fail.
 

SantaC

Member
Lol, how is MCM better than monolithic?
MCM may be cheaper because you don't have to throw out the whole chip if part of it is broken, for example. But it's not 'better' performance-wise or anything.
If you haven't noticed, Intel is going MCM with their graphics cards, and Nvidia is moving to MCM with Hopper.

I am pretty sure they know what they are doing, AMD included. RDNA3 will be a beast unleashed.
 
If you haven't noticed, Intel is going MCM with their graphics cards, and Nvidia is moving to MCM with Hopper.

I am pretty sure they know what they are doing, AMD included. RDNA3 will be a beast unleashed.
Because it's more profitable? Of course. There are many benefits to a chiplet design.
 
I didn't know where else to post this, but they have basically copied Apple?! Granted, it's a server-based CPU. Perhaps Windows 10/11 for ARM can finally have the raw performance it needs to run smoothly. Qualcomm, are you listening?
[Image: NVIDIA Grace]

Vs Apple MCM

[Image: Apple M1 Ultra package]
 

lukilladog

Member
They could use those new AI technologies presented the other day to figure out how to put graphics cards in the hands of the players who have supported them for the last 22 years.
 

winjer

Gold Member
I didn't know where else to post this, but they have basically copied Apple?! Granted, it's a server-based CPU. Perhaps Windows 10/11 for ARM can finally have the raw performance it needs to run smoothly. Qualcomm, are you listening?

You might not know this, but other companies had already built interconnects to glue chips together.
One example: AMD, with Infinity Fabric in 2017, five years earlier.
 

Dream-Knife

Banned
AMD is probably gonna be faster this gen, but their drivers, I don't know...
Yeah, AMD's stability is terrible. When I had my RX 6800 I had to reinstall Windows probably 6-7 times over the course of a year due to drivers getting corrupted.

They also still haven't fixed the idle memory clock spikes that make zero-RPM fan mode useless. Never again.
 

ethomaz

Banned
I didn't know where else to post this, but they have basically copied Apple?! Granted, it's a server-based CPU. Perhaps Windows 10/11 for ARM can finally have the raw performance it needs to run smoothly. Qualcomm, are you listening?
[Image: NVIDIA Grace]

Vs Apple MCM

[Image: Apple M1 Ultra package]
I'm not sure about that...

The first pic is a motherboard (PCB) with two packages in the middle and the RAM, etc. around them.
The Apple pic is a single package with everything in it.

If you take one package from the nVidia picture, then you can compare it with the whole Apple picture.
 

lukilladog

Member
I just checked BestBuy and there are GPUs available.

Availability has increased over the last week and prices are starting to come down, but this is just natural market behaviour due to Ethereum's uncertain near future and Nvidia and AMD hoarding too many chips. Their measures to counter mining were a joke.
 

TheKratos

Member
Availability has increased over the last week and prices are starting to come down, but this is just natural market behaviour due to Ethereum's uncertain near future and Nvidia and AMD hoarding too many chips. Their measures to counter mining were a joke.
What about Ethereum's uncertain future?
 
You might not know this, but other companies had already built interconnects to glue chips together.
One example: AMD, with Infinity Fabric in 2017, five years earlier.

Perhaps AMD can make an ARMv9-based CPU/GPU/NPU (Zen/RDNA with ML) in one package? They only seem to have an x86 portfolio, with the exception of Samsung utilizing RDNA2 in its Exynos brand.

NVIDIA Grace is for servers, while the Apple M1 Ultra is for consumers. A few questions come to mind:

1) Will Intel, AMD, Qualcomm, Google and NVIDIA copy Apple with their own ARMv9-based CPU/GPU/NPU hybrids with a shit ton of RAM and RAM bandwidth, plus CPU, GPU, and ML cores, for consumers? (trend alert!)
2) Even if they did, what would be the benefit? ARM is mostly for portability, battery life, and 4G/5G. Besides making Windows 10/11 on ARM run as smoothly as the x86 version, what kind of software support would it get (apps, games)? Some Windows 10/11 x86 apps don't work on Windows on ARM, and I can't think of any high-fidelity games built from the ground up for it.
3) I do think ARM-based chips are gaining momentum: Google, NVIDIA, Apple, and Qualcomm are giants that are fully utilizing them. The question is: will Intel and AMD?
 

winjer

Gold Member
Perhaps AMD can make an ARMv9-based CPU/GPU/NPU (Zen/RDNA with ML) in one package? They only seem to have an x86 portfolio, with the exception of Samsung utilizing RDNA2 in its Exynos brand.

NVIDIA Grace is for servers, while the Apple M1 Ultra is for consumers. A few questions come to mind:

1) Will Intel, AMD, Qualcomm, Google and NVIDIA copy Apple with their own ARMv9-based CPU/GPU/NPU hybrids with a shit ton of RAM and RAM bandwidth, plus CPU, GPU, and ML cores, for consumers? (trend alert!)
2) Even if they did, what would be the benefit? ARM is mostly for portability, battery life, and 4G/5G. Besides making Windows 10/11 on ARM run as smoothly as the x86 version, what kind of software support would it get (apps, games)? Some Windows 10/11 x86 apps don't work on Windows on ARM, and I can't think of any high-fidelity games built from the ground up for it.
3) I do think ARM-based chips are gaining momentum: Google, NVIDIA, Apple, and Qualcomm are giants that are fully utilizing them. The question is: will Intel and AMD?

Intel only handed out x86 licenses because IBM demanded it, 40 years ago. If Intel had its way, they would be the sole producer of x86 chips.
ARM, on the other hand, doesn't produce chips. They just design them and license the designs, and they have basically three license types.
One lets you manufacture their designs as-is. Many low-cost mobile makers use this type of license, then have the chips made at TSMC, Samsung, SMIC, or another fab.
Another lets a company modify the designs and then produce them. This is what nVidia and Qualcomm are doing.
Finally, there is the license to design a chip from scratch based on the ARM spec. This is what Apple and AWS are doing. Qualcomm did this some years ago, but decided it was cheaper and faster to just improve a bit on ARM's designs.
AMD and Intel have ARM licenses, but make little use of them. In fact, at one point AMD was designing an ARM server chip for AWS, but that went really badly.

Qualcomm is already doing SoCs similar to Apple's. Same with Samsung, and other companies.
The difference is that those companies only have mobile phones and tablets as a market.
Apple also has its desktop and laptop markets. That is why the M1 Ultra exists.
If Apple didn't have a share of the desktop and laptop market, they wouldn't bother making such a big chip.

Neither ARM nor x86 is bound to just one type of market. Both can do low-power and high-power devices.
But x86 started as a PC chip. For decades it was mostly for desktops, laptops and servers. Because of this, most of its designs were made with performance first, ahead of power consumption.
ARM's history is different, and it was used for more specialized devices for a while. In the last decade it became the default option for mobile devices: smartphones and tablets.
Because of this, power consumption and size were given priority.
But these two markets are converging. ARM now has bigger and more powerful chips, the M1 Ultra being one example.
And x86 is also getting more concerned with power usage. That's why Alder Lake has small cores and big cores.

The idea that the ISA defines power usage, size and performance is not correct. x86 is not less efficient than ARM, not in a significant way.
The extra microcode and decoding stage on x86 cost almost inconsequential die area.
More important is the process node. And Apple, being a big investor in TSMC, gets first dibs on the newest and best process nodes.
The advantage of x86 is that a lot of companies depend on x86 compatibility, so they will never change.
The PC market is also dominated by x86, especially gaming. And with consoles also using x86, that is not going to change soon.
ARM has the mobile market cornered.
Apple is doing its own thing, so they can choose whatever they want. They have already changed ISAs several times.
Who knows, maybe they will change again in a decade or two.
 

Sega Orphan

Banned
GPU advancements are outstripping developers' ability to use them at this point.
We have had mesh shader tech out for years and no game uses it. They are saying that AMD's new card will hit 70 TFLOPS, yet the PS5's and XSX's 10-12 TFLOPS aren't anywhere near tapped out.
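For reference, those TFLOP numbers all come from the same simple formula: peak FP32 = shader ALUs × 2 (an FMA counts as two ops) × clock. A quick sanity check against published specs; the 70 TF RDNA3 figure is still only a rumour, so it's left out:

def fp32_tflops(shader_alus: int, boost_clock_ghz: float) -> float:
    """Peak FP32 throughput: ALUs * 2 ops per cycle (FMA) * clock, in TFLOPS."""
    return shader_alus * 2 * boost_clock_ghz / 1000.0

print(f"PS5:           {fp32_tflops(36 * 64, 2.23):.1f} TF")   # 36 CUs x 64 ALUs @ 2.23 GHz
print(f"Xbox Series X: {fp32_tflops(52 * 64, 1.825):.1f} TF")  # 52 CUs x 64 ALUs @ 1.825 GHz
print(f"RTX 3090:      {fp32_tflops(10496, 1.695):.1f} TF")    # 82 SMs x 128 ALUs @ ~1.7 GHz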
 

Dream-Knife

Banned
GPU advancements are outstripping developers' ability to use them at this point.
We have had mesh shader tech out for years and no game uses it. They are saying that AMD's new card will hit 70 TFLOPS, yet the PS5's and XSX's 10-12 TFLOPS aren't anywhere near tapped out.
Developers aren't optimizing as well these days either. I think this will lead to a lot of power being wasted. I fear that when DLSS and FSR become standard, devs will rely on them even more to make up for the lack of optimization.
 

Larogue

Member
I didn't know where else to post this, but they have basically copied Apple?! Granted, it's a server-based CPU. Perhaps Windows 10/11 for ARM can finally have the raw performance it needs to run smoothly. Qualcomm, are you listening?
[Image: NVIDIA Grace]

Vs Apple MCM

[Image: Apple M1 Ultra package]
CoWoS is a TSMC technology; Apple just happens to be the first client to use it.

 

sendit

Member

NVIDIA Next-Gen Gaming GPUs, GeForce RTX 40 ‘Ada Lovelace’ Series, Launching in 2022 & Will Utilize TSMC’s 5nm Process Node

NVIDIA won't be using an MCM design on its Ada Lovelace GPUs so they will keep the traditional monolithic design


Going with the traditional monolithic design could hurt them pretty badly if AMD's MCM design is a beast.

Can't wait. The 3090 is going to be fossil fuel when this thing comes out.
 

A.Romero

Member
Availability has increased over the last week and prices are starting to come down, but this is just natural market behaviour due to Ethereum's uncertain near future and Nvidia and AMD hoarding too many chips. Their measures to counter mining were a joke.

There are more reasons than that:

- LHR made the cards lose about 40% efficiency at mining ETH. This was partly compensated by mining a secondary crypto at the same time, but that's not as attractive as 100% ETH (see the rough sketch below).
- ETH is switching from proof of work to proof of stake during 2022. There's no specific date, but it would surprise me if it doesn't happen this year.

Obviously the crypto market crashing helps, as it doesn't make sense to invest in new mining equipment now; but if you already have the equipment, it doesn't make sense to stop.
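Putting rough numbers on that first point, with a hypothetical 100 MH/s card just to illustrate the 40% figure from the post above:

nominal_mh_s = 100.0  # hypothetical unrestricted ETH hashrate, illustration only
lhr_penalty  = 0.40   # "lost about 40% efficiency at mining ETH"

effective_eth_mh_s = nominal_mh_s * (1.0 - lhr_penalty)
print(f"ETH hashrate under LHR: {effective_eth_mh_s:.0f} MH/s instead of {nominal_mh_s:.0f} MH/s")
# Dual-mining a secondary coin claws back some of the lost revenue, but as noted
# above it is still less attractive than mining ETH at the full, unrestricted rate.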
 