
Apple's M1 Max GPU is as powerful as an Nvidia RTX 2080 desktop GPU and the Sony PS5 gaming console

DrAspirino

Banned
I'd rather take my $3500 and build a proper gaming PC. Apple makes scalpers look like the good guys.
...aaaaand that's where you're wrong if you live in California, Colorado, Hawaii, Oregon, Vermont or Washington, since those states have very strict power-efficiency laws that actually forbid manufacturers from shipping power-hungry gaming PCs or inefficient PSUs.

If you live outside of those states, then it makes sense to build a gaming PC with that money, ALTHOUGH I would highly recommend looking up your country's power-efficiency laws, since they're getting stricter by the day.

And Apple's REAL achievement wasn't getting to 10.4 TFLOPS - any company can brute-force their way to that - but actually achieving it USING 100 WATTS. AMD, Intel, and nVidia should take note of how to do things right in this day and age, because power efficiency right now is just as important as performance.
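For scale, the perf-per-watt gap works out like this (a minimal Python sketch; the RTX 2080 numbers are approximate public specs, about 10.1 FP32 TFLOPS at a 215W TDP, not measurements):

```python
# Rough performance-per-watt comparison of the two GPUs.
# Apple's claimed M1 Max figures vs. approximate RTX 2080 specs.
gpus = {
    "M1 Max":   (10.4, 100),   # (FP32 TFLOPS, watts)
    "RTX 2080": (10.1, 215),
}

for name, (tflops, watts) in gpus.items():
    print(f"{name}: {tflops / watts * 1000:.0f} GFLOPS per watt")

# M1 Max: 104 GFLOPS per watt
# RTX 2080: 47 GFLOPS per watt
```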
 
Last edited:
...aaaaand that's where you're wrong if you live in California, Colorado, Hawaii, Oregon, Vermont or Washington, since those states have very strict power-efficiency laws that actually forbid manufacturers from shipping power-hungry gaming PCs or inefficient PSUs.

If you live outside of those states, then it makes sense to build a gaming PC with that money, ALTHOUGH I would highly recommend looking up your country's power-efficiency laws, since they're getting stricter by the day.

And Apple's REAL achievement wasn't getting to 10.4 TFLOPS - any company can brute-force their way to that - but actually achieving it USING 100 WATTS. AMD, Intel, and nVidia should take note of how to do things right in this day and age, because power efficiency right now is just as important as performance.
Didn't that affect ONLY pre-builts? I'm pretty sure people with any bit of interest would still build a PC over getting a Mac with an M1 chip. The M1 is amazing, but without proper gaming support, it's not worth it to people on this forum.

Now if they were doing nothing but video and picture editing, I could completely understand going the Apple route.
 

Codes 208

Member
For graphics design, art and editing this sounds amazing. But as a gamer it comes off as overkill, not because it's too much power but because the only noteworthy game I can even think of that runs on Apple is Team Fortress 2.
 

twilo99

Member
And Apple's REAL achievement wasn't getting to 10.4 TFLOPS - any company can brute-force their way to that - but actually achieving it USING 100 WATTS. AMD, Intel, and nVidia should take note of how to do things right in this day and age, because power efficiency right now is just as important as performance.

That is correct, the efficiency here is incredible.

P.A. Semi - Wikipedia

^ An absolutely stellar acquisition by them in 2008. That engineering team has outclassed and outperformed the whole segment for over a decade now. Incredible, really.
 
It's pretty weird that Google allows their staff to use Apple but not Windows.
Is it because they think Windows is just bad, or because of some deep-seated, company-wide competitive doctrine?

Apple is Google's bigger competitor anyway.
Macs offer a better dev environment. It comes down to the command line and macOS being Unix-based. If you're an app developer, it's also the only thing that runs Xcode. Windows does great things, but macOS (especially when it had Boot Camp) just offers more value for devs.

Also understand, Google and Apple were historically close during the early 2000s, before Android launched. Then Steve Jobs declared war on them, but they've drifted closer together again in recent years… just not publicly.
 
Last edited:

Bitmap Frogs

Mr. Community
For graphics design, art and editing this sounds amazing. But as a gamer it comes off as overkill, not because it's too much power but because the only noteworthy game I can even think of that runs on Apple is Team Fortress 2.

EVE Online has an official Mac client (universal) and so does WoW.
 

MrFunSocks

Banned
What a shame there are barely any games to make use of it though. Apple are still not doing enough to encourage developers to port AAA titles.
That's coming. They are 100% making a push into video games. First they're getting the in-house hardware ready, then the tools will follow, then the chequebook will come out. Sony/MS/Nintendo would be getting very worried right about now. Apple's silicon is making AMD/nVidia/Intel look like fools.
 
Last edited:

Futaleufu

Member
What a shame there are barely any games to make use of it though. Apple are still not doing enough to encourage developers to port AAA titles.
Isn't Apple infamous for dropping legacy support real quick? Can you run 32-bit games on current products? Why would you want to build a game library on their platform?
 

T-Cake

Member
Isn't Apple infamous for dropping legacy support real quick? Can you run 32-bit games on current products? Why would you want to build a game library on their platform?

That is very true, but I think now that they have their own chips in place, compatibility should get better in the long run. We're talking decades before they drop 64-bit for 128-bit or something like that. But it's all moot anyway. I can't afford M1 Max prices. :p
 

Xyphie

Member
Even if your game is ARM+Metal native, I wouldn't expect Apple to maintain stuff like legacy Metal versions indefinitely. If you make a Mac game, your expectation should be that it will eventually stop running on the latest OS release unless you actively maintain it.
 
That's coming. They are 100% making a push into video games. First they're getting the in-house hardware ready, then the tools will follow, then the chequebook will come out. Sony/MS/Nintendo would be getting very worried right about now. Apple's silicon is making AMD/nVidia/Intel look like fools.
It seems like they only greenlight projects that lock users/devs into their platform (SpriteKit, AUv3, M1). As long as they keep doing that, they aren't going to get any game support outside of whatever exclusives they buy. There's just no future where Apple exclusivity makes sense for game companies.
 

Dream-Knife

Banned
Yeah, sure, if you want to play games then your $3500 is better put into a PC.

These laptops are not aimed at people who want to play games. Just like you might go spend $3500 on a PC for gaming, there are people out there who would pay $3500 for a computer they can use to improve their coding, rendering, or music production performance. I'm in the UK and the 16" MacBook Pro tops out at £5,900. For someone who makes £200-250 a day it quickly pays for itself; earning £200/day in a 5-day week, it'd pay itself off in about a month and a half. When you need a tool for your job you want something high quality that will get the job done easier/faster, right?

Disregard. Forgot about people building apps and emulating hardware.
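For what it's worth, the payback arithmetic in the quoted post holds up (a minimal Python sketch using only the figures quoted above):

```python
# Back-of-the-envelope payback period for the quoted MacBook Pro price.
price = 5900       # GBP, top-spec 16" MacBook Pro (figure from the post)
day_rate = 200     # GBP earned per working day (figure from the post)
days_per_week = 5

weeks = price / (day_rate * days_per_week)
print(f"{weeks:.1f} weeks, roughly {weeks / 4.33:.1f} months")
# 5.9 weeks, roughly 1.4 months
```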
 
Last edited:

Mattyp

Gold Member
It's truly weird to see people shitting on innovation. Apple have had, and continue to have, the most powerful custom phone chipsets for as long as I can remember. All in-house, all their work.

They release the M1, amazing for what it does density-wise, in heat, power, and cooling. Within 12 months they release the next generation with 40% leaps. And people are comparing it to GPUs 20 times its physical size and going "but see!!!"

Shit on Apple all you want, but their R&D chipset department is no joke and has to be the best in the world by far. It's a shame it's locked to Apple hardware; I can only imagine what something the size of a 3090 would produce with the density and efficiency they manage.

And this is from someone who doesn't own a Mac and, at this point, never will.
 
Last edited:

winjer

Gold Member
It's truly weird to see people shitting on innovation. Apple have had, and continue to have, the most powerful custom phone chipsets for as long as I can remember. All in-house, all their work.

The way you talk, it's as if Apple doesn't owe a great deal of their performance advantage to TSMC, and to people like Jim Keller going in and setting up the R&D teams and product development.
 
It's truly weird to see people shitting on innovation. Apple have had, and continue to have, the most powerful custom phone chipsets for as long as I can remember. All in-house, all their work.

They release the M1, amazing for what it does density-wise, in heat, power, and cooling. Within 12 months they release the next generation with 40% leaps. And people are comparing it to GPUs 20 times its physical size and going "but see!!!"

Shit on Apple all you want, but their R&D chipset department is no joke and has to be the best in the world by far. It's a shame it's locked to Apple hardware; I can only imagine what something the size of a 3090 would produce with the density and efficiency they manage.

And this is from someone who doesn't own a Mac and, at this point, never will.
Yep. Imagine if we could have Apple hardware in consoles (in a dream world where every game is natively supported and fully optimized). Just imagine what Apple could do with a 250W power budget.
 

Zimmy68

Member
It means absolutely zero. I don't care if it's as powerful as 2 PS5s together; it doesn't matter if there is no content for it.
The first iPad Pro was supposed to be as powerful as an Xbox One, but have you seen anything released that looks like a mid-level XBO game? Maybe the NBA 2K they showed, maybe???
 

sobaka770

Banned
I think the M1 is an impressive chip but let's be clear:

- 5nm technology is a large part of the performance leap. Any reduction in size allows more stuff to be crammed in, so that 40% gain can be attributed in large part to going from the 7nm to the 5nm process.
- Their chips run in a very closed ecosystem designed for specific products, and anything else requires emulation, which basically makes these processors weaker than the competition.

I think all chip manufacturers are doing a great job of competing against each other. The main difference with Apple is that Qualcomm makes a chip to be used in many different phone configurations and open programs (like Intel), while the Apple M1 is pretty much not usable anywhere outside of a specifically configured Mac.
 
I think the M1 is an impressive chip but let's be clear:

- 5nm technology is a large part of the performance leap. Any reduction in size allows more stuff to be crammed in, so that 40% gain can be attributed in large part to going from the 7nm to the 5nm process.
- Their chips run in a very closed ecosystem designed for specific products, and anything else requires emulation, which basically makes these processors weaker than the competition.

I think all chip manufacturers are doing a great job of competing against each other. The main difference with Apple is that Qualcomm makes a chip to be used in many different phone configurations and open programs (like Intel), while the Apple M1 is pretty much not usable anywhere outside of a specifically configured Mac.
Which generally means nothing for the general or even enthusiast crowd. Someone could have a GPU with 3x the performance and 10x the power efficiency... what does that mean? Nothing for the average gamer. If I can't play x, y, z games at the same framerates as before, what does it matter?
 

UnNamed

Banned
The only meaningful benchmark: run Tomb Raider and Premiere on both the M1 Max (4K) and the Razer (3K) and tell me which runs better.
 

twilo99

Member
Apple is on TSMC's 5 nm process node and Apple doesn't own TSMC.

Who said anything about them owning a fab?

PA Semi was a fabless chip designer, which is what Apple inherited. They design the architecture, TSMC does the production. Just like AMD.
 
Being discussed already on the other thread: https://www.neogaf.com/threads/apple-preparing-a-‘portable-hybrid-console’-with-a-brand-new-soc-that-offers-enhanced-gpu-performance-increase.1604948/page-7

This is a bigger SoC, but still smaller than the GPUs from AMD and Nvidia.
Apple could very well make GPU-only dies to offer a standalone upgrade for the big Mac that would shatter anything AMD could offer. It's not just Intel that lost a market; AMD did too.
This is a rather bold proclamation.

There have not been any comparisons between the M1's GPU architecture and RDNA2, let alone RDNA3.
 
lack of memory bandwidth?
Did you see the specs? The M1 Max has over 400GB/s of bandwidth thanks to a 512-bit memory interface and LPDDR5-6400, and it can use up to 64GB of unified memory.

Do you have a link?

Wasn't able to find the specs.

Although I'm curious how you get 400 GB/s with LPDDR5-6400?
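The arithmetic behind that 400GB/s claim is straightforward once the bus width is known (a minimal Python sketch; the 512-bit bus and 6400 MT/s data rate are the figures quoted earlier in the thread):

```python
# Peak bandwidth of a 512-bit LPDDR5-6400 memory interface.
bus_width_bits = 512
transfers_per_second = 6400e6   # LPDDR5-6400 = 6400 MT/s per pin

bytes_per_second = (bus_width_bits / 8) * transfers_per_second
print(f"{bytes_per_second / 1e9:.1f} GB/s")   # 409.6 GB/s
```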
 
Last edited:
This is a rather bold proclamation.

There have not been any comparisons between the M1's GPU architecture and RDNA2, let alone RDNA3.

Their GPU already performs well enough to play games, and their media engines have no competition.
Why would they need anything from AMD anymore? They don't.
 
Their GPU already performs well enough to play games, and their media engines have no competition.
Why would they need anything from AMD anymore? They don't.
They don't need AMD. Even if they did, they wouldn't want AMD, because they want total vertical integration.

That doesn't mean they're going to suddenly outperform Nvidia or AMD in the GPU space.
I'm sure Jade 2C and 4C will be impressive. But it's not like Nvidia or AMD are standing still either.
 

Schnozberry

Member
I think besides "gaming" they have the rest covered.

For content creation it'll be fantastic hardware, particularly if you're heavily invested in Apple's software, which benefits from their in-house optimization.

It'll be interesting to see how Intel/AMD respond to the M1. Apple's vertical integration makes an SoC approach like the M1 a lot more feasible. PC OEMs don't operate with that kind of business model.
 

iQuasarLV

Member
If I were to hazard a guess at paper performance, minus 10-15% for overhead, I would put the 32-core M1 Max at the level of:
Radeon 6600M mobile GPU
Nvidia 3060M series

at around 50-62% of the power usage (6600M @ 90W, 3060M @ 80W) of the AMD and Nvidia counterparts. That would mean 1080p gaming at 60-100fps on a $3500 laptop. So for a stupid premium you are getting extreme power efficiency against today's mobile offerings from AMD/Nvidia. I would not game at native resolution, obviously.
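Working backwards from those ratios gives the implied GPU power draw (a minimal Python sketch; the ~50W M1 Max GPU figure is an assumption chosen to sit inside the poster's 50-62% range, and the competitor TDPs are the post's own numbers):

```python
# Implied power ratio of an assumed ~50W M1 Max GPU vs. the quoted TDPs.
competitors = {"Radeon 6600M": 90, "RTX 3060 Mobile": 80}   # watts, from the post
m1_max_gpu_watts = 50                                       # assumption

for name, watts in competitors.items():
    print(f"vs {name}: {m1_max_gpu_watts / watts:.0%} of the power")

# vs Radeon 6600M: 56% of the power
# vs RTX 3060 Mobile: 62% of the power
```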
 

ZoukGalaxy

Member
Yes sure Jan.

 
I love how you were so fast to dismiss that claim without having even looked at the specs. Never stop being a fanboy.

Anyway, here they are: https://www.apple.com/newsroom/2021...the-most-powerful-chips-apple-has-ever-built/

I saw a 3GB/s number in the OP and assumed it was the bandwidth. I looked for the specs and couldn't find them. There's nothing fanboyish about a knee-jerk reaction.

I admit it was premature and knee-jerk, but it was rooted in a very logical assessment of the admittedly limited and incorrect info I was presented with.

Rather than making baseless accusations, you'd come across as less of an ass if you just corrected my mistake gracefully. I guess that's too much to ask of someone like you, though.
 
Last edited:

rnlval

Member
For content creation it'll be fantastic hardware, particularly if you're heavily invested in Apple's software, which benefits from their in-house optimization.

It'll be interesting to see how Intel/AMD respond to the M1. Apple's vertical integration makes an SoC approach like the M1 a lot more feasible. PC OEMs don't operate with that kind of business model.
AMD is already selling defective-yield PS5 APUs into the PC market as the AMD 4700S APU with 256-bit GDDR6-14000. The 4700S APU has its 40 CU iGPU disabled.

AMD could have sold the XSX APU through AMD's graphics card product channels. Most AIB PC graphics cards are missing a CPU, a southbridge**, an ACPI HAL, and a UEFI boot loader.

**AMD Zen already has a southbridge function to operate in single-chip SoC mode. Some low-cost A520 motherboards don't include AMD's southbridge chipset.

The AMD 4700S/PS5 APU is effectively an AMD graphics card with a Zen 2 SoC attached. A major issue with GDDR6 is supply.
 
Last edited:

Leo9

Member
So not that good unless it is offscreen lol

I really want to see real benchmarks because right now it seems to be only PR marketing.
You should only use offscreen numbers to make a comparison.
Offscreen = same resolution.
Onscreen = the M1 is rendering at almost 4K while that 3080 is driving a lower resolution, probably 1440p (or the M1 is capped at 120fps because of vsync).
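To see why onscreen numbers mislead, compare the pixel counts each GPU is pushing (a minimal Python sketch; 3456x2234 is the 16" MacBook Pro panel, while the 2560x1440 panel for the 3080 laptop is an assumption following the post above):

```python
# Pixels per frame rendered onscreen by each machine.
mbp_pixels = 3456 * 2234       # 16" MacBook Pro, near-4K: ~7.7M pixels
laptop_pixels = 2560 * 1440    # assumed 1440p panel: ~3.7M pixels

print(f"The M1 pushes {mbp_pixels / laptop_pixels:.1f}x the pixels per frame")
# The M1 pushes 2.1x the pixels per frame
```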
 
Last edited:

Bluntman

Member
Okay, so, let's get this straight.

The M1 (Max, Pro) has an extremely wide, 128-way execution block, i.e. a 4096-bit vector engine / execution unit.

This has two upsides:
  1. More theoretical TFLOPS with fewer transistors
  2. It's good for the workloads the MacBook Pro is designed for. Hint: it's not gaming.
For many complicated reasons (mainly memory-overlapping issues) this design is shit for complex games using complex shaders.

It's good for mobile benchmark software, so yes, you're going to get nice scores. It's also good for less complex mobile games. And of course for the pro workloads the MBP is intended for.

For AAA games using complex (compute) shaders it's a big no-no. That's why Nvidia and AMD aren't doing this. They could, and they could easily claim insane theoretical TFLOPS, but it's just not good for gaming.
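As a sanity check, this wide layout does reproduce Apple's headline number (a minimal Python sketch; the ~1.27 GHz clock is an assumption chosen to match the quoted 10.4 TFLOPS, and 128 ALUs per core is the 128-way block described above):

```python
# Theoretical FP32 throughput of the M1 Max GPU from its ALU count.
gpu_cores = 32            # M1 Max GPU cores
alus_per_core = 128       # the 128-wide execution block per core
flops_per_alu_cycle = 2   # one fused multiply-add counts as 2 FLOPs
clock_hz = 1.27e9         # assumed clock, chosen to match Apple's figure

tflops = gpu_cores * alus_per_core * flops_per_alu_cycle * clock_hz / 1e12
print(f"{tflops:.1f} TFLOPS")   # 10.4 TFLOPS
```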
 

rnlval

Member
Okay, so, let's get this straight.

The M1 (Max, Pro) has an extremely wide, 128-way execution block, i.e. a 4096-bit vector engine / execution unit.

This has two upsides:
  1. More theoretical TFLOPS with fewer transistors
  2. It's good for the workloads the MacBook Pro is designed for. Hint: it's not gaming.
For many complicated reasons (mainly memory-overlapping issues) this design is shit for complex games using complex shaders.

It's good for mobile benchmark software, so yes, you're going to get nice scores. It's also good for less complex mobile games. And of course for the pro workloads the MBP is intended for.

For AAA games using complex (compute) shaders it's a big no-no. That's why Nvidia and AMD aren't doing this. They could, and they could easily claim insane theoretical TFLOPS, but it's just not good for gaming.
The RTX 3080 has TFLOPS coming from CUDA cores, RT cores, Tensor cores, the TMUs' floating-point texture filtering hardware, PolyMorph units (geometry is a floating-point data format), and ROPs (floating-point-capable blending hardware).

Most TFLOPS debates between AMD and Nvidia only cover the shader TFLOPS.
 

Bluntman

Member
The RTX 3080 has TFLOPS coming from CUDA cores, RT cores, Tensor cores, the TMUs' floating-point texture filtering hardware, PolyMorph units (geometry is a floating-point data format), and ROPs (floating-point-capable blending hardware).

Most TFLOPS debates between AMD and Nvidia only cover the shader TFLOPS.

We are talking about TFLOPS as the theoretical maximum compute capability of the vector ALUs, nothing else.
 