They are quite good at building GPUs, considering how much of a head start Nvidia had. In some key areas (instruction scheduling, resource allocation, ALU utilization) Apple is ahead of Nvidia. I mean, we have Nvidia engineers openly saying that Apple managed to achieve things Nvidia tried and failed to do. There is a lot of evidence that Apple is building a superscalar GPU. It’s all quite exciting stuff for me as a GPU tech enthusiast. In fact, I think that the GPU division is innovating at a much higher pace than the CPU division.

The Blender render score for the 76 core M2 Ultra is 3,220.
The Blender render score for the Nvidia 4090 is 11,200.

 
The Blender render score for the 76 core M2 Ultra is 3,220.
The Blender render score for the Nvidia 4090 is 11,200.

The 4090 is roughly 3.5x faster but consumes about 7.5x the power (not to mention that the M2 figure includes CPU power consumption), and that’s the M2 with no RT hardware.
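Just to make that math explicit, here's a quick back-of-the-envelope sketch. The wattages are assumptions picked to match the ~7.5x claim above (roughly 60 W for the M2 Ultra package under render load, ~450 W for the 4090), not measured figures:

```python
# Back-of-the-envelope perf-per-watt from the Blender scores above.
# The power numbers are assumptions consistent with the 7.5x claim,
# not measurements (~60 W M2 Ultra under load, ~450 W RTX 4090).
scores = {"M2 Ultra (76-core)": 3220, "RTX 4090": 11200}
watts = {"M2 Ultra (76-core)": 60, "RTX 4090": 450}

for name in scores:
    ppw = scores[name] / watts[name]
    print(f"{name}: {scores[name]} pts / {watts[name]} W = {ppw:.1f} pts/W")

# Speed ratio: 11200 / 3220 is about 3.5x; power ratio: 450 / 60 = 7.5x.
# Under these assumptions the M2 Ultra comes out at roughly 2x the
# points per watt, despite the 4090's higher absolute score.
```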
 
I just want the M3 Ultra or something to allow user upgradable parts.

I’m specifically looking at the Mac Pro. The 2019 compared to the M2 Ultra was such a step backward.
 
The Blender render score for the 76 core M2 Ultra is 3,220.
The Blender render score for the Nvidia 4090 is 11,200.

Apple, with the M/M Pro/M Max/M Ultra over 4 years, has put to shame probably hundreds of Nvidia/AMD GPU cards built over decades and decades. And any M1/M2/M3 putting to shame any Intel/AMD integrated GPU SoC is embarrassing, since Intel and AMD have been doing this for decades with huge R&D resources.
 
Not defending NVIDIA or anything but Apple has been designing GPUs since the A series chips...
 
Not defending NVIDIA or anything but Apple has been designing GPUs since the A series chips...
Yes, but for fanless little devices... NVIDIA has been doing it for decades for everything else, especially laptops and every other PC segment.
What does Nvidia have to beat the A series in the mobile segment? The A series and Qualcomm are the main competitors.
Let's sum all of this up like this:
-Nvidia, after 3 decades of R&D and money spent, has the upper hand in the high-end laptop/desktop GPU space
-Apple, in 1 decade, has the upper hand in the mobile segment
-Apple, in 4 years, has the upper hand in the low, mid, and some high-end laptop GPU segments
-Nvidia has nothing in the mobile segment
 
I sure hope it isn’t the last one, but that was the first thought that occurred to me.

Another thought is that Apple could move to an entirely different approach for the high-end desktop. Have one die with lots of CPU cores (plus other basic SoC stuff) and move the GPU to its own die that connects via UltraFusion. That would allow for more customization in terms of CPU-to-GPU balance.

It would probably not make sense to develop such a highly custom design for the existing high-end Mac desktop market. But if Apple were to also use such a design for AI model training in their own data centers, then it might make sense.
Apple was rumored to be developing a discrete GPU based on their integrated GPU architecture during the initial Apple Silicon rollout, but it never materialized (alongside the rumored Jade4C chip, which would have been the 4xMax/2xUltra "Extreme" version that proved too difficult to manufacture). Given the limitations of the M2 Ultra in the Mac Pro compared to its predecessor Xeon chips, it wouldn't surprise me if Apple is looking at ways to break apart the SoC while keeping its benefits, to allow for more scalable GPU performance and memory capacity. Since it would be a largely custom chip for a niche slice of the market, it's entirely possible it would be limited to just the Mac Pro, given its price point.
 
I'm skeptical. The M3 Max die is already huge!
Sorry, I must have missed it. What exactly are the dimensions of the M3 Max die? Or even the M3 die? And I mean quantitatively, in inches or mm or mils. Don’t just point to the same Apple presentation picture without units or physical references. I’ve seen data for the M1 and M2 chip families but not the M3.
 
Sorry, I must have missed it. What exactly are the dimensions of the M3 Max die? Or even the M3 die? And I mean quantitatively, in inches or mm or mils. Don’t just point to the same Apple presentation picture without units or physical references. I’ve seen data for the M1 and M2 chip families but not the M3.
The M3 Max is around the same as the M2 Max, no big difference... so I don't know what Potentpeas means... maybe for him even the M2 Max die is already huge, but then he definitely hasn't seen how much space an Nvidia 4060 + 32 GB RAM + i9 uses.
 
Yes, but for fanless little devices... NVIDIA has been doing it for decades for everything else, especially laptops and every other PC segment.

This is a moot distinction. Performance per watt is performance when available watts are limited by physics.

I'm not about to argue that Apple's GPUs are better than Nvidia's, but we need to get past the fan/no fan distinction. I guarantee you that what Nvidia's engineering team is looking at is how to make their GPUs more power efficient so they can do more in the same thermal envelope.
 
MaxTech is the WORST, do not take anything they say seriously when they have zero technical knowledge or background on any topic outside of maybe iMovie or Pages.

These guys have a habit of pumping out videos with all sorts of wild claims, and then months later they'll follow up with a "WE WERE WRONG" video. Zero credibility.
 
This is a moot distinction. Performance per watt is performance when available watts are limited by physics.

I'm not about to argue that Apple's GPUs are better than Nvidia's, but we need to get past the fan/no fan distinction. I guarantee you that what Nvidia's engineering team is looking at is how to make their GPUs more power efficient so they can do more in the same thermal envelope.
Nvidia missed the train on the mobile segment, where a lot of the money is... and they lost the low/mid GPU segment.
Nvidia just relies on the Windows PC, because those are sold in way higher volume... but from a performance perspective, for Nvidia, with decades of R&D, to be crushed by a beginner is embarrassing... like Intel was on CPUs when the M1 arrived.
The next step is again in the low/mid segment, now that Qualcomm is stepping into the Windows PC space; if they do really well, then sales will be divided in 3, while Apple has a monopoly over macOS.
 
The M3 Max is around the same as the M2 Max, no big difference... so I don't know what Potentpeas means... maybe for him even the M2 Max die is already huge, but then he definitely hasn't seen how much space an Nvidia 4060 + 32 GB RAM + i9 uses.
What the original poster was implying is that the larger the die of a chip, the more likely there are to be lithography imperfections. You eventually reach the point where a chip die is so big that it's difficult to manufacture one without defects. Apple gets a better chip yield by taking two Max chips and stitching them together than by trying to fabricate the Ultra as a single die.

An nVidia discrete GPU and an i9 might take up more physical space laid out on the logic board, but their individual component chip dies are at most the size of even a base M chip, if not smaller.
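To put a rough number on the yield argument, here's a minimal sketch using the classic Poisson yield model, Y = exp(-D·A). The defect density and die areas below are illustrative assumptions, not published process figures:

```python
import math

# Poisson yield model: Y = exp(-D * A), with D = defect density
# (defects per mm^2) and A = die area (mm^2). D and both areas are
# assumed for illustration, not real TSMC/Apple numbers.
D = 0.001                  # ~0.1 defects per cm^2 (assumed)
max_area = 400             # assumed Max-class die, mm^2
ultra_area = 2 * max_area  # hypothetical monolithic Ultra die

y_max = math.exp(-D * max_area)      # ~67% good dies
y_ultra = math.exp(-D * ultra_area)  # ~45% good dies

print(f"{max_area} mm^2 Max die:              {y_max:.1%} good")
print(f"{ultra_area} mm^2 monolithic Ultra die: {y_ultra:.1%} good")
```

Because chiplets can be tested before bonding, only known-good Max dies get paired, so a defect scraps 400 mm² of silicon instead of 800 mm², which is where the yield advantage comes from.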
 
I sure hope it isn’t the last one, but that was the first thought that occurred to me.

Another thought is that Apple could move to an entirely different approach for the high-end desktop. Have one die with lots of CPU cores (plus other basic SoC stuff) and move the GPU to its own die that connects via UltraFusion. That would allow for more customization in terms of CPU-to-GPU balance.

It would probably not make sense to develop such a highly custom design for the existing high-end Mac desktop market. But if Apple were to also use such a design for AI model training in their own data centers, then it might make sense.
Yeah, good point. If they eat their own dog food internally (seems increasingly likely), or even (though unlikely) introduce an AI-focused product using unified memory as a selling point (rebirth of the XServe!), that would bring the economies of scale in.
 
Nvidia missed the train on the mobile segment, where a lot of the money is... and they lost the low/mid GPU segment.
Nvidia just relies on the Windows PC, because those are sold in way higher volume... but from a performance perspective, for Nvidia, with decades of R&D, to be crushed by a beginner is embarrassing... like Intel was on CPUs when the M1 arrived.
The next step is again in the low/mid segment, now that Qualcomm is stepping into the Windows PC space; if they do really well, then sales will be divided in 3, while Apple has a monopoly over macOS.
Nvidia decided some time ago that the G in GPU means AI, and that decision is doing quite well for the company. Part of the early-mover advantage they got from that decision was having built an ecosystem and reputation around Nvidia in machine learning that will be hard for others to supplant. I don't think they care at all about mobile benchmarks, and I don't think it will hurt their business that they don't.

I also don't think it's a problem that Apple isn't spewing out tons of noise on AI. They'll remain as competitive in mobile inferencing performance as they need to be, while continuing to deliver a well-rounded product. They aren't a component company, they're a systems company.
 
MaxTech is the WORST, do not take anything they say seriously when they have zero technical knowledge or background on any topic outside of maybe iMovie or Pages.

These guys have a habit of pumping out videos with all sorts of wild claims, and then months later they'll follow up with a "WE WERE WRONG" video. Zero credibility.
With a sponsor conveniently in every video.
 
Nvidia decided some time ago that the G in GPU means AI, and that decision is doing quite well for the company. Part of the early-mover advantage they got from that decision was having built an ecosystem and reputation around Nvidia in machine learning that will be hard for others to supplant. I don't think they care at all about mobile benchmarks, and I don't think it will hurt their business that they don't.

I also don't think it's a problem that Apple isn't spewing out tons of noise on AI. They'll remain as competitive in mobile inferencing performance as they need to be, while continuing to deliver a well-rounded product. They aren't a component company, they're a systems company.
Agree on the AI part, of course; AI is now the big bubble.
 
What the original poster was implying is that the larger the die of a chip, the more likely there are to be lithography imperfections. You eventually reach the point where a chip die is so big that it's difficult to manufacture one without defects. Apple gets a better chip yield by taking two Max chips and stitching them together than by trying to fabricate the Ultra as a single die.

An nVidia discrete GPU and an i9 might take up more physical space laid out on the logic board, but their individual component chip dies are at most the size of even a base M chip, if not smaller.
Yes, maybe, and I'm hoping for just a single die. Apple clearly took a different path with the M3 family, since the M3 Max is quite different from the M3 Pro and not a straight scaling like the M1 and M2 were.
Hmm, I don't know if you know the physical dimensions of the M3 Max versus the i9 + discrete 4070 with 32 GB RAM (what I have, for example)... the M3 Max clearly takes less space.
But we are going off topic way too much.
So let's hope the M3 Ultra will be a single die, even if it costs Apple much more money than 2x M3 Max... but let's not forget: Apple is about profit like a fish is hungry for water.
 