Even with the M1 Ultra vs 3090 comparison?

Rigged, 'cherry-picked' stats. They got ripped for hundreds of posts in the Mac Pro forums for that. Notice they haven't gone back to that 'well' since the M1, through the M2 and M3 rollouts (in addition to there being no M4 Ultra).

That was a significant blunder for Apple marketing. There was nothing they could say to the hyper-modular fans of the 3090. Nothing. That stat was like throwing gas on the fire. Most folks with a W6800 duo for computation are looking at that as a 'fail'.

It was a weak attempt to snag more of the W5700 (and maybe single W6800 + Afterburner) and earlier GPU users into the Studio because the Mac Pro was going to be missing in action for a long while, more than it was trying to threaten the 3090.

It was like when Apple bragged about the PowerMac G5 being the first 64-bit personal workstation while Boxx had been shipping an AMD64 model for months. One too many cups of Cupertino kool-aid.

Even MacRumors didn't buy that kool-aid. A front-page story from that era:

"... Despite Apple's claims and charts, the new M1 Ultra chip is not able to outperform Nvidia's RTX 3090 in terms of raw GPU performance, according to benchmark testing performed by The Verge. ..."


Maybe vendors like throwing out 'softballs' for the tech press to trounce from time to time, so the 'tech hype press' retains some credibility as objective. Not sure who Apple thought they were going to fool (AMD and Intel do the same thing on a regular basis). Perhaps the folks who aspire to maybe own an x090 one day, rather than the folks who actually require one.
 
Now that Intel will be producing chips with Nvidia graphics built in, should Apple consider burying the hatchet with Nvidia and incorporating their GPUs into Apple Silicon? While Apple has been successful at designing powerful CPUs, they have had less success with GPUs, which are increasingly important for AI.
Really depends on the AS architecture and what NVIDIA has to offer. Integration with others increases lifecycle R&D costs. It could happen if AS hits some limits in the next year, NVIDIA is poised to offer significant improvements, everybody plays together, and the supply markets work out. I think Apple should consider it and at least see if major performance improvements are somehow possible by leveraging NVIDIA IP.
 
Really depends on the AS architecture and what NVIDIA has to offer. Integration with others increases lifecycle R&D costs. It could happen if AS hits some limits in the next year, NVIDIA is poised to offer significant improvements, everybody plays together, and the supply markets work out. I think Apple should consider it and at least see if major performance improvements are somehow possible by leveraging NVIDIA IP.

The Apple-Nvidia relationship was broken back in 2008, when Nvidia GPUs were literally detaching from logic boards due to a faulty design and the "bumpgate" issues. Also, there is no way Apple could integrate Nvidia GPUs with Apple Silicon without a complete redesign of the SoCs, likely dropping UMA in the process. There's also the cost factor associated with Nvidia, which would likely force Apple to raise its pricing across the board, and the fact that Nvidia is reluctant to build custom GPUs for its partners.
 
Really depends on the AS architecture and what NVIDIA has to offer. Integration with others increases lifecycle R&D costs. It could happen if AS hits some limits in the next year, NVIDIA is poised to offer significant improvements, everybody plays together, and the supply markets work out. I think Apple should consider it and at least see if major performance improvements are somehow possible by leveraging NVIDIA IP.

Significant improvements are likely only possible on the Mac Pro, since most Macs are too thermally constrained to achieve meaningful performance with Nvidia hardware. Given how niche that product is, the R&D costs would be excessive. Not to mention that this would completely break Apple's programming model and require developers to write two different codebases.
 
I was originally opposed to this; however, there needs to be some kind of compatibility with CUDA libraries.
I am trying to run ComfyUI locally on my M4 Pro Mac mini (64GB/1TB).
I am finding the models are all written for CUDA, and do NOT embrace Apple Silicon.
Some will work; most do not. You have to load much lower-grade models, and finding them is a lot of work.
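
To make the "written for CUDA" problem concrete, here is a rough sketch (illustrative only, assuming a PyTorch build with MPS support; MyModel is just a placeholder) of the difference between the hardcoded style a lot of these repos use and device-agnostic code that can fall back to Apple's Metal backend:

import torch

# Hardcoded style common in CUDA-only repos; it breaks on a Mac with no NVIDIA GPU:
#   model = MyModel().cuda()
#   x = x.to("cuda")

# Device-agnostic style: use whichever accelerator is actually present.
if torch.cuda.is_available():
    device = torch.device("cuda")    # NVIDIA GPU
elif torch.backends.mps.is_available():
    device = torch.device("mps")     # Apple Silicon GPU via Metal
else:
    device = torch.device("cpu")     # plain CPU fallback

x = torch.randn(4, 8, device=device)
w = torch.randn(8, 2, device=device)
print(device, (x @ w).shape)         # runs on whichever backend was selected

Anything that skips a check like that and calls .cuda() directly is exactly the stuff that falls over on the Mini.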

Shrug, gonna have to build another expen$ive Linux box, this time with mega RAM and an RTX 5090.
Sigh...
 
I was originally opposed to this; however, there needs to be some kind of compatibility with CUDA libraries.
I am trying to run ComfyUI locally on my M4 Pro Mac mini (64GB/1TB).
I am finding the models are all written for CUDA,

Nvidia's basic stance that "Metal has to lose for CUDA to win" is largely why Apple isn't going to partner with Nvidia.

Intel's oneAPI (leveraging SYCL) and AMD's ROCm (leveraging HIP) take somewhat more of a heterogeneous/portable approach (although both are a bit skewed toward x86-64 CPU host-driven setups). Those are systems where, if the developer wants to write more portable accelerator code, they can. The core problem, though, is whether developers want to write portable code or not.

The other problem is that just about every AI vendor strongly desires to build the kind of moat around their hardware that Nvidia has built with CUDA. They want to pull apps onto their own hardware after optimization. [e.g., Nvidia did a bit of an "embrace, delay, extinguish" move on OpenCL]


As far as binary translation for CUDA goes, Nvidia has rattled the sabre enough on that topic that Intel, AMD, and others don't really want to mess with that can of worms. There is an open-source project, ZLUDA, but it doesn't have a major sponsor.

and do NOT embrace Apple Silicon.
Some will work; most do not. You have to load much lower-grade models, and finding them is a lot of work.

Chuckle, you would think that some "smart AI thingy" could present/sort whether stuff has CUDA-only bits in it or not.
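
For what it's worth, a quick-and-dirty sketch of that kind of check (directory name and regex patterns purely illustrative) doesn't even need the AI:

import re
from pathlib import Path

# Flag .py files that hardcode CUDA, and note whether they also mention an MPS fallback.
CUDA_PATTERN = re.compile(r'\.cuda\(|torch\.device\(["\']cuda|["\']cuda:\d')
MPS_PATTERN = re.compile(r'\bmps\b')

def scan(repo: Path) -> None:
    for py in sorted(repo.rglob("*.py")):
        text = py.read_text(errors="ignore")
        cuda_hits = len(CUDA_PATTERN.findall(text))
        if cuda_hits:
            fallback = "has" if MPS_PATTERN.search(text) else "no"
            print(f"{py}: {cuda_hits} hardcoded CUDA reference(s), {fallback} MPS fallback")

scan(Path("custom_nodes"))   # e.g. point it at a ComfyUI custom_nodes folder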


Shrug, gonna have to build another expen$ive Linux box, this time with mega RAM and an RTX 5090.
Sigh...

Integrating into the Apple SoC die (or multi-chip package) isn't all that necessary. Setting things up on the Mac Pro so that there was a wider ecosystem for "AI accelerator" cards (as opposed to cards that run the GUI graphics) would push fewer folks off the platform.

Or at least enhance the core hypervisor so it could do direct PCIe card passthrough to a VM (if macOS is going to ignore that type of accelerator card).
 