
Now that Intel will be producing chips with Nvidia graphics built in, should Apple consider burying the hatchet with Nvidia and incorporating their GPUs into Apple Silicon? While Apple has been successful at designing powerful CPUs, they have had less success with GPUs, which are increasingly important for AI.
 
Intel partnering with Nvidia is not as new as you think. Nvidia previously built Intel chipsets with integrated graphics (9400M, which was used in a MacBook a while ago), and Intel+AMD partnered to build a combined APU with HBM memory (which was not successful, contrary to expectations).

As to Apple partnering with Nvidia... what would be the point? That would maybe make some sense for products like Mac Pro or (to a lesser degree) Studio, and it would entirely mess up Apple's GPU software strategy. I mean, if Apple were willing to use 100+ watt GPUs in their computers, why not just ship larger and hotter Apple GPUs? For AI, Apple already ships more memory than Nvidia's consumer solutions (except the low-volume Spark), and while Apple is indeed behind in ML acceleration, they are pouring a lot of effort into it. Based on the A19 announcement, I'd expect the M5 Max to be close to the RTX 5070 for AI on some workflows, with more performance and features sure to come later.
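
To make the memory point concrete, here is a back-of-the-envelope sketch in Swift. The model sizes and quantization levels are illustrative assumptions, not benchmarks; the arithmetic just shows why unified memory capacity matters for running models locally:

```swift
// Rough weight-only footprint of a model: parameters × bits per weight.
// Illustrative numbers; a real runtime also needs room for the KV cache
// and activations on top of this.
func weightFootprintGB(paramsBillions: Double, bitsPerWeight: Double) -> Double {
    paramsBillions * bitsPerWeight / 8.0  // 1e9 params × (bits/8) bytes = GB
}

let configs: [(String, Double, Double)] = [
    ("8B @ FP16", 8, 16),    // ~16 GB: already tight on a 12-16 GB consumer card
    ("70B @ FP16", 70, 16),  // ~140 GB: cloud territory
    ("70B @ 4-bit", 70, 4),  // ~35 GB: fits in a 64/128 GB unified-memory Mac
]

for (name, params, bits) in configs {
    let gb = weightFootprintGB(paramsBillions: params, bitsPerWeight: bits)
    print("\(name): ~\(Int(gb)) GB of weights")
}
```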
 
Now that Intel will be producing chips with Nvidia graphics built in, should Apple consider burying the hatchet with Nvidia and incorporating their GPUs into Apple Silicon? While Apple has been successful at designing powerful CPUs, they have had less success with GPUs, which are increasingly important for AI.

No. Intel's move is based on being in an extremely weak position. Apple isn't in one.

First, Nvidia has no solution for the bulk of the Apple lineup (iPhone/iPad), which means Apple would have to run dual-track GPUs. Intel, by contrast, is substantively behind on GPUs for the bulk of their lineup; they are not walking away from a dominant position. Intel used to ship vastly higher quantities of GPUs than Nvidia, but they bungled that, so now they're waving the white flag.

Similarly, it would make zero sense for Nvidia to plow several billion dollars into Apple. Apple has money. Intel is the one sliding toward the position AMD was in several years ago, wondering whether it could keep paying the light bill. Intel needs the money, badly. (Nvidia is in part propping up the US minority stake the current administration tossed into the company, hence the likely 'free pass' from the FTC/DoJ on this deal. Slim chance it gets fast-tracked in China, though. There is zero US taxpayer money invested in Apple, so a Nvidia-Apple move would be unlikely to get the same free pass, and Apple has enough problems with the government on its tail already.)

There's a pretty good chance that in a couple of years Intel will be quoting Lando Calrissian: "This deal is getting worse all the time." Nvidia isn't quitting its Arm/GPU SoCs for the data center at all. This is likely one of those "embrace, extend, extinguish" moves: Nvidia is just navigating the transition so it can nuke the x86 cores once it has made deeper inroads into the data center market. (Arm in the data center is in the 15-25% range now. If that gets to 50-50, what value will Intel bring to the table? Especially if AMD holds 25-40% of that remaining x86 half?)

Intel has no answer for Strix Halo, and it substantively looks like they won't have an answer for Medusa Halo (Zen 6 + RDNA 5) either. The M-series Max is in decent competitive shape versus AMD's current or upcoming offerings in the laptop space if battery life matters at all. There are rumors Intel will try to combine a much larger GPU chiplet with the CPU, but that is mainly just plain catch-up to what AMD has already delivered.

Nvidia is in less of a trailing position than AMD when it comes to fusing an x86 CPU + GPU into data center packages. Apple isn't particularly trying to sell a commercial data center product, so it really doesn't matter there at all. Intel is going to have trouble holding onto the data center AI hardware team they have, if they don't kill it outright following this deal.

Intel is likely to end up with a backsliding iGPU group at the end of this. It is a short-term fix with little long-term upside. If the manic AI hardware bubble collapses, Nvidia will dump this project relatively quickly. (Long term, this isn't strategic for Nvidia.)


Second, the notion that Apple has not had success in GPUs for most of the products they make is myopic at best. "If it didn't get the hottest, most hyped AAA game port, it's a fail" is an extremely poor metric. Are the iPhones relatively underpowered GPU-wise versus the competition? Nope. iPads? Nope. Untethered headset? Nope. Apple TV? Nope. MacBook Air? Nope. Mac mini (at its frequent sale price in the $500 range)? Nope (go look at the Windows Copilot+-capable small-form-factor space; you'll pay more than $500 for better performance).

Apple managed to kick dGPUs out of the bottom half of the Mac lineup with little to no complaint. (The Intel-era Mac mini's GPU versus the M-series mini... is anyone complaining about the GPU performance?) The mini and "mini Pro" are vastly better systems now than before, and the MBPs saw much the same improvements.

The "box with slots" crowd isn't happy, but that hasn't been the vast bulk of what Apple sells for more than a decade.


Third, the A19 iteration added tensor-like AI processors to the GPU cores. Apple is making steady, incremental progress with each iteration. If Intel had been making steady, regular, incremental improvements, they would be in much better shape. Instead, Intel spent years alternating between 'swing for the fences' and 'shotgun, shoot everything at once' approaches, which largely landed them in the mess they are in.

The large 'missing' AI piece for Apple is more software than hardware, and Nvidia doesn't really bring that missing software piece in and of themselves in the inference space. AI inference is where Apple either is or is not going to make money.

Nvidia's major revenue comes from maximum-power-consumption parts for the mega cloud data centers. That isn't aligned with Apple's general product strategy at all: local power versus 'omnipotent' cloud power. (Intel, by contrast, would be more than happy to go back to printing profits off data center part sales.)
 
....

As to Apple partnering with Nvidia... what would be the point? That would maybe make some sense for products like Mac Pro or (to a lesser degree) Studio, and it would entirely mess up Apple's GPU software strategy. I mean, if Apple were willing to use 100+ watt GPUs in their computers, why not just ship larger and hotter Apple GPUs?

The part of the AI bubble that looks likely to burst soonest is the notion of 'power consumption at any level'. Most of the competing projects to build power-efficient AI hardware have been defunded to chase the mania of building the biggest power-consuming data centers possible. There are only so many power plants out there; multiple competitors all trying to suck the power grid dry is likely going to become a scaling problem.

Apple's GPUs/CPUs really aren't designed for maximum clocks at any cost, so a 'hotter' GPU would largely mean a larger GPU, and a larger GPU means a more expensive GPU. It isn't really about Apple's willingness; it is far more about customers' willingness to buy more expensive GPUs, from Apple. Is any hyperscaler going to trust Apple to deliver in the data center space? No. Is the 'expense is no object' Apple fan customer base substantively large (for a company the size of Apple)? No (e.g., Vision Pro).
 
No. Intel's move is based on being in an extremely weak position. Apple isn't in one.

Nvidia has some impediments as well.

" Between Qualcomm's range of Snapdragon X chips and a pair of N1 processors, consumers would have a wide range to choose from.....it was believed that millions of N1X chips would ship in Q4 2025 and that millions of N1 processors would follow in 2026. ...There are also rumors that the chips will have a TDP of 80W or 120W. ..."

" ... but also that this delay might be at least in part due to slow progress with Microsoft's "next-generation operating system. ... "
https://www.pcgamer.com/hardware/pr...t-sorting-out-its-next-gen-os-quickly-enough/


Windows 10's install base still being larger than Windows 11's is likely contributing to the 'next gen' Windows problem. (The free extension of W10 support for users who back their settings up to MS's cloud largely kicks the can down the road for another year.) MS can't really get to 12 if they haven't really retired 10.

If the transition is going to take longer, Intel+Nvidia GPU packages can serve as an intermediate bridge until MS gets their mess sorted out. Plus, 'MediaTek' packages made in Taiwan aren't going to sell well at the White House for the next couple of years.

As with the data center CPU, Nvidia has every intention of crushing x86 over the long term; they have already made those investments. (MediaTek was likely also queued up to be steamrolled eventually, even if Windows on Arm had made more significant progress.) Intel has more and deeper experience working with MS to get a 'next gen' Windows out the door. Nvidia is just playing both paths by pushing Intel's GPU out of more Intel x86 Windows deployments on the next-gen OS. (x86 Windows isn't going to implode very fast. The Intel-Nvidia chip may not arrive until after next-gen Windows ships, but much of the installed base isn't going away.)


Why would Apple funnel lots of money to Nvidia so a much healthier Nvidia+Windows can run over the Mac product line in the future?

Apple has formally dumped x86 support. They own both sides (OS + hardware) and have relatively minimal coordination problems. Nvidia isn't going to help Apple dump x86 faster. macOS isn't comatose (although Liquid Glass probably won't break any user adoption records, it isn't shaping up to be a W10-like problem either). Mac sales are not dropping. Nvidia propping up x86 for longer will help Mac sales as much as it hurts them, because Nvidia is saddling themselves with x86 (which, from a CPU perspective, the M-series has had no major competitive problems with at all).
 
You can expect a 30-39% GPU uplift for the M5 and M5 Max, based on the increase from the A19 and A19 Pro. The M5 Max would beat the M3 Ultra in GPU performance, and a conceivable M5 Ultra should be on par with Nvidia's top GPUs.
 
You can expect a 30-39% GPU uplift for the M5 and M5 Max, based on the increase from the A19 and A19 Pro. The M5 Max would beat the M3 Ultra in GPU performance, and a conceivable M5 Ultra should be on par with Nvidia's top GPUs.

Don't forget that a lot of A19 GPU performance is due to increased FP16 capability. That won't translate to all workflows. But it could be a nice boost for gaming and image processing provided the software is optimized for half precision computations.

P.S. We also see a significant boost in RT-heavy gaming benchmarks, and it remains to be seen whether it's because of FP16 or something else.
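
To show what "optimized for half precision" means in practice, here is a minimal Swift sketch (my own illustration, not Apple's code). Float16 halves the bytes moved, which is what doubled-rate FP16 hardware exploits, but naive accumulation in half precision visibly drifts:

```swift
// Float16 is a native Swift type on Apple Silicon. Half precision halves
// memory traffic, but a naive Float16 running sum loses accuracy once the
// accumulator grows large relative to the increments.
let values32: [Float] = (0..<10_000).map { Float($0 % 100) / 100.0 }
let values16: [Float16] = values32.map(Float16.init)

let sum32 = values32.reduce(0, +)            // ~4950, close to exact
let sum16 = values16.reduce(Float16(0), +)   // noticeably off: FP16 spacing near 4950 is ~4

print("FP32 sum: \(sum32), buffer: \(values32.count * MemoryLayout<Float>.size) bytes")
print("FP16 sum: \(sum16), buffer: \(values16.count * MemoryLayout<Float16>.size) bytes")
```

Production code would typically do the arithmetic in half precision while accumulating in FP32, which is why the gains depend on the software being written for it.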
 
Now that Intel will be producing chips with Nvidia graphics built in, should Apple consider burying the hatchet with Nvidia and incorporating their GPUs into Apple Silicon? While Apple has been successful at designing powerful CPUs, they have had less success with GPUs, which are increasingly important for AI.

Not gonna happen.
 
Here comes 20 pages of random Apple vs Nvidia vs Intel vs AMD stuff.

The answer is no. Nvidia's strategy is to partner with MediaTek, Intel, and maybe future Samsung/Qualcomm SoCs to get GeForce into as many computers as possible. This is all in response to Apple's SoC entry into the laptop and desktop world.

I think this is interesting because it signals that Nvidia does not plan to make its own full SoC for consumers. It prefers to partner with companies like MediaTek and Intel, playing for GPUs something like the role Arm plays for CPUs.

Had Nvidia acquired Arm, I think they would have created their own full SoC.
 
I can't see why Nvidia graphics cards couldn't work in current Mac Pros or in Thunderbolt 5 docks.

Would be useful for AI and CUDA workloads in certain applications.

Probably won't ever happen, but wish it would
 
I can't see why Nvidia graphics cards couldn't work in current Mac Pros or in Thunderbolt 5 docks.

Would be useful for AI and CUDA workloads in certain applications.

Probably won't ever happen, but wish it would

I think it is technically possible to write a driver using the standard interfaces. I remember there was a potential incompatibility because Nvidia drivers rely on a specific memory mapping mode that is not supported on Apple Silicon (for security reasons, I assume), but that sounded like something that can be worked around.
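
As a toy illustration of those standard interfaces from user space, here is a small Swift sketch (my own example, not an actual GPU driver) that walks the I/O Registry for PCI devices; a hypothetical third-party driver would have to match entries like these before it ever got to the memory-mapping question:

```swift
import IOKit
import Foundation

// Enumerate PCI devices (including Thunderbolt-attached ones) in the
// I/O Registry and print their names. A real driver would match one of
// these entries and then negotiate memory mappings, which is where the
// reported incompatibility with Nvidia's expected mapping mode would bite.
var iterator: io_iterator_t = 0
let matching = IOServiceMatching("IOPCIDevice")
guard IOServiceGetMatchingServices(kIOMainPortDefault, matching, &iterator) == KERN_SUCCESS else {
    fatalError("could not query the I/O Registry")
}

var entry = IOIteratorNext(iterator)
while entry != 0 {
    var name = [CChar](repeating: 0, count: 128)  // io_name_t is 128 bytes
    if IORegistryEntryGetName(entry, &name) == KERN_SUCCESS {
        print(String(cString: name))
    }
    IOObjectRelease(entry)
    entry = IOIteratorNext(iterator)
}
IOObjectRelease(iterator)
```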
 