
Now that Intel will be producing chips with Nvidia graphics built in, should Apple consider burying the hatchet with Nvidia and incorporating their GPUs into Apple Silicon? While Apple has been successful at designing powerful CPUs, they have had less success with GPUs, which are increasingly important for AI.
 
Intel partnering with Nvidia is not as new as you think. Nvidia previously built Intel chipsets with integrated graphics (9400M, which was used in a MacBook a while ago), and Intel+AMD partnered to build a combined APU with HBM memory (which was not successful, contrary to expectations).

As to Apple partnering with Nvidia... what would be the point? That would maybe make some sense for products like Mac Pro or (to a lesser degree) Studio, and it would entirely mess up Apple's GPU software strategy. I mean, if Apple were willing to use 100+ watt GPUs in their computers, why not just ship larger and hotter Apple GPUs? For AI, Apple already ships more memory than Nvidia's customer solutions (except the low volume Spark), and while Apple indeed is behind in ML acceleration, they are pouring a lot of effort into it. Based on the A19 announcement, I'd expect the M5 Max to be close to RTX 5070 for AI on some workflows, with more performance and features sure to come later.
 
Now that Intel will be producing chips with Nvidia graphics built in, should Apple consider burying the hatchet with Nvidia and incorporating their GPUs into Apple Silicon? While Apple has been successful at designing powerful CPUs, they have had less success with GPUs, which are increasingly important for AI.

No. Intel's move is based upon being in an extremely weak position. Apple isn't in one.

First, Nvidia has no solution for the bulk of the Apple lineup (iPhone/iPad), which means Apple would have to run dual-track GPUs. Intel, by contrast, is substantively behind on GPUs for the bulk of their lineup; they are not walking away from a dominant position. Intel used to ship vastly higher quantities of GPUs than Nvidia. They bungled that, so now they are waving the white flag.

Similarly, it would make zero sense for Nvidia to plow several billion dollars into Apple. Apple has money. Intel is the one sliding toward the position AMD was in several years ago, wondering if it would be able to keep paying the light bill. Intel needs the money, badly. (Nvidia is in part propping up the US minority stake that the current administration tossed into the company, hence the 'free pass' from the FTC/DoJ on this deal. Slim chance it gets 'fast-tracked' in China, though. There is zero US taxpayer money invested in Apple, so an Nvidia-Apple move would be unlikely to get a free pass from the FTC/DoJ, and Apple has enough problems with the government on its tail already.)

Pretty good chance in a couple of years Intel will be quoting Lando Calrissian: "This deal is getting worse all the time." Nvidia isn't quitting their Arm/GPU SoCs for the data center at all. This is likely one of those "embrace, extend, extinguish" moves: Nvidia is just navigating the transition until they can nuke the x86 cores once they have made deeper inroads into the data center market. (Arm in the data center is in the 15-25% range now. If that gets to 50-50, what value will Intel bring to the table? Especially if AMD has 25-40% of that x86 half?)

Intel has no answer for Strix Halo, and it substantively looks like they won't have an answer for Medusa Halo (Zen 6 + RDNA 5) either. The Mn Max is in decent competitive shape versus AMD or this new offering in the laptop space if battery life matters at all. There are rumors Intel will try to combine a much larger GPU chiplet with the CPU, but that is mainly just plain 'catch up' to what AMD has already delivered.

This puts Nvidia in less of a trailing position versus AMD when it comes to fusing an x86 CPU + GPU into data center packages. Apple isn't particularly trying to sell a commercial data center product, so it really doesn't matter there at all. Intel is going to have trouble holding onto the AI hardware team they have for the data center after this deal, if it doesn't kill that team outright.

Intel is likely to end up with a backsliding iGPU group at the end of this. It is a short-term fix with little long-term upside. If the manic AI hardware bubble collapses, Nvidia will dump this project relatively quickly (long term, this isn't strategic for Nvidia).


Second, the notion that Apple has not had success in GPUs for most of the products they make is myopic at best. "If it didn't get the hottest, super-hyped AAA game port, then it's a fail" is an extremely poor metric. Are the phones relatively underpowered GPU-wise versus the competition? Nope. iPads? Nope. Untethered headset? Nope. Apple TV? Nope. MacBook Air? Nope. Mac Mini (at its current sale price, often in the $500 range)? Nope (go look at the Windows Copilot+-capable small-form-factor space; you'll be paying more than $500 for better performance).

Apple managed to kick dGPUs out of the bottom half of the Mac lineup with little to no complaint. (Compare the Intel Mac Mini's GPU solution to the M-series Mini... are folks complaining about the GPU performance?) The Mini and "Mini Pro" are vastly better systems now than before. The MBPs saw much the same improvements.

The "box with slots" crowd isn't happy, but that hasn't been the vast bulk of what Apple sells for more than a decade.


Third, the A19 iteration added tensor-like AI units to the GPU cores. Apple is incrementally making steady progress with each iteration. If Intel had been doing steady, regular incremental improvements, they would be in much better shape. Intel tried both 'swing for the fences' and 'shotgun, shoot everything at once' approaches for years, which largely landed them in the mess they are in.

The large 'missing' AI piece for Apple is more software than hardware. Nvidia doesn't really bring that missing software piece in and of themselves in the inference space, and AI inference is where Apple either is or is not going to make money.

Nvidia's major revenue comes from maximum-power-consumption parts in the mega cloud data centers. That isn't aligned with Apple's general product strategy at all: local compute versus 'omnipotent' cloud compute. (Intel, by contrast, would be more than happy to go back to printing profits off data center parts sales.)
 
....

As to Apple partnering with Nvidia... what would be the point? That would maybe make some sense for products like Mac Pro or (to a lesser degree) Studio, and it would entirely mess up Apple's GPU software strategy. I mean, if Apple were willing to use 100+ watt GPUs in their computers, why not just ship larger and hotter Apple GPUs?

The part of the AI bubble that appears likely to bust soonest is the notion of 'power consumption at any level'. Most of the competitive projects to build power-efficient AI hardware have been defunded to chase the mania for building the biggest power-consuming data centers possible. There are only so many power plants out there; multiple competitors all trying to suck the power grid dry is likely going to be a 'scale' problem.

Apple's GPUs/CPUs really aren't designed to chase maximum clocks at any cost, so a 'hotter' GPU would largely mean a 'larger' GPU, and a larger GPU means a more expensive GPU. It isn't really about Apple's willingness; it's far more about customers' willingness to buy more expensive GPUs from Apple. Is any hyperscaler going to trust Apple to deliver in the data center space? No. Is the "expense is no object" Apple fan customer base substantively large (for a company the size of Apple)? No (e.g., Vision Pro).
 
No. Intel's move is based upon being in an extremely weak position. Apple isn't in one.

Nvidia has some impediments as well.

" Between Qualcomm's range of Snapdragon X chips and a pair of N1 processors, consumers would have a wide range to choose from.....it was believed that millions of N1X chips would ship in Q4 2025 and that millions of N1 processors would follow in 2026. ...There are also rumors that the chips will have a TDP of 80W or 120W. ..."

" ... but also that this delay might be at least in part due to slow progress with Microsoft's "next-generation operating system. ... "
https://www.pcgamer.com/hardware/pr...t-sorting-out-its-next-gen-os-quickly-enough/


Windows 10 still having a larger install base than Windows 11 is likely contributing to the 'next gen' Windows problem. (The free extension of W10 support for users who back up their settings to the MS cloud is largely going to kick the can down the road for another year.) MS can't really get to 12 if they haven't really retired 10.

If the transition is going to take longer, then Intel+Nvidia GPU packages can serve as an intermediate bridge until MS gets their mess sorted out. Plus, 'MediaTek' packages made in Taiwan aren't going to sell well at the White House for the next couple of years.

As with the data center CPU, Nvidia has every intention of crushing x86 over the long term; they have already made those investments. (MediaTek was likely also being queued up to be steamrolled eventually, even if Windows on Arm had made more significant progress.) Intel has more and deeper experience working with MS to get 'next gen' Windows out the door. Nvidia is just playing both paths by pushing Intel's GPU out of more Intel x86 Windows deployments on the next-gen OS. (x86 Windows isn't going to implode very fast. The Intel-Nvidia chip may not arrive until after next-gen Windows ships, but much of the installed base isn't going away.)


Why would Apple funnel lots of money to Nvidia so a much healthier Nvidia+Windows can run over the Mac product line in the future?

Apple has already dumped x86 support. They own both sides (OS + hardware) and have relatively minimal coordination problems. Nvidia isn't going to help Apple dump x86 faster. macOS isn't comatose (although Liquid Glass probably won't break any user adoption-rate records, it won't be a W10-style holdout problem either). Mac sales are not dropping. Nvidia propping up x86 for longer will help Mac sales about as much as it hurts them, because Nvidia is saddling themselves with x86 (which, from a CPU perspective, the M-series has had no major competitive problems with at all).
 
You can expect a 30-39% increase for the M5 and M5 Max based off the increase from the A19 and A19 Pro. The M5 Max would beat out the M3 Ultra in GPU performance, and a conceivable M5 Ultra should be on par with Nvidia's top GPUs.
 
You can expect a 30-39% increase for the M5 and M5 Max based off the increase from the A19 and A19 Pro. The M5 Max would beat out the M3 Ultra in GPU performance, and a conceivable M5 Ultra should be on par with Nvidia's top GPUs.

Don't forget that a lot of A19 GPU performance is due to increased FP16 capability. That won't translate to all workflows. But it could be a nice boost for gaming and image processing provided the software is optimized for half precision computations.

P.S. We also see a significant boost in RT-heavy gaming benchmarks, and it remains to be seen whether it's because of FP16 or something else.
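On the half-precision point, here's a minimal Swift sketch of what "optimized for half precision computations" amounts to in code. The array size is an arbitrary illustration of mine, and this only shows the storage side; the throughput win comes from GPU kernels written against half-precision types:

```swift
import Foundation

// Minimal illustration of single vs. half precision storage.
// Float16 is available in Swift on Apple Silicon (arm64 only).
let singles: [Float] = (0..<1_000_000).map { Float($0) * 0.001 }

// Converting to half precision halves the memory footprint, and GPU
// kernels written against half types can run at a higher rate on
// hardware with dedicated FP16 throughput.
let halves: [Float16] = singles.map { Float16($0) }

print("Float storage:   \(singles.count * MemoryLayout<Float>.stride) bytes")
print("Float16 storage: \(halves.count * MemoryLayout<Float16>.stride) bytes")

// The tradeoff: roughly 3 decimal digits of precision and a max value of
// 65504, fine for many graphics/image tasks but not for every workflow.
print("Float16 max:", Float16.greatestFiniteMagnitude)
```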
 
Now that Intel will be producing chips with Nvidia graphics built in, should Apple consider burying the hatchet with Nvidia and incorporating their GPUs into Apple Silicon? While Apple has been successful at designing powerful CPUs, they have had less success with GPUs, which are increasingly important for AI.

Not gonna happen.
 
Here comes 20 pages of random Apple vs Nvidia vs Intel vs AMD stuff.

Answer is no. Nvidia's strategy is to partner with Mediatek, Intel, and maybe future Samsung/Qualcomm SoCs to get GeForce into as many computers as possible. This is all in response to Apple's SoC entry into the laptop and desktop world.

I think this is interesting because it signals that Nvidia does not plan to make its own full SoC for consumers. It prefers to partner with companies like Mediatek and Intel, sort of like an Arm-style IP vendor, but for GPUs.

Had Nvidia acquired Arm, I think they would have created their own full SoC.
 
I can't see why Nvidia graphics cards can't work in current Mac Pros or in Thunderbolt 5 docks.

Would be useful for AI and CUDA workloads in certain applications.

Probably won't ever happen, but wish it would
 
I can't see why Nvidia graphics cards can't work in current Mac Pros or in Thunderbolt 5 docks.

Would be useful for AI and CUDA workloads in certain applications.

Probably won't ever happen, but wish it would

I think it is technically possible to write a driver using the standard interfaces. I remember there was a potential incompatibility because Nvidia drivers rely on a specific memory mapping mode that is not supported on Apple Silicon (for security reasons, I assume), but that sounded like something that can be worked around.
 
I think it is technically possible to write a driver using the standard interfaces. I remember there was a potential incompatibility because Nvidia drivers rely on a specific memory mapping mode that is not supported on Apple Silicon (for security reasons, I assume), but that sounded like something that can be worked around.

Everything can be worked around!
 
I think it is technically possible to write a driver using the standard interfaces. I remember there was a potential incompatibility because Nvidia drivers rely on a specific memory mapping mode that is not supported on Apple Silicon (for security reasons, I assume), but that sounded like something that can be worked around.
Apple is using UMA (Unified Memory Architecture), in that the memory is shared between CPU and GPU, whereas Nvidia uses separate VRAM for the GPU, so this would have to break the UMA model that is at the heart of the Apple Silicon/macOS path.

I'm sure it could be done, but at the cost of a fundamental choice Apple has made for the Mac platform with Apple Silicon.

Also, Apple is doing a lot of what Nvidia does on the GPU with the Neural Engine and Media Engine parts of the SoC, so I'm not sure why Apple would want more than the basic graphics cores and ditch the extra parts of a modern GPU.
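To make the UMA point concrete, here's a rough Swift/Metal sketch (buffer size and names are my own, purely for illustration): one shared allocation is visible to both the CPU and the GPU, which is exactly the model a discrete card with its own VRAM doesn't fit.

```swift
import Metal

// What "unified memory" means in practice on Apple Silicon: a single
// MTLBuffer with .storageModeShared is visible to both the CPU and the
// GPU, with no explicit copy to "VRAM" because there is no separate VRAM.
guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("No Metal device available")
}

let count = 1024
let buffer = device.makeBuffer(length: count * MemoryLayout<Float>.stride,
                               options: .storageModeShared)!

// The CPU writes directly into the same allocation a GPU kernel would read.
let ptr = buffer.contents().bindMemory(to: Float.self, capacity: count)
for i in 0..<count { ptr[i] = Float(i) }

// A discrete Nvidia card would instead need its data staged into its own
// VRAM by the driver, which is the part that clashes with the UMA-centric
// design of Apple Silicon and Metal.
print("Unified memory on \(device.name): \(device.hasUnifiedMemory)")
```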
 
Apple is using UMA (Unified Memory Architecture), in that the memory is shared between CPU and GPU, whereas Nvidia uses separate VRAM for the GPU, so this would have to break the UMA model that is at the heart of the Apple Silicon/macOS path.

I'm sure it could be done, but at the cost of a fundamental choice Apple has made for the Mac platform with Apple Silicon.

You are preaching to the choir, friend. That's not what I meant at all — my message was that if someone is interested in running Nvidia GPUs on Apple Silicon, they can probably write the drivers themselves.

Also, Apple is doing a lot of what Nvidia does on the GPU with the Neural Engine and Media Engine parts of the SoC, so I'm not sure why Apple would want more than the basic graphics cores and ditch the extra parts of a modern GPU.

Apple literally just introduced machine learning units on the GPU, which fulfill the same role as Nvidia's tensor cores. And it makes total sense due to how GPUs work.
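For context, the workload those units (and Nvidia's tensor cores) exist to accelerate is dense matrix multiplication. Here's a rough Swift sketch of dispatching one through Metal Performance Shaders; the sizes and float32 choice are arbitrary picks of mine, and whether it lands on the new GPU ML units or the regular shader ALUs is up to the hardware and driver:

```swift
import Metal
import MetalPerformanceShaders

// Rough sketch: C = A * B on the GPU via Metal Performance Shaders.
let device = MTLCreateSystemDefaultDevice()!
let queue = device.makeCommandQueue()!

let n = 256
let bytesPerRow = n * MemoryLayout<Float>.stride
let desc = MPSMatrixDescriptor(rows: n, columns: n,
                               rowBytes: bytesPerRow, dataType: .float32)

// Three n x n matrices backed by shared (unified) memory.
// Contents are left uninitialized; this only shows the dispatch path.
func makeMatrix() -> MPSMatrix {
    let buf = device.makeBuffer(length: n * bytesPerRow,
                                options: .storageModeShared)!
    return MPSMatrix(buffer: buf, descriptor: desc)
}
let a = makeMatrix(), b = makeMatrix(), c = makeMatrix()

let matmul = MPSMatrixMultiplication(device: device,
                                     transposeLeft: false, transposeRight: false,
                                     resultRows: n, resultColumns: n,
                                     interiorColumns: n, alpha: 1.0, beta: 0.0)

let cmd = queue.makeCommandBuffer()!
matmul.encode(commandBuffer: cmd, leftMatrix: a, rightMatrix: b, resultMatrix: c)
cmd.commit()
cmd.waitUntilCompleted()
print("Matmul finished on \(device.name)")
```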
 
I can't see why Nvidia graphics cards can't work in current Mac Pros or in Thunderbolt 5 docks.

Would be useful for AI and CUDA workloads in certain applications.

Probably won't ever happen, but wish it would
I've seen someone get it working, but haven't had the chance to look into it too deeply.

 
Nvidia would never agree if they were smart.

Look at everyone else who partnered up with Apple. They got designed out by Apple's in-house team. Samsung, Imagination, Intel, Qualcomm, Broadcom.
 
Nvidia would never agree if they were smart.

It was never solely Nvidia's decision to make. Nvidia threw gas on the bridge with Apple and blew it up two or three times. It's not like Nvidia has been winning 'product partner of the year' awards from the system vendors they work with, either.


Look at everyone else who partnered up with Apple. They got designed out by Apple's in-house team. Samsung,

Apple still uses Samsung. Samsung also messed up on their end. First, the fab 'division' both designed SoCs and fabricated parts. Lots of players are leery about dealing with Samsung because the details of the products you want fabbed (or want Samsung parts for) tend to 'leak' over to the other parts of the conglomerate, and then 'ta-da', you are looking at a duplicate of your product as market competition.

Intel had much the same issue: while they paid lip service to doing general fab work for a long while, the fab and chip design sides were so incestuously in bed with one another... who is going to trust them?

Second, while Samsung was ahead of TSMC for a while, it was a bit of a tortoise-and-hare story. TSMC kept plugging along, while Samsung's progress is quirky: once they get it right they can scale a process to volume well, but with the wrong leaders in the wrong spots they have problems. If Samsung's fabs were still a year ahead of TSMC, maybe Apple would have stayed.

Samsung's displays continue to win Apple's business because they are often just better than all of the alternatives. Apple will pick whoever is simply better than everyone else, even if they don't like them.


Imagination,

Pretty good chance this primarily came down to Imagination not moving fast enough for Apple. Imagination wanted a more stable ecosystem of licensees (Intel, MediaTek, etc.). The other problem is that they were small enough that someone could have bought them and taken them off the table (e.g., Intel, Broadcom, Arm, etc.).
I think it was after Apple 'left', but Imagination at one point bought MIPS.

[In an alternate timeline where an Arm + Imagination combination existed, Apple might have stuck with them longer. But the two had mostly settled into being 'competitors' by the time Apple came looking for options in the 2000s.]



Intel,

The pinnacle of efficient, consistent execution during the 2010-2020 decade? *cough* Not! If Intel could have delivered something like Meteor Lake 2-3 years earlier, killed legacy BIOS about 5 years earlier, not screwed up their fab process, and evolved UEFI more securely than they did... they might have held the contract longer.


Qualcomm,

Apple thought Qualcomm was too greedy; it's not that the products didn't perform technically. If Qualcomm had better pricing, Apple probably would never have bought the Intel modem business.


Broadcom.

Reportedly Apple is sticking with Broadcom for a data center I/O solution. But for other subsystems, yes. I don't think Broadcom excelled at the very low-power options.
 
The M chips target replacing Nvidia in all aspects, including gaming and AI, so how could Apple partner with Nvidia?
 
The M chips target replacing Nvidia in all aspects, including gaming and AI,

They do not. There is little indication that Apple has an x090-class (4090, 5090, etc.) target objective. The far extreme end of what the Mac Pro used to cover is not an M-series objective.

Replace Nvidia completely in all aspects in non-heavyweight laptops? Yes. Across the entire previous product line? No. Apple has covered the vast bulk of Mac sales, but 'all' and 'everything' no.

There is also extremely little indication that Apple wants to be "king of large-scale AI training". (Even the rumored "AI servers" appear far more geared toward inference than training, even if they eventually get something that isn't tagged M-series.) Is Apple losing tons of sleep over the DGX Spark solution that recently shipped? Probably not, but that isn't the whole scope of Nvidia's offerings. In some sense, that is more Nvidia responding to Apple than Apple completely targeting Nvidia.

So Apple isn't foolishly trying to attack every single niche that Nvidia is in. Nvidia is going to be far more willing to sacrifice perf/watt, cost, etc. just to 'win' the king-of-the-mountain gaming dGPU crown each generation. (One contributing reason AMD just lets them have it periodically: 'win at any cost' doesn't really make sense if you're not almost a monopoly in that segment.)


so how could Apple partner with Nvidia?

There are a number of people who want Apple to build 'everything': every single niche of the computing market that Apple has ever touched, they must touch forever.

There is another set of people who are Nvidia fanboys who think everyone (including Apple) should just bow and submit to Nvidia's rule of GPU land. Inviting them into the discussion churns the thread far more than it changes Apple's mind.
 
Now that Intel will be producing chips with Nvidia graphics built in, should Apple consider burying the hatchet with Nvidia and incorporating their GPUs into Apple Silicon? While Apple has been successful at designing powerful CPUs, they have had less success with GPUs, which are increasingly important for AI.

Reportedly, Nvidia is 'making' Intel buy into Nvidia's interconnect to do this. Just look at the MediaTek/Nvidia N1 chip:


[Image: Nvidia GB10 "superchip" diagram]


From the Nvidia press release:
"... The companies will focus on seamlessly connecting NVIDIA and Intel architectures using NVIDIA NVLink — integrating the strengths of NVIDIA’s AI and accelerated computing with Intel’s leading CPU technologies and x86 ecosystem to deliver cutting-edge solutions for customers. ..."

NVLINK-C2C
".. NVIDIA DGX Spark is a compact, personal AI supercomputer accelerated by the NVIDIA GB10 ..."

If Nvidia weaves a display controller onto the Intel die and Intel adopts C2C, then Nvidia can probably sell Intel the same iGPU chiplet it is selling to MediaTek, with no changes on the GPU side of the equation.


Intel's fabs working with Nvidia to map C2C onto Foveros would make all kinds of sense. [It sounds like the project was already in flight.] If Intel's fab wanted to pitch the MediaTek/Nvidia partnership on using Intel 2.5D packaging and 18A, the pieces would already be there. It is just a question of whether the PC product design group wanted to join in.


Apple giving up on their display controllers in addition to the GPU cores, and adopting whatever chiplet interface specification Nvidia comes up with? Probably not.

As for the Intel data center SoCs and native NVLink... that didn't work out so well for IBM (circa 2017):


"

With IBM POWER9, we're all riding the AI wave.

...

POWER9 will be the first commercial platform loaded with on-chip support for NVIDIA's next-generation NVLink, OpenCAPI 3.0 and PCI-Express 4.0. These technologies provide a giant hose to transfer data.
... "


It will help that x86-64 has a much deeper and broader data center footprint than Power did in the 2010s (or has now).
But it is also in deeper peril, because Nvidia has their own Arm solution they are selling against it (just like Nvidia had x86-64+Nvidia solutions to sell against POWER9).


Another issue is whether Nvidia is using C2C as a stopgap solution or as a way of blunting UCIe.


If it's a general interface where you can sub in whoever's GPU cores you want, that is a decent track for Intel to evolve down: if the Intel iGPU works, or the AMD iGPU, or the Nvidia iGPU... use it. If this instead bends multiple folks toward C2C as the standard GPU-chiplet path, then Intel is wading into a dangerous swamp.


P.S. As long as Intel is committed to 'lowest common denominator' memory channel support for this, I really don't think it is going to be all that competitive with what Apple is doing. It will be a bit behind the curve on memory bandwidth, and sprinkling 'Nvidia' around as a magic word to misdirect from that won't help as much as they think it will in the Windows context. I suspect it will still fall short of the AMD 'Halo' APUs in perf/watt and on several tasks.
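Back-of-the-envelope on why the bandwidth point matters (the bus widths and transfer rate below are illustrative assumptions of mine, not leaked specs): bandwidth scales with bus width times transfer rate, so a conventional 128-bit memory setup simply can't reach what a 512-bit-class part delivers at the same LPDDR speed.

```swift
// Back-of-the-envelope memory bandwidth; numbers are purely illustrative.
// bandwidth (GB/s) ≈ busWidthBits / 8 * transferRateGTps
func bandwidthGBps(busWidthBits: Double, transferRateGTps: Double) -> Double {
    return busWidthBits / 8.0 * transferRateGTps
}

// Hypothetical configs: a conventional 128-bit LPDDR5X setup versus a
// wide 512-bit bus of the kind Apple uses on its Max-class parts.
print(bandwidthGBps(busWidthBits: 128, transferRateGTps: 8.533))  // ≈ 136 GB/s
print(bandwidthGBps(busWidthBits: 512, transferRateGTps: 8.533))  // ≈ 546 GB/s
```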

The data center case is cleaner. Intel has several accelerators on their SoCs that have questionable utility for most customers. Dumping the die area assigned to those and switching to C2C probably has more widespread utility [and gives them a cover story as to why the accelerators are going away]. A higher mark-up on the SoC will also help offset Nvidia probably walking away with more than their fair share of the profits (Intel is going to have to 'pay' to use their tech).
 