We still have a surprisingly large crowd that insists that Boot Camp on Apple Silicon Macs would be trivial for Apple to do.
I don't know about trivial, but Federighi's own statement to Ars Technica (see below) implies it wouldn't be an issue for Apple to get that working. He says the decision falls upon MS. So if Federighi is being accurate, then the 'crowd's' misunderstanding is not about how much work Apple would have to do, but rather about who needs to do the work, and who is putting up the barrier.

Of course, that could just be Federighi doing some technical posturing. It's possible the most accurate statement would be:

"It would be a lot of work, but this work could be done either by MS or Apple, and Apple isn't interested in doing it, but if MS is we'd be happy to make it happen."

If so, i.e., if this really is game-playing about who is responsible and the technical authorities aren't speaking straightforwardly, then I would say a lot of the fault for the public misunderstanding lies with those authorities rather than with the public.

Here's the history I've found of Federighi's statements about running ARM Windows natively on AS:


In June 2020, Federighi told The Verge this:

"We’re not direct booting an alternate operating system"
[Source: https://www.theverge.com/2020/6/24/21302213/apple-silicon-mac-arm-windows-support-boot-camp ]

But in Nov. 2020, in an interview with Ars Technica, Federighi updated his position to say, when it comes to running Windows natively on Apple Silicon:

"That's really up to Microsoft. We have the core technologies for them to do that, to run their ARM version of Windows, which in turn of course supports x86 user mode applications. But that's a decision Microsoft has to make, to bring to license that technology for users to run on these Macs. But the Macs are certainly very capable of it."
[Source: https://arstechnica.com/gadgets/202...ewing-apple-about-its-mac-silicon-revolution/ ]
 
While the November article certainly says that this quote is in reference to native Windows on AS Macs, I don't think it actually was. Everything leading up to that point was about virtualization, Microsoft hadn't yet released a version of ARM Windows whose license allowed for installation, and it's the same language Apple used to describe their new virtualization framework in June and even right before your pull quote from November.

We asked what an Apple Silicon workflow will look like for a technologist who lives in multiple operating systems simultaneously. Federighi pointed out that the M1 Macs do use a virtualization framework that supports products like Parallels or VMWare, but he acknowledged that these would typically virtualize other ARM operating systems.

"For instance, running ARM Linux of many vintages runs great in virtualization on these Macs. Those in turn often have a user mode x86 emulation in the same way that Rosetta does, running on our kernel in macOS," he explained.

While running Linux is important for many, other users are asking about Windows. Federighi pointed to Windows in the cloud as a possible solution and mentioned CrossOver, which is capable of "running both 32- and 64-bit x86 Windows binaries under a sort of WINE-like emulation layer on these systems." But CrossOver's emulation approach is not as consistent as what we've enjoyed in virtualization software like Parallels or VMWare on Intel Macs, so there may still be hills to climb ahead.
This is the series of paragraphs immediately before that quote in the article. Note how they say CrossOver and the cloud options will work, and Linux virtualization will work, but we're not sure whether Parallels or VMware will work? Then suddenly the article shifts to native Windows booting?

Then also note that the main thing Federighi says MS has to do is "to bring to license that technology for users to run on these Macs". That's true for virtualization, not for direct boot. The problem comes down to the fact that simply having an ARM-ISA core is not enough to get an ARM-compatible OS to boot on Apple Silicon. MS is not an open-source hacker project; they are not going to reverse engineer Apple Silicon to get Windows working natively. That would mean Apple and Microsoft would have to work together. And that's just the CPU, to say nothing of writing DirectX drivers for Apple's GPU. Is all of this doable? Of course. But it's a lot of work, and you'd have to have someone, preferably both parties, taking responsibility for support, which is another mountain of work. It also means that, fundamentally, it was never a case of MS simply providing Windows to people while Apple had everything ready for them.

The June quote is almost certainly more accurate, with the caveat that Apple did help the Asahi team (not direct support, but making changes to the OS to aid in its development). However, the manner of the help they provided shows that, especially in 2020, Apple did not in fact have the ability to direct boot another OS on Apple Silicon initially; that came in later OS updates, after the Asahi project was well underway. So that's more evidence that Federighi was talking about virtualization, not direct boot, because at the time Apple did not yet have direct-boot tech for multiple OSes on AS.

As such, when considering this quote in that context:

"We have the core technologies for them to do that, to run their ARM version of Windows, which in turn of course supports x86 user mode applications. But that's a decision Microsoft has to make, to bring to license that technology for users to run on these Macs. But the Macs are certainly very capable of it."

The above is exactly true for virtualization. Apple built the core technology to run Windows virtualized, which would support MS's x86 user-mode applications to the extent MS supported them on ARM Windows, and all MS needed to do was decide to let users buy a license to install ARM Windows, which MS eventually did. The article's assertion that the quote refers to running Windows natively notwithstanding, none of that was true for direct boot, and it still isn't for Windows. Thus, in the context of the actual tech that was available, and even of the article itself, I think there was a miscommunication: either the Ars writer misunderstood the answer or Federighi misunderstood the question (we don't see the specific question that was asked or how it was worded).
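To make it concrete what that "core technology" looks like from a developer's seat, here's a minimal sketch (my own illustration, nothing Federighi or Ars published) of booting a generic ARM guest through Apple's Virtualization framework, the macOS-side plumbing the Ars excerpt above alludes to. The disk image path, NVRAM path, and sizes are hypothetical placeholders, and the generic-platform/EFI pieces assume macOS 13 or later:

```swift
import Foundation
import Virtualization

// A minimal, hypothetical configuration. Paths are placeholders; the host app
// also needs the com.apple.security.virtualization entitlement to run this.
func makeGuestConfiguration() throws -> VZVirtualMachineConfiguration {
    let diskURL  = URL(fileURLWithPath: "/path/to/guest-arm64.img")  // hypothetical ARM64 guest disk image
    let nvramURL = URL(fileURLWithPath: "/path/to/guest-efi-vars")   // hypothetical EFI variable store location

    let config = VZVirtualMachineConfiguration()
    config.cpuCount   = 4
    config.memorySize = 8 * 1024 * 1024 * 1024  // 8 GiB

    // Generic ARM platform + EFI firmware (macOS 13+): the piece that presents
    // an ordinary ARM machine to whatever OS is on the guest disk.
    config.platform = VZGenericPlatformConfiguration()
    let efi = VZEFIBootLoader()
    efi.variableStore = try VZEFIVariableStore(creatingVariableStoreAt: nvramURL)
    config.bootLoader = efi

    // Expose the guest image as a virtio block device.
    let disk = try VZDiskImageStorageDeviceAttachment(url: diskURL, readOnly: false)
    config.storageDevices = [VZVirtioBlockDeviceConfiguration(attachment: disk)]

    try config.validate()
    return config
}

do {
    let vm = VZVirtualMachine(configuration: try makeGuestConfiguration())  // uses the main queue by default
    vm.start { result in
        switch result {
        case .success:
            print("guest started")
        case .failure(let error):
            print("guest failed to start: \(error)")
        }
    }
    RunLoop.main.run()  // keep the process alive while the guest runs
} catch {
    print("VM setup failed: \(error)")
}
```

What the framework hands you is a generic ARM machine with EFI firmware; which OS sits on that disk image, and whether its license permits installing it there, is the part that was, and remains, Microsoft's call.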
 
Yes, what you wrote may be true. But that provides even stronger support for my principal point, which is that a lot of the confusion about technical issues in computing is due to poor and confusing communication by the technical sources.

More broadly, I'll add the following, which isn't directed towards you, but are just my general thoughts on the subject:

I find there is a significant qualitative difference between the ability of (most) natural scientists to explain science to those outside their field, and the ability of (most) computer scientists to do the same.

Natural scientists, especially those that communicate with the public, are often from academia, and have years of teaching experience. They're thus experts at knowing how to communicate with those who know less than they do. By contrast, the overwhelming majority of computer scientists who communicate with the public are from industry, and have little actual teaching experience.

Further, when a good teacher answers a question and their audience doesn't understand, the first thing the teacher does is to look to themselves, thinking they need to provide a better answer. In particular, they'll try to identify the exact source of the confusion, and custom-craft an answer to address it. When someone who's not a teacher provides an answer the questioner doesn't understand, often the first thing they do is to blame the questioner.

Then we have to add the profit motive to all of this. Academics, by nature, want to communicate clearly. It's in their blood. The communications from industrial sources, by contrast, are often motivated by other considerations.
 
I strongly feel that Hector's comment on the eGPUs has been misinterpreted. I never felt that his message was an "emphatic yes". To me it was a technical commentary on a technical topic. Maybe Hector could have been more careful with his wording. But then maybe the gamer/Linux influencers could also be a bit more responsible about understanding the content. We are not politicians; I feel like we should not be under extreme scrutiny just because some influencers misunderstood or misrepresented what we say.
Ok, change "emphatic" yes to "unambiguous" yes. :) What he said hasn't really been misinterpreted. He intentionally gave an unambiguous Yes on the one hand. Then, as someone in a position to consider the best use of the resources available, he could also have given an unambiguous No on the other hand.

In this area specifically, if the goal was to avoid undue scrutiny, “No” does that in an easy and straightforward way. Being ambiguous? Well, being ambiguous does what being ambiguous does pretty much every time.
 
On whose responsibility Boot Camp is...

With Intel Macs, it was always Apple's, by Apple's choice. Intel Macintosh hardware and firmware were deliberately close enough to the generic x86 PC platform that Apple could do 100% of the work to make it boot Windows. That was the course set when they started serious work on the Intel transition, and once they shipped that way it always made sense to continue it. Platforms (CPU + firmware + a handful of low-level peripherals like the interrupt controller) tend to be very stable.

However, the Arm platform choices WoA is built on do not match Apple's. There was no widely adopted Arm platform standard when Apple first started work on iPhone, so Apple just did what everyone else was doing: they invented their own.

Over the years between then and 2020, the rest of the world slowly hammered out an industry standard Arm platform for personal computers. However, when Apple chose to make the Apple Silicon Mac, there was little chance of them adopting it. Remember what I said about platforms tending to stay stable? They didn't want to rock their own boat. Also, Apple's Arm platform does some things better, and they're things Apple really cares about, so even if they gave any thought to changing over I bet they viewed it as a downgrade.

So here we are. Windows isn't going to boot natively on Arm Mac without Microsoft taking an interest and doing some of the work. The only possible way Apple can do it themselves is to binary patch the Windows kernel and bootloader. That comes with both legal and practical difficulties, so Apple isn't going to do that. Hence the stance taken by Apple execs: they'd love to help Microsoft bring Windows to Arm Mac, if Microsoft is interested.
 
Yes, what you wrote may be true. But that provides even stronger support for my principal point, which is that a lot of the confusion about technical issues in computing is due to poor and confusing communication by the technical sources.
I don't blame anyone in this instance given the error, but I'm pretty sure this was just a mistake which was never corrected. While what you wrote may or may not be true, I'm not sure it applies here as I think it was simply a mistake which could have happened to anyone regardless of their explanatory skills or motivations.

Hence the stance taken by Apple execs: they'd love to help Microsoft bring Windows to Arm Mac, if Microsoft is interested.

I think the stance was garbled and he was talking about virtualization, because he was talking about how all MS needed to do was provide the license to users and Apple had already done the work to get it working, which, as you and I both said, is not true for native booting: both MS and Apple would need to make changes to ARM Windows, write drivers, etc. It was true for virtualization, though, and that is in fact what happened. In his earlier quote to Gruber regarding direct booting, it's clear Federighi is talking about direct booting, as opposed to the ambiguous "that" pronoun in the November quote, which Ars implies refers to native Windows on AS but which Federighi nowhere explicitly says. In the June quote, Federighi says explicitly no direct booting, and, for Windows at least, indeed there has been no native Windows on AS. Sure, there could be one day, but no one should hold their breath. Everything else you wrote, though, I agree with.
 
I don't know about trivial, but Federighi's own statement to Ars Technica (see below) implies it wouldn't be an issue for Apple to get that working.

I am quite confident that Federighi never talked about native booting. I feel that his statements have been consistently mischaracterized and taken out of context by the community. The context was Microsoft's licensing of Windows on ARM for virtualization purposes. That has since been addressed, and Windows runs perfectly fine on ARM Macs.
 
Apple silicon excels in power efficiency, and in giving you great performance that is not throttled even when your MacBook is not plugged in. That has always been Apple's value proposition right from the very start.



In the first video, the MSI Titan clearly smokes the MBP in a photography benchmark test, though it needs to be plugged in (the power cable is a massive brick). Also, the creator notes that the fans spin up noticeably loud during this time, and it also runs extremely hot. It is also a much bulkier and heavier device (also no surprise to anyone here).

When not plugged into power, the MSI Titan takes a noticeable performance hit, which is to be expected.

Does this make the RTX 5090 better? Depends on what you ultimately look for in a laptop, I suppose. Apple continues to provide a unique value proposition in that you get performance and power efficiency in one sleek and portable package. With Windows, compromises still abound.
 
Shame it’s an M3 instead of an M4, but interesting nonetheless. The thing that stood out the most is the awful setup times on the Windows laptop. What is taking so long?
 
Currently, Apple can't make an RTX 5090-grade chip because of its SoC design, which doesn't allow the GPU cores to be scaled up on their own; the only option is putting two Max chips together. Also, since Apple can't make a Mac Pro or workstation-grade GPU, they are suffering heavily in the AI competition: they are stuck with 50,000 GPUs that are more than five years old, and they rely on Amazon and Google rather than Nvidia GPUs.

This is a serious problem nonetheless, and Apple really needs to rethink how it designs Apple Silicon so that Ultra and Extreme chips can be built more efficiently. Power efficiency is not everything; we also need performance. Apple is not able to make an RTX 5090-class or workstation-grade GPU, and that limits their lineup.

Since there is a rumor that the M6 series will move to an MCM design, where smaller chips are designed separately and then combined, we could see a whole new kind of Apple Silicon chip. For now, though, the SoC approach only proves that it's good for laptops.
 

Don't forget that Apple is falling behind in the AI competition precisely because they don't have Apple Silicon chips strong enough to replace their 50,000 GPUs, which are more than five years old, while others are spending tons of money on Nvidia GPUs. Since Apple is not able to make a Mac Pro-grade CPU and GPU that could also be used for servers and supercomputers, it only proves that the SoC design is bad.
 

How do we know Apple doesn’t have any Blackwell DGX’s?
 

Don't forget that Apple is falling behind in the AI competition precisely because they don't have Apple Silicon chips strong enough to replace their 50,000 GPUs, which are more than five years old, while others are spending tons of money on Nvidia GPUs. Since Apple is not able to make a Mac Pro-grade CPU and GPU that could also be used for servers and supercomputers, it only proves that the SoC design is bad.
Apple falling behind in AI isn't about hardware. All that is server side.
 
I am quite confident that Federighi never talked about native booting. I feel that his statements have been consistently mischaracterized and taken out of context by the community...
I don't blame anyone in this instance given the error...
Except blame is being leveled, in a more general sense, and that's what I've been trying to take exception to.

On this thread, and many others, when the MacRumors rank-and-file gets something wrong, the default assumption is to blame the receiver rather than the sender, i.e., the community rather than the tech authorities, e.g.:

"It's because people have no patience or interest in learning about relevant details."

To which I'd like to respond: Just a sec, not so fast. To be sure, some of the responsibility does lie with the audience. But let's not put it all there. Some of the responsibility can also be laid squarely upon the lousy state of technical communication within the industry.

As I've opined, this is due principally to two things: (1) Even when tech authorities want to communicate clearly, they often don't know how, because they're not trained in the art of teaching; and (2) Often, for marketing reasons, information is released that is designed to mislead and obscure.

Everyone does the latter, like when AMD provided bar graphs purporting to show their new CPUs beat AS in performance, failing to mention they were comparing multi-core scores from high-core-count AMD processors with Apple's base models.

Another great example of dishonest and misleading tech communication can be seen in Apple's introduction of the Pro Display XDR at WWDC 2019, where they ridiculously claimed it could replace the $43k Sony Trimaster BVM-X300 HDR mastering monitor (they didn't mention it by name, but that's the monitor they showed), when they surely knew it lacked the capability to achieve Dolby Vision HDR certification, which is the BVM-X300's defining feature.


[More precisely, Dolby certifies facilities rather than displays. But generally, to get Dolby Vision HDR Certification for your facility, you need an HDR mastering monitor capable of 200,000:1 static contrast. The XDR's is much lower. Even if you search through Apple's whole damned white paper about the XDR, which I did, you won't find a meaningful contrast figure, just a value of 1,000,000:1 without any explanation of what it means.]
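If it helps, the arithmetic behind those contrast figures is trivial, and working it through shows why a bare ratio tells you nothing without the measurement conditions. A quick sketch (the 1,000-nit white level below is an assumption I picked purely for illustration, not a measured spec of either display):

```swift
// Contrast ratio is just peak white luminance divided by black luminance, so a
// quoted ratio implies a black level only once you fix the white level and the
// test pattern (full-field vs. a local-dimming-friendly checkerboard).
// The numbers below are illustrative assumptions, not measurements.
func impliedBlackLevel(peakNits: Double, contrastRatio: Double) -> Double {
    peakNits / contrastRatio
}

let assumedFullScreenWhite = 1000.0  // nits, assumption for the worked example

print(impliedBlackLevel(peakNits: assumedFullScreenWhite, contrastRatio: 200_000))    // 0.005 nits
print(impliedBlackLevel(peakNits: assumedFullScreenWhite, contrastRatio: 1_000_000))  // 0.001 nits
```

The point being: an implied black level of a few thousandths of a nit means very different things depending on whether it holds across a full dark field or only inside a dimming zone on a favorable test pattern, and that's exactly the context a lone "1,000,000:1" omits.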

So to the extent the community is under the misimpression the XDR can serve as an HDR mastering monitor, it's not the community's fault, it's Apple's.

I'd imagine Apple's engineers, who certainly knew the monitor's limitations, begged the marketing people not to do that presentation, since they knew it would damage Apple's credibility with the pro community (something they'd been working hard to rebuild).

Regardless, the Apple execs green-lit the presentation, with the expected results. Here are some posts from a discussion thread at liftgammagain.com, a professional colorists' forum. These pretty much sum up the colorists' reaction: they felt the monitor may or may not be good for other things, but were disgusted by the phony claim that it can do what the BVM-X300 can do. These are all industry professionals who use their real names and have profiles on IMDb.

[Screenshots of three colorists' posts from the liftgammagain.com thread.]
 


Despite what it may appear at times, @leman and I are not actually the same person. :) Given the mistake in the Ars article, I said I do not blame anyone for thinking this - hell until I read the article a few times and thought about it for quite some time, it made me think the same! I'm pretty sure you can even find me on these forums quoting the damn article when it came out saying "look Craig says it's possible!" So, while I don't want to speak for him, @leman and I may or may not have a slight difference of opinion of who is to blame here in this particular instance.

Also, in this instance, the Ars mistake is more similar to the forum bug that attributed both of the quotes in your post to me (which I have seen before; it's a weird forum bug, so even that is still a little different from a poor or mischaracterized junket quote) than it is to an Apple marketing claim that was misleading or exaggerated, like the 3090 vs M1 Ultra GPU comparison, or the color grading capabilities of the XDR, or apparently a huge percentage of Apple's Apple Intelligence presentation.

Here is the full quote from the Ars article:

As for Windows running natively on the machine, "that's really up to Microsoft," he said. "We have the core technologies for them to do that, to run their ARM version of Windows, which in turn of course supports x86 user mode applications. But that's a decision Microsoft has to make, to bring to license that technology for users to run on these Macs. But the Macs are certainly very capable of it."

That this quote block is referring to Windows running natively on the machine is itself something not in quotes. That's important because it means it relies on the Ars writer to sum up the context from which he pulled the quote from Federighi, and if that summation is wrong, which I allege it is, then the quote as presented is misleading, but that's not necessarily on Federighi's head. The Ars writer could have screwed up the write-up of the conversation; or the question, or the interview moment it came from, which we don't know and don't have a direct transcript of, could have been ambiguous; or Craig, exhausted from a press junket, could have misunderstood what was a clear question about native Windows booting, answered it as if it were about virtualization, and thus seemingly contradicted what he himself had said earlier and indeed what Apple would later go on to say and do (i.e., that virtualization and emulation were the path forward for Windows). Should this have been caught and corrected in that case? Probably. Given the structural state of tech journalism, it frankly isn't surprising that it wasn't (if it was the writer's screw-up rather than Craig's then ... well ... most outlets have no copy editors anymore, and articles are only rarely edited for accuracy after publishing if no one caught the error).

But to me, that is indeed a little different from the examples you brought up, where I share your opinion that Apple, in official presentations, has absolutely screwed up in far more misleading and seemingly deliberate ways, that it should be called out for it, and that that isn't the fault of the community. Again, I don't want to speak for @leman, but I've seen him call out misleading Apple marketing on more than one occasion and not blame "the community" for the resulting confusion as well. So yes, Apple themselves are definitely the cause of some of the more pernicious falsehoods that spread, but I'd also like to think that I (and @leman) do a decent job of attributing those to Apple when appropriate.

That said, there are also definitely instances where the community gets some weird damn ideas that just become these accepted facts that have no basis in anything or at least are extrapolated so far from the original sources that blaming those sources would be grossly unfair regardless of those source's motivations or explanatory abilities. Hell the entire tech community well beyond Apple forums does that ... hell that's just society on almost every topic under the sun.
 
There is no dispute. Nvidia is the king of GPUs and Apple's GPUs are not as good. Apple is good at power efficiency, but that's another matter. For GPUs, Nvidia's and AMD's discrete and even integrated GPUs are probably first and second best.
 
How do we know Apple doesn’t have any Blackwell DGX’s?
They don't, and they even used Google or Amazon instead of Nvidia GPUs for Apple Intelligence, according to them.

Apple falling behind in AI isn't about hardware. All that is server side.
Servers are part of the hardware problem, which you admitted yourself. This is the main reason Nvidia dominates the AI market. Using graphics cards that are more than five years old proves a lot, and since they hate Nvidia, I highly doubt that it's not a hardware problem.

You need GPUs for training.
 
Now, a potentially interesting story is Sony developing a handheld around a processor that has backwards compatibility with x86 but apparently is not x86. It could be (and probably will be) an AMD ARM processor, as it already has to work with AMD GPUs. What would be more interesting is if Apple has been designing a new Apple Silicon CPU to work with discrete GPUs, specifically AMD GPUs in this case, and that will be the CPU in this forthcoming PlayStation portable. There have been rumors of Apple and Sony collaborating behind closed doors. Apple may be announcing retail sales of Sony's PSVR2 Sense controllers along with Apple Vision compatibility very soon (probably at the Moscone Center). Sony updated PSVR2 software to include hand tracking, which behaves very similarly to Apple's tech. This could also be the first M5, and an M5 Max for a real Mac Pro workstation with GPU support that doesn't cost as much as a car. One can dream.
 
There is no dispute. Nvidia is the king of GPUs and Apple's GPUs are not as good. Apple is good at power efficiency, but that's another matter. For GPUs, Nvidia's and AMD's discrete and even integrated GPUs are probably first and second best.

Even Nvidia falls short when using larger LLM models. Alex Ziskind did some head-to-head comparisons between a maxed-out Mac Studio and a Windows system with an RTX 5090 and 192GB of RAM. Once the models required more RAM than the Nvidia GPU had, any performance benefits disappeared. Even when Alex set the Mac to the same GPU/system RAM split as the Windows machine, the Mac was by far the faster machine while retaining its power efficiency.

At this stage, Nvidia doesn't even care about the consumer market, as evidenced by the dumpster fire that has been the 50-series launch. Cards shipping with missing ROPs, the 12V connector still being a piece of junk and melting both connectors and power cords, drivers breaking the most basic monitoring functionality, letting scalpers run the market, and somehow still believing an 8GB GPU is viable when many newer titles won't even run on a GPU with less than 12-16GB of RAM. All Nvidia cares about is the AI/ML market, and they will eventually wind up losing on that front as well because their approach is "throw more crap at it!".
 
Even Nvidia falls short when using larger LLM models. Alex Ziskind did some head-to-head comparisons between a maxed-out Mac Studio and a Windows system with an RTX 5090 and 192GB of RAM. Once the models required more RAM than the Nvidia GPU had, any performance benefits disappeared.
But that's just running existing models. The much more interesting scenario is training on those systems for enthusiasts at home. Professionals will do that on clusters anyway. But something like a cheap 5090 makes it interesting for use at home compared to data center cards, as it's very affordable.

All Nvidia cares about is the AI/ML market, and they will eventually wind up losing on that front as well because their approach is "throw more crap at it!".
Hardware in the AI/ML market is only a part of their business model. Software is the other. Nvidia is decades ahead of everyone else when it comes to software tools and frameworks/libraries. There's a tool for everything, robotics, autonomous driving, computer vision, genetics, generative AI... you name it, they have it. And that's why people keep coming back to Nvidia, it makes everything so much easier when developing your own AI systems or integrate them into applications. The cost of breaking free from Nvidia is massive (financially) and comes with the high risk of being left behind, as your competition using Nvidia will progress much faster.
 
Deepseek R1 is already going head to head with Meta's Llama and Google's Gemini, and to be honest, outside of Nvidia's own website I have heard and read literally nothing regarding the use of Nvidia's models, most likely because they have nothing for the end-user and developer markets. Regardless of whether you are running existing models or training new ones, even cards such as the 5090 will be seriously constrained with any model that uses more than 32GB of RAM, because the model will have to be partially offloaded to system RAM. With the majority of GPUs in use having only 8-16GB of RAM, those constraints mean users would have to limit themselves to even smaller models. The only way to get GPUs with more than 32GB of RAM is to purchase the datacenter versions, which cost significantly more than most end users either can or will pay. That is why Nvidia will shoot themselves in the foot over the long run, as prioritizing datacenter parts and AI over the consumer market is already showing negative effects on quality control for both the 4xxx and 5xxx RTX cards.
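To put rough numbers on why that 32GB ceiling matters: weight memory alone is roughly parameter count times bits per weight divided by 8, before you even add the KV cache and activations. A quick sketch (the parameter counts and quantization levels below are generic illustrations, not benchmarks of any particular model):

```swift
// Approximate weight memory for a model: GB is roughly
// (parameters in billions) * (bits per weight) / 8.
func weightMemoryGB(billionParams: Double, bitsPerWeight: Double) -> Double {
    billionParams * bitsPerWeight / 8.0
}

// Illustrative cases only:
print(weightMemoryGB(billionParams: 8,  bitsPerWeight: 4))   // ~4 GB:   fits an 8-16 GB card
print(weightMemoryGB(billionParams: 70, bitsPerWeight: 4))   // ~35 GB:  already over a 32 GB card
print(weightMemoryGB(billionParams: 70, bitsPerWeight: 16))  // ~140 GB: unified-memory territory
```

Once the weights spill past the card's VRAM, part of the model has to live in system RAM or run on the CPU, and that's where the performance advantage evaporates.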
 
Are you focusing on LLMs? The AI market is much, much bigger than that (for Nvidia and others). When I say Nvidia is decades ahead of others, I'm talking about AI tools offered to developers in general, not specifically trained LLMs. Have a look at their Omniverse ecosystem, with all the different options. We're regularly using DRIVE Sim with many aspects and Isaac Sim for mobile robotics. And yes, any system is limited. The Studio is limited to 512GB as well. The question here is, how much time do you put into the training and development outside of running a pre-existing model and how much does that change the performance. A training process is very different from inference and not really comparable as the latter is mostly dominated by how much you can fit into the memory. And then there's the question how large your model needs to be for inference. If you have a small LLM that can be run on a Raspberry Pi, it might be adequate to use it as an agent for reasoning and decision making in decentralised swarm robotics. This is very different from what R1, Gemini and others are using it for.

So the general question is what exactly you are using it for and how much memory you need for it. If you absolutely have to fit it into memory without (partial) offloading and you're limited when it comes to budget, then a Mac Studio can be a good and cheap option for up to 512 GB. But you'll likely have to deal with more effort during development vs. Nvidia with the tools and samples they provide. That's a bit of a binary situation: does the model fit or not? Depending on the answer, the choice is clear. If the model fits on both, though, the general consensus vs. the 4090/5090 is that the Nvidia solution is much faster (again, not for inference). That's been discussed over and over at LLMDevs over on Reddit.

But without specifying what the use case is exactly, it's pointless to discuss, as results will vary a lot. This also ignores power consumption, if that is a factor. I think "cheap" systems are great to try things at home here and there. But for professional work, nothing beats Nvidia for the hardware (even if it's a cheap card) and software combination. Let's say one wants to try a few things regarding AI and the sim-to-real gap for robotics: that's a face-off between Isaac Sim on Nvidia and Unity/Unreal plus a ton of DIY on non-Nvidia systems. And no, Coppelia & Co. are not really an option except for the most basic stuff. In the end, pick the right tool for the job.
 
Nvidia has been relatively quiet about their vehicle autonomy platform/stack recently. I am glad to see someone is using it, haha.
 