Out of curiosity, which benchmark do you use for this comparison?
I started googling, and I'll say up front that what I found may not be comprehensive. The first video is from someone I'm familiar with; he's largely a PC person who loves building ITX computers, and clearly a $6,000 desktop build should beat out a laptop, so take this one with a grain of salt.

MaxTech is more of an Apple fanboy, but he does show the M1 Max losing to a 5900HX laptop.
 
Supporting two entirely different architectures for the sake of the Mac Pro seems really dumb.
Even the kernel is mostly architecture independent, xnu has just 11% of C/ASM code in the four architecture-specific sub-directories arm, arm64, i386 and x86_64. I'm not saying it's the best idea, I'm not saying it's the only idea, I'm not saying they must keep Intel support. What I'm doubting is that we on the outside can be sure that one solution is clearly better than the other. As you said yourself, there are always tradeoffs. I guess we are all a bit antsy to see what Apple will unveil, hopefully not too far in the future.
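That 11% figure is the sort of thing anyone can sanity-check. A minimal sketch of how one might measure it (assuming a local checkout of the xnu sources; the directory names are the real ones, but the `count_lines` helper is mine):

```python
import os

# xnu's four architecture-specific sub-directories
ARCH_DIRS = {"arm", "arm64", "i386", "x86_64"}

def count_lines(root, exts=(".c", ".h", ".s")):
    """Count source lines under root, split into arch-specific vs. total."""
    arch, total = 0, 0
    for dirpath, _, files in os.walk(root):
        # A path is arch-specific if any component matches ARCH_DIRS
        is_arch = bool(set(dirpath.split(os.sep)) & ARCH_DIRS)
        for name in files:
            if not name.endswith(exts):
                continue
            with open(os.path.join(dirpath, name), errors="ignore") as f:
                n = sum(1 for _ in f)
            total += n
            if is_arch:
                arch += n
    return arch, total

# Usage against a local clone of apple-oss-distributions/xnu:
# arch, total = count_lines("xnu")
# print(f"{100 * arch / total:.1f}% architecture-specific")
```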
 
There's really no evidence for that, and in fact I'd say that if Apple had designed a desktop CPU from square one and the M1 was the result, they missed too many opportunities. The M1 is first and foremost a mobile processor; every design choice screams mobility, imo. I'm not saying it's bad as a desktop processor, but the lineage is clearly from the Apple A-series processors.
Agree. Even the Watch used the same E-cores as the M1. Sure, it's "just" the E-cores, but it's also just a Watch!
 
What they likely mean is that the 96-core EPYC's single-core turbo maxes out at 3.7 GHz. This means that the per-core performance of these CPUs is close to the iPhone 11's.
Where did you get iPhone 11? In Geekbench a single 3.7 GHz Zen 4 core about matches a single 3.2 GHz Firestorm. And while Geekbench is not my favourite benchmark at least it's cross-platform.
 
UMA also has disadvantages: an RTX 4090, for example, has about 25% more memory bandwidth just for the GPU than an M1 Ultra has to divide between CPU and GPU.
I'm not sure about this. I think it might be the other way around actually.
In a non-UMA architecture, for the GPU to be able to use data the CPU has written to RAM, the data would first have to be copied from RAM to VRAM, and then the GPU would have to load it from VRAM, so twice the bandwidth is used to do the same task.
With UMA, the GPU can directly read what the CPU has written, without copying it first. UMA is not just sharing a pool of memory.

So in your example I think an M1 Ultra would be more efficient than CPU+RTX 4090 actually. Not to mention that the bandwidth of the RAM when copying to the VRAM would probably be a bottleneck as well.

I'm a software engineer, not a hardware one, so I'm happy to be corrected by an expert if I'm wrong.
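The "twice the bandwidth" argument above can be put into a toy model (my own simplification; real drivers DMA over PCIe and caches complicate everything):

```python
def bytes_moved_discrete(payload):
    """Discrete GPU: stage a copy from RAM to VRAM, then the GPU loads it."""
    copy = payload      # staging copy over the bus into VRAM
    gpu_read = payload  # GPU then reads the same data back out of VRAM
    return copy + gpu_read

def bytes_moved_unified(payload):
    """UMA: the GPU reads directly what the CPU wrote; no staging copy."""
    return payload

frame = 512 * 1024 * 1024  # 512 MiB of freshly CPU-generated data
print(bytes_moved_discrete(frame) // bytes_moved_unified(frame))  # -> 2
```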
 
Even the kernel is mostly architecture independent, xnu has just 11% of C/ASM code in the four architecture-specific sub-directories arm, arm64, i386 and x86_64. I'm not saying it's the best idea, I'm not saying it's the only idea, I'm not saying they must keep Intel support. What I'm doubting is that we on the outside can be sure that one solution is clearly better than the other. As you said yourself, there are always tradeoffs. I guess we are all a bit antsy to see what Apple will unveil, hopefully not too far in the future.
Those aren't the "easy" 11%, though...
 
Where did you get iPhone 11? In Geekbench a single 3.7 GHz Zen 4 core about matches a single 3.2 GHz Firestorm. And while Geekbench is not my favourite benchmark at least it's cross-platform.

Very unlikely. A 5.5 GHz Zen 4 (7900X) scores around 2200 in Geekbench. I would be surprised if a 3.7 GHz Zen 4 significantly breaks past 1500 points.
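For the curious, that estimate is just linear clock scaling, which if anything flatters the slower part (memory latency doesn't scale down with clock). A sketch of the arithmetic, using the ~2200-at-~5.5 GHz figure from the post above:

```python
def scaled_score(known_score, known_clock_ghz, target_clock_ghz):
    """Naive upper bound: assume Geekbench ST score scales linearly with clock."""
    return known_score * target_clock_ghz / known_clock_ghz

# 7900X: ~2200 GB5 single-thread at ~5.5 GHz boost, rescaled to a 3.7 GHz core
estimate = scaled_score(2200, 5.5, 3.7)
print(round(estimate))  # -> 1480, i.e. short of 1500
```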
 
Those aren't the "easy" 11%, though...
No, but they already exist. What is more work, maintaining that code or developing a workstation CPU? Rhetorical question, we don't know. Sure, there's more to it, but there's also more to an ARM Mac Pro than just the CPU.
 
In a non-UMA architecture, for the GPU to be able to use data the CPU has written to RAM, the data would first have to be copied from RAM to VRAM, and then the GPU would have to load it from VRAM, so twice the bandwidth is used to do the same task.
More often than not things are copied once to the GPU and then used many times. Think textures in a game. Of course that's not always the case, so the long answer is, as so often, "it depends".
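The copy-once-use-many case can be modeled the same way. A toy comparison of traffic on the shared system RAM (my simplification, ignoring caches and texture compression):

```python
def system_ram_traffic_discrete(asset, uses):
    """Discrete: system RAM is read once for the upload;
    the GPU's repeated reads afterwards hit VRAM only."""
    return asset  # one staging read, regardless of how often the GPU samples it

def system_ram_traffic_unified(asset, uses):
    """UMA: every GPU read of the asset goes through the one shared memory."""
    return asset * uses

texture = 64 * 1024 * 1024  # 64 MiB texture
frames = 1000               # sampled once per frame
print(system_ram_traffic_discrete(texture, frames)
      < system_ram_traffic_unified(texture, frames))  # -> True
```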
 
The Mac Pro sells in extremely limited quantities.

A quick note on this: it might still make sense for Apple to keep a flagship Mac Pro around just for prestige, even if it costs them extra and almost nobody buys one. This struggle is all about perception, and having a computer in the lineup that can outperform a top-of-the-line Threadripper at half the power consumption is very good advertising, even if there are no customers for that particular machine.

Which again reinforces your point that keeping around x86 macOS for a Mac Pro makes no sense. If you already offer a flagship, this has to be the flagship that shows off your technology, not the technology you have abandoned.
 
Very unlikely. A 5.5 GHz Zen 4 (7900X) scores around 2200 in Geekbench. I would be surprised if a 3.7 GHz Zen 4 significantly breaks past 1500 points.
My mistake, the 4.5 GHz in the GB listings is probably the base clock, not the actual frequency during ST tests.
 
No, but they already exist. What is more work, maintaining that code or developing a workstation CPU? Rhetorical question, we don't know. Sure, there's more to it, but there's also more to an ARM Mac Pro than just the CPU.

This is not about macOS, it's about the software ecosystem. Forcing developers to support both platforms is going to kill the Mac business in very short order. Not to mention that Nvidia GPUs (as you advocate) would probably mean killing off Metal.
 
Apple employees are probably itching to get rid of support for x86 CPUs and AMD GPUs in macOS. Speaking as a software engineer, there is probably a ton of code in macOS and first-party software that is something like "If Intel CPU, do this. If Apple Silicon, do this.". I would want to delete this code asap.
I can’t wait for this day
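For the curious, the kind of dual-path code being described looks roughly like this (a generic sketch, not actual macOS source; `simd_backend` is a made-up name):

```python
import platform

def simd_backend(machine=None):
    """Pick a code path per CPU architecture — exactly the sort of
    branching a pure-ARM macOS could delete. (Generic sketch only.)"""
    machine = machine or platform.machine()
    if machine in ("x86_64", "AMD64"):
        return "avx2"    # If Intel CPU, do this.
    if machine in ("arm64", "aarch64"):
        return "neon"    # If Apple Silicon, do this.
    return "scalar"      # portable fallback for anything else

print(simd_backend("arm64"))  # -> neon
```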
 
First, yes, Apple should believe that an M2/M3 Extreme would be better than any CPU AMD can make for the Mac Pro.

Using a CPU with many slow cores designed for data center use will not beat any Apple Silicon that is 2x Ultra for macOS users. It'll look good in Cinebench though. :D

Second, Apple does not want to support AMD and Nvidia products in addition to Apple Silicon. It's not as simple as slapping together PC parts from Newegg and calling it a day. You need to write drivers, support them in first-party software such as Final Cut, and provide years of support. All of this costs money, time, and resources. It also makes the macOS codebase much more complicated in the long run.

Third, AMD + Nvidia does not satisfy the unified memory model, which is the model Apple is going for. It has huge performance advantages in applications that support it.

Lastly, AMD CPUs and Nvidia GPUs simply don't support a lot of features that Apple Silicon does, e.g. using the Neural Engine for extracting objects from photos, the ProRes encoders, the Secure Enclave, or image-processing acceleration. Even if these parts could be replicated with AMD + Nvidia, Apple would not want to. Not for the Mac Pro, which is a niche.

Apple employees are probably itching to get rid of support for x86 CPUs and AMD GPUs in macOS. Speaking as a software engineer, there is probably a ton of code in macOS and first-party software that is something like "If Intel CPU, do this. If Apple Silicon, do this.". I would want to delete this code asap.
This. Plus full in-house control of all components, leverage when ordering chips, cross benefits with their whole product-line, logistics etc. No way Apple will go back to relying on development of key components by external parties. Apple must be quite frustrated with the slow development of their own modem chips…
 
Forcing the developers to support both platforms is going to kill the Mac business in a very short time.
I have been wanting to see numbers on this for a long time. Like how much software really needs more than a recompile? We all know that "optimised for" is often just a marketing slogan.
 
3. Even if we assume that Apple would be willing to diversify its CPU lineup, a big problem is that editing programs, such as Adobe Premiere, Davinci Resolve, and, to a lesser extent, FCP, occasionally unexpectedly quit. I believe the fact that this is, thank GD, becoming quite rare with FCP is due to Apple's control of the hardware and software.
In my experience compiling the same software on multiple platforms helps expose and fix bugs which improves stability.
 
I have been wanting to see numbers on this for a long time. Like how much software really needs more than a recompile? We all know that "optimised for" is often just a marketing slogan.

It's very simple really. If you have no incentive to do something, why would you? If you have to support both ARM and x86 versions of your app, and you know that ARM Macs run the x86 version just fine, why would you bother building and testing the ARM version?
 
It's very simple really. If you have no incentive to do something, why would you? If you have to support both ARM and x86 versions of your app, and you know that ARM Macs run the x86 version just fine, why would you bother building and testing the ARM version?
Apple could still remove Rosetta 2 in a couple of years. That's a separate decision.
 
I don't consider those benchmarks to be fair because Anandtech used WSL instead of Linux. The performance penalty for using WSL is unclear.

There is no performance penalty. It's just running Linux-compiled software under the Windows kernel. There is no interfacing with devices; this is pure CPU code, so any potential inefficiencies associated with crossing driver boundaries do not apply. If it makes you feel better, add 2% to the x86 scores. Not that it changes anything.
 
Apple could still remove Rosetta 2 in a couple of years. That's a separate decision.

While offering both x86 and ARM machines? Do you think they are suicidal?

We have a practical example in the industry: Windows and 64-bit support. It took more than a decade after 64-bit x86 processors became widely available for 64-bit software to become the standard, simply because Microsoft wanted to "play it safe".
 