Sure, there are ways around it; I just wasn't buying the "remove 32-bit from x86 and everything is easy" line. Also because x86-32 and x86-64 are so very similar: that was the whole point, extending to 64 bits in the least disruptive way possible.

Ah, I see what you mean. Yeah, absolutely, removing 32-bit from x86 won't change much. The complexity of the ISA is elsewhere.
 
Where does it say that Anandtech ran this via WSL in Windows?
We run the tests in a harness built through Windows Subsystem for Linux, developed by our own Andrei Frumusanu. WSL has some odd quirks, with one test not running due to a WSL fixed stack size, but for like-for-like testing is good enough. SPEC2006 is deprecated in favor of 2017, but remains an interesting comparison point in our data. Because our scores aren’t official submissions, as per SPEC guidelines we have to declare them as internal estimates from our part.
 
The video shows that WSL had a performance penalty (up to 8%, as of five months ago), but we still don't know how much it was two years ago when those benchmarks were run.
[attachment: chart of WSL vs. native Windows benchmark results]

I still don't understand why Anandtech compared WSL and macOS benchmarks when the conditions aren't comparable.

That's under a 4% performance penalty (between Windows and WSL). Again, you are welcome to add those 5% to the Anandtech scores (or 8% if you want to compare macOS to Linux). It doesn't really change the argument. The 8+2-core, 35-40 W M1 Pro/Max is still around two times slower in the SPECint benchmarks, and a bit faster in the SPECfp benchmarks, compared to AMD's 16-core, 140 W monster.
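As a side note, "adding 5%" to undo a 5% penalty slightly undershoots; dividing by the remaining fraction is the exact correction. A minimal sketch with a made-up score (the value is hypothetical, only the arithmetic matters):

```python
def adjust_for_penalty(score, penalty_pct):
    """Undo a fractional slowdown: measured / (1 - p) gives the native-equivalent score."""
    return score / (1 - penalty_pct / 100)

wsl_score = 100.0                              # hypothetical measured SPEC score
native_est = adjust_for_penalty(wsl_score, 5)
print(round(native_est, 1))                    # ≈ 105.3, vs. 105.0 from a naive "+5%"
```

For penalties this small the difference is well under a point either way, which is why it doesn't change the argument.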
 
That's under a 4% performance penalty (between Windows and WSL). Again, you are welcome to add those 5% to the Anandtech scores (or 8% if you want to compare macOS to Linux). It doesn't really change the argument. The 8+2-core, 35-40 W M1 Pro/Max is still around two times slower in the SPECint benchmarks, and a bit faster in the SPECfp benchmarks, compared to AMD's 16-core, 140 W monster.
I don't consider those Anandtech results suitable for comparison with Mx because I don't know the performance penalty WSL suffered two years ago.

Cinebench R23 has a similar problem. Would you consider Cinebench R23 fairer if the Mx results got a 10% boost (the performance that a patch gives Embree)? I wouldn't, because I don't know to what extent poorly optimized code penalizes the Mx results.

Anyway, I'm getting more and more suspicious of benchmarks. Somehow an M1 Pro crushes a Ryzen 5950 in AES-XTS and ML in multicore, although it loses in single-core.
 
Cinebench R23 has a similar problem. Would you consider Cinebench R23 fairer if the Mx results got a 10% boost (the performance that a patch gives Embree)? I wouldn't, because I don't know to what extent poorly optimized code penalizes the Mx results.

CB23 is not about fairness. It's simply about the fact that it only tests a very narrow software domain that is not representative of most things people do with CPUs. The fact that the ARM code is suboptimal is just an addendum to that. I wouldn't consider CB23 a good CPU benchmark even if Embree were completely rewritten with first-class ARM SIMD support.

And as I already said, it's not a benchmark in which Apple can win over x86. In a SIMD workload with long data dependency chains, it's all about clock frequency and data bandwidth. Apple Silicon prioritizes neither, as part of its focus on energy efficiency. E.g. Intel can fetch 128 bytes of data per cycle from L1 cache using AVX-512 (or 96 bytes using 256-bit AVX), while Apple can load 48 bytes per cycle. And that's just per cycle! Given that Intel runs higher clocks, the peak L1D load bandwidth can differ by a factor of 2.5x-3x. That alone makes a big impact on this kind of workload.
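A back-of-the-envelope check of that factor, using the per-cycle figures above; the clock speeds here are assumptions for illustration, not measured values:

```python
def peak_l1d_gbs(bytes_per_cycle, clock_ghz):
    """Peak L1D load bandwidth in GB/s (bytes per cycle * cycles per ns)."""
    return bytes_per_cycle * clock_ghz

intel = peak_l1d_gbs(96, 4.8)   # 256-bit AVX path, ~4.8 GHz boost (assumed)
apple = peak_l1d_gbs(48, 3.2)   # M1 family, ~3.2 GHz (assumed)

print(f"{intel:.0f} GB/s vs {apple:.0f} GB/s -> {intel / apple:.1f}x")
```

With those assumed clocks, the ratio lands inside the 2.5x-3x range quoted above; the AVX-512 figure would push it higher still.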

Anyway, I'm getting more and more suspicious of benchmarks. Somehow an M1 Pro crushes a Ryzen 5950 in AES-XTS and ML in multicore, although it loses in single-core.

Yeah, that's weird, right?
 
Honestly, I'm not worried about more power in the future. I'm thinking more that the issue is efficiency on the chips, especially the M1 Pro and M1 Max.

I think the M Pro lineup should have 2 extra E cores instead of the 2 P cores in the full working die, but that's just my opinion.

The M1 Max for sure should have 10 P cores.
 
I know. But it's definitely the kind of thread that will bring those people out of the woodwork like pouring vinegar on woodworm.
That hasn't been the case at all. Six pages of posts in, I've found the discussion respectful; even when someone disagrees, it's done with respect.
 
My suggestion to all these posters: let Apple do their thing and go buy a Dell if you want all that, instead of trying to change Apple. There are already many PC companies just like the ones you want Apple to be. Why care about Apple when what you want is already around the corner? ;) 😄

If you agree with everything Apple does, then you have no reason to be here, because all there is left is to praise Apple. And I thought this was a forum for discussion.

Why are you here?
 
The more I read about benchmarks, the less sense they make to me. For example, the M1 Pro crushes Ryzen in GCC (SPECint2017) but has similar performance in Clang (Geekbench 5). I wouldn't expect that much disparity in such similar workloads.

(Edited for better visualization)
[attachment: SPECint2017 results chart]

[attachment: Geekbench 5 results chart]


Anandtech's analysis explains that the difference between the M1 Pro/Max and AMD/Intel CPUs is greater in memory-bound workloads than in compute-bound workloads.
The one workload standing out to me the most was 502.gcc_r, where the M1 Max nearly doubles the M1 score, and lands in +69% ahead of the 11980HK. We’re seeing similar mind-boggling performance deltas in other workloads, memory bound tests such as mcf and omnetpp are evidently in Apple’s forte. A few of the workloads, mostly more core-bound or L2 resident, have less advantages, or sometimes even fall behind AMD’s CPUs.
Is the huge memory bandwidth that Mx SoCs have more important to the success of Apple Silicon than the use of ARM ISA?
 
Unless you want us to rely on the Rosetta crutch forever, it's either all or nothing. Apps are already transitioning too slowly, and we are two years in. If the Mac Pro remains x86, it will be less likely that we see native ARM apps.
 
Unless you want us to rely on the Rosetta crutch forever, it’s either all or nothing. Apps are already too slow to transition and we are two years in. If the Mac Pro remains x86, then it will be less likely to see native ARM apps.

Almost all my apps are native ARM on my MBP.

I also migrated my HTPC from x86/nVidia to Raspberry Pi 4 last year. Without any issues regarding software compatibility.

It's really just the commercial stuff that takes forever. It was the same with the 32-bit to 64-bit transition (mostly games and enterprise applications stayed on 32-bit while everything else went native 64-bit), and of course with Apple's move from PPC to x86 back in the 2000s.
 
Kind of unrelated to this thread, but I really like AMD's CDNA 3 GPU, because it's an SoC just like Apple's M1. High-performance computing is taking inspiration from Apple's unified memory and seeing the efficiency gains. NVIDIA's Hopper is still a largely discrete design, though with unified memory nonetheless.
 
I remember at the end of 2020 I had initially planned to buy one of those last 13" Intel MacBook Pros as a stopgap to replace my 2009 MacBook, and to wait a bit before getting an Apple Silicon Mac Mini to replace my 2012 quad-core i7 Mini as my desktop. But instead, come early 2021, I decided to take the plunge and buy an M1 MacBook Air, attracted by its lower price tag, performance roughly on par with the M1 MacBook Pro of the time (especially with the 8-core graphics and 16 GB of RAM, which is how I ordered mine), the ability to run some iOS apps (like GoldWave), and the growing software support for Apple Silicon.

In the end, I was glad I made that decision, as it's currently the fastest and most powerful Mac I have on hand, yet it uses less energy and generates less heat than all the Intel and PowerPC Macs in my collection. (Seriously; I remember being blown away at how it renders video in a third of the time it'd take to render the same project on my 2012 i7 Mac Mini, no matter the software.)

I'm looking forward to seeing what the refreshed Mac Mini line will be like; I'm hoping to replace said 2012 Mini with an M2 Pro-equipped Mini, as a logical step up from the i7 and my M1 MacBook Air.
 
Unless you want us to rely on the Rosetta crutch forever, it’s either all or nothing. Apps are already too slow to transition and we are two years in. If the Mac Pro remains x86, then it will be less likely to see native ARM apps.
From my workflow, the only app missing is basically WhatsApp. However, I know some other people, especially ML folks and artists, who are still stuck on Rosetta apps, and that sucks...

Hopefully we get an announcement of pulling the plug on Rosetta to speed things up in the dev community.
 
Hopefully we get an announcement of pulling the plug on Rosetta to speed things up in the dev community.
Can they pull the plug on Rosetta when they're still selling Intel based Macs or would that not matter?
 
Can they pull the plug on Rosetta when they're still selling Intel based Macs or would that not matter?
Personally, I don't see any reason why the moment Apple removes Rosetta and the moment Apple stops selling Intel Macs need to have any particular relationship to each other. What strikes me as important is how much time they give Rosetta after the first ARM Macs went on sale, and we are at two years and counting.
 
Can they pull the plug on Rosetta when they're still selling Intel based Macs or would that not matter?
I don't see why not. Developers still shipping Intel-only applications right now don't care to have Apple Silicon customers, and Rosetta is just enabling bad behavior. Turning it off doesn't affect any developer that's shipping new code. It would only affect customers still running Intel-only stuff, and for the large number who just bought their first Mac as an Apple Silicon machine, it's less likely that they have a large library of Intel-only apps.
 
We had the exact same conversations when the Intel switch happened.

The answer back then was: Apple is going to show you how much better the Intel offerings are compared to PowerPC.

I think this will wind up being the same answer...
 

What's the point of ARM?


I personally think that my ARMS help me a lot in my daily routine…

I'll see myself out
Well, having two arms and two legs really helps, as I'll still be able to hop along and feed myself after buying my next Mac. Don't know what I'll do when I have to upgrade beyond that, though.
 
If you agree with everything Apple does, then you have no reason to be here, because all there is left is to praise Apple. And I thought this was a forum for discussion.

Why are you here?

Who said I agree with all things Apple? I both praise and criticize Apple when justified, and you can criticize and discuss all you want with logic and valid reasons, but I don't make sensational claims that don't make much sense.

I explained in my comment why I'm here: to summarize the many ridiculous never-ending posts which have been circulating here since the release of Apple Silicon. That was what I was discussing.

"What's the point of ARM?"
You only have to watch Apple’s keynotes since 2020 to get your answer. If you still don’t understand you probably never will.

"For the Mac Pro, why not simply put a 96-core AMD EPYC CPU in it with an RTX 4090?"
Simply? They have put a lot of time and effort into switching to their own silicon, for many reasons, and you think they're going to "simply" throw all that away and go back to Nvidia/AMD/Intel? You think running a tech company is like changing the flavor of your ice cream whenever you want?

"Does Apple really believe the M2 Extreme would beat a 192-core AMD CPU and an RTX 4090?"
No, but you really seem to believe they do. Where and when did you get that impression? The rumors say there won't even be an M2 Extreme. That doesn't mean they couldn't build a Mac Pro as fast as a 192-core Epyc Genoa with RTX 4090 cards, but at what cost and for what use? The 192-core Epyc and the 4090 are not exactly cheap either. Just because you could do something doesn't mean you should.

So criticize all you want, but don't lose touch with reality, and don't expect the Mac to be a PC.
 
First, yes, Apple should believe that an M2/M3 Extreme would be better than any CPU AMD can make for the Mac Pro.

Using a CPU with many slow cores designed for data center use will not beat any Apple Silicon that is 2x Ultra for macOS users. It'll look good in Cinebench though. :D

Second, Apple does not want to support AMD and Nvidia products in addition to Apple Silicon. It's not as simple as slapping together PC parts from Newegg and calling it a day. You need to write drivers, support them in first-party software such as Final Cut, and provide years of support. All of these things cost money, time, and resources. It also makes the macOS codebase much more complicated in the long run.

Third, AMD + Nvidia does not satisfy the unified memory model. This is the model Apple is going for. It has huge advantages in performance and applications that support it.

Lastly, AMD CPUs and Nvidia GPUs simply don't support a lot of features that Apple Silicon does, e.g. using the neural engine for extracting objects from photos, using the ProRes encoders, using the secure enclave, using image processing acceleration. Even if these parts could be replicated with AMD + Nvidia, Apple would not want to. Not for the Mac Pro, which is a niche.

Apple employees are probably itching to get rid of support for x86 CPUs and AMD GPUs in macOS. Speaking as a software engineer, there is probably a ton of code in macOS and first-party software along the lines of "if Intel CPU, do this; if Apple Silicon, do that". I would want to delete that code asap.
Agreed.

This thread and your response remind me of a heated debate on why [my company's] software is not compatible with new OSs right at release; it comes down to the cost of engineers building around betas vs. the cost of building on a release candidate. Not remotely the same issue, but consider this a compliment on your comprehensive analysis. While things like this are obvious to [us], it's always interesting to read my own thoughts when written by somebody else.

I've been a member since 2006 and it is still rare to find reasonable responses like yours.

Cheers.
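As an aside, the "if Intel, do this; if Apple Silicon, do that" branching described above can be sketched as runtime dispatch; the backend names here are entirely hypothetical:

```python
import platform

def select_media_backend():
    """Pick a per-architecture code path; backend names are illustrative only."""
    arch = platform.machine()
    if arch in ("arm64", "aarch64"):    # Apple Silicon
        return "hw-prores"              # hypothetical hardware-accelerated path
    if arch in ("x86_64", "AMD64"):     # Intel
        return "sw-prores"              # hypothetical software fallback
    return "generic"

print(select_media_backend())
```

Every fork like this is one more path to build, test, and maintain, which is the cleanup cost the post alludes to.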
 