A benchmark that only tests integer performance. But does it check Apple-relevant things like the audio/video accelerators, the ML cores, etc.? There is more to the M-series than pure integer-pushing power... a lot more!

Integer and FP performance. This is general-purpose CPU performance, after all. Not everyone cares about audio/video (I couldn't care less, for example). I want a fast, capable CPU and GPU. My M1 Max is still the fastest mobile CPU on the market for all practical intents and purposes (and neither Zen 4 nor Raptor Lake changes this).
 
Are you asking what’s the point of a laptop being thin, light, fast, silent and long lasting?

'twas sarcasm I'm afraid, hence the /s. Unfortunately, unlike the many more generous contributors above, I regard the incessant premise of "Why doesn't Apple do what every other PC assembler does?" as somewhat disingenuous. I mean, the OP will, at some point, be able to purchase the desired configuration if they wish. Apple's general philosophy and emerging architecture won't deny them that.
 
Oh, they most certainly are. I would expect that move sooner rather than later, imo.
But would they and should they even if they kept x86 Macs? That was the context of my comment. If they transition fully of course they'll get rid of it, never any doubt.
 
For the Mac Pro, let's not make a bunch of assumptions until it's actually been announced and we have the actual specs.

Yes, I get that Mark Gurman & co. have their sources and say a lot of things. And I'm sure Apple is testing the kind of Mac Pro with the kind of specs Gurman is proposing. It sounds reasonable.

But he's not right about everything. And I really don't see Apple focusing much on anything but the VR/AR product in 2023 (and eventually the iPhone 15 lineup in Q3-Q4).

The VR/AR product is clearly the big, significant step for Apple's immediate future and something they've been preparing for many years.

A new Mac Pro is, regardless of how long we've been waiting for it, not going to amount to more than a new design and beefed up specs at best. And at worst, it's just the same cheese grater Pro with some new M2 internals.

But whatever it turns out to be, it's still just another desktop Mac. It's not a new product in a new product category, and not something that will have a huge impact on Apple's future if it fails to sell.
 
This video does some testing and there appears to be around 15% penalty on using WSL instead of bare metal.

The chart in the video shows 1348 points for native Windows and 1311 points for Linux on WSL. That's a 3% difference, which might be random noise for all we know (I didn't watch the video in detail; did he run multiple tests and compare the distributions, or are these just point values?). Or did I overlook something?
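For what it's worth, a quick way to tell whether a gap like that is noise is to run the benchmark several times and compare the gap to the run-to-run spread. A minimal sketch in Python (the placeholder workload and the two-sigma rule of thumb are my own assumptions, not how the video did it):

```python
import statistics
import time

def bench(workload, runs=10):
    """Time a workload several times; return (mean, stdev) in seconds."""
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        workload()
        samples.append(time.perf_counter() - t0)
    return statistics.mean(samples), statistics.stdev(samples)

def looks_significant(score_a, score_b, run_stdev):
    # Crude rule of thumb: treat the gap as real only if it exceeds
    # twice the run-to-run spread of the benchmark scores.
    return abs(score_a - score_b) > 2 * run_stdev
```

With only the point values 1348 vs. 1311 and no spread reported, there's no way to apply a check like this, which is exactly the problem.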
 
One of the benefits of Apple sticking with ARM for the Mac Pro is developers will decide if it's worth their effort to optimize for Apple Silicon or stop using macOS. I'm in science and a lot of work is going into making most of our tools compatible with Apple Silicon (I'm not a software developer and other people are doing this work). This is because with the power and efficiencies, Apple Silicon makes for really powerful portable workstations that have great battery life. Scale up this power with a desktop workstation (where power constraints are not as significant) and you have great performance with higher efficiencies. This makes the workstations cheaper to operate over time. This is also better for the environment.
 
One of the benefits of Apple sticking with ARM for the Mac Pro is developers will decide if it's worth their effort to optimize for Apple Silicon or stop using macOS. I'm in science and a lot of work is going into making most of our tools compatible with Apple Silicon (I'm not a software developer and other people are doing this work). This is because with the power and efficiencies, Apple Silicon makes for really powerful portable workstations that have great battery life. Scale up this power with a desktop workstation (where power constraints are not as significant) and you have great performance with higher efficiencies. This makes the workstations cheaper to operate over time. This is also better for the environment.
And the way power prices are evolving nowadays, it's also better for our wallets!
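As a back-of-the-envelope illustration of the running-cost point (every number here is an assumption for the example, not a measurement):

```python
# Assumed figures: a 600 W x86 workstation vs. a 200 W Apple Silicon one,
# used 8 h/day, 250 days/year, at EUR 0.40 per kWh.
HOURS_PER_YEAR = 8 * 250
PRICE_PER_KWH = 0.40  # EUR

def yearly_cost(watts):
    """Yearly electricity cost in EUR for a machine drawing `watts` while in use."""
    return watts / 1000 * HOURS_PER_YEAR * PRICE_PER_KWH

savings = yearly_cost(600) - yearly_cost(200)
print(f"EUR {savings:.0f} saved per year")  # EUR 320
```

At those assumed rates the lower-power machine saves a few hundred euros a year, before even counting cooling.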
 
The chart in the video shows 1348 points for native Windows and 1311 points for Linux on WSL. That's a 3% difference, which might be random noise for all we know (I didn't watch the video in detail; did he run multiple tests and compare the distributions, or are these just point values?). Or did I overlook something?
My tests with WSL show it's generally about 10-15% slower than "native", although I haven't done the comparisons in about a year so maybe it's improved. I know this is anecdote without me providing data (don't have access to it right now) but I just wanted to write and say I've seen about the same speed deficit. For my purposes it's much easier to use WSL. I'm ready to delete my dedicated Linux install because WSL is close enough in speed for my purposes. Microsoft has done an excellent job with it. I'm sure any speed differences will mostly go away over time with additional optimization.
 
The chart in the video shows 1348 points for native Windows and 1311 points for Linux on WSL. That's a 3% difference, which might be random noise for all we know (I didn't watch the video in detail; did he run multiple tests and compare the distributions, or are these just point values?). Or did I overlook something?
No, this was just one of those drag-racing, let's-see-who-wins videos. My takeaway is that WSL doesn't impact performance too much. I've been using it, and actually have Kali Linux running with Xfce, which is pretty cool imo.
 
But would they and should they even if they kept x86 Macs? That was the context of my comment. If they transition fully of course they'll get rid of it, never any doubt.
I believe that Apple, in both their words and actions, has made it 100% clear that they never intend to make another x86 Macintosh once all the Intel models are gone.
The mid-2020 27-inch iMac was the one computer Tim was referring to when he said, 2 1/2 years ago, that they still had Intel computers in the pipeline, and even that one has been discontinued.
The only Intel Macs Apple currently sells are wildly out of date, and more than likely only exist to not have huge gaps in the lineup, and because there is still inventory.
The second those computers are gone, Intel is gone from Apple’s entire lineup, and it is most certainly not coming back.
And once those computers are gone, it’ll be software support that comes next.
Like I said earlier, I wouldn’t be surprised to see the version of macOS released in 2025 (16) be the version that absolutely guts the OS of any reference of Intel.
M1 and Forward only on the hardware support list, Rosetta gone, any lingering code or interfaces made specifically for Intel computers gone, all drivers for their Intel computers gone.
The current version of macOS doesn't support any computer from earlier than 2017; believe me, a big cutoff is coming and it's going to hurt.
 
My tests with WSL show it's generally about 10-15% slower than "native", although I haven't done the comparisons in about a year so maybe it's improved. I know this is anecdote without me providing data (don't have access to it right now) but I just wanted to write and say I've seen about the same speed deficit. For my purposes it's much easier to use WSL. I'm ready to delete my dedicated Linux install because WSL is close enough in speed for my purposes. Microsoft has done an excellent job with it. I'm sure any speed differences will mostly go away over time with additional optimization.

Which software domain? CPU only or are you interacting with other devices (storage, GPU, network)?
 
I believe that Apple, in both their words and actions, has made it 100% clear that they never intend to make another x86 Macintosh once all the Intel models are gone.
Ignoring that your response completely missed the point of what I was actually talking about, I generally agree with what you say. But Apple also said they'd finish the transition by last year, which they didn't. That's why people are speculating left and right. I suppose only Apple releasing the next Mac Pro, or discontinuing the model line, will put an end to that.
 
They clearly have a big problem with the ARM Mac Pro, since they have not updated the Mac Pro in four years (since 2019).
And the rumored one keeps getting pushed back year after year.

It's costing them a lot of pro customers, who are switching to NVIDIA/AMD high-end solutions with no alternative from Apple.
 
The chart in the video shows 1348 points for native Windows and 1311 points for Linux on WSL.
The video shows that WSL had a performance penalty (up to 8%, as of five months ago). But we still don't know how much of a penalty it had two years ago, when those benchmarks were done.
[Image attachment: Linux-HV.png]


I still don't understand why Anandtech tried to compare WSL vs macOS benchmarks when the circumstances are not fair.
 
Does Apple really believe the M2 Extreme would beat a 192-core AMD CPU and a RTX 4090? Heck, you can probably put multiple RTX 4090 in the Mac Pro even (if Apple solves their politics with NVIDIA).
Good luck with that. I'm running a Linux rig with dual 4090s, and I can tell you two 4090s wouldn't fit in the current design. In fact, the case needed for proper cooling is huge. I'm only running the two 4090s in the case, nothing else, and it still doesn't fit under the desk. They would have to engineer a specific solution for such a setup (the water-cooled G5 says hello). Beyond that, Apple + Nvidia won't happen again. And besides, the 4090 is more of a gaming card; the RTX 6000 would be a much better choice.
Usually, they are x86 and Nvidia supremacists who can't stand that Apple Silicon is better than what they can buy in the Windows world.
Well, they don't have to stand it because it's not true. Here's a rusty old 1080 running circles around a M1 Ultra: https://sebastianraschka.com/blog/2022/pytorch-m1-gpu.html

That's a specific workflow, and there are others where AS is better. For hobby/YouTube-style video processing, some music work, and photo work, AS wins hands down if the required software is available for it. For Unreal Engine games/3D simulation, not so much. It really depends on the use case. I love my MacBook Pro for what it is; it's the first time that I only own a single Mac for everything. I've always owned a PowerBook/MBA/MBP plus an additional desktop (Power Mac/Mac Pro) for the heavy lifting. This is the first time the MBP is "good enough"... well, almost. For the heavy lifting I still own a PC with Windows/Linux, because some software isn't available (anymore) for the Mac (hello CUDA, etc.) or the biggest Mac still doesn't provide enough power for my workloads.

I'm looking forward to the next Mac Pro, but I'm afraid I'll be disappointed. I keep hearing 384GB of RAM/unified memory, but even if that's true, it's a joke for some (not all!) professional work. My team and I run plenty of machines with 1TB+ of RAM. That's as far as we get with single desktop machines; the rest is easily pushed to clusters with 250TB+ of RAM and a few PB of storage (upgrade in progress).

So the Apple fanboys are just as bad as the "Nvidia supremacists". AS might be the better choice for some, while Nvidia is the better choice for others.
 
They clearly have big problem with ARM Mac Pro, since they have not updated the Mac Pro for 4 years (2019).
That's my take, but then you can't take an architecture that's optimized for low power, with everything on silicon, and plop it into a workstation where expandability and performance are the primary metrics.

Just go back to the Mac Pro video from when they released it: they promoted how fast rendering is compared to the competition, and how you can add drives, video cards, and other components to increase performance. None of that is possible on the current M1 architecture.
 
UMA also has disadvantages, like an RTX 4090 has 25% more memory bandwidth just for the GPU compared to what an M1 Ultra has to divide between CPU and GPU
Not to mention, it would not be very economical to incorporate upwards of 16TB of UMA RAM for big-data applications and scientific research. Heck, we haven't even seen 256GB of UMA yet, and those workstation users are going to scoff at 128GB of RAM.
All tech has tradeoffs. But I really prefer UMA as a software engineer, and I think Apple is in a unique position to bring UMA to the masses in consumer/prosumer computers.
For most folks 128GB of UMA is plenty, but for big data, where you're processing millions of variables, it's just not enough. You can argue that cloud datacenters are designed for that, but a powerful local workstation with 2TB of RAM still has its uses.
I just can't imagine how Apple would squeeze more than 256GB of RAM into their SoC without drastically changing the design, unless Apple opens up PCIe lanes or something. A wider memory bus, maybe?
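For what it's worth, the bandwidth gap mentioned above roughly checks out against the published specs (1008 GB/s for the RTX 4090's GDDR6X, 800 GB/s for the M1 Ultra's unified memory, the latter shared between CPU and GPU):

```python
rtx_4090_bw = 1008  # GB/s, NVIDIA's published spec (GPU-only memory)
m1_ultra_bw = 800   # GB/s, Apple's published spec (shared by CPU and GPU)

extra_pct = (rtx_4090_bw / m1_ultra_bw - 1) * 100
print(f"{extra_pct:.0f}% more GPU memory bandwidth")  # 26% more
```

So "25% more" is about right, and in practice the gap is wider since the M1 Ultra's 800 GB/s also has to feed the CPU.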
 
Not to mention, it would not be very economical to incorporate upwards of 16TB of UMA RAM for big-data applications and scientific research.

Where did you see a 16TB GPU? Or even a 16TB single-board computer?

Once you have the technical capability to offer that much RAM at acceptable speeds (and we are talking dozens of TB/s), implementing UMA is peanuts.
 