So we are comparing an M2 Ultra with a 28-core Intel part from 2016, which seems unfair.
These days Intel and AMD have CPUs with 56 and 64 cores and access to 4TB of RAM.

The most significant benefit of the M-series processors is power consumption, but a workstation has fewer restrictions and needs to deliver raw power.

I would love to see a comparison with the newer CPUs from AMD and Intel.

Apple is not even in the top 100 when it comes to raw power.
Passmark 😂😂😂😂 Use a benchmark that actually runs properly on Apple silicon. (Hint: it's the one every professional reviewer uses to give balanced comparisons. It's the one in this article.)
 
The M2 Ultra is perfect for the form factor of the Studio. It can't compete with the 13900KS, but that thing is just Intel jacking up the power consumption to retake the performance crown from AMD. At full load it's stupidly hungry. I don't think Apple is ever going to care about competing in that space.
Not to mention, would any x86 chip in that performance range be acceptable in a silent room or a silent-PC use case?

The Mac Studio's active HSF (heatsink and fan) is designed to make it as silent as possible.

The Ultra's die shrink makes it possible to run cool enough for that overengineered active HSF while still delivering the highest performance per watt in its chip class (see the rough arithmetic below).

An i9 & RTX 4090 may make sense in a room where multiple air blowers are acceptable, say for gaming, but the Ultra was never positioned for that. Apple took 3 years to get the Game Porting Toolkit (GPT) out... that's how serious they are about gaming on the Mac.
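For concreteness, performance per watt is just a benchmark score divided by the average package power drawn during the run. A minimal sketch, where the scores and wattages are hypothetical placeholders rather than measured values:

```python
# A minimal sketch of the performance-per-watt metric mentioned above.
# The scores and package-power figures are hypothetical placeholders,
# not measured values -- substitute numbers from your own runs.

def perf_per_watt(score: float, watts: float) -> float:
    """Benchmark score divided by average package power during the run."""
    return score / watts

systems = {
    "M2 Ultra (hypothetical)": (21_000, 60),    # (multi-core score, watts)
    "i9-13900K (hypothetical)": (24_000, 250),
}

for name, (score, watts) in systems.items():
    print(f"{name}: {perf_per_watt(score, watts):.0f} points/W")
```

A chip can lose on absolute score and still win this metric by a wide margin, which is the trade-off the Studio's form factor is built around.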
 
Remember how everyone doubted the M1 before it came out? Nothing seems to have changed.
Everyone thought it was absurd that an iGPU could equal a 4-year-old dGPU, until Apple did exactly that with a $499, 3.7L desktop that sipped less than 30W of power.
 
I'd like Primate Labs to explain what happened between Geekbench 5 and 6. The Ultra used to score almost double the M2 Max; then with Geekbench 6 it was much lower. Is this Geekbench's internal programming not being as good, or are there new limitations in GB6 that weren't there in GB5?

I read their blurb, but nothing stood out to me that explains this.
 
Now imagine the M3.

BTW, there's no reason the Vision Pro must have an M2. It will come out in 2024 with an M3: more power, better battery life.
No doubt. You wouldn't believe the arguments I've been getting into with people who can't accept that a device that is likely 12 months from shipping is completely locked hardware-wise. It's like the Apple Silicon DTK with its A12Z: they're showing this off with hardware that exists so developers can start working on software, but when they're ready to sell it, it will be the best device they can make. Shipping this device in 2024 with GPU cores from 2020, on essentially the same node as the M1, when there's a 3nm chip with 3+ years of development that could give it a potential 20%+ performance increase or 20%+ battery life increase, just to keep the specs in line with an announcement video, would be insane.
 
I'd like Primate Labs to explain what happened between Geekbench 5 and 6. The Ultra used to score almost double the M2 Max; then with Geekbench 6 it was much lower. Is this Geekbench's internal programming not being as good, or are there new limitations in GB6 that weren't there in GB5?

I read their blurb, but nothing stood out to me that explains this.

It's a different methodology; it's not about better or worse. Geekbench 5's multi-core test ran independent copies of the workload on every core, while Geekbench 6 has all the cores cooperate on a single shared task, so coordination overhead that GB5 never measured now counts against high-core-count chips like the Ultra.
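To see why that matters, here's a minimal sketch using Amdahl's law. The parallel fractions are illustrative assumptions on my part, not Geekbench internals:

```python
# A minimal sketch, via Amdahl's law, of why a shared-task methodology
# (GB6-style) can scale worse on high-core-count chips than independent
# per-core workloads (GB5-style). The parallel fractions below are
# assumptions for illustration, not Geekbench internals.

def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Ideal speedup when only part of the work parallelizes across cores."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

cores = 24  # M2 Ultra: 16 performance + 8 efficiency cores
print(f"independent tasks (p=1.00): {amdahl_speedup(1.00, cores):.1f}x")
print(f"shared task       (p=0.90): {amdahl_speedup(0.90, cores):.1f}x")
```

With even 10% of a shared task serialized, a 24-core chip gets roughly 7x instead of 24x, which would shrink the Ultra's margin over the Max without either chip getting any slower.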
 
Doesn't change the fact that Intel is way faster than M2 Ultra.
Except those Intel systems (especially with an NVIDIA graphics card) don't run macOS. If someone is building a computer just to crunch numbers, then an Intel system can potentially be faster and cheaper. But it doesn't run macOS, so right there it's going to be a hard stop if macOS is what you want to run...
 
I don't see how this is expected to compete with only 192GB of RAM when the old one had 1,536GB. It depends on your workload, but I don't know how you can keep the fixed amount that low and expect everyone to be fine upgrading to it from the old machine. It's not modular if you can't upgrade the RAM. They need to figure out faster interconnects for that.

Or are they just going to completely ignore that market segment moving forward? It's such a strange device, a half-assed attempt at being pseudo-upgradeable. I don't even think you can upgrade GPUs in it anymore. And yet it's so big.

I honestly don’t see the point of it vs. the Mac Studio, and this seems to be by design. Apple can point to poor Mac Pro sales and spew out some PR crap: “Users love the power and portability of Mac Studio, and we look forward to bringing exciting new updates to this platform moving forward.” And maybe they’ll bring back the iMac Pro and call it the iMac Studio with a 32” 6K display.

I just don’t think the Mac Pro is long for this world.
 
Doesn't change the fact that Intel is way faster than M2 Ultra.
How about you post a video of you or someone running Geekbench 6 on a stock i9-13900K system? No overclocking, no overvolting/undervolting, no water cooling, no fudging results. Then run the benchmark on a stock M2 Ultra and post that video as well. Total CPU power use is also a factor to consider (Apple's engineers are not giving Apple Silicon the same TDP as Intel's high-end chips, though they could), so also include performance per watt as an additional metric.

Also running other benchmarks than just Geekbench will be necessary.

Then, do real-world workloads of equally optimized software.

Then have at least 30 other people independently do the same thing so we have a little more data to be comfortable with statistical comparisons (a rough sketch of that aggregation follows this post).

Because what it looks like is the typical result for an i9-13900K is 2940 single core: https://browser.geekbench.com/processors/intel-core-i9-13900k

That's great performance on the benchmark, but it's not matching your cherry-picked other scores. The cherry-picked results might be interesting, but if they are not readily reproducible, they aren't particularly meaningful and are not representative of real-world workflow performance.

I'm not discounting benchmarks; they are important, but they are not the only important thing.

I will wait patiently for your videos and results, or at least a measured reply providing more evidence to support your claims.
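As a rough illustration of the 30-run idea, here's a minimal sketch that reduces repeated scores to a mean and a 95% confidence interval. The scores are randomly generated placeholders, not real Geekbench submissions:

```python
# A minimal sketch of aggregating ~30 independent benchmark runs into a
# mean and 95% confidence interval. The scores are randomly generated
# placeholders, not real Geekbench submissions.
import random
import statistics

random.seed(0)
runs = [random.gauss(2940, 40) for _ in range(30)]  # placeholder single-core scores

mean = statistics.mean(runs)
# ~95% CI half-width via the normal approximation (reasonable for n >= 30)
half_width = 1.96 * statistics.stdev(runs) / len(runs) ** 0.5

print(f"single-core: {mean:.0f} +/- {half_width:.0f} (95% CI)")
```

If two machines' intervals don't overlap, the difference is probably real; a single cherry-picked run tells you very little either way.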
 
WWDC 2020 announced Apple Silicon.

WWDC 2023 announced GPT
Yeah, but it sounds like you're saying they have been working on it for 3 years; in reality they may have made it in a few months by porting the DXVK developer's code to target Metal instead of Vulkan.
 
Yeah, but it sounds like you're saying they have been working on it for 3 years; in reality they may have made it in a few months by porting the DXVK developer's code to target Metal instead of Vulkan.
What I was pointing out is Apple's priority for Mac gaming.

If it took them 3 years to get GPT out via the DXVK developer's code, then it isn't that high up on their priorities list.

Apple Makes More on Games than PlayStation, Xbox, Nintendo + Activision + Windows COMBINED.

Their incentive to get into Mac gaming, whether it's lazy or smart scheduling, comes as PC gamers for the past 3 years have kept pointing out that the M1, M2 (and hopefully not the M3) have an iGPU too weak, compared to a dGPU, to play triple-A titles.
 
How about you post a video of you or someone running Geekbench 6 on a stock i9-13900K system? No overclocking, no overvolting/undervolting, no water cooling, no fudging results. Then run the benchmark on a stock M2 Ultra and post that video as well. Total CPU power use is also a factor to consider (Apple's engineers are not giving Apple Silicon the same TDP as Intel's high-end chips, though they could), so also include performance per watt as an additional metric.

Then have at least 30 other people independently do the same thing so we have a little more data to be comfortable with statistical comparisons.

Then, do real-world workloads of equally optimized software.

Because what it looks like is the typical result for an i9-13900K is 2940 single core: https://browser.geekbench.com/processors/intel-core-i9-13900k

That's great performance on the benchmark, but it's not matching your cherry-picked other scores.
Why would I, when the PC has way more potential than Apple Silicon, which Apple restricted so heavily? You can't do that? Well, that's Apple Silicon's problem, not the PC's. You only care about performance per watt, which is totally meaningless, especially to desktop users who need high performance. Besides, Intel's 13th gen is on Intel 7, which is more like TSMC 7nm, not 5nm, which you didn't even acknowledge. It was Apple who decided to make performance-per-watt-focused chips with a lot of restrictions, so I don't see your point. I don't care about performance per watt when the real-world performance is too low or poor. That's how Apple's GPU claims failed: they compared the M1 Ultra to the RTX 3090, and it never came close. A typical Apple lie that only Apple fanboys praise.
 