As if there aren't already plenty of explanations available?

Short answer: those TSMC wafers and packaging are not cheap.
I guess the part that needs explaining is the timing of the releases. Granted, the M2 Ultra Studio also lagged behind the M3 Max MBP, but those releases were six months apart. This time around, Apple is putting the M4 Max alongside the M3 Ultra at release.
 
This really shouldn’t have been posted until more than one result is in, to make sure there wasn’t an error. I don’t think the numbers are necessarily wrong but there is already some conflicting information out there and it hasn’t even been 2 hours.

Content for content’s sake isn’t always a great thing, especially since this is a released product not a rumored upcoming one. I’d have killed this story if I were the editor. Just creates the potential for confusion among the audience.

If you can’t, as a writer (and editor, assuming there is one), understand why “we would not be surprised if additional Geekbench 6 results for the M3 Ultra chip end up having higher performance scores” is in direct conflict with “M3 Ultra Isn’t Much Faster than M4 Max in First Benchmark Result,” there is a problem.
 
Apple’s processor roadmap is a mess. Why isn’t it linear? The M3, M3 Pro, and M3 Max came out a year and a half ago, we’ve had M4 variants on some things for a while now, and now the M3 Ultra arrives. FFS.
M3 Ultra was likely used for Private Cloud Compute and now they have the capacity and chip manufacturing agreement to take advantage of the node that already exists.

I agree with you though, I’d never buy the M3 Ultra. Too many compromises with M3 in general (no SME being the primary one if you’re doing AI / ML work), and too many unknowns about both M5 and the upcoming Mac Pro. I guess the ultra-niche use case of having a lot of extant AMX-specific code would sell a lot of them for the single-digit number of companies that have possibly done that.

It’s a pretty weird product.
 
Apple's language use here is not accurate.

1.5X = 50% faster or 150% as fast, but not 150% faster.
2.5X = 150% faster.
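The multiplier-to-percent arithmetic above can be sketched in a few lines (the timings are made up purely for illustration):

```python
# Convert "X times as fast" into "percent faster" (illustrative numbers).
baseline_s = 10.0              # hypothetical old-chip runtime, in seconds
new_s = baseline_s / 1.5       # a chip that is 1.5x as fast

speedup = baseline_s / new_s             # 1.5x as fast
percent_faster = (speedup - 1.0) * 100   # 50% faster, not 150%

print(f"{speedup:.1f}x as fast = {percent_faster:.0f}% faster")
print(f"2.5x as fast = {(2.5 - 1.0) * 100:.0f}% faster")
```

So "1.5x" and "150% faster" differ by a full factor of the baseline: only 2.5x as fast is 150% faster.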
Exactly. This kind of mistake is common in casual conversation around the coffee table, but it's unacceptable coming from Apple: they know full well it's wrong, and it's not as if they don't think carefully about what they say.
 
M3 Ultra was likely used for Private Cloud Compute and now they have the capacity and chip manufacturing agreement to take advantage of the node that already exists.

I agree with you though, I’d never buy the M3 Ultra. Too many compromises with M3 in general (no SME being the primary one if you’re doing AI / ML work), and too many unknowns about both M5 and the upcoming Mac Pro. I guess the ultra-niche use case of having a lot of extant AMX-specific code would sell a lot of them for the single-digit number of companies that have possibly done that.

It’s a pretty weird product.
Which is why the Mac Pro remains stagnant. Even Apple knows lol
 
This of course will lead so many who are just doing a number competition to (intentionally?) misunderstand what role each product has.

One buys the Ultra SoC for the following:
1) supports more displays;
2) double the video encoders;
3) more RAM.

The internet is full of people who simply want bragging rights (insert here long sociological discussions about male preening and posturing), and that is usually done based on My-Number-Is-Bigger-Than-Your-Number propositions.
Just for the record, my number is really bigger than yours: 8 > 1 :)
 
Which is why the Mac Pro remains stagnant. Even Apple knows lol
If they’re both in the lineup going forward with a generation gap at the high end I fear what the Mac Pro prices will be. Maybe the $2,999 up-charge will cover it, but I wonder if the gap will widen.
 
The Ultra screams 'aimed at AI crowd'.

One of the easiest ways a regular person can get a 'full size' model running locally. Expensive, but easy.
 
The Ultra screams 'aimed at AI crowd'.

One of the easiest ways a regular person can get a 'full size' model running locally. Expensive, but easy.
100%. If you NEED to run a large model locally, this is the best way to do it… for 6-8? months, assuming the Mac Pro update is coming.

The Mac Pro, even if it “only” has an M4 Ultra, will beat the Studio on time to first token by 20-30%+. If it has an M5 or some bespoke chip, it’s going to obliterate it, and suddenly the $10k you dropped to run that model doesn’t seem like a smart idea long-term.

Knowing Apple they will keep the M3 Ultra in the lineup for 2 years, although I guess the same can be said about the current Mac Pro.

For some startup focused on fine-tuning models for something ‘bespoke’ (not training, which all of these will suck at), it might be worth buying 4 of these M3 Ultra Studios and linking them via TB5, but that’s such a niche use case, and it’s also time-boxed in terms of how long it will stay relevant.

It’s an interesting but odd product.
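A rough sketch of why RAM, not raw compute, is the binding constraint for running big models locally. The parameter count and quantization level below are hypothetical, and KV cache and runtime overhead are ignored:

```python
# Approximate GB needed just to hold a model's weights.
def weight_gb(params_billion: float, bytes_per_param: float) -> float:
    # params_billion * 1e9 params * bytes each, divided by 1e9 bytes/GB
    return params_billion * bytes_per_param

# A hypothetical 405B-parameter model quantized to 4 bits (0.5 bytes/param):
needed = weight_gb(405, 0.5)
print(f"~{needed} GB of weights")
# Over 200 GB: too big for a 128GB M4 Max config,
# but it fits in a 512GB M3 Ultra Studio.
```

That gap between 128GB and 512GB configurations is essentially the whole "AI crowd" pitch for the Ultra.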
 
Geekbench seems to have trouble with many-core CPUs.

For example, the $15k+ AMD Threadripper with 96 (!) cores scores only around 30,000 points, which makes no sense.

We need to wait for benchmarks like Blender, Cinebench, etc.

 
Geekbench seems to have trouble with many-core CPUs.

For example, the $15k+ AMD Threadripper with 96 (!) cores scores only around 30,000 points, which makes no sense.

We need to wait for benchmarks like Blender, Cinebench, etc.

That's how it's intended to work; it's designed that way because it's a benchmark for common usage, where many apps can't scale properly past a certain number of cores.

I thought everyone knew that after all these years, but I guess the MacRumors article standard is still at the "copy & paste without any understanding" level.
 
It should be noted that Geekbench 6 is a collection of lightly threaded workloads; it doesn't reflect multicore performance. For example, in Geekbench 6, the 96-core AMD EPYC 9684X only scores 21,422, less than an i9-13900KS.

When evaluating huge chips like the M3 Ultra, we need to look at benchmarks like Cinebench R23 or 2024.
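The scaling ceiling these comments describe is essentially Amdahl's law. A minimal sketch with an assumed parallel fraction (these are illustrative numbers, not measured Geekbench data):

```python
# Amdahl's law: overall speedup is capped by the serial fraction of the work.
def amdahl_speedup(cores: int, parallel_fraction: float) -> float:
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Assume a workload that is 90% parallelizable:
for cores in (8, 32, 96):
    print(f"{cores:>2} cores -> {amdahl_speedup(cores, 0.9):.1f}x")
# Going from 32 to 96 cores barely moves the needle, which is why
# desktop-oriented scores flatten out on many-core CPUs.
```

With a 90% parallel workload, 96 cores give only about a 9x speedup, and no core count can push past 10x.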

 
It’s good to note, but my understanding of the Ultra is that buyers aren’t looking for single-core speed. It’s multi-core, GPU, and that giant dose of RAM, all of which many professions require, including “AI” models.
 
Apple’s processor roadmap is a mess. Why isn’t it linear? The M3, M3 Pro, and M3 Max came out a year and a half ago, we’ve had M4 variants on some things for a while now, and now the M3 Ultra arrives. FFS.

Because that’s not how professional chip releases work. Not for Apple, and not for Intel. I’m less sure about others. The reason is that it takes more work and more engineering to get these mega chips ready.
 
The M4 Max will be faster in any "normal" use case - or at least as fast as the M3 Ultra - for much less money.

The M3 Ultra only makes sense for people who need
- an insane amount of RAM
- an insane amount of GPU cores.

=> the AI crowd.. :)

Everyone else -> go with the M4 Max

I'm quite surprised to read so many comments pointing to AI usage. I get that AI is the thing of the moment, but still: what about the regular stuff these machines were supposedly built for? You know: video editing, 2D/3D graphics, and audio editing.

Have all professionals suddenly turned into AI farm builders, or what? And to do what, exactly? OK, you have remarkable computing power on hand. What of it? What data are your locally running AI models going to crunch?

Considering the RAM/GPU specs, is an M3 Ultra more suitable for video editing than an M4 Max? Shouldn't that be the more relevant question for a regular user weighing the M4/M3 differences?
 
We'll have to wait for deeper results and analysis after the M3 Ultra and M4 Max come out next week in the Mac Studio. But here's where I think the difference will be significant in terms of performance.

The GPU in M4 Max brings Apple’s advanced graphics architecture (second-generation) to Macs, including dynamic caching, hardware-accelerated mesh shading, and [most importantly] a second-generation ray-tracing engine for more seamless content creation and gaming. As Apple notes in its PR, "Mac Studio with M4 Max starts at 36GB of unified memory, with support for up to 128GB, so users can do everything from sorting through thousands of images with speed and precision, to producing complex compositions with hundreds of tracks, plug-ins, and virtual instruments, all played in real time."

However, the M3 Ultra also gets Apple's advanced graphics architecture with Dynamic Caching, hardware-accelerated mesh shading, and ray tracing [first-generation, i.e. a 2x-slower ray-tracing engine], so graphics workflows like GPU-based renderers are up to 2.6x faster than on the Mac Studio with M1 Ultra.

Key point is that the CPUs across the M4 family feature the world’s fastest CPU core, delivering the industry’s best single-threaded performance, and dramatically faster multithreaded performance. The GPUs build on the breakthrough graphics architecture introduced in the previous generation (M3), with faster cores and a 2x faster ray-tracing engine.

In conclusion: if you want the best ray-tracing performance, go with the M4 Max.
 
The main question is: is the M3 Ultra still vulnerable to GoFetch? https://gofetch.fail/

Yes, we understand Apple is deep into recycling, but it makes no sense to release a product with a known hardware problem.
 