It creates mistrust when a (h/w + s/w) company does not seem to understand simple relational database concepts, in this case when assigning a product ID designation.

Sure, the underlying technology of the M3U is cool and all. That becomes irrelevant, though, because shoe-horning an M3*-era concept into an M4*-era sell sheet is always going to be just plain weird.

At least it is for me, and it would be hard for a sales person to convince me otherwise.
 
IMO Geekbench is not where you should be looking. The various large differences between the M3U and M4M will not be accurately described by Geekbench, unless of course you coincidentally have a workflow that Geekbench emulates. E.g., where does Geekbench account for the 4x larger RAM ceiling available to M3U users? That single parameter is all-important to some workflows, but meaningless to others.

I suggest looking hard at what your 2025-2030 workflow may want to do, then plan your new box configuration accordingly. With special attention to ever-increasing RAM demands.
Also, Geekbench recently posted that they nerfed multicore CPU scores by some amount to make them more representative of desktop workloads, which is not going to show the Ultra in the best light for workloads that are NOT general desktop.
 
We should all be paying attention to the direction DeepSeek has been going.
Quality High, Speed Low.

It costs me about $1 for a million tokens on Fireworks.ai.

Not worth the $10k investment when Apple themselves are ditching their own GPUs for NVIDIA.
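For a sense of scale, here's a back-of-envelope comparison. The $1-per-million-token rate is the figure above; the machine price and monthly token volume are placeholder assumptions, so plug in your own numbers.

```python
# Back-of-envelope: cloud token cost vs. a ~$10k local machine.
# The $1/M-token rate is the Fireworks.ai figure quoted above; the machine
# price and monthly token volume are hypothetical placeholders.
cloud_price_per_million = 1.00     # USD per million tokens (quoted above)
machine_cost = 10_000.00           # USD, rough high-end Studio figure from this thread
tokens_per_month = 50_000_000      # assumed usage: 50M tokens/month

monthly_cloud_cost = tokens_per_month / 1_000_000 * cloud_price_per_million
months_to_break_even = machine_cost / monthly_cloud_cost

print(f"Cloud: ${monthly_cloud_cost:.2f}/month; break-even after {months_to_break_even:.0f} months")
# At these numbers: $50/month, ~200 months -- ignoring electricity, resale
# value, privacy, and latency, which can all move the answer.
```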
 
M1 Ultra - Yes
M2 Ultra - Yes
M3 Ultra - Yes
M4 Ultra - Not yet, may or may not happen.

So Apple have in fact released an Ultra for every generation except the M4 so far.
I was unclear, sorry. I was referencing Ultras like the M3 with huge RAM available. The M1 and M2 Ultras were not as special as the M3 Ultra is, and the M3 Ultra's release timing is also very different, coming concurrently with the M4 Max Studios.

IMO Apple's long term plans at the high end remain unclear. I am not complaining, just observing.
 
Quality High, Speed Low.

It costs me about $1 for a million tokens on Fireworks.ai.

Not worth the $10k investment when Apple themselves are ditching their own GPUs for NVIDIA.
Folks with $10k to spend are very different from those spending billions on large clusters. Claiming "Apple themselves are ditching their own GPUs" is a bit disingenuous, unless you are referencing the folks building huge server farms.

That said, the jury is still out on local LLMs. Your Fireworks reference is very valid.
 
#1 will be more expensive (after combining the cost of two generations of Studios), faster in many areas, will last slightly longer than #2 (again, after combining two Studios), and will probably be challenged by its RAM limit from time to time.

#2 will be less expensive and RAM-sufficient, but often a bit slower.
Turning over equipment more quickly will always be more expensive in annualized costs.

However, there are also opportunity costs. Keeping an older (originally more expensive but higher-spec) machine brings two of them, as the rough sketch after this list illustrates:
a) a newer machine may introduce a new capability to exploit;
b) the higher initial outlay for the up-market machine (M3 Ultra) takes cash up front that could otherwise be used on something else.
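A rough sketch of that annualized comparison, with purely illustrative prices and lifespans (none of these figures are from the thread), might look like this:

```python
# Illustrative annualized-cost comparison of the two strategies above.
# All prices and lifespans are placeholder assumptions, not quotes.
m4_max_price = 3_500     # assumed well-specced M4 Max Studio, bought twice
m3_ultra_price = 6_500   # assumed high-RAM M3 Ultra Studio, bought once
period_years = 6         # planning horizon; option 1 replaces at year 3

option_1_per_year = (2 * m4_max_price) / period_years   # two cheaper boxes
option_2_per_year = m3_ultra_price / period_years       # one bigger box

print(f"Option 1 (two Max Studios): ${option_1_per_year:,.0f}/year")
print(f"Option 2 (one M3 Ultra):    ${option_2_per_year:,.0f}/year")
# The spreadsheet math ignores the opportunity costs in (a) and (b) above:
# capabilities gained at the mid-cycle replacement, and the return on the
# cash not tied up in the bigger machine.
```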

We've all beaten this horse to death, but to repeat: there are two use cases that justify an M3 Ultra:
1) you're going to be doing lots of video per week; and/or
2) you can't live with just 128GB of RAM.

If you're not in one of those use cases, you're just better off with an M4 Max.
 
Folks with $10k to spend are very different from those spending billions on large clusters. Claiming "Apple themselves are ditching their own GPUs" is a bit disingenuous, unless you are referencing the folks building huge server farms.

That said, the jury is still out on local LLMs. Your Fireworks reference is very valid.
Apple will be using NVIDIA to power their AI models. There are talks of them putting in a very large order with NVIDIA. They are also sponsoring developers to have MLX run on CUDA.

Maybe the next Ultra will also use an NVIDIA GPU.

Google's Gemini just destroyed every other model.

TLDR: use the cloud if you can; don't waste time with large local LLMs.

😂
 
There are talks of them putting in a very large order with NVIDIA.

And if you look at Apple's annual report, you'll see how important "Services" has become as a segment of corporate revenue. Apple will do what it has to do to grow that segment. They are not going to sacrifice billions of dollars over some brand pride.

Maybe the next Ultra will also use an NVIDIA GPU.
Surely you jest.

TLDR: use the cloud if you can; don't waste time with large local LLMs.
The thing is, though, that models like Claude Sonnet, Gemini, and ChatGPT are strategically jacks-of-all-trades.

For a local machine, the ideal will be a tuned model for a specific goal. I have things I'd like to do locally, for specific work. A smaller model that is fine-tuned to a specific kind of work is something I expect to become exceedingly common in coming years.
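For what it's worth, running a small quantized model locally is already only a few lines with Apple's mlx-lm package; a minimal sketch, assuming the mlx-community 4-bit Mistral repo below is still available (any similarly sized model works):

```python
# Minimal local-LLM sketch using Apple's MLX stack (pip install mlx-lm).
# The model repo is an example from the mlx-community hub and is an
# assumption -- substitute whatever small, fine-tuned model fits your task.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.3-4bit")

messages = [{"role": "user", "content": "Summarize this support ticket in two sentences: ..."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Runs entirely on the Mac's unified memory; a 4-bit 7B model fits easily
# in 16GB, which is the appeal of the bigger-RAM machines for larger models.
text = generate(model, tokenizer, prompt=prompt, max_tokens=200, verbose=True)
```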
 
And if you look at Apple's annual report, you'll see how important "Services" has become as a segment of corporate revenue. Apple will do what it has to do to grow that segment. They are not going to sacrifice billions of dollars over some brand pride.


Surely you jest.


The thing is, though, that models like Claude Sonnet, Gemini, and ChatGPT are strategically jacks-of-all-trades.

For a local machine, the ideal will be a tuned model for a specific goal. I have things I'd like to do locally, for specific work. A smaller model that is fine-tuned to a specific kind of work is something I expect to become exceedingly common in coming years.
Project Digits will be perfect for that.
 
Google's Gemini just destroyed every other model.

TLDR: use the cloud if you can; don't waste time with large local LLMs.

😂
• AI processing is in its infancy. Different models will continue leapfrogging each other. Even if models were very similar, which they are not [models and goals vary all over the place], claiming that any model has "destroyed every other model" is just silly at this point. Like saying oaks just destroyed strawberries; it depends upon whether one needs acorns or berries.
• Especially since AI has not [yet] commenced designing itself in a major way.

• Distributed processing versus centralized processing has been argued for 50 years or more. Which is preferable has always depended upon the specific work output involved.

• Addendum to the processing discussion: i/o tech evolution may have significant impact on the issue of distributed processing versus centralized processing.
 
My M1 MacBook Air is still powerful enough to process photos when I travel. My M2 Max Studio base model is more than powerful enough to process photos, and it has never shown any signs of strain from anything I throw at it. Apple produces new products to continue to grow revenue even when the products show just marginal improvements every year.
 
My M1 MacBook Air is still powerful enough to process photos when I travel. My M2 Max Studio base model is more than powerful enough to process photos, and it has never shown any signs of strain from anything I throw at it. Apple produces new products to continue to grow revenue even when the products show just marginal improvements every year.

The 'it's good enough for me, it's good enough for everyone' argument? You seem to be suggesting we don't need these incremental improvements and that it's just Apple's corporate greed (like a for-profit company?). Well, there are people who push their machines harder than you do, and who do see value in more processors and GPUs running faster. Improvement almost always comes in incremental changes.
 
You seem to be suggesting we don't need these incremental improvements and that it's just Apple's corporate greed (like a for-profit company?)
To say nothing of the masses who don't have an Apple M-anything, much less a Max variant.

Seriously, some people just assume that if Apple is making something then it must be for them.

Probably less than 1% of the people on this planet have an Apple desktop computer. And those who do mostly own iMacs.

So, there's a potential market for plenty of new customers for a high performing desktop box.
 
I returned an M4 Max MBP16 in January in anticipation of an M4 Ultra Studio. The M4 Max MBP16 did not impress me as being much of an upgrade from the M1 Max MBP16 that I use with DaVinci Resolve. Plus, it got hot, and my impression is that I was destroying the battery. This M3 Ultra decision by Apple has made me question what to do: wait for the M5 Max MBP, most likely to be introduced in November, or, since Apple has skipped the M4 Ultra, hope that an M5 Ultra will be released in the spring of 2026.
 
I returned an M4 Max MBP16 in January in anticipation of an M4 Ultra Studio. The M4 Max MBP16 did not impress me as being much of an upgrade from the M1 Max MBP16 that I use with DaVinci Resolve. Plus, it got hot, and my impression is that I was destroying the battery. This M3 Ultra decision by Apple has made me question what to do: wait for the M5 Max MBP, most likely to be introduced in November, or, since Apple has skipped the M4 Ultra, hope that an M5 Ultra will be released in the spring of 2026.
For Resolve, here's a good YouTube video that has tests from various machines on a leaderboard and keeps getting updates from users; it should give you a guide on things.

You'll see the latest top-end M3 Ultra is currently second on the chart, which looks good on paper, but...
it's 32 seconds slower than my 5090 PC (on a render that takes only 2 min 28 sec), though Resolve isn't really optimized or properly supported yet for the 50-series Nvidia cards.
So, as a percentage of the render time, that's a pretty big gap, and that was for the 80-core GPU M3 Ultra.

I'm also undecided like you, but am probably staying with the PC for now for video work. It's really annoying, though, as the M4 Max Studio looks great for photo work, and some tests show it's faster in Photoshop etc. than the M3 Ultra.
I'm curious about the Mac Pro, but that will be silly money, I'm guessing.
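Spelling out that gap with the two numbers above (the 5090 render time and the 32-second difference):

```python
# Percentage gap implied by the render times quoted above.
pc_seconds = 2 * 60 + 28          # 2 min 28 s on the 5090 PC = 148 s
ultra_seconds = pc_seconds + 32   # 80-core M3 Ultra was 32 s slower

gap_percent = (ultra_seconds - pc_seconds) / pc_seconds * 100
print(f"{ultra_seconds}s vs {pc_seconds}s -> about {gap_percent:.0f}% slower")
# Roughly 22% slower on this one render -- with the caveat that Resolve
# isn't yet well optimized for the 50-series cards.
```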
 
I picked up the base for $1699. Personally I think that is a very fair deal for an Apple product with that chip + amount of memory. Is it a perfect configuration? No. But is it worth paying an extra $900-$1200 to have the perfect config? I am not so sure. I've been able to get by with a Mini that has 24GB (although that always hits yellow memory pressure), so I think the base might be OK, and I might have to manage when it's not.
Which model is this, please? Thanks.
 