I don't think any Studio system is right for anyone except those needing a higher-end machine. The majority should be purchasing a Mac Mini if going with a desktop.

I've never seen a Studio, nor do I know anyone who has any interest in one.
I found one review today that shows some of the differences well. It compares against the M3 Max, but in a heavy, long video render it was almost twice as fast, and in Topaz AI video upscaling it took 1:45 compared to 8 minutes.

 
Intel's current lineup, Lunar Lake (Core Ultra 200V), is on N3B. Arrow Lake is also N3B (it was going to be split with Intel 20A, but production rolled out only on N3B, as 20A was pruned to put more resources into 18A). Intel's entire contemporary lineup is on N3B. Arrow Lake isn't competing as well with AMD's offerings, but loads of wafers are being consumed. Even with AMD gaining share, Intel is still in the many-millions-of-chips zone.

Intel's next generation is supposed to get a complete ride on 18A (at least for the big CPU cores).

TSMC knows Intel wants to dial back once they get the chance, but this generation TSMC got the consumer big-CPU business. I doubt TSMC is eager to expand the lines there, but N3B isn't going to be wound down quickly either (as it looked like it would be when there was no other high-volume customer).

I remember Intel being one of the original companies on board to use N3B, but I thought I read that Intel had backed off due to cost/compatibility issues.* Maybe they just scaled back at first? But I have read many reports that N3B was more expensive and complicated due to the half-dozen or so additional layers. And that Apple had reserved most, if not all, of the initial production?

Well, if it wasn’t production issues, then I guess the only other reason I can think of for the M3 Ultra “delay” was that they wanted to upgrade the Thunderbolt controllers. It just seems weird that they waited until almost everything had moved off N3B to start fab’ing these SoCs. That, and that they’re only producing two variants: base and Ultra.

*Now that I think about it, it can take years to design these chips, so the option of just throwing the design out is usually not feasible and doesn’t really make sense. Especially since the N3B is completely incompatible with the rest of the ”3nm” nodes.

…I know, I know, rumors and reports should be taken with a grain of salt.
 
I'm debating whether to get an M4 Max with 64GB or 128GB of RAM.

I use it for FCPX editing, and I want to start getting into some basic sculpting in Blender this summer. So while I don't need 128GB of RAM at the moment, I don't change computers very often, so I wonder whether I should 'future proof'.

That said, it's a big jump in price, and my future proofing in the past hasn't really added any value when it comes to selling later down the line! Leaning more towards sticking with 64GB......
In Blender the main jump in RAM comes from imported HDRIs and high-res textures. In my experience 64GB is plenty. However, if you want to get serious with Blender, the more GPU cores you have the better, as in the M3 Ultra. Cycles eats GPUs for lunch.
 
The extra memory bandwidth of the M3 Ultra doesn't seem to be available for general purpose computing (computing that doesn't use the GPU or Neural Engine), because the M3 Ultra is only a few percentage points faster than the M4 Max despite having double the number of performance and efficiency cores. The only substantial advantage to the M3 Ultra in the general computing case is that memory is configurable up to 512GB and SSD up to 16TB. This also raises the question of whether either of the configuration limits on the M4 Max is artificial--the 8TB SSD limit in particular.

The same memory bandwidth is available to the entire SoC. The memory bandwidth in these chips is far above what is needed to keep data flowing through the CPUs; the higher bandwidth exists almost specifically to keep the GPU cores churning. There are 80 GPU cores in the Ultra; if you don’t have the bandwidth, you’re wasting resources.

The Ultra has twice the channels as the Max, in both memory and storage and everything else… it is quite literally 2 Max SoCs.
 
By default? That's not at all true.
Are you from a different universe or something? Any app can access virtually all of memory if it needs. Only system locked memory is unavailable to userland apps, which is typically just a few percent of the total memory in the system. Thus, if a Mac is configured with say 512GB RAM, just one app can potentially access ~500GB of it. If the app goes too far with attempted memory acquisition--going beyond 512GB, in this case--the kernel will unceremoniously kill it.
 
It would take nearly all the cores of a given SKU to begin saturating the available memory bandwidth. In those benchmarks, particularly ones only using the CPU, we are seeing the M4 Max performing very close to a supposedly doubled performance of two M3 Max; we are not seeing the bandwidth being a limit. There are other bottlenecks in the hardware chain.
For general purpose computing, using the Geekbench 6 benchmark, the M4 Max is roughly 22% faster than the M3 Max--this is with all cores deployed and nowhere near the "doubled" performance you claim. In my hands, having tested multiple new systems, the M4 Max and M3 Max produce significantly higher Geekbench scores than what geekbench.com reports for them, yet the M4 Max still performs about 22% faster on this benchmark. In my actual work, though, the M4 Max is more like 3-7% faster, while for some work the M4 Max is actually a hair slower than the M3 Max. Geekbench uses all cores. My work uses all cores.

Now consider the M3 Ultra (which I've not personally tested). The highest multi-core Geekbench scores logged thus far for it are only a few percent higher than those for the M4 Max. This suggests Geekbench needs optimization for so many cores (32) or that the memory bandwidth is nearly saturated already with the M4 Max and its 16 cores. The principal advantage to the M3 Ultra over the M4 Max for general purpose computing is that Apple allows configuration with more than 128GB RAM and as much as 16TB SSD.
 
Are you from a different universe or something? Any app can access virtually all of memory if it needs.

No, it can't. Not by default. The reason you don't know this is probably because you've never had to access your entire VRAM allocation, which you've probably never had to do because you don't run local LLMs. But if you tried to do this, you would know that it takes a manual action to make all of the VRAM available.
 
The same memory bandwidth is available to the entire SoC. The memory bandwidth in these chips is far above what is needed to keep data flowing through the CPUs; the higher bandwidth exists almost specifically to keep the GPU cores churning. There are 80 GPU cores in the Ultra; if you don’t have the bandwidth, you’re wasting resources.

The Ultra has twice the channels as the Max, in both memory and storage and everything else… it is quite literally 2 Max SoCs.
I've been clear about referring to the case of general purpose computing. The M3 Ultra is roughly 7% faster than the M4 Max, despite having double the Performance and Efficiency cores--32 cores total vs. 16.
 
I've been clear about referring to the case of general purpose computing. The M3 Ultra is roughly 7% faster than the M4 Max, despite having double the Performance and Efficiency cores--32 cores total vs. 16.

But are people buying the M3 Ultra for general purpose computing? I hope not. For the type of work I do - AI dev - there is no contest, the M3 Ultra smokes the M4 Max on benchmarks that are meaningful to our category.
 
I've been clear about referring to the case of general purpose computing. The M3 Ultra is roughly 7% faster than the M4 Max, despite having double the Performance and Efficiency cores--32 cores total vs. 16.

General purpose is incredibly vague. I believe you're referring to CPU multicore? You would be surprised how little consumer software is actually designed to take advantage of 32 CPU cores.

For the vast majority of users M4 Max will be faster. That's because its individual CPU cores are faster than those in M3 Ultra.

The thing is that you aren't buying M3 Ultra for single-core CPU performance. You're buying it for that monster GPU and that gigantic memory pool.
 
The thing is that you aren't buying M3 Ultra for single-core CPU performance. You're buying it for that monster GPU and that gigantic memory pool.

Yep. The reason you buy the M3 Ultra over the M4 Max is this (and note that the M4 Max is barely an improvement over the M2 Ultra):

Screenshot 2025-03-12 at 3.51.26 PM.png
 
The one thing that basically nobody should be doing is considering upgrading from an M3 to an M4 Mac. And yet 90% of the reviewers will focus on that question for a huge portion of their review.
Consumerism.

Especially in the US (but not exclusively), we've surrendered our humanity and let others label us as "consumers".

Consumers must consume, consume, and consume ever more.

So yeah, the "reviewers" are just advertising agents.
 
The reason you don't know this is probably because you've never had to access your entire VRAM allocation

If I may, let's encourage the industry to just drop this terminology.

The term gets pushed by the likes of Nvidia (who also pushed "GPU" as a product label), but technically all M series RAM is the same.

macOS allocates RAM to whatever process needs it.

However, by default macOS limits the GPU cores' access to only a portion of the total RAM.

It's pretty straightforward to change that limit, but supposedly (I cannot speak to the statistics) doing so risks a kernel panic.

I suspect that is only on Macs with little RAM, but it may happen even on a machine with lots of RAM.
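For anyone curious what changing that limit looks like in practice: on recent macOS releases the GPU wired-memory cap is reportedly exposed as the `iogpu.wired_limit_mb` sysctl. Here is a rough sketch of picking a custom cap; the 8GB OS headroom is just an assumption for illustration, not an Apple recommendation:

```python
def wired_limit_mb(total_ram_gb: int, os_headroom_gb: int = 8) -> int:
    """Conservative GPU wired-memory cap in MB: all unified RAM minus
    a fixed headroom left for macOS and other processes."""
    if total_ram_gb <= os_headroom_gb:
        raise ValueError("not enough RAM to leave any OS headroom")
    return (total_ram_gb - os_headroom_gb) * 1024

# e.g. on a 512GB machine, leaving 8GB for the OS:
# wired_limit_mb(512) -> 516096 (i.e. 504GB)
```

Applying it would then be something like `sudo sysctl iogpu.wired_limit_mb=516096`, which does not persist across reboots; set it too aggressively on a small-RAM machine and you risk exactly the kernel panic mentioned above.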
 
For general purpose computing, using the Geekbench 6 benchmark, the M4 Max is roughly 22% faster than the M3 Max--this is with all cores deployed and nowhere near the "doubled" performance you claim. In my hands, having tested multiple new systems, the M4 Max and M3 Max produce significantly higher Geekbench scores than what geekbench.com reports for them, yet the M4 Max still performs about 22% faster on this benchmark. In my actual work, though, the M4 Max is more like 3-7% faster, while for some work the M4 Max is actually a hair slower than the M3 Max. Geekbench uses all cores. My work uses all cores.

Now consider the M3 Ultra (which I've not personally tested). The highest multi-core Geekbench scores logged thus far for it are only a few percent higher than those for the M4 Max. This suggests Geekbench needs optimization for so many cores (32) or that the memory bandwidth is nearly saturated already with the M4 Max and its 16 cores. The principal advantage to the M3 Ultra over the M4 Max for general purpose computing is that Apple allows configuration with more than 128GB RAM and as much as 16TB SSD.
I realize what I typed was too vague and you misunderstood: by "supposedly doubled performance of two M3 Max" I meant the M3 Ultra cannot reach its 2x potential, exactly what you are saying now. I was talking about how the M3 Ultra's CPU multi-core benchmark barely edges out the M4 Max, using that as a baseline to argue that the memory bandwidth difference between the two systems is not the differentiator.
 
Now consider the M3 Ultra (which I've not personally tested). The highest multi-core Geekbench scores logged thus far for it are only a few percent higher than those for the M4 Max. This suggests Geekbench needs optimization for so many cores (32) or that the memory bandwidth is nearly saturated already with the M4 Max and its 16 cores.

Someone with the technical background to know said on X that the issue is with Geekbench under-reporting the true performance due to not being designed for such high-core count CPUs (it evidently does the same on high-core count Xeons and EPYCs, as well).
 
It’s not just about capacity. The memory bandwidth is atrocious compared to NVIDIA solutions for AI. Sure, it might not get to 512GB, but the performance makes up for it.
That's incorrect. They can't even run the models. You need to be able to fit the complete model into RAM. You would need to string together 16 Nvidia 5090s to run the full version of DeepSeek. This Mac Studio with 512GB can do it at 18 tokens per second. Nothing else approaches that until you're talking about multiple Hoppers and hundreds of thousands of dollars.
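To make the "fit the complete model into RAM" point concrete, here's a rough sizing sketch. The parameter count, quantization level, and 10% runtime overhead below are illustrative assumptions, not measured figures:

```python
def model_ram_gib(params_b: float, bits_per_weight: float,
                  overhead: float = 1.1) -> float:
    """Approximate RAM (GiB) needed to hold a model's weights,
    plus ~10% assumed overhead for KV cache and runtime buffers."""
    weight_bytes = params_b * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 2**30

# A 671B-parameter model at 4-bit quantization comes out to
# roughly 340 GiB: far beyond any single consumer GPU's VRAM,
# but comfortably inside 512GB of unified memory.
```

The same arithmetic shows why a 128GB M4 Max can only run much smaller or more heavily quantized models.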
 
But are people buying the M3 Ultra for general purpose computing? I hope not. For the type of work I do - AI dev - there is no contest, the M3 Ultra smokes the M4 Max on benchmarks that are meaningful to our category.
If one needs a Mac and more than 128GB memory, then obviously the M3 Ultra is the only option, whether it's for general purpose computing (like I do), graphics or AI. (For some of my work, 512GB isn't close to enough, which is why I have a 2019 Mac Pro.)
 
Someone with the technical background to know said on X that the issue is with Geekbench under-reporting the true performance due to not being designed for such high-core count CPUs (it evidently does the same on high-core count Xeons and EPYCs, as well).
I don't fully rely on Geekbench, since I have my own multithreaded software that runs efficiently at high core counts on macOS and Linux, and comparisons of Geekbench results between Linux and macOS aren't reliable indicators of performance. But Geekbench is reasonably accurate for relative performance between the various Mac platforms (of which I've tested many).
 
I've never even physically seen a computer with more than 16GB of RAM, and I've never seen a Studio.

You might want to consider getting out more. Heck, go into any Apple Store. :) I had 64GB 15 years ago, and I own a Studio now. I've never seen a great white shark, but I'm pretty sure they exist.

Your point that most people don't need a Studio... maybe, but so what? We're discussing the relative merits of the computer, not how many it will or won't sell.
 
I found one review today that shows some of the differences well. It compares against the M3 Max, but in a heavy, long video render it was almost twice as fast, and in Topaz AI video upscaling it took 1:45 compared to 8 minutes.


The thing I find frustrating about all these early reviews is they are using the maxed-out or nearly maxed-out M3 Ultra: the chip that adds $1,500 to the price of a base M3 Ultra, storage that adds another $2,200 (or $4,600), and the 512GB of RAM that adds $4,000. I get that's what the AI guys want, but I am curious how the base M3 Ultra performs, the one that is almost affordable lol. If I went the M4 Max route, which is tempting because of the single-core speed, I would add some storage and RAM, and the price bumps up against the base M3 Ultra. What I am getting out of the reviews is that if the tricked-out Ultra is only marginally faster in the things I would do with it, the base Ultra probably isn't worth giving up the single-core speeds for. And yes, again, I know, the AI crowd is excited about the tricked-out Ultra.
 
I remember Intel being one of the original companies on board to use N3B, but I thought I read that Intel had backed off due to cost/compatibility issues.* Maybe they just scaled back at first? But I have read many reports that N3B was more expensive and complicated due to the half-dozen or so additional layers. And that Apple had reserved most, if not all, of the initial production?

N3B is more expensive even before you get to yield; the wafers just cost more. Follow-on N3-family nodes will probably cost more as well. Even N3E costs substantively more than N5/N4, and N3B was more expensive still.


Intel hugely delayed their wafer usage allocation. There was some notion that perhaps that was to wait for N3E, but it turns out it was mainly Intel being late because it was chasing 'everything at the same time' and designing Arrow Lake for multiple fabs simultaneously (and it was also skittish about the Lunar Lake approach with DRAM soldered onto the subassembly).

" ... TrendForce yesterday issued a report claiming that Intel had decided to postpone the start of Meteor Lake's GPU tile production on TSMC's N3 node from late 2022 to 'late 2023,' which allegedly caused TSMC to revisit its N3 capacity investment plans. ..."

There are some issues here, because Meteor Lake's GPU tile was done on TSMC N5. Was that due to quirks in the GPU hardware that needed fixing, or because N3 wasn't really going to make it in volume by Q4 2022? Or some combination of both.

There are other rumors that Intel cut their wafers way back when they shifted plans to split Arrow Lake production between Intel 20A and N3B. TSMC cut their 'volume' discount, but Gelsinger/Intel thought they would get more mileage out of showing the viability of 20A (to attract fab customers?). Turns out it didn't really work. Five nodes in four years really was "a bridge too far".

N3E also has "more layers and complication" than N5/N4; N3B was just stretched to about the limit of what was achievable. Because it takes longer to make, the iterative quality-assurance improvement process took longer. Then TSMC had customers moving slower (not moving volume; the iPhone volume window was missed). Intel dropping out of early production just made the improvement cycle even longer.

However, that was largely blown out of proportion by the rumor mill into claims that N3B was incurably bad and would never yield in decent volumes.

I think Apple de facto reserved the initial production largely because everyone else wasn't buying, not because it wasn't for sale.

Well, if it wasn’t production issues, then I guess the only other reason I can think of for the M3 Ultra “delay” was that they wanted to upgrade the Thunderbolt controllers. It just seems weird that they waited until almost everything had moved off N3B to start fab’ing these SoCs. That, and that they’re only producing two variants: base and Ultra.

I don't think they wanted to 'upgrade' the Thunderbolt controllers as much as match them. If they're fully committed to using the M4 Max on the lower half of the Studio lineup, how are they going to 'miss' TBv5 on the upper half?
Thunderbolt also has substantive portions that are 'I/O' (off-die) oriented, and N3-whatever doesn't bring much shrinkage there. If the M4's TBv5 isn't pushed hard against the fringe of N3 tech, it shouldn't be too hard to place on both N3B and N3E, provided neither one is used for max density.

If they have to make an M3 Max+ die anyway to put the UltraFusion connector on the larger die, they need a new die mask anyway, so adding in TBv5 would not be a huge incremental effort.



*Now that I think about it, it can take years to design these chips, so the option of just throwing the design out is usually not feasible and doesn’t really make sense. Especially since the N3B is completely incompatible with the rest of the ”3nm” nodes.

N3E SRAM (cache) density is the same as N5/N4, so "completely incompatible" is a bit of an overstatement. I/O circuits also didn't go anywhere radically new (in size and density) either. The "incompatibility" is far more grounded in the densest logic circuit options between N3E and N3B; if they don't aim at those, they don't hit those incompatibilities quite as hard.

The places where Apple primarily uses N3 logic density gains (the CPU cores, GPU cores, NPU cores) really are not changing between the laptop M3 Max and the M3 Max+. The memory subsystem isn't changing much either (the denser RAM dies are off the primary M-series die).
 
The thing I find frustrating about all these early reviews is they are using the maxed-out or nearly maxed-out M3 Ultra: the chip that adds $1,500 to the price of a base M3 Ultra, storage that adds another $2,200 (or $4,600), and the 512GB of RAM that adds $4,000. I get that's what the AI guys want, but I am curious how the base M3 Ultra performs, the one that is almost affordable lol.

You can use tools like Activity Monitor to gauge your workload's RAM and core footprint.

Back of the envelope: if there is a 25% reduction in GPU cores, then a 25% reduction on a task that wasn't maxing out memory would be a decent estimate. (Benchmarks that load up a max-sized AI model are not useful for scaling, but a "Photoshop workload 42" benchmark probably isn't RAM-limited.)


Similar for the CPU benchmarks, where dropping four P cores is a 17% drop in P-core count.

For single-threaded tasks there is no difference (presuming the task fits in RAM).
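That back-of-the-envelope estimate can be written down directly. This assumes perfectly linear scaling with core count and no memory bottleneck, which real workloads only approximate:

```python
def naive_scaled_time(base_seconds: float, base_cores: int,
                      new_cores: int) -> float:
    """Back-of-envelope estimate: runtime inversely proportional
    to core count (perfect scaling, no memory bottleneck)."""
    return base_seconds * base_cores / new_cores

# If the 80-core Ultra renders a frame in 60s, a 60-core part
# (25% fewer cores) would be estimated at 60 * 80/60 = 80s.
```

The same formula covers the CPU case: dropping from 24 to 20 P cores predicts roughly a 20% longer runtime on a fully threaded task.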

If I went the M4 max route, which is tempting because of the single core speed, I would add some storage and ram and the price bumps up against the base M3 ultra. What I am getting out of the reviews, is if the tricked out Ultra is only marginally faster in things I would do with it, the base ultra probably isn't worth giving up the single core speeds for.

Which is why folks don't do stories about two systems with absolutely minimal differences: a heavily upgraded Max versus the lowest possible Ultra. The differentiator is largely based on performance for specific apps for very specific sets of users. That isn't going to be a broad-interest story that attracts lots of readers.
[ Maybe a small-scale blog will find that readership count attractive, but the broad-market tech news feeds won't. ]

The most likely use case for the most stripped-down M3 Ultra Studio is in a render/compute farm, where the 'lower' price (relative to more expensive Ultra configurations) is the gateway to buying more of them, rather than as a single-user workstation. (Somewhat similar to the Mac Pro 2019, which came with a 512GB SSD option in its most stripped-down state. That again makes more sense as a virtualization platform where the VM images run off another drive; the primary drive just runs the base VM engine via macOS, with no local 'users'.)
 
M1 Max (MBP - Oct 2021) to M1 Ultra (Studio - March 2022) was 4 months.
M2 Max (MBP) to M2 Ultra (Studio) was 6 months.

The M4 Max came out in October 2024, so based on past performance, the M4 Ultra appearing sometime between March and May was a perfectly reasonable expectation.

Thanks for confirming everything I wrote.
 
I don't know where you came up with 20%. For general purpose computing, M3 Ultra is ~10% higher Geekbench multi-core vs. M4 Max, but that's using the averaged score for M4 Max from Geekbench.com, which is low. On multiple M4 Max systems with nothing else installed, I got scores averaging ~26,400. That makes the highest M3 Ultra scores currently posted on Geekbench.com only about 7.5% faster. Again, this is for general purpose computing, not video, AI, etc.
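For reference, the percentage comparisons here are just ratios of Geekbench multi-core scores. The 28,400 M3 Ultra figure below is an illustrative value consistent with the ~7.5% claim above, not a measured result:

```python
def speedup_pct(score_new: float, score_base: float) -> float:
    """Relative advantage of one benchmark score over another, in percent."""
    return (score_new / score_base - 1) * 100

# With the ~26,400 M4 Max average from the post and an illustrative
# M3 Ultra score of 28,400, the Ultra comes out about 7.6% faster.
```

Which baseline you divide by is exactly why the same pair of machines can be quoted as "~10% faster" or "~7.5% faster" in different posts.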

I guess you just think graphics performance isn’t important?
 
In Blender the main jump in RAM comes from imported HDRIs and high-res textures. In my experience 64GB is plenty. However, if you want to get serious with Blender, the more GPU cores you have the better, as in the M3 Ultra. Cycles eats GPUs for lunch.

Thank you - the most I'd be doing is sculpting the odd model/character for 3D printing. Nothing too extreme. So it sounds like 64GB would be plenty for me.....
 