I think the M3 Ultra is going to mainly appeal to the A.I. crowd with 512GB of memory and 32 Neural Engine cores.
It would have been more appealing with 2x the memory bandwidth of the M4 Max, but unfortunately the bandwidth is on par with the old M1 and M2 Ultra chips; the larger memory size is the main upgrade. At this stage, I'm waiting for ASICs and dedicated language processing units to arrive sometime between 2026 and 2030.
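To illustrate why bandwidth matters so much for the A.I. crowd: local LLM text generation is largely memory-bandwidth-bound, since every generated token has to stream the model's weights from memory. A back-of-the-envelope sketch; the bandwidth and model-size figures below are assumed round numbers for illustration, not official specs:

```python
# Rough upper bound on LLM decode speed for a dense model:
# each generated token reads all weights once from memory,
# so tokens/s <= memory bandwidth / model size in bytes.
def max_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    return bandwidth_gb_s / model_size_gb

# Assumed figures: ~800 GB/s class bandwidth,
# a 70B-parameter model quantized to ~40 GB.
print(max_tokens_per_sec(800, 40))   # ~20 tokens/s ceiling
print(max_tokens_per_sec(1600, 40))  # doubling bandwidth -> ~40 tokens/s
```

This is why a 2x bandwidth bump would have roughly doubled the generation-speed ceiling, whereas more memory alone mainly lets you fit bigger models, not run them faster.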
 
There's no UltraFusion interconnect on the M4 Max. Apple engineering also said that means no M4 Ultra.
As I recall, the same was said about the M3. They also said not every generation will get an Ultra, but they never named the M4 specifically. If the next Ultra is an M5, it may come in two years rather than next year.
And graphics performance falls short of some of the 300k predictions.
Lastly, real-life tests, especially in AI, will be important, and of course any task that needs hoards of RAM will just fly.
 
Never mind that, how about Crysis?

In all seriousness though, unless developers release AAA games on the Mac, it won't matter how much GPU performance Apple adds to its chips; it will never be a serious contender for legit high-end gamers. They salivate over NVIDIA RTX boards, not unified-memory SoCs with no expandability.
The GPU is not only about games, though. There's also ML and DL, but I won't talk about that; I'll talk about my domain, 3D, where many programs use GPU acceleration: things such as GPU-accelerated rendering in Arnold or Karma XPU, which predominantly use CUDA or OptiX. Truth be told, support for AMD cards either doesn't exist or, where it does, is very limited.

Texturing is also heavily GPU accelerated. Painter & Mari both support OpenCL & CUDA, but CUDA generally performs better.

Real-time physics is dominated by GPU acceleration, whether Havok, Chaos physics in Unreal, Houdini Engine in Houdini, or Bullet physics in Maya. Apart from Houdini Engine, most run on OpenCL; only Houdini uses CUDA, though it still supports AMD hardware. I don't know how Apple GPUs perform here.

Physical or offline simulations also depend heavily on GPU acceleration: Bifrost in Maya exclusively uses CUDA for computation & OptiX for ray tracing, while Houdini uses both CUDA & OpenCL; Houdini recently moved to Vulkan as well with the 20.5 release.

Now, from what I have heard, Houdini, Maya, Unreal, and Havok do support Metal, but performance clearly lags behind NVIDIA & AMD.

I haven't seen many benchmarks of Apple hardware in this domain.
 
Yeah, that’s not very impressive, if true.
The jump in performance from M2 Max to M3 Max was also only iterative on pure raster performance, which is what the Metal benchmark in Geekbench is measuring.

A couple of early takeaways and caveats for these benchmark leaks:

1. We already assumed the single-core performance would be worse than M4 Max. That's a given.
2. The multi-core benchmark in Geekbench doesn't scale linearly with core count. There are likely many apps that will make better use of the available cores.
3. The GPU has ray tracing and mesh shading, neither of which is utilised here in Geekbench. Blender will be a better test of this GPU's new capabilities. 80 cores with hardware ray tracing should be a massive jump over the M2 Ultra's 0 cores with hardware ray tracing.
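Point 2 above can be illustrated with Amdahl's law: if any fraction of a workload runs serially, throughput stops scaling linearly with core count. A minimal sketch; the 95%-parallel figure is just an assumed example, not a Geekbench number:

```python
# Amdahl's law: speedup from n cores when fraction p of the work
# is parallelizable (the remaining 1 - p always runs serially).
def speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% parallel work, 32 cores give nowhere near 32x:
print(round(speedup(0.95, 32), 1))  # ~12.5x
print(round(speedup(0.95, 64), 1))  # ~15.4x - diminishing returns
```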

:)
 
Finally Doom will be playable on a Mac. Been waiting for this moment ever since 1993. Knew we’d catch up with the DOS PCs
 
Note they don't compare to AMD or NVIDIA GPU performance. It really, really sucks that the M-series Macs can no longer use external GPUs. This really screwed up our workflows. It would be great if Apple still produced or supported a new x86 (AMD/Intel) option if they were intent on gimping M-chip-based Macs this way.

It is forcing our organization to move off of Apple products for workstations, and I am really going to miss the integration and UI of macOS when working on these. (And in downtime, I personally miss being able to use Parallels for some gaming sessions on high-end GPUs.)
It used to mainly matter to gamers. But now, with AI models able to run locally IF they have the right resources and memory, it will affect a LOT more. And it really makes the Mac a non-starter as a development machine for any serious AI developer working directly with many parts of the process.

Sure, you can load some models onto a new Studio, but it will never come anywhere close to matching what a good external GPU or more focused AI PCI card could offer. Even if it caught up or beat them, with external GPUs you can just add another, and another.

But nope, not anymore, with external GPU support removed. Very bad timing for a move like that, which to me shows that Apple was indeed completely blindsided by the AI wave. And the AI wave is not just fun & games; there is a LOT of real work to be done by AI developers. Which sucks.
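To put rough numbers on the "right resources and memory" point: a quick way to check whether a model fits locally is parameters times bytes per parameter, plus some headroom for the KV cache and activations. A sketch under assumed quantization levels (the ~20% overhead factor is an assumption, not a measured value):

```python
def model_memory_gb(params_billions: float, bytes_per_param: float,
                    overhead: float = 1.2) -> float:
    # weights + ~20% assumed headroom for KV cache / activations
    return params_billions * bytes_per_param * overhead

print(round(model_memory_gb(70, 2.0), 1))  # 70B at fp16  -> ~168 GB
print(round(model_memory_gb(70, 0.5), 1))  # 70B at 4-bit -> ~42 GB
```

The fp16 case is where a 512GB unified-memory machine genuinely has no consumer-GPU equivalent; the catch the posts above raise is that fitting a model and running it fast are different problems.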
 
I'm hoping that Apple can get ahead of these CPU launches shortly to clear this up. Having multiple generations around makes a certain amount of sense, but having a previous generation chip be newly released and faster than the new gen chips that were released before it doesn't make a lot of sense.

I mean, it does. One is an Ultra, the other is a Max. Consumers put too much emphasis on the number in the processor name; they're just marketing names, they don't mean much else.

If they'd gone the way of Intel and just given long, forgettable strings of numbers to things, no one would care; over there too, their top-end Xeon processors are a generation or more behind.
 
I mean, it does. One is an Ultra, the other is a Max. Consumers put too much emphasis on the number in the processor name; they're just marketing names, they don't mean much else.

If they'd gone the way of Intel and just given long, forgettable strings of numbers to things, no one would care; over there too, their top-end Xeon processors are a generation or more behind.
Well no. One is an M3, the other is an M4. Apple makes it very clear that when they have a new gen of processors, that is the best one. This is the first time they're going against that.
 
I'm hoping that Apple can get ahead of these CPU launches shortly to clear this up. Having multiple generations around makes a certain amount of sense, but having a previous generation chip be newly released and faster than the new gen chips that were released before it doesn't make a lot of sense.
I disagree. It makes a lot of sense. Those in the market for a Mac Studio differ from those in the market for an iMac or Mac mini, for instance. What I expect to do on a Mac Studio vs an iMac is extremely different. Someone using an M2 Ultra for graphics performance has different needs than someone using an M4 Pro. If CPU performance is the underlying factor, then the M4 Pro has an advantage. There should be an expectation that consumers know their own computing needs. The type of consumer that needs 192GB of memory is nowhere near the same as one that barely uses 16GB. Again, we are talking M3 Ultra. For my use case the M4 Max is better suited. The M3 Ultra could definitely handle my use case, but it would be overkill at a price point where I can purchase two M4 Max Mac Studios.
 
I guess the thing is, M3 cores are still very snappy and fast. It's not like anyone is sitting around using an M3 generation processor thinking 'wow my machine is sluggish'. In fact I don't think a Mac has felt sluggish at all since the shift to M1 and onwards. So for those who want a very fast, snappy machine, AND the most multicore performance, AND the most GPU performance, AND insane amounts of RAM, the M3 Ultra studio is STILL a very good machine.

Personally I am currently running an M3 Max fully maxed out MBP16 and if I'm being honest with myself, I only want a new M4 Max Mac Studio simply because I like to have the fastest thing I can get. I do a lot of multitasking with fairly heavy apps, but I am yet to hit the limit of my M3 Max, or my M2 Max before that.

Looking at getting an M4 Max 16/40 core, 128GB RAM and 4TB SSD.

I have had 64GB RAM on my last 3 machines (27" iMac 10 core i9, M2 Max MBP16, M3 Max MBP16), and I feel like I could get away with 64GB again, however I would like to keep this new studio for a longer time.
 
The ray tracing on the M3 Ultra that is lacking on the M2 Ultra is a big deal. At this point, no one should get an M2 Ultra to try to save a few bucks if you will be rendering Blender or Maya files. That would be a terrible decision.
Having said that, the overall GPU performance gains over the M4 Max are still kinda lacking for Blender work given that hefty $2000 markup. I think all signs point to me skipping the Ultra this year and waiting for a more modern Ultra, as the M4 Max alone is already nipping at it for substantially less, and the M5 Max later this year will probably make it an even worse value over the M4 Max for anything other than running LLMs.
 
Seeing these benchmarks today really has me torn, since I've also been waiting since late 2023 for the next refresh, although my reasoning is a lot dumber.

For the past 20 years, on every desktop computer I've owned, I have found that I was fine with a prosumer-level CPU for 4 to 5 years, and the GPU was always the thing I needed to upgrade. Although I occasionally do some video work, 3D, and a fair amount of software development, I still like the ability to run games on the side, which is why I had a couple of Mac Pros in the past and eventually switched to the fastest 2018 Mac mini with an eGPU in 2019. I last upgraded my GPU in 2021, and the 4-year-old Radeon I'm using basically benchmarks around the same as an M4 Pro (excluding the eGPU overhead, which puts it slightly below).

Since eGPU is no more and I wanted a newer Mac, in 2023 I decided I would get the M3 Ultra when it came out, because I didn't like the idea of getting something with a worse GPU than what I was replacing, and I wanted something good enough that the GPU would still be "average" 4 years from now when I'm ready to upgrade again. I immediately ordered the 60-core version of the Studio when it went up this week, figuring that since the M4 GPU was only about 18% faster than the M3, 50% more cores than the M4 Max would put the 60-core Ultra at least 25% over the M4 Max.
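The 25% figure above follows from simple scaling arithmetic, assuming performance scales linearly with GPU core count (which, as the benchmark leak suggests, is optimistic):

```python
# Estimated M3 Ultra 60-core vs M4 Max 40-core, assuming
# each M4 GPU core is ~1.18x an M3 core and performance
# scales linearly with core count.
m3_per_core = 1.0
m4_per_core = 1.18

m3_ultra_60 = 60 * m3_per_core  # relative throughput, 60 M3 cores
m4_max_40 = 40 * m4_per_core    # relative throughput, 40 M4 cores

print(round(m3_ultra_60 / m4_max_40, 2))  # ~1.27, i.e. ~27% faster
```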

With the presumably 80-core model only being 38% faster in this benchmark, it's making me think the extra cost might not have been worth it, even though I want to keep the machine at least 4 years. I've tried running a bunch of numbers to estimate how the 60-core Ultra might fare compared to the M4 Max, but nothing I've come up with makes sense. The best estimate I can come up with places the 60-core M3 only 1.3% faster than the 60-core M2, which is still ahead of the 40-core M4 Max by 16%, but this implies that the M3 GPU cores are barely improved over the M2's.

I was hoping for the equivalent of a mid-tier desktop card with the 60-core model instead of a high-end laptop equivalent, but it's sounding like I should rethink my plan and switch to the M4 Max, because I'd have to pay triple the price to get the equivalent of a "pretty good" discrete desktop card.
The M3 Ultra 60-core GPU should murder the M2 Ultra 60-core in anything that supports Metal's RT acceleration, which was introduced with the M3. So your choice of that chip could be well justified, depending on what you plan to use it for. Games-wise, I'd wait to see how the Cyberpunk Metal port performs on it.
 
The M3 Ultra 60-core GPU should murder the M2 Ultra 60-core in anything that supports Metal's RT acceleration, which was introduced with the M3. So your choice of that chip could be well justified, depending on what you plan to use it for. Games-wise, I'd wait to see how the Cyberpunk Metal port performs on it.
Not to mention their dynamic memory allocation too, so yeah, real-world applications will probably show a big jump.
 
The M3 Ultra 60-core GPU should murder the M2 Ultra 60-core in anything that supports Metal's RT acceleration, which was introduced with the M3. So your choice of that chip could be well justified, depending on what you plan to use it for. Games-wise, I'd wait to see how the Cyberpunk Metal port performs on it.
I was convinced by a friend to keep the 60-core order and not switch down. It's exactly what I would have ordered had they refreshed with the M3 last year, and even a smaller 16% bump over the M4 Max is still around the level of improvement between the M3 and M4, putting me about a year ahead in GPU performance, which is what I want if I end up keeping the machine for 4 or 5 years. Once the base models catch up with that, it'll be time to upgrade again.
 
Looking at the master chart of Metal benchmarks, this still trails both the AMD RX 6900 XT and the W6800X. AMD is now two generations ahead of this, never mind NVIDIA. Pathetic.
 
Look, the numbers are great and all, but I can't run games on it. I get the whole "buy a PS5" argument (and I own one), but if it were easier for game devs to port, it'd be a very nice thing.
Why do you say it can't run games? I have games on mine.
 
The ray tracing on the M3 Ultra that is lacking on the M2 Ultra is a big deal. At this point, no one should get an M2 Ultra to try to save a few bucks if you will be rendering Blender or Maya files. That would be a terrible decision.
Having said that, the overall GPU performance gains over the M4 Max are still kinda lacking for Blender work given that hefty $2000 markup. I think all signs point to me skipping the Ultra this year and waiting for a more modern Ultra, as the M4 Max alone is already nipping at it for substantially less, and the M5 Max later this year will probably make it an even worse value over the M4 Max for anything other than running LLMs.

I wonder if the M5 Max GPU will surpass the raster performance of the M2 Ultra. That would be pretty neat if it did.
 
Looking at the master chart of Metal benchmarks, this still trails both the AMD RX 6900 XT and the W6800X. AMD is now two generations ahead of this, never mind NVIDIA. Pathetic.
I had a Mac Pro with a W6800X Duo in it, and even the M2 Ultra is faster at rendering in Blender. In real-world benchmark scenes the M4 Max is faster than the M2 Ultra by up to 2x, and almost as fast as a 4080 Super. I don't put much faith in synthetic benchmarks, only real-world render times. Some really good examples here (M1 to M3):


These M processors aren't nearly as bad as you're suggesting.
 
I disagree. It makes a lot of sense. Those in the market for a Mac Studio differ from those in the market for an iMac or Mac mini, for instance. What I expect to do on a Mac Studio vs an iMac is extremely different. Someone using an M2 Ultra for graphics performance has different needs than someone using an M4 Pro. If CPU performance is the underlying factor, then the M4 Pro has an advantage. There should be an expectation that consumers know their own computing needs. The type of consumer that needs 192GB of memory is nowhere near the same as one that barely uses 16GB. Again, we are talking M3 Ultra. For my use case the M4 Max is better suited. The M3 Ultra could definitely handle my use case, but it would be overkill at a price point where I can purchase two M4 Max Mac Studios.
That misses the point of my comment. My point is that having an M3 chip come out after an M4 doesn't make sense.

I fully understand that someone with an iPad has different needs than someone who wants a Mac Studio. That's obvious.
 