I've had my M1 Max Mac Studio since day one, and so far it's been excellent. It's done everything I've asked without batting an eye and has been super silent doing it. I'll keep it until I start seeing a need to upgrade, but I don't see that happening for at least 3 more years.
Many people are dumping their M1 Max and M1 Ultra Mac Studio models; just look on FB Marketplace or eBay to see how many are for sale right now. Prices have dropped quite a bit, too. You may want to keep yours, or sell it and upgrade to an M4 Max or M3 Ultra Mac Studio while you can still get a decent price for it. I have a feeling the M1 Max and M1 Ultra Mac Studio models will soon become dinosaurs and not command much of a price.
 
Yep, I just checked eBay and it seems that person paid hundreds of USD more for that M1 Max 32/512 GB Mac Studio than what they are going for now just a few months later. They are less than US$900 on eBay now. It should be even less than that on Kijiji or Facebook Marketplace or whatever. At those prices, maybe an M1 Max Mac Studio is a reasonable choice.
Yes, prices are dropping and there are many good deals to be found on M1 Mac Studio models. I just picked up an M1 Max (32-core GPU, 32GB RAM, 1TB SSD) Mac Studio locally (Saint Louis) on Facebook Marketplace for $850, and I also bought an M1 Ultra (64-core GPU, 64GB RAM, 2TB SSD) Mac Studio for $1,900.
 
  • Like
Reactions: jouster and EugW
I hate when companies say up to xxx% faster.
They are completely correct in saying "up to…". The prime example is watching a 12-minute YouTube cat video: the hyper-fast M7-Ultra will still take 12 minutes to do the cat-video task, but it will be 15 times faster at 3D rendering. So it is "up to 15 times faster".

For most things that people do with their Macs, using a faster CPU or GPU will make only a very slight difference. So "up to…" is an accurate way to say it.
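A quick back-of-the-envelope sketch of why the headline number rarely shows up in overall use (all figures below are made up for illustration, assuming a simple split between accelerated and non-accelerated work):

```swift
import Foundation

// Hypothetical workload split: only the rendering portion benefits from the new chip.
let renderShare = 0.2      // assumed fraction of total time spent in 3D rendering
let renderSpeedup = 15.0   // the headline "up to 15x faster" figure
let otherShare = 1.0 - renderShare   // cat videos, email, browsing: same speed as before

// Overall speedup: only the accelerated slice of the workload shrinks.
let overallSpeedup = 1.0 / (otherShare + renderShare / renderSpeedup)
print(String(format: "Overall speedup: %.2fx", overallSpeedup))   // ~1.23x, not 15x
```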
 
  • Like
Reactions: lotones
I just ran Geekbench AI using the Neural Engine on an M3 Ultra and an M4 Max. I can confirm that the Ultra is slower, which would indicate it is more like the 36 TOPS, not 76.

Hey, thanks for confirming that. How do we get MR to correct it? This is the second time they have posted those incorrect numbers, and worse, in a buyer's guide. Anyone relying on that information wouldn't just be disappointed; they would have bought the wrong (for them) system. The M4 Max is currently the fastest Mac for NPU-bound work while being cheaper than the M3 Ultra.
 
  • Wow
Reactions: jouster
Man, I bought a Deco Wi-Fi 7 mesh a year ago and still don't have a single Wi-Fi 7 device 🤣
I purchased a Linksys Wi-Fi 7 router last year. The biggest benefit was replacing 3 nodes with just 1, and it is located at the end of my house.
 
I hate when companies say up to xxx% faster.

It makes me think that they are hiding something.
Perhaps you have a negative view of the framing, but it's reasonable and accurate. I don't think your suspicions are warranted given that the context is annotated. It might be nice to have sustained performance numbers, but marketing generally focuses on peak figures for fairly obvious reasons: humans are obsessed with potential, and that's reflected in how things are marketed.
 
If you own a 5K/5K2K monitor, you won’t be able to use the preferred/“ideal” scaled HiDPI resolution (3840x1620 in the case of a 5K2K monitor) on your fresh new M4 Max Mac Studio.

You probably can on the M3 Ultra Mac Studio, but at double the cost - and being forced to buy 96 GB of RAM at a minimum (with no obvious reason why).

“So buy an M3-based MacBook Pro instead and stop complaining” - sure, let me order up a nano-texture M3 MacBook Pro … oh, wait.

Caveat emptor. — Some Roman, probably
 
I hate when companies say up to xxx% faster.

It makes me think that they are hiding something.

I don't think it is intentional hiding. If you are sensitive about performance, you should wait until products come out and benchmark the tasks that are relevant to you. The only benchmark that makes a real difference is your own task mix.
 
I got an M4 Max with 64GB, 16 cores, and 4TB just after they came out. Also picked up a 27” Studio Display. This is replacing a 6-year-old iMac with an i9, and it's fantastic. It will last me for the next five years at least.

This is working very well. Everything opens in a flash and everything is totally smooth. Serif's Affinity apps were a bit balky on the i9 but are butter smooth on this setup.

I am very happy with this purchase.
 
Have the base M2 Max myself and it still feels A-OK for the things I use it for.

Not into things like music production, video editing, etc., but it's more than enough for my needs.
 
Yes, prices are dropping and there are many good deals to be found on M1 Mac Studio models. I just picked up an M1 Max (32-core GPU, 32GB RAM, 1TB SSD) Mac Studio locally (Saint Louis) on Facebook Marketplace for $850, and I also bought an M1 Ultra (64-core GPU, 64GB RAM, 2TB SSD) Mac Studio for $1,900.
These things are not holding their prices as well as I might have expected. I would have expected the M1 Max 32/512 Mac Studio to be around US$1000 at this point, so yes, $850 is a nice deal. I guess it's a testament to just how quickly the M-series chips have improved in the past several years.

BTW, my friend got around US$1200 for his M1 Max 32/512 just back in February, which as mentioned, I thought was too much.
 
I just ran Geekbench AI using the Neural Engine on an M3 Ultra and an M4 Max. I can confirm that the Ultra is slower, which would indicate it is more like the 36 TOPS, not 76.

I am not sure what you are trying to communicate... it's been well reported that Geekbench 6 does not accurately measure M3 Ultra performance.

 
  • Disagree
Reactions: BelgianChoklit
I would think that a lot of those M3 Ultra Mac Studios with 512 GB of unified memory are going straight into AI roles. Not much point in that much unified memory otherwise.
 
I am not sure what you are trying to communicate... it's been well reported that Geekbench 6 does not accurately measure M3 Ultra performance.
...okay, I will try again and be more direct. The M3 Ultra is ~36 TOPS, not 76 TOPS, if the M4 series is 38 TOPS. I did not say that I used Geekbench 6. I said I used Geekbench AI, specifically targeting the Neural Engine running Core ML, which is what the person was asking about. I just ran it on my base model M4 mini: nearly the same scores as the M4 Max and faster than the M3 Ultra. That is what would be expected if the M3 Ultra is using two sets of M3 neural engines. The article you linked doesn't even talk about the neural engines. I'm not saying the M3 Ultra is not faster in many tasks, because it is, but it is also slower at some things.
 

Attachments: m4max.jpg, m3ultra.png, m4.png
  • Like
Reactions: bzgnyc2
I recall there were various cautions about interpreting FLOPS and TOPS due to different underlying tests (and perhaps integer versus floating point operations). Are we now at a point where I can safely accept the comparisons, such as in this article presenting TOPS, as good approximations for valid comparison?

Thanks in advance!
 
Good comparison table. The latest Mac Studio models are plenty powerful and should handle any kind of work. I personally don't need all the power the Mac Studio offers.
 
  • Like
Reactions: mganu
By the sound of it (no pun intended), the noise issues in rev 2 have mostly been sorted out, which makes it IMO a better buy.
My Mac Mini M4 routinely runs over 100C with the fan manually set to 100%. Is the Mac Studio just as poorly designed?
 
I recall there were various cautions about interpreting FLOPS and TOPS due to different underlying tests (and perhaps integer versus floating point operations). Are we now at a point where I can safely accept the comparisons, such as in this article presenting TOPS, as good approximations for valid comparison?

Thanks in advance!

I would not interpret FLOPS and TOPS as being the same, or compare one system's TOPS with another system's FLOPS.

However, the performance of certain functions/system components is better specified using TOPS and others using FLOPS. Generally, if the function is designed for calculations on floating-point types (typically IEEE 32-bit or 64-bit floats), FLOPS is used. If the function is designed to perform bulk calculations on integer or reduced-precision floating-point types (16-bit, IEEE or non-standard formats), [T]OPS is used. The floating-point performance of a modern CPU is typically reported in GFLOPS (i.e. billions of FLOPS). NPUs are typically reported in TOPS (i.e. trillions of [not general / not-the-traditional floating-point type] operations per second).
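As a rough illustration of why the two figures aren't interchangeable, this is how a peak-throughput number is typically derived; every hardware number below is invented for the sketch, not an Apple spec:

```swift
import Foundation

// Peak throughput ≈ execution units × ops per unit per cycle × clock rate,
// counted for one specific data type. All numbers below are hypothetical.
let clockHz   = 1.5e9      // invented NPU clock rate
let macUnits  = 4096.0     // invented number of INT8 multiply-accumulate units
let opsPerMAC = 2.0        // one multiply + one add counts as two operations

let int8Tops = macUnits * opsPerMAC * clockHz / 1e12
print(String(format: "Hypothetical INT8 peak: %.1f TOPS", int8Tops))   // ≈ 12.3 TOPS

// A CPU's GFLOPS figure is built the same way but counts FP32/FP64 operations on
// different hardware, so comparing it to an INT8 TOPS figure compares different
// operations on different units rather than "more" vs "less" compute.
```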
 
I’m fascinated by the use of NPU TOPS, as if the GPUs on these Macs don't blow the NPUs out of the water. Unless you think an iPhone 16 (35 TOPS) and an M4 Max (38 TOPS) have broadly the same ML performance…

In theory, this is why tech bloggers have a job: testing real-world performance. Which tasks are handled by the NPU and which by the GPU? When is RAM bandwidth the limiting factor in either case? These are genuine questions and I would love it if someone could answer them.

Also: has Apple actually improved the NPU that much in one generation (the M3 was 18 TOPS), or did they drop to INT8 or even INT4 calculations? Are the GPU and NPU capable of running the same precision?
 
My Mac Mini M4 routinely runs over 100C with the fan manually set to 100%. Is the Mac Studio just as poorly designed?
I think the coil whine was more prominent with the Max models, from what I remember, but I've never had one; it's a somewhat common thing when using heat pipes, though. I've owned M1, M2, and M3 Ultra Studios and have never heard it. I frequently hammer them for days at a time doing AI upscaling work, so there would be ample time for it to crop up if it were going to.
 
I’m fascinated by the use of NPU TOPS, as if the GPUs on these Macs don't blow the NPUs out of the water. Unless you think an iPhone 16 (35 TOPS) and an M4 Max (38 TOPS) have broadly the same ML performance…

Also: has Apple actually improved the NPU that much in one generation (the M3 was 18 TOPS), or did they drop to INT8 or even INT4 calculations? Are the GPU and NPU capable of running the same precision?
Right, there is that. The M3 Ultra GPU is clearly faster, so even with its neural engines being slightly slower it is generally the faster option for AI, without even considering the memory differences. There are cases, though, where things do run primarily on the NPU. In those cases an iPhone 16, barring any throttling, would have pretty similar performance to the M4 series. Mac Whisper, for example, uses only the NPU by default. You can force Topaz Video AI to mostly use the NPU by turning on low power mode, and in that case performance should level out between devices that have similar TOPS.
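For anyone wondering how an app ends up NPU-only like that: in Core ML, the developer chooses which compute units a model is allowed to use. A minimal sketch (the model path below is a placeholder, and whether individual layers actually land on the Neural Engine is still up to Core ML):

```swift
import CoreML
import Foundation

// Restrict a Core ML model to the CPU and Neural Engine so that supported layers
// run on the NPU rather than the GPU. (.all would let Core ML schedule the GPU too.)
let config = MLModelConfiguration()
config.computeUnits = .cpuAndNeuralEngine

// Hypothetical compiled-model path, purely for illustration.
let modelURL = URL(fileURLWithPath: "/path/to/SomeModel.mlmodelc")

do {
    let model = try MLModel(contentsOf: modelURL, configuration: config)
    print("Loaded model: \(model.modelDescription)")
} catch {
    print("Failed to load model: \(error)")
}
```

Unsupported layers still fall back to the CPU, which is one reason raw TOPS rarely maps cleanly onto app-level performance.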
 
I'm not saying the M3 Ultra is not faster in many tasks, because it is, but it is also slower at some things.

Agreed. What I am saying is, if Geekbench 6 can't be trusted with the Ultra, why do we think Geekbench AI is valid? You may be right that my benchmark skepticism is too high, and I couldn't agree more that the M4 Max is faster at some things than the M3 Ultra. For me, the M3 Ultra's single-core is fast enough and I don't notice any change in everyday use, but when I need the cores, I definitely notice the speed of the Ultra.
 
Agreed. What I am saying is, if Geekbench 6 can't be trusted with the Ultra, why do we think Geekbench AI is valid? You may be right that my benchmark skepticism is too high, and I couldn't agree more that the M4 Max is faster at some things than the M3 Ultra. For me, the M3 Ultra's single-core is fast enough and I don't notice any change in everyday use, but when I need the cores, I definitely notice the speed of the Ultra.
I read on another forum that while Geekbench cannot max out the M3 Ultra's CPU, it's actually more reflective of most real-world CPU-dependent workloads than benchmarks like Cinebench. Most real-world third-party software cannot max out the M3 Ultra's CPU either, so there is no desire to design Geekbench in a way that consistently maxes out all CPU cores; according to some, that would actually make it much less useful as a CPU benchmark. Also, you have the option of looking at the specific subtests in the bench; some are more parallelized than others.

Not sure about AI though.
 
  • Like
Reactions: G5isAlive
Agreed. What I am saying is, if Geekbench 6 can't be trusted with the Ultra, why do we think Geekbench AI is valid? You may be right that my benchmark skepticism is too high, and I couldn't agree more that the M4 Max is faster at some things than the M3 Ultra. For me, the M3 Ultra's single-core is fast enough and I don't notice any change in everyday use, but when I need the cores, I definitely notice the speed of the Ultra.
I get your skepticism. I'm not an M3 Ultra hater; like I said, I have one. I just have a base model right now, but depending on WWDC, I may get a beefier one with more RAM. Anyway, in this case, I think the math from the benchmark indicates it's using them, or at least using all of the cores pretty equally between the different machines.

I'm saying this because, as far as we know, the M3 Ultra is using M3 NPUs, and the M3 series with its 16-core Neural Engine was said to be 18 TOPS, so 32 of those cores would be 36 TOPS with 100% scaling. We know Apple has said the M4 NPU is 38 TOPS, which is more than 36 TOPS. If the M3 Ultra were using M4 NPUs, it should at the very least have matched the benchmark performance of the base M4/M4 Max, which it did not. It also was not less than half of that performance, which it should have been if it were only using one bank of NPU cores (16 cores), so that indicates it is using more than 16 of them.

Just knowing that it is using M3 NPU cores is really all you need to know to say the M3 Ultra is not 76 TOPS, because that would be equivalent to 32 M4 NPU cores. Doubling the M4 NPU number instead of the M3's is likely where the mistake in the math was made in the original overview.
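Laying that arithmetic out explicitly (the per-generation Neural Engine figures are the published ones referenced above; perfect scaling across both dies is an assumption):

```swift
// Published per-die Neural Engine figures referenced above.
let m3NpuTops = 18.0   // 16-core M3-generation Neural Engine
let m4NpuTops = 38.0   // 16-core M4-generation Neural Engine

// An Ultra is two dies, so assume perfect 2x scaling of whichever NPU it carries.
let ultraWithM3Engines = 2 * m3NpuTops   // 36 TOPS: consistent with the benchmark results
let ultraWithM4Engines = 2 * m4NpuTops   // 76 TOPS: the figure the article quoted

print("Two M3 Neural Engines: \(ultraWithM3Engines) TOPS")
print("Two M4 Neural Engines: \(ultraWithM4Engines) TOPS")
```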
 