Forgive me if this has been discussed before. Does anyone know how the Mx SoCs handle things if one or more parts on the chip fail? For example, let's say that two of the GPU cores fail, and maybe a bank of RAM. Does the SoC dynamically switch off those portions so the rest functions as normal, or is the whole chip toast at that point?
I think there's a contingency in place as no one on MR has reported a catastrophic failure like that as of yet.
 
My ‘21 16” base MBP handles my simple YT needs “okay.” I am getting tired of the 45min export times, though.
 
I have an M2 Ultra Studio with 128GB and a 2TB SSD arriving this week. But I’m going from a late 2019 16” MBP to the Studio, so the performance bump should be insane.
Wroooooooom! Waiting as long as possible to update is the way to go. 🔥🔥🔥
 
Mac Pro with M2 Ultra is very sad.

Workstations are meant to squeeze out as much time as possible, and having a quiet workstation is mostly irrelevant. Sure, you might save $50 a year in electricity, as if any workstation user cares about that when they're spending $7,000+ on the machine...

I'm working with an M1 Ultra, and in the real world it's not always twice as fast as an M1 Max, not even with the "optimized" Apple apps.

I bet x86 will remain king for a while in many jobs and for many people (more than ever).

Of course Mac laptops are insane, but performance per watt isn't enough to win over the general public, and PCs are pulling some tricks to get close. The CPU isn't the only battery eater, and 18 hours vs. 15 hours isn't enough to make people switch OS. Performance is quite a bit better on M2, but who cares when you know nothing about chips, and Excel performance is still better on Windows no matter the chip.

A typical Windows user is not going to switch to the Mac just to gain some extra battery life. The hassle of learning a new OS is a big leap for regular people.

Apple needs a good ARM competitor to make Windows push its own transition harder and force the market toward better tech and faster ARM advances.

And then there are the GPU limitations...

They are too comfortable holding the "kings of performance per watt" title, with few real alternatives. And the x86 ecosystem is still the Hulk of computing.
 
M2 Ultra is a very good CPU and GPU combo, but it's designed to fit a Mac Studio, not a Mac Pro. It's very impressive in the former machine and very meh in the latter.
 
Mac Pro with M2 Ultra is very sad.
...
They are too comfortable holding the "kings of performance per watt" title, with few real alternatives. And the x86 ecosystem is still the Hulk of computing.
This feels like any doomsaying about the Mac since the late 80s.

It is possible enough people keep buying Macs... just because they like them :)
 
This feels like any doomsaying about the Mac since the late 80s.

It is possible enough people keep buying Macs... just because they like them :)

Well, I do think it's fair to say that the current Mac Pro is going to cede ground for some workflows. High-end graphics? No option on the Mac. High amounts of RAM? No option on the Mac.
 
Well, I do think it's fair to say that the current Mac Pro is going to cede ground for some workflows. High-end graphics? No option on the Mac. High amounts of RAM? No option on the Mac.
No doubt, Apple is ceding some of the "pro" market. Hopefully some future M* Quadra will claw a little of it back.
 
20% is not enough of a reason for me (as a casual user) to upgrade my M1 Mac Studio Ultra to an M2 Mac Studio Ultra, but I can see why businesses might well consider it if they can complete tasks 20% faster. Time is money, and it won't take long to recoup the cost of upgrading.
Time is money, but that assumes the 20% performance increase translates linearly into your workflow. You might not see a 20% reduction in time.
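As a rough illustration (the job length and the 80% figure below are hypothetical, just to show the arithmetic): even a perfectly scaling 20% speedup only shaves about 17% off the wall-clock time, and any part of the job that doesn't benefit shrinks the saving further.

```python
# Hypothetical numbers to illustrate speedup vs. time saved (not benchmark data).
baseline_hours = 10.0       # assumed job length on the old machine
speedup = 1.20              # the quoted 20% performance increase
accelerated_fraction = 0.8  # assume only 80% of the job actually benefits (I/O etc. does not)

# Best case: the whole job scales with the speedup.
best_case = baseline_hours / speedup

# More realistic: only part of the job scales (Amdahl-style accounting).
realistic = baseline_hours * ((1 - accelerated_fraction) + accelerated_fraction / speedup)

print(f"best case: {best_case:.2f} h ({1 - best_case / baseline_hours:.1%} time saved)")
print(f"realistic: {realistic:.2f} h ({1 - realistic / baseline_hours:.1%} time saved)")
# best case ~8.33 h (~16.7% saved); realistic ~8.67 h (~13.3% saved)
```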
 
I just waded through all >250 comments (don't ask why, probably brain damage). I am amazed that nobody has pointed out the one really interesting thing coming out of these benchmark numbers.

I mean, we all get that M2 Ultra CPU single- and multi-core have improved on the order of 15-20%. Totally expected, a continuation of the M2 story: Apple got stuck on the N5/N4 process, not only losing the advantage of a new process node, but also having to shelve their new designs, which couldn't be backported. As an almost-last-minute fallback position, the M2 was a good save but not the chip anyone wanted.

...but wait.

The M2 Ultra's "compute" numbers (i.e., GPU) are NOT as expected. I'm not comparing to Nvidia or AMD - I've no intention of wading into that morass. I'm just talking about the change from the M1 Ultra, or from the M2 Max.

Here are the M1 Max/Ultra and M2 Max/Ultra numbers. Note that these are not perfectly reliable, as Geekbench numbers tend to vary a lot between reporters, probably because a lot of people have no clue how to run benchmarks. And since Geekbench doesn't provide an average for this number (as opposed to single/multi-core), I'm just eyeballing the highest substantial cluster within recent results.

M1 Max: ~119k
M1 Ultra: ~175k (very few outliers above 180k)
M2 Max: ~138k
M2 Ultra: ~223k

So... The M1 Ultra, as we already knew, scaled very poorly. ~47% higher score with 100% more GPU cores.

The M2 Max GPU score is roughly 15% higher than the M1 Max. But what about the M2 Ultra? It's 27% higher than the M1 Ultra and 61% higher than the M2 Max. That's a substantial improvement in scaling. It's *still* pretty poor, but notably better. (We also have very few scores for the Ultra so far; it's possible the numbers will change as more results come in.)

This is one of very few things in the M2 line that are outside the envelope of the 15-20% bump that comes mostly from clocks.
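For anyone who wants to double-check that arithmetic, here's a quick sanity check of the eyeballed clusters listed above (rough numbers in, rough numbers out):

```python
# Approximate Geekbench Compute clusters from the post above (eyeballed, not official averages).
scores = {"M1 Max": 119_000, "M1 Ultra": 175_000, "M2 Max": 138_000, "M2 Ultra": 223_000}

def gain(new: str, old: str) -> float:
    """Percentage gain of one score over another."""
    return (scores[new] / scores[old] - 1) * 100

# Ultra vs. Max shows how well the doubled-up GPU scales within each generation.
print(f"M1 Ultra vs M1 Max:   +{gain('M1 Ultra', 'M1 Max'):.0f}% (for +100% GPU cores)")
print(f"M2 Ultra vs M2 Max:   +{gain('M2 Ultra', 'M2 Max'):.0f}% (for +100% GPU cores)")

# Generation-over-generation comparisons.
print(f"M2 Max vs M1 Max:     +{gain('M2 Max', 'M1 Max'):.0f}%")
print(f"M2 Ultra vs M1 Ultra: +{gain('M2 Ultra', 'M1 Ultra'):.0f}%")
# Prints roughly +47%, +62%, +16%, +27% - same story as the eyeballed figures above, within rounding.
```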

[Edit: added a comparison M1/M2 Ultra, accidentally left that out]

Yes.
Apple has a set of patents relating to scalability of the SoC generally. My guess is these have not yet been implemented.

There is also a second set of patents specifically related to GPU scalability and how work is distributed across multiple GPU cores. There are various pieces to this including
- affinity (try to schedule work on cores that were doing related work earlier, so you hopefully get some cache reuse)
- work stealing (essentially, the old way was to send work items to a queue on each core, so even if one core finishes early, the work sitting in another core's queue is stuck until that core gets to it; the new way uses "virtual" queues, and if one core finishes early, work can be moved from a busier core's queue to the idle core)

My guess is at least some of these GPU scalability ideas have been implemented. As you say, things are still not perfect (I suspect the ideas were implemented in a hurry, without time for full simulation and optimization, but better than nothing.)
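As a loose illustration of the difference between fixed per-core queues and work stealing (a toy CPU-side sketch, not Apple's actual GPU scheduler; the queue lengths are made up):

```python
from collections import deque

def run(queues, steal=False):
    """Toy scheduler: each pass, every core executes one item from its own queue.
    With steal=True, an idle core pulls an item from the busiest core's queue instead."""
    steps = 0
    while any(queues):
        for q in queues:
            if q:
                q.popleft()                    # core executes one of its own items
            elif steal:
                victim = max(queues, key=len)  # busiest remaining queue
                if victim:
                    q.append(victim.pop())     # steal one item...
                    q.popleft()                # ...and execute it this pass
        steps += 1
    return steps

# Core 0 got a lopsided share of the work (12 items vs. 2 each for the others).
static   = [deque(range(12)), deque(range(2)), deque(range(2)), deque(range(2))]
stealing = [deque(range(12)), deque(range(2)), deque(range(2)), deque(range(2))]

print("fixed per-core queues:", run(static), "passes")                # 12 - bounded by the busiest core
print("work stealing:        ", run(stealing, steal=True), "passes")  # 5 - idle cores soak up the backlog
```

The affinity piece would bias which core gets an item in the first place, which this sketch ignores.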
 
Yes.
Apple has a set of patents relating to scalability of the SoC generally. My guess is these have not yet been implemented.
Right, I wasn't even aware of many of those until I read your research on GitHub. Thanks for that, BTW - very enlightening!

There is also a second set of patents specifically related to GPU scalability and how work is distributed across multiple GPU cores. There are various pieces to this including
- affinity (try to schedule work on cores that were doing related work earlier, so you hopefully get some cache reuse)
- work stealing (essentially, the old way was to send work items to a queue on each core, so even if one core finishes early, the work sitting in another core's queue is stuck until that core gets to it; the new way uses "virtual" queues, and if one core finishes early, work can be moved from a busier core's queue to the idle core)

My guess is at least some of these GPU scalability ideas have been implemented. As you say, things are still not perfect (I suspect the ideas were implemented in a hurry, without time for full simulation and optimization, but better than nothing.)
My take was a little different. If you look at GPU perf from M1 to M2, it seems barely different isoclock. I think the drastic falloff in performance scaling on the Ultra is due to implementation issues with the UltraFusion interconnect - all the speed in the world (and 20 Tbps is a good first approximation of that!) won't help you if (to pull a random example out of my ass) you have to frequently resend packets on the NoC because you miscalculated propagation delays on your worst-case transit across the interconnect. If that's the case, then the GPU improvement on the M2 Ultra is more likely due to fixing the easiest of the glitches in the interconnect.

On the bright side they've had a lot more time now to simulate and optimize :) so hopefully the M3 Ultra (and larger?) will kill it. I'm sorta clinging to the hope that they really got shafted by the process delay and that the misfires of the M2 generation (most obviously in the pathetic Pro workstation) will all be remedied with the M3, and are not a true indication that Apple has decided to write off the high end of the market.

Of course I've been wrong about the M3 already - I thought they'd at least announce it for the Pro by WWDC. :-(
 
That's exactly my point. There is not that much difference YOY in CPU/GPU speed or productivity increases. The delta starts at about the 5-year mark, where an older system will be "noticeably" slower. Even then, you would probably have to time it or have it side by side to notice.
That's been true up until recently for Intel chips, which were generally getting single-digit percentage bumps each generation. It hasn't been true for Apple's A-series chips, with many (though definitely not all, or even most) generations seeing 20%+ bumps. This is obscured largely because performance hasn't been an issue for most iPhone users for many years. Of course, with the M series we only have two generations so far, and the M2 is known to be highly unrepresentative because it's not the "real" M2 design, due to last-minute pullbacks caused by the delayed N3 process. (Notably, they still got 15-20%, mostly from clocks.)

Where you notice performance changes is entirely dependent on your workload. If you're using a word processor, chances are your 2012 MBP will work as well as your 2023 MBP for nearly everything you do. If you're importing, converting, and exporting media all day long, you're going to notice that single-generation delta immediately because you'll have an extra hour+ free every single day. Trust me, that's something you notice. Most real workloads fall somewhere in between, and I'd avoid generalizing about that.
 
That's been true up until recently for Intel chips, which were generally getting single-digit percentage bumps each generation. It hasn't been true for Apple's A-series chips, with many (though definitely not all, or even most) generations seeing 20%+ bumps. This is obscured largely because performance hasn't been an issue for most iPhone users for many years. Of course, with the M series we only have two generations so far, and the M2 is known to be highly unrepresentative because it's not the "real" M2 design, due to last-minute pullbacks caused by the delayed N3 process. (Notably, they still got 15-20%, mostly from clocks.)

Where you notice performance changes is entirely dependent on your workload. If you're using a word processor, chances are your 2012 MBP will work as well as your 2023 MBP for nearly everything you do. If you're importing, converting, and exporting media all day long, you're going to notice that single-generation delta immediately because you'll have an extra hour+ free every single day. Trust me, that's something you notice. Most real workloads fall somewhere in between, and I'd avoid generalizing about that.
My workload is photo/video/graphic creation. So I do push my hardware a bit.
 
My sense after plowing through all these comments is that the M2-level SoCs are not something to wet your pants over. Apple has an opportunity to make big technical and market-share advances with the M3, and I hope it does. The good news is that my M1 mini will stay useful longer, because the state of the M2s and legacy compatibility demands mean the M1s won't age out so quickly.
 
My sense after plowing through all these comments is that the M2-level SoCs are not something to wet your pants over. Apple has an opportunity to make big technical and market-share advances with the M3, and I hope it does. The good news is that my M1 mini will stay useful longer, because the state of the M2s and legacy compatibility demands mean the M1s won't age out so quickly.
I hope you are right on that one about the M1 not aging out so quickly. Apple needs to slow down on kicking perfectly good machines to the curb.
 
I hope you are right on that one about the M1 not aging out so quickly. Apple needs to slow down on kicking perfectly good machines to the curb.
Apple is pruning machines quickly right now because it wants to stop shipping Intel builds of macOS and get its customers onto ARM. The M1 will have a perfectly reasonable lifespan because it is still part of "the future."
 
I completely agree. The Mac Pro doesn’t seem well thought out. No PCIe 5.0. No upgradable RAM. No PCIe GPUs. No standard NVMe slots. I doubt the M2 Ultra will run at any higher clock speeds in the Mac Pro than in the Mac Studio.
Yep, absolutely not thought out. Seems like they are reverting to the idea of the trash can Mac Pro and not thinking of the actual pros. If they were serious about it, they would tick off every box.
I would also bet the same on the clock speed; it's a mobile chip that was designed for the MacBook Pro and just doubled up.
 
Yep, absolutely not thought out. Seems like they are reverting to the idea of the trash can Mac Pro and not thinking of the actual pros. If they were serious about it, they would tick off every box.
I would also bet the same on the clock speed; it's a mobile chip that was designed for the MacBook Pro and just doubled up.
Agreed. When it comes to the Mac Pro, give up on proprietary crap like RAM and SSDs, and give the pros upgradability. I watched a YouTube video this morning about an ARM workstation; it has RAM/SSD/video card and expansion slots. It's a BEAST. It was the fastest computer they have run on any benchmark.

Apple needs to follow this lead for their Pro series systems, both laptops and workstations. Upgradability. Keep the Air/mini as is with their non-upgradable money-making design, but give pros what they need.
 