Unpopular opinion.

M2 Pro/Max - 5nm enhanced (N5P)
M2 Ultra - will never exist
M3 - 3nm
M3 Pro/Max/Ultra - 3nm
M4 - 3nm “enhanced”
M4 Pro/Max - 3nm “enhanced”
M4 Ultra - TBD
M5 - 2nm
M5 Pro/Max - 2nm

And……go!
Sounds reasonable to me. And how might a new Mac Pro be configured?
 
Sounds reasonable to me. And how might a new Mac Pro be configured?

Still seems to be two Ultras connected via UltraFusion based on both the prototype M1-based SoC that Gurman said Apple tested and the leaks/rumors from other folks about CPU and GPU core counts for an M2 version.

EDIT - That Apple didn't release said dual M1 Ultra Mac Pro could be a sign they felt it was not the correct path and are working on something else.
 
Sounds reasonable to me. And how might a new Mac Pro be configured?
That's the million-dollar question. I puzzle it out in my mind and still have trouble believing they're simply going to bolt two Ultras together and that that will satisfy anyone, which is why I think an M2 Ultra gets skipped so Apple can concentrate on filling out the 3nm portfolio. I know that flies in the face of conventional thinking and the rumors, so I present it here as grist for the mill. I'll take my lumps if I'm wrong.
 
That's the million-dollar question. I puzzle it out in my mind and still have trouble believing they're simply going to bolt two Ultras together and that that will satisfy anyone, which is why I think an M2 Ultra gets skipped so Apple can concentrate on filling out the 3nm portfolio. I know that flies in the face of conventional thinking and the rumors, so I present it here as grist for the mill. I'll take my lumps if I'm wrong.

That (per Gurman) Apple did not end up releasing the "dual M1 Ultra" Mac Pro after developing/testing it could be a sign that they were not happy with the performance of such a configuration (or the defect rate was too high).
 
That (per Gurman) Apple did not end up releasing the "dual M1 Ultra" Mac Pro after developing/testing it could be a sign that they were not happy with the performance of such a configuration (or the defect rate was too high).
And I'll pretend I read that somewhere before (I probably did) instead of just now in your post. I think Apple knows 5nm has useful life left and that they can get a little more out of it before worrying about the next leap. If 5nm enhanced, or whatever it ends up being called, proves feasible for an Ultra, they will build one. But the Ultra is such a cutting-edge part that if it's the foundation for both the Mac Studio and the Mac Pro, the benefits of skipping N5P and building exclusively on 3nm are too important to waste time on the N5P bump.

There's still a lot of experimentation going on with the ASi SoCs that we don't see, and Apple is being incredibly conservative, because any big screw-up is going to hurt them dearly. A big enough one could wreck the whole shebang. That sounds over the top, but they're playing high-stakes poker right now, even if the "gimme more, faster, faster" crowd here ignores the risks and pressures Apple faces.

The M1 SoC could be with us another 2-3 years and still be a viable path for a lot of users; I see the old-design MBA being the 2012 13" MBP of the ASi era. Or the iPad 2 of the ASi era. Anyways, my 2 cents.
 
And I'll pretend I read that somewhere before (I probably did) instead of just now in your post. I think Apple knows 5nm has useful life left and that they can get a little more out of it before worrying about the next leap. If 5nm enhanced, or whatever it ends up being called, proves feasible for an Ultra, they will build one. But the Ultra is such a cutting-edge part that if it's the foundation for both the Mac Studio and the Mac Pro, the benefits of skipping N5P and building exclusively on 3nm are too important to waste time on the N5P bump.

There's still a lot of experimentation going on with the ASi SoCs that we don't see, and Apple is being incredibly conservative, because any big screw-up is going to hurt them dearly. A big enough one could wreck the whole shebang. That sounds over the top, but they're playing high-stakes poker right now, even if the "gimme more, faster, faster" crowd here ignores the risks and pressures Apple faces.

The M1 SoC could be with us another 2-3 years and still be a viable path for a lot of users; I see the old-design MBA being the 2012 13" MBP of the ASi era. Or the iPad 2 of the ASi era. Anyways, my 2 cents.
I come to a similar conclusion - or perhaps no conclusion at all - about what Apple might do with the Mac Pro. The Studio is a sweet new box that IMO pushes the Mac Pro up and the Mac mini down on the desktop product performance list. So if the Mac Pro is "up", it really needs more than just a double-Ultra or something along those lines.

The only ideal solution that comes to mind would have to be new and on the 3nm process, yet folks say a new process never starts with the top part, and 3nm is way new (TSMC will not deliver it to Apple until 2023). Plus, of course, poor initial yields mean 3nm chips will be hella expensive for a while. If we see a Mac Pro in 2022, my guess is it would use 5nm (good yields) chips in some configuration we have not seen before, not 3nm.

So personally I have no clue. Perhaps on 9/7 Tim Cook will give us better guidance than again telling us that a new Mac Pro is coming someday; we will see. It has been so long now that Apple must deliver a great product, and if that means 6 more months of delay, it is what it is.
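The yield worry above can be made concrete with the classic Poisson die-yield model. The defect densities and die area below are purely illustrative guesses (real numbers are not public), but they show why a big die on an immature node is disproportionately expensive:

```python
import math

def die_yield(defect_density_per_cm2: float, die_area_cm2: float) -> float:
    """Poisson yield model: fraction of dies with zero fatal defects."""
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

# Illustrative numbers only (real defect densities are TSMC-confidential):
mature_5nm = 0.10   # defects/cm^2 on a mature process
early_3nm = 0.60    # defects/cm^2 on an early-ramp process
die_area = 4.3      # cm^2, roughly an M1 Max-class die (~430 mm^2)

y5 = die_yield(mature_5nm, die_area)
y3 = die_yield(early_3nm, die_area)
print(f"mature 5nm yield: {y5:.0%}")   # ~65%
print(f"early 3nm yield:  {y3:.0%}")   # ~8%
print(f"cost multiplier per good die: {y5 / y3:.1f}x")
```

The exponential in the model is the key point: a rise in defect density hits a ~430 mm² workstation die far harder than a small phone-class die, which is why new nodes usually debut on small chips.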
 
Still seems to be two Ultras connected via UltraFusion based on both the prototype M1-based SoC that Gurman said Apple tested and the leaks/rumors from other folks about CPU and GPU core counts for an M2 version.

EDIT - That Apple didn't release said dual M1 Ultra Mac Pro could be a sign they felt it was not the correct path and are working on something else.

Either that was always for a future generation, or it was indeed an M1 Quad. But in the latter scenario, they must have cancelled that long, long before shipping the M1 Max, because it lacks the appropriate connector.
 
Either that was always for a future generation, or it was indeed an M1 Quad. But in the latter scenario, they must have cancelled that long, long before shipping the M1 Max, because it lacks the appropriate connector.

There was an Apple leaker who leaked a visual of the UltraFusion connector in both "landscape" (as used in the M1 Ultra) and "portrait" mode, which could bind two M1 Max along their longitudinal axis (so side-to-side as opposed to the top-to-bottom of the M1 Ultra).

Gurman also was correct on the CPU and GPU core counts for the M1, M1 Pro, M1 Max and M1 Ultra, so it stands to reason his core counts for the Mac Pro SoC (equal to 4x M1 Max or 2x M1 Ultra) were also correct. That strongly implies the Mac Pro SoC as originally envisioned would have been four M1 Max bound together, and that should have been possible if Apple used UltraFusion on both the longitudinal and latitudinal axes.

Now, the M1 Max as released only has an UltraFusion connector on the latitudinal axis (for use when paired into an M1 Ultra), so we can only speculate that Apple's plan was to make a separate model of M1 Max that also had an UltraFusion connector on the longitudinal axis (likely mirrored in a left-right pair to complement the top-bottom pair of an M1 Max). It's possible the test yields for such an SoC were poor due to its complicated nature, and TSMC was not confident they could significantly improve them.

It is also possible that such a configuration was "mass producible", but there was some type of issue when scaled to that extreme number of CPU and GPU cores that the performance delivered was not justifying the projected BTO price (considering Apple wants $2400 for an M1 Ultra, a $4999 minimum price for an "M1 Extreme" seems likely).
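For reference, the rumored configurations above follow directly from the known M1 Max core counts (8P+2E CPU, 32-core GPU). A quick sketch, with the "Extreme" being the hypothetical 4-die part described in the rumors:

```python
# Known M1 Max configuration; the Ultra and the rumored "Extreme"
# are simple multiples of it joined via UltraFusion.
M1_MAX = {"p_cores": 8, "e_cores": 2, "gpu_cores": 32}

def scaled(n_dies: int) -> dict:
    """Core counts for a hypothetical n_dies-way UltraFusion package."""
    return {k: v * n_dies for k, v in M1_MAX.items()}

for name, dies in [("M1 Max", 1), ("M1 Ultra", 2), ("M1 Extreme (rumored)", 4)]:
    c = scaled(dies)
    cpu = c["p_cores"] + c["e_cores"]
    print(f"{name}: {cpu}-core CPU ({c['p_cores']}P+{c['e_cores']}E), "
          f"{c['gpu_cores']}-core GPU")
```

The 2-die case reproduces the shipping M1 Ultra (20-core CPU, 64-core GPU), and the 4-die case matches the 40-CPU/128-GPU figures attributed to Gurman above.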
 
If those two added cores are presumed to be p-cores (are they? I can't find much info on that), that's a 10+2 instead of 8+2 setup. If we assume the same clock rate bump, that would indeed be a 38% improvement.

Of course, as always with additional CPU cores, there are diminishing returns.



Sure, GPU is a different matter. So is e-core speed.
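A rough sketch of the arithmetic above: the naive ~38% figure is simply (12 cores / 10 cores) times an assumed ~15% clock bump, and Amdahl's law shows why the real gain from the extra cores is smaller. The 90% parallel fraction below is a made-up example:

```python
def amdahl_speedup(parallel_fraction: float, n_cores: int) -> float:
    """Amdahl's law: speedup vs. one core for a partly parallel workload."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

clock_bump = 1.15                    # assumed ~15% clock increase
naive = (12 / 10) * clock_bump       # 10-core -> 12-core, plus clock
print(f"naive scaling: +{naive - 1:.0%}")   # ~ +38%

# With a hypothetical 90%-parallel workload, the extra cores help less:
p = 0.90
real = amdahl_speedup(p, 12) / amdahl_speedup(p, 10) * clock_bump
print(f"Amdahl-adjusted: +{real - 1:.0%}")
```

Even at 90% parallelism the core-count contribution shrinks from +20% to under +10%, which is the diminishing-returns point made above.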
I think I saw that it's unclear whether those +2 cores are E-cores or some new "middle ground" cores (in the 3nm generation, I think). Having 4 E-cores might not be much of a jump in full-load performance, but it might be more beneficial for battery life - there's a MaxTech video about Low Power Mode on the M2 Air with 4 of those better cores, and it's a great option.
 
I am really interested in how AMD's current Ryzen 6XXX chips and the upcoming 7XXX (later this year) stack up against the M2. Apple marketing and consumers parrot the idea that Apple Silicon is years ahead of the competition in performance per watt, but marketing talk and anecdotal evidence aren't valid evidence. We need to see numbers on paper.

I have read some pro-Apple reviews and they say Apple Silicon is 5 years ahead of the competition and x86 is dead. I have read neutral reviews and they say the M2 is ahead in performance per watt but it's not as big as we thought. AMD's recent chips are closer to the M2 in power consumption than people think. I have also read pro-Intel and AMD reviews and they say the M2 isn't ahead at all.

How can we take an objective, comprehensive look at the M2? Is it really ahead of the competition? If yes, how do we measure it? We can't use the popular benchmarks either, because we get inconclusive data.
Find reviews that compare the apps/workflows you use, consider how that relates to the form factor you want, and whether you need it more on battery or plugged in.
 
There was an Apple leaker who leaked a visual of the UltraFusion connector in both "landscape" (as used in the M1 Ultra) and "portrait" mode, which could bind two M1 Max along their longitudinal axis (so side-to-side as opposed to the top-to-bottom of the M1 Ultra).

Gurman also was correct on the CPU and GPU core counts for the M1, M1 Pro, M1 Max and M1 Ultra, so it stands to reason his core counts for the Mac Pro SoC (equal to 4x M1 Max or 2x M1 Ultra) were also correct. That strongly implies the Mac Pro SoC as originally envisioned would have been four M1 Max bound together, and that should have been possible if Apple used UltraFusion on both the longitudinal and latitudinal axes.

Now, the M1 Max as released only has an UltraFusion connector on the latitudinal axis (for use when paired into an M1 Ultra), so we can only speculate that Apple's plan was to make a separate model of M1 Max that also had an UltraFusion connector on the longitudinal axis (likely mirrored in a left-right pair to complement the top-bottom pair of an M1 Max). It's possible the test yields for such an SoC were poor due to its complicated nature, and TSMC was not confident they could significantly improve them.

It is also possible that such a configuration was "mass producible", but there was some type of issue when scaled to that extreme number of CPU and GPU cores that the performance delivered was not justifying the projected BTO price (considering Apple wants $2400 for an M1 Ultra, a $4999 minimum price for an "M1 Extreme" seems likely).
I still have to think that Apple did not get the performance lift they expected out of binding two Ultras together once they started testing, figured out that macOS needed some extensive plumbing changes to extract the performance from the SoC, and had no time to do that with Monterey, so they scrapped the whole thing over combined hardware and software issues.
 
More like chicken, egg and a rooster ;) First there have to be cameras equipped with UHS-III slots that can (and want to) write that fast to SD. But in the meantime, CFexpress cards took off, taking over the higher-end market.
Yup. Far faster than SD, XQD (same form factor) was in use in my Nikon 5 years ago and evolved into CFexpress in current pro Nikons. The XQD/CFexpress form factor is IMO far superior to SD, in addition to being faster. But slow SD is in all kinds of consumer products, so it is not going away.
 
There was an Apple leaker who leaked a visual of the UltraFusion connector in both "landscape" (as used in the M1 Ultra) and "portrait" mode, which could bind two M1 Max along their longitudinal axis (so side-to-side as opposed to the top-to-bottom of the M1 Ultra).

That's what I mean. The M1 Max as shipped only has one connector. So long before it shipped, they must have shelved the idea.
 
You seem to be mixing together node size and microarchitecture.
Yes, because it makes sense to do so. I am very well aware of the differences, and I don't see from your arguments below where I was wrong.
Assuming Apple keeps to the same naming convention it has been using with the A-series chips, M1/M2/M3 should refer to the microarchitecture (M1 is A14-based = Firestorm/Icestorm cores; M2 is A15-based = Avalanche/Blizzard cores; etc.), not the process.
That doesn't really change the fact that the M3 is likely to be 3nm-based.
It so happens that they also changed the process between M1 and M2, going from N5 to N5P, but you can have a process change with or without an architecture change, and an architecture change with or without a process change.
More often than not, you can't do a process change without doing an architecture change. But I'm not sure why you think this is relevant to my post.
 
Yes, because it makes sense to do so. I am very well aware of the differences, and I don't see from your arguments below where I was wrong.

That doesn't really change the fact that the M3 is likely to be 3nm-based.

More often than not, you can't do a process change without doing an architecture change. But I'm not sure why you think this is relevant to my post.
Sure--here was my thinking:

You were replying to a poster who was wondering what the advantage was in going from 5 nm to 3 nm. You wrote:

"Smaller electronics in the chips means they use less power and thus produce less heat, or alternatively, Apple can dial up the speed more, when switching from 5nm chips to 3nm chips. It is not going to make them 200% faster, but the jump from M2 to M3 is thus likely significantly larger than the one from M1 to M2."

So in the first sentence you explained, correctly, the primary advantage in going from 5 nm to 3 nm. Then in the 2nd sentence, where you tried to give a sense of the magnitude of the advantage, instead of writing "the jump from 5 nm to 3 nm", you wrote the "the jump from M2 to M3", which gives your audience the misimpression that 5 nm = M2 and 3 nm = M3, when in fact they're two qualitatively different designations.

That is why I said you seemed to be conflating the two.

I would have instead written something like this:

"[Your first sentence, then...] According to TSMC, if you use all of that process improvement for speed, everything else being equal, a chip will have ~10–15% better performance on 3 nm than 5 nm.

But: That's not the only generational improvement you can make. Separately, you can also improve the chip's performance by improving its "microarchitecture", i.e., its design. That's what the M1/M2/M3 designations mean.

Thus there are three possibilities for the next-gen MBPs:

1) Stay with both the same process (5 nm) and the same microarchitecture (M2) as on the current Air and 13" MBP (that wouldn't, of course, stop them from improving the coprocessors, like the neural engine, etc.). That's what we'd expect to see with a fall release.

2) Improve the process to 3 nm, but keep the microarchitecture the same (=> a 3 nm M2).

3) Improve the process to 3 nm, and also upgrade the microarchitecture to M3 (=> a 3 nm M3). Then you'd get the improvement from the microarchitecture on top of the improvement from the process."

And then I might close by saying something like:

"You may recall hearing of Intel's "tick-tock" production model. In it, they alternated a process improvement, i.e., a die shrink ('tick') with a microarchitecture improvement ('tock'). This is not Apple's model, but it nevertheless illustatrates how these are two qualitatively different things." [They don't do tick-tock anymore; now, according to Wikipedia, it's more like tick-tock-optimization".]
 
Thanks for correcting me (seriously). Yes, what upsets me is that the SD port and HDMI robbed us of a fourth TB4 port.

I suspect that the ProMotion screen robbed you of the fourth TB4 port. See below.

Whatever - bear in mind that every full-fat TB4 socket consumes 4 PCIe lanes' worth of bandwidth, needs at least a share of two DisplayPort streams, up to 15W of power & the plumbing to enable charging, and a "Thunderbolt re-timer"/port driver chip. It's not cheap - either in money or CPU resources - to implement, and you can provide several single-purpose ports on a fraction of those resources. Also, the old 4-port Macs only had two TB controllers vs. three on the new MBPs - so you've still got more total bandwidth, plus the new ability to have hubs with extra downstream TB4 ports.

I'm assuming Apple had enough on-chip bandwidth for another TB4 port, but not being an engineer, it's possible they didn't.

The Mac Studio with M1 Max uses the same SoC as the Max versions of the MBP, has 4 TB4 ports and SD, and HDMI and 10Gb Ethernet and extra 5/10Gbps USB ports. The MBP lacks Ethernet and USB so it looks like there's plenty of spare bandwidth for the 'extras' needed in a laptop.

The only advertised advantages of M1 Max over the M1 Pro are GPU cores, media processors and RAM bandwidth so there's no particular reason to think that the Pro has fewer TB4 ports. You can pretty much pick out 4 "blocks" in the images of both the Pro and Max that could be 4 TB ports. So it does look like there's a TB4 port missing in action on the MacBook Pros.

Both the Mini and the Studio appear to get a "free" HDMI display output, on top of what is available via the Thunderbolt ports, which suggests that the SoCs have a hard-wired "internal display" output, which is freed up on a headless machine. However, even on the Studio these max out at HDMI 2.0 4k@60Hz - maybe Apple cut corners by not implementing HDMI 2.1 on the Mini but it's a bit surprising that they'd do the same on the Studio unless this was a fundamental limitation.

My speculation is that, in order to get ProMotion and true HDR at nearly-4k on the 16" MBP, Apple had to raid one of the M1 Pro/Max's potential TB4 ports for a full DP 1.4 stream, rather than use the "internal display" connection. That makes the HDMI ports on the 14/16" MBP more of a "freebie".
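The per-port costs listed above (4 PCIe lanes, up to 15W each, and the controller counts) can be tallied in a quick sketch. The figures come from the post itself, not from any Apple documentation:

```python
# Rough per-port budget for a full Thunderbolt 4 socket, using the
# figures from the post above (4 PCIe lanes' bandwidth, up to 15 W,
# plus a share of two DisplayPort streams per port).
PCIE_LANES_PER_PORT = 4
WATTS_PER_PORT = 15

def tb_budget(ports: int, controllers: int) -> dict:
    """Total resource budget for a given port/controller layout."""
    return {
        "pcie_lanes": ports * PCIE_LANES_PER_PORT,
        "peak_power_w": ports * WATTS_PER_PORT,
        "controllers": controllers,
    }

old_mbp = tb_budget(ports=4, controllers=2)   # Intel-era 4-port MBP
new_mbp = tb_budget(ports=3, controllers=3)   # M1 Pro/Max 14"/16" MBP
print("old 4-port MBP:", old_mbp)
print("new 3-port MBP:", new_mbp)
```

Even with one fewer socket, the newer machines have more controllers behind the ports, which is the "more total bandwidth" point made earlier in the thread.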
 
That's what I mean. The M1 Max as shipped only has one connector. So long before it shipped, they must have shelved the idea.

Perhaps. Or perhaps the M1 Max was developed in two forms: one with the single UltraFusion connector for Max and Ultra and then another with two UltraFusion connectors for the "Extreme" which was only going into the Mac Pro.

I guess someday we'll find out what the true plan was, though it's academic at this point since the model never left the lab.
 
Another unpopular opinion: do we really need yearly (or shorter) upgrade cycles? Why not biennially? If the improvements are truly significant ("revolutionary" even), then do not be beholden to timelines, but despite their name we all know these are primarily consumer devices.
Personally, I don't care about small incremental improvements. I've never upgraded anything for a 20% gain; the difference is just not that meaningful for me. I usually look for a doubling of performance, at least in one particular task that I'm regularly doing. The M1 Pro/Max did it for me with compiling.

But then the 2021 MacBook Pro gave us so much more than just computing power. I feel spoiled, and it will take a lot to get me excited about the 2022 or 2023 MBPs.

I think, if Apple adds raytracing capabilities to their GPUs, I'll upgrade from my 14" M1 Pro MBP to a 16" M2 Max MBP, as that would indicate not just an order-of-magnitude jump in performance for specific tasks but Apple's commitment to all things 3D on a Mac.
 
Another unpopular opinion: do we really need yearly (or shorter) upgrade cycles? Why not biennially? If the improvements are truly significant ("revolutionary" even), then do not be beholden to timelines, but despite their name we all know these are primarily consumer devices.

I'm not convinced we'll get an annual upgrade cycle. (We didn't on the MacBook Air.)

The iPad Pro tends to get updated every 18 months (on average, slightly more frequently), and that seems fine by me for Macs.
 
Perhaps. Or perhaps the M1 Max was developed in two forms: one with the single UltraFusion connector for Max and Ultra and then another with two UltraFusion connectors for the "Extreme" which was only going into the Mac Pro.

Could be.

But then why not develop it in a third form that doesn't spoil the existence of the M1 Ultra?


I guess someday we'll find out what the true plan was, though it's academic at this point since the model never left the lab.

Yeah. I am a little curious what happened there.

And, of course, many questions remain about how they're gonna do the Mac Pro.
 
But then why not develop it in a third form that doesn't spoil the existence of the M1 Ultra?

I write this without knowing how TSMC lays out Maxes and Ultras on its wafers, or how the UltraFusion interposer binds two Maxes into a single Ultra, but I could see a wafer composed solely of M1 Ultras that are then cut along the interposer to make two Maxes, so the interposer is "standard" on the Max by design.
 
I write this without knowing how TSMC lays out Maxes and Ultras on its wafers, or how the UltraFusion interposer binds two Maxes into a single Ultra, but I could see a wafer composed solely of M1 Ultras that are then cut along the interposer to make two Maxes, so the interposer is "standard" on the Max by design.

*nod*
 
Another unpopular opinion: do we really need yearly (or shorter) upgrade cycles? Why not biennially? If the improvements are truly significant ("revolutionary" even), then do not be beholden to timelines, but despite their name we all know these are primarily consumer devices.
[attached image]

Heh. So, yeah, maybe a little unpopular in some circles...

I think most consumers would appreciate yearly upgrades (and not because they upgrade yearly). For instance, suppose the product comes out yearly and you're in the middle of the upgrade cycle. You have two nice choices: either buy a relatively new (i.e., cutting-edge) product, since it came out only months ago; or wait six months for the next gen. Either choice is pretty good. But if you double the update cycle to two years, you either need to buy a product with a year-old processor, or wait another year for an upgrade. Not so good.
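The cadence argument can be quantified: for a release cycle of C months, a buyer arriving at a uniformly random time sees a newest model that is C/2 months old on average, and waits at most C months for the next one. A simple sketch, ignoring real-world release variance:

```python
# For a release cadence of `cycle_months`, a buyer arriving at a
# uniformly random time sees a newest model that is on average
# cycle_months/2 old, and waits at most cycle_months for the next one.
def cadence_stats(cycle_months: int) -> tuple[float, int]:
    avg_age = cycle_months / 2
    worst_wait = cycle_months
    return avg_age, worst_wait

for cycle in (12, 24):
    age, wait = cadence_stats(cycle)
    print(f"{cycle}-month cycle: avg newest-model age {age:.0f} mo, "
          f"max wait {wait} mo")
```

Doubling the cycle from 12 to 24 months doubles both the average staleness of the best available model and the worst-case wait, which is the trade-off described above.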
 