Where the hell is the new Intel Mac Pro that Tim Cook said they'd release when they announced the M1 processor transition? Did everyone just forget he said there'd be one more Intel Mac?
This I don’t remember happening. I remember them saying they had more Intel Macs on the roadmap, but the only one to release was the 2020 Intel iMac. I don’t remember a promise of an updated Intel Mac Pro.
 
I hate to break it to you, but Apple is never building a Mac that takes external video cards ever again.
But could they make one work externally with TB 4? I doubt that’s the solution, but I have no idea what “expansion” and expandability mean in the era of Apple Silicon.
 
There will never be a modular Pro with the current state of SoC development. All of the speed gains come from soldered RAM and storage.
Seems more likely that the Intel Mac Pro gets a round of updates to use newer chips and cards.
And the Mac Studio also gets a round of updates with M2 Ultra.
 
Hope this also comes true.

15-Inch MacBook Air Expected to Launch in Spring 2023

 
I don’t believe you recognize what is driving people insane. It’s not that they eliminated a chip that didn’t exist; it’s what that says about what this product is really for, and more importantly, who it is for. Apple’s about to do it again: they’re going to offer a Pro that’s only for one kind of pro, and leave everybody else to Windows PCs. People who want to use Macs don’t want to see this.
Apple Silicon has the fastest CPU core and the fastest GPU core at the smallest process node. They are leading in performance per watt and doing everything to scale their architecture up to a 114-billion-transistor M1 Ultra. Making up yet another, twice-as-large M1 Extreme is nuts. Go buy a Windows PC, if you can find one with 228 billion transistors!
 
Apple Silicon has the fastest CPU core and the fastest GPU core at the smallest process node. They are leading in performance per watt and doing everything to scale their architecture up to a 114-billion-transistor M1 Ultra. Making up yet another, twice-as-large M1 Extreme is nuts. Go buy a Windows PC, if you can find one with 228 billion transistors!
Your excitement is somewhat misplaced. When chip designers increase the number of cores, they have to lower the frequency of those cores. AS chips having fewer cores than the competition at least partly explains why their single-core performance is higher. Then again, nobody buys the most powerful chips to run tasks on a single core.

Also, both AMD and Intel switched to chiplets (multi-die chips) a while back, which makes the number of transistors on a single die somewhat irrelevant. In fact, this news article is partly about that issue. It appears that Apple simply can't put more transistors on their humongous die, and they may not have a solution for splitting it into multiple dies.
 
What's the point of a Mac Pro with the M2 Ultra when we already have the Mac Studio for that?
The Mac Pro should stand above, or way above, the Mac Studio.
Possibly just the ability to add discrete graphics cards and much better cooling. What I would LIKE to see, though, is 2 or 4 M2 Ultras in a single device. Or really, if they put 4 M2 Max chips in a single configuration, that would essentially be the same as 2 M2 Ultras (at least given that the M1 Ultra is basically 2 M1 Max chips).
 
Called it.


Never made much sense to only put this in a Mac Pro. The resources Apple would have to put in for a monster chip like this would be way bigger than what they could get in return from such a niche product.
I disagree. Apple made a decision to go into chip development, and when Apple makes such a decision it is with intent to do it right - - not simply to make low-end chips for the mass market. I am certain that Apple has and will continue to devote substantial resources to development at the "niche" ends of SoC usages.

Not all developmental steps will immediately result in profitable product on the street, but that does not mean such steps do not make sense. E.g. I would argue that engineering learned with the Newton was a solid component of Apple innovating the iPhone and the iPad, both of which promptly became dominant mass market products.

Especially at the highest end, there is huge value in constantly experimenting with new state-of-the-art technology like the chip marrying seen in the Studio Ultra or the (allegedly now canceled) Extreme rumored here. Even if current implementations turn out lame the research engineering process is critical to corporate success.
 
I am looking at AMD and Intel chips for comparison, and the M1 (and perhaps even M2) chips will be slightly behind them for the most part as 2023 progresses. The biggest issue, though, is the GPU comparison with AMD/Nvidia: there is no contest, as dedicated GPUs take the cake. But I am all for efficient power usage! Apple does it well, balancing power, performance, and heat (sometimes), so we'll see what 2023 brings. I think the M3, perhaps at the end of 2024, is the one to look out for. I still believe a 2-year cycle for these chips beats an 8-12 month cycle; I do not like these minor-performance-bump SoCs every year. Crossing fingers, but after 30 years as a Mac user my eye is on the AMD/Intel side. God damn those power requirements for those PCs, though! :confused: It's all expensive whichever way you look at it.
 
Remember when the entry level Mac Pro was actually affordable?
Yes, I remember. I bought the entry model locally in Sweden many years ago for around 6,000-6,500 SEK (around 600 USD) in a special Christmas offer.
 
A perfect M1 Ultra is a $2200 upgrade over a perfect M1 Max. Some of that is no doubt "Apple Tax", and $400 is the mandatory 32GB-to-64GB RAM upgrade, but a significant part of it probably accounts for lower yields and, by extension, higher production costs.
An Ultra is two Maxes stuck together with an interconnect.
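The upgrade arithmetic above can be sketched out; note that the split below is an inference from the post, not any published Apple cost breakdown:

```python
# Rough breakdown of the M1 Max -> M1 Ultra upgrade price.
# List prices are from the post above; attributing the whole
# remainder to silicon/yield is an illustrative assumption.
ultra_upgrade = 2200   # total Max -> Ultra upgrade cost, USD
mandatory_ram = 400    # forced 32GB -> 64GB RAM upgrade, USD

silicon_premium = ultra_upgrade - mandatory_ram
print(f"Implied premium for the second die + interconnect: ${silicon_premium}")
# -> Implied premium for the second die + interconnect: $1800
# Some unknown share of that $1800 covers lower yields on the fused part.
```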
 
Thanks for correcting my incorrect assumption that they could combine two Ultra chips into one. Do you think there will be a tier above the Ultra, or do you think that this will effectively be the next generation (i.e. M3 Ultra would be considered an M2 “Extreme”, and on and on)?

No on the second. The M2 Max appears to add two E cores and 2 GPU cores per GPU cluster (4 clusters: 32 -> 40). Gurman suggests the Ultra will do some die management by putting 1 GPU core per cluster in reserve (similar to the A14X), so 76. So 64 -> 76 is about 19% more cores, but I doubt Apple would slap an "adjective" change on that each generation.
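The generational gain mentioned above works out as follows (all core counts are speculative figures from the rumor, not confirmed specs):

```python
# Rumored GPU core counts, per the post above (speculative).
m1_ultra_gpu = 64
m2_ultra_gpu = 76   # 2 x 40 minus cores held in reserve for yield

gain = (m2_ultra_gpu - m1_ultra_gpu) / m1_ultra_gpu
print(f"Generational GPU core increase: {gain:.1%}")
# -> Generational GPU core increase: 18.8%  (the "about 19%" above)
```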

I expect the M3 to be perhaps another incremental core bump: more complicated cores that are more "powerful", but no dramatic change in the numbers. Some folks expect Apple to "go crazy" when they start using the TSMC N3 fab process and add a 'ton' more cores. I don't expect that. More likely Apple will use some of N3's density increases to make smaller dies with a higher focus on cost control. That problem only gets worse with N2, N18A, etc.

Whether there is a tier above Ultra depends upon Apple's willingness to stop using monolithic laptop dies as 'too chunky' chiplets. If Apple won't let go of that crack pipe, then there probably won't be anything bigger than an Ultra.
It looks like future fab processes are going to force a different SoC disaggregation approach on Apple (N3 and onward are not going to allow cache to shrink at the same rate as compute logic). Depending upon how Apple chops the functionality of the Max die into pieces, they could end up with better building blocks that produce a different ratio of CPU:GPU cores (50 CPU : 32 GPU, or 12 CPU : 128 GPU), which would top out the core count in one grouping at a higher level than the "Ultra" does. But the "Ultra" and "next over Ultra" would have to share most of their chiplet building blocks.

What Apple needs is some desktop SoC building blocks. That doesn't mean they are going to build some Threadripper or Xeon W 'killer' SoC; they need a better desktop Max-class SoC in addition to "Ultra" and "above Ultra" class SoCs. The Mini and the iMac 24" are coasting on two-year-old infrastructure. Apple is still selling 2018 Intel Minis as new (as if that were really a market-competitive offering rather than a "Mac only" one). Their whole Rip van Winkle attitude toward their desktop products makes it look like they are going the Scrooge McDuck route, spending as little as possible on desktops (and keep smoking that 'laptop SoC works' crack pipe). Putting an M2 inside a Mini is 'hard' for Apple to do how? It just looks like intentional laziness. And lazy isn't going to produce a next tier over Ultra.
 
Where the hell is the new Intel Mac Pro that Tim Cook said they'd release when they announced the M1 processor transition? Did everyone just forget he said there'd be one more Intel Mac?
I think that is a result of the unbelievable success they had with the M series. I don’t think they want to do anything to support Intel now that they have them on the ropes. Intel has expended a fortune attempting to catch up, and I don’t think Apple wants to help them.
 
When chip designers increase the number of cores, they have to lower the frequency of those cores. AS chips having fewer cores than the competition at least partly explains why their single-core performance is higher. Then again, nobody buys the most powerful chips to run tasks on a single core.
Wrong! During single-core benchmarks there is no heat coming from the other cores, so the number of cores is absolutely irrelevant. And despite its higher Turbo Boost frequency, the Intel Xeon core is still slower than a Firestorm core, because the ARM core computes more per cycle.

And in multi-core benchmarks, where the 28-core Intel Xeon does indeed have a theoretical core-count advantage over the 20-core M1 Ultra, a 20-core Intel Xeon would be even slower: you not only lose the heat from 8 fewer cores, but also the performance of those 8 cores.

You simply can't build a competitive CPU on the x86 architecture because of its inherent inefficiency. It doesn't matter whether the M1 octa-core has more cores than all the dual-core, quad-core and six-core i3s, i5s and i7s it replaces, or whether the M1 Ultra has fewer cores than the biggest Intel Xeon. arm64 is always better than x86, period. All the extra energy is turned into heat, not performance.
Also, both AMD and Intel switched to chiplets (multi-die chips) a while back, which makes the number of transistors on a single die somewhat irrelevant. In fact, this news article is partly about that issue. It appears that Apple simply can't put more transistors on their humongous die, and they may not have a solution for splitting it into multiple dies.
But the M1 Ultra is a chiplet architecture. Two M1 Max chips fused together! So what are you trying to say?
 
But could they make one work externally with TB 4? I doubt that’s the solution, but I have no idea what “expansion” and expandability mean in the era of Apple Silicon.
Precisely. Expansion doesn’t mean anything anymore, because the way Apple Silicon works is completely different. I have sat and laughed for over a year now at all the people in the Apple Silicon thread struggling to get an external SSD to work with their Mac mini, because they believe external storage expansion is necessary. Apple doesn’t. They just want you to pay more for the built-in SSD and be done with it. And the performance they give you with it all built in is mind-blowing. Expansion has a whole new meaning on this platform, and it’s fun to watch people's heads explode when they can’t understand that.
 
Apple Silicon has the fastest CPU core and the fastest GPU core at the smallest process node. They are leading in performance per watt and doing everything to scale their architecture up to a 114-billion-transistor M1 Ultra. Making up yet another, twice-as-large M1 Extreme is nuts. Go buy a Windows PC, if you can find one with 228 billion transistors!
Yes. Every word you said is totally true. And that’s what is driving people nuts about how they are going to build their pro Macs. Because it’s like every other part of Apple Silicon: totally different from what we have had before.
 
Another explanation is that Apple’s Frankenstein assembly of CPU chips just doesn’t scale beyond two dies. Routing thousands of high-bandwidth interconnects is beyond the current state of magic.

UltraFusion has 10,000 connections. The trade-off is that it takes up a whole edge of the Max die. If Apple needed another one of those, it would become a zero-sum game with the RAM controllers (which largely 'eat up' the longer die edges with their long-distance, high-bandwidth connection demands). At some point you run out of die-edge space; you can't have too many edge-space consumers.

It isn't just a "CPU" chip. The GPU die space of the Max die greatly dominates the CPU space.

[Attached image: M1 Max die shot]



[Note: Apple photoshopped the UltraFusion connector out of the initial M1 Max die photos; it runs along just about the entire bottom of this picture.

About where the "M1 Max" label sits are the 4 TB controllers (other PCI-e and display output runs along the top edge).]

The Max is pretty close to being a set of CPU cores wrapped around what would normally be called a GPU die (GPU cores, video encode/decode multimedia accelerators, and memory controllers to keep the GPU cores fed). The CPU cores are fed a limited slice of what the GPU needs.

If the UltraFusion and LPDDR components are going to compete more for edge space, then either the bandwidth of one of the two has to drop, or some of the secondary stuff (TB, PCI-e, SSD, etc.) has to move off the die. For example, a smaller UltraFusion-like connector (more pads/lanes in a smaller space) could hand the 'saved' edge space over to something else, such as another LPDDR5 controller.

If you aggregate too broad a set of external I/O needs onto a single die, you'll probably end up in conflict. Apple throws the entire 'kitchen sink' onto one monolithic die to maximize performance-per-watt savings. For laptops that can be a good trade-off; for reasonably sized desktops it gets more dubious. Especially if you want to greatly crank up the GPU core count, which means cranking up the LPDDR5 memory controllers' edge-space consumption (you have to keep massively parallel data consumers fed with massively parallel memory paths). Doubling up on "poor man's HBM" plus UltraFusion at the same time means letting go of absolute max perf/watt, at least a bit (some other I/O has to come off the die).
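The zero-sum edge-space argument can be made concrete with a toy budget model. To be clear, every number below is invented purely for illustration; Apple does not publish die-edge allocations in this form:

```python
# Toy model: treat the die perimeter as a fixed budget that I/O
# blocks compete for. All fractions are invented, illustrative values.
EDGE_BUDGET = 1.0  # whole die perimeter, normalized

consumers = {
    "LPDDR5 controllers": 0.45,  # memory links feeding the GPU cores
    "UltraFusion":        0.25,  # one inter-die connector edge
    "TB/PCIe/display":    0.15,
    "SSD + misc":         0.10,
}

used = sum(consumers.values())
print(f"Used: {used:.2f} of {EDGE_BUDGET:.2f}")

# Doubling the memory controllers AND adding a second UltraFusion
# edge overflows the budget: something has to move off-die.
wanted = used + 0.45 + 0.25
print(f"Wanted: {wanted:.2f} -> over budget: {wanted > EDGE_BUDGET}")
```

The point of the sketch is only that the perimeter is finite: once the existing consumers use most of it, adding both a second interconnect edge and more memory controllers forces something else off the die.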
 