they have **** on this pro user by not releasing a mid-level pro machine, something akin to the old G4 Cube, in fact.

I don't need upgradability if the specs are right from the start, unlike the iMac Pro GPU, which was crap in comparison to Nvidia's offerings [it could barely handle the 5K screen].
A mid-level pro machine in ADDITION to the current Mac Pro, yes. If they only released a Cube-like machine, that would be ******** on pros.
 
Your argument falls down in several places, claiming "it's not worth it".

Before the 2019 Mac Pro, all T2-enabled Macs used soldered 'naked' SSD chips. What did they do for the Mac Pro? They created daughter cards to achieve the same logical setup (T2 controlling the chips, doing transparent encryption).

Before the 2019 Mac Pro all TB3 enabled Macs used GPUs either incorporated into the CPU or soldered to the mainboard, allowing video output to be routed to TB3 controllers. What did they do for the Mac Pro? They created a customised version of a PCIe slot, and custom video cards to achieve the same end result (display output on any of the system's TB3 ports, not just those on the card).

The above is a minor rewrite of history. The Mac Pro 2019 was not the first system to get NAND daughter cards. The iMac Pro 2017 was. That happened two years prior. The Mac Pro just picked up what was already established for the top end systems.

Why did the iMac Pro get NAND daughter cards?

1. It was a cheaper way to get to a 4TB SSD. Just use twice as many NAND chips as other 2016-2017 mainstream SSD implementations.

2. Apple gets to deal with less exotic (more affordable) NAND chips that align with the bulk of the rest of the Mac lineup (i.e., buying the same stuff for the iMac Pro, and later the Mac Pro, as for the rest of the Mac products), following the Apple mantra of high shared-component overlap across products.

3. For some higher-end enterprises where data storage destruction is part of end-of-lifecycle at the company, shredding two NAND daughter cards is far, far cheaper than shredding the logic board. Similarly for disk failure: just keep the old part (and shred it) and install replacement parts.

4. Spread the wear out over more NAND chips and you get a longer lifetime (a sketch of this follows below).
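A minimal sketch of that endurance point; the per-package TBW figure is an assumed illustrative number, not an Apple or vendor spec:

```swift
// Toy endurance math: a drive's total write budget scales with how many
// NAND packages share the wear. 150 TBW per package is an assumed figure.
let perPackageTBW = 150.0
for packages in [2, 4, 8] {
    let totalTBW = perPackageTBW * Double(packages)
    print("\(packages) NAND packages -> ~\(Int(totalTBW)) TBW before wear-out")
}
```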


Secondly, what happened with the iMac 2020...

https://www.macrumors.com/2020/08/07/27-inch-imac-storage-affixed-to-logic-board/


It did a half backslide from the iMac Pro solution (not too surprising, since Apple was looking for minimal changes to the older iMac logic board). Unless you want to get to 2TB or 4TB... soldered. Just like the laptops.

Also, any semblance of a SATA connector or 2.5"/3.5" drive bay was banished from the system.

I'm not suggesting that the replacement for the 2019 Mac Pro will be a like-for-like swap, but with an Arm CPU. I have no idea what it will be. But given there are no 'easy' CPU upgrade options for the current Mac Pro (besides starting the base at a higher tier, and perhaps offering the non-M version CPUs with a lower memory ceiling),

But if the Apple M-series (Arm CPU) doesn't provide what the Intel chips deliver, then some functionality of the Mac Pro 2019 will die:

A. No SATA connector. Then the whole J2 drive bay argument goes into the trash can. (That will cause a dust-up with the die-hard Mac Pro customer base.)

B. If it doesn't have four x16 PCIe v3 (or better) feeds... that nukes slots. [Which lines up with the rumors around the M1 variant being "half sized".] If it drops down to just one or two x16 feeds, then there is a huge mismatch with the current Mac Pro chassis. The bulk of the system's power allocation and volume allocation is there to provision the slots, not the CPU. (See the bandwidth sketch below.)
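Some back-of-envelope numbers on why the feed count matters so much; the per-lane rate is the standard PCIe 3.0 figure (8 GT/s with 128b/130b encoding), the rest is just multiplication:

```swift
import Foundation

// PCIe 3.0 carries roughly 0.985 GB/s of usable bandwidth per lane.
let gbPerSecPerLane = 0.985
for x16Feeds in [1, 2, 4] {
    let lanes = x16Feeds * 16
    let aggregate = Double(lanes) * gbPerSecPerLane
    print("\(x16Feeds) x16 feed(s) = \(lanes) lanes ≈ \(String(format: "%.1f", aggregate)) GB/s")
}
```

Dropping from four x16 feeds (~63 GB/s aggregate) to one (~15.8 GB/s) is the chassis mismatch in a single number.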



As for "easy upgrades": the Xeon W-3300 series would probably work with some limited board adjustments. New socket. Apple can keep the 12 RAM DIMMs (there probably are eight memory controllers, unless Intel squeezes the new die into the old socket). New chipset. Apple will need PCIe v4 redrivers down to slot 1 (at least).

Apple could leave the current PCIe switch to slots 2, 4, 5, 6, 7, and 8 at PCIe v3 (that cuts down on redriver work).


the idea of an actually 'new' Intel Mac Pro seems quite odd.

If the Apple M-series can't do what the Intel solution can, then it isn't odd at all. Apple is winning the single-thread drag race. But they aren't as clearly winning the:

i. supporting-internal-3rd-party-drives race (more like a jihad on SATA).

ii. supporting-3rd-party-discrete-GPUs race. (The major message was that iGPUs are great and that Apple is out to be king of the iGPU world. If Apple holds that party line at WWDC 21, that is a substantive disconnect in the core Mac Pro user space. Even worse if the WWDC 21 theme is "we are even more right about iGPUs than last year".)

iii. native-boot-multiple-OS race either. (The demographics of the hard-core Mac Pro user base are likely separated from mainstream Boot Camp usage; e.g., the "war" on Nvidia drivers is a much bigger issue in Mac Pro space than in the overall Mac market.)




But then your explanation also seems even more odd, to me: it's "not worth it" to build high-end CPUs (and apparently going back to multiple discrete CPUs to allow re-use of shared silicon is just completely off the table in your mind) for a product that's low-volume,

That's not the point I was making. A high-end CPU at low volumes is substantively different from a low-volume Apple product's CPU. Apple sells the MP 2019's Xeon W-3200 probably at sub-100K/yr run rates. Intel sells the Xeon W-3200 (and Xeon SPs based on the exact same die) at a one or two orders of magnitude higher rate. Dell/HP/Lenovo sell vastly more than Apple does. If you pie-chart the top players in the workstation market, Apple isn't even on the chart (buried down inside "others").

Apple products with big CPUs do not sell in large numbers. Macs do. The Mac Pro very simply does not.

This isn't an "Apple" thing. Almost all of the high-end CPU vendors with high overlap with the general workstation market sell to others: Ampere does, Nuvia was going to (and Qualcomm will), etc.
Intel doesn't sell Apple-exclusive high-end CPUs. Nor does AMD in high-end CPUs or GPUs.

A narrow corner case where one might say this is happening somewhere else is Amazon's Graviton2. They designed it and do not sell it to other systems vendors. The only problem with that comparison is that they don't sell it to anyone: Amazon doesn't sell it; they use it to provide subscription services (the customers don't own it). [And the baseline design for Graviton2 is Arm's Neoverse... Amazon does some value-add but isn't doing a ground-up design.]

There are some vendors in the super-high-end priced space like Cerebras, but those aren't mainstream.



Multiple SoC packages is way out of line with what Apple has as the major baseline funding the basic processor design (mobile). I haven't dismissed some kind of building-block chiplet. But again, what does the basic building block have? The major linchpin of Apple's design is the large shared common cache and shared memory. That puts some heavy tension on branching out into different chips. It is "fast" in part because the caches are big and the latencies are controlled. If you try to go relatively far off-die, that is going to get harder to do.

The mobile-focused implementation gives them low thermals, so they could close-pack 2-6 chips with an extremely short chip-to-chip common shared memory bus without too much power and latency penalty.
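A toy average-memory-access-time sketch of why controlled latency matters here; the hit rate and nanosecond figures are assumed round numbers, not measured Apple values:

```swift
// AMAT = hitRate * cacheLatency + (1 - hitRate) * missPenalty.
// Pushing memory farther off-die mostly shows up in the miss penalty.
func amat(hitRate: Double, cacheNs: Double, missPenaltyNs: Double) -> Double {
    hitRate * cacheNs + (1.0 - hitRate) * missPenaltyNs
}
print(amat(hitRate: 0.95, cacheNs: 18, missPenaltyNs: 100))  // near memory: ~22 ns
print(amat(hitRate: 0.95, cacheNs: 18, missPenaltyNs: 160))  // farther off-die: ~25 ns
```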

The further the building blocks get away from Apple's mobile design criteria, the more "disconnected" the blocks are going to be. The problem is: if Apple is busy in the labs thinning out the large-screen iMac to similar standards as the 24" model, and chopping at least half the slots out of the "half sized" Mac Pro, what kind of large, "tall" Mac Pro building block are you going to get? Quite large internal volume is one of the primary purposes (and the CPU was not the primary driver of that volume).


so they're inevitably going to just EOL it, but before that, let's push out a new version with a new Intel CPU... which is what? A downgrade to an i9, which loses any benefit the Mac Pro had, or switch over to server-series like Xeon Gold/Platinum?

The top end of the W-3300 series is likely not a "downgrade" to the i9 at all for multithreaded work. The Mac Pro 2019 is already on a "Xeon server series die" CPU, so switching to another isn't a downgrade. The max core count of the Xeon Ice Lake server dies is 40 cores. 32 M-series cores will still be behind on count (and have memory bandwidth pressure from the iGPU). The AMD option it will be competing with will have 64 cores; the M-series isn't going to trump that. Over in Arm workstation space the count is likely going to be over 80, which is even less likely to be competed with (and macOS can't even handle that many cores, so it's choked on both hardware and software).


If it is a long 2-3 year wait from now, then my position is that it will probably take far more than just 2 years for Apple to come up with a true M-series solution to the large, tall Mac Pro. Several basic reasons:



1. In the Mac space, Apple doesn't walk and chew gum at the same time very well. For example, the Bloomberg article notes that Apple put the big-screen iMac on "pause" several months ago to focus on getting the smaller iMac out. Another example: if you go back to the April 2017 pow-wow on Apple's view of what went wrong with the Mac Pro 2013, they were quite happy to note that they had been working on the iMac Pro... and after the iMac Pro largely wrapped up, Apple would start on the Mac Pro. Again, single-tracked development. And the Mac Pro is very often at the end of the priority queue (3 yrs (2010-2013), 6 yrs (2013-2019)).

If Apple puts a higher priority on a "half sized", relatively more slot-limited Mac Pro as an "iMac Pro" price-zone filler, then the full-sized replacement will probably get paused just like the Mac Pro did for the iMac Pro.


2. The iPad Pro A__X variant updates for the last several years all waited on process shrinks. Which means something closer to 18-24 months between updates. If the large Mac dies are on the same pace, then there could easily be a relatively long iteration cycle.

Similarly, the Bloomberg story is suggestive that the low-end models are being moved to an "M2" (probably based on the A15 SoC and 5+nm), versus the "MBP 16 building block" solution for the half-sized Mac Pro being closer to an "M1X" iteration. If the low-end laptop processors are iterating faster than the top-end, more "only for Macs" specific options, then there is a pretty good chance the "Mac only" processors are going to move more slowly. That happened in the iPad Pro's relationship to the higher-volume iPhone. Coupling the iPad Pro to the MBA, MBP 13", low-end Mini, and small-screen iMac is extremely likely to give those priority on budget allocation based on volume (that represents millions more systems; and if Windows on Arm gets traction, Apple is probably more scared of that than it is eager to spend large dollars chasing the relatively very small volume of top-end Mac Pro customers).


The Mac Pro 2013 went stale in an era where Intel was out in front on workstation processors. The Mac Pro 2019 came to market where Intel was falling behind. How fast the MP 2019 is going stale is much, much quicker than in the sit-and-squat days Apple benefited from in the past. And the M-series isn't a clear winner in workstation space at all (which isn't single-thread focused). The current context is much different from Apple's previous "Rip Van Winkle" moves on the Mac Pro.



I could see them selling the current one for an extended period before either EOL that level of functionality or replacing it with something vaguely similar with an Arm CPU(s).

But when can they replace it? If that is 3 years from now, at the rate the current Mac Pro is going stale... that is probably going to turn out extremely badly for Apple.


The real core issue is whether Apple really wants to remain in this extremely high I/O bandwidth and large internal volume system space at all. That is coupled to how many bridges they want to burn.


I could see them just EOLing that level of flexibility (again) 'immediately' (i.e. at the end of the ~2y transition period) and producing an Arm equivalent of the 2013 Mac Pro (remember they said "painted ourselves into a thermal corner", not "misjudged what pro users actually want to use")

There is a decent chance that Apple can use building blocks to provision at least 1-2 slots. Even if just two x8 PCIe v3 slots along with two 10GbE ports and a discrete SATA controller (for things like J2 internal storage, PCIe SSD M.2 cards, higher Ethernet I/O, audio/video capture, etc.; a lane-count sketch follows below).
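A toy lane budget for that minimal configuration. The two x8 slots are from the paragraph above; the lane counts for the 10GbE ports and the SATA controller are my assumptions for illustration:

```swift
// Hypothetical minimal I/O budget for an M-series "building block" Mac Pro.
let budget = [("slot 1 (x8)", 8), ("slot 2 (x8)", 8), ("2x 10GbE", 4), ("SATA controller", 1)]
let total = budget.reduce(0) { $0 + $1.1 }
print("PCIe 3.0 lanes needed: \(total)")   // 21 lanes on these assumptions
for (name, lanes) in budget { print("\(name): \(lanes) lane(s)") }
```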

Going back down to zero and doing lots of chest-beating about their "insanely great" iGPUs would miss the real meat of why those were the wrong moves. It would not just repeat the same mistake; it would miss the industry trends as to why those were the bonehead moves.

GPU thermal expansion issues don't say diddly-squat about other types of high-end I/O cards. Even if Apple's GPU now allows them to paint themselves into a corner again (with one embedded, relatively low-TDP GPU), if they are willing to throw away the top-end range of performance, they have still shot themselves in the foot with a large-caliber weapon again. [The point was that one very powerful GPU had high utility. Apple has cut off Nvidia GPUs, so it is inventing a different league to play ball in. Squashing Intel GPUs means they capture the volume and high revenues of the Mac GPU market. That isn't the whole market.]

Likewise, part of that "misjudged" was about provisioning additional internal storage.

The earlier Mac Pro worked with 4 slots. An M-series one could probably 'get by' with 1-3. That wouldn't make everyone in Mac Pro user space happy, but it would do "OK" (similar to the MP 2013).


I can't quite see them releasing a significantly different Intel Mac Pro as a stop-gap before EOLing it anyway.

I think you are not willing to look at how low the gap is between the Xeon W-3200 series and the likely W-3300 series. It isn't that large. Compare that to what Apple does not have now in the M-series: that gap is actually bigger.
 
The real core issue is whether Apple really wants to remain in this extremely high I/O bandwidth and large internal volume system space at all. That is coupled to how many bridges they want to burn.
Apple’s combustion of bridges over the last 10+ years has been pretty consistent, though. There’s probably only 7 bridges left, so burning them all wouldn’t even yield a decent cloud.
 
It seems pretty likely, if the specs in this post are true.

There will be a CPU chiplet with 16 large cores and 4 small cores.

Some Mac Pros will include 1 of these, others will include 2.

There may be separate chiplets for the I/O and GPU, or these may be integrated into a single non-CPU chiplet.
Or these functions may be distributed across the CPU chiplets, like first-generation AMD Zen (which may mean the 32+8 Mac Pro will have more I/O than the 16+4).

The GPU may (indeed, thinking about it, it's most likely) even be off of the CPU for the Mac Pro, so they can ship more powerful discrete GPUs suitable for a workstation in varying configurations.
Based on the names, it's not 1 and 2 chiplets but 2 and 4. The "Jade C" is an 8+2 core chip, and the "2C" is therefore 2 chiplets, or 16+4, while the "4C" is 4 chiplets, or 32+8. Much more logical, as it doesn't require another design or a huge single die.
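The arithmetic behind that reading of the names, as a tiny sketch (the 8+2 building block is the rumored figure; nothing here is confirmed):

```swift
// "Jade C" as an 8P+2E building block; "2C"/"4C" as two and four of them.
let pPerChiplet = 8, ePerChiplet = 2
for chiplets in [1, 2, 4] {
    print("\(chiplets) chiplet(s): \(chiplets * pPerChiplet)P + \(chiplets * ePerChiplet)E")
}
```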
 
With 120 cores, do you think these GPU chips would be candidates for TSMC's "3D Fabric"? (All in the name of manageable die sizes.)

 
Furthermore - I can see how identical CPU SoC "layers" could fit into a 3D fabric as well - each layer would have:

- 8 high end cores
- 2 efficiency cores
- 8 or 16 GB of memory

With identical slices, you can make 10/20/40 cores with a 3D stack of 1, 2, or 4 of the same layer. Maybe even minimize the need for communication between layers? Maybe even take advantage of the already-good power management of these chips?

So - 3D Fabric for CPUs too?
 
Furthermore - I can see how identical CPU SoC "layers" could fit into a 3D fabric as well - each layer would have:

- 8 high end cores
- 2 efficiency cores
- 8 or 16 GB of memory

With identical slices, you can make 10/20/40 cores with a 3D stack of 1, 2, or 4 of the same layer. Maybe even minimize the need for communication between layers? Maybe even take advantage of the already-good power management of these chips?

So - 3D Fabric for CPUs too?
You guys seem to think that these dies are far more transistor-limited than they are. The reticle size is big enough that there is no need to get into this sort of thing.
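For rough context on the headroom claim: the maximum reticle field is a property of the lithography scanner rather than the node, and is 26 mm x 33 mm (~858 mm²) on current tools. A toy sketch, where the 120 mm² die size is a made-up illustration, not a known chiplet figure:

```swift
// Reticle field vs. an assumed chiplet size, purely illustrative.
let reticleMm2 = 26.0 * 33.0      // 858 mm^2
let assumedDieMm2 = 120.0         // hypothetical chiplet
print("Reticle field: \(reticleMm2) mm^2")
print("Headroom vs. one such die: \(reticleMm2 / assumedDieMm2)x")  // ~7.15x
```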
 
You guys seem to think that these dies are far more transistor-limited than they are. The reticle size is big enough that there is no need to get into this sort of thing.
How big would the reticle size (typically) be for TSMC 5nm?
 
Are we just not going to talk about this claim? While I currently have more use for an Intel processor and thus this would be a potential benefit to me, it seems like a very odd move at this stage of the game.
 
Well, if anything, to the extent that Apple Silicon trails Ice Lake in benchmarks, might it give Apple a target to beat with the next version of Apple Silicon?
 
Why would this type of computer need efficiency cores? Is there something inherent in the design of the CPU which requires them?

I just ask as it seems that the space taken up by the efficiency cores could fit more performance cores. Really, this could be applied to all desktop models. For the Mac Pro they seem the most pointless.
IMHO, it's for the same reason the M1 Macs have efficiency cores. Some processes on any computer, even the most high-powered ones, don't take advantage of the performance cores. If you were to run those processes on all performance cores, extra heat would be generated, slowing down the high-performance cores. It sounds a little counterintuitive, I know, but Apple is all about making next-generation chips to blow away the competition: the fastest computing power, while keeping heat in check, at foundry sizes that boggle the mind.
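A minimal Swift sketch of the software side of that: apps tag work with quality-of-service classes, and the scheduler steers low-priority work toward the efficiency cores (the final core assignment is the OS's call, not the app's):

```swift
import Dispatch

let done = DispatchSemaphore(value: 0)

DispatchQueue.global(qos: .utility).async {
    // housekeeping-style work: indexing, syncing, cleanup -> E-core friendly
    done.signal()
}
DispatchQueue.global(qos: .userInteractive).async {
    // latency-sensitive work the user is actively waiting on -> P-core friendly
    done.signal()
}
done.wait()
done.wait()
```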
 
IMHO, it's for the same reason the M1 Macs have efficiency cores. Some processes on any computer, even the most high-powered ones, don't take advantage of the performance cores. If you were to run those processes on all performance cores, extra heat would be generated, slowing down the high-performance cores. It sounds a little counterintuitive, I know, but Apple is all about making next-generation chips to blow away the competition: the fastest computing power, while keeping heat in check, at foundry sizes that boggle the mind.
Yes, this is precisely why Intel is doing the same thing on their upcoming Alder Lake CPUs, with both performance and efficiency cores even on desktop versions. Microsoft is modifying their process scheduler in Windows 11 to do the same thing that iOS, and now macOS with the M1, is doing: deciding which processes need performance and which ones can go slower but work more efficiently.
 
Yes, this is precisely why Intel is doing the same thing on their upcoming Alder Lake CPUs, with both performance and efficiency cores even on desktop versions.

The "Efficiency" cores on Alder Lake are faster than the Skylake cores in Intel's previous models. So take the 2016 MBPs and iMacs: faster than those cores. If it is simple "Dick, Jane, Spot" code with a bunch of mundane add, subtract, multiply, and divide operations that load/unload some data from persistent storage, then they work just fine.

Space-wise, you can get 4 E cores in the same die space as one P core. That is 4x the number of math units, loaders, etc. Unless it is a hyper-long single-threaded, single task, more cores that each get a bit done at a steady pace typically do better than a hot-rod, top-fuel dragster that only goes super fast in a completely straight line. (A toy sketch below puts numbers on that.)

There is a notion that the E cores are grossly underpowered. They aren't. "Same stuff as the 2015 era, packed into a smaller space" is much closer to the reality.
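A toy throughput-per-area comparison; the "4 E cores per P-core area" is the ratio above, while the ~40% per-core throughput is an assumed number for well-threaded integer code, not a vendor figure:

```swift
// Throughput per unit of die area under the stated assumptions.
let pPerArea = 1.0            // one P core's throughput in one unit of area
let ePerArea = 4.0 * 0.4      // four E cores at ~40% each in the same area
print("P core per unit area: \(pPerArea)")
print("E cores per unit area: \(ePerArea)")  // 1.6x on these assumptions
```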


Microsoft is modifying their process scheduler in Windows 11 to do the same thing that iOS, and now macOS with the M1, is doing: deciding which processes need performance and which ones can go slower but work more efficiently.

It is more than just the scheduler. Like Apple's, the SoC hardware is also bubbling up more information for the scheduler to work with that didn't exist before. The scheduler has more insight into what kinds of instructions the cores are executing (as opposed to "educated guesses"). It knows when P cores are in a "spin lock", where the vast majority of the "extra" P core resources are being wasted (pragmatically, the core is waiting, doing nothing; all those extra registers, etc. are buying a whole lot of nothing more).

In short, there are higher synergies between the kernel and the hardware. It is not 100% necessary for both the OS and the hardware to be inside the same company for that to happen. It usually takes extra work to do it right, but it's not impossible.
 
It would be nice to see Intel & Microsoft move from CISC to RISC, but that would really screw with backwards compatibility for hardware & software. I expect it will eventually happen, but it’s going to be a very painful transition.
 
It would be nice to see Intel & Microsoft move from CISC to RISC, but that would really screw with backwards compatibility for hardware & software. I expect it will eventually happen, but it’s going to be a very painful transition.

Intel probably won't dump the x86 family completely (neither will AMD). What the x86 family is overdue for is dropping "super old", as opposed to "backwards", compatibility. At this point, x86-64 has a very large ecosystem that doesn't make much sense to dump completely. But backwards compatibility for code all the way back to 1987, and other parts of the last century, brings complexity. You can take the notion of "if it isn't broken, don't fix it" too far, into the zone of just being an instruction "hoarder".

It doesn't have to be Intel's whole lineup, just the stuff that is trying to stay far in front and is primarily only running software from the last 9-10 years. (10-year-old software is already relatively pretty old.)

Yes, there is some overlap between the "old" stuff and backwards compatibility in terms of impact on the user. But you can kind of see this with Windows 11 leaving non-TPM and older hardware behind. The extra code for "what if we are running on a 1992 BIOS..."? Just send those customers to a virtual machine. Those folks are boat-anchored in time. They need a "time machine" more than a new forward-looking processor.
 
Intel probably won't dump the x86 family completely (neither will AMD). What the x86 family is overdue for is dropping "super old", as opposed to "backwards", compatibility. At this point, x86-64 has a very large ecosystem that doesn't make much sense to dump completely. But backwards compatibility for code all the way back to 1987, and other parts of the last century, brings complexity. You can take the notion of "if it isn't broken, don't fix it" too far, into the zone of just being an instruction "hoarder".
Thing is, with AMD AND Intel in the space, neither one is going to remove cruft, because the other is poised, ready to bludgeon them with “SEE, THEY’RE NOT EVEN FULLY COMPATIBLE, WHILE WE ARE!” Would they both work together to agree on a more streamlined way forward? Maybe, but not if it gives some small upstart company an opportunity to provide a “solution” and become a future thorn in both of their sides :)
 
Thing is, with AMD AND Intel in the space, neither one is going to remove cruft, because the other is poised, ready to bludgeon them with “SEE, THEY’RE NOT EVEN FULLY COMPATIBLE, WHILE WE ARE!” Would they both work together to agree on a more streamlined way forward? Maybe, but not if it gives some small upstart company an opportunity to provide a “solution” and become a future thorn in both of their sides :)

That doesn’t make much sense in light of the current environment. There are already thorns. Arm-based solutions from multiple vendors are coming. Apple’s is competitive. 5% of the personal computer placements for x86 are in the process of walking away (Apple has dropped well over 50% of their unit buy volume). Amazon is deploying new Arm servers twice as fast as new AMD and Intel x86 servers combined.
“We are compatible with a substantively shrinking market” is the kind of stuff IBM talked about in mainframes in the late ’90s into the beginning of this century. Being king of reshuffling the deck chairs on the Titanic still leaves you on the Titanic.

Intel and/or AMD wouldn’t have to drop the pre-x86-64 stuff on all their products. However, for a modern 2022+ Chromebook or Windows 11-only laptop, that boat-anchor stuff really doesn’t bring much material value at all. The “we have got the boat anchor” stuff is really an argument for a lower-utility product offering.

For the ultra-conservative server workload markets, yeah, the old IBM “leave no ancient op mode behind” approach would play. Similarly, in the embedded market that may play.


The other major issue with the super old stuff is that the patents expire over time. After a very long while, that stuff doesn’t have a huge barrier to entry. The parts of the instruction set that are locked up in the more exclusive agreements are the newer stuff, after the x86-64 transition.

The ancient exception model has more holes than moat. Both AMD and Intel have proposed replacements for it. It’s kind of nutty to come up with a new one without the holes and still keep the old stuff around in a funky, 35-year-old backward-compatibility mode. When there is broken stuff, it should get replaced or fixed where applicable. Otherwise you just keep dragging the same old problem behind you.
 
“We are compatible with a substantively shrinking market” is the kind of stuff IBM talked about in mainframes in the late ’90s into the beginning of this century. Being king of reshuffling the deck chairs on the Titanic still leaves you on the Titanic.
That is quite literally Intel’s talking point right now against Apple’s M1. :) I’m not saying it’s a smart way for them to think, but they DO think that way and haven’t shown the willingness to think otherwise. There are all kinds of PC technologies that broke backwards compatibility but offered a new way of computing… that were done in by someone figuring out how to do “yesterday, but a little bit better” well enough for folks who didn’t want to drop backwards compatibility.
 
So Apple's headless desktop offerings will consist of:

$600 M1 mini with 8 cores.

$x,xxx Mac Pro with 20 or 40 cores.

For the love of Pete, can we just get something in the $2,000 range that has 10 or 12 cores?
Congratulations! ;)
 