We'll see tomorrow (6/13). But there's a good chance the MI300 does more to shrink a two-MI250X-plus-AMD-CPU supercomputer node logic board into one very large, expensive package than it does to unify the GPUs into one larger GPU presented uniformly. It is going to take apps that are built to scale across multiple components in a supercomputer node and run them faster, with huge decreases in the power wasted on making them overly discrete. Those apps already have the explicit remote, NUMA-access juggling assumptions built into them.

It looks like AMD did manage to make it a single NUMA zone.

"... The MI300A can run in several different modes, but the primary mode consists of a single memory domain and NUMA domain, thus providing uniform access memory for all the CPU and GPU cores. Meanwhile, the MI300X uses coherent memory between all of its GPU clusters. The key takeaway is that the cache-coherent memory reduces data movement between the CPU and GPU, which often consumes more power than the computation itself, thus reducing latency and improving performance and power efficiency. ..."

The MI300X is similar to the Ultra in that memory caps at 192GB. Power levels and purchase cost are way different. Apple's "poor man's HBM" is more affordable than real HBM, and AMD threw a lot of cache and logic at it (and three layers; two stacked interposers).

The Instinct Platform that lashes eight of these together is right back to NUMA, though, to climb into the 1TB+ aggregate range.
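If you're curious what "a single memory domain and NUMA domain" actually means to software, here's a minimal sketch (my own illustration, assuming Linux with libnuma; nothing MI300-specific in the code):

```c
/* Minimal sketch: report how many NUMA nodes the OS exposes.
 * On a box presenting "a single memory domain" (the MI300A's primary
 * APU mode as described above), this should print one node; an 8-GPU
 * Instinct platform would show several.
 * Build: gcc numa_probe.c -lnuma
 */
#include <stdio.h>
#include <numa.h>

int main(void) {
    if (numa_available() < 0) {
        printf("No NUMA support exposed (single flat memory domain)\n");
        return 0;
    }
    int nodes = numa_num_configured_nodes();
    printf("NUMA nodes visible to software: %d\n", nodes);
    for (int n = 0; n < nodes; n++) {
        long long free_b = 0;
        long long size_b = numa_node_size64(n, &free_b);
        printf("  node %d: %lld GB total, %lld GB free\n",
               n, size_b >> 30, free_b >> 30);
    }
    return 0;
}
```

The point of the MI300A's APU mode is that code written with all that remote-access juggling baked in just sees one big node and stops paying for the discreteness.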
 
Zbrush ran smoothly on systems a decade and a half old. Part of the reason behind its popularity is its ability to work with fairly high poly counts even on low-specced systems.

It’s also primarily CPU-centric. Apple’s tile-based GPUs have nothing much to contribute in that sense.

Don’t credit Pixologic’s innovation to Apple.

Is it still innovative to continue leaving graphics workloads on the CPU when the GPU cores have access to the exact same amount of memory? "But the CPUs have sooooo much more memory" has been the refrain all the way back to when the Mac Pro 2013 appeared with a compute GPU. Its memory is "too small to be useful" was the complaint.

Apple removes the constraint, and they're still optimizing for hardware from over 10 years ago. Innovation????
 
But with the *massive* crunch in wafer starts at the time these decisions had to be made, would it have been worth it?

I think Apple saw all the pileups in the supply chain and decided that a *lot* of upcoming features would only make sense on the 3nm node.
I guess we'll find out when it gets refreshed in the next couple of years whether this was ever in the works, or they're simply incapable of making it work.
 
so it's not entirely clear to me how you'd bring in another GPU and do so in a way that is optimized for our systems

Oh come on. Remember the Mezzanine slot on iMacs? Somebody will port a driver. There’s no real reason to, but somebody will.

Key wiggle word there is “optimized”.

It’s all zeroes and ones. I don’t see how they could stop you if you knew what you were doing. And someone does.
 
It’s not “is is”, it’s “it is”. I know you’ll appreciate the feedback. 👍🏻

Also not sure what you’re correcting exactly. Oh the 2nd typo, well spotted.
Yes, I’ll be glad when iOS 17 brings the new predictive text. Especially when typing on the phone.
 
Is it still innovative to continue leaving graphics workloads on the CPU when the GPU cores have access to the exact same amount of memory? "But the CPUs have sooooo much more memory" has been the refrain all the way back to when the Mac Pro 2013 appeared with a compute GPU. Its memory is "too small to be useful" was the complaint.

Apple removes the constraint, and they're still optimizing for hardware from over 10 years ago. Innovation????
Because it’s not needed when GPU-bound apps competing in the same space can’t match Zbrush’s speed and polycount innovation on just CPUs?

Memory? Did I mention CPU memory as a factor? Why are we inventing a false argument? Be that as it may, the PC side has superior options to Apple’s solutions. Again, an irrelevant argument.

Zbrush’s innovation started a decade and a half ago, when Apple’s Mx dream was in the realm of ‘someday, if things work out’... assuming Apple was even thinking along those lines back then.
Why should Pixologic even bother about Apple when the vast majority of its users are on the PC side, working well even on cheaper, low-specced hardware?
Yeah. That’s innovation.

Bottom line: Mx brings nothing to the table as far as Zbrush is concerned = nothing to do with Apple’s ‘innovation’ = don’t credit Apple for Pixologic’s innovations that started 15 years ago.
 
I'm old enough to remember when Apple did an apology tour for releasing a Mac Pro where they tried to tell pros what they needed and didn't listen to what they actually wanted. Similar sense of déjà vu now, as they've designed themselves into an unupgradable corner that fails to meet the needs of many genuine pros.
 
I have a new theory, based on M3 rumours. The M3 Ultra is the "Extreme" chip we were hearing about and hoping for in the ASi Mac Pro they released at WWDC.

The 40 cores, the 384 GB of RAM, the 144-core GPU... all sound like the 3nm die shrink, where they can physically fit the equivalent of 4 x M2 Max chips into the physical space of 2: the Ultra. Also, 5nm -> 3nm is not half, which is why it's 40, not 48, cores.

This would possibly mean that the M3 Max is gonna be equivalent to the current M2 Ultra, possibly even a touch faster.

That is gonna finally be a 3D-capable ASi system. This may also explain the lack of 3rd party GPU support. I'm making excuses for Apple here, but it might be that a real 3D-capable iGPU is about 15 months away.
 
I have a new theory, based on M3 rumours. The M3 Ultra is the "Extreme" chip we were hearing about and hoping for in the ASi Mac Pro they released at WWDC.

The 40 cores, the 384 GB of RAM, the 144-core GPU... all sound like the 3nm die shrink, where they can physically fit the equivalent of 4 x M2 Max chips into the physical space of 2: the Ultra. Also, 5nm -> 3nm is not half, which is why it's 40, not 48, cores.

The physical space of 2 probably isn't happening.

They would have to linearly increase the number of memory packages on the package to get to something with the equivalent of 4x an M2 Max's bandwidth. TSMC N3 isn't going to do diddly squat to shrink those packages at all, nor the on-die physical I/O out to that increased package count. (N3 could add some fancier logic to the memory controllers, like ECC or compression, but the I/O side of the memory controllers talking to 'distant' packages elsewhere isn't going to shrink much at all with the same baseline approach: sharing the same fab process as the compute logic, with memory on affordable, plain 2D interposers for "poor man's" HBM.)
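Rough napkin math on why the bandwidth comes from package count rather than from the process shrink. A minimal sketch, assuming LPDDR5-6400 and a 512-bit bus per Max-class die (roughly the published M2 Max ballpark); the figures are illustrative, not a spec sheet:

```c
/* Napkin math: LPDDR bandwidth scales with bus width (i.e. with the
 * number of memory packages hanging off the SoC), not with the logic
 * process node. Assumes LPDDR5-6400 and a 512-bit bus per Max-class
 * die; illustrative figures, not Apple's spec sheet. */
#include <stdio.h>

int main(void) {
    const double mt_per_s = 6400e6;   /* LPDDR5-6400 transfer rate   */
    const int    max_bus  = 512;      /* bus bits per Max-class die  */
    double gb_per_s_per_max = mt_per_s * max_bus / 8 / 1e9;  /* ~410 GB/s */

    for (int dies = 1; dies <= 4; dies++) {
        printf("%d x Max: %4d-bit bus, ~%.0f GB/s\n",
               dies, dies * max_bus, dies * gb_per_s_per_max);
    }
    /* 4x the bandwidth => 4x the bus width => ~4x the LPDDR packages
     * on the SoC package, regardless of N5 vs N3 for the compute dies. */
    return 0;
}
```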

N3 isn't going to double the cache size either. (N3B doesn't particularly shrink SRAM/cache much at all, and N3E is absolutely no shrink at all.) Apple Silicon is relatively cache heavy, which means they won't see the same aggregate shrink that TSMC's nominal test chip sees. And dramatic core count increases without cache to go with them won't really do much for performance on general workloads. (They will/could 'goose' benchmarks that largely fit into cache, but probably won't scale well with normal mixed, concurrent app workloads.)

Apple could keep two dies joined on a single edge if they grew the building block much bigger. That won't let them keep the same size. The more affordable InFO-LSI packaging process is likely out the window regardless of how you 'slice' the problem. So an 800mm^2 block isn't necessarily a roadblock for CoWoS-LSI; it just costs substantially more.

The "Max" , as composed to fit into laptops, really isn't a good chiplet design if want to scale past 2. If Apple is going to "40 cores" there is pretty good chance the Extreme will being using chiplets with different shape(s) than the Max. So the laptop Max die as a unit of space measure probably would be a good fit.

If Apple goes to chiplets and the Extreme slides 'backward' on CPU core count, then the Ultra probably would also. Economically, they probably would need to use the same chiplets for it as well.

This would possibly mean that the M3 Max is gonna be equivalent to the current M2 Ultra, possibly even a touch faster.

Pretty good chance that the laptop Max is decoupled from the desktop SoCs. Again, probably not a good baseline measurement metric, as they will be different dies.

That is gonna finally be a 3D-capable ASi system.

The current systems can't do 3D now? "Capable" isn't the same as "improved performance".

This may also explain the lack of 3rd party GPU support. I'm making excuses for Apple here, but it might be that a real 3D-capable iGPU is about 15 months away.

Apple got a general 20% uplift from M1 Ultra to M2 Ultra with the exact same memory subsystem, using an about three-year-old fab process for M2. So if they go to an incrementally faster LPDDR5 memory subsystem and N3... yeah, it is a pretty safe bet that they will get another substantial uplift. Even if it were just a shrink of what they have (same core count), it would likely go significantly faster.

And even if they keep the core count about the same, they can make "bigger transistor budget" cores, so GPU cores could get some limited HW ray-trace add-ons... yeah, that would be faster in that niche too. (That niche doesn't cover all of '3D', though.)

But Apple has already tossed dGPUs from the rest of the lineup: laptops through the old iMac 27" zone. It isn't primarily about the Mac Pro. If the M3 generation brings another 15-20% uplift to the GPUs, then dGPUs are just even more dead there. M4 probably can squeeze out slightly bigger dies and get some straightforward GPU gains. M5 probably has a fab improvement to leverage. Rinse and repeat on M6. The more thoroughly 'dead' dGPUs are in the rest of the lineup, the more problematic 3rd party GPUs become for the Mac Pro. It's likely not just what Apple can already see in the test labs with M3 that is at issue here; it is a decently long, achievable path into the future.
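Just to put the "rinse and repeat" part into numbers, a quick sketch compounding the 15-20% per-generation GPU uplift mentioned above out through an M6. Pure extrapolation on my part, not a roadmap:

```c
/* Pure extrapolation: compound a 15-20% per-generation iGPU uplift
 * from an M2-class baseline out through M6, per the argument above.
 * Not a roadmap, just the compounding arithmetic. */
#include <stdio.h>

int main(void) {
    const char *gens[] = {"M3", "M4", "M5", "M6"};
    double low = 1.0, high = 1.0;           /* M2 baseline = 1.0x */
    for (int i = 0; i < 4; i++) {
        low  *= 1.15;                        /* 15% per generation */
        high *= 1.20;                        /* 20% per generation */
        printf("%s: ~%.2fx - %.2fx of M2 iGPU performance\n",
               gens[i], low, high);
    }
    return 0;
}
```

Even at the low end that compounds to roughly 1.75x an M2-class iGPU by M6, which is the sense in which dGPU demand in the rest of the lineup keeps shrinking.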

Making applications that run optimally on those products is what pays the "Mac ecosystem" bills. That is the 'equation' that Apple is balancing.

The other equation that Apple is balancing is power. Three 8-pin AUX power sockets disappeared. A large majority of that is likely being reserved for a much bigger SoC package that soaks up substantially more power than the M2 Ultra does. The Mac Pro is at standard (USA) household wall socket power levels. If the power consumption of the "CPU" thermal zone goes up significantly, then the other zones are going to have to give some up. A fixed cap out of the wall makes it a zero-sum game.
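For scale (my numbers, not Apple's): each 8-pin AUX connector is specced for 150W, so dropping three of them frees roughly 450W of budget inside the same chassis, while a standard US 15A/120V outlet tops out at 1,800W (about 1,440W continuous). Whatever a bigger SoC package takes out of that fixed pool, the PCIe slots and everything else can't have.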
 
Sad thing is: you are not wrong...
Apple indeed appears to be leaving the higher-end computing space on purpose.
It's more lucrative to do so, and they're still serving the overwhelming majority of the market with best in class computers. Not best priced, but certainly best in class all things considered.
 
It's more lucrative to do so, and they're still serving the overwhelming majority of the market with best in class computers. Not best priced, but certainly best in class all things considered.

It's kinda hard to think Apple makes the best-in-class computers:

- Macs don't have the fastest storage, nor the largest storage options.
- Macs' sound cards are below average; any gaming computer smokes Macs' sound cards.
- Mac GPUs aren't better than Nvidia's or even AMD's.
- The M2 Ultra CPU has about the same performance as a much cheaper 13th-gen Intel Core i9.


And the list goes on. So basically Macs make sense as laptops, where the battery lasts for many hours; on PC laptops, especially gaming laptops, the battery is pretty much useless.

For desktops, a Mac makes sense if you hate Windows and/or you use specifically Mac-optimized software (Logic / Final Cut).

Some specific tasks are faster on Macs, but on average PCs have a far better performance-to-price ratio.
 
It's kinda hard to think Apple makes the best-in-class computers:

- Macs don't have the fastest storage, nor the largest storage options.
- Macs' sound cards are below average; any gaming computer smokes Macs' sound cards.
- Mac GPUs aren't better than Nvidia's or even AMD's.
- The M2 Ultra CPU has about the same performance as a much cheaper 13th-gen Intel Core i9.


And the list goes on. So basically Macs make sense as laptops, where the battery lasts for many hours; on PC laptops, especially gaming laptops, the battery is pretty much useless.

For desktops, a Mac makes sense if you hate Windows and/or you use specifically Mac-optimized software (Logic / Final Cut).

Some specific tasks are faster on Macs, but on average PCs have a far better performance-to-price ratio.
As a total package, I stand by it: they make the best computers. Never mind macOS vs Windows.
 
The physical space of 2 probably isn't happening.

They would have to linearly increase the number of memory packages on the package to get to something with the equivalent of 4x an M2 Max's bandwidth. TSMC N3 isn't going to do diddly squat to shrink those packages at all, nor the on-die physical I/O out to that increased package count. (N3 could add some fancier logic to the memory controllers, like ECC or compression, but the I/O side of the memory controllers talking to 'distant' packages elsewhere isn't going to shrink much at all with the same baseline approach: sharing the same fab process as the compute logic, with memory on affordable, plain 2D interposers for "poor man's" HBM.)...

So what kind of uplift would you project for the jump from M2 to M3? I've heard people speculate 70% over M1. That math would be a pretty huge 50% bump from M2.
 
So what kind of uplift would you project for the jump from M2 to M3? I've heard people speculate 70% over M1. That math would be a pretty huge 50% bump from M2.

I would project a general 20%, but with some specialized corner cases that jump much higher.

There is nothing to indicate that designing for N3B was going to be easy early on, even less so now that everyone else 'ran for the hills' away from it. So leave the 'crazy' large leaps for M4, after getting a footing on the N3-generation tools.

A lot depends on whether Apple does anything for ECC RAM or not. If they don't, it is a bigger leap, but with dubious data safety. If they add safety with ECC, it is going to 'cost'. They won't go backwards in memory bandwidth, but they're also going to need to spend 'overhead' on ECC, so they're not getting everything.
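For context on the 'overhead' part (generic DRAM numbers, nothing Apple-specific): conventional side-band ECC stores 8 check bits per 64 data bits, i.e. roughly 12.5% extra capacity on extra chips. LPDDR packages don't have that side-band, so ECC there is typically done inline, with the check bits stored in and fetched over the same bus, which means the protection is paid for in capacity and bandwidth rather than in extra packages.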
 
I’m not talking about the split between Mac Pro and other Macs. 80/20 is my estimate (mostly based on other people’s guesses, no actual knowledge) between Mac Pro users not needing PCI graphics cards, and Mac Pro users that do.
I’d think that’s kinda high for “needing PCI graphics cards”, but mainly because anyone who needs a PCI graphics card more than macOS left once there weren’t going to be any more Nvidia cards. Those aren’t future Mac customers, as they’ve likely got all their work tied up in CUDA and so wouldn’t even move to AMD if AMD came out with something more performant.
 
I’d think that’s kinda high for “needing PCI graphics cards”, but mainly because anyone who needs a PCI graphics card more than macOS left once there weren’t going to be any more Nvidia cards. Those aren’t future Mac customers, as they’ve likely got all their work tied up in CUDA and so wouldn’t even move to AMD if AMD came out with something more performant.

But the multiple long droughts Apple has built up (2010->2019, 2019->2023) have conditioned even the folks who stayed around into "need a card" mode. Just a different angle than "need CUDA": they need it to bump the GPU card in year 3-5 when there is no end in sight of relief (doomsday card preppers). A bigger problem Apple has is the notion that they are just going to lapse back into Rip Van Winkle mode for some long stretch, and folks need some back-up plan to extend the life of the system.

That "need a refuge' is busted in multiple ways. New embedded dGPUs in new Intel Macs became options for older Mac Pros over time. Well, Apple nuked dGPUs from rest of the line up. There is no repurposed embedded GPU drivers coming in relief from the rest of the Mac ecosystem. relatively huge increase in iGPU performance range covers is going to shrink any eGPU demand for other Macs also. So where is the supply going to come from if there is shrinking demand.

Not all of the '80' shops of the 80/20 are eyeball-deep hooked into CUDA. If they rotate systems on a 3-year cycle and on the next cycle the EXACT same Mac Pro is sitting there... a bunch are going to leave. It wasn't about the brand of card or its modularity. There is a decent number of folks who were just looking for regular upgrades and who were not in the AMD vs Nvidia 'wars'. However, blowing the self-imposed deadline for the transition (by over 2 years) isn't going to instill much confidence there. It isn't a hardware thing. They have lots of reputation rebuilding to do. Otherwise those folks will stick to where the schedule is more regular, even if not annual.

As for the group of folks who pick the GPU card first and then wrap the rest of a PC around it: Macs never really were that in the first place. But at this point there is always going to be an Apple GPU at the nominal default core of a Mac system. Those 'card first' folks... Apple wouldn't get them even if it had drivers, because that card isn't going to be the dominant 'first class' citizen in a Mac system. Their 'precious' card being relegated to non-first-class status is something those folks will likely just reject in a visceral fashion.
 
Even if it's not a direction Apple wants to pursue, a lot of customers want Apple to pursue that direction.

Another issue I've been hearing people complain about is that the maximum RAM is too low. Enabling some other way to expand RAM, or increasing the maximum, seems important to a lot of commenters.
 
This is what I don't understand; every time I watch a review on YouTube about almost literally anything, it's always about how it benefits 'content creators' (aka YouTube douchebags) with their Adobe Premiere / Apple Final Cut workflow.

Since when is video editing the only thing a computer is used for and measured on? WTF.
Probably because YouTubers are all content creators, and that's the only lens they see computers through. I agree it's myopic.
 
Disappointed because it’s not a machine you feel you could comfortably recommend to others? Because if the decision doesn’t impact you as you weren’t going to buy it, where does the disappointment factor in?

I don’t know why you bring up me having to recommend the machine as a prerequisite for being disappointed at this turn of events.

There’s nothing objectively wrong with this decision as long as the product serves its consumer base; my disappointment is purely subjective, as I liked the fact that there was at least one Mac that was wholly expandable.
 
And? What's new? Did you not read the post where I mentioned they aren't the best priced? Regardless of pricing and upgrades, areas where Apple has never been inexpensive, as a total package Macs are the best computers on the market for most users.

most users = users like you

Most users != users like me

To most users like us, no upgradability and uber-expensive prices don't make Macs "the best computers on the market."
 