I’m not talking about the split between the Mac Pro and other Macs. 80/20 is my estimate (mostly based on other people’s guesses, no actual knowledge) of the split between Mac Pro users who don't need PCIe graphics cards and those who do.


In that context, those numbers are likely not quite correct.

Its demographic sampling is limited (probably highly skewed toward tinkering folks), but this (yes, dated) slot survey of pre-2019 Mac Pro systems is still informative.




There are a decent number of "Apple boot support and newer video GPU" card pairings there. The survey sprang up when Apple tossed the slots in the Mac Pro 2013, and folks dropped new data points in over time.

The Mac Pro 2019 alleviated the need for a 'boot screen card' because, by that late stage, Apple's EFI implementation had converged into mostly a UEFI implementation and the gap between 'off the shelf' cards and "Mac GPUs" had narrowed considerably. You could hand-wave away the older card in the boot-screen pairing once you try to project this onto the MP 2019 over an extended amount of time. But two things...

1. The newer card over the system's life really counts. (What is probably not captured in the survey is folks on fixed 3-year (or shorter) depreciation/lease-and-replace cycles.)

2. There was also an MP 2019 impact from the desupport of Nvidia cards, where Boot Camp to Windows played a role. (For some, the MP 2019 is a part-time Mac. Apple had a hack so that the MP 2013 was in Crossfire mode when booted into Windows. Some rogue project for giggles, or filling a perceived market need?)



It would probably incrementally help to do a 2019 slot survey now, if only to count the number of non-Apple GPU cards installed. (Which I suspect is really much of the source of the grumbling, at least for those not in the upper 10th percentile of workload demands.)


The survey would place it in the ballpark of 10/90. I think that is probably way off too, but if it were '80' it would be hard to hit '10' even with bad sampling. How much lower than 80... extremely hard to tell. The only thing certain is that there are two radically different groups who probably don't capture the other side in their sampling.

Not sure Apple really does a better job unless it pulls data from support interactions/upgrade profiles. For the MP 2019 it wouldn't show up in GPU accessories sold in any representatively accurate way. MPX unit sales are just another bad way of counting.
 
There will be no "next year" model. Maybe in six or seven years, but that's looking even more unlikely considering Apple has all but abandoned the Pro market.
By using the same SoC as the Mac Studio, a lot of the R&D is already catered for.
Basically swap in a new PCI-E vX slot and switch; not hard to do.
Case is basically ready to rock.
People were saying the Mac Studio was a one-off like the iMac Pro; however, it got a second gen, and prospects look good for gen 3 when the M3 is out.
 
Why should it be crazy?
Let’s play the label game.

Desktop-class ‘ULTRA’ dual GPUs barely beat a previous-gen, middling laptop RTX GPU. That’s crazy.
Or SoC iGPUs barely beat a previous-gen, middling laptop discrete GPU.
 
These dedicated graphics cards are not optimised against the computer's CPU and internal memory. They're just optimised in isolation.

How would you make an NVIDIA 4090 use the M2's RAM?
With intent. If Apple truly wanted to make the RAM accessible by GPUs, they could facilitate it; they're smart cookies. Just like they were able to deliver their current SoC and SiP, blowing competitors out of the water.

Heck, Macs were running Windows with decent efficiency 'til not too long ago.
Didn't seem to be too many issues with hardware compatibility there. You can still hackintosh an AMD card to work with macOS.

It's just Apple locking it down, deliberately -- like when they had a whinge about nVidia and dropped support.
 
Irrelevant, unless you want to create unnecessary cherry-picked differentiation to justify ‘crazy’, when what's crazy is that Apple hasn’t yet been able to match a 4-year-old GPU with its own technological approach, since both systems are supposed to perform the same/similar functions.

I just pointed out the holes in your cherry-picked claim.

You probably don't have the historical context of Macs and Blender. They didn't blend at all until maybe last year.

Apple's improvements to both hardware and software have taken the (high-end) Macs from being in the top 200 to having a shot at competing in the top 30 or even the top 20. Instead of achieving 10% of the performance of an NVIDIA card, they might be able to achieve 75%.

That's impressive considering NVIDIA has also improved a lot in the last 2-3 years.
 
With intent. If Apple truly wanted to make the RAM accessible by GPUs, they could facilitate it; they're smart cookies. Just like they were able to deliver their current SoC and SiP, blowing competitors out of the water.

Heck, Macs were running Windows with decent efficiency 'til not too long ago.
Didn't seem to be too many issues with hardware compatibility there. You can still hackintosh an AMD card to work with macOS.

It's just Apple locking it down, deliberately -- like when they had a whinge about nVidia and dropped support.
Wait, how would you deal with the hit in performance from passing that data off the SoC and back? What does the SoC do with that idle time, which adds up significantly?
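A rough back-of-envelope for the gap that question points at, using public spec-sheet figures (PCIe 4.0 x16 is roughly 31.5 GB/s per direction; Apple quotes 800 GB/s unified-memory bandwidth for the M2 Ultra). The 4 GB working set is an arbitrary example, not a measured workload:

```swift
import Foundation

// Back-of-envelope only; link and memory bandwidths are public spec figures,
// the 4 GB working set is made up for illustration.
let workingSetGB = 4.0
let pcie4x16GBps = 31.5      // ~PCIe 4.0 x16, one direction
let m2UltraMemGBps = 800.0   // Apple's quoted unified-memory bandwidth

let msOverPCIe = workingSetGB / pcie4x16GBps * 1000    // ≈ 127 ms just to push the block across the bus
let msInPackage = workingSetGB / m2UltraMemGBps * 1000 // ≈ 5 ms for the on-package memory to stream the same data

print(String(format: "PCIe transfer: %.0f ms, unified-memory pass: %.0f ms", msOverPCIe, msInPackage))
```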
 
That statement bothered me when he said it and I still can't see it as anything but a dodge. Isn't the answer "the same way you used to?" Sure unified memory is nice within the SoC, but if they wanted to push a block of memory across PCIe and process it remotely that doesn't seem like it would be any harder than it used to be.

It's not harder, it's just not optimised. It's the opposite: Less optimised.
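For what it's worth, the "old way" alluded to above is still expressible in today's Metal API. A minimal sketch (the 64 MB size and buffer names are purely illustrative, not from any real project) of staging a block of data into a discrete GPU's own memory with a blit copy, which is how Intel-era Macs with AMD cards handled it:

```swift
import Metal

// Sketch only: the explicit host -> discrete-GPU copy pattern (pre-unified-memory).
guard let device = MTLCreateSystemDefaultDevice(),
      let queue = device.makeCommandQueue() else {
    fatalError("No Metal device available")
}

let byteCount = 64 * 1024 * 1024
var hostData = [UInt8](repeating: 0xAB, count: byteCount)

// Staging buffer the CPU fills; visible to both CPU and GPU.
let staging = device.makeBuffer(bytes: &hostData,
                                length: byteCount,
                                options: .storageModeShared)!

// Destination buffer in the GPU's own memory (VRAM on a discrete card).
let gpuLocal = device.makeBuffer(length: byteCount,
                                 options: .storageModePrivate)!

// Blit the block across the bus before any kernels touch it.
let cmd = queue.makeCommandBuffer()!
let blit = cmd.makeBlitCommandEncoder()!
blit.copy(from: staging, sourceOffset: 0,
          to: gpuLocal, destinationOffset: 0,
          size: byteCount)
blit.endEncoding()
cmd.commit()
cmd.waitUntilCompleted()   // real code would overlap this copy with other work
```

Whether running that copy out over an external PCIe card counts as "optimal" is exactly what the thread is arguing about.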
 
So, if the Mac Pro is a workstation, it sold fewer than 20,000 units in 2022.


Kind of dubious that Dell+HP+Lenovo are 98% just by themselves (75-85% maybe, but way past 90 seems extreme). Suggests they are counting only from some 'easy to count' spaces as opposed to capturing all the sales through a very wide variety of sales channels.
 
What can be less optimized than "not possible"?

Are you really talking about the same thing, though? If we're talking Uniform (no NUMA drama), Unified access... then no, it isn't possible.

You can't put something 4-6 inches away and get the same round-trip time to memory as something 1 inch away.

What is being talked about is having a discrete GPU and not having any impact on what the software 'sees' in terms of latencies and responsiveness. Is that a new software requirement or not?

It appears that Apple wants folks to write optimization code that works across broad swathes of the product line, far more so than niche optimizations that only work on a narrow range of BTO (or customer-customized) systems.
 
Are you really talking about the same thing, though? If we're talking Uniform (no NUMA drama), Unified access... then no, it isn't possible.

You can't put something 4-6 inches away and get the same round-trip time to memory as something 1 inch away.

What is being talked about is having a discrete GPU and not having any impact on what the software 'sees' in terms of latencies and responsiveness. Is that a new software requirement or not?

It appears that Apple wants folks to write optimization code that works across broad swathes of the product line, far more so than niche optimizations that only work on a narrow range of BTO (or customer-customized) systems.

The argument you're making, though, is "You can't have high-performance GPUs because the memory model is different from what we use with our integrated GPUs." That doesn't really make sense. The whole point of an expansion bus is to expand and support stuff that isn't on the SoC.

Yes, unified memory is the best approach for the integrated GPU. If that GPU isn't sufficient, you need an external GPU. Everything is there to support that external GPU, including historical precedent. If the only thing standing in the way is purity of thought, that seems like a weak argument to me.

They're supporting PCIe storage, even though it probably doesn't all pass through the T2 chip. They're supporting PCIe networking even though it's probably different from the interface to the internal PHY. They support WiFi even though it won't have the same round trip time as Ethernet.

As I said, Apple supported shared memory to integrated GPUs and block memory transfers to discrete GPUs in the Intel days.

A better answer might have been "if GPU vendors are interested in creating drivers for macOS, they're welcome to, but we're not interested in pursuing that ourselves."
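On that last point about the Intel days: Metal still exposes both memory models, so handling either one is mostly a buffer-allocation decision. A minimal sketch, assuming a hypothetical helper of my own naming (hasUnifiedMemory is a real MTLDevice property; the branching policy here is mine, not an Apple recommendation):

```swift
import Metal

// Illustrative helper: choose a storage mode based on whether the device shares
// memory with the CPU (Apple silicon / Intel iGPU) or has its own VRAM (discrete card).
func makeWorkingBuffer(on device: MTLDevice, length: Int) -> MTLBuffer? {
    let options: MTLResourceOptions = device.hasUnifiedMemory
        ? .storageModeShared    // CPU and GPU touch the same pages; no copy needed
        : .storageModePrivate   // GPU-local; fill it via a blit from a shared staging buffer
    return device.makeBuffer(length: length, options: options)
}
```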
 
Give me a Mac Pro 6,1 SE with an M3 Ultra and TB4 ports. Would be cool if the space freed up from the FirePros could then be used to add up to two additional SoCs. Imagine three M3 Ultras in that bad boy!

Two notes from earlier in this thread:

1) Mac Pro 6,1 can support up to 128GB ECC (as opposed to the official 64GB ECC from Apple) but at a lower clock, although the extra RAM trumps the lower clock in terms of performance depending on workflow. https://barefeats.com/tube15.html

2) Mac Pro 6,1 Crossfire support wasn't a "hack"; the FirePros are physically Crossfired. Apple never implemented Crossfire support in macOS, which is why Crossfire was only available when booting into Windows, and then only with certain apps/games that could take advantage. https://barefeats.com/tube07.html

RIP "Rob ART" Morgan!

Long live the 6,1!
 
The argument you're making, though, is "You can't have high-performance GPUs because the memory model is different from what we use with our integrated GPUs." That doesn't really make sense. The whole point of an expansion bus is to expand and support stuff that isn't on the SoC.
I think a better explanation of what they are saying is:
We want people to optimize for our architecture. In order to ensure that happens, we will not provide any other options. If we supported other options, developers (who are lazy by nature, speaking as one myself :) ) would only support the hardware/architecture they already support.
This is known as “burning the boats on the shore” for those with a history background.
 
Weird that I keep seeing videos of real-time effects at full resolution (no downscaling) by 3D designers repeatedly mumbling "wow, I've never been able to do that." That was on the M1 Studio.

It’s almost as if Apple's tile-based GPU strategy works remarkably well when the software is actually designed to use it…

This is one of the reviews I was talking about: ZBrush


At 13 minutes in he tries to break it by upping it to 300+ million polygons (“I can’t imagine why I’d ever have to do this”) and then he’s genuinely surprised he can still sculpt it.

This thing is powerful.
ZBrush ran smoothly on systems from a decade and a half ago. Part of the reason behind its popularity is its ability to work with fairly high poly counts even on low-specced systems.

It's also primarily CPU-centric. Apple's tile-based GPUs have nothing much to contribute in that sense.

Don’t credit Pixologic’s innovation to Apple.
 
ZBrush ran smoothly on systems from a decade and a half ago. It's part of the reason for its popularity: working with fairly high poly counts even on low-specced systems. It's also primarily CPU-centric.

Don’t credit Pixologic’s innovation to Apple.
Can you replicate that workflow on your device? Sculpting works fine with 300+ million polygons on screen?
 
1. Yes 2019
2. Some of it. But I was able to swap and add as I needed.
Thanks! A few more questions:
  1. What is your configuration?
  2. What software are you using?
(Just curious how your workflow compares to those with which I have experience.)
 
Or SoC iGPUs barely beat a previous-gen, middling laptop discrete GPU.
Compared to a 3070 ‘laptop’ GPU, discrete or not.

Apple is selling the Ultras as desktop-class GPUs, unless you count the Studio and the Mac Pro as laptops. Apple might as well put ‘dual’ iGPU or apple pie in there; they'll still be sold as desktop systems.

No one forced Apple to use non-discrete iGPUs. Nor to post cherry-picked results or insert self-congratulatory metrics when the M1 Ultra launched by comparing it with a 3090.

You have things mixed up.
 
You probably don't have the historical context of Macs and Blender. They didn't blend at all until maybe last year.

Apple's improvements to both hardware and software have taken the (high-end) Macs from being in the top 200 to having a shot at competing in the top 30 or even the top 20. Instead of achieving 10% of the performance of an NVIDIA card, they might be able to achieve 75%.

That's impressive considering NVIDIA has also improved a lot in the last 2-3 years.
Don’t assume my ‘historical context’ vis-à-vis either Blender or Macs.
 
Can you replicate that workflow on your device? Sculpting works fine with 300+ million polygons on screen?
Which device? The current TR Pro, the previous 7980X, decade-old MacBook Pros, the no-longer-available 2008 Mac Pro?

Can you give evidence that you cannot do that on other systems?
 
I think a better explanation of what they are saying is:

This is known as “burning the boats on the shore” for those with a history background.

Yeah, that seems about right. But it also explains why people are unhappy-- some people simply need more GPU performance than Apple offers and there's no reason to refuse support in a box full of PCIe slots other than to try and ransom your customers for developer changes.

And the (either sloppy or carefully triangulated) statement of "I can't think of a way to do it optimally" comes off as a bit disingenuous. It can be done; that's obvious. You're either saying you can't think of how to do it, knowing that you have before, or you're saying you can do it but it wouldn't be as optimal as if you had a more powerful integrated GPU, which you don't, so we get nothing.

Saying it's not worth supporting a third-party card in a third-party enclosure at the end of a relatively slow Thunderbolt cable seemed justified. Now they have a box full of Apple-made PCIe slots designed to take third-party cards. They've reduced the problem down to one of pure software, and said they won't take that step -- optimality seems like a more insulting justification than the truth would have been.

That's all I'm saying.
 
AMD really didn't do "Unified, Uniform" memory with the MI250X. The two dies on the 250X present to the end-user app as two GPUs, not one.

I was referring only to AMD’s Trento, and not the MI250X proper. Trento allowed the memory space between host and GPUs to be unified, which is essentially what Apple’s SoCs and even AMD’s APUs do.

GPUs in Trento have their own VRAM pools, but the important bit is the unified memory access with the actual CPU (host) via system RAM. Pointers are used to prevent data movement, and the GPUs can access data as if it were in local VRAM. HBM2e acts more like a last-level cache in the MI250X, whereas in the MI300, CPU and GPU use the same HBM3 pool, which has its own capacity constraints. The issue with SoCs and on-package CPU/GPUs is that electrical power is often shared and is generally biased toward the highest-compute processor (usually the GPU). So, when both are computing at maximum power, the CPU is afforded less power for computation vs. discrete parts.

I think you misconstrued my post. Doesn’t mean much since these aren’t consumer parts anyway. It is technically possible for Apple to create a discrete GPU and have it access a unified memory pool. It would not, however, be possible with PCIe.
 
I still think the new Mac Pro is an overpriced machine.

Apple needs to come out with Mac Pro Mini.

Now it looks like the G4 Cube
 
That IS the point. If you need the slots, a Mac Pro is the answer. If not, the Mac Studio will be better if you need all that power. And no, the slots aren't for graphics cards or memory, but everything else goes. It's not for everyone, but for the right people it will be exactly what they need. These are the two top machines Apple makes; if it doesn't do what you need, get a PC. 👍🏻
LOL, the blind sheep sees a compelling argument and it's always "get a PC if you don't like it, waaah".

All I was saying is that Apple did not put in enough effort to give the Mac Pro, the biggest and baddest Mac, more computing power than the Mac Studio. Cost isn't an issue, as evidenced by the 2019 Intel Mac Pro. They could have had some kind of M2 Extreme quad. That would at least have made it worth paying the extreme price for the Mac Pro.
 