MP 7,1 Bros: We're headed into 2025. How are we feeling?

But what is weird/surprising is that I apparently have 1TB working on the original Apple 16-core, when all documentation says the max is 768GB. Photoshop shows 1024GB available, and the entire amount can be allocated if desired.

This IS OWC RAM, though; maybe they did something to allow more?
Look at the actual Intel docs, not what Apple says. Intel ARK is clear: the CPU supports up to 1TB of RAM. OWC memory is just memory, sourced from one of the reputable manufacturers that also build sticks under their own name (Hynix, Samsung). OWC just adds its own tax and resells it.
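If you want to sanity-check what the OS itself sees (rather than trusting Photoshop's readout), a couple of lines of Swift will do it. Just a sketch; `sysctl hw.memsize` in Terminal reports the same number:

```swift
import Foundation

// Ask the OS how much physical RAM it actually recognises.
let bytes = ProcessInfo.processInfo.physicalMemory
print("Installed RAM: \(bytes / (1024 * 1024 * 1024)) GB")
```

If that prints 1024, macOS genuinely sees the full 1TB, whatever the spec sheets say.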
 
That's my main issue. Logic Pro X uses a single core for an armed (live) channel. So a few CPU-intensive plugs and it's over. My MBP M3 Max performs much better in that scenario. There are also a few bottlenecking issues you come across where things are not optimised for Intel anymore.

Very hopeful Apple give us something soon in terms of a tower update.

I have a feeling they will kill the Mac Pro completely :(
 
If Apple aren’t prepared to support NVIDIA GPUs and CUDA, they may as well have discontinued the Mac Pro in 2013.

Since then, high-end Macs have essentially been left with pro audio / video production (admittedly a decent sized market). The Trashcan clearly set out Apple’s MP direction in 2013, which continued in a similar vein with the Studio.

In retrospect, the 7,1 was an anomaly. It seemed to be created in a late-2010’s panic, which subsided once they decided Apple Silicon was ready. AS has no (real) answer for the Mac Pro of course. But they're cool with that, as it has major advantages for laptops - the majority of Mac sales. Being able to re-use iPhone / iOS technologies for the Mac obviously brings all sorts of benefits too.
 
Does anyone know why the Silicon Mac Pro wasn't made more expandable? Seems like a waste of a computer chassis.
 
Does anyone know why the Silicon Mac Pro wasn't made more expandable? Seems like a waste of a computer chassis.

- Apple Silicon / APIs don't support GPUs outside the SoC.

- The AS MP is otherwise expandable (within the limits of driver support), but PCIe devices are sharing a relatively small number of lanes (via PLX switches). This is because Apple's SoCs don't generally have many external PCIe lanes (why would a laptop need them?). The Ultra has a few spare, so those are being used.
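If you're curious what that looks like from software, here's a rough Swift / IOKit sketch that lists every PCI device macOS knows about (assumes macOS 12+ for kIOMainPortDefault). On the 2023 Mac Pro, cards sitting behind the switches show up in this walk like any direct-attached device:

```swift
import Foundation
import IOKit

// Walk the IORegistry and print the name of every PCI device the OS sees,
// whether it hangs directly off the SoC's lanes or sits behind a PCIe switch.
var iterator: io_iterator_t = 0
guard IOServiceGetMatchingServices(kIOMainPortDefault,
                                   IOServiceMatching("IOPCIDevice"),
                                   &iterator) == KERN_SUCCESS else {
    fatalError("IORegistry query failed")
}
var device = IOIteratorNext(iterator)
while device != 0 {
    var name = [CChar](repeating: 0, count: 128) // io_name_t is 128 bytes
    if IORegistryEntryGetName(device, &name) == KERN_SUCCESS {
        print(String(cString: name))
    }
    IOObjectRelease(device)
    device = IOIteratorNext(iterator)
}
IOObjectRelease(iterator)
```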
 
Apple Silicon / APIs don't support GPUs outside the SoC.

I am sure they could have done so if Apple wanted to develop it that way. But Apple believes it can do no wrong.

They should have supported adding GPUs for the tasks they do better.

It is not impossible, it just needs Apple to not be so damn stubborn.
 
I am sure they could have done so if Apple wanted to develop it that way. But Apple believes it can do no wrong.

They should have supported adding GPUs for the tasks they do better.

It is not impossible, it just needs Apple to not be so damn stubborn.

Of course, on a technical level there is nothing to prevent it. ARM workstations such as the Ampere Altra manage it just fine. The issue is that the iOS / macOS graphics stack is fully optimised for SoC-based GPUs, and Apple doesn't want to muddy the waters for the sake of a single, slow-selling workstation. Especially when the audio / video pros that currently buy it are fine with the M2 Ultra's GPU.

Anyway, as I mentioned above, if Apple won't support NVIDIA / CUDA - which they've refused to for well over a decade - PCIe GPU support is kind of moot anyway.
 
The issue is that the iOS / macOS graphics stack is fully optimised

Yet that fully optimised stack runs a Radeon Pro perfectly fine in Tahoe, and the same OS has no issues with regular Apple Silicon.

I have both types on macOS Tahoe.

Apple could do this and do it very well; it just needs marketing to push it. And general Apple hubris, i.e. Apple thinks it can never be wrong.

I’ve spent enough big $$$ with them that I can be critical of them. ;)
 
If it doesn't have the capability to expand (GPU RAM), then it needs to be put down. Dummies should've called the 2013 MP the Mac Studio, and kept on developing the 5,1 into the 7,1.

I agree. Maybe this was their plan all along, especially since the gap between the 2013 MP and the 2019 MP was huge, not to mention the crazy starting MSRP for the 2019 MP, which put a lot of people on the sidelines. Maybe this was their way to just kill off a customizable Mac Pro completely.

The only upsides I see in the MP now are the PCIe cards for folks like audio + video professionals, and the ability to add a lot of storage.

Lack of RAM + GPU as you said kind of kills off a lot of people.
 
If Apple aren’t prepared to support NVIDIA GPUs and CUDA, they may as well have discontinued the Mac Pro in 2013.

Since then, high-end Macs have essentially been left with pro audio / video production (admittedly a decent sized market). The Trashcan clearly set out Apple’s MP direction in 2013, which continued in a similar vein with the Studio.

In retrospect, the 7,1 was an anomaly. It seemed to be created in a late-2010’s panic, which subsided once they decided Apple Silicon was ready. AS has no (real) answer for the Mac Pro of course. But they're cool with that, as it has major advantages for laptops - the majority of Mac sales. Being able to re-use iPhone / iOS technologies for the Mac obviously brings all sorts of benefits too.

Apple will NEVER ever put NVIDIA back in their Macs again. They had a major fallout after faulty GPUs in their MacBooks in the early days and Apple had to do major recalls. And on top of that NVIDIA doesn't play nice with vendors, they write lackluster drivers, etc. I have personally spoken to both Apple insiders and NVIDIA insiders and they don't gel.

It's a shame because the 2019 MP is such a beautiful piece of hardware, and underutilized.

Does anyone know why the Silicon Mac Pro wasn't made more expandable? Seems like a waste of a computer chassis.

Because Apple Silicon has the RAM (shared by the GPU, btw) on the same package as the CPU, plus a bunch of other stuff on the same SoC. They'd have to add PCIe switches if they wanted to allow PCIe GPUs, which would degrade performance... and as for add-on RAM, I have no idea how technically feasible that is... so they chose to just allow a medium range of PCIe cards in the M-class Mac Pro.
 
Yet that fully optimised stack runs a Radeon Pro perfectly fine in Tahoe, and the same OS has no issues with regular Apple Silicon.

I have both types on macOS Tahoe.

Good point. When developers write software for macOS, do they take into account the unified memory model of Apple Silicon? Or is this stuff just taken care of by the compiler? If the latter, then perhaps it is just laziness on Apple's part. My impression was that it would require extra work from developers to support two different models of GPU operation, but perhaps not.

The deprecated Intel code paths obviously still work, but all AS code to date has been able to assume unified memory. Could an AS MP with a PCIe GPU run this code unmodified? I guess in the worst case it would just require some kind of translation layer / shim. Most apps aren't going to use much VRAM anyway.
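To expand on that a little: Metal code already has to pick a storage mode per resource, so the branch point for a hypothetical discrete-GPU Mac largely exists in the API today. A minimal sketch (the buffer size is just for illustration):

```swift
import Metal

// Query whether the GPU shares memory with the CPU and pick a storage mode
// accordingly. On Apple Silicon hasUnifiedMemory is true; on an Intel Mac
// with a discrete Radeon it's false, and .storageModeManaged keeps separate
// CPU and GPU copies of the buffer in sync.
guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("no Metal device available")
}

let options: MTLResourceOptions =
    device.hasUnifiedMemory ? .storageModeShared : .storageModeManaged

// A hypothetical 16 MB working buffer, allocated with the chosen mode.
let buffer = device.makeBuffer(length: 16 * 1024 * 1024, options: options)
print("allocated:", buffer?.length ?? 0, "bytes; unified memory:", device.hasUnifiedMemory)
```

Code that hard-codes .storageModeShared and assumes zero-copy CPU access is exactly the code that would need a shim on a PCIe GPU.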

Apple could do this and do it very well; it just needs marketing to push it. And general Apple hubris, i.e. Apple thinks it can never be wrong.

What's the marketing implication? If the next Mac Pro could accept PCIe GPUs, surely that would just remove criticism from certain high end Mac users? The majority of Mac customers would just continue buying MacBook Airs, oblivious to the change. It might give Studio Ultra customers pause. But then the Mac Pro is £3K more expensive and a completely different form factor, so maybe not.

Given its mobile roots, the whole Apple Silicon thing is performance / watt. All their products are thin / compact, with silent cooling. A big box with a power guzzling GPU is at odds with this aesthetically, so sits awkwardly in the range. The current Mac Pro form factor is 100% Xeon workstation. Apple commonly reuses case designs over an architectural transition, but the v2 might change things up. Or maybe the v1 will just hang around for a while as a bridge for those with PCIe cards, then get discontinued. Given the ease with which they could have upgraded the logic board to an M3 Ultra, if 2025 ends with no MP revision, it'll likely suggest the latter.

Again, though, is there much market for a Mac Pro that doesn't take NVIDIA cards?

I agree about the hubris. Apple is also very image conscious; their whole persona is based on 'perfection', so they almost never admit error.

I’ve spent enough big $$$ with them that I can be critical of them. ;)

No argument there.
 
Good point. When developers write software for macOS, do they take into account the unified memory model of Apple Silicon? Or is this stuff just taken care of by the compiler? If the latter, then perhaps it is just laziness on Apple's part. My impression was that it would require extra work from developers to support two different models of GPU operation, but perhaps not.

The deprecated Intel code paths obviously still work, but all AS code to date has been able to assume unified memory. Could an AS MP with a PCIe GPU run this code unmodified? I guess in the worst case it would just require some kind of translation layer / shim. Most apps aren't going to use much VRAM anyway.



What's the marketing implication? If the next Mac Pro could accept PCIe GPUs, surely that would just remove criticism from certain high end Mac users? The majority of Mac customers would just continue buying MacBook Airs, oblivious to the change. It might give Studio Ultra customers pause. But then the Mac Pro is £3K more expensive and a completely different form factor, so maybe not.

Given its mobile roots, the whole Apple Silicon thing is performance / watt. All their products are thin / compact, with silent cooling. A big box with a power guzzling GPU is at odds with this aesthetically, so sits awkwardly in the range. The current Mac Pro form factor is 100% Xeon workstation. Apple commonly reuses case designs over an architectural transition, but the v2 might change things up. Or maybe the v1 will just hang around for a while as a bridge for those with PCIe cards, then get discontinued. Given the ease with which they could have upgraded the logic board to an M3 Ultra, if 2025 ends with no MP revision, it'll likely suggest the latter.

Again, though, is there much market for a Mac Pro that doesn't take NVIDIA cards?

I agree about the hubris. Apple is also very image conscious; their whole persona is based on 'perfection', so they almost never admit error.



No argument there.

It's not hubris imo, it's just a niche market, and a lot of folks who relied on PCIe add-ons and were using Macs either moved to PCs, kept their old Macs, or moved to PCIe boxes or alternate hardware with a Mac Studio etc.

NVIDIA makes the best GPUs in the world, and there's still a market for that, especially in the creative field (not counting gaming, which macOS was never really good at).

The other person mentioned Radeon drivers in Tahoe, but he's talking about the 6xxx series (which is the max it supports) and on an Intel Mac only (non-M series). So nothing has changed, really.

After Tahoe, Apple will be removing all Intel libraries.
 
Apple will NEVER ever put NVIDIA back in their Macs again. They had a major fallout after faulty GPUs in their MacBooks in the early days and Apple had to do major recalls.

This is Apple lore, and I had one of those 2007 MacBook Pros (got a free logic board replacement out of warranty). But I'm always a bit sus that this completely explains it. AMD has also had issues with widespread GPU failures in MBPs (2011 models IIRC).

I tend to think it's a combination of a) being a distant second in the GPU market, AMD cuts Apple much better deals (as they do with console makers), and b) NVIDIA's CUDA being a threat to Metal. CUDA is essentially the industry standard, so if pro Macs had NVIDIA GPUs, cross-platform software would just use it in preference to Metal. This has a bunch of implications, including Apple forever being obligated to include NVIDIA GPUs in their high end Macs.

And on top of that NVIDIA doesn't play nice with vendors, they write lackluster drivers, etc. I have personally spoken to both Apple insiders and NVIDIA insiders and they don't gel.

I'm sure they're bastards etc, but they wouldn't dominate the GPU industry if they made crap products. Apple have a bit of a reputation themselves. They're both industry giants that swap positions for market cap, so neither feels like bending the knee to the other.
 
This is Apple lore, and I had one of those 2007 MacBook Pros (got a free logic board replacement out of warranty). But I'm always a bit sus that this completely explains it. AMD has also had issues with widespread GPU failures in MBPs (2011 models IIRC).

I tend to think it's a combination of a) being a distant second in the GPU market, AMD cuts Apple much better deals (as they do with console makers), and b) NVIDIA's CUDA being a threat to Metal. CUDA is essentially the industry standard, so if pro Macs had NVIDIA GPUs, cross-platform software would just use it in preference to Metal. This has a bunch of implications, including Apple forever being obligated to include NVIDIA GPUs in their high end Macs.



I'm sure they're bastards etc, but they wouldn't dominate the GPU industry if they made crap products. Apple have a bit of a reputation themselves. They're both industry giants that swap positions for market cap, so neither feels like bending the knee to the other.

The NVIDIA situation is not lore. I have actual emails from Jensen saying that it's impossible to work with Apple. And I have had NVIDIA engineers that were supplying the Apple drivers (incl. CUDA drivers) when NVIDIA was supported talk smack about Apple to me personally.

But to your point, yes, CUDA is very proprietary and Apple never liked that. The highest-end Apple GPUs are equivalent to NVIDIA's 4060-4080 (maybe a bit more), which isn't too bad, especially at these power envelopes; it's pretty amazing what Apple is doing GPU-wise.

AMD seemed like a better partner to Apple, but they are a generation or two behind NVIDIA when it comes to GPUs. And historically AMD makes custom GPUs for many other companies as a vendor, like Nintendo (GameCube, Wii, Wii U), PlayStation 4/5/5 Pro, and Xbox (whatever models they have). And AMD had a team inside of Apple building their drivers. But that partnership is essentially over after this year, with the last Intel devices being the only supported ones. They didn't even add 7xxx AMD GPU drivers; it's like a 3-4 year old GPU by now.

NVIDIA is king in the GPU space, bar none, and it will probably stay that way for a while. Jensen banked on AI many generations ago and they won.
 
The NVIDIA situation is not lore. I have actual emails from Jensen saying that it's impossible to work with Apple. And I have had NVIDIA engineers that were supplying the Apple drivers (incl. CUDA drivers) when NVIDIA was supported talk smack about Apple to me personally.

Lore doesn't imply it's untrue, just that it's the accepted / traditional story (which it clearly is). I'm sure the failures - and especially, NVIDIA's initial denials about it - caused Apple hassle and embarrassment at the time. But ultimately it's a business issue that can be sorted with lawsuits etc. It caused a rift in NVIDIA's case, whereas it didn't when a similar thing happened with AMD (albeit without the lying).

The communications you mention are all from NVIDIA's side, saying Apple are difficult to work with.

But to your point, yes, CUDA is very proprietary and Apple never liked that. The highest-end Apple GPUs are equivalent to NVIDIA's 4060-4080 (maybe a bit more), which isn't too bad, especially at these power envelopes; it's pretty amazing what Apple is doing GPU-wise.

Is that a desktop 4080, or mobile? Massive difference given NVIDIA's disingenuous naming. Presumably you're talking about an M3 Ultra? That's an expensive machine vs. a desktop with a 5070Ti (which has equivalent performance to a desktop 4080; the 4080 has been discontinued).

AMD seemed like a better partner to Apple, but they are a generation or two behind NVIDIA when it comes to GPUs. And historically AMD makes custom GPUs for many other companies as a vendor, like Nintendo (GameCube, Wii, Wii U), PlayStation 4/5/5 Pro, and Xbox (whatever models they have). And AMD had a team inside of Apple building their drivers. But that partnership is essentially over after this year, with the last Intel devices being the only supported ones. They didn't even add 7xxx AMD GPU drivers; it's like a 3-4 year old GPU by now.

Sure, I expect AMD were very happy for the business. And given Apple's control of the Mac platform, they were free to go with whoever offered the biggest discounts (and block drivers for new cards, when they want their customers to move to their new machines). Not an option for PC vendors.

NVIDIA is king in the GPU space, bar none, and it will probably stay that way for a while. Jensen banked on AI many generations ago and they won.

Yep.
 
Is that a desktop 4080, or mobile? Massive difference given NVIDIA's disingenuous naming. Presumably you're talking about an M3 Ultra? That's an expensive machine vs. a desktop with a 5070Ti (which has equivalent performance to a desktop 4080; the 4080 has been discontinued).

A good read here on the M4 Max (cheaper than the Ultras)

 
The NVIDIA situation is not lore. I have actual emails from Jensen saying that it's impossible to work with Apple. And I have had NVIDIA engineers that were supplying the Apple drivers (incl. CUDA drivers) when NVIDIA was supported talk smack about Apple to me personally.

Personally, I would be more likely to suspect the GPUs failed because Apple's cooling design was cooking them. They had endemic GPU failures with AMD laptops as well, and of course the G4 Cube and 2013 Mac Pro.

But to your point, yes, CUDA is very proprietary and Apple never liked that. The highest-end Apple GPUs are equivalent to NVIDIA's 4060-4080 (maybe a bit more), which isn't too bad, especially at these power envelopes; it's pretty amazing what Apple is doing GPU-wise.

I still don't buy that 4080 equivalence. Unless you've seen a demonstration of realtime high-resolution, high-frame-rate, high-complexity 3D on AS that's as performant as 4080 systems. Rendering a static image seems like grading on a curve, an easier task. Like AMD used to claim their GPUs were super competitive with Nvidia at 1080p.

I will note Blender have ceased support for Intel Macs, citing AMD driver issues. Affinity's standard technical support answer to any issue (Intel and Apple Silicon) is "disable Metal rendering, disable OpenGL, disable hardware acceleration".

Maybe Metal is just catastrophically, laughably bad as a 3D API and that's why the best quality of Mac-optimised games is Potato at Stopmotion frame rate, when compared to the Windows equivalents. 🤷‍♂️
 
A good read here on the M4 Max (cheaper than the Ultras)


OK, so the mobile 4080. So for Blender rendering, the M4 Max GPU (40 core) is roughly equivalent to a desktop 4070.

That's very impressive given the low power draw. But for context, the 4070's replacement, the 5070, is about 10% faster and costs £500. Plus, if you want to increase the RAM and SSD capacity in a Studio to adequate levels (64GB / 2TB), the price spirals to £3400.
 
Personally, I would be more likely to suspect the GPUs failed because Apple's cooling design was cooking them.

I believe there were similar issues in Dell laptops etc. It was a flaw in the ceramic package of the chip; over time cracks developed that let the internals oxidise IIRC. Somehow Dell et al managed to move past it, though, and continue giving their customers what they wanted. I guess that's having competition for you.

But yeah, Apple products have a long history of cooking GPUs. Similar issues in iMacs too. That's what thin + silent + midrange GPU gets you.
 
Personally, I would be more likely to suspect the GPUs failed because Apple's cooling design was cooking them. They had endemic GPU failures with AMD laptops as well, and of course the G4 Cube and 2013 Mac Pro.



I still don't buy that 4080 equivalence. Unless you've seen a demonstration of realtime high-resolution, high-frame-rate, high-complexity 3D on AS that's as performant as 4080 systems. Rendering a static image seems like grading on a curve, an easier task. Like AMD used to claim their GPUs were super competitive with Nvidia at 1080p.

I will note Blender have ceased support for Intel Macs, citing AMD driver issues. Affinity's standard technical support answer to any issue (Intel and Apple Silicon) is "disable Metal rendering, disable OpenGL, disable hardware acceleration".

Maybe Metal is just catastrophically, laughably bad as a 3D API and that's why the best quality of Mac-optimised games is Potato at Stopmotion frame rate, when compared to the Windows equivalents. 🤷‍♂️

It's definitely the equivalent of a 4080, especially in compute. It's pretty astounding, because any PC-based 4070-4080 equivalent won't run at full speed without a power adapter, and on battery it has reduced performance vs an MBP.

Also, while Blender is a solid benchmarking tool, Cinebench is a better one to use imo.
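If you want a crude number of your own, a few lines of Metal compute will give one. Just a sketch (a saxpy-style kernel I made up for illustration), nowhere near as representative as Blender or Cinebench:

```swift
import Foundation
import Metal

// Time a trivial compute kernel over 16M floats as a crude GPU micro-benchmark.
let source = """
#include <metal_stdlib>
using namespace metal;
kernel void saxpy(device float *y [[buffer(0)]],
                  device const float *x [[buffer(1)]],
                  uint id [[thread_position_in_grid]]) {
    y[id] = 2.0f * x[id] + y[id];
}
"""

let device = MTLCreateSystemDefaultDevice()!
let queue = device.makeCommandQueue()!
let library = try! device.makeLibrary(source: source, options: nil)
let pipeline = try! device.makeComputePipelineState(function: library.makeFunction(name: "saxpy")!)

let n = 1 << 24  // 16M floats, 64 MB per buffer
let x = device.makeBuffer(length: n * 4, options: .storageModeShared)!
let y = device.makeBuffer(length: n * 4, options: .storageModeShared)!

let start = Date()
let cmd = queue.makeCommandBuffer()!
let enc = cmd.makeComputeCommandEncoder()!
enc.setComputePipelineState(pipeline)
enc.setBuffer(y, offset: 0, index: 0)
enc.setBuffer(x, offset: 0, index: 1)
// dispatchThreads needs non-uniform threadgroup support (fine on any Apple Silicon Mac).
enc.dispatchThreads(MTLSize(width: n, height: 1, depth: 1),
                    threadsPerThreadgroup: MTLSize(width: 256, height: 1, depth: 1))
enc.endEncoding()
cmd.commit()
cmd.waitUntilCompleted()
print("GPU time: \(Date().timeIntervalSince(start)) s")
```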

OK, so the mobile 4080. So for Blender rendering, the M4 Max GPU (40 core) is roughly equivalent to a desktop 4070.

That's very impressive given the low power draw. But for context, the 4070's replacement, the 5070, is about 10% faster and costs £500. Plus, if you want to increase the RAM and SSD capacity in a Studio to adequate levels (64GB / 2TB), the price spirals to £3400.

Don't forget it's a laptop and fairly similarly priced to its PC counterparts.
 
Don't forget it's a laptop

Sure, for laptops Apple Silicon is incredible. Given this is a Mac Pro thread, though, I was just clarifying whether you were saying the M4 Max has desktop RTX 4080 performance. Manufacturers love to obfuscate this stuff; you have to pay close attention to the number of GPU cores (Apple) or whether we're talking desktop or mobile (Nvidia).

and fairly similarly priced to its PC counterparts.

That depends on the overall spec of the laptop. A MacBook Pro with a 40-core Max starts at £3700 / £4000, depending on screen size. Something like a Razer Blade will approach this. But if you just need the GPU power and are OK with something less portable that will generally be connected to a monitor, you can e.g. pick up an MSI Vector 16 (Core 7 + 5070Ti) for £1700.
 
It's definitely the equivalent of a 4080, especially in compute. It's pretty astounding, because any PC-based 4070-4080 equivalent won't run at full speed without a power adapter, and on battery it has reduced performance vs an MBP.

Also, while Blender is a solid benchmarking tool, Cinebench is a better one to use imo.

So on your desktop system that costs ~$10k+, the best you can get is second-tier performance from 2-3 years ago?

Is that really an achievement to be happy with? It's kinda damned with faint praise.

Personally, I don't really believe these "I need to do a hardcore Blender render... while running on battery" scenarios are a significant real market. I think this is a bragging point for Apple (fans), because it's what they have, but if you have work that serious to do, you have site power.

I mean if I was on any other laptop, I could just plug into x number of external GPUs as well if render power was the real goal... 🤷‍♂️

Is "I need to do massive render compute... without power" really a bigger market than "I want to be able to upgrade my GPU / Ram post-purchase"?
 
Personally, I don't really believe these "I need to do a hardcore Blender render... while running on battery" scenarios are a significant real market. I think this is a bragging point for Apple (fans), because it's what they have, but if you have work that serious to do, you have site power.

It's impressive on a technical level, and all things being equal, it's obviously desirable. Less heat = less noise and / or a thinner chassis. A smaller power brick is handy too.

But if you're someone who regularly needs strong GPU performance, why are you using a laptop in the first place? A desktop will be even quieter, have more grunt and is cheaper to boot. Plus, who cares how big it is if it's under your desk? In that scenario, it takes up less desk space than a laptop.

Is "I need to do massive render compute... without power" really a bigger market than "I want to be able to upgrade my GPU / Ram post-purchase"?

Quite.
 
So on your desktop system that costs ~$10k+, the best you can get is second-tier performance from 2-3 years ago?

Is that really an achievement to be happy with? It's kinda damned with faint praise.

Personally, I don't really believe these "I need to do a hardcore Blender render... while running on battery" scenarios are a significant real market. I think this is a bragging point for Apple (fans), because it's what they have, but if you have work that serious to do, you have site power.

I mean if I was on any other laptop, I could just plug into x number of external GPUs as well if render power was the real goal... 🤷‍♂️

Is "I need to do massive render compute... without power" really a bigger market than "I want to be able to upgrade my GPU / Ram post-purchase"?

Very weak argument imo when it comes to MacBook Pros. They are incredible machines, power efficient and portable, and they rival desktop performance. I should know: I have a specced-out Mac Pro 2019 and a specced-out M1 Max 16" MBP. I have been docking the MBP lately and it feels amazing enough for real hardcore work.

MBPs have great CPUs and very strong GPUs. They are not NVIDIA and won't be for a while; that is the only argument you have.

What are you using your GPUs for anyway? If it's gaming, then Macs are not for you. If it's hardcore 3D (e.g. Octane, Redshift), then Macs are not for you either. Everything else is fine.
 