
How much would you be willing to pay for a Radeon RX 7900 XTX card driver for an Apple Silicon Mac?

  • I wouldn't pay for the drivers: 23 votes (67.6%)
  • $100: 4 votes (11.8%)
  • $200: 1 vote (2.9%)
  • $300: 0 votes (0.0%)
  • $400: 3 votes (8.8%)
  • $500: 3 votes (8.8%)

  Total voters: 34
Apple has shown zero interest in High Performance Computing. Even the 2019 Mac Pro only offered a single CPU and dual GPUs, which made it at best a mid-tier workstation.

Apple Silicon is very much not designed for HPC, as it prioritizes power and thermal efficiency over raw performance. We have seen that it has trouble scaling effectively past a couple dozen CPU cores and a few score GPU cores (the Ultra). But for its intended markets, its performance is exceptional, and it delivers that performance with class-leading efficiency.

As others have tried to explain, the lack of software drivers is not the issue - Apple Silicon is believed to lack the hardware to interface with external GPUs and one may very well have to completely re-design the SoC to add that functionality.

I think it depends on which definition of HPC you use. HPC as in cluster-level performance? That is certainly not something Apple is interested in. But HPC at the personal computer level, I would say they are very much interested in. I mean, so far they are the only mainstream CPU designer that ships advanced high-throughput vector and matrix hardware across all levels of consumer products.
 
Because there is no interface for writing GPU drivers on Apple Silicon macOS. See my post above.
I have said it once in this forum, but I'll mention it again.

Asahi Lina (developer of the Mx video drivers for Linux) already said it is possible to write third-party GPU drivers for Apple Silicon hardware. It's just extremely difficult. I'm not sure if she meant exclusively a Linux port, or an actual open third-party GPU driver.

There might be a way to create some sort of interface and/or emulate the third-party device somehow to run on macOS, but I would guess that doing so is very hard. It's similar to how even the open-source AMD drivers on Linux lag behind the commercial implementations in performance.

Of course, if Apple deliberately denied kernel-mode access to potential future GPU drivers, such drivers would have to run in user mode, which would make them slower.
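As a rough illustration of why user mode costs something, here is a tiny C++ microbenchmark (entirely my own sketch, not anything from Asahi or Apple) comparing a plain in-process call with a kernel round trip via write() to /dev/null. A user-space driver pays a crossing of roughly the latter kind whenever the kernel has to broker its work, which is where the slowdown comes from.

```cpp
// Rough illustration only: compares a plain in-process call with a
// syscall round trip (write() to /dev/null). A user-space GPU driver
// pays a kernel/IPC crossing like this far more often than a kernel
// driver would, which is one reason it tends to be slower.
#include <chrono>
#include <cstdio>
#include <fcntl.h>
#include <unistd.h>

static volatile int sink;
static void plain_call() { sink = sink + 1; }

int main() {
    const int iters = 1'000'000;
    int fd = open("/dev/null", O_WRONLY);
    char byte = 0;

    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < iters; ++i) plain_call();
    auto t1 = std::chrono::steady_clock::now();
    for (int i = 0; i < iters; ++i) write(fd, &byte, 1);   // kernel round trip
    auto t2 = std::chrono::steady_clock::now();

    auto ns = [](auto a, auto b) {
        return std::chrono::duration_cast<std::chrono::nanoseconds>(b - a).count();
    };
    printf("plain call: %.1f ns/op\n", double(ns(t0, t1)) / iters);
    printf("syscall:    %.1f ns/op\n", double(ns(t1, t2)) / iters);
    close(fd);
    return 0;
}
```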
 
Asahi Lina (developer of the Mx video drivers for Linux) already said it is possible to write third-party GPU drivers for Apple Silicon hardware.
Can you find it? I can only find this.
FAQ: Will you be able to use GPUs on the PCIe slots of the new Mac Pro?

Answer: We don't know yet! The PCIe controllers on prior Apple Silicon chips could not support GPU workloads properly, but we have not checked whether this has changed on the M2 Max, which includes an updated PCIe controller. Or, indeed, on the apparently possibly separate PCIe controller block used in the Mac Pro (it's complicated).

We'll keep you up to date with this topic, but right now it could go either way.
 
Can you find it? I can only find this.

I couldn't find the exact mention now, but thanks to the link you posted, I found this (please scroll down a bit):

@JLO64 @AsahiLinux There is no significant difference between the two. In fact, running just the display out is probably easier, and it should be possible to make it work on the Mac Pro with Asahi with the right setup.

The issue has to do with memory mapping modes and equally affects render and compute workloads. There is a workaround, but it probably drastically affects performance, which is why the viability of (e)GPUs on Apple Silicon hangs on this. There's no point in making it work if it's going to be so slow it's of no benefit.

So, Martin (one of the project developers) acknowledges it IS possible, but it's questionable whether the Mx Macs can do it at a reasonable speed. It's possible that while eGPUs CAN run, they will be so slow that any benefits will be nullified.

Of course, since this is so complex, no one has tried it / managed to do it yet.
 
@Joe Dohn all of these statements are about Linux drivers. If I remember correctly Martin also acknowledged that using those specific mapping modes is probably wrong to begin with, but that’s what GPU drivers traditionally use.

Regarding macOS drivers, however, I’m certain that you can write a generic PCIe driver for GPUs, but you won’t be able to hook it into the system’s graphics APIs. So for an application to use a driver like this, it would need to use a custom graphics API. That’s not sustainable.
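To illustrate where the wall actually is: user space on macOS can already see PCIe devices through the IORegistry, so finding the card isn’t the problem; the problem is that nothing lets a third-party driver plug that device into Metal or the window server. A minimal macOS-only sketch using the standard IOKit C API (it just lists IOPCIDevice entries):

```cpp
// macOS-only sketch: enumerate IOPCIDevice entries in the IORegistry.
// Finding a PCIe GPU from user space is possible; exposing it through
// Metal / the system graphics stack is the part with no supported interface.
#include <IOKit/IOKitLib.h>
#include <cstdio>

int main() {
    io_iterator_t iter = IO_OBJECT_NULL;
    kern_return_t kr = IOServiceGetMatchingServices(
        kIOMasterPortDefault, IOServiceMatching("IOPCIDevice"), &iter);
    if (kr != KERN_SUCCESS) return 1;

    io_object_t dev;
    while ((dev = IOIteratorNext(iter)) != IO_OBJECT_NULL) {
        io_name_t name;  // 128-byte buffer defined by IOKit
        if (IORegistryEntryGetName(dev, name) == KERN_SUCCESS)
            printf("PCI device: %s\n", name);
        IOObjectRelease(dev);
    }
    IOObjectRelease(iter);
    return 0;
}
```

Compile with clang++ and the IOKit and CoreFoundation frameworks; it will happily list whatever sits on the bus, and that is about as far as a generic driver can take you without Apple’s cooperation.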

For example, Nvidia might be able to write a CUDA-only driver for macOS (they did in the past), but I am very skeptical about them diverting major engineering resources just so that they can sell a few dozen GPUs to Mac Pro users.
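For what it’s worth, to an application a CUDA-only driver would look something like the following: the app enumerates and drives the card entirely through Nvidia’s own API (the CUDA driver API here) and never touches Metal. This is a purely hypothetical sketch, since no such driver exists for modern cards on macOS:

```cpp
// Hypothetical sketch: what an app on a CUDA-only driver would do.
// It talks to the card through Nvidia's CUDA driver API instead of
// Metal; the macOS driver and libcuda it assumes do not currently exist.
#include <cuda.h>
#include <cstdio>

int main() {
    if (cuInit(0) != CUDA_SUCCESS) {
        printf("No CUDA driver available\n");
        return 1;
    }
    int count = 0;
    cuDeviceGetCount(&count);
    for (int i = 0; i < count; ++i) {
        CUdevice dev;
        char name[256];
        cuDeviceGet(&dev, i);
        cuDeviceGetName(name, (int)sizeof(name), dev);
        printf("CUDA device %d: %s\n", i, name);  // e.g. a compute card in a PCIe slot
    }
    return 0;
}
```

Building it would need the CUDA toolkit headers and a libcuda to link against, which is exactly the part missing on Apple Silicon today.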
 
For example, Nvidia might be able to write a CUDA-only driver for macOS (they did in the past), but I am very skeptical about them diverting major engineering resources just so that they can sell a few dozen GPUs to Mac Pro users.
Would that even be possible with macOS security? I thought that Nvidia cards were DOA on the Mac a while ago because of Gatekeeper and Metal (Mojave, I think).
 
I would rather see Apple develop a way to have multiple SoC cards and multiple GPU cards in one system, all working as if they were a single entity...?

But if Apple could come up with a SuperDuperUltraHighSpeed backplane system: just imagine (up to) four Mn Ultra SoC cards & (up to) eight ASi GPU cards in one system, with the OS seeing them all as one giant lump of CPU/GPU/NPU/RAM resources...!
I think there are two problems with this idea: 1. There just aren’t enough Mac Pros sold to justify it. 2. The bandwidth and latency issues of this super high-speed backplane mean that to get reasonable performance out of discrete chips (while maintaining a single large unified memory architecture) you would need not only a wide bus but a short one. Each chip would have to sit very, very close to the others, which negates most of the benefit of splitting them onto separate chips in the first place. At that point, just do what Intel and AMD do and build an absolutely massive chip out of multiple tiles.
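Some very rough numbers to make the bandwidth point concrete (these are my own back-of-envelope assumptions, not Apple specs): if each package has on the order of 800 GB/s of local memory bandwidth and even a quarter of its accesses land on a remote package, a PCIe-class backplane link is hopelessly undersized, while an UltraFusion-class link only works over millimetre distances.

```cpp
// Back-of-envelope only; the numbers are assumptions, not Apple specs.
// Compares assumed local memory bandwidth per package with plausible
// board-to-board links to show why a "wide and short" bus is needed.
#include <cstdio>

int main() {
    const double local_bw_gbs    = 800.0;   // assumed per-package memory bandwidth (M2 Ultra class)
    const double remote_fraction = 0.25;    // assume 25% of accesses land on another package
    const double pcie5_x16_gbs   = 63.0;    // roughly PCIe 5.0 x16 usable bandwidth per direction
    const double ultrafusion_gbs = 2500.0;  // Apple's quoted die-to-die figure for UltraFusion

    double needed = local_bw_gbs * remote_fraction;  // traffic the backplane must carry per package
    printf("Remote traffic per package: ~%.0f GB/s\n", needed);
    printf("PCIe 5.0 x16 link:          ~%.0f GB/s (%.1fx too narrow)\n",
           pcie5_x16_gbs, needed / pcie5_x16_gbs);
    printf("UltraFusion-class link:     ~%.0f GB/s (enough, but only over mm distances)\n",
           ultrafusion_gbs);
    return 0;
}
```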

I think something like an M2 Extreme would be cool, but I don’t think Apple sells enough Mac Pros to justify dedicating resources to designing a new set of chip tiles to support one.
 
I think something like an M2 Extreme would be cool, but I don’t think Apple sells enough Mac Pros to justify dedicating resources to designing a new set of chip tiles to support one.

Extreme is likely coming. Recent Apple patents depict a new routing network that connects four SoCs; earlier patents only depicted two directly connected SoCs. So I have no doubt they are at least working on this kind of technology (whether it will see the light of day in an actual product is an entirely different matter, of course).

It’s much less clear though whether they are also looking at NUMA configurations that combine multiple boards. I would definitely like to see that, but as you say, the low volume of Mac Pro might not make it a worthwhile R&D investment.
 
the low volume of Mac Pro might not make it a worthwhile R&D investment.
I keep seeing this, but I feel like it rings hollow. After all, if the Mac Pro was at least valuable enough to move to Apple Silicon, then surely it has a continuing value as a product line.

If it wasn’t worth the R&D to make an M-Extreme, then I can’t see why Apple would keep it around when it’s essentially a Mac Studio with a PCIe backplane. They’d just discontinue it.

I believe the rumor that the “Extreme” processor just wasn’t ready. It makes more sense to me anyway.
 
I keep seeing this, but I feel like it rings hollow. After all, if the Mac Pro was at least valuable enough to move to Apple Silicon, then surely it has a continuing value as a product line.

What is the unique R&D investment of the current Mac Pro? A fancy PCIe switch?

It’s one thing to develop a tower chassis for an existing platform. And an entirely different one to develop an entire set of enterprise features (like NUMA) that permeate all levels of the system.

If it wasn’t worth the R&D to make an M-Extreme, then I can’t see why Apple would keep it around when it’s essentially a Mac Studio with a PCIe backplane. They’d just discontinue it.

I believe the rumor that the “Extreme” processor just wasn’t ready. It makes more sense to me anyway.

I also believe that Extreme is in the pipeline. There is some evidence for that. But again, the big question is scalability beyond four SoCs. Extreme would still be leveraging existing Apple technologies (the only new component would be a high-speed intra-chip routing network). But if you want pluggable boards like Nvidia’s Grace/Hopper, that’s going to be a much more substantial investment.
 
What is the unique R&D investment of the current Mac Pro? A fancy PCIe switch?
Hear me out:
If Apple wasn’t planning on doing anything else with the platform, it would make more financial sense to drop it now.

At least in my opinion, this indicates that the Mac Pro has a future. It’s certainly better than the zombie years of 2013-2019 with zero changes.
It’s one thing to develop a tower chassis for an existing platform. And an entirely different one to develop an entire set of enterprise features (like NUMA) that permeate all levels of the system.
I’m aware of that. I’m more talking about implications rather than anything concrete.
I also believe that Extreme is in the pipeline. There is some evidence for that. But again, the big question is scalability beyond four SoCs. Extreme would still be leveraging existing Apple technologies (the only new component would be a high-speed intra-chip routing network). But if you want pluggable boards like Nvidia’s Grace/Hopper, that’s going to be a much more substantial investment.
Reading posts from people way smarter than me, I don’t think there’s gonna be some insane expansion.

However, I did have a small thought about it, and I wonder if it’s possible that Apple could fit an M-series SoC on a board that would sit in a master/slave setup with the main SoC. Perhaps some tasks (ones that could be run entirely on the slave SoC) could be relegated to the add-in card, freeing up processing power in the main system.

Sorta like using a dedicated rendering/compiling machine as an add-in card. (Which I’m aware already exists; that’s where the idea came from.)
 