
How much would you be willing to pay for a Radeon RX 7900 XTX driver for Apple Silicon Macs?

  • I wouldn't pay for the drivers

    Votes: 23 67.6%
  • $100

    Votes: 4 11.8%
  • $200

    Votes: 1 2.9%
  • $300

    Votes: 0 0.0%
  • $400

    Votes: 3 8.8%
  • $500

    Votes: 3 8.8%

  • Total voters
    34
Really? I don't think so.


...and this does not take into account the power of AMD/Nvidia RT cores, which Apple Silicon does not have. Besides, in a PC you can use several graphics cards and multiply the total computing power several times over.


The situation would look a little better if Apple had released a Mac Pro with an M2 Extreme now. Unfortunately, that didn't happen, and it looks terrible.
TFLOPS has become a marketing term. It's no longer indicative of performance, though it never really was, since different architectures perform differently regardless of TFLOPS.

Read why: https://www.engadget.com/nvidia-rtx-3090-3080-3070-cuda-core-int32-fp32-210059544.html
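
For context, here is a sketch of where a headline TFLOPS figure comes from, using approximate public specs (treat the core counts and clocks below as assumptions). The point of the linked article is that this paper number and delivered performance can diverge:

```c
/* Rough sketch of where a headline FP32 TFLOPS number comes from.
   Core counts and boost clocks below are approximate public specs;
   treat them as assumptions. */
#include <stdio.h>

int main(void)
{
    /* peak TFLOPS = shader units * 2 FLOPs per clock (FMA) * clock in GHz / 1000 */
    double rtx3090    = 10496 * 2 * 1.70 / 1000.0;  /* roughly 35.7 TFLOPS on paper */
    double radeon_vii =  3840 * 2 * 1.75 / 1000.0;  /* roughly 13.4 TFLOPS on paper */

    printf("RTX 3090 peak:   %.1f TFLOPS\n", rtx3090);
    printf("Radeon VII peak: %.1f TFLOPS\n", radeon_vii);

    /* Ampere counts its INT32 pipes as FP32-capable "CUDA cores", so the paper
       number is rarely reachable in real shaders; that is the point of the
       linked article: paper TFLOPS is not delivered performance. */
    return 0;
}
```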
 
One, good luck fitting seven 4090s in a case, and not blowing a fuse doing so.

Two, TFLOPS isn't a hard measure of performance. (Radeon's Vega architecture comes to mind, with the VII comparable in TFLOPS to Nvidia's 30 series but performing like a 1080 Ti in graphics.)

Three, in raster, the gen-to-gen performance jump is minimal (and in some cases, backwards).

Four, the M2 Ultra performs comparably to a 3070 in Blender rendering with Nvidia's OptiX on.

And let's do a performance analysis of the competitors gen over gen.
According to lazy googling leading to UserBenchmark, going from the 10 series to the 20 series brought roughly a 30% performance uplift, 20 to 30 brought about another 30%, and 30 to 40 averaged about 30% as well (the 4090 being the outlier).

Typical good gen over gen improvement over 7 years. Hardly “calculator level” as per your prediction.
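
For perspective, here is that rough ~30% figure compounded across the three generational jumps; a back-of-the-envelope sketch using only the approximate numbers cited above, nothing more:

```c
/* Compound the rough ~30% per-generation uplift over the three jumps
   (10 -> 20 -> 30 -> 40 series) mentioned above. */
#include <stdio.h>

int main(void)
{
    double perf = 1.0;
    for (int gen = 0; gen < 3; gen++)
        perf *= 1.30;                    /* ~30% uplift per generation */

    printf("Cumulative uplift: %.2fx\n", perf);  /* about 2.2x over ~7 years */
    return 0;
}
```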

But I guess we should see what monstrosity the 5090 is, since the goalpost seems to be the top performing gpu.
One: Please don't worry about my fuse. It handled 7x RTX 3090 without any problems, so it would also withstand 7x RTX 4090.

Two: If TFLOPS is not a good indicator of performance, then focus on the Octane results. The Radeon VII has far fewer teraflops than the RTX 3090 and not many more than the 1080 Ti. Given the good optimization of CUDA/OptiX, everything is in order.

Three: It's not about rasterization, but pure computing power. It is known that the Ultra does well in games.

Four: The 3070 has about 35% of the performance of the 4090, similar to the Ultra, so what is your point?
 
Apple makes very opinionated computers that fit most workflows, but not all. If your workload doesn't fit, you're stuck and you have a decision to make: does the loss of performance matter more to you than giving up macOS? For some people it doesn't, and they keep using Apple hardware. For others it does, and they use hardware from other companies.

I'd like to work with macOS and have high computing performance. I think Apple can do that.
 
Hmmm. As others have pointed out, TFLOPS is not a good way to compare in general.
Then ignore TFLOPS and look at the Octane rendering results.
In GFXBench the M2 Ultra compares to a 4080. In Geekbench Compute the Ultra is close to a 4080, and in Blender only OptiX-based Nvidia cards and the 7900 XTX (which is very close to the Ultra) beat it.
This is NOT about GAMING performance. This is about computing performance. No one but YouTubers works using benchmarks.
You're overstating things here. I have no idea what you mean by AMD and RT cores. They don't appear to help AMD very much.
Yes, not much. 20-30%. AMD is far behind Nvidia.
 
I'd like to work with macOS and have high computing performance. I think Apple can do that.
You may need to rethink your expectations. For better or worse, Apple makes many choices on behalf of its customers. And so far Apple has decided that's impossible, though it may change in the future.
 
I'd like to work with macOS and have high computing performance. I think Apple can do that.

Finally, we get to the core of the issue! Many of us share your sentiment. And I also believe that Apple can do it. But their strategy for high computing performance is proprietary in-house devices. Third-party GPUs are out. That said, I see absolutely no reason why Apple couldn't catch up to or even surpass the performance of other GPUs in mainstream applications. They have already pretty much overtaken AMD; performance parity with Nvidia is just a question of cost and business strategy.
 
AMD/Nvidia RT cores, which Apple Silicon does not have

AMD doesn't have any RT cores. They have an intersection test assist instruction which is faster than doing the same calculations with normal instructions. But by that measure, Apple Silicon also has "RT cores" as they have a three-operand conditional selection instruction that can accelerate intersection tests.
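
To illustrate why such instructions matter: a classic ray/box "slab" test reduces entirely to multiplies, min/max, and compare-and-select operations, which is exactly the kind of work those assists speed up. The sketch below is generic C, not AMD's or Apple's actual instruction sequence:

```c
#include <math.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct { float x, y, z; } vec3;

/* Classic "slab" ray/AABB intersection test. Every step is a multiply,
   a min/max, or a compare-and-select, which is the kind of work an
   intersection-assist instruction (AMD) or a fast three-operand conditional
   select (Apple) can collapse into fewer operations. Illustrative only. */
static bool ray_hits_box(vec3 org, vec3 inv_dir, vec3 bmin, vec3 bmax)
{
    float t1x = (bmin.x - org.x) * inv_dir.x, t2x = (bmax.x - org.x) * inv_dir.x;
    float t1y = (bmin.y - org.y) * inv_dir.y, t2y = (bmax.y - org.y) * inv_dir.y;
    float t1z = (bmin.z - org.z) * inv_dir.z, t2z = (bmax.z - org.z) * inv_dir.z;

    float tnear = fmaxf(fmaxf(fminf(t1x, t2x), fminf(t1y, t2y)), fminf(t1z, t2z));
    float tfar  = fminf(fminf(fmaxf(t1x, t2x), fmaxf(t1y, t2y)), fmaxf(t1z, t2z));

    return tfar >= fmaxf(tnear, 0.0f);
}

int main(void)
{
    vec3 origin  = { 0.0f, 0.0f, 0.0f };
    vec3 inv_dir = { 1.0f, 1.0f, 1.0f };   /* direction (1,1,1), already inverted */
    vec3 bmin    = { 1.0f, 1.0f, 1.0f };
    vec3 bmax    = { 2.0f, 2.0f, 2.0f };

    printf("ray hits box: %s\n", ray_hits_box(origin, inv_dir, bmin, bmax) ? "yes" : "no");
    return 0;
}
```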
 
No, that's what this thread is all about. Apple Silicon GPU has too little computing power.

The current generation of Apple Silicon GPUs might have inadequate computing power for some applications, yes. We have no idea what future generations will bring. All we know is that Apple doesn't see any need for third-party GPUs on their platform.

If you are unhappy with this vision, petition Apple to bring third-party GPU support to Apple Silicon. As already explained, neither the current hardware nor the software offers any potential for this support. You can offer as much money as you want; nobody will be able to write a stable third-party driver because the OS completely lacks the necessary interface.
 
nobody will be able to write a stable third-party driver because the OS completely lacks the necessary interface.
This comparison between Linux and macOS may help to explain your point.
In every modern OS, GPU drivers are split into two parts: a userspace part, and a kernel part. The kernel part is in charge of managing GPU resources and how they are shared between apps, and the userspace part is in charge of converting commands from a graphics API (such as OpenGL or Vulkan) into the hardware commands that the GPU needs to execute.

Between those two parts, there is something called the Userspace API or “UAPI”. This is the interface that they use to communicate between them, and it is specific to each class of GPUs! Since the exact split between userspace and the kernel can vary depending on how each GPU is designed, and since different GPU designs require different bits of data and parameters to be passed between userspace and the kernel, each new GPU driver requires its own UAPI to go along with it.

On macOS, since Apple controls both the kernel driver and the userspace Metal/GL driver, and since they are always updated in sync as part of new macOS versions, the UAPI can change whenever they want. So if they need a new feature to support a new GPU, or they need to fix a bug or a design flaw, or make a change to improve performance, that’s not an issue! They don’t have to worry too much about getting the UAPI right, since they can always change it later. But things aren’t so easy on Linux…

The Linux kernel has a super strict userspace API stability guarantee. That means that newer Linux kernel versions must support the same APIs that older ones do, and older apps and libraries must continue working with newer kernels. Since graphics UAPIs can be quite complicated, and often need to change as new GPU support is added to any given driver, this makes it very important to have a good UAPI design! After all, once a driver is in the upstream Linux kernel, you can’t break compatibility with the old UAPI, ever. If you make a mistake, you’re stuck with it forever. This makes UAPI design a very difficult problem! The Linux DRM subsystem even has special rules for GPU UAPIs to try to minimize these issues…
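
As a concrete illustration of what such a UAPI boils down to on Linux (ioctl numbers plus struct layouts shared by the userspace and kernel halves), here is an entirely hypothetical fragment. The names are invented for the sketch; real drivers define their own, much larger sets of these:

```c
/* Hypothetical GPU UAPI fragment, Linux DRM style. Names and layout are
   invented for illustration; real drivers (amdgpu, i915, the Asahi driver)
   each define their own. */
#include <stdint.h>
#include <stdio.h>
#include <linux/ioctl.h>

struct hypo_gpu_submit {
    uint64_t cmdbuf_ptr;   /* userspace pointer to a GPU command buffer     */
    uint32_t cmdbuf_size;  /* size of that buffer in bytes                  */
    uint32_t queue_id;     /* which hardware queue should execute it        */
    uint64_t out_fence;    /* filled in by the kernel: completion fence id  */
};

/* Once a driver like this is upstream, these numbers and struct layouts can
   never change incompatibly; that is the UAPI stability guarantee described
   above, and why the design has to be right the first time. */
#define HYPO_GPU_IOCTL_SUBMIT  _IOWR('H', 0x01, struct hypo_gpu_submit)

int main(void)
{
    printf("HYPO_GPU_IOCTL_SUBMIT = 0x%lx\n", (unsigned long)HYPO_GPU_IOCTL_SUBMIT);
    return 0;
}
```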
 
No, that's what this thread is all about. Apple Silicon GPU has too little computing power.
No. We were on the topic of TFLOPS not being representative of actual performance. You're putting too much stock in marketing numbers.
 
No. We were on the topic of TFLOPS not being representative of actual performance. You're putting too much stock in marketing numbers.
No. The topic of this thread is computing performance. Just forget about Tflops and look at Octane or Blender. The RTX4090 is several times faster in computing than the M2 Ultra. Period.
 
This comparison between Linux and macOS may help to explain your point.


And to make things even more interesting, it seems that for some devices the Mac driver itself is actually split across multiple processors, which might even have different architectures! That is, the firmware (running on the specialised processor) and the kernel driver (running on the main CPU) can share the same codebase and pass control to each other using custom protocols (e.g. https://asahilinux.org/2021/08/progress-report-august-2021/).
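
For a rough idea of the shape of such a split, here is a heavily simplified, hypothetical mailbox message between a kernel driver and a GPU firmware coprocessor. This is not Apple's actual protocol, which is undocumented; all names below are made up:

```c
/* Sketch of a kernel <-> GPU-firmware mailbox message. Purely illustrative;
   Apple's real firmware protocol is undocumented and far more involved
   (see the Asahi Linux progress reports). */
#include <stdint.h>
#include <stdio.h>

enum fw_msg_type {
    FW_MSG_INIT     = 1,   /* bring up the firmware-side scheduler          */
    FW_MSG_RUN_JOB  = 2,   /* point the firmware at a prepared job in RAM   */
    FW_MSG_JOB_DONE = 3,   /* firmware -> kernel completion notification    */
};

struct fw_msg {
    uint32_t type;         /* one of fw_msg_type                            */
    uint32_t seq;          /* sequence number, echoed back in the reply     */
    uint64_t payload_addr; /* address of a shared job descriptor            */
};

int main(void)
{
    struct fw_msg msg = { FW_MSG_RUN_JOB, 42, 0x100000 };
    printf("msg type=%u seq=%u payload=0x%llx\n",
           (unsigned)msg.type, (unsigned)msg.seq,
           (unsigned long long)msg.payload_addr);
    return 0;
}
```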
 
No. The topic of this thread is computing performance. Just forget about Tflops and look at Octane or Blender. The RTX4090 is several times faster in computing than the M2 Ultra. Period.

Then why don't you get the 4090? It is obviously the better choice for your use case, right?
 
Not right. macOS does not support RTX 4090.

Then you have to petition Apple to allow the use of the 4090 or similar GPUs. Right now this conversation is kind of pointless. I also want to be able to charge my petrol car at the free electrical charger in front of our department complex. But it doesn't work like that. I either have to suck it up or buy another car. Same with a Mac. No current-gen Mac will be able to run third-party GPUs. Maybe Apple will add this ability in the future (IMHO, extremely unlikely). Maybe Apple will make their own GPUs that are as fast as or faster than the competition. Who knows.
 
If macOS did support it, you’d complain that no Mac would support 2 x 4090. If it had 2, you’d want 4 etc.
Wrong. What they would be complaining about is the lack of CUDA support, since to get a driver in, Nvidia would have to drop its insistence on CUDA and support only Metal, as per Apple's direction.
 
Maybe Apple will make their own GPUs that are as fast as or faster than the competition.

^^^ This is what needs to happen, Apple silicon (GP)GPUs; even if they cannot be an aggregate part of the overall GPU/UMA pool of resources, they can be used to grind out compute/render tasks in the background while the "iGPU" in the SoC handles actual display graphics...?
 
^^^ This is what needs to happen, Apple silicon (GP)GPUs; even if they cannot be an aggregate part of the overall GPU/UMA pool of resources, they can be used to grind out compute/render tasks in the background while the "iGPU" in the SoC handles actual display graphics...?

That's the design of yesterday. We already see massive increases in working memory sizes for compute workloads. Companies like Nvidia, for example, are actively researching ways to combine compute and memory, and both Nvidia and AMD are working on high-bandwidth interfaces with large memory pools. I just don't understand why you would want Apple to explore solutions that were successful ten years ago, when everyone else seeks to go forward.
 
^^^ This is what needs to happen, Apple silicon (GP)GPUs; even if they cannot be an aggregate part of the overall GPU/UMA pool of resources, they can be used to grind out compute/render tasks in the background while the "iGPU" in the SoC handles actual display graphics...?
That's the design of yesterday. We already see massive increases in working memory sizes for compute workloads. Companies like Nvidia, for example, are actively researching ways to combine compute and memory, and both Nvidia and AMD are working on high-bandwidth interfaces with large memory pools. I just don't understand why you would want Apple to explore solutions that were successful ten years ago, when everyone else seeks to go forward.

I would rather see Apple develop a way to have multiple SoC cards and multiple GPU cards in one system, all working as if they were a single entity...?

But if Apple could come up with a SuperDuperUltraHighSpeed backplane system; just imagine (up to) four Mn Ultra SoC cards & (up to) eight ASi GPU cards in one system, with the OS seeing them all as one giant lump of CPU/GPU/NPU/RAM resources...!
 
I'd like to work with macOS and have high computing performance. I think Apple can do that.

Apple has shown zero interest in High Performance Computing. Even the 2019 Mac Pro only offered a single CPU and dual GPUs, which made it at best a mid-tier workstation.

Apple Silicon is very much not designed for HPC, as it prioritizes power and thermal efficiency over raw performance. We have seen that it has issues scaling effectively past a couple dozen CPU cores and a few score GPU cores (the Ultra). But for the intended markets, its performance is exceptional, and it delivers that performance with class-leading efficiency.

As others have tried to explain, the lack of software drivers is not the issue: Apple Silicon is believed to lack the hardware to interface with external GPUs, and one may very well have to completely redesign the SoC to add that functionality.
 
Just forget about Tflops and look at Octane or Blender. The RTX4090 is several times faster in computing than the M2 Ultra. Period.
Yes. If 3D rendering is crucial for your workflow, the only sane choice is Windows. But it's your choice to make. The RTX 4090 is a wonderful card; I can attest to that.
 