WWDC is all about software and the next OS and I'm sure Metal has been developed further. It has been a huge pain that Nvidia got their monopoly with so many devs only looking to CUDA.
It isn't so much that CUDA was early to the race and has built a large software base. It is more that Nvidia has used some anticompetitive, monopolistic tactics to build a bigger "moat" around that software base (e.g., some "embrace, extend, extinguish" moves on OpenCL).
The solution is not to get CUDA drivers (even if that's a band aid), it's to turn Metal into the best API, period.
CUDA drivers for Nvidia cards do make sense. The real core issue is that it can't be a context where "Metal has to lose for CUDA to win". If Apple is blocked from having a tier-1 API on the Nvidia GPU, so that CUDA constantly has an "inside track" to performance, then Apple will just pass on the Nvidia GPUs. It isn't that Metal has to be "best". Rather it is that Metal isn't being kneecapped.
That's not the solution.
The ML/DL/AI community is profoundly CUDA based. Replacing the proprietary CUDA APIs with proprietary Metal APIs won't help.
Like all ML/AI activity would disappear without Nvidia GPUs. Not.
CUDA isn't just an API - it's the basis for hundreds of solution-specific libraries, so the users never touch the CUDA APIs directly - they use the abstractions that the libraries layer on top of CUDA.
Stuff like TensorFlow runs on top of other "ML engines" like Google's Tensor Processing Unit. CUDA is a "nice to have" in some contexts, but it is in no way necessary. Some folks have made customizations that tightly couple their models to CUDA, but most of them are not going to be happy unless Apple ships a 3-4 GPU container box (which Apple likely won't do anyway).
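The point about frameworks not being welded to CUDA can be sketched in code. Below is a minimal, purely illustrative Python sketch of the backend-dispatch pattern ML libraries use: user code calls a generic operation, and the library routes it to whatever backend (CUDA, Metal, TPU, CPU) is present. Every name here is hypothetical - this is not any real framework's API.

```python
# Hypothetical sketch of backend dispatch, the pattern frameworks like
# TensorFlow use so callers never touch CUDA (or Metal, or TPU) directly.
# All names are illustrative, not a real framework's API.

AVAILABLE_BACKENDS = {"cpu"}  # pretend probe result; a real library queries drivers

def pick_backend(preferred=("cuda", "metal", "tpu", "cpu")):
    """Return the first preferred backend actually present on this machine."""
    for name in preferred:
        if name in AVAILABLE_BACKENDS:
            return name
    raise RuntimeError("no compute backend available")

def matmul(a, b, backend=None):
    """Generic matrix multiply; the backend choice is invisible to the caller."""
    backend = backend or pick_backend()
    # Each backend would hand off to its native kernels (cuBLAS, MPS, ...).
    # Only the plain-Python CPU path is wired up in this sketch.
    if backend == "cpu":
        return [[sum(x * y for x, y in zip(row, col))
                 for col in zip(*b)] for row in a]
    raise NotImplementedError(f"{backend} kernels not implemented in this sketch")

result = matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]])
print(result)  # [[19, 22], [43, 50]]
```

The caller only sees `matmul`; whether the heavy lifting happens on a CUDA card, a Metal GPU, or a CPU is the library's concern, which is exactly why the ecosystem isn't permanently locked to one vendor's API.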
Why is Apple so hell-bent on defining a new proprietary low level compute API and blocking existing compute APIs?
Because the iOS market is about an order of magnitude (or two) bigger than anything selling some Nvidia GPU container box would bring in. CUDA is just as proprietary, if not more so, than Apple's solution. Apple's solution works on multiple GPUs from different vendors in different sizes. There are tons more inference contexts on Apple devices than systems in the learning phase. What counts is folks being able to use what got learned. Nvidia GPUs aren't going to be huge there (with Apple's focus on privacy and local inference on local data).
This too really isn't the core of the issue/problem. Apple isn't going to put Nvidia GPUs into most of its systems. Nvidia has no x86 integrated solution. Nor do they have any ARM integrated solution that is in any way competitive with Apple's. In the Apple ecosystem context, CUDA is not a multi-platform solution that spans the ecosystem.
CUDA doesn’t run on Apple's own GPUs.
Apple is now a GPU vendor. That’s why this is happening. Nvidia is a competitor, not a partner.
It isn't just the GPU. Apple has their own Neural ("AI") Engine too. Both Apple and Nvidia probably need to act like better partners here. Big companies can compete and partner where it makes sense in different areas.
Apple needs a way for CUDA to 'fit in' alongside Metal. Nvidia needs to commit to seriously enabling Metal (and not attacking it. Indirectly picking a fight with iOS is doom for Nvidia as a minor component supplier. They are not that essential to the ecosystem).
Even if Apple doesn’t (yet) ship its own GPUs in its top-end Macs, that doesn’t change the fact that they don’t want a second compute API muddying the waters.
Metal doesn't really cover all that CUDA does. Apple not wanting to be a second-place alternative (due to lack of efficient access to the hardware) is one thing. Apple pruning off the choices just so Metal can 'win' is just about as bad as Nvidia kneecapping Metal for CUDA to win.
And Apple would say they have just as much of a right to define a first party compute API for their GPUs as Nvidia does on theirs.
But Apple's GPUs probably are not going to be universally pervasive. Some folks will use hardware outside Apple's standard configurations. Apple shouldn't be taking Metal into a "designed into a corner" context where it is only effective with Apple's own direction of GPU implementation. That isn't a good foundational solution across the platform either.
Right now the two are in finger pointing mode. Nvidia: "It's Apple's fault." Apple: "It's Nvidia's fault." Frankly, it has the appearance that they are using "fan clubs" on each respective side to further escalate. What is probably needed is for both sides to work together more tightly than they have done in the past. (If they have blown up the trust there ... that's a core issue.)