
M4pro

macrumors regular
Original poster

“On March 31, 2026, Apple officially approved George Hotz’s Tiny Corp TinyGPU driver extension, allowing Apple Silicon Mac users to run external Nvidia and AMD GPUs over Thunderbolt/USB4 for the first time since 2020.”

“TinyGPU installs through Apple’s official DriverKit framework, enabling AMD RDNA3+ and Nvidia Ampere+ GPUs (RTX 30-series and newer) via Thunderbolt 4 or USB4. However, it’s compute-only: no gaming acceleration, no display output, no Metal API support. macOS treats the external GPU purely as a compute device for AI and ML workloads.

The technical limitations matter. Thunderbolt 4 provides 40 Gbps bandwidth—significantly slower than desktop PCIe x16’s 128 Gbps on modern systems. Setup isn’t plug-and-play either; it requires Docker for compilation, command-line comfort, and manual configuration. Gamers and video editors hoping for Nvidia-accelerated rendering will find nothing here. This targets data scientists and AI researchers who need CUDA compatibility for frameworks like PyTorch and TensorFlow.”
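As a back-of-envelope check on those bandwidth figures (a sketch using the article's own line-rate numbers; effective throughput will be lower due to protocol overhead):

```python
# Rough transfer-time estimate for shipping a payload over each link,
# using the line rates quoted in the article (not measured throughput).
def transfer_seconds(payload_gb: float, link_gbps: float) -> float:
    """Seconds to move payload_gb gigabytes over a link_gbps link."""
    return payload_gb * 8 / link_gbps  # GB -> gigabits, then divide by rate

WEIGHTS_GB = 16  # illustrative: a typical consumer GPU's full VRAM

print(f"TB4 (40 Gbps):   {transfer_seconds(WEIGHTS_GB, 40):.1f} s")   # 3.2 s
print(f"PCIe (128 Gbps): {transfer_seconds(WEIGHTS_GB, 128):.1f} s")  # 1.0 s
```

So even at raw line rate, filling a 16 GB card takes a few seconds over Thunderbolt 4, which matters most for workloads that stream data continuously rather than upload once.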
 
Another one:


“Apple has approved a kernel extension driver, developed by a third-party company called Sapling, that enables Nvidia eGPUs to function with Arm-based Macs. The driver, called Miracle, works over Thunderbolt connections and supports recent Nvidia desktop GPUs, meaning a Mac Studio or MacBook Pro can now theoretically offload compute tasks to an external Nvidia card”

Is this another one or just another 'name'? The article linked makes reference to "As The Verge first reported". The Verge article is about Tiny Corp's TinyGPU (www.tinygrad.org). Sometimes companies change names due to trademark conflicts, but this doesn't look 'new'.

If the core here is actually the TinyGPU work, then this doesn't apply to just Nvidia. It also covers modern consumer AMD GPU cards that support HIP (RDNA 3+).


“Display output from the Nvidia eGPU isn’t supported in the initial release — this is a compute-only driver.

Initial release? Pretty good chance it is all releases. DriverKit doesn't have an abstraction for GUI-stack devices. These are 'in-between'-space drivers, not kernel-level drivers. All the very quirky high-frame-rate + high-data-rate latency issues would have to bounce through IOMMU control.
 
Another April fools prank?

Probably not. There is no reason for Apple not to sign a properly implemented "USB" or "PCI-e through a tunnel" driver written using DriverKit.

The less polished approach was already working last year (October 2025).

" ... TinyCorp also shared an image of a MacBook Pro M3 Max running Tinygrad off of an (unnamed) RTX GPU hooked up to an ADT-UT3G dock using USB4. ..."

If you look at Tiny's Twitter (X) feed, they had cobbled together working-'enough' systems a year ago (March 2025).


If someone made a command-line tool that talked to a DriverKit driver that talked to a PCI-e card that ran a cement mixer ... would (or even should) Apple block it? No. Or some kind of high-end production printer? No. It isn't much different here, other than the 'hype' associated with the external device. As an OS vendor, it isn't Apple's job to police the command line if folks are using the proper libraries.

It is just a compute accelerator. From Tiny Corp's perspective, more users of their tinygrad framework just helps the hardware side of the company grow bigger. With the Mac Pro dropped, there is even less overlap between the TinyBox hardware and Apple's. (I don't think Apple would have held it up even if they had kept the Mac Pro.)
 
Another one:


“Apple has approved a kernel extension driver, developed by a third-party company called Sapling, that enables Nvidia eGPUs to function with Arm-based Macs. The driver, called Miracle, works over Thunderbolt connections and supports recent Nvidia desktop GPUs, meaning a Mac Studio or MacBook Pro can now theoretically offload compute tasks to an external Nvidia card”
Is this another one or just another 'name'? The article linked makes reference to "As The Verge first reported". The Verge article is about Tiny Corp's TinyGPU (www.tinygrad.org). Sometimes companies change names due to trademark conflicts, but this doesn't look 'new'.
I went down that rabbit hole, mainly because I find the article interesting if in fact "Sapling" and its "Miracle" driver are an AI hallucination and/or creation, designed to drive traffic to the "Content Innovation Summit 2026" page that is linked there. I could not find anything about it -- the closest is a system extension called Sapling for Mac (grammar checker and AI writing assistant) from an AI-toolkit developer named Sapling at sapling.ai ...

The article says "The developer community’s response has been enthusiastic. Posts across X and various forums have described the Miracle approval as the most significant Mac-related development for ML practitioners in years. Some see it as a potential catalyst: if enough users adopt Nvidia eGPUs on Macs, Nvidia itself might eventually invest in official macOS support." LOL -- good luck finding any of that!
 
I went down that rabbit hole, mainly because I find the article interesting if in fact "Sapling" and its "Miracle" driver are an AI hallucination and/or creation, designed to drive traffic to the "Content Innovation Summit 2026" page that is linked there. I could not find anything about it -- the closest is a system extension called Sapling for Mac (grammar checker and AI writing assistant) from an AI-toolkit developer named Sapling at sapling.ai ...

There are some 'Miracle'-branded GPUs from some Asian GPU vendors. And some Nvidia drivers have been labeled 'miracle' when they fixed some problem. It wouldn't be surprising if some Asian LLM 'dreamed' this up. [I'm guessing 'Sapling' is some mistranslation defect added on top.]



The article says "The developer community’s response has been enthusiastic. Posts across X and various forums have described the Miracle approval as the most significant Mac-related development for ML practitioners in years.

I don't think those folks have comprehensively looked at what this actually is.


Some see it as a potential catalyst: if enough users adopt Nvidia eGPUs on Macs, Nvidia itself might eventually invest in official macOS support." LOL -- good luck finding any of that!

Nvidia writing a strict PCI-e DriverKit or USB4 driver would fall into the same category of "why wouldn't Apple sign it?". Port their command-line Unix/Linux tools to another variant of Unix; Apple isn't stopping you. Nvidia bringing up the graphics output stack is still in the same boat it has been in for the last 6 years. That isn't going to change.
 
“TinyGPU installs through Apple’s official DriverKit framework, enabling AMD RDNA3+ and Nvidia Ampere+ GPUs (RTX 30-series and newer) via Thunderbolt 4 or USB4..."

The technical limitations matter. Thunderbolt 4 provides 40 Gbps bandwidth—significantly slower than desktop PCIe x16’s 128 Gbps on modern systems.
So it won't communicate at TB5 speeds if connected to a TB5-equipped Mac?

And how much does the communication speed matter in practice? Does it need to communicate continuously, at high bandwidth, with the Mac while the job is running? Or is the bandwidth only a constraint at the start when sending the job to the external GPU, and at the end when retrieving the results?
 

Apparently, it's doable but a tad technical. I'm not sure I see the point. At least not for local LLMs where unified memory is an advantage for higher end Macs.
 
So it won't communicate at TB5 speeds if connected to a TB5-equipped Mac?
It wouldn’t have killed them to address the TB5 question.

Maybe they assume people don’t yet know that TB5 enclosures are a thing.

 
So it won't communicate at TB5 speeds if connected to a TB5-equipped Mac?

The requirements are for 'USB4/Thunderbolt' ... USB4 has varying requirements for "Thunderbolt" support on a given system. (There are some M1 systems that are USB4-only because they don't hit all of the Thunderbolt requirements.)


At one point this was a USB 'solution' (i.e., a 'hack').

About a year ago in 2025.


Current tinygrad/extras/setup_tinygup_osx.


is calling APLRemotePCIDevice (i.e., a DriverKit PCI-e device). It shouldn't particularly care about Thunderbolt, because this is a PCI-e interface. (Thunderbolt should be transparent, except for implementing the optional PCI-e 'hot plug' API features.)
And how much does the communication speed matter in practice? Does it need to communicate continuously, at high bandwidth, with the Mac while the job is running? Or is the bandwidth only a constraint at the start when sending the job to the external GPU, and at the end when retrieving the results?

Likely depends upon the job being shipped over. Shipping a bigger model into larger remote RAM storage requires more bandwidth. If the card only has 8-16GB, though, then that is the limiting factor.

A normal 'prompt'/question is going to be smaller than the model in most cases. Stuff like "do my homework after reading the next 3 chapters of this book..." will probably take longer. 🙂
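To put rough numbers on that (a sketch with assumed sizes, not measurements: an 8 GB model and a ~4 KB prompt over a 40 Gbps link):

```python
# One-time model upload vs per-prompt traffic over a 40 Gbps USB4/TB4 link.
# Sizes are illustrative assumptions, not measurements.
MODEL_BYTES = 8 * 1024**3   # assume an 8 GB quantized model
PROMPT_BYTES = 4 * 1024     # assume a ~4 KB prompt

def seconds_over_link(n_bytes: int, gbps: float) -> float:
    """Seconds to move n_bytes over a link running at gbps gigabits/s."""
    return n_bytes * 8 / (gbps * 1e9)

print(f"model upload: {seconds_over_link(MODEL_BYTES, 40):.2f} s")          # ~1.7 s, once
print(f"per prompt:   {seconds_over_link(PROMPT_BYTES, 40) * 1e6:.1f} us")  # sub-millisecond
```

In other words, the link bandwidth mostly taxes the one-time weight upload; steady-state prompt traffic is negligible by comparison.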
 
It wouldn’t have killed them to address the TB5 question.

Maybe they assume people don’t yet know that TB5 enclosures are a thing.

Or maybe Hotz started working on this before TB5 was available.
 
The requirements are for 'USB4/Thunderbolt' ... USB4 has varying requirements for "Thunderbolt" support on a given system. (There are some M1 systems that are USB4-only because they don't hit all of the Thunderbolt requirements.)
Sorry, not following how this addresses my question of whether TB5 is supported.

According to Anker, "Thunderbolt 5 is built on USB4 V2, DisplayPort 2.1, and PCIe Gen 4 standards."

So certain versions of USB4 do imply the potential for TB5. However, the fact that they explicitly said TB4 and not TB5 suggests that TB5 is not supported, even though some version of USB4 is:

"TinyGPU installs through Apple’s official DriverKit framework, enabling AMD RDNA3+ and Nvidia Ampere+ GPUs (RTX 30-series and newer) via Thunderbolt 4 or USB4. "
 

“… TinyGPU installs through Apple’s official DriverKit framework, enabling AMD RDNA3+ and Nvidia Ampere+ GPUs (RTX 30-series and newer) via Thunderbolt 4 or USB4. ...”
So it won't communicate at TB5 speeds if connected to a TB5-equipped Mac?
It wouldn’t have killed them to address the TB5 question.
The TinyGPU page itself doesn’t say “Thunderbolt 4 or USB4” — I think that’s an inference (pun intended) made by the author (“ByteBot”) of the article @M4pro quoted. The TinyGPU page itself says “USB4/Thunderbolt” which includes both Thunderbolt 4 (spec USB4 v1.0) and Thunderbolt 5 (spec USB4 v2.0) so I think we can surmise Thunderbolt 5 is supported. There’s nothing in the TinyGPU documentation that suggests otherwise:
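That spec relationship can be written down as a tiny lookup (a sketch; the line-rate figures are the published spec numbers, my addition, not something the TinyGPU page states):

```python
# Thunderbolt marketing names mapped to the underlying USB4 spec revisions.
# Line rates are the published spec figures (added for illustration).
TB_TO_USB4 = {
    "Thunderbolt 4": {"usb4_spec": "USB4 v1.0", "gbps": 40},
    "Thunderbolt 5": {"usb4_spec": "USB4 v2.0", "gbps": 80},  # 120 Gbps in asymmetric boost mode
}

def covered_by(requirement: str) -> list[str]:
    """Which Thunderbolt versions a plain 'USB4/Thunderbolt' requirement spans."""
    if requirement == "USB4/Thunderbolt":
        return list(TB_TO_USB4)  # both spec revisions qualify
    return [requirement]

print(covered_by("USB4/Thunderbolt"))  # ['Thunderbolt 4', 'Thunderbolt 5']
```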

 
The TinyGPU page itself doesn’t say “Thunderbolt 4 or USB4” — I think that’s an inference (pun intended) made by the author of the otherwise excellent article @M4pro quoted. The TinyGPU page itself says “USB4/Thunderbolt” which includes both Thunderbolt 4 (spec USB4 v1.0) and Thunderbolt 5 (spec USB4 v2.0) so I think we can surmise Thunderbolt 5 is supported. There’s nothing in the TinyGPU documentation that suggests otherwise:

Thanks for clearing that up, and providing a link to the source of the "USB4/Thunderbolt" quote that deconstruct60 referenced!

A year ago, TB5 was not supported, according to the article cited by deconstruct60:

"Requirements for running an eGPU through a USB3 interface at this time include the use of an ASM2464PD-based adapter and an AMD GPU. For its tests, Tiny Corp used the ADT-UT3G adapter, which uses the same ASM2464PD chip, but out of the box, it only works with Thunderbolt 3, Thunderbolt 4, or USB 4 interfaces"


Though a lot has changed since then; at that time it was AMD-only.
 
Apparently, it's doable but a tad technical. I'm not sure I see the point. At least not for local LLMs where unified memory is an advantage for higher end Macs.

I suppose the point is that you could in principle develop and debug CUDA kernels? But then there is a big question of driver compatibility. It's a really niche use case and I am not sure why some people seem so excited about this.

Something that'd be truly useful is PCIe pass-through with the Virtualization framework. Then you could use Nvidia eGPUs from Linux with proper drivers, graphics and compute.


This is a big win for the hackintosh community. Another chink in the armor for macOS 27.

I fail to see how. Is there even hackintosh running modern ARM macOS? I'd be shocked if there are compatible ARM CPUs out there.
 
It's a really niche use case and I am not sure why some people seem so excited about this.
I think it's the context that creates a lot of the interest.

It indicates at least some thawing in the famously frosty relationship between the two, and leads one to wonder if this might presage more interesting collaborations in the future.
 
I think it's the context that creates a lot of the interest.

It indicates at least some thawing in the famously frosty relationship between the two, and leads one to wonder if this might presage more interesting collaborations in the future.

Does it though? Aren’t these third party drivers and not endorsed by Nvidia in any way, or am I misunderstanding something?

Besides, Apple has no reason to not sign a driver as long as it’s a clean implementation. Any developer can request DriverKit entitlement for their software.
 
is calling APLRemotePCIDevice (i.e., a DriverKit PCI-e device. ). It shouldn't particularly care about Thunderbolt because this is a PCI-e interface. (Thunderbolt should be transparent except to implement the optional PCI-e 'hot plug' API features ).
I am not good with hardware; would this allow an M2 Mac Pro to use a GPU for inference through its PCI-e slots?
 
Does it though? Aren’t these third party drivers and not endorsed by anguria in any way, or am I misunderstanding something?

Besides, Apple has no reason to not sign a driver as long as it’s a clean implementation. Any developer can request DriverKit entitlement for their software.
I was going off of the language in the article cited by the OP (https://byteiota.com/apple-nvidia-egpu-on-arm-mac-george-hotz-breaks-ban/), which seems to indicate there was a ban that was lifted with this approval:

"Apple approved TinyGPU driver on March 31, 2026, ending a 6-year eGPU ban on ARM Macs..."

In addition, I was operating under the assumption that if Apple were still hostile to NVIDIA, they would have blocked it, hence my speculation that this indicates a thawing. But that is an assumption on my part.

Here's more detailed discussion of the implications (https://apple.gadgethacks.com/news/...ed-compute-only-not-graphics/#google_vignette):

"What Tiny Corp is describing, if accurate, is a narrow reopening not of graphics support, but of compute access. The distinction is architectural. A compute driver doesn't need to plug into Metal rendering pipelines or manage display output. It passes numerical workloads to the GPU and retrieves results, which is a simpler and more constrained handoff. That may be exactly why it's possible where full graphics acceleration isn't.

Tiny Corp claims Apple sanctioned its driver, but "approved" can mean several different things: a notarized system extension, a private developer entitlement, or access granted through Apple's developer program. None of those options would constitute Apple publicly reopening eGPU support. The distinction matters because what Tiny Corp has may not extend to other developers or other hardware configurations.

The Nvidia angle is the more surprising part of the claim. Apple hasn't shipped Nvidia-compatible GPU drivers in macOS for years. Forum guidance from early 2022 on eGPU.io described Nvidia RTX cards as simply non-functional under macOS, with AMD as the only viable path (eGPU.io, February 2022). If this driver genuinely brings Nvidia into the picture on Apple Silicon even for compute only, even unverified that's the detail worth watching most closely."


And what's "anguria"? I'm guessing you didn't mean this 😉:

 
I was just going off of the language in the article cited by the OP (https://byteiota.com/apple-nvidia-egpu-on-arm-mac-george-hotz-breaks-ban/), which seems to indicate there was a ban that was lifted with this approval:

"Apple approved TinyGPU driver on March 31, 2026, ending a 6-year eGPU ban on ARM Macs..."

Though I'll grant I don't have enough knowledge of this area to assess whether that language is correct...

Was there ever a ban or is that a poetic addition by the author? Specifically, did anyone try to get a GPU driver signed by Apple before and was refused?

IMO, all of this reads like a big bunch of nothing.

And what's "anguria"? I'm guessing you didn't mean this 😉:


Was supposed to be Nvidia. It's incredible how bad Apple's autocorrect is sometimes.
 
Hmmm, why is Tom’s Hardware drawing this as a broadly beneficial / high upside development? Human brain hallucination? lol

“…this is still a game-changer for people working with artificial intelligence, as they could now potentially do training or inference (with some limitations) without needing a dedicated AI supercomputer”

 
Was there ever a ban or is that a poetic addition by the author? Specifically, did anyone try to get a GPU driver signed by Apple before and was refused?

IMO, all of this reads like a big bunch of nothing.
The author is “ByteBot” after all — looking more broadly at the byteiota site, I gather it's a news aggregator (“personalized news”) generated by FeedMatters.com (the “Need more personalized news?” ad that appears at the bottom of each article). I'm not sure what “personalized” really means here; I guess it means something like “a news summary that sounds like it was written personally for you.” I'm not going to pay $4.99 a month to find out!

It appears to get into trouble when it tries to aggregate sources that are related but ultimately are not the same thing. So it links the older October 2025 Tom’s Hardware article about the tinygrad project with the more recent news of TinyGPU, for example, but it doesn’t really understand how they are related. The same is true of the scalastic.io article about Apple and Nvidia, and so on.

I feel for professors trying to teach students today how to use AI in their research and writing, something few of them have much (if any) experience with themselves (I know I don't). I wonder if there is a human being defining the queries that set ByteBot into motion?
 
Hmmm, why is Tom’s Hardware drawing this as a broadly beneficial / high upside development?

“…this is still a game-changer for people working with artificial intelligence, as they could now potentially do training or inference (with some limitations) without needing a dedicated AI supercomputer”


That is indeed quite puzzling. It's like saying "hey, there is now a towing cable for a Prius, which is a game-changer for people who need to haul stones from a quarry!"
 