Sure. But unless someone specifically wants or needs to run CUDA code, there's no reason to try to get an NVIDIA eGPU running on a Mac unless it significantly increases performance.
No reason? If you already own a Mac and want to add an additional specialty agent to do some specific, sandboxed work that is isolated from your main resources, this is a straightforward path to adding another 'worker' to the same machine/system. If the host Mac's isolatable compute is maxed out, then adding more compute engines does add performance.
Anyone who is a rigid CUDA purist isn't going to use Tiny's framework to get to an Nvidia GPU, whether there is an eGPU involved or not. Not only are they not looking for eGPUs, they are not looking for multiplatform solutions at all.
However, some models are 'trapped' in the CUDA walled garden. If there is a specialist model that is trapped in CUDA, then a multiplatform framework is what you need if you are going to use a Mac as the main host.
Does Apple have very easy-to-use agent sandboxing controls? Not right now, but they have been trying to sandbox the standard macOS environment in some fundamental ways for a number of years now. They have a decent foundation to start from. (Folks running OpenClaw locally on a Mac as an admin user is likely a security-dubious move.) For example, hand an agent a clone/snapshot of a subset of files. The agent may go off and destroy the clone, but you just kill that snapshot (all without having to literally double the amount of storage to make a copy).
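A minimal sketch of the clone idea, assuming an APFS volume on macOS (the paths here are hypothetical). `cp -c` uses the clonefile(2) syscall, so the "copy" is copy-on-write: it is near-instant and consumes no extra storage until the agent actually modifies files.

```shell
# Make a copy-on-write clone of the files the agent is allowed to touch.
# On APFS this completes instantly and shares blocks with the original.
cp -cR ~/projects/worktree ~/agent-scratch/worktree

# ...point the sandboxed agent at ~/agent-scratch/worktree and let it run...

# Whatever the agent did to the clone, the original is untouched.
# Discard the clone to reclaim only the blocks the agent changed.
rm -rf ~/agent-scratch/worktree
```

The same effect can be had at the volume level with `tmutil localsnapshot`, but a per-directory clone keeps the agent's blast radius to exactly the subset of files you handed it.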
Hence I assume those trying it for performance reasons would be running the subset of compute tasks that don't need the high amounts of VRAM that Macs provide. I.e., for the enthusiasts that are seriously considering this, the VRAM disparity wouldn't be relevant since otherwise they wouldn't even be thinking about it in the first place.
If you want to run multiple AI models, then resources run out quicker. It could be that you need just an incremental amount more VRAM.
Thus I think the main reason it doesn't make sense to do this is instead what I stated earlier: It's cleaner, more performant, and not much more costly, to just buy an NVIDIA PC:
If you want a maximally isolated, almost air-gapped environment, then yes. If the interim work needs to be shared or collaborated on, then it may not be as 'clean' as you are making it out to be.
"More costly" boils down to what you already have. For folks who already have a Mac and a PCIe enclosure, the incremental cost here is much lower than buying a Linux/Nvidia PC from scratch. I don't think this is primarily aimed at selling new boxes so much as at folks leveraging the systems they already have, plus some incremental additions. It is targeting enthusiasts with the hardware they have, as opposed to the hardware Tiny wishes they had. Ideally, Tiny wishes they had Tiny hardware.
🙂
This framework's main purpose is to get more Tiny hardware bought long term. They need more software written to their framework to make that grow faster. So getting the framework into the hands of folks who might develop more solutions that need higher compute scale builds the business. Lots and lots of developers have MacBooks. It is a move to meet those folks where they are (and where they have 'sunk costs' already).
tinygrad isn't only ported to Macs. It is one of multiple paths to getting more solutions built for Tiny's systems. For a long time Apple has had the "developer who needs Unix and a somewhat mainstream GUI/app" platform. There are a number of folks who deal with getting things up on one system and deploying on another. Apple approving drivers that help with that really shouldn't be causing the 'surprise' that is spinning around this approval. Apple has been handing off some of this as Windows grows more adoptive of its Linux subsystem and Apple goes more lackadaisical about keeping its Unix roots present and staying attached to things evolving there (e.g., 'discovering' RDMA about 7-10 years after most Linux/Unix platforms).