Hmmm... it does say iMac. It could be a mistake and Apple meant the iMac Pro, or maybe the appropriate developer support just hasn't been implemented yet.
https://support.apple.com/en-us/HT208544
It’s near the end, in the “eGPU support in applications” section, the last bullet:
- Pro applications and 3D games that accelerate the built-in display of an iMac or MacBook Pro. (This capability must be enabled by the application's developer.)
The problem here is the low bandwidth from the eGPU back to the internal display. You will get low frame rates because of this, whereas there is no such bottleneck if a monitor is attached directly to the eGPU box.
You would end up spending a lot of money on the eGPU box plus a desktop graphics card, only to get poor frame rates on your internal display. Apple is recommending the best solution given the bandwidth limits of TB3.
Compute tasks can work well within those bandwidth limits.
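To illustrate the compute case, here's a rough Swift sketch (my own illustration, using the standard Metal device APIs, not anything from Apple's doc): the app picks the removable (Thunderbolt-attached) GPU up front, so buffers and kernels live in the eGPU's VRAM and only results have to cross the TB3 link.
```swift
import Metal

// Minimal sketch (macOS): prefer an external GPU for compute work.
// `isRemovable` is true for Thunderbolt-attached (eGPU) devices.
let devices = MTLCopyAllDevices()
let device = devices.first(where: { $0.isRemovable })
    ?? MTLCreateSystemDefaultDevice()

if let gpu = device {
    print("Running compute on: \(gpu.name)")
    // Buffers and kernels are created against this device, so the data
    // stays on the eGPU and only results travel back over TB3.
    let queue = gpu.makeCommandQueue()
    _ = queue
}
```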
Why are you only talking about laptops? What about iMacs with proper 65W and 91W CPUs? They won't bottleneck high-end GPUs.
iMacs have always used mobile GPUs. (Exception: the new iMac Pro with its redesigned cooling.)
Poor cooling kills GPUs, leading to an early death for your iMac.
That is why low-wattage mobile parts are used in most iMacs: the elegant single-body construction forces compromises on heat removal.
I told people way back when the speculation started that this would result in $600 for a mid-range card. It seems I was correct. Apple won't make this directly, because they never release this kind of accessory under their own brand.
Apple seems to be pushing Metal these days. I would still go with CUDA, even though Nvidia's refusal to release the ISA irritates me.
As a developer, I find the lack of any debugging tools on Metal a very serious problem. CUDA provides excellent debugging support.
My app supports OpenCL, Metal, and CUDA on both AMD and Nvidia GPUs. So I see all of these pros/cons every day.
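As a rough idea of what supporting all three APIs can look like, here's a hypothetical Swift sketch (not my actual code): the `Backend` enum and `pickBackend` function are purely illustrative, and the CUDA and OpenCL paths would wrap their respective C APIs elsewhere in a real app.
```swift
import Metal

// Hypothetical sketch only: choosing a compute backend at runtime.
// `Backend` and `pickBackend` are illustrative names.
enum Backend {
    case metal(MTLDevice)   // Apple/AMD GPUs via the macOS drivers
    case cuda               // Nvidia only, requires the CUDA driver/toolkit
    case openCL             // portable fallback across AMD and Nvidia
}

func pickBackend(preferCUDA: Bool, cudaAvailable: Bool) -> Backend {
    if preferCUDA && cudaAvailable {
        return .cuda
    }
    if let device = MTLCreateSystemDefaultDevice() {
        return .metal(device)
    }
    return .openCL
}
```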
Well "like 0 ML applications that doesn't use CUDA" - that's pretty nonsense.
My own applications use OpenCL, and FP16 works much better for my training workload. YMMV, but your statement is just tosh.
Yes, CUDA is more mature and more widely used, but it's not the only option out there. My Vega Frontier gets me the same performance at my chosen precision level as the ridiculously overpriced Nvidia cards.
WRT TB2: I'm not defending Apple here at all, but I would imagine it's largely down to drivers and where they focus their resources.
Getting an eGPU to work (e.g. under Linux) is pretty damn hard; the Thunderbolt standards are a nightmare to work with, so it makes sense, in a way, to focus resources.
That said, there is a hack to make it work, but YMMV.
As another GPU software developer, I think Brian hit the nail on the head. It all boils down to drivers, resources, and the size of Apple's Mac GPU developer team.
At the end of the day, Apple is not a GPU company and would rather spend its R&D budget on non-GPU stuff.
But the two biggest trends in the whole computing device market are 1) mobile miniaturization and 2) GPU compute.
The new big push is in AI and self-driving cars, all driven by the massive power that GPU compute APIs unlock.
It really is in Apple's self-interest not to fall behind in the quality of its GPU drivers and platforms.