Yes, Apple not designing themselves into a "thermal corner" like they did with the 2013 Mac Pro would be good.
Actually, I didn't buy this excuse from Apple. Even at a crappy 450W TDP, the Trashcan could hold dual RX 570s and deliver a good performance update over the 2013 model, so maybe Apple is on a more complicated endeavor (such as switching to AMD CPUs), or their R&D team are world-champion procrastinators.

Note that a few weeks after the Amigos gathering with the friendly, Apple-loving press, a Mac R&D head left Apple for Tesla (a voluntary departure, or just fired?)
 
  • Like
Reactions: ctrlzone
Actually, I didn't buy this excuse from Apple. Even at a crappy 450W TDP, the Trashcan could hold dual RX 570s and deliver a good performance update over the 2013 model, so maybe Apple is on a more complicated endeavor (such as switching to AMD CPUs), or their R&D team are world-champion procrastinators.

My guess is still that they've been picking apart how to handle GPU upgrades. They've probably made a decision by now, but if I had to guess... With eGPU and other initiatives they've been refactoring a lot of their graphics stack. If they were working on a software solution to support PCIe GPUs and Thunderbolt, that would take some time and couldn't be rushed. Might even require 10.14.

Could be wrong, but it would explain what is taking time.

But I also think generally people here underestimate how long a new design from scratch would take.
 
  • Like
Reactions: PeterJP
My guess is still that they've been picking apart how to handle GPU upgrades. They've probably made a decision by now, but if I had to guess... With eGPU and other initiatives they've been refactoring a lot of their graphics stack. If they were working on a software solution to support PCIe GPUs and Thunderbolt, that would take some time and couldn't be rushed. Might even require 10.14.

Could be wrong, but it would explain what is taking time.

But I also think generally people here underestimate how long a new design from scratch would take.

Right now, insofar as I can tell, the eGPUs officially supported by Apple are of the AMD variety. Nvidia's initiative is likely its own willingness to get back into Apple's lineup even if it means going the eGPU route at present (with some nonchalant support by Apple).

That said, even if it exists purely as an eGPU solution, Nvidia's re-entry brings CUDA. And with CUDA comes the developer. With the developer comes the software that just couldn't exist prior to this move.
 
Last edited:
  • Like
Reactions: ssgbryan
An upgrade usually means adding better or previously missing parts (which technically might mean expanding) to a system.

Perhaps you meant 3rd party peripherals that are cheaper?

It's a mix of both, actually. Some folks on this thread have noted they want the ability to add something better/different than what Apple would offer as options, and others have noted they want the ability to add something cheaper compared to what Apple charges for its offerings.


Actually, I didn't buy this excuse from Apple. Even at a crappy 450W TDP, the Trashcan could hold dual RX 570s and deliver a good performance update over the 2013 model, so maybe Apple is on a more complicated endeavor (such as switching to AMD CPUs), or their R&D team are world-champion procrastinators.

Yes, they could offer updated GPUs, but they noted that the market chose single very powerful (and thermally very hot) GPUs over two powerful (and thermally warm) GPUs.

And yes, it looks like dual RX 570s or RX 580s via CrossFire are comparable to the GTX 1070 and 1080 for gaming, and would almost certainly have improved performance for AMD-optimized applications like Final Cut Pro. But how would they compare for things like Adobe Premiere and other applications optimized for CUDA?

And then one has to calculate what Apple would have charged for those upgrades and how popular they would have been at that price compared to the equivalent third-party cards.
 
and other applications optimized for CUDA
Apple doesn't care about CUDA. Anyway, I'm speaking to the technical/commercial aspect of the tcMP: it could have been updated to dual RX GPUs (with some throttling). For those who don't know the tcMP, its GPUs are barely modified reference designs, and the mods just route PCIe+DP into a single connector; it doesn't take much effort to develop another GPU once AMD (or Nvidia) releases a reference design.

I hope Apple includes Nvidia on the mMP menu, but I won't hold my breath; Apple's involvement with AMD is going deeper.

Apple could even have released that, and promised a dual Vega 32/28 GPU later (Vega 32/28 is coming).
 
(A)nyway, I'm speaking to the technical/commercial aspect of the tcMP: it could have been updated to dual RX GPUs (with some throttling).

Yes it technically could have, but I am not sure it commercially could have (as in the sales would have been enough to justify doing it).
 
Right now, insofar as I can tell, eGPUs officially supported by Apple are of the AMD variety. Nvidia’s initiative is likely its own willingness to get back into Apple’s lineup even if it means going the eGPU route at present ( with some non chalant support by Apple )

That said even if it exists purely as an eGPU solution, Nvidia re entry brings CUDA. And with CUDA comes the developer. With the developer comes the software that just couldn’t exist prior to this move.

As far as Nvidia goes - I've heard the issue is still that there are bugs mixing AMD and Nvidia cards. It can generally work, but it's not bulletproof. AMD is an easy pick because all of Apple's Thunderbolt 3 Macs use AMD.

The other thing that still doesn't work is using an eGPU on the internal display, which is mentioned as something that doesn't work yet in Apple's notes. This would be the exact same software trick PC makers use to redirect a PCIe GPU's output to a Thunderbolt port without passthrough cables, so I'm watching that with interest.

Still, there is a small latency hit to doing that. And that's where Apple could opt into a custom GPU form factor. But even AMD just uses software now to move images between GPUs without a physical bridge.
 
As far as Nvidia goes - I've heard the issue is still that there are bugs mixing AMD and Nvidia cards. It can generally work, but it's not bulletproof. AMD is an easy pick because all of Apple's Thunderbolt 3 Macs use AMD.

The other thing that still doesn't work is using an eGPU on the internal display, which is mentioned as something that doesn't work yet in Apple's notes. This would be the exact same software trick PC makers use to redirect a PCIe GPU's output to a Thunderbolt port without passthrough cables, so I'm watching that with interest.

Still, there is a small latency hit to doing that. And that's where Apple could opt into a custom GPU form factor. But even AMD just uses software now to move images between GPUs without a physical bridge.
Redirecting a PCIe GPU's output to a Thunderbolt port without passthrough cables will hurt I/O on the TB bus, and Apple needs a full slot in the next Mac Pro.
 
Yes it technically could have, but I am not sure it commercially could have (as in the sales would have been enough to justify doing it).
So why is there no iMac Pro with Nvidia?
...
Apple never considered Nvidia for the iMac Pro (it should have been a monster with a 1080 or a GP100); Nvidia is ruled out (at least as a supplier). Over TB3 as an eGPU it's tricky to use (and mostly to accelerate CUDA-enabled apps).
 
Redirecting a PCIe GPU's output to a Thunderbolt port without passthrough cables will hurt I/O on the TB bus, and Apple needs a full slot in the next Mac Pro.

Why? The machine would still be sending a DisplayPort signal to the TB controller, so same bandwidth as usual.

The only hit is transferring frames from a PCIe GPU to a helper internal GPU.

So the flow would be:
PCIe GPU -> On Board GPU -> Thunderbolt Controller -> Thunderbolt Cable -> Display

The traffic between the PCIe GPU and onboard GPU is the only hit, but it might be ok.
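
For the curious, here's roughly what that frame handoff looks like in code. This is just a sketch in Swift/Metal under my own assumptions (two visible MTLDevices and a BGRA frame; names like renderGPU/outputGPU are mine), using IOSurface as the public mechanism for aliasing one frame on two devices. The real window-server plumbing is Apple-private, but the principle is the same:
[CODE]
import Metal
import IOSurface

// Grab two GPUs: a discrete one to render on, and any other one for output.
let devices = MTLCopyAllDevices()
precondition(devices.count >= 2, "this sketch assumes two visible GPUs")
let renderGPU = devices.first { !$0.isLowPower } ?? devices[0]
let outputGPU = devices.first { $0 !== renderGPU }!

// One BGRA frame both GPUs can see, backed by an IOSurface in system memory.
let width = 1920, height = 1080
let surface = IOSurfaceCreate([
    kIOSurfaceWidth: width,
    kIOSurfaceHeight: height,
    kIOSurfaceBytesPerElement: 4,
    kIOSurfacePixelFormat: UInt32(0x42475241) // 'BGRA'
] as CFDictionary)!

let desc = MTLTextureDescriptor.texture2DDescriptor(
    pixelFormat: .bgra8Unorm, width: width, height: height, mipmapped: false)
desc.usage = [.renderTarget, .shaderRead]

// The same IOSurface backs a texture on each device: the render GPU draws the
// frame into its texture, and the output GPU reads the identical pixels back
// out. The PCIe traffic to/from the surface is the "only hit" mentioned above.
let frameOnRenderGPU = renderGPU.makeTexture(descriptor: desc, iosurface: surface, plane: 0)!
let frameOnOutputGPU = outputGPU.makeTexture(descriptor: desc, iosurface: surface, plane: 0)!
[/CODE]
You'd still need to synchronize access between the two devices per frame, but the point stands: it's data moving over PCIe, not special hardware.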
 
I am of the view that by allowing Nvidia to exist, even as an eGPU, on Apple systems, Apple is discreetly keeping its idea of pushing GPUs as the main workhorse for computing alive. AMD hasn’t fared well so far but hopefully they have realized it’s as much about software as it is about powerful hardware.

Its next 7nm Vega refresh is reportedly aimed mainly at professional environments (which might tie in nicely with Apple wanting to keep TDPs down).

Yes, there is the issue of combining AMD and Nvidia on the same system, but Apple might be hoping Nvidia users will try and bear it out until Apple/AMD can catch up.

It's likely more of the same shadowboxing between Intel and Nvidia than anything else.
 
Last edited:
Yes, there is the issue of combining AMD and Nvidia on the same system, but Apple might be hoping Nvidia users will try and bear it out until Apple/AMD can catch up.

To me it sounds like Apple is actively working on fixing the Nvidia/AMD issue. But I don't think they'll ever actively promote Nvidia. They'll just continue signing Nvidia's retail drivers.

It's not like they have no relationship with Nvidia. They work with them to roll fixes for the macOS drivers. But on hardware deals they don't get along.
 
To me it sounds like Apple is actively working on fixing the Nvidia/AMD issue. But I don't think they'll ever actively promote Nvidia. They'll just continue signing Nvidia's retail drivers.

It's not like they have no relationship with Nvidia. They work with them to roll fixes for the macOS drivers. But on hardware deals they don't get along.

Of course not. But right now what options does Apple have? I think the eGPU thingie is just to ride along and see which way the wind blows. And right now Nvidia is the only game in town for anything related to GPU computing. If it manages to keep existing developers on the Mac platform as well as bring back/woo over others, Apple will make the most minimal effort it can to support Nvidia. Nvidia, of course, would like to keep its foot in the door on Apple systems. It also has the little side effect of keeping Intel honest (as much as you can keep businesses honest, that is).

This whole Nvidia-Intel thing started around when Intel fired the first salvo by shipping its own embedded GPUs and the Nvidia CEO promised to open a "can of whoop-ass" (COWA). The AMD GPU with Intel CPU wouldn't have happened otherwise. It's just to lock out Nvidia where it can.

Apple's interest in AR is partly driving its desire to push the GPU into the limelight. AMD has both feet firmly planted in x86 and GPU, and whichever way it turns out in the future (or ideally both for AMD), it can take advantage of it.

AMD is a happy camper in Mac land. And Apple has a more malleable partner in AMD than in Nvidia. Nevertheless, I think it will grudgingly allow Nvidia to exist until it can figure out its own solutions to what Nvidia is doing at present.

What does all this mumbo jumbo have to do with the new Mac Pro? I think it will be a big indication of how Apple sees macOS and ancillary ambitions going forward. And that does affect my own choice of finally leaving or staying on the Mac platform.
 
Of course not. But right now what options does Apple have? I think the eGPU thingie is just to ride along and see which way the wind blows. And right now Nvidia is the only game in town for anything related to GPU computing.

Depends what we're talking about.

Nvidia is trying to keep CUDA alive on the Mac because CUDA is the only leverage they have over Apple. My hunch is that they're using people like MacVidCards as proxies to keep CUDA from collapsing.

Are they the only game in town? No. Metal has some limitations but it generally works well for GPU computing, and it works great on AMD and Intel GPUs.
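
As a concrete (if toy) illustration of that point, here's a minimal Metal compute dispatch in Swift; my own sketch, not Apple sample code, and the same few lines run unchanged on whatever GPU the system provides:
[CODE]
import Metal

// The kernel is compiled from source at runtime; nothing here is vendor-specific.
let source = """
#include <metal_stdlib>
using namespace metal;
kernel void doubler(device float *data [[buffer(0)]],
                    uint id [[thread_position_in_grid]]) {
    data[id] *= 2.0f;
}
"""

let device = MTLCreateSystemDefaultDevice()!   // whichever GPU macOS hands us
let library = try! device.makeLibrary(source: source, options: nil)
let pipeline = try! device.makeComputePipelineState(
    function: library.makeFunction(name: "doubler")!)

var values: [Float] = [1, 2, 3, 4]
let buffer = device.makeBuffer(bytes: &values,
                               length: values.count * MemoryLayout<Float>.stride)!

let queue = device.makeCommandQueue()!
let cmd = queue.makeCommandBuffer()!
let enc = cmd.makeComputeCommandEncoder()!
enc.setComputePipelineState(pipeline)
enc.setBuffer(buffer, offset: 0, index: 0)
enc.dispatchThreadgroups(MTLSize(width: 1, height: 1, depth: 1),
                         threadsPerThreadgroup: MTLSize(width: values.count, height: 1, depth: 1))
enc.endEncoding()
cmd.commit()
cmd.waitUntilCompleted()
// buffer now holds [2, 4, 6, 8] on any Metal-capable GPU, AMD or Intel alike
[/CODE]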

Nvidia continues to keep themselves as a gatekeeper because of software lockouts. CUDA is basically the ActiveX of the pro market.

The other angle to think about is that Apple has been rumored to be working on their own discrete Mac GPUs, either on their own or with some deeper partnership with the Radeon Technology Group. If Apple does their own GPUs, they definitely would want to break up the CUDA monopoly.

Also could be a reason Nvidia has been unwilling to assist Apple. Apple's goals and Nvidia's goals are not in alignment.
 
The only hit is transferring frames from a PCIe GPU to a helper internal GPU.
As yet this is only possible with the latest AMD GPUs, so no case: a custom or semi-custom GPU-PCIe interface is mandatory on the mMP, or else it will be an AMD-only machine, or one using Nvidia only for compute.
 
Last edited:
As yet this is only possible with the latest AMD GPUs

It's possible on any GPU. AMD just decided it was fast enough on their latest GPUs/faster PCIe buses.

so no case: a custom or semi-custom GPU-PCIe interface is mandatory on the mMP, or else it will be an AMD-only machine, or one using Nvidia only for compute.

See above. GPU to GPU transfer without a bridge is already possible in macOS. Has been since 10.6. The speed is the concern.

If you have Apple Developer access, they did a demo on 10.6 where they wrote a software bridge that actually let them render on an Nvidia and AMD card in the same box, and move buffers between them:
https://download.developer.apple.co...on_422__taking_advantage_of_multiple_gpus.mov

DirectX 12 has a very similar feature. I think it's called Device Pools? Anyway, it also lets you render across Nvidia and AMD GPUs, which involves moving the buffers between them, with one device doing the actual output.
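
For a modern analogue of what that demo shows, moving a buffer between two GPUs through system memory is nearly trivial in Metal today. A sketch under my own assumptions (two visible devices, shared-storage buffers, placeholder names gpuA/gpuB):
[CODE]
import Metal

// Two different GPUs in the same box (e.g., integrated Intel + discrete AMD).
let gpus = MTLCopyAllDevices()
precondition(gpus.count >= 2, "this sketch assumes two GPUs")
let gpuA = gpus[0], gpuB = gpus[1]

// gpuA writes its results into a shared-storage buffer (visible to the CPU)...
let byteCount = 1024 * MemoryLayout<Float>.stride
let resultOnA = gpuA.makeBuffer(length: byteCount, options: .storageModeShared)!
// ... encode whatever work produces the data on gpuA here ...

// ...and the same bytes are then uploaded into a buffer owned by gpuB.
// Two hops through system memory, no CrossFire/SLI bridge anywhere.
let copyOnB = gpuB.makeBuffer(bytes: resultOnA.contents(),
                              length: byteCount,
                              options: .storageModeShared)!
[/CODE]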
 
Last edited:
The other angle to think about is that Apple has been rumored to be working on their own discrete Mac GPUs, either on their own or with some deeper partnership with the Radeon Technology Group.

Maybe... but it seems more like a three-way deal than just two. Apple could, as you say, get some custom graphics cards made in cooperation with AMD, but I think the Intel-AMD deal for graphics cards is just a stopgap arrangement until Intel comes out with its own discrete GPUs. As for Apple... I can't really say why it would want to make its own GPUs, but who knows... maybe its success with its own ARM-based processors for its iOS devices has given it enough confidence to go its own route with AMD's help.

Someone mentioned here that Apple helped finance the AMD GPU division. Maybe it's more of the same? AMD might even get to market similar GPUs on the PC side of things (thereby supporting Metal on Windows?).

Let's see. Just a few months to go...
 
Last edited:
Someone mentioned here that Apple helped finance the AMD GPU division. Maybe it's more of the same? AMD might even get to market similar GPUs on the PC side of things (thereby supporting Metal on Windows?).

The two possible paths would seem to be either creating a large GPU based on the work they've done on the A series, or buying the Radeon Technology Group, which as you mentioned they're already quietly funding.

My thought is that buying the Radeon Technology Group seems like the better option, depending on AMD's financials.

But all that seems a ways out. I wouldn't expect any movement this year.
 
You still can't mix GPU drivers (chipset/vendor), so no case.

You can't mix Nvidia and AMD specifically. Everything else is fine. Not sure that would change what I'm talking about.

I think you mean support for Crossfire

I'm not talking about Crossfire. Crossfire is two GPUs sharing rendering load. All I'm talking about is redirecting the finished output of a discrete GPU to an internal GPU, with the internal GPU then providing the DisplayPort signal for Thunderbolt.

What's been there since 10.6 is support for the then-unused tcMP's integrated CrossFire.

It is not. Watch the video.

I edited my post above, but DirectX 12 has a very similar feature, I think called Device Pools, which also doesn't require a hardware bridge and works between different vendors.
 
The two possible paths would seem to be either creating a large GPU based on the work they've done on the A series, or buying the Radeon Technology Group, which as you mentioned they're already quietly funding.

My thought is that buying the Radeon Technology Group seems like the better option, depending on AMD's financials.

But all that seems a ways out. I wouldn't expect any movement this year.

Oh, AMD would charge some hefty sum for a buyout, considering all the hype and the tangible performance and cost savings GPUs are touting.

It might be more a matter of just getting the IP and then getting AMD to make some special version for Apple based on Apple's own design.
 
I edited my post above, but DirectX 12 has a very similar feature, I think called Device Pools, which also doesn't require a hardware bridge and works between different vendors.

The DX12 feature is Explicit Asynchronous Multi-GPU, and it's Windows-only, as it is not designed around any open GPU standard. The closest thing is AMD's XConnect, which blends rendered scenes (actually data) into one specific framebuffer. That may require bandwidth proportional to the framebuffer size and framerate: a 120fps 5K display may need the equivalent of 8 dedicated PCIe lanes (at maximum). Considering that even the Nvidia Pascal GP100 barely saturates PCIe x8, it is quite possible with PCIe3, but there is nothing working on macOS like that enabling heterogeneous GPUs, nor in Linux; only on Wincrap DX12, and it is not quite stable, nor polite.
It is not. Watch the video.
Which video?
 
it is quite possible with PCIe3

Yes, it would be. PCIe3 is one of the reasons AMD discontinued using CrossFire bridges.

Edit: The other thing to remember is that PCIe is full duplex. The bandwidth to send the pixel buffers back off the card is separate from the bandwidth sending data into the card. So the 8 PCIe lanes for 120 fps 5K (assuming only PCIe 3) do NOT steal from the bandwidth uploading data into the card. Separate lanes.
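
For anyone who wants to check that 8-lane figure, here's the back-of-the-envelope arithmetic (my own numbers, assuming 4 bytes per pixel and PCIe 3.0's 128b/130b encoding), written out as a few lines of Swift:
[CODE]
// One 5K BGRA frame, streamed at 120 fps:
let frameBytes = 5120.0 * 2880.0 * 4.0       // ≈ 59 MB per frame
let streamGBps = frameBytes * 120.0 / 1e9    // ≈ 7.1 GB/s sustained

// A PCIe 3.0 lane moves 8 GT/s with 128b/130b encoding ≈ 0.985 GB/s each way:
let laneGBps = 8.0 * (128.0 / 130.0) / 8.0
print(streamGBps / laneGBps)                 // ≈ 7.2, so 8 lanes covers it
[/CODE]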

but there is nothing working on macOS like that enabling heterogeneous GPUs

It is working on macOS. Take a look at IOSurface, or watch the video link. The video link uses OpenGL Share Groups; I was looking to see if Metal has any similar support.

Which video?

Weird, it disappeared.
https://download.developer.apple.co...on_422__taking_advantage_of_multiple_gpus.mov
 
Last edited:
That's 8 years old, and sorry, it actually isn't related to your dreamed-of HSA multi-GPU support. Read: https://developer.apple.com/library/content/technotes/tn2229/_index.html

It has to do with using different GPUs in applications, not the GUI. The macOS window manager is still restricted to a single GPU, and only since last year does it allow a 2nd GPU over TB3 with AMD's XConnect (RX 4XX/Vega XX). True HSA is long in the future.

PS: weird, some posts are missing; something happened with the forum's server.
 
That's 8 years old, and sorry, it actually isn't related to your dreamed-of HSA multi-GPU support. Read: https://developer.apple.com/library/content/technotes/tn2229/_index.html

It has to do with using different GPUs in applications, not the GUI. The macOS window manager is still restricted to a single GPU, and only since last year does it allow a 2nd GPU over TB3 with AMD's XConnect (RX 4XX/Vega XX). True HSA is long in the future.

PS: weird, some posts are missing; something happened with the forum's server.

Look. You say you're a developer so I'm just going to try to break this down.

It lets you take an image from one GPU. And load it onto another.

That is exactly what they demoed.

That is exactly what you would need to tunnel GPU output.

I'm not sure what's so complicated about this. They literally demo exactly what I'm talking about. They literally have a window being drawn on one GPU and output on another.

You don't need special hardware. You don't need a bridge. You don't need HSA. You keep pulling in things that have nothing to do with tunneling GPU output.

What the heck do you think Premiere is doing when it's using an external GPU for CUDA but rendering on an internal MacBook Pro display? It's the exact same thing. It's just data crossing between GPUs.

The window manager outputs an image in the end. That image can be moved to another GPU and then output to a display.
 
Last edited: