Dual-GPU and FCP X 10.1

Santabean2000

macrumors 68000
Original poster
Nov 20, 2007
1,795
1,814
I was just thinking about the new Mac Pro with its dual-GPU setup, which got me thinking about the MacBook Pro and its graphics.

At the moment (my limited understanding is) the MacBook Pro switches to the dGPU when the going gets tough. My question though: can both be used simultaneously?

Could we see a boost to MacBook Pro performance in FCP X (and other optimised software) if Apple could leverage both the integrated and dedicated GPUs to work in tandem? [Or would that just set this thing on fire as the heat got outta control..?]
 

maflynn

Moderator
Staff member
May 3, 2009
66,699
33,587
Boston
There's no way for us to enable both GPUs at the same time in the MBPs. It's an interesting concept, but one that is not viable at the moment.
 

leman

macrumors G4
Oct 14, 2008
10,773
5,268
There's no way for us to enable both GPUs at the same time in the MBPs. It's an interesting concept, but one that is not viable at the moment.
Well, you can request an OpenGL context for a particular card and do offscreen rendering on it, so I am sure it is possible. I have not tried it myself for lack of time, but I'll try to get to it over the holidays.

Edit: Well, I just tried it out, it works and is also quite easy. It turns out you can even do on-screen rendering using the offline card! I modified an existing Apple OpenGL demo to select two different renderers. Here is a screenshot of HD4000 and 650M rendering to different views:



P.S. During the run of this application, the online GPU (the one driving the display) is actually the HD4000
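For anyone who wants to reproduce this, here is a rough sketch of the renderer-selection part using the CGL API (this is not the actual modified demo, just my reconstruction of the approach; the FBO rendering itself is elided). It enumerates the renderers, remembers the offline one, and pins a context to it. Builds on OS X with `cc dual_gpu.c -framework OpenGL`:

```c
#include <OpenGL/OpenGL.h>
#include <stdio.h>

int main(void) {
    CGLRendererInfoObj info;
    GLint nrend = 0;
    /* 0xFFFFFFFF = query renderers across all displays */
    CGLQueryRendererInfo(0xFFFFFFFF, &info, &nrend);

    GLint offline_id = 0;
    for (GLint i = 0; i < nrend; i++) {
        GLint id = 0, online = 0, vram = 0;
        CGLDescribeRenderer(info, i, kCGLRPRendererID, &id);
        CGLDescribeRenderer(info, i, kCGLRPOnline, &online);  /* driving a display? */
        CGLDescribeRenderer(info, i, kCGLRPVideoMemoryMegabytes, &vram);
        printf("renderer 0x%05x online=%d vram=%d MB\n", id, online, vram);
        if (!online) offline_id = id;  /* remember the idle GPU */
    }
    CGLDestroyRendererInfo(info);

    /* Pin a pixel format to the offline renderer; without
       kCGLPFAAllowOfflineRenderers this request would fail. */
    CGLPixelFormatAttribute attrs[] = {
        kCGLPFARendererID, (CGLPixelFormatAttribute)offline_id,
        kCGLPFAAllowOfflineRenderers,
        (CGLPixelFormatAttribute)0
    };
    CGLPixelFormatObj pix = NULL;
    GLint npix = 0;
    CGLChoosePixelFormat(attrs, &pix, &npix);
    if (npix > 0) {
        CGLContextObj ctx = NULL;
        CGLCreateContext(pix, NULL, &ctx);
        /* ... FBO-based offscreen rendering on the offline GPU goes here ... */
        CGLDestroyContext(ctx);
    }
    CGLDestroyPixelFormat(pix);
    return 0;
}
```

Creating a second pixel format with the online renderer's ID gives you the second context, which is all "two different renderers" means here.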
 
Last edited:

yjchua95

macrumors 604
Apr 23, 2011
6,725
231
GVA, KUL, MEL (current), ZQN
I was just thinking about the new Mac Pro with its dual-GPU setup, which got me thinking about the MacBook Pro and its graphics.

At the moment (my limited understanding is) the MacBook Pro switches to the dGPU when the going gets tough. My question though: can both be used simultaneously?

Could we see a boost to MacBook Pro performance in FCP X (and other optimised software) if Apple could leverage both the integrated and dedicated GPUs to work in tandem? [Or would that just set this thing on fire as the heat got outta control..?]
I don't think so, because the AMD FirePros in the Mac Pro are wired in a CrossFire-like configuration (AMD's version of NVIDIA's SLI). In the MBPs, the iGPU and dGPU aren't wired like that.
 

archagon

macrumors newbie
Dec 21, 2013
20
10
Does FCP X 10.1 use OpenCL? I would assume it does. If so, there should be no problem getting both chips to work at once. LuxMark, for example, can leverage both GPUs simultaneously.
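For what it's worth, checking which GPUs OpenCL exposes is straightforward. A minimal sketch (this is my own illustration, not FCP X's or LuxMark's actual code); on a dual-GPU MacBook Pro both chips should be listed, and each can be driven from its own command queue. Builds on OS X with `cc devices.c -framework OpenCL`:

```c
#include <OpenCL/opencl.h>
#include <stdio.h>

int main(void) {
    cl_platform_id platform;
    clGetPlatformIDs(1, &platform, NULL);

    /* Ask for up to 4 GPU devices on the platform */
    cl_device_id gpus[4];
    cl_uint n = 0;
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 4, gpus, &n);

    for (cl_uint i = 0; i < n; i++) {
        char name[128];
        clGetDeviceInfo(gpus[i], CL_DEVICE_NAME, sizeof(name), name, NULL);
        printf("GPU %u: %s\n", i, name);
    }
    return 0;
}
```

An app that wants both chips busy simply creates one `cl_command_queue` per device and splits the work between them.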
 

leman

macrumors G4
Oct 14, 2008
10,773
5,268
I don't think so, because the AMD FirePros in the Mac Pro are wired in a CrossFire-like configuration (AMD's version of NVIDIA's SLI). In the MBPs, the iGPU and dGPU aren't wired like that.
What is your source on that? Besides, I have clearly shown above that you can use the dGPU and the iGPU at the same time :confused:
 

leman

macrumors G4
Oct 14, 2008
10,773
5,268
Yes, but the performance gains would be nothing compared to CrossFire/SLI configurations.
Did you do any benchmarks on that, or are you speculating? Of course real SLI is faster: the two GPUs are equally fast, and the copy is more efficient thanks to the bridge. Still, programming the GPUs separately can get you a healthy boost if you know what you are doing. At least it's possible on OS X; furthermore, it's seamlessly supported by the OpenGL API. I don't believe there is a way to do it in Windows at all.

Edit: it seems it's possible with DirectX on Windows, so disregard my last sentence.
 
Last edited:

priitv8

macrumors 68040
Jan 13, 2011
3,632
476
Estonia
Besides, I have clearly shown above that you can use the dGPU and the iGPU at the same time :confused:
It sounds really weird. Unfortunately, the schematics of the Retina Pros can't be found anywhere (yet), but in all previous iterations the two GPUs were fully parallel and independent, converging only at the very output, at the LVDS and mDP level, via muxes (see attachment).
So I wonder: how can the dGPU render anything on screen if the iGPU is active?
Besides, they both have different VRAM, hence different framebuffers.
Despite that, you can always task the dGPU that is disconnected from the display with computation via CUDA/OpenCL, just as Sorenson Squeeze does.
If anyone knows better, I'd be delighted to hear an explanation.
 

Attachments

leman

macrumors G4
Oct 14, 2008
10,773
5,268
So I wonder how can the dGPU render anything on screen, if iGPU is active?
There is nothing puzzling about this at all. The system renders to a texture on the dGPU, copies it to the system memory and then uses this texture as the backing layer for the view. The performance penalty of the copy is minimal in most cases. The 8x PCIe 3.0 lanes offer enough bandwidth to download a full HiDPI 3840x2400 colorbuffer from the dGPU at over 200 frames per second. In fact, this is how Nvidia Optimus and also bridgeless SLI/Crossfire work. OpenCL/CUDA use exactly the same approach.
 

priitv8

macrumors 68040
Jan 13, 2011
3,632
476
Estonia
It's reversed: the texture rendered on the iGPU is being copied to the dGPU. I just tried it; works too.
I see. I reckon this copying occurs only because you've specifically requested an OpenGL context on the inactive GPU? Meaning, without asking for this, rendering occurs fully on the active GPU?
 

leman

macrumors G4
Oct 14, 2008
10,773
5,268
I see. I reckon this copying occurs only because you've specifically requested an OpenGL context on inactive GPU? Meaning, without calling for this, the rendering can fully occur only on the active GPU?
Yes, this is the default behaviour. Even more, OS X will usually switch to the dGPU as soon as an application starts to use OpenGL. The application can also inform the OS that it knows how to work with multiple GPUs and would actually prefer the iGPU, in which case the currently active GPU (whichever that is) is used. The GPU can change on the fly in the latter case; your application is informed of the change and is expected to adjust its behaviour accordingly (e.g. disable/enable NVIDIA-specific functionality). Most applications can keep this fairly trivial, and because adding support for GPU switching (and thus potentially saving energy) usually takes just two lines of code, there is no excuse not to do so.
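Those "two lines" are, roughly, an Info.plist key plus one pixel-format attribute. A sketch using the CGL spelling (NSOpenGL has the equivalent NSOpenGLPFASupportsAutomaticGraphicsSwitching attribute); the function name here is my own:

```c
/* Opting into automatic graphics switching:
   1) Info.plist:  <key>NSSupportsAutomaticGraphicsSwitching</key><true/>
   2) Create the pixel format with the switching-aware attribute: */
#include <OpenGL/OpenGL.h>

CGLPixelFormatObj make_switchable_format(void) {
    CGLPixelFormatAttribute attrs[] = {
        kCGLPFAAccelerated,
        kCGLPFASupportsAutomaticGraphicsSwitching, /* "I can handle renderer changes" */
        (CGLPixelFormatAttribute)0
    };
    CGLPixelFormatObj pix = NULL;
    GLint npix = 0;
    CGLChoosePixelFormat(attrs, &pix, &npix);
    return pix; /* the OS may now keep the iGPU active and switch on the fly */
}
```

Without the attribute, merely creating an accelerated context is what forces the dGPU on.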
 