
SpiffyFilms

macrumors newbie
Original poster
May 6, 2016
Western Kentucky, USA
Hello! Before I ask my question, here are the specs of my Mac Pro:
2009 Mac Pro (4,1 -- firmware upgraded to 5,1)
8GB DDR3 ECC RAM
Intel Xeon W3530, 2.8GHz, 4 cores / 8 threads
OS X El Capitan 10.11.4

My question: I have two GPUs in the system. One is the stock Apple-branded
GT120 and the other is an EVGA GTX 650. Due to the whole "no boot screen" issue
with non-Apple GPUs, I keep my monitor plugged into the GT120. What I wanted to
ask is whether Final Cut Pro X will use the strongest GPU in the system for rendering, or
the one the monitor is plugged into. If it depends on the monitor,
what would happen if I had dual monitors, one plugged into the GTX 650 and one into
the GT120, and then moved the Final Cut Pro X window to the monitor on the GTX?
Another thing: I was wondering if a similar concept applies to Parallels virtualization or
even to gaming...

(unless there's a way to flash the GTX... but looking into it, there don't seem to be any
available ROMs for it)
 
OK SO I THINK I JUST FOUND OUT SOMETHING

THEORY:
If you have two GPUs in a Mac Pro, with a monitor plugged into each, the
rendering will go to the GPU driving the display that is showing the work,
i.e. with my GTX 650 and GT120, if the Final Cut Pro X window is on
the monitor plugged into the GTX 650, it will use the GTX 650 to render.

Testing:
So I put my GT120 and my GTX 650 in slot 1 and slot 2 respectively, and
plugged a monitor into each. I then ran two instances of Cinebench, one
running on the GT120's monitor and the other on the GTX 650's monitor. What I found
was that it used the GT120 for both instances. It was just as people said.

BUT THEN IT HIT ME
In "About This Mac" it registered the 1st monitor as being on the GT120 and the 2nd
on the GTX 650, but what if the 2nd monitor were set as the primary monitor?

So I did just that: I set the primary monitor to the one plugged into the GTX 650 and
then ran the two instances of Cinebench, and wouldn't you believe it! It started using
the GTX 650 for both instances!

CONCLUSION
It would seem that the way OS X handles multiple GPUs is by rendering on the one
driving the primary monitor. Meaning that in reality, if you want dual GPUs,
your second or third GPU only needs to be strong enough to run its monitor, as long
as the primary GPU is strong enough to do all the rendering.

I hope this clears things up for a lot of people, as I've really never seen a clear answer
for how this works. Even if this is basic logic and "known" information, I couldn't find
a solid, straightforward answer, so here I am giving just that.
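
For anyone who wants to double-check this on their own machine without eyeballing FPS numbers, here's a minimal C sketch (just my own illustration, not what Cinebench does internally) that asks OS X for a hardware-accelerated OpenGL context and prints the name of the GPU backing it. Set a different primary display and run it again; the renderer name should flip:

[CODE]
/* gpu_renderer.c -- print which GPU OS X gives an accelerated OpenGL context to.
   Sketch only. Build with: clang gpu_renderer.c -framework OpenGL -o gpu_renderer */
#include <stdio.h>
#include <OpenGL/OpenGL.h>
#include <OpenGL/gl.h>

int main(void) {
    /* Ask for any hardware-accelerated renderer and let the OS pick the GPU. */
    CGLPixelFormatAttribute attrs[] = { kCGLPFAAccelerated, (CGLPixelFormatAttribute)0 };
    CGLPixelFormatObj pix = NULL;
    GLint npix = 0;
    if (CGLChoosePixelFormat(attrs, &pix, &npix) != kCGLNoError || pix == NULL) {
        fprintf(stderr, "No accelerated pixel format found\n");
        return 1;
    }

    CGLContextObj ctx = NULL;
    CGLCreateContext(pix, NULL, &ctx);
    CGLSetCurrentContext(ctx);

    /* GL_RENDERER names the GPU actually backing this context. */
    printf("Renderer: %s\n", (const char *)glGetString(GL_RENDERER));

    CGLSetCurrentContext(NULL);
    CGLDestroyContext(ctx);
    CGLDestroyPixelFormat(pix);
    return 0;
}
[/CODE]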
 
allows you to use faster CPUs and faster RAM
More specifically, it allows you to use the "Westmere" Xeon CPUs that are in the 2010-2012 Mac Pros, as well
as 1333MHz RAM without underclocking it to 1066MHz.

It also allows sound output through Mini DisplayPort, with the exception of the GT120. It may also allow sound
output through HDMI, but I haven't tested this yet.
 
So how do you explain how the Late 2013 Mac Pro has two GPUs by default, but the non-display one is reserved for rendering only?

Not trying to troll, just wanting to understand why; I might pick up a used nMP for a BOINC project.
 
I'm not too knowledgeable on what exactly they did there, but I bet they have some special chipset
and driver that works like Crossfire but is exclusive to the nMP. Seeing how proprietary everything in that
model is, that wouldn't surprise me at all. Both GPUs are counted as one beefy GPU. Again, it's basically
their own little version of Crossfire.

But as far as a 2008-2012 Mac Pro goes, it seems you just need one beefy GPU for your primary display and all the rendering, and you can have an Apple-branded something to power a second monitor, and it will use the beefy primary
GPU to render all your junk. (Actually, I don't know about the 2008, but I wouldn't think it'd be different; same software and same
general concept.)

EDIT:
OK, so it would seem that I was wrong about the nMP; it is basically exactly like the old Mac Pro, except you don't have to have a display set as default. One GPU handles the monitor control and
the other does all the tasks like rendering. The only way you can truly combine the two is if you
use Windows and set up Crossfire. This kinda makes sense, though, as that leaves a single powerful
GPU that can power a bunch of high-resolution monitors, but still leaves one to actually render stuff.
(This of course assumes that you'd be using a bunch of high-resolution monitors.)
 
So does this mean I can get a PC GTX 1080 (when it comes out, assuming it works with Nvidia drivers) and also keep my lowly GT120, and have them both plugged into my monitor so that I can have a boot screen and run with the powerful card, and not have to switch cards whenever I want a boot screen?

I would think I would have to change the input on the monitor depending on whether I wanted to see the boot screen or not.
 
Assuming the card is compatible power-wise, yeah, but I'd recommend getting a cheap VGA monitor to plug into the GT120 like I did (plus having a second monitor is nice). That way you get the boot screen and all without having to touch anything.
 
You can use both GPUs' power at the same time.

With a primary display set, only one GPU will be used, regardless of which monitor you use.

However, if you tick this box:
[screenshot: the "Displays have separate Spaces" checkbox in Mission Control preferences]

then each GPU becomes responsible for its own connected display.

In this screenshot, both of my 7950s (R9 280) are working at the same time. One is running the Valley benchmark on its own screen, and the other is running the Heaven benchmark on its own screen. Both GPUs are working very hard.
[screenshot: both 7950s under load running Valley and Heaven]

For FCPX, it's much simpler: as long as the GPU is detected, it will be used for rendering, even with no monitor connected.

In the following case, I plugged both of my monitors into one of the cards, but both cards are still used for rendering. Same as the nMP: even if one GPU is never connected to any monitor, it can be used for rendering in FCPX.
[screenshot: FCPX using both GPUs for rendering]
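
To see the "no monitor needed" point for yourself, here's a quick C sketch (just an illustration of the idea, not FCPX's code) that lists every GPU OpenCL exposes. On a cMP, a card shows up in this list whether or not a display is plugged into it, which is why FCPX can use it:

[CODE]
/* cl_devices.c -- list the GPUs OpenCL exposes, monitor attached or not.
   Sketch only. Build with: clang cl_devices.c -framework OpenCL -o cl_devices */
#include <stdio.h>
#include <OpenCL/opencl.h>

int main(void) {
    cl_platform_id platform;
    cl_uint nplat = 0;
    if (clGetPlatformIDs(1, &platform, &nplat) != CL_SUCCESS || nplat == 0) {
        fprintf(stderr, "No OpenCL platform found\n");
        return 1;
    }

    cl_device_id devices[8];
    cl_uint ndev = 0;
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 8, devices, &ndev);

    /* Every GPU listed here is available as a compute device for apps
       like FCPX, display connected or not. */
    for (cl_uint i = 0; i < ndev; i++) {
        char name[256];
        clGetDeviceInfo(devices[i], CL_DEVICE_NAME, sizeof(name), name, NULL);
        printf("GPU %u: %s\n", i, name);
    }
    return 0;
}
[/CODE]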
 
Don't use Cinebench.

It only tests the CPU in OS X.
Incorrect, it does test the GPU.
With the nMP, as I said earlier, it uses one GPU as a display controller and the other to render. It cannot and does not pool the GPUs together.

So, to sum up: basically, what you're saying is that my original theory can be true if each GPU's monitor is its own Space?
 
Well, if it tests it the way it should, then there is something really wrong with my 7950... ;)
[screenshot: Cinebench result on the 7950]
It just does a crappy job of testing, but what we're looking for here is not GPU performance but control
over which GPU is used for rendering. When I initially ran both instances on separate monitors, I got
12-13FPS, concluding it was using the GT120. When I switched the default display, the result jumped
up to 35FPS, so obviously it had switched to the GTX 650.

I used Cinebench because it's small, simple, and gives me the stats I needed for this test.
 
Yes, but even if the 1080 works, I doubt the drivers will be optimised for Pascal. There isn't even Maxwell optimisation after two years; all you get is a backwards-compatibility layer. That's OK for some people. Then there is the issue of Nvidia's recent web drivers having terrible bugs in apps like Photoshop, and Nvidia is not even fixing them after months.
 
@SpiffyFilms Totally understand what you're saying and what you're using Cinebench for, but it is CPU-dependent, and that's why it doesn't test the GPU (at least not as any kind of GPU benchmark). :)
 
Very interested to know the outcome of this, as I use a 750 Ti (MVC-flashed) as my main graphics card, plugged into a 4K monitor (my only display), and was going to install my 7950 with no monitor plugged in, hoping it would be used for GPGPU stuff, especially OpenCL stuff like FCPX.

I know that with the nMP, Apple updated FCPX to be able to utilize dual identical GPUs. But I've never seen anything definitive about what it does in cases where there are two mismatched GPUs: does it use one or both, and if one, which one?

I think DaVinci Resolve by default dedicates the GPU your monitor is plugged into to the GUI, and a second, unconnected GPU is used for acceleration tasks.
 
For OpenCL or GPGPU, the GPUs don't talk to each other, so I don't think they need to be identical (or belong to the same family).

Haha, I think Cinebench taxes (uses) the GPU a bit, but doesn't really test (benchmark) the GPU (unless you have a CPU with very good single-core performance but a very poor GPU).

Anyway, in this test, all you want to know is which GPU is working, so Cinebench may do the job properly. But IMO, it's better to check the power consumption to decide which GPU is working.

Regarding the nMP, both GPUs work together. I totally agree that only one GPU is connected to the monitor(s). However, I'm quite sure BOTH GPUs can do the rendering / analysing. The compute GPU is not alone. In Windows, they can even work under Crossfire, so they are surely connected.

Back to the topic: if each GPU has its own Space, then each GPU becomes responsible for its own monitor. If they don't have their own Spaces, then the GPU connected to the primary display becomes responsible for the OpenGL stuff, even when you are using another display that is connected to another GPU.

And all this is only true for OpenGL. For OpenCL, both GPUs can work together regardless of whether they have their own Spaces.
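
On the OpenGL side, the display mask is how a program can opt out of the primary-display rule. A small C sketch of the idea (assumptions: CGL on OS X, error handling trimmed) that pins an OpenGL context to the GPU driving a chosen display instead of whatever the primary display dictates:

[CODE]
/* pin_gpu.c -- tie an OpenGL context to the GPU driving a particular display.
   Sketch only. Build with:
   clang pin_gpu.c -framework OpenGL -framework ApplicationServices -o pin_gpu */
#include <stdio.h>
#include <OpenGL/OpenGL.h>
#include <OpenGL/gl.h>
#include <ApplicationServices/ApplicationServices.h>

int main(void) {
    /* The main display here; pass a different CGDirectDisplayID to target
       the GPU behind another monitor. */
    CGDirectDisplayID display = CGMainDisplayID();
    CGOpenGLDisplayMask mask = CGDisplayIDToOpenGLDisplayMask(display);

    CGLPixelFormatAttribute attrs[] = {
        kCGLPFAAccelerated,
        kCGLPFADisplayMask, (CGLPixelFormatAttribute)mask,
        (CGLPixelFormatAttribute)0
    };
    CGLPixelFormatObj pix = NULL;
    GLint npix = 0;
    if (CGLChoosePixelFormat(attrs, &pix, &npix) != kCGLNoError || pix == NULL) {
        fprintf(stderr, "No renderer for that display\n");
        return 1;
    }

    CGLContextObj ctx = NULL;
    CGLCreateContext(pix, NULL, &ctx);
    CGLSetCurrentContext(ctx);
    printf("Pinned renderer: %s\n", (const char *)glGetString(GL_RENDERER));

    CGLSetCurrentContext(NULL);
    CGLDestroyContext(ctx);
    CGLDestroyPixelFormat(pix);
    return 0;
}
[/CODE]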
 
Last time I checked, a 512MB 8800 GT got the same score as a 3GB GTX 580 in Cinebench. It is a complete and utter joke.

Once a week I get an email from a customer who thinks they got a dud card because the Cinebench score is identical to their old card's. It is a complete and total waste of time, and the authors should be embarrassed.
 
"By default, one GPU is setup for display duties while the other is used exclusively for GPU compute workloads. GPUs are notoriously bad at context switching, which can severely limit compute performance if the GPU also has to deal with the rendering workloads associated with display in a modern OS"
-Anandtech
And Crossfire technology is only available in Windows.

Again, though, we don't need accurate benchmarking, just identification of which GPU is being used to render.

I also decided to test this with some more realism by running the game Portal 2 twice: once
with the GT120's monitor set as default but the game itself on the GTX 650's monitor, and then again
with the GTX 650's monitor set as default but the GT120's monitor actually displaying the game. The
results were the same as with Cinebench: it used the GPU set as default for rendering.

Also, what the hell was Apple thinking putting a GT120 in a $2,500 computer? Even in 2009 that's
ridiculous.
 
"By default, one GPU is setup for display duties while the other is used exclusively for GPU compute workloads. GPUs are notoriously bad at context switching, which can severely limit compute performance if the GPU also has to deal with the rendering workloads associated with display in a modern OS"
-Anandtech
And crossfire technology is only available in Windows.


Again though we don't need accurate benchmarking, just identification as to which GPU is being used to render.

I also decided to test this with some more realism to it by running the game portal 2 twice, once
with the GT120 monitor set as default, but the game itself in the GTX 650 monitor, and then again
with the GTX 650 monitor set as default but the GT120 monitor actually displaying the game. The
results were the same as with the cinebench, it used the GPU set as default for rendering.

Also what the hell was Apple thinking putting a GT120 in a $2,500 computer? Even in 2009 that's
ridiculous.

Well, the compute-performance limiting is another issue. We were discussing whether those GPUs can only work independently, right? And the answer is NO.

For displaying an image, yes, only one GPU in the nMP can do the job. However, for rendering, no: both GPUs CAN WORK TOGETHER. We are talking about "can or cannot", not "is it optimal". Your quote is NOT relevant to the point we were discussing. Therefore, it means "it can, and it does pull both GPUs together when rendering".

You got the 1st part right: only one GPU can handle the monitors. However, you misunderstand the 2nd part. "The 2nd GPU can only be used to compute" doesn't mean that "the 1st GPU cannot be used to compute". The 1st sentence does NOT imply the 2nd one.

So, on the 6,1, only one GPU can display, but BOTH GPUs CAN COMPUTE. They can work together. That's exactly what happens in FCPX. The GPU that has to handle the monitors may not perform as well on computation, but it does make some contribution. If a GPU that is responsible for display could not compute, then no single-GPU system could play AAA games; you would need one GPU to handle the monitor and another one to render the 3D picture.

Crossfire is just another example of both GPUs on the 6,1 working together, even though it's not a good example, because it's available in Windows only. It is another proof that both GPUs can do the rendering together (as long as the software supports it, same as FCPX).
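
To make the "both GPUs compute together" point concrete, here's a bare-bones C sketch of how a supporting app could set this up in OpenCL (my illustration only; FCPX's actual internals aren't public): one context holding both GPUs and one command queue per card, so work can be split between them with no Crossfire involved:

[CODE]
/* dual_gpu_cl.c -- one OpenCL context over two GPUs, one queue per card.
   Sketch only; error checks trimmed. Build with:
   clang dual_gpu_cl.c -framework OpenCL -o dual_gpu_cl */
#include <stdio.h>
#include <OpenCL/opencl.h>

int main(void) {
    cl_platform_id platform;
    clGetPlatformIDs(1, &platform, NULL);

    cl_device_id gpus[2];
    cl_uint ndev = 0;
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 2, gpus, &ndev);
    if (ndev < 2) {
        fprintf(stderr, "Need two GPUs for this demo\n");
        return 1;
    }

    /* One shared context, one queue per GPU: an app enqueues half the
       kernels on each queue and both cards contribute, whether or not
       a display is connected to them. */
    cl_context ctx = clCreateContext(NULL, ndev, gpus, NULL, NULL, NULL);
    cl_command_queue q0 = clCreateCommandQueue(ctx, gpus[0], 0, NULL);
    cl_command_queue q1 = clCreateCommandQueue(ctx, gpus[1], 0, NULL);

    printf("Both GPUs set up for compute\n");

    clReleaseCommandQueue(q0);
    clReleaseCommandQueue(q1);
    clReleaseContext(ctx);
    return 0;
}
[/CODE]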
 
The thread's been dead for a year, but now that I've owned the Mac Pro for a good while, I'll say this:
In the 5,1 Mac Pro I kept the GT120 + GTX 650 for a while, and with some testing I did find that when the
monitors share the same Space, it will prefer the primary display's GPU for rendering tasks, with the exception
of programs that utilize multiple GPUs.

Final Cut Pro X did not care at all which display was primary, and render performance stayed the same.

Portal 2 (using this game since the GT120 can BARELY run it at low settings) did care which GPU was set to
primary. When the GTX 650's display was used as primary, framerates were much higher than when the GT120's
display was.

As far as the 6,1 nMP is concerned, I really can't make any good theories on how it handles the multi-GPU setup. If it does work the same as above, that could explain why I've heard gamers complain about performance in games in macOS vs. in Windows, since Windows has drivers that force games to see the two GPUs as one big GPU...
 
Only apps that are able to use dual GPUs will use both GPUs; it's not the OS but the apps.
The Windows thing doesn't force games to see "one" GPU; it just splits up jobs for games that are compatible with both cards, with mixed results depending on Nvidia/ATI and the mode used for more than one card.
 