
FatGuy007

macrumors 6502
Original poster
Apr 8, 2012
326
0
New York
I just downloaded gfxcardstatus and it shows either the GT 650M or the integrated Intel HD 4000; I can switch to one or the other. But what if I want the ultimate powerhouse and use both at once, so I can play games in a breeze and graphics design can be instant?
Any suggestions are helpful.
 

sofianito

macrumors 65816
Jan 14, 2011
1,207
2
Spain
Is there a reason for that?

As an example, what would happen if someone is in front of an open door and wants to close it, while someone else is behind the door and wants to open it?

The screen is a shared resource that cannot be accessed concurrently by both GPUs... If that happened, the result would probably look like a Picasso painting... :D

On the other hand, it can be accessed sequentially. In fact, this is what the operating system does: when you are doing light graphics work, the OS uses the integrated GPU, but when you do heavy work, it automatically switches to the discrete GPU.
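
If it helps, here is a toy sketch of the kind of decision I mean - a made-up function, just to show that only one GPU drives the screen at a time and the OS picks based on workload:

Code:
# Toy sketch of automatic graphics switching: only one GPU owns the screen
# at a time, and the OS picks which one based on the current workload.
# (Grossly simplified; the real decision is made by the OS and the drivers.)

def pick_gpu(heavy_graphics_work: bool) -> str:
    if heavy_graphics_work:
        return "discrete (GT 650M)"
    return "integrated (HD 4000)"

print(pick_gpu(heavy_graphics_work=False))  # browsing, email -> integrated GPU
print(pick_gpu(heavy_graphics_work=True))   # gaming, 3D work -> discrete GPU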

HTH.
 

tninety

macrumors regular
Apr 18, 2010
244
5
Banned!
The Lucid Hydra does this and could probably be modified to use an integrated GPU, but of course it's not on the rMBP. :rolleyes:
 

tninety

macrumors regular
Apr 18, 2010
244
5
Banned!
Maybe one day, a GPU might be able to have several cores like CPUs actually do...

GPUs are already "multi-core." They were before SMP was widely used on CPUs. Look at NVIDIA's and ATI's specs for their GPUs - they list the number of CUDA cores or stream processors, which are analogous to cores.
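
Just to make the "lots of little cores" idea concrete, here is a toy sketch in Python where CPU worker processes stand in for GPU cores - an analogy for how the same tiny program runs over every pixel in parallel, not a real GPU workload:

Code:
# Toy model of a GPU's many small "cores": the same per-pixel function is
# applied to every pixel, and the hardware spreads those pixels across
# hundreds of CUDA cores / stream processors in parallel.
# Here CPU worker processes stand in for the GPU cores.
from multiprocessing import Pool

WIDTH, HEIGHT = 64, 64

def shade(pixel_index: int) -> float:
    x, y = pixel_index % WIDTH, pixel_index // WIDTH
    return ((x * y) % 255) / 255.0        # stand-in for real shading math

if __name__ == "__main__":
    with Pool() as pool:
        frame = pool.map(shade, range(WIDTH * HEIGHT))
    print(f"shaded {len(frame)} pixels, each one independently")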
 

sofianito

macrumors 65816
Jan 14, 2011
1,207
2
Spain
GPUs are already "multi-core." They were before SMP was widely used on CPUs. Look at NVIDIA's and ATI's specs for their GPUs - they list the number of CUDA cores or stream processors, which are analogous to cores.

So, they already embed SLI?

Is the HD 4000 multi-core?
 

tninety

macrumors regular
Apr 18, 2010
244
5
Banned!
So, they already embed SLI?

Is the HD 4000 multi-core?

SLI is an NVIDIA proprietary term for using more than one GPU to render a single game, but GPUs themselves are already massively parallel and have many little "cores" inside them.

It's not really correct to call it internal SLI, because SLI by definition requires two GPUs, but the reason SLI scaling is so good is that it's just an extension of what the GPU is already doing internally (slap two CPUs together and they can only be faster if you write a heavily multithreaded app - slap two GPUs together and games will run almost twice as fast with no effort).

The HD 4000 has 16 execution units: http://www.anandtech.com/show/5626/ivy-bridge-preview-core-i7-3770k/2
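
To put rough numbers on the "almost twice as fast with no effort" point, here is a little Amdahl's-law sketch (the fractions are made up purely for illustration):

Code:
# Why a second GPU helps "for free" while a second CPU often doesn't:
# rendering is almost entirely parallel work, so Amdahl's law barely bites.
# Toy fractions, not benchmarks.

def speedup(parallel_fraction: float, workers: int) -> float:
    """Amdahl's law: overall speedup when only part of the work can be split."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / workers)

print(f"2 CPUs, app only ~30% threaded:  {speedup(0.30, 2):.2f}x")  # ~1.18x
print(f"2 GPUs, rendering ~99% parallel: {speedup(0.99, 2):.2f}x")  # ~1.98x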
 

dusk007

macrumors 68040
Dec 5, 2009
3,411
104
The only way you could use both GPUs, at least in theory, would be if one were responsible for one screen and the other for a second screen.
When two GPUs work together, it usually means each one is responsible for a different frame:
GPU 1 handles all the even frames.
GPU 2 handles all the odd frames.

As you can imagine, that gets really difficult if they aren't equally fast. In the real world it is problematic even with equally fast GPUs.

The 650M is about 3-4 times as fast as the HD 4000.
The latter could do every fourth frame or something like that. Aside from other problems, that would mean quite a lag - I don't think anybody would want that. At 60 Hz you would see some 50 ms of lag on those frames.
The only sensible approach would be to move some post-processing, like morphological AA, to the HD 4000 while keeping the main rendering on the dGPU. Even that has never been worth the additional processing lag or the trouble to implement.
It wouldn't work with Apple notebooks anyway; it would require Optimus-like GPU switching, not the boneheaded way Apple does it.
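
To spell out where that 50 ms comes from, here are rough numbers (the 3x ratio and the 60 Hz figure are the assumptions from above, not measurements):

Code:
# Back-of-the-envelope numbers for the lag problem with uneven AFR.
# Assumptions: 60 Hz refresh, the discrete GPU keeps up with one frame
# per refresh, and the integrated GPU is roughly 3x slower.

refresh_hz = 60
frame_interval_ms = 1000.0 / refresh_hz      # ~16.7 ms between frames

dgpu_frame_ms = frame_interval_ms            # 650M: one frame per refresh
igpu_frame_ms = dgpu_frame_ms * 3.0          # HD 4000: ~3x slower, ~50 ms

print(f"frame interval at {refresh_hz} Hz: {frame_interval_ms:.1f} ms")
print(f"HD 4000 frame finishes after:    {igpu_frame_ms:.1f} ms")
print(f"extra delay on its frames:       {igpu_frame_ms - frame_interval_ms:.1f} ms")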


BTW, it is not just GPUs themselves that have multiple processing units that can be described as cores. There are also dual-GPU solutions like the GTX 690, which is two GPUs on one card.
Yet neither is really equivalent to a CPU core. The former is more like an ALU or a floating-point execution unit, and the latter is like two CPUs on the same logic board or die.
 

tninety

macrumors regular
Apr 18, 2010
244
5
Banned!
When two GPUs work together, it usually means each one is responsible for a different frame:
GPU 1 handles all the even frames.
GPU 2 handles all the odd frames.

As you can imagine, that gets really difficult if they aren't equally fast. In the real world it is problematic even with equally fast GPUs.

As I said, the Lucid Hydra chip solves this issue by dividing up a single frame between the two GPUs.

http://www.anandtech.com/show/2910/2

The additional overhead of the Lucid Hydra, and the fact that you'd only be adding the power of a weak GPU like the HD 4000 to the 650M, probably makes it not worth it.
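
Just to illustrate the idea - this is only the load-balancing arithmetic with assumed speeds, not how the Hydra chip actually partitions work (it splits tasks at the graphics-API level):

Code:
# Toy illustration of splitting one frame between two unequal GPUs in
# proportion to their speed. Assumed speeds and resolution, for illustration only.

frame_height = 1800                    # e.g. the rMBP's vertical resolution
dgpu_speed, igpu_speed = 3.0, 1.0      # 650M assumed ~3x the HD 4000

total = dgpu_speed + igpu_speed
dgpu_rows = round(frame_height * dgpu_speed / total)   # ~1350 rows for the 650M
igpu_rows = frame_height - dgpu_rows                   # ~450 rows for the HD 4000

print(f"650M renders {dgpu_rows} rows, HD 4000 renders {igpu_rows} rows")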
 

dusk007

macrumors 68040
Dec 5, 2009
3,411
104
I also doubt that the power consumption of such an extra chip would be worth it.
Lucid Hydra is really only worth it if you drive some high-end gaming rig; in any other case, simply buying a faster GPU or notebook serves you better.
Also, it never really took off because the market for it is way too small. SLI and CrossFire got better driver support and don't require any extra hardware. The micro-stutter, lag and all the other AFR problems exist, but the majority doesn't care because they only use one GPU anyway and just upgrade. For the small enthusiast group, AFR is, and I suppose will remain, the only viable option because it is simple.
The Hydra chip is old; the 2nd gen came out but was only ever used on one or two ultra-high-end motherboards. I am not sure if it still exists. There is just not enough money in that market to make it worthwhile and really work out the kinks.

With an HD 4000 and its poor driver performance, it would net 20% more speed in the best of cases, and zero or less in many others. In exchange you get artifacts, other errors and instability. It would never be worth it. In IT, everything is a trade-off between programmability and speed: even if the speed gain is great, if it is too hard to use and make work, it is not viable.

BTW, I also think that some special post-processing could be done simply by changing the Optimus/ADS drivers, without any extra chip necessary. The output framebuffer is written back to the HD 4000 anyway; usually it just pushes it out to the screen while the EUs and everything else do nothing.
With some extra driver tuning, one could force the HD 4000 to run post-processing effects like morphological AA, artifact reduction and similar work that is not geometry-dependent. Still, the use would probably be limited, as only MLAA, to my knowledge, would really be of any use here. MSAA is not that expensive and is still better in most cases than MLAA, so why bother offloading one feature that most people would probably end up not using?
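
Here is the rough arithmetic behind that "20% in the best of cases" estimate - the 1/4 speed ratio and the overhead number are assumptions, just to show how low the ceiling is:

Code:
# Rough ceiling on what offloading work to the HD 4000 could buy.
# Assumptions: the HD 4000 is worth ~1/4 of a 650M, and splitting/merging
# the work costs a few percent. Not measurements.

dgpu_speed = 1.0          # normalize the 650M to 1.0
igpu_speed = 0.25         # HD 4000 assumed ~1/4 as fast
overhead   = 0.05         # hand-wavy cost of splitting and merging the work

ideal_gain = igpu_speed / dgpu_speed             # 25% in a perfect world
after_overhead = max(0.0, ideal_gain - overhead) # ~20% at best

print(f"ideal gain:     {ideal_gain:.0%}")
print(f"after overhead: {after_overhead:.0%}")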
 

AZREOSpecialist

Suspended
Mar 15, 2009
2,354
1,278
I just downloaded gfxcardstatus and it shows either the GT 650M or the integrated Intel HD 4000; I can switch to one or the other. But what if I want the ultimate powerhouse and use both at once, so I can play games in a breeze and graphics design can be instant?
Any suggestions are helpful.

:rolleyes:
 