Discrete GPU, iGPU, what's the difference?

Discussion in 'MacBook Pro' started by sarakn, Oct 3, 2013.

  1. sarakn macrumors 6502a


    Feb 8, 2013
    On the Waiting for Haswell thread, I see a lot of posts about iGPU/discrete GPU. Could someone please explain the difference/benefits of one over the other?

    In this wait for Haswell, if the only difference is improved battery life - as in going from 7 hours to 8 hours - I might as well purchase the 2013 model today. I have a Dell laptop with 2hr battery life, at best, so 7hrs would be a 250% improvement.

    It's nice to get the latest and greatest, but it sounds as if there might not be a point in waiting.

  2. 53kyle macrumors 65816


    Mar 27, 2012
    Sebastopol, CA
    iGPU, or integrated GPU, is a GPU built into the same chip as the CPU by the CPU manufacturer. It is usually much less powerful than the dGPU, or dedicated GPU, which is its own piece of hardware. A dGPU is much better for video editing and gaming.
  3. actuallyinaus macrumors regular

    Feb 13, 2013
    igpu = integrated gpu, it is built into the cpu chip
    dgpu = discrete gpu, it is a separate chip
  4. appleii.c macrumors 6502


    Mar 18, 2013

    I'm with you. I'm hoping it does have a dedicated GPU, or at least provide a model with one. If not I will likely get the 2013 (early) models with the nvidia chip. The Iris does look interesting but unless early gaming benchmarks show an improvement, or at least "on par" gaming performance with the current 650M, I'll stick with the current lineup. I do enjoy the occasional gaming on my 2011 MBP and I've been waiting to upgrade for a few months now.
  5. actuallyinaus, Oct 3, 2013
    Last edited: Oct 3, 2013

    actuallyinaus macrumors regular

    Feb 13, 2013
    if you read from post #430 onwards:

    (iGPU) HD 5200 Iris Pro is better/faster for:
    - OpenCL programs: Photoshop, Final Cut Pro, After Effects, Maya, CAD, Sony Vegas, Blender, SolidWorks, LightWave, ...

    (dGPU) 750M/755M is better/faster for:
    - CUDA-only programs, gaming


    - the 750M is just an overclocked version of the 650M (it's the same Kepler GK107 chip)
    - the Iris Pro comes close to, but can not match, the 650M


    the Iris Pro is much better at OpenCL, here's an OpenCL benchmark:
    NotebookCheck also has some more specific benchmarks (Maya...) if you scroll down to the SPECviewperf 11 section


    the questions are:
    - will Apple only have the Iris Pro (HD 5200)?
    - will Apple only have a dGPU with the HD 4600?
    - will Apple provide two options? a) HD 5200, b) HD 4600 + dGPU
    - or will Apple be magical and provide a dGPU with the HD 5200 ... (too expensive)
  7. dusk007 macrumors 68040


    Dec 5, 2009
    The important difference is in the memory interface.
    That is why they are usually called dedicated rather than discrete, referring to the dedicated video memory: dGPUs have their own memory, while iGPUs share main memory with the CPU. The 9400M, X4500, and 320M were also discrete chips, but did not have any dedicated memory.

    GPUs, if they are to be fast, need access to lots of data. CPUs usually don't need lots of data, just the little they need, fast. GPU memory like GDDR5 is optimized for bandwidth rather than latency. Main memory is primarily there to feed the CPU, but if the GPU has to share it, that caps how fast the GPU can be: at some point it runs out of work because it runs out of data.
    The fastest of the new Haswell chips improve main memory access by adding another cache level, a 128 MB L4 cache (eDRAM), to push the boundaries of this limitation.

    In the past, iGPUs were slow simply because they were small (not very many transistors). Now the 40-EU integrated GPUs are as big as the smaller dGPUs in transistor count. A dGPU that needs to be fast can just use faster GDDR5 memory, or more channels, like 4 or 8 instead of 2x64-bit.
    This whole thing puts a bit of a limit on how fast an iGPU can be, but that will eventually change when HMC (Hybrid Memory Cube) comes around at some point.
    Sharing memory also saves a lot of power: on some normal mainstream GPUs, about a third of the total power goes just to the memory interface and memory chips. If the interface and memory have to be there for the CPU anyway, you save all of that. Which is why dGPUs cannot keep up in the power-efficiency race in the long run.
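    To put rough numbers on the bandwidth gap dusk007 describes, here's a quick back-of-the-envelope sketch in Python. The clock and bus-width figures are assumptions pulled from typical published specs, used purely for illustration: dual-channel DDR3L-1600 for the shared iGPU path, and a 128-bit GDDR5 setup at 5000 MT/s as a typical GT 750M configuration:

    ```python
    # Peak theoretical memory bandwidth = transfer rate * bus width in bytes.
    def peak_bandwidth_gbs(transfers_per_sec, bus_width_bits):
        """Return peak bandwidth in GB/s (1 GB = 1e9 bytes)."""
        return transfers_per_sec * (bus_width_bits / 8) / 1e9

    # iGPU path: shared dual-channel DDR3L-1600 (2 x 64-bit at 1600 MT/s),
    # which the CPU is also competing for.
    igpu = peak_bandwidth_gbs(1600e6, 2 * 64)   # ~25.6 GB/s

    # dGPU path: dedicated 128-bit GDDR5 at 5000 MT/s (assumed 750M-like config).
    dgpu = peak_bandwidth_gbs(5000e6, 128)      # ~80.0 GB/s

    print(f"shared DDR3: {igpu:.1f} GB/s, dedicated GDDR5: {dgpu:.1f} GB/s")
    ```

    Even before the CPU takes its share, the dedicated card has roughly 3x the peak bandwidth, which is exactly the limit being described above.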
