GT3E on MacBook Pro Retina?

Discussion in 'MacBook Pro' started by Freyqq, May 2, 2013.

  1. Freyqq macrumors 68040

    Joined:
    Dec 13, 2004
    #1
    Do you guys think they'll continue to use a dedicated GPU in the 15" rMBP in the next version, or that they'll try using the new Intel GT3e GPU? The dedicated GPUs are still faster and the best solution at the moment, but I'm nervous that they'll go for the Intel GPU for power-saving reasons. Also, Apple was apparently the one that wanted Intel to come out with the GT3e in the first place...and it is only going to be released on the quad-core processors.
     
  2. B... macrumors 68000

    B...

    Joined:
    Mar 7, 2013
    #2
    Apple has made us expect the following: more screen size = more power. 13": power saving and portability with an iGPU and dual core. 15": real work with a dGPU and quad core. I doubt they would change that.
     
  3. Mr MM macrumors 65816

    Mr MM

    Joined:
    Jun 29, 2011
    #3
    Well, most of us thought the GT3e was going in the 13", not the 15". But given that we still don't have anything on 35W parts, this might not happen.

    There's no reasonable case for putting the GT3e in the 15": it costs more, they'd have to alter the cooling of the rMBP 15, and they'd have to keep another motherboard in stock and production.
     
  4. Freyqq thread starter macrumors 68040

    Joined:
    Dec 13, 2004
    #4
    I think that the GT3e would work perfectly in the 13" rMBP. However, I was under the impression that the GT3e is only being released for Haswell on quad-core processors...hence the worry.
     
  5. B... macrumors 68000

    B...

    Joined:
    Mar 7, 2013
    #5
    http://forums.macrumors.com/showthread.php?t=1557791&highlight=haswell+gpu+chart

    First chart: some quad cores are getting the 4600, so it makes sense that some dual cores would get the 5200. Also, dual core is lower TDP, so it can fit a better GPU, whereas quads would get much hotter.
     
  6. Mr MM macrumors 65816

    Mr MM

    Joined:
    Jun 29, 2011
    #6
    Well, the only ones available (the i7-4850HQ and the i7-4950HQ) are 47W; there is absolutely no peep at all about 35W, and Intel reintroduced the 28W.

    I fear 35W is dead. If it is, I wonder if Apple will go for a 47W TDP part in the rMBP 13, with a somewhat more robust heatpipe.
     
  7. Freyqq, May 2, 2013
    Last edited: May 3, 2013

    Freyqq thread starter macrumors 68040

    Joined:
    Dec 13, 2004
    #7
    Interesting. Yeah, I think the best solution for Apple is putting the Intel 4600 and maybe an Nvidia 750M in the 15" rMBP and the Intel 5100 in the 13" rMBP. Hopefully they go that route.
     
  8. Mr MM macrumors 65816

    Mr MM

    Joined:
    Jun 29, 2011
    #8
    That's the probable route. There is no sense in buying the more expensive GT3e versions, which clock 400MHz lower because of the added eDRAM.

    The dGPU should be either the 750M or the 8870M; they are revamps of the 660M and the 7870M, respectively.
     
  9. jeblis macrumors regular

    Joined:
    Jun 13, 2012
    #9
    Likely a 47W GT3e on the 15" with an Nvidia GT 750M discrete GPU.

    The GT 750M is a rebadged GT 650M with a possibly higher clock.

    Since the GT3e will not outperform last year's GT 650M, I doubt Apple will want to take a step backwards in graphics performance.
     
  10. Mr MM macrumors 65816

    Mr MM

    Joined:
    Jun 29, 2011
    #10
    Why would they drop 400MHz on the core clock for a better GPU they don't need?
     
  11. Freyqq thread starter macrumors 68040

    Joined:
    Dec 13, 2004
    #11
    Yeah, exactly. Anything intensive is going to switch to the 750M anyway.
     
  12. Mr MM macrumors 65816

    Mr MM

    Joined:
    Jun 29, 2011
    #12
    It's kind of like saying, "Hey, I'm going to buy a new CPU that will perform worse than the model it replaces." That doesn't actually fly with people.
     
  13. dusk007 macrumors 68040

    dusk007

    Joined:
    Dec 5, 2009
    #13
    All the information so far says there will be no 5200 on anything but a few quad cores. Everything below gets the 5100 and 5000, which are GT3 without the 'e'.

    Being lower TDP also only means it could fit a better dGPU next to it; but since the CPU and GPU share one power budget, you can obviously fit the best iGPU into the 47W package. The GPU can take quite a lot of that TDP while maintaining a fast enough CPU during gaming.

    That is also one thing I believe current 3DMarks don't accurately account for. A real game tends to have more CPU requirements for AI and such. The 3DMarks, being mostly just a simulated scene with some physics, would allow more GPU Turbo than a real game might. It depends on the game and the max TDP those 40 EUs need at 1300MHz, but still. In games like Starcraft a dGPU will do much better because it doesn't eat into the CPU's TDP as much.
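
    [Editor's note] The shared-budget idea above can be put in rough numbers. A toy sketch, assuming an illustrative 47W package limit and made-up iGPU draws (none of these are Intel specs):

```python
# Toy model of a shared package TDP: the CPU cores and the iGPU draw from
# one 47 W budget, so heavy GPU load leaves less turbo headroom for the CPU.
# All wattages are illustrative assumptions, not measured Intel figures.

PACKAGE_TDP_W = 47.0

def cpu_headroom(gpu_draw_w):
    """Watts left for the CPU cores after the iGPU takes its share."""
    return max(PACKAGE_TDP_W - gpu_draw_w, 0.0)

for gpu_w in (5.0, 15.0, 30.0):
    print(f"iGPU at {gpu_w:4.1f} W -> CPU budget {cpu_headroom(gpu_w):.1f} W")
```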
     
  14. Mr MM macrumors 65816

    Mr MM

    Joined:
    Jun 29, 2011
    #14
    While being worried about turbo is fine, the problem is that the current CPUs can already keep up a good level of turbo with the HD4000 on.

    I know there is a large difference in EU count, and to top that off there's eDRAM in the package to add heat, but 400MHz is quite a large difference.

    Basically my main concern isn't the clock drop or how the turbo will work; it's the amount of eDRAM, and whether we will see fluctuations in games when they start hitting the awfully slow DDR3.

    Still, if what has been released is any indication, 35W is dead.
     
  15. dusk007 macrumors 68040

    dusk007

    Joined:
    Dec 5, 2009
    #15
    I meant to say that, given how 3DMark assesses performance, a 40 EU GPU might not be quite as good as its 3DMark scores suggest compared to a dedicated GPU, which lets the CPU still run as fast as it wants.

    I doubt fluctuations will happen when the DDR3 is in use. The driver will most likely be written to specifically account for the eDRAM, and in barely any game will it run only off the 128MB. Rather, it will ALWAYS keep the high-priority stuff in eDRAM and textures and all the big stuff in main memory.
    In all cases this, like every cache, will consistently lower the bandwidth requirements on the DDR3 memory. Graphics computing isn't terribly latency-dependent, so it really boils down to bandwidth. The extra cache simply increases the effective bandwidth.
    It will almost always use both the 128MB and some big, probably dynamic, chunk of system memory. It can also memcopy really fast between the two, much faster than over PCIe to a dGPU.

    I think heat is also generally less of a concern. When the hot spots are more spread out, it tends to be easier to cool and move the heat away. The total TDP won't increase and moving heat away gets easier; that is a win-win.

    As for the 400MHz difference: there is also the 20 EU vs 40 EU difference, so that drop in base clock seems to be more because of that than the eDRAM. It's also quite little in my opinion. When the GPU does not have much load, the CPU will still turbo up to almost the same level. I don't see much performance loss there.
    The base clock is quite high. I doubt it will be the same for the 28W U CPUs.

    There aren't really any power savings, though. With switchable graphics one can always disable the dGPU, and then it is the same. Apple only markets battery life numbers with the dGPU switched off anyway. If they go for Intel only, it is because the performance difference isn't worth adding an extra chip, its heat, the space for the GPU + its GDDR5 memory, and the cost. Definitely not for power savings (though there would be some, but that is only because of the idiotic way automatic switching works in OS X).
    If the dGPU isn't at least 40% faster, it isn't worth it IMHO.
    It is unlikely, though, that they will remove it. They might offer one model without the dGPU, as they used to, but with a 5200 instead. I would buy that one if the 5200 actually reaches 2000 points in 3DMark11.
    Less hassle with the crappy switching, and enough speed. Also, Chrome, iPhoto, or a network monitor won't keep the IGP from kicking back in after the machine had to switch to the dGPU for an external screen. Even forcing the IGP doesn't work in OS X when you need an ext. screen. Also less heat while using an external display. I honestly don't see any reason I would want the dedicated GPU unless it is at least twice as fast. Double performance used to be the minimum difference in the past.
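
    [Editor's note] The caching point above can be sketched with a toy hit-rate model. The 40 GB/s demand figure and the hit rates are made-up assumptions, just to show how eDRAM hits reduce DDR3 traffic:

```python
# Toy model: requests served from the 128 MB eDRAM never touch DDR3,
# so DDR3 traffic is only the miss fraction of total demand.
# The 40 GB/s demand and the hit rates are illustrative assumptions.

def ddr3_traffic(total_gb_s, edram_hit_rate):
    """Bandwidth the DDR3 must supply after the eDRAM absorbs its hits."""
    return total_gb_s * (1.0 - edram_hit_rate)

demand = 40.0  # GB/s the GPU would like to move (assumed)
for hit in (0.0, 0.5, 0.8):
    print(f"hit rate {hit:.0%}: DDR3 supplies {ddr3_traffic(demand, hit):.1f} GB/s")
```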
     
  16. thunng8 macrumors 6502a

    Joined:
    Feb 8, 2006
    #16
    Maybe he is referring to the CPU. The top-end CPU with Iris Pro runs at 2.4GHz, while the top-end CPU with the HD4600 runs at 2.8GHz. Both have a 47W TDP, i.e. they had to drop the CPU speed because of the more power-hungry Iris Pro (GT3e) graphics.

    ----------

    Iris Pro graphics is quite good. The 750M might be approx 40% faster, which just meets your criterion.

    The only problem is CPU speed, as the top-end CPU with Iris Pro graphics tops out at 2.4GHz.

    http://www.cpu-world.com/CPUs/Core_i7/Intel-Core i7-4950HQ Mobile processor.html
     
  17. dusk007 macrumors 68040

    dusk007

    Joined:
    Dec 5, 2009
    #17
    That seems to be about it. That is why I expect the dGPU to stay, at least in the high-end model. Dropping it would still be enough of a difference, and a step down from the 650M; enough at least to annoy marketing.
    Personally, for me 40% wouldn't cut it, considering the annoying multi-GPU switching behavior of OS X.
    It's supposed to be 1800 points vs. about 2550-2700, which is at least 40%.
    Still a lot less than what the difference used to be. Usually you could expect 80-100% extra performance almost always where there was a dedicated GPU to switch on, across all the different notebooks (not just Apple's) that exist.

    The 750M also seems like a really poor upgrade from the 650M that already runs at 900MHz in the rMBP. With Broadwell, Intel is aiming for another 40% increase. If the next 20nm Maxwell doesn't show a big increase in performance, I see the death of mainstream dGPUs coming. Only in gaming notebooks will the 45W+ GPUs really be worth it, and obviously in desktops.
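
    [Editor's note] The "at least 40%" claim can be checked against the scores quoted above (1800 for the 5200, ~2550-2700 for the 750M). These are the thread's estimated 3DMark11 numbers, not measured benchmarks:

```python
# How much faster the quoted dGPU scores are than the quoted 5200 score.
# Scores are the thread's 3DMark11 estimates, not measured results.

def pct_faster(dgpu_score, igp_score):
    """Percentage advantage of the dGPU score over the IGP score."""
    return (dgpu_score / igp_score - 1.0) * 100.0

igp = 1800
for dgpu in (2550, 2700):
    print(f"{dgpu} vs {igp}: ~{pct_faster(dgpu, igp):.0f}% faster")
```

    So roughly 42-50% faster, well short of the 80-100% gap that used to separate dGPUs from IGPs.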
     
  18. Freyqq thread starter macrumors 68040

    Joined:
    Dec 13, 2004
    #18
    Why would Apple replace the 15" rMBP with a slower version? If it had the Intel 5200, it would have a slower CPU and GPU than the current rMBP, in exchange for maybe an extra 30 minutes of battery life. Not a terribly wise thing to do. However, I'd have no objection if they made a $1799 retina MacBook Pro with this chip in it as a cheaper alternative.
     
  19. dusk007 macrumors 68040

    dusk007

    Joined:
    Dec 5, 2009
    #19
    The CPU wouldn't really be slower. The base clock is lower because it has to be. As usual in non-gaming workloads, the GPU is not at full load, so at the end of the day both CPUs will probably end up running at about the same speed in most situations, thanks to Turbo.

    I think, due to the bad auto-switching, the actual battery life increase for real scenarios would be much more than 30 minutes. The IGP can instantly power-gate all the resources it doesn't use while applications are running, based purely on load states. The way Apple's auto-switching driver works, the dGPU is turned on at the launch of an app. I.e., iPhoto may use some GPU features for its slideshows (overkill, but okay); the dGPU is turned on at the launch of iPhoto even if all you do is import a bunch of pictures and sort through them, nothing that requires any GPU. If you do not close iPhoto, the dGPU stays active even if all you do for the rest of the day is type numbers into an Excel sheet.

    I would buy the 5200-only version if I were in the market for one. I think I will replace my notebook with a Broadwell, though: SoC quad core + 40% faster GPU, and potentially big, long-lasting, powerful hybrid notebooks. It could enable new designs, while the current Haswell will be essentially the same.

    I think Apple should kill the old MBP.
    Release a 15" Air. Let the Airs serve as the low-power thin-and-lights.
    Give the 13" rMBP a quad core + 5200 chip to make it a real Pro powerhouse.
    Leave the 15" as is and hope that next year's 20nm GPUs are good and put a bigger margin than 40% between dGPU and IGP.
     
  20. Mr MM macrumors 65816

    Mr MM

    Joined:
    Jun 29, 2011
    #20
    The 4850HQ is slower than the 4800MQ; the turbo clocks are slower, and the core clocks are slower.

    You measure CPU performance at peak, not at low load. It's exactly like saying that my VW Beetle performs the same as a Ferrari 275 GTS at 20km/h, and at that speed they would.
     
  21. dusk007 macrumors 68040

    dusk007

    Joined:
    Dec 5, 2009
    #21
    The Turbo clocks are almost the same. The TDP they have to work with is the same, as are the temps they need to keep. One has 40 EUs, the other 20. Half the array can be power-gated and completely shut off, and the GT3 can also clock down to about 200MHz.
    If the GPU isn't used for general-purpose processing or some constant serious load, the CPU has the same amount of TDP to work with. It cannot sustain the same minimum clock rate (base clock) when the 40 EU array is under full load, obviously, but in most scenarios you never need both. In the scenarios where you do need both, games and Quick Sync, the 2.4GHz base clock is more than sufficient. You don't really lose anything.
    A 4850HQ should on average turbo more often and higher than a 4800MQ when not in games. In games it doesn't really matter.

    You can measure CPU performance whenever you want.
    That is exactly what I am doing. The only thing that matters is what you get out of it at the end of the day. The base clock is just the clock it is guaranteed not to go under, if heat is not an issue, no matter what load the CPU & GPU are under. Since the GPU can go higher, the base clock has to be lower. The real clock you end up with might in some cases be higher, because the faster GPU array is more efficient.
    Ivy Bridge CPUs with a deactivated IGP almost constantly run at 3GHz+ in a decently cooled notebook. At the top, the Turbo clock speed difference is marginal.

    It is 200MHz in Turbo, barely 5%. In exchange you get double the GPGPU prowess for OpenCL and such, no hassle with switching, and instant power management. The base clock difference is 500MHz, but that doesn't really matter in practice.
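
    [Editor's note] The clock gaps above, expressed as percentages. The ~3.7GHz turbo and ~2.7GHz base figures are assumptions based on the parts discussed in this thread, not official specs:

```python
# Turbo and base clock deltas as percentages of assumed clocks
# (~3.7 GHz turbo, ~2.7 GHz base for the HD4600 part; illustrative only).

def pct_drop(delta_mhz, clock_mhz):
    """Clock delta as a percentage of the reference clock."""
    return delta_mhz / clock_mhz * 100.0

print(f"Turbo: 200 MHz off ~3700 MHz = {pct_drop(200, 3700):.1f}%")
print(f"Base:  500 MHz off ~2700 MHz = {pct_drop(500, 2700):.1f}%")
```

    Which matches the "barely 5%" turbo figure; the base clock gap is bigger on paper but, as argued above, rarely matters in practice.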
     
  22. Stetrain, May 5, 2013
    Last edited: May 5, 2013

    Stetrain macrumors 68040

    Joined:
    Feb 6, 2009
    #22
    I wonder if Apple could squeeze one of the 47W quad cores with GT3e into the 13".

    The 13" has almost the same heatpipe and dual fan system as the 15", and the reviews I saw that tested heat and fan speed under load seem to indicate that it runs a lot cooler than the 15":

    http://www.anandtech.com/show/6409/13inch-retina-macbook-pro-review/12

    There might be some extra thermal headroom there to work with, and it would still have a much lower thermal load than the 15" because it won't have the dGPU.

    It would definitely help distinguish the 13" rMBP from a future 13" rMBA.
     
  23. Mr MM macrumors 65816

    Mr MM

    Joined:
    Jun 29, 2011
    #23
    One thing that I think you don't get is that the 4800MQ/4850HQ are the replacements for the 3840QM; the former is going to be faster, the latter slower.

    Aside from that, OpenCL and OpenGL code would need to be written specifically to use the Intel iGPU, because otherwise the dGPU would end up handling it.

    And yes, in a notebook with very good cooling it can use the turbo constantly and properly. I hardly see the point of this argument.

    Just a question: why would the 4850HQ turbo higher than the 4800MQ?
     
  24. Freyqq thread starter macrumors 68040

    Joined:
    Dec 13, 2004
    #24
    I was thinking about that too. Seems possible.
     
  25. Stetrain macrumors 68040

    Joined:
    Feb 6, 2009
    #25
    It would be pretty awesome if they could do that. Maybe only on the high end model, as it would probably add a bit of cost.
     
