Non-Game 750M Performance?

Discussion in 'MacBook Pro' started by kfmfe04, Aug 6, 2014.

  1. kfmfe04 macrumors member

    Jun 19, 2010
    I have a 2009 13" MBP and am considering an upgrade to a 2014 15" rMBP, but I'm on the fence about going for the dGPU version or not. 90% of the time, I'm in a browser or I'm programming (the i7 is going to be sweet) in C++, sometimes with an external monitor. The other 10%, I watch movies.

    I'm leaning towards the $1,999 version + $100 2.5GHz i7 upgrade for its simplicity/no chance of dGPU failure (maybe upgrading the SSD myself in 3-4 years), but I'm open to the $2,499 version for the larger SSD/dGPU, if there are enough non-game benefits.

    Does anyone have either version who can compare the 2014 15" 750M vs on-board graphics performance in these situations?

    1. Browser scrolling
    2. Movies (VLC)
    3. 2nd Monitor (4K)

    Maybe 3. is the only one with a perceptible difference?

  2. johnnnw macrumors 65816


    Feb 7, 2013
    Not needed for any of those, really. Maybe for 4K on a few monitors, if anything.

    Easily capable of VLC movies and scrolling. Hell, even the integrated GPU does that fine.
  3. leman macrumors G3

    Oct 14, 2008
    You won't get any benefits from the dGPU. The Intel is better at decoding movies and everything else is not demanding enough to make a difference. That said, you also don't need the upgraded CPU.
  4. reflex99 macrumors newbie

    Aug 5, 2014

    if I did maths correctly:
    $100 is ~5% of the total price (pre tax)

    0.3GHz is a ~13% increase in processing speed (turbo clock excluded). He mentions coding, so if he has large compiles or similar tasks that run the CPU at 100% for hours, he'd see an almost 1:1 decrease in time. 5% more money for 13% more speed is not a terrible deal.

    For anyone that doesn't do tasks like that though (read: most people), it is a total waste of money.
  5. yjchua95 macrumors 604

    Apr 23, 2011
    GVA, KUL, MEL (current), ZQN
    The 750M will really scream in CUDA-accelerated software.
  6. leman macrumors G3

    Oct 14, 2008
    I think these considerations are more relevant for a dedicated build server. I do not have any hard data on compilation performance of different CPUs, but I would think that memory subsystem/disk speed plays a much more important role here. Not to mention that if you have compiles that take many hours in the first place, then you are probably doing something wrong ;)

    I fully agree with you that $100 for a 13% (it's more like 8%, btw) increase in speed is a good price, but I doubt that this increase will show up anywhere outside heavy-duty numeric computation...
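    A quick sanity check of the percentages being tossed around, assuming a $1,999 base price, the $100 upgrade, 2.2GHz vs 2.5GHz base clocks, and 3.4GHz vs 3.7GHz turbo clocks (the turbo figures are my assumption for these Haswell parts, not stated in the thread):

    ```shell
    # price premium and clock uplift for the $100 CPU upgrade
    awk 'BEGIN {
      printf "price premium:      %.1f%%\n", 100 * 100 / 1999   # ~5% (reflex99)
      printf "base-clock uplift:  %.1f%%\n", 100 * 0.3 / 2.2    # ~13.6% (reflex99)
      printf "turbo-clock uplift: %.1f%%\n", 100 * 0.3 / 3.4    # ~8.8% (leman's ~8%)
    }'
    ```

    So both figures in the thread are right; they just divide by different clocks.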
  7. 5to1 macrumors 6502

    Mar 9, 2008
    For no. 3, I believe both the iGPU and dGPU SKUs of the 15" support 4K/60. I believe the 13" only supports 4K/30, although I'm not sure whether that's a hardware or software limitation.

    For the other two either GPU will be fine and in some circumstances (depending on the software) the iGPU may perform better.

    What kind of coding do you do? We have a group 15" i7 machine and I have a 13" 2.6GHz i5 (I need the portability). We do both embedded design and mobile device software.

    For embedded design work, the difference in compile time is irrelevant, as actually blowing an IC takes so much time in comparison that a few seconds saved compiling make little or no difference. For mobile device software, the i7 is faster, but we're talking about a few seconds. That's only really noticeable when you're trying to resolve a bug and doing multiple compiles in a short time frame. For general coding, compiles are so few and far between (in the grand scheme of things) that the difference is negligible. And they are usually a good time to take a break and stretch one's legs.

    If we were compiling software that took a long time to compile, we'd be better off dropping the source onto one of our hosted servers to compile. Even if you don't have a hosted machine, it would likely be better to invest the additional money in a desktop at home and set up remote access.
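    A minimal sketch of that remote-build workflow, assuming a reachable build host; the host name and project paths are placeholders, not from the thread:

    ```shell
    # sync sources to a build host, compile there, pull the artifacts back
    remote_build() {
      host="$1"                                          # e.g. a hosted server or home desktop
      rsync -az --delete ./src/ "$host:~/project/src/"   # push local sources
      ssh "$host" 'cd ~/project && make -j8'             # parallel build on the remote CPU
      rsync -az "$host:~/project/build/" ./build/        # retrieve build artifacts
    }
    ```

    Usage would be `remote_build buildhost`. Whether this beats a faster local CPU depends entirely on connectivity, which is exactly the OP's constraint.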

    It's hard to say what's best for someone else, as only you know your circumstances and usage scenarios intimately. But personally I'd rather invest in more storage than marginally faster silicon. Finding you haven't got a doc, spec, stock source code, etc. on your machine is far more of a hindrance to our work than a few extra seconds for a compile.

    Having said all that, our machines are business tools, so the relatively "small" difference in cost is insignificant in the grand scheme of things. We just get what we need; for example, I got a 512GB SSD (double what I previously had filled), but given I was never bound by the CPU/GPU, I knew the i5 would give me plenty of overhead. All that matters to us is that we're not constrained by the machine over the ~3 years it's our primary machine.
  8. kfmfe04, Aug 7, 2014
    Last edited: Aug 7, 2014

    kfmfe04 thread starter macrumors member

    Jun 19, 2010
    I currently do numerical simulations in a Linux VirtualBox VM running on a Win8 desktop i7-3770K (Geekbench 16-17k? but I also see 12k samples) that I tunnel into via SSH/VPN from free Wi-Fi spots. So I cannot work just anywhere; I must find free/cheap Wi-Fi.

    Being able to compile and run locally, with a Geekbench score of 13104 for the base 2.2GHz or 14126 for the $100 2.5GHz upgrade (which gets amortized over 5+ years), will be worth it for when I don't have web access, either way. The way I see it, for anywhere builds and runs, I'd rather pay for hardware for local builds than for $40/month worth of cellular access.

    For GNU C++ compiling, the more threads a parallel build (make -j8) can use, and the more memory available (I wish I could ramp RAM to 32GB, though mostly for the simulations), the better, so the 15" beats out the 13". Unfortunately, C++ headers are from the stone age and the syntax itself leads to baroque compiler design, so there is no escaping the build-test-debug cycle with tricks like the incremental compile-as-you-type that you see in some JVM-based languages. Also, C++ template metaprogramming (TMP) results in faster runtimes but more painful compile times. But I digress...

    CUDA would be nice, but so far, I haven't found all the extra efforts in coding to be worth the trouble for the computations that I am running.

    5to1's comment about more storage is tempting, but I am expecting my needs to be such that I can wait a few years for that upgrade (in 2-3 years, maybe go for 1TB SSD upgrade).

    It seems like 4K video could be the only possible issue, then... which actually sounds like a non-issue. I am now heavily leaning towards the base or base@2.5GHz model.
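    For what it's worth, the Geekbench scores quoted above imply a smaller uplift than the base-clock ratio suggests, and track the turbo-clock ratio more closely (the 3.4GHz and 3.7GHz turbo figures are my assumption for these parts, not stated in the thread):

    ```shell
    # uplift implied by the quoted Geekbench scores (13104 base vs 14126 upgraded)
    awk 'BEGIN {
      printf "geekbench uplift:   %.1f%%\n", 100 * (14126 - 13104) / 13104  # ~7.8%
      printf "turbo-clock uplift: %.1f%%\n", 100 * (3.7 - 3.4) / 3.4        # ~8.8%
    }'
    ```

    That suggests sustained, multi-core workloads like parallel builds spend most of their time at turbo rather than base clocks.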
  9. x3n0n1c macrumors regular

    Jul 9, 2014
    For what it is worth, UI performance is noticeably improved when I switch to the 750m.
  10. 5to1, Aug 7, 2014
    Last edited: Aug 7, 2014

    5to1 macrumors 6502

    Mar 9, 2008
    Exactly the way I look at my laptop purchases, although I don't need the silicon upgrade and I amortise over 3 years. Plus, since my company is VAT registered, I get 1/6th of the extra outlay back anyway (VAT = Value Added Tax, for non-UK members), plus the 20% that would have gone to corporation tax anyway.
  11. kfmfe04 thread starter macrumors member

    Jun 19, 2010
    Could you elaborate on what the improvements are?

    Is this in the mid-2014 or late-2013 models?
  12. x3n0n1c macrumors regular

    Jul 9, 2014
    My device info is in my signature.

    When webpages or applications start to get a bit choppy on the integrated Iris Pro, performance can usually be regained by switching to the dedicated 750M. This is especially true when running at higher resolutions such as HiDPI 1680x1050 and 1920x1200.

    Something to keep in mind about the Iris Pro: while it does have potentially superior raw computational ability to the 750M, its 3D performance is about half, and that is what the UI utilizes.

    Anyone with a dGPU will tell you that switching Spaces or invoking Mission Control can be much smoother overall on the dGPU than on the integrated graphics.
  13. yjchua95 macrumors 604

    Apr 23, 2011
    GVA, KUL, MEL (current), ZQN
    Hardware limitation. According to Intel, the U-series processors can only support 4K at 30Hz.
  14. Godavid macrumors newbie

    Aug 8, 2014
    What about the battery difference when using the Intel Iris Pro vs the 750M on an external monitor?

    Since the 750M is forced whenever an external monitor is plugged in, I'm curious to know if you'd get more battery juice using the Intel Iris Pro.

    Obviously it depends on what you personally do, but considering they have about a 10-20% difference in processing speed... could the additional battery life be another huge factor?
  15. kfmfe04 thread starter macrumors member

    Jun 19, 2010
    Actually, if I am plugged into an external monitor, I would tend to have the rMBP plugged into the wall via its power brick.

    But I think even while running in pure mobile-mode, battery drain, added heat, and the potential for component failure are all possible concerns regarding any mobile card.

    I am heavily leaning towards Iris Pro now, but I will head down to a store and try to compare the machines side-by-side to determine if the difference is significant for day-to-day non-gaming use.
  16. yjchua95 macrumors 604

    Apr 23, 2011
    GVA, KUL, MEL (current), ZQN
    Why not just get the dGPU variant and use gfxcardstatus to disable the 750M when you don't need it? That's what I do.
  17. Hustler1337 macrumors 68000


    Dec 23, 2010
    London, UK
    Hi kfmfe04,

    TLDToday has a fantastic video comparing the 750M vs. the Iris Pro (and the 650M). He goes through quite a lot of benchmarking to test performance in various areas, including battery life, to give you an informed comparison between the three.

    Hopefully that should come in handy. Personally, seeing as you're not going to be gaming, editing videos, or handling GPU-intensive tasks, the Iris Pro should be sufficient for your needs. :)

    Link to video:
  18. Godavid macrumors newbie

    Aug 8, 2014
    I don't think he tested the battery difference. And I'm not sure how the GHz difference came into play with his tests.
