They already showed that intention when they tried to push Intel to produce faster integrated GPUs.

I'm as unhappy as the next guy about this move, but it seems inevitable.

Otherwise they wouldn't have let that benchmark result be found.

We don't even know if there was a dedicated GPU in that machine or if it was just a test unit. For all we know it is a 15" MacBook Air, a classic 15" MacBook Pro, or the low-end Retina MacBook Pro.

If they use that chip, they'll take the power consumption from, what, 85-90 watts down to 50-55 for the whole computer? If they do that, this new machine won't need a lot of its heatsinks or cooling. People were convinced just a few weeks ago that the new Retina MacBook Pro would receive no redesign, just spec bumps, but removing the GPU is a drastic redesign, and it may even end up thinner or lighter, which would change the whole look and feel of the notebook.

I'm just not buying that they are dropping the dedicated GPU at this time; it doesn't make sense in my opinion.
 
I'm not interested in the performance of Iris Pro. I want a dGPU in the MBP with Retina Display. However...


Ditching the dGPU leaves the possibility of overclocking the Iris Pro GPU. Anandtech showed how giving it just 8W more TDP yields more performance, and another 15W on top of that gives even more.


From the point of view of battery life, people forget that even a 2.8 GHz processor with HD 4600 would give that laptop at least 10 hours of battery life. The dGPU has nothing to do with that if it stays switched off the whole time the machine runs on battery.

My dream is an rMBP with a 2.8 GHz processor and an Nvidia GeForce GT 750M overclocked by Apple to a 1000 MHz core and 1500 MHz (6000 MHz effective) memory.
 

Would Intel let them overclock it? That is a good question. If the Iris Pro, when overclocked, is 5-10% faster than last year's 650M in normal GPU loads (not just OpenCL), I'm okay with it.
 
slight redesign with a slate color similar to the iPhone 5

Iris 5100

quad-core i7

256GB PCIe SSD

$1,499.99 for the 13" non-Retina



sold
 
Would Intel let them overclock it? That is a good question. If the Iris Pro, when overclocked, is 5-10% faster than last year's 650M in normal GPU loads (not just OpenCL), I'm okay with it.

Overclocking does nothing about the bandwidth starvation, so it would still suck at higher resolutions, which is exactly what you would want to run on the rMBP.

And that aside, beating the 650M from the last-gen rMBP is a pretty tall order, since Apple overclocked that chip to perform even better than a 660M. It sits right between the 660M and the 670M.
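
For context on the bandwidth-starvation point, here is a rough back-of-the-envelope sketch. The bus widths and transfer rates are assumptions based on the commonly quoted specs of the era (128-bit GDDR5 at roughly 5 GT/s for the rMBP's 650M, dual-channel DDR3-1600 plus the 128MB eDRAM for Iris Pro), not measured figures:

```c
/* Rough peak memory bandwidth comparison (assumed figures, not measurements).
 * Peak GB/s = (bus width in bits / 8) * transfer rate in GT/s.
 */
#include <stdio.h>

int main(void) {
    double gt650m_gddr5 = (128.0 / 8.0) * 5.0;    /* ~80 GB/s, dedicated to the GPU  */
    double ddr3_1600    = 2 * (64.0 / 8.0) * 1.6; /* ~25.6 GB/s, shared with the CPU */
    double edram        = 50.0;                   /* ~50 GB/s per direction, but only 128MB */

    printf("GT 650M GDDR5:      ~%.1f GB/s\n", gt650m_gddr5);
    printf("Iris Pro DDR3-1600: ~%.1f GB/s (shared)\n", ddr3_1600);
    printf("Crystalwell eDRAM:  ~%.1f GB/s (128MB cache only)\n", edram);
    return 0;
}
```

Raising the GPU clock does not change any of these numbers, which is the point being made above.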
 
Without regard to the current discussion, and to answer the OP's question: no, short and simple. Otherwise just call it a MacBook and be done with it.

What nonsense; it's not like they put GMA 950 graphics in this thing...
An Iris Pro is actually faster at OpenCL than a 650M, so for a MacBook Pro it actually makes sense to go with it. It's not like it's called a MacBook Game or something. For gaming it is indeed bandwidth starved, which is a shame, I agree. But I'd still rather have the long battery life and the quieter, cooler design with fewer components than 30% more frames in a game I play once in a while.

Laptops are moving toward a tablet/phone-style design: a small, non-upgradable PCB and a chassis mostly filled with battery.

Also
i7-4950HQ / 6MB Cache / 2.4GHz Base Clock / 3.8GHz Turbo / 47 Watts

Now compare that to the CPU that doesn't have Iris Pro

i7-4900MQ / 8MB Cache / 2.8GHz Base Clock / 3.8GHz Turbo / 47 Watts
That is not accurate, as the 4950HQ also has 128MB of L4 cache, which can be used by the CPU, not just the GPU.
 
Well, people seem convinced that a 2.4GHz CPU with a 47-watt TDP is higher end than a 2.8GHz CPU just because it has a higher model number, so I guess marketing works.

i7-4950HQ / 6MB Cache / 2.4GHz Base Clock / 3.8GHz Turbo / 47 Watts

Now compare that to the CPU that doesn't have Iris Pro

i7-4900MQ / 8MB Cache / 2.8GHz Base Clock / 3.8GHz Turbo / 47 Watts

The i7-4950HQ turbo clock is 3.6GHz, not 3.8GHz, so even more differences.

http://ark.intel.com/products/76085/Intel-Core-i7-4950HQ-Processor-6M-Cache-up-to-3_60-GHz

----------

Would Intel let them overclock it? That is a good question. If the Iris Pro, when overclocked, is 5-10% faster than last year's 650M in normal GPU loads (not just OpenCL), I'm okay with it.

The 650M in the rMBP is still ~30% faster in games than the overclocked Iris Pro, according to the Anandtech benchmarks done so far.
 
If anyone is interested, I did some analysis of the gaming benchmarks from AnandTech's results:

https://forums.macrumors.com/posts/17348384/

Not quite sure where you get "almost on par with the GT 650M" from. Anandtech's tests at low resolution (1366x768) and medium resolution (1600x900) show the 650M being this much faster than Iris Pro in games (low/medium):

Metro Last Light: 12.3%/30.7%
BioShock Infinite: 37%/47%
Sleeping Dogs: 8%/47%
Tomb Raider: 22.5%/40%
Battlefield 3: 27%/60%
Crysis 3: 37%/42%
Crysis Warhead: -11%/7%
Grid 2: 26%/62%

Average: 20%/42%

So at the higher 1600x900 resolution, the 650M is on average 42% faster than the Iris Pro. If we go integrated only, it will be a large step back in performance.
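
As a quick sanity check on those averages, here is a minimal sketch that simply averages the per-game deltas listed above; the figures are copied from the post and nothing else is assumed:

```c
/* Average the quoted 650M-vs-Iris-Pro deltas (low res / medium res). */
#include <stdio.h>

int main(void) {
    /* Metro LL, BioShock, Sleeping Dogs, Tomb Raider, BF3, Crysis 3, Warhead, GRID 2 */
    double low[] = {12.3, 37.0,  8.0, 22.5, 27.0, 37.0, -11.0, 26.0};
    double med[] = {30.7, 47.0, 47.0, 40.0, 60.0, 42.0,   7.0, 62.0};
    int n = sizeof low / sizeof low[0];

    double sum_low = 0, sum_med = 0;
    for (int i = 0; i < n; i++) {
        sum_low += low[i];
        sum_med += med[i];
    }

    /* Prints roughly 20% / 42%, matching the averages quoted above. */
    printf("average 650M advantage: %.0f%% (low) / %.0f%% (medium)\n",
           sum_low / n, sum_med / n);
    return 0;
}
```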
 

I really hope Apple doesn't go integrated only. Surely they won't. It's too much of a downgrade; they should wait until Broadwell.
 

Thank you for this post.
 
No. If it didn't have it, I'd probably get a top-spec old model....

I'm not buying it for gaming, but I'd still like to be able to game with reasonable performance.
 
Grid 2: 26%/62%

Iris Pro is actually faster in ultra settings, however:
[AnandTech GRID 2 ultra-quality benchmark chart]


Seriously, why are you comparing the GT 650M, which has had a year of graphics driver maturity, to a product that is not yet on the market?

Iris Pro that barely... BARELY meets the 650M of last year in OpenCL

Did you actually check the AnandTech review? They are basically neck and neck in only one test; everywhere else Iris just wipes the floor with Kepler:

http://www.anandtech.com/show/6993/intel-iris-pro-5200-graphics-review-core-i74950hq-tested/17

I mean, wasn't it common knowledge that Kepler, first, sucks at compute (compared to GCN, for example) and, second, sucks at OpenCL (understandably, since it's optimized for CUDA)?
You mentioned that you work with OpenCL. Are the OS X Kepler drivers that good?

These Geekbench scores show that the 2.4GHz chip is comparable to 2012's 2.7GHz Ivy Bridge chip.
My point is, I didn't wait 14+ months to get a computer that performs the same as last year's.

Nope, you didn't. How about AVX2 (double the floating-point throughput) and new instructions that help with vectorization? And double the L1/L2 cache bandwidth to feed it?
Improving the performance of the superscalar cores on the same 22nm process as last year's Ivy Bridge is pretty much a dead end. Instead of a marginal increase there, Intel opted for a dramatic increase in FP execution.
Granted, it won't show up in Geekbench; you actually have to sit down and write code for it. But let's stop with the nonsense that Haswell is an "incremental" update. It's not.
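
To make the AVX2/FMA point concrete, here is a minimal sketch (not anything from Apple or Intel). With FMA, Haswell can fuse the multiply and add in a loop like the one below into single instructions and retire two 256-bit FMAs per cycle, roughly doubling peak floating-point throughput over Ivy Bridge, but only for code that is actually compiled to use it:

```c
/* Compile with: cc -O3 -mavx2 -mfma fma_demo.c
 * On Haswell the inner loop can be vectorized into vfmadd* instructions;
 * Ivy Bridge has no FMA, so the same loop needs separate multiply and add.
 */
#include <stdio.h>

#define N 4096

int main(void) {
    static float a[N], b[N], c[N];

    for (int i = 0; i < N; i++) { a[i] = 1.0f; b[i] = 2.0f; c[i] = 3.0f; }

    /* c = a * b + c, the classic fused multiply-add pattern */
    for (int i = 0; i < N; i++)
        c[i] = a[i] * b[i] + c[i];

    printf("c[0] = %f\n", c[0]); /* 1*2 + 3 = 5 */
    return 0;
}
```

An existing binary, like the Geekbench build of the day, gains nothing from this unless it is rebuilt or hand-tuned for the new instructions, which is exactly the point above.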
 
I would prefer the new rMBP to have a dedicated graphics processor like the current generation; I think at least the new 15" model will have one too.
 
Bottom line:

So we can have a cheap and light 15" Retina with only an iGPU! And of course an expensive one for the gaming maniacs... This actually looks, feels and sounds AWESOME :D
 
Granted, Intel's Gen7+ GPU architecture has some deficiencies, but really, all that talk about "integrated vs. dedicated" makes it sound like they put in something like the GMA series; do you remember that ******?

Iris has more raw processing power than last year's GT 650M (832 GFLOPS vs. a little more than 700) and a massive, low-latency 128MB L4 cache, a big win for some compute workloads. Looking at AnandTech's OpenCL benchmarks, I can see how Apple will advertise something along the lines of "up to 1.5-1.9x faster... blah blah blah in FCPX".

Bottom line: it doesn't get more "dedicated" than that (at least not in the rMBP form factor and its somewhat limited power/heat envelope).
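
For what it's worth, the 832 GFLOPS figure falls out of a simple peak-throughput calculation. The per-clock FLOP counts and clock speeds below are the commonly cited ones for these parts and should be treated as assumptions, and peak FLOPS of course says nothing about memory bandwidth:

```c
/* Peak single-precision GFLOPS, back of the envelope (assumed clocks). */
#include <stdio.h>

int main(void) {
    /* Iris Pro 5200: 40 EUs x 16 FLOPs per clock (2x SIMD-4, FMA) x ~1.3 GHz */
    double iris_pro = 40 * 16 * 1.3;

    /* GT 650M (rMBP clocks): 384 CUDA cores x 2 FLOPs per clock (FMA) x ~0.9 GHz */
    double gt650m = 384 * 2 * 0.9;

    printf("Iris Pro 5200: %.0f GFLOPS\n", iris_pro); /* 832 */
    printf("GT 650M:       %.0f GFLOPS\n", gt650m);   /* ~690, a bit over 700 at boost clocks */
    return 0;
}
```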
 
The thing is, if they were going for Haswell with HD 4600 graphics and, say, a 750M, they would have announced it at WWDC. The frame of the laptop would have been more or less the same, the Nvidia drivers ready to go, and the components all available. There was absolutely no reason not to launch it then and there.

My guess is they are modifying the frame to pack in a lot more battery, and they are putting a lot of effort into writing great drivers for the Crystalwell cache that comes with the HD 5200.

I understand that those who game a lot on their MacBook Pros don't like it. I agree that for gaming it's just too memory-bandwidth constrained and not a great option. But I'm afraid they are going to go that route, as all signs point in that direction. If you're lucky they might not phase out the classic MacBook Pro this year and fit a 750M in that, since it doesn't make sense to rework the inside of that laptop. But I'd say the chance of that happening is slim to none.

I would personally be very tempted by a 28W dual-core version of the 15-inch Pro for, say, $1,599. But realistically that is not going to happen either.
 

Too much logic in this post. :p
 
Seriously, why are you comparing the GT 650M, which has had a year of graphics driver maturity, to a product that is not yet on the market?

You are correct. Apple has shown time and time again that they get driver updates out to their customers at lightning speed.

My parents are stuck with an iMac on 10.6.6 because going any further makes the graphics drivers lock the machine up, a problem that has existed ever since 10.6.8 came out and that Apple has done NOTHING about since. Yes, Apple has simply been giving the one-finger salute to a generation of iMacs for years by not addressing their dGPU drivers.

https://forums.macrumors.com/threads/1262891/

Also, the tearing in the rMBP lasted how long? I think I just saw a thread where people are still experiencing the problem.
 
Yes, if there were other tradeoffs that made it worthwhile:

  • Price
  • Battery Life
  • Weight
  • Heat
I would actually be interested in a cheaper 15" rMBP with a dual-core ULV CPU, like the 28W models that are likely for the 13" rMBP. I don't need quad-core or a dGPU but would love the larger screen size.


Just a stupid idea that occurred to me: is it possible to put two ULV CPUs (4258U) in a laptop, which would draw 56 watts, but at the same time run the two HD 5100 iGPUs in CrossFire?
 
Would Intel let them overclock it? That is a good question. If the Iris Pro, when overclocked, is 5-10% faster than last year's 650M in normal GPU loads (not just OpenCL), I'm okay with it.

No, I counted the numbers from the Anandtech tests. It looks like even going up to last year's TDP (70W) won't give any improvement over last year.

Yes, Iris Pro will be faster in OpenCL computing. But in everything else it will be way slower than the GT 650M from the MBP Retina, and the GeForce can still be overclocked quite a bit further.
 
Too much logic in this post. :p

Except for one small point: if Apple wanted to offer an "ULTIMATE BATTERY LIFE" variant, i.e. a non-dGPU version of the MBP, they would wait until it was ready before updating the entire lineup.

So to me, all indications are that there will be A version of the 15" MBP released sans dGPU.
 
I will need to see the real-life performance of the Iris Pro before making any decisions, but generally I am very glad that the trend of pumping up integrated graphics in laptops and ditching dedicated GPUs is finally beginning.
 