One should not forget that Iris Pro has access to more than just the 128 MB of video RAM: it will still have access to the machine's slower DDR3 RAM.

In the end we will have to wait for benchmarks. There is a lot of uncertainty that goes beyond the raw power of the hardware... in particular, drivers and thermals can make big differences.
 
I am not sure Iris Pro has direct access to RAM. It's likely indirect, through the CPU. Also, the iGPU core clock rate is slower than DDR3 RAM's, if I am not wrong.
 
Well, given that the GPU is part of the CPU and accesses RAM over the same memory controller, you are both right and wrong ;) And the 128 MB of eDRAM is not video memory - it's an L4 cache, used by both the GPU and the CPU. Read the AnandTech article; they describe it very well. As to your remark about clocks, I am not sure where you are going with this. RAM often has higher clocks than the processing unit, so why is it even relevant? The iGPU can still output data at a much higher rate than the DDR3 would be able to accommodate - Iris Pro is bandwidth limited, as tests clearly show. In regards to computing power, the HD 5200 has plenty for an iGPU.
 
I read somewhere that the Iris Pro core clock rate is 400 MHz. How could it output data faster than DDR3 can accommodate, knowing DDR3 has a faster clock rate?
 
Well, don't get too fixated on clocks! Clock rate alone is not a very useful metric; e.g. a modern 1.3 GHz ULV CPU will completely beat the 4 GHz Pentium 4.

And anyway, after doing some research it turns out that the clock rate of DDR3-1600 is 200 MHz, and the max clock of Iris Pro is 1300 MHz ;)

P.S. A much more useful metric is bandwidth. Anyway, just looking at the benchmarks of Iris Pro you can see that it's bandwidth limited (i.e. the memory subsystem is the weak point) - at low resolutions it almost outperforms the 650M, but at higher ones it rapidly falls behind.
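To make that concrete, here is a rough Python sketch of the peak-bandwidth arithmetic. The DDR3 figure follows from the spec; the 650M figure assumes Nvidia's reference GDDR5 configuration (4000 MT/s effective on a 128-bit bus), which may not match the exact part Apple ships:

```python
# Theoretical peak bandwidth = transfers/s x bus width (bytes) x channels.
def peak_bandwidth_gb_s(transfers_per_s, bus_bits, channels=1):
    return transfers_per_s * (bus_bits / 8) * channels / 1e9

# DDR3-1600 on a 64-bit bus, dual channel (the iGPU's pool of system RAM):
print(peak_bandwidth_gb_s(1600e6, 64, channels=2))  # ~25.6 GB/s
# GT 650M with reference-spec GDDR5 (4000 MT/s effective, 128-bit bus):
print(peak_bandwidth_gb_s(4000e6, 128))             # ~64.0 GB/s
```

That factor of roughly 2.5x between the two memory pools is what shows up in the benchmarks as the fall-off at higher resolutions.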
 
Just a small correction: DDR3-1066's min-max clock rate is 400–1066 MHz, and DDR3-1600's max clock is 1600 MHz... :D

Do you refer to this benchmark?
 
I suppose Intel could be adding more memory to fix this problem, but as far as I understand it, the memory used in Iris Pro is one of the most expensive components, and increasing it would likely lead to a large spike in unit costs.


For the CPU perhaps. But if it means Apple can simplify the system board design and do away with an entire supply chain for GPUs, then it may very well be worth it to spend money there to save money elsewhere.
 
Just a small correction: DDR3-1066's min-max clock rate is 400–1066 MHz, and DDR3-1600's max clock is 1600 MHz... :D

The RAM module clock rate is 100-266 MHz; you are talking about the I/O clock. Anyway, let's do a simple calculation. DDR3-1600 can do a hypothetical 1600*1e6 64-bit transfers per second, i.e. 1600 million double-precision floating point numbers per second. With dual-channel that doubles to 3200 (again, theoretically).

Now let's look at CPUs. Modern Intel CPUs are capable of doing multiple double-precision calculations per clock (the max seems to be 8 DP FLOPS per core for Ivy Bridge and 16 for Haswell). Let's, for the sake of the argument, take the number 4 FLOPS per clock per core. This means that the ULV CPU in the MBA, operating at 1.3 GHz (let's forget about Turbo Boost here), can theoretically do clocks*cores*4 = (1.3*1e9)*2*4 = 10.4 GFLOPS, or 10400 million double-precision floating point numbers per second. This is a much faster data processing rate than the DDR3 can handle. And Iris Pro has a peak performance rate of 832 GFLOPS (not sure if it's single or double precision; in the worst case we still have over 400 GFLOPS DP).
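A minimal Python sketch of the same back-of-the-envelope numbers, using the post's own assumption of 4 FLOPS per clock per core:

```python
# Data rate the memory can feed vs. the rate the CPU can chew through it.
ram_doubles_per_s = 1600e6 * 2        # DDR3-1600, dual channel: 3.2e9 doubles/s

clock_hz, cores, flops_per_clock = 1.3e9, 2, 4   # MBA ULV CPU, no Turbo Boost
cpu_flops = clock_hz * cores * flops_per_clock   # 10.4e9 DP FLOPS

print(f"RAM feed rate: {ram_doubles_per_s / 1e9:.1f} billion doubles/s")
print(f"CPU peak rate: {cpu_flops / 1e9:.1f} GFLOPS (DP)")
print(f"CPU outpaces RAM by {cpu_flops / ram_doubles_per_s:.2f}x")  # ~3.25x
```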

So yeah, RAM is VERY slow compared to the CPU. This is the reason why we have all the caches (extremely fast, small memory pools that work as a layer between the CPU and the RAM) - well-written algorithms keep all the data in the cache so that RAM access can be minimised.
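That cache effect is easy to demonstrate. A toy example (assuming NumPy is installed): summing the same number of float64 values is much faster when they are packed contiguously than when each one lands on its own cache line and the memory subsystem becomes the bottleneck:

```python
import time
import numpy as np

a = np.random.rand(64_000_000)      # ~512 MB of float64, far larger than any cache

contiguous = a[:8_000_000].copy()   # 8M elements packed together (~64 MB)
strided = a[::8]                    # 8M elements, one per 64-byte cache line

for name, arr in (("contiguous", contiguous), ("strided", strided)):
    start = time.perf_counter()
    arr.sum()                       # identical FLOP count in both cases
    print(f"{name}: {time.perf_counter() - start:.3f} s")
```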

Do you refer to this benchmark?

No, this one: http://www.anandtech.com/show/6993/intel-iris-pro-5200-graphics-review-core-i74950hq-tested
 
For the CPU perhaps. But if it means Apple can simplify the system board design and do away with an entire supply chain for GPUs, then it may very well be worth it to spend money there to save money elsewhere.
To whom?

No need to answer, we already know - it ain't us (consumers). Well, unless your goal was the same as Apple's - to make everything impossibly thin.
 
Depends.

If the GPU is "good enough" (most Mac customers don't even need a dGPU at all) and gives much better battery life, less heat, etc., then it's a win for those customers too.

Myself, I'd gladly trade 10% FPS (which is probably the frame rate difference we're talking about here for the new version under wraps) for a MacBook Pro that doesn't sound like a jet turbine and doesn't get uncomfortably hot when actually used as a portable machine. The several hours more battery life is another nice bonus.
 
Wouldn't it be nice to be given a choice? I know I'd like to decide for myself which system I wanted. Hopefully Apple sees fit to allow that to happen.

If they truly are killing off the cMBP, I don't see how Apple can remove the option of having a dGPU without alienating a large percentage of their customer base.
 
Choice has never been an Apple thing. Their philosophy has usually been the opposite, especially when the choice people make has to weigh one thing against another. They don't want their customers to worry about trade-offs. The more expensive product must be better in every respect.

You may prefer it, but you really shouldn't buy Apple if you want choice. They are all about making one product as good as possible in their eyes, by their criteria, and never about making the perfect product for addressing different demands. That is what other companies usually try. The downside is that those other companies don't spend quite as much time refining each product.

I think Apple is also a bit lazy. If they kill the dGPU, they don't have to spend as much time on optimization for Nvidia/AMD GPUs. They only need to target one architecture. They have always been lazy about making stuff available to developers and have only every once in a while used great features themselves. Support for CUDA, the OpenGL API and QuickSync is almost entirely missing. If they only have to spend time optimizing their code, libs and drivers for Intel, it will be easier. Maybe it helps the situation.
They still need to support the Mac Pro, but only for specialized topics; for all the standard mainstream stuff the Mac Pro won't have a problem anyway.


A 760M GPU would be cool, and looking at benchmarks there is a big jump from the 750M/650M/Iris Pro trio to the 760M. Those three are not that far apart, but the 760M can pull quite significantly ahead in some (not all) games. It is double a 650M at lower clocks. While it would be a stretch for the cooling system with the rather hot Haswell at its side, it should be doable.
The Acer V3 is a 17" and its cooling has its problems even with a 37W Haswell, but the power metrics don't look so different from the previous model's 45W 3630QM + 650M@850MHz.
That GPU could have been an option (maybe).
It is still a stretch even with some headroom in the rMBP cooling system. Apple would have wanted the high-end quads. The faster 2.6/2.7 GHz chips were always a lot hotter than the 2.3 GHz entry CPUs, and they won't use a 37W part either. Also, in many of the new notebooks with 700M GPUs the cooling gets all messed up because the aggressive GPU turbo often forces the CPU to throttle: they share the cooling blocks and the CPU usually runs hotter. While the GPU thinks it is still fine at some 75C, the CPU burns up. Nvidia needs to fix that.
 
Myself, I'd gladly trade 10% FPS (which is probably the frame rate difference we're talking about here for the new version under wraps) for a MacBook Pro that doesn't sound like a jet turbine and doesn't get uncomfortably hot when actually used as a portable machine. The several hours more battery life is another nice bonus.

10% FPS at which resolution and which settings?

At lower resolutions, Iris Pro is consistently 30% behind the 750M, and it's behind by as much as 70-80% at higher resolutions.

To gain enough performance that the difference is only 10%, Intel would have to either overclock the chip, which increases its TDP, or move to a different process node. That means... Broadwell, or they have to push Iris Pro's power consumption close to that of a 640M. Not necessarily a good idea, since cooling one hot chip is harder than cooling two cooler chips.

And sorry to burst another bubble, but the MacBook Air, which has the far more power-efficient HD 5000 GPU, still sounds like a jet turbine and still gets uncomfortably hot to use:
http://www.notebookcheck.net/Review-Apple-MacBook-Air-11-Mid-2013-i5-1-3-GHz-128-GB.96570.0.html

And you still won't get several hours more battery life. As it is, the screen sucks far too much power. At 50% brightness doing nothing, I'm still staring at a max of 9 hours. If Apple can improve battery life, it'll probably be to about 9-10 hours at light load, and you still have to make do with 3-4 hours under max load. But that's hardly different from the current model.
 
The bespoke Iris model Apple is using in the rMBP isn't released yet, and any performance comparisons are pure speculation.

They don't necessarily need to overclock. They could add some execution units, more cache, and maybe even go multi-socket if they were to ditch AMD/Nvidia (i.e., two quad-core i7s in the same machine). Or a combination.

We don't have any idea what they've done yet.


The screen uses nowhere near as much power as the CPU+GPU running hard. It's nothing in comparison... I've chewed through my MBP battery in 45 minutes running 3D stuff. Normally I can get 7 hours or so. The screen uses very little in the scheme of things if you're pushing the box.

I'm not talking about a couple of hours more at idle. I'm talking about a couple more hours when actually using the machine.
 
Well, people using the laptop for 'professional' stuff, e.g. GPU compute, Photoshop acceleration, etc., probably won't notice much difference compared to having a dGPU. Also, things like CAD have the potential to be fast on the IGP, depending on how well the driver is optimised for that kind of work. After all, Iris Pro packs some relatively serious raw computational power, and it can also freely access system RAM, bypassing the need for potentially expensive data copies. Gaming performance will however take a big hit, as will any bandwidth-limited operations.


I think Apple is also a bit lazy. If they kill the dGPU, they don't have to spend as much time on optimization for Nvidia/AMD GPUs. They only need to target one architecture.

This is a good point, but unfortunately, you are forgetting the iMac (which will still be using dedicated GPUs).
 
The bespoke Iris model Apple is using in the rMBP isn't released yet, and any performance comparisons are pure speculation.

They may just be asking for higher-binned Iris Pro chips that can reliably be set to a higher TDP than the default 47W.

Refer to Anandtech's article:
http://www.anandtech.com/show/6993/intel-iris-pro-5200-graphics-review-core-i74950hq-tested/5

At the request of at least one very eager OEM, Intel is offering a higher-TDP configuration of the i7-4950HQ. Using Intel’s Extreme Tuning Utility (XTU) I was able to simulate this cTDP up configuration by increasing the sustained power limit to 55W, and moving the short term turbo power limit up to 69W. OEMs moving from a 2-chip CPU + GPU solution down to a single Iris Pro are encouraged to do the same as their existing thermal solutions should be more than adequate to cool a 55W part. I strongly suspect this is the configuration we’ll see in the next-generation 15-inch MacBook Pro with Retina Display.

If they do that, though, then the power-saving benefits of Iris Pro would turn out to not be so significant after all.
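For intuition, here is a toy Python model of how a sustained limit (PL1 = 55W) and a short-term turbo limit (PL2 = 69W) could interact. The 28-second window and the simple budget rule are illustrative guesses, not Intel's actual algorithm:

```python
PL1, PL2, WINDOW = 55.0, 69.0, 28    # watts, watts, seconds (illustrative)

def allowed_power(requested, recent):
    """Permit bursts up to PL2 while the average over the last WINDOW
    seconds (including this sample) stays at or below PL1."""
    budget = PL1 * WINDOW - sum(recent)   # watt-seconds left in the window
    return max(0.0, min(requested, PL2, budget))

recent = []                           # delivered power, one sample per second
for t in range(60):                   # a chip asking for 69 W continuously
    power = allowed_power(69.0, recent[-(WINDOW - 1):])
    recent.append(power)
    if t % 5 == 0:
        print(f"t={t:2d}s  {power:5.1f} W")
```

The point of the model: the chip bursts at 69W for a while, then the window fills up and it must throttle back so the long-run average stays near 55W. That is why how much power a cTDP-up Iris Pro "saves" depends heavily on how long the load runs.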

The screen uses nowhere near as much power as the CPU+GPU running hard. It's nothing in comparison... I've chewed through my MBP battery in 45 minutes running 3D stuff. Normally I can get 7 hours or so. The screen uses very little in the scheme of things if you're pushing the box.

I'm not talking about a couple of hours more at idle. I'm talking about a couple more hours when actually using the machine.

Refer to the above about why the power-saving benefits may not be so significant.

In the configuration Anandtech had it in, Iris Pro gets pretty close to the 650M in the current rMBP, but is still quite a bit behind. That's understandable, because Apple overclocked the 650M in the rMBP to exceed 660M performance while keeping TDP down.

But the point is Iris Pro was running close to 70W with Turbo Boost.

And unlike a separate 45W CPU + 45W GPU config, where the GPU may hit 45W while the CPU sits at 20W, you're looking at a single chip that will happily Turbo until it hits 70W... no matter whether it's the CPU or the GPU that's under load.

So in the end, the situation under load levels out either way. You probably save 10-15W at absolute max load at most if Apple goes that route.

But there is no realistic case where you have both the CPU and GPU running at max load (total 90W) on the rMBP unless you're running Folding@HOME or something else like that.

And the screen really does use that much power. The battery is 95 Wh, but even at idle I can only get a max of 9 hours, most of the time less. That means the screen, just sitting there being pretty, draws roughly 8-10W on average. That's nothing to scoff at, considering most other panels would sip battery at 3-5W.
 
If they do that, though, then the power-saving benefits of Iris Pro would turn out to not be so significant after all.

But they will. Because as soon as you run anything 3D, you're looking at a max TDP of 45 watts for the CPU+iGPU (the iGPU's power consumption is included in the CPU's 45 watts or so - I forget the exact figure for the new models, but it's about that).

Rather than a max of 45 watts for the CPU and another 45 for the dGPU.


Again, I'm not interested in max battery life comparisons when the machines are sitting there idle reading MacRumors.

I'm talking about when they are actually being used.
 
A few months ago I assumed the same, but there was this Geekbench leak of the 4950HQ MBP. Only a 4750HQ base model makes any sense, but with a 4950HQ it has to be a high-end model, and it wouldn't make any sense to add a dedicated GPU to one of those.
It is most likely HD 5200 only, all the way. People can probably choose between the three CPUs, and the whole price will probably drop somewhat.

The 4750HQ isn't all that expensive, but with the 4850HQ it gets expensive, and the 4950HQ is more expensive than a dedicated solution with a more reasonable CPU.

This line of reasoning is spot-on and generally what I have thought as well.
 
But they will. Because as soon as you run anything 3D, you're looking at a max TDP of 45 watts for the CPU+iGPU (the iGPU's power consumption is included in the CPU's 45 watts or so - I forget the exact figure for the new models, but it's about that).

Rather than a max of 45 watts for the CPU and another 45 for the dGPU.

Sigh... I'm not sure if you're being intentionally ignorant. But there are a few things I'd like to correct for you:

Iris Pro is 47W, not 45W. And that's nominal. Turbo may push that up by another 10-15W, so you're realistically looking at a minimum of 57W under Turbo. And if Iris Pro follows the trend of HD 5000, then it NEEDS Turbo Boost in order to reach maximum performance even when the CPU is not under heavy load. Under heavy load, you're looking at a chip that consumes 60W due to Turbo Boost. Not a max-47W chip.

If Apple decides to do what Anandtech does, then you're looking at max 70W for a single chip. Again, because Iris Pro may NEED Turbo to reach maximum performance even when the CPU is not under load.

But on a separate 45W CPU + 45W GPU configuration, even when the GPU is under max load at 45W, the CPU does not need to go to 45W if it's not under heavy load.

So you're most likely looking at 45W GPU + 20W CPU at most depending on how many CPU cores are active. Most 3D applications and even games don't use more than 2 CPU cores at once, and most of the time don't even need that much processing power from the CPU.

The only situation where Iris Pro may offer significant battery life savings on the rMBP 15" under load would be if Apple decides to leave it at stock and lock Turbo so that the chip can't perform that fast. But then they would surely suffer a roughly 40-50% performance loss in 3D applications and gaming at higher resolutions. I wonder what they'll tell the press then to calm the herd.
 
Spending nearly $3000 on a laptop with integrated graphics and non-user-replaceable RAM and SSD is a very hard sell for me.
 
I'm glad there are others in the dGPU camp. I'm not a processor nerd, but theoretically there are some seriously significant feats that will need to be pulled off if Iris Pro is to replace the current CPU+dGPU setup that supports all current MacBook Pro users. Although it's clear that Intel's graphics are progressing rapidly, I do not believe that Iris Pro will be the chip that breaks this barrier. The benchmarks don't really predict that, and most theoretical assumptions behind the possibility of it doing so also fall short. Those points aside, it just doesn't seem like what Intel is doing. They are progressing at a rather consistent pace, and a breakaway chip setup (overclocked to hell, for Apple) doesn't seem like something they would really do.

Speculation aside: I personally don't think Apple would allow the massive performance crush on certain tasks (especially at higher resolutions, given the Retina display). Soooooo, probably next year's model, when Broadwell becomes the new Haswell. I wouldn't be surprised if entry-level models exclude the dGPU, though. All I know is, if they don't get rid of dGPUs and instead refresh them, this is gonna be one hot puppy. Even for MBPs.
 
Well, let me put it this way: if Apple has convinced Intel to create a custom CPU for them with 80 iGPU cores, then I will take that over the 650M :) But how likely is that?
 
To those who say that the Iris Pro is close to the one-year-old 650M: take a look again. The increase in battery life had better be substantial, since the GPU performance is a step backwards.

[Three benchmark charts were attached here.]
 
But the increase in battery life won't be that substantial. That's not speculation. That's factual.

The screen still consumes far too much power for idle power consumption to make any difference.

And unless Iris Pro scales its performance perfectly (that's impossible!), it'll still consume more power than HD 4000 or HD 4600 just handling the interface.

All in all, idle power consumption would likely stay the same (or increase slightly) with Iris Pro, and who would care about battery life under heavy load when you'd get less than 2 hours at max load either way?

Logic: the battery is 95 Wh, the CPU (w/ "custom" Iris Pro) is likely 55W (I'll even ignore Turbo Boost here) + display and other components at nominally 10W (50-75% brightness). Calculation says you get about 1.46 hours with those parameters. Read: 1 hour and 28 minutes.

That's hardly better than the 2012 model.

Granted, the 2012 model at max load would be 45W CPU (ignoring Turbo Boost again) + 45W GPU + 10W display and other components. Calculation gives 0.95 hours, or 57 minutes.

You barely get an extra 30 minutes of battery life... but for obviously less graphics performance.
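The runtime arithmetic above as a tiny Python sketch, with the same assumed (not measured) wattages:

```python
def runtime_hours(battery_wh, *loads_w):
    """Idealized runtime: capacity divided by total draw, no efficiency losses."""
    return battery_wh / sum(loads_w)

# Hypothetical 55 W Iris Pro package + 10 W display and other components:
print(f"{runtime_hours(95, 55, 10):.2f} h")      # ~1.46 h, about 1 h 28 min
# 2012-style 45 W CPU + 45 W dGPU + 10 W display and other components:
print(f"{runtime_hours(95, 45, 45, 10):.2f} h")  # ~0.95 h, about 57 min
```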

Not worth it at all IMO.

Also, I know my calculation is about right for the 2012 model because I have a 2012 model, and I have seen the battery estimate as low as 54 minutes under Boot Camp when I stressed both the CPU and GPU with Battlefield 3 running at 2560 x 1600.

Granted, Iris Pro may be less of a jet engine at such an insane load, but I don't think it'd make battery life that much better for the rMBP 15".
 