
Atomic Walrus

macrumors 6502a · Original poster · Sep 24, 2012
I'd like to start a thread where we can do some experimental comparisons between the performance of Iris Pro and the 750m, because from what I've seen already this is going to be a very complex topic. I'll start with a few interesting data points I've collected so far:

I started with Unigine Heaven 4.0, a pretty common (and small) benchmark. I tried two different modes and was surprised by the outcome:
-1440x900 Medium:
--Iris Pro: 709
--750m: 717
-1440x900 Low:
--Iris Pro 1205
--750m 1237

So here we're looking at performance deltas within the range of normal run-to-run variance for a single GPU. In fact they're so close that I'm no longer certain gfxCardStatus is actually forcing the 750m off (both gfxCardStatus and Activity Monitor indicate that Iris Pro is active, but it's hard to be certain).
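To put rough numbers on that, here's a quick sketch of the relative deltas between the Heaven scores above; the ~3% run-to-run variance threshold is my own assumption, not a measured figure.

```python
# Relative delta between the two Heaven scores at each setting.
def pct_delta(igpu_score, dgpu_score):
    """Percent difference of the dGPU score relative to the iGPU score."""
    return (dgpu_score - igpu_score) / igpu_score * 100

medium = pct_delta(709, 717)    # 1440x900 Medium
low = pct_delta(1205, 1237)     # 1440x900 Low

# Assumed threshold: scores within ~3% treated as run-to-run noise.
for label, delta in (("Medium", medium), ("Low", low)):
    verdict = "within" if abs(delta) < 3.0 else "outside"
    print(f"{label}: {delta:+.2f}% ({verdict} assumed 3% variance)")
```

Both deltas land comfortably under that assumed threshold, which is why the two runs look interchangeable.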

Next I ran Minecraft.

In order to properly test MC you'll need a utility to control screen resolution directly (Minecraft tries to render pixel-doubled on Iris Pro in HiDPI mode, which completely throws off the results). I used this simple one: http://www.reddit.com/r/apple/comments/vi9yf/set_your_retina_macbook_pros_resolution_to/

I'm sure there are more robust options but this works and is very simple (I've also been using it for quite a while on my 17" just to have quick switching between 1920x1200 and 980x600 HiDPI).

This is at full 2880x1800 resolution (which you'll need to set at the desktop with a utility as mentioned above):
-Iris Pro: Average frame rates in the 80-90 range.
-750m: Average frame rates much lower, 50-60.

Not sure what to make of this. Drivers? I'd love to see some numbers from other users to verify that it isn't simply an issue with my machine, but at the moment I have a fairly simple theory:

Intel's OS X drivers could be significantly better than Nvidia's. It's also very likely that there's nearly zero performance delta between the overclocked 650m seen last year and this year's 750m (same chip). Anyway, I'd love to see results from other users, and I'll be posting more when I've tried other benchmarks (hopefully including some actual games). I don't intend to actually game on this machine, but after all the debate between iGPU and dGPU these results are very interesting.
 
...some experimental comparisons between the performance of Iris Pro and the 750m...

That's excellent. Is there any chance you could do some image processing testing too, for example with iPhoto, Photoshop or FinalCutPro, with heavy loads? I don't know if there are off-the-shelf setups you can use.
 
This is very interesting information. If you could post screenshots of any benchmarks you run, that would be very helpful.
 
Kinda glad someone started pulling this together, because I've been trying to make sense of it all! I'm not sure I know what I'm doing TBH, but I ran a couple of benches with gfxCardStatus 2.3 set to Iris or 750m and got the following results...

[Image: i-Kwbfgdt-L.png]

[Image: i-r4CzPjh-L.png]


I'll try and get time to run a few others tomorrow to add more info. Subscribed!
 
Kinda glad someone started pulling this together, because I've been trying to make sense of it all! I'm not sure I know what I'm doing TBH, but I ran a couple of benches with gfxCardStatus 2.3 set to Iris or 750m and got the following results...

...

51.78 vs 28.40. That makes more sense; the 750M still has more muscle than the Iris Pro.
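For what it's worth, the ratio between those two figures works out as below; which GPU scored which is my assumption from the wording of the post, since the screenshots themselves didn't survive.

```python
# FPS figures quoted above; the 750M/Iris Pro attribution is assumed
# from the wording of the post, not read from the screenshots.
fps_750m, fps_iris = 51.78, 28.40
ratio = fps_750m / fps_iris
print(f"750M advantage: {ratio:.2f}x")
```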
 
I just ran a 1440x900 medium quality fullscreen test and got the following results...

Iris Pro
FPS 24.8, score 626, min 10.0, max 38.2

750M
FPS 26.5, score 668, min 8.9, max 44.5
 
Kinda glad someone started pulling this together, because I've been trying to make sense of it all! I'm not sure I know what I'm doing TBH, but I ran a couple of benches with gfxCardStatus 2.3 set to Iris or 750m and got the following results...

Image

Image

I'll try and get time to run a few others tomorrow to add more info. Subscribed!


One question though: I don't understand why the CPU scores vary between benchmarks. Since you're only turning the dGPU on/off, how come the CPU tests also vary between these two runs?

Thanks for the tests, people. Keep 'em coming.
 
Cinebench, good reminder. Got almost identical scores. Obviously there should be more power available in the 750m than the Iris Pro, but it seems like there are massive software and/or driver variances in real world performance.

Good to see we're getting similar results at least. I'd hate to think that different 750m's were hitting wildly different boost clocks (something that was widely discussed during the original Kepler 680 launch).

A little note on Minecraft: after testing again while watching the power draw, I can see that Minecraft only asks for about 1.8A at any time (and barely ramps up the GPU fan), while Unigine Heaven and Cinebench both draw slightly over 3A. I believe this makes it clear that this is a software anomaly which should be ignored (though I do plan to report it as a bug to Mojang).

I will update this post with screenshots of my benchmark results so far shortly. I'm also open to any suggestions anyone has for more "professional" benchmarks (specifics for test procedures would be helpful as I don't work in video/image creation).

Tomorrow I will go through my Steam library looking for titles which will cooperate with gfxCardStatus (for example CS:GO would not launch in iGPU mode, though it performed fairly well on the 750m).
 
I just ran a 1440x900 medium quality fullscreen test and got the following results...

Iris Pro
FPS 24.8, score 626, min 10.0, max 38.2

750M
FPS 26.5, score 668, min 8.9, max 44.5



Quick runs of LuxMark v2.1beta2:

Iris Pro = 601 points.
750M = 639 points.

Can you clarify what application you were using in the first benchmark?

In both of those benchmarks, the Iris Pro is achieving 94% of the 750M's performance. Impressive.
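That 94% figure checks out against both score pairs posted above (the Heaven run at 1440x900 medium and the LuxMark runs):

```python
# (Iris Pro, 750M) score pairs as posted earlier in the thread.
pairs = {
    "Heaven 1440x900 medium": (626, 668),
    "LuxMark v2.1beta2": (601, 639),
}
for name, (iris, dgpu) in pairs.items():
    print(f"{name}: Iris Pro at {iris / dgpu:.1%} of the 750M")
```

Both ratios land at roughly 94%, so the claim holds for these two synthetic tests at least.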
 
Subjectively, I noticed that when running render tests on the 750m the fan noise was noticeably louder than when running on the Iris chip alone.

My gut instinct (and hey, this is coming from someone who really isn't all that technical, so I could be way off the mark) right now is that the 750m is older tech which is probably optimised for old-fashioned rendering of polygons and texturing, whereas the Iris is a more modern unit that can turn its hand to more traditional CPU-like calculations. It's probably possible to find some apps that run significantly better on the 750 than the Iris, but if these early benches are right, it's possible that Apple's pricing, which appears to 'give away' a free 750m, is because there really isn't the headroom to make it an additional cost!

----------

Can you clarify what application you were using in the first benchmark?

In both of those benchmarks, the Iris Pro is achieving 94% of the 750M's performance. Impressive.

"Heaven", download form
http://unigine.com/products/heaven/
 
Intel's Iris and Iris Pro always do well above average in synthetic benchmarks. I feel people are really starting to get the wrong idea here about the 750M and Iris Pro.
Forget synthetic benchmarks. Try benching some games, either through Steam or Boot Camp, and the difference will be obvious. Silicon aside, the sheer fact that the 750M has 2GB of dedicated GDDR5 memory to itself puts it miles ahead of the Iris Pro, which has a 128MB L4 cache and uses slower DDR3 memory.
That 2GB will make a big difference in games like Skyrim and Battlefield. Combine that with a superior chip and you will get much better frame rates when gaming, despite what Minecraft (Java - I mean, seriously) and the synthetic benchmarks have to say.
Head over to AnandTech if you want to see how the Iris Pro really compares (it really struggles in comparison, but is not unusable for gaming).

If you are not gaming and are just doing CAD and video work, then the Iris Pro will be fine!
 
Intel's Iris and Iris Pro always do well above average in synthetic benchmarks. I feel people are really starting to get the wrong idea here about the 750M and Iris Pro.
Forget synthetic benchmarks. Try benching some games, either through Steam or Boot Camp, and the difference will be obvious. Silicon aside, the sheer fact that the 750M has 2GB of dedicated GDDR5 memory to itself puts it miles ahead of the Iris Pro, which has a 128MB L4 cache and uses slower DDR3 memory.
That 2GB will make a big difference in games like Skyrim and Battlefield. Combine that with a superior chip and you will get much better frame rates when gaming, despite what Minecraft (Java - I mean, seriously) and the synthetic benchmarks have to say.
Head over to AnandTech if you want to see how the Iris Pro really compares (it really struggles in comparison, but is not unusable for gaming).

If you are not gaming and are just doing CAD and video work, then the Iris Pro will be fine!

Could you define gaming here? There's a wide range; for instance, I only care about running games at around 1680x1050 and getting anything above 30fps - I couldn't care less about a constant 60. Do you think the Iris Pro is up to that?
 
Per AnandTech, Iris Pro worked pretty good in various real-world workloads. The review is for the iMac base model with Iris Pro. Anand got a lot of video cards to compare with. :D

I dunno about "pretty good." I would characterize his comments as long-run optimistic but much more critical. I'll take two excerpts, one from the Iris Pro benchmarking article and one from the iMac review:

For the past few years Intel has been threatening to make discrete GPUs obsolete with its march towards higher performing integrated GPUs. Given what we know about Iris Pro today, I'd say NVIDIA is fairly safe. The highest performing implementation of NVIDIA's GeForce GT 650M remains appreciably quicker than Iris Pro 5200 on average. Intel does catch up in some areas, but that's by no means the norm. NVIDIA's recently announced GT 750M should increase the margin a bit as well. Haswell doesn't pose any imminent threat to NVIDIA's position in traditional gaming notebooks. OpenCL performance is excellent, which is surprising given how little public attention Intel has given to the standard from a GPU perspective.

As we found in our preview of Intel’s Iris Pro 5200, in its fastest implementation the GPU isn’t enough to outperform NVIDIA’s GeForce GT 650M (found in the 2012 15-inch rMBP).
 
My gut instinct (and hey, this is coming from someone who really isn't all that technical, so I could be way off the mark) right now is that the 750m is older tech which is probably optimised for old-fashioned rendering of polygons and texturing, whereas the Iris is a more modern unit that can turn its hand to more traditional CPU-like calculations.
Yes, and no.

The design of modern GPUs and their typical problem space have brought about a solution that uses long pipelines in parallel solving the same problem on many pixels at a time. They don't do terribly well when you do general computation with lots of branching or state changes but they scream when allowed to run flat out. (Think F1/indy car)

The intel solution is almost certainly coming from a general compute angle, where the pipeline is shorter and less penalized for state changes. Crystalwell (the embedded DRAM that makes the 'PRO' in Iris Pro) also provides a very large low latency cache that helps with this problem. It does better on the curves, but loses on the open road.

It's unclear to me if the benches that the Iris Pro do so well on are representative of "well written OpenCL" or not. It may be that the problem is necessarily a better match for the Iris pipeline, or it may be that it's written suboptimally for the standard compute model of most GPU architectures and the test itself could be refactored to be better performing on deep pipelines.

It's probably possible to find some apps that run significantly better on the 750 than the Iris
Anything that is fillrate/throughput bounded will show the 750 to be considerably more powerful than the Iris Pro for those tasks.
 
Could you define gaming here? There's a wide range; for instance, I only care about running games at around 1680x1050 and getting anything above 30fps - I couldn't care less about a constant 60. Do you think the Iris Pro is up to that?

http://m.youtube.com/user/Retinagameshow?&desktop_uri=%2Fuser%2FRetinagameshow

See above. He compares a whole host of games at different FPS.
This is my first Mac and I'll be using it for a fair bit of gaming (please don't start with the whole "save your money and build a PC" thing - I have one, and I travel a lot for university so I need something mobile and well built). It should serve my needs fine; I play a lot of World of Tanks and RTS games, as well as less demanding shooters, and I have no doubt it will handle those fine. I'll probably run most stuff at 1680x1050 or 1440x900 on high with FXAA or 2xAA.
 
Intel's Iris and Iris Pro always do well above average in synthetic benchmarks. I feel people are really starting to get the wrong idea here about the 750M and Iris Pro.
Forget synthetic benchmarks. Try benching some games, either through Steam or Boot Camp, and the difference will be obvious. ...

If you are not gaming and are just doing CAD and video work, then the Iris Pro will be fine!

If Intel's GPUs score well in synthetic benchmarks but lag behind in specific games, that would imply that the drivers need optimizing, something NVIDIA and ATI/AMD have been doing for years. It will be interesting to see if Iris benchmarks significantly improve with successive driver releases.
 
This screenshot shows the 5200 just below the 650, while the 650 is just below the 750.
I'm not sure whether the new Pros use the 5200; I would imagine so, though.

This is for more gaming benchmarks; you can review the differences, but surprisingly it looks like the Iris Pro isn't that much worse.
 

Attachments

  • Screenshot 2013-10-24 18.49.22.png (413.3 KB)
http://m.youtube.com/user/Retinagameshow?&desktop_uri=%2Fuser%2FRetinagameshow

Shows the performance of the Ivy rMBP, very capable.
 
Intel's Iris and Iris Pro always do well above average in synthetic benchmarks. I feel people are really starting to get the wrong idea here about the 750M and Iris Pro.
Forget synthetic benchmarks. Try benching some games, either through Steam or Boot Camp, and the difference will be obvious. Silicon aside, the sheer fact that the 750M has 2GB of dedicated GDDR5 memory to itself puts it miles ahead of the Iris Pro, which has a 128MB L4 cache and uses slower DDR3 memory.
That 2GB will make a big difference in games like Skyrim and Battlefield. Combine that with a superior chip and you will get much better frame rates when gaming, despite what Minecraft (Java - I mean, seriously) and the synthetic benchmarks have to say.
Head over to AnandTech if you want to see how the Iris Pro really compares (it really struggles in comparison, but is not unusable for gaming).

If you are not gaming and are just doing CAD and video work, then the Iris Pro will be fine!

Yeah, you're quite right: when a real game enters the mix the 750m is clearly ahead. I just found the actual similarity in pure rendering performance (which is essentially what synthetics like Unigine Heaven measure) to be kind of impressive. And it was interesting to see a case where driver issues actually put the Iris Pro far ahead of the 750m (Apple and/or Nvidia have never been great with driver updates or OpenGL support in general compared to what we get on the Windows side with D3D).

I mostly started this thread for fun; Anandtech will have solid numbers for us soon enough, but until then I don't think there's much harm in running some non-scientific home tests just to see how the two chips perform "in the wild" (in the rMBP itself).
 
Yes, and no.

The design of modern GPUs and their typical problem space have brought about a solution that uses long pipelines in parallel solving the same problem on many pixels at a time. They don't do terribly well when you do general computation with lots of branching or state changes but they scream when allowed to run flat out. (Think F1/indy car)

The intel solution is almost certainly coming from a general compute angle, where the pipeline is shorter and less penalized for state changes. Crystalwell (the embedded DRAM that makes the 'PRO' in Iris Pro) also provides a very large low latency cache that helps with this problem. It does better on the curves, but loses on the open road.

It's unclear to me if the benches that the Iris Pro do so well on are representative of "well written OpenCL" or not. It may be that the problem is necessarily a better match for the Iris pipeline, or it may be that it's written suboptimally for the standard compute model of most GPU architectures and the test itself could be refactored to be better performing on deep pipelines.


Anything that is fillrate/throughput bounded will show the 750 to be considerably more powerful than the Iris Pro for those tasks.

Agreed 100%. Based on testing, the Iris Pro is less than half as fast at actual graphics rendering as the 750M, but about equal in computation workloads.

If Intel's GPUs score well in synthetic benchmarks but lag behind in specific games, that would imply that the drivers need optimizing, something NVIDIA and ATI/AMD have been doing for years. It will be interesting to see if Iris benchmarks significantly improve with successive driver releases.

Doubtful. The Iris chip is good at single computations while the Nvidia chip excels at parallel computations. This isn't a driver issue, it's a fundamental hardware difference.
 
Nvidia have always been good with drivers. I've been building PCs with them since I got my first GeForce FX 5600. It's Intel that is notoriously bad for graphics drivers.

I've seen what the GT 650M can do on the Ivy rMBP and it's impressive, so I'm not worried about buying my first Haswell rMBP for gaming; even if it's only around 5-15% better depending on the game, it's perfect for my needs.
Just look at retinagameshow on YouTube and you'll see what I mean.
 