
PatientWaiter

macrumors newbie
Original poster
Mar 25, 2013
I was hoping someone could help me understand the new (October 2013 Retina MBP) 15" models, specifically the graphics difference between the high-end and low-end configurations. I know that I will be needing (wanting/future-proofing) the 512GB SSD and the 16GB RAM... but is there a particular benefit/drawback to the Iris Pro versus the NVIDIA card?

My last computer was purchased in early 2006... hard to believe this thing is still running (poorly as of late), but I'm eager to get back to current technology. Money is an issue, though, and this new purchase needs to last. I'm okay with spending $2,500-ish on a new laptop assuming it will last me 5 or more years. (Especially considering my current investment has lasted 7+ years.)

Needless to say, I'm a little behind the times and not confident about the graphics end of things. As for the other components, I'm good with those. If there are any words of wisdom or links you might provide, I'd be happy to hear them.

Essentially, it boils down to this - is it better to use Iris Pro and line-item upgrade the lower end system, or just go with the high end system and call it a day...?

Many thanks and all the best!
 
People should really wait for the benchmarks, which will turn what is currently speculation at best into hard facts. Nobody knows yet how good or bad the Mavericks GPU drivers are.

Chances are that people who stay in OS X and whose main GPU needs come from professional software will want the Iris Pro only: no GPU-switching hassle, Intel is better at general-purpose compute, and even in OpenGL Intel seems to come out ahead.
For gaming on Windows, though, the Nvidia 750M is a lot better. It is not wise to look at old AnandTech benchmarks, as the driver situation has probably changed since then. Notebookcheck has some newer benches. It will take a week, maybe two, and then we will have lots of reviews with the newest drivers, including on OS X.
 
I'm interested in some enlightenment here as well. My understanding with the previous-gen rMBP was that the dGPU was beneficial to applications such as Aperture and Photoshop. There have been some posts about the new rMBP saying the dGPU will actually perform worse than Iris Pro in similar applications, which doesn't make sense to me. Price is not an issue as I am ordering the maxed-out model anyway; the speed of my workflow is my primary (only) concern. I do not game or do 3D rendering. Is there truth to the idea that the Iris Pro-only model would actually be faster for photo work (processing multiple large RAW files) and some video? I still have time to cancel and reorder my BTO.
 
First of all, I recommend reading this article from AnandTech. Second, we should wait for the benchmarks. It is possible that the Iris Pro in the new MacBooks performs better than the standard part (again, this possibility is explained well in the article).

As far as I see it, the dGPU will only be beneficial for demanding games, heavy 3D work, or CUDA-assisted applications (in which case, you should get the 2012 refurb model). The iGPU might actually be better for tasks like photo and video editing, and it also packs more punch for scientific processing.
 
I would be interested in any info regarding this as well. How good is the Iris Pro exactly? How will it run SolidWorks, or games? And how good is the 750M in the end? Today I read that most "benchmarks"/info about it refer to the GDDR3 version of the 750M, not the GDDR5 version used in the Haswell rMBP. Other info showed the 750M to be slightly better than the 660M, which was an excellent GPU imo. So what is the truth about the Iris Pro and the 750M? In the end, I hope to get a clearer perspective on this and decide whether it's worth the money or not.

Like others said, info will start to come. Could we gather it here? A good thread on this will avoid many of the same questions being asked in the future.
 
I would be interested in any info regarding this as well. How good is the Iris Pro exactly? How will it run SolidWorks, or games? And how good is the 750M in the end? Today I read that most "benchmarks"/info about it refer to the GDDR3 version of the 750M, not the GDDR5 version used in the Haswell rMBP. Other info showed the 750M to be slightly better than the 660M, which was an excellent GPU imo. So what is the truth about the Iris Pro and the 750M? In the end, I hope to get a clearer perspective on this and decide whether it's worth the money or not.

Like others said, info will start to come. Could we gather it here? A good thread on this will avoid many of the same questions being asked in the future.

If you're planning on using SolidWorks, Maya, LightWave, or any 3D creation or CAD application, get the Iris Pro one. It is significantly faster for those tasks and not that far behind the GT 650M/750M for gaming...

Here are some benchmarks of the Iris Pro 5200: http://www.notebookcheck.net/Intel-I...0.90965.0.html

Nvidia GT 650M (the 750M, which is fairly similar to the 650M, does not include SPECviewperf 11 scores for comparison):
http://www.notebookcheck.net/NVIDIA-...M.71887.0.html

If you are remotely interested in 3D creation apps, check the SPECviewperf 11 scores. Even in the Cinebench scores, the Iris Pro's minimum frame rates exceed the 650M's by 10 fps.
 
If you're planning on using SolidWorks, Maya, LightWave, or any 3D creation or CAD application, get the Iris Pro one. It is significantly faster for those tasks and not that far behind the GT 650M/750M for gaming...

So is Apple offering the 750 just for gamers and to shut people up on boards like these?:confused:
 
If you're planning on using SolidWorks, Maya, LightWave, or any 3D creation or CAD application, get the Iris Pro one. It is significantly faster for those tasks and not that far behind the GT 650M/750M for gaming...

Here are some benchmarks of the Iris Pro 5200: http://www.notebookcheck.net/Intel-I...0.90965.0.html

Nvidia GT 650M (the 750M, which is fairly similar to the 650M, does not include SPECviewperf 11 scores for comparison):
http://www.notebookcheck.net/NVIDIA-...M.71887.0.html

If you are remotely interested in 3D creation apps, check the SPECviewperf 11 scores. Even in the Cinebench scores, the Iris Pro's minimum frame rates exceed the 650M's by 10 fps.

I saw that site today (first link). I don't understand the data completely, though.

There is a "SolidWorks" benchmark (Ctrl+F search for solidworks). I see "max: 15fps". This is the first thing I don't understand. I assume this doesn't mean that SolidWorks runs at a max of 15 fps lol.

Secondly, if you open the comparison chart, you will see the GTX 780M has "-8%". Does that seriously mean that the GTX 780M runs 8% slower than the Iris Pro? And is that for general performance, or only for something specific?

edit: The HD 4600 is "-5%", the HD 5000 is "-21%". How can the HD 5000 be so much worse than the 4600? Shouldn't it actually be better? I really don't understand this anymore, sorry if I'm being a complete noob.

If Iris Pro is really this amazing in SolidWorks the deal seems 95% done. Base model for me. SolidWorks > games in the end, I'm not a kid anymore haha. And I've seen Iris Pro run some fun games (even from this year) at around 30 fps on pretty good settings, so it's ok.
 
Notebookcheck only offers Windows benchmarks; OS X tests are still outstanding.
The thing is, though, that Nvidia does worse in the OpenGL Cinebench test on OS X than on Windows, while for the Intel HD 4000 and so on the results are reversed, with OS X OpenGL performance actually better. For OpenCL, Kepler (650M/750M) is just not very good. I suspect that on OS X the performance in this sort of workload will lean even more toward the Iris Pro.

"Tibits said:
So is Apple offering the 750 just for gamers and to shut people up on boards like these?
I would suspect so. Also, they get lots of people to buy a $2,600 machine who have never looked at any benchmark and have just heard Nvidia=great and Intel=crap. There are far more of those people than of the forum-goers who are interested in technical details. It is really a win-win. Maybe that is also why they put the Iris Pro next to a 750M, so people don't face the dilemma that the cheaper model is actually better for certain OS X pro workloads.
Also, I think casual gamers actually make up quite a lot of their customers. I assume 95% of people who buy 15" MBPs never need a fast GPU for anything other than games: programmers, students of all sorts, teachers. Engineers, photographers, and people in the creative industry are probably the minority today, and engineers have always been a small enough group on Macs that they're probably not worth mentioning.
 
Also, they get lots of people to buy a $2,600 machine who have never looked at any benchmark and have just heard Nvidia=great and Intel=crap. There are far more of those people than of the forum-goers who are interested in technical details. It is really a win-win. Maybe that is also why they put the Iris Pro next to a 750M, so people don't face the dilemma that the cheaper model is actually better for certain OS X pro workloads.

I hear you. I know the faster processor and RAM will reduce the time of my photo-processing workflow, and I want the bigger SSD for convenience, so skipping the dGPU wouldn't save me any money. I just want the best option for my needs, and it seems counterintuitive that the Iris Pro might outperform the dGPU. That said, I think I am canceling my BTO until I get it figured out.
 
Maybe that is also why they put the Iris Pro next to a 750M, so people don't face the dilemma that the cheaper model is actually better for certain OS X pro workloads.

Could someone also address the following question?

The lower-end 15" features the Iris Pro. The higher-end $2600 model features the NVIDIA card -- but does it have the Iris Pro AND the 750M?

It is listed on apple.com as:
- 512GB PCIe-based flash storage
- Intel Iris Pro Graphics

Some confusion here. It seems to me that if Iris Pro is so much better for some apps, the software would choose whichever GPU gives the best result when both are included...?
 
I was hoping someone could help me understand the new (October 2013 Retina MBP) 15" models, specifically the graphics difference between the high-end and low-end configurations. I know that I will be needing (wanting/future-proofing) the 512GB SSD and the 16GB RAM... but is there a particular benefit/drawback to the Iris Pro versus the NVIDIA card?

My last computer was purchased in early 2006... hard to believe this thing is still running (poorly as of late), but I'm eager to get back to current technology. Money is an issue, though, and this new purchase needs to last. I'm okay with spending $2,500-ish on a new laptop assuming it will last me 5 or more years. (Especially considering my current investment has lasted 7+ years.)

Needless to say, I'm a little behind the times and not confident about the graphics end of things. As for the other components, I'm good with those. If there are any words of wisdom or links you might provide, I'd be happy to hear them.

Essentially, it boils down to this - is it better to use Iris Pro and line-item upgrade the lower end system, or just go with the high end system and call it a day...?

Many thanks and all the best!

Upgrading to 512GB + 16GB from the base model is AS EXPENSIVE AS the higher model WITH THE DISCRETE GRAPHICS. Why not buy the latter?
 
Upgrading to 512GB + 16GB from the base model is AS EXPENSIVE AS the higher model WITH THE DISCRETE GRAPHICS. Why not buy the latter?

  • potentially reduced graphics performance for some apps & functions (see above posts)
  • risk of sub-optimal switching resulting in reduced battery life
 
I saw that site today (first link). I don't understand the data completely, though.

There is a "SolidWorks" benchmark (Ctrl+F search for solidworks). I see "max: 15fps". This is the first thing I don't understand. I assume this doesn't mean that SolidWorks runs at a max of 15 fps lol.

Secondly, if you open the comparison chart, you will see the GTX 780M has "-8%". Does that seriously mean that the GTX 780M runs 8% slower than the Iris Pro? And is that for general performance, or only for something specific?

edit: The HD 4600 is "-5%", the HD 5000 is "-21%". How can the HD 5000 be so much worse than the 4600? Shouldn't it actually be better? I really don't understand this anymore, sorry if I'm being a complete noob.

If Iris Pro is really this amazing in SolidWorks the deal seems 95% done. Base model for me. SolidWorks > games in the end, I'm not a kid anymore haha. And I've seen Iris Pro run some fun games (even from this year) at around 30 fps on pretty good settings, so it's ok.

Yes, that's exactly right... the top-of-the-line notebook Nvidia GTX 780M card is 8% slower in SolidWorks. Unfortunately they don't have any such benchies for the 750M, but they do for the 650M, which is almost the same (benchmarks on the second link). The 650M reaches 9 fps against the Iris Pro's 15 fps, so the Iris Pro is roughly two-thirds faster in that test.

You can Google SPECviewperf to see what the test does... it's basically a geometrically very heavy solid model that gets rotated while the frame rate is recorded; it simulates roughly how heavy a model you could work with.

Some of those applications (not all) only run in Boot Camp or, in my case, Parallels. I'm running both SpaceClaim and SolidWorks in Parallels 9. I love OS X, but I also like to do my work, which most of the time is on the go.
I hope all this helps.
 
As far as I see it, the dGPU will only be beneficial for demanding games, heavy 3D work, or CUDA-assisted applications (in which case, you should get the 2012 refurb model). The iGPU might actually be better for tasks like photo and video editing, and it also packs more punch for scientific processing.

OS X supports using multiple GPUs for OpenCL, so whether the iGPU or the 750M is faster may not be as big a deal as roughly doubling the OpenCL power of the machine. You can use both GPUs at once for OpenCL in things like scientific processing or video editing; see the sketch below.
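For anyone curious what "using both GPUs at once" looks like at the API level, here is a minimal host-side sketch in C. It only enumerates the GPU devices that Apple's OpenCL platform exposes and builds one context spanning all of them; how kernels and work get split between the devices is up to the application, and the device names shown in the comments are just what you'd expect to see on this machine, not something I've verified.

Code:
/* Minimal sketch: enumerate OpenCL GPU devices on OS X and build one
 * context spanning all of them (e.g. Iris Pro + GeForce GT 750M).
 * Compile with: clang multi_gpu.c -framework OpenCL -o multi_gpu
 */
#include <OpenCL/opencl.h>
#include <stdio.h>

int main(void) {
    cl_platform_id platform;
    cl_device_id devices[8];
    cl_uint num_devices = 0;

    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 8, devices, &num_devices);
    printf("Found %u GPU device(s)\n", num_devices);

    for (cl_uint i = 0; i < num_devices; i++) {
        char name[128];
        clGetDeviceInfo(devices[i], CL_DEVICE_NAME, sizeof(name), name, NULL);
        printf("  device %u: %s\n", i, name);   /* e.g. "Iris Pro", "GeForce GT 750M" */
    }

    /* One context covering every GPU; the app can then create a command
     * queue per device and divide work between them. */
    cl_int err;
    cl_context ctx = clCreateContext(NULL, num_devices, devices, NULL, NULL, &err);
    if (err != CL_SUCCESS) {
        fprintf(stderr, "clCreateContext failed: %d\n", err);
        return 1;
    }

    clReleaseContext(ctx);
    return 0;
}

The point is that nothing about graphics switching is involved here: the compute devices are whatever the platform reports, and an app that wants both simply puts both in its context.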
 
  • potentially reduced graphics performance for some apps & functions (see above posts)
  • risk of sub-optimal switching resulting in reduced battery life

I'm pretty sure gfxCardStatus will still be around to solve the (potential) sub-optimal switching. And while it's still too early to tell, the relatively minor bump in battery life (compared to the Air's) would suggest that some of the gains from Haswell are being funneled into powering these more demanding GPUs.

All conjecture, of course...
 
Essentially, it boils down to this - is it better to use Iris Pro and line-item upgrade the lower end system, or just go with the high end system and call it a day...?

If you are really planning on waiting another 5+ years before your next upgrade, then yes, I would absolutely get the higher end system. Besides, if you're already getting the RAM and SSD upgrades, it practically becomes a no-brainer given the cost involved.

----------

  • potentially reduced graphics performance for some apps & functions (see above posts)
  • risk of sub-optimal switching resulting in reduced battery life

As noted by the above poster, gfxCardStatus is a dream and works beautifully. Should its developer stop for some reason, I'm sure someone else will carry the torch. (There were two or three similar utilities years ago, but gfxCardStatus's elegance and superior implementation won the day.)

----------

Upgrading to 512GB + 16GB from the base model is AS EXPENSIVE AS the higher model WITH THE DISCRETE GRAPHICS. Why not buy the latter?

Not quite—there's a $100 gap, which is accounted for by the CPU jump from 2.0 to 2.3. The OP didn't mention that he cared about the CPU increase (nor should he, since that's the least relevant upgrade of all).

----------

The lower-end 15" features the Iris Pro. The higher-end $2600 model features the NVIDIA card -- but does it have the Iris Pro AND the 750M?

It is listed on apple.com as:
- 512GB PCIe-based flash storage
- Intel Iris Pro Graphics

Some confusion here. It seems to me that if Iris Pro is so much better for some apps, the software would choose whichever GPU gives the best result when both are included...?

Yes, the higher-end model does have BOTH chips. It's more than a bit surprising, for cost reasons, that Apple elected to do this (I have to think Intel gave them a sweet deal, perhaps for marketing reasons; otherwise using the HD 4600 as the iGPU should have been a no-brainer).

As for your second conjecture, you're expecting way too much intelligence from the system, I'm sorry to say. As it is, dynamic switching has been far from perfect for the last several years. I expect the algorithms used will not differ substantially with these new models. In other words, I wouldn't be surprised to find that, say, an intensive OpenCL task causes a switch to the 750M even though the Iris Pro would have done better. Time will tell, but if I'm right, that's yet another argument in favor of using gfxCardStatus.
 
Color fidelity with two graphics cards?

I am a photographer looking at the two options for the new 15" Haswell MacBook Pro laptop and wonder about color fidelity in the display with the NVIDIA 750M discrete graphics card compared to when the Iris Pro 5200 graphics is being used. I mainly use Photoshop but want consistent color across all apps.

Is there any possibility of color shifts? Is the same ColorSync profile used in all cases, or can you have different profiles?

For a maxed out system the price is the same for the NVIDIA 750M and the Iris Pro 5200. I'm not quite sure I need the dGPU.
 
I am a photographer looking at the two options for the new 15" Haswell MacBook Pro laptop and wonder about color fidelity in the display with the NVIDIA 750M discrete graphics card compared to when the Iris Pro 5200 graphics is being used. I mainly use Photoshop but want consistent color across all apps.

Is there any possibility of color shifts? Is the same ColorSync profile used in all cases, or can you have different profiles?

There shouldn't be any difference in color output: the ColorSync profile is attached to the display rather than to whichever GPU happens to be driving it, so the same profile applies in both cases. To put it in different words - if there is a difference, then something is broken :)
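If you want to sanity-check that yourself, here is a small sketch in C (assuming OS X with the ApplicationServices framework) that asks for the main display's color space. Since the profile is keyed to the display, the result should come back the same whether the Iris Pro or the 750M is active at the time you run it; treat this as an illustration, not a guarantee about any particular machine.

Code:
/* Sketch: query the main display's ColorSync/ICC color space.
 * Compile with: clang display_profile.c -framework ApplicationServices -o display_profile
 */
#include <ApplicationServices/ApplicationServices.h>
#include <stdio.h>

int main(void) {
    CGDirectDisplayID display = CGMainDisplayID();
    CGColorSpaceRef space = CGDisplayCopyColorSpace(display);   /* profile is per-display */

    char buf[256] = "custom/calibrated profile (unnamed)";
    CFStringRef name = CGColorSpaceCopyName(space);             /* NULL for non-named spaces */
    if (name) {
        CFStringGetCString(name, buf, sizeof(buf), kCFStringEncodingUTF8);
        CFRelease(name);
    }
    printf("Main display color space: %s\n", buf);

    CGColorSpaceRelease(space);
    return 0;
}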
 
In other words, I wouldn't be surprised to find that, say, an intensive OpenCL task causes a switch to the 750M even though the Iris Pro would have done better. Time will tell, but if I'm right, that's yet another argument in favor of using gfxCardStatus.

OpenCL runs entirely separately from graphics card switching, IIRC. It can manually target one of the cards, or both of the cards at the same time.

That's at least how it worked on previous machines, but this is the first MacBook Pro in a while that supports OpenCL on both the iGPU and the dGPU.
 
OpenCL runs entirely separately from graphics card switching, IIRC. It can manually target one of the cards, or both of the cards at the same time.

In principle, yes, but only if the developer of the OpenCL application implemented that feature; it does not happen automatically. Furthermore, if OS X uses Nvidia's Optimus technology for switching between cards, a powered-down dGPU is indeed turned off and doesn't do anything (but it also doesn't consume any energy). I am not sure whether Optimus allows both GPUs to be fully powered on so that both can perform computations.

Apart from all of that, you can always turn off the dGPU with gfxCardStatus and "emulate" the base model for free. When the dGPU is turned off it consumes 0 watts. I would always get the dGPU if it doesn't cost me more.
 
I'm very much interested in this current dilemma as well.

Personally, I'm not a hardcore gamer, but I know I will be running Boot Camp every now and then to play some demanding games like Call of Duty, Battlefield, Tomb Raider, and some others. Mostly, I will be doing photography work with Photoshop CS6, Lightroom 5, Aperture, Photomatix, and some Nik software. Web development is also in the picture, plus the occasional video editing.

I'm mainly concerned about what Tibits mentioned up there (potentially reduced graphics performance for some apps and functions, and battery life impacted) if I go with the high-end Iris Pro + GeForce 750M model. The price difference between the iGPU and dGPU models is nonexistent with identical processor, RAM, and flash storage configurations, so it seems to me that in the end it boils down to how well these machines do in the GPU department.

After all, putting down 2K+ dollars is a significant investment. Can't wait for the in-depth benchmarks and technical reviews.
 
In principle, yes, but only if the developer of the OpenCL application implemented that feature; it does not happen automatically. Furthermore, if OS X uses Nvidia's Optimus technology for switching between cards, a powered-down dGPU is indeed turned off and doesn't do anything (but it also doesn't consume any energy). I am not sure whether Optimus allows both GPUs to be fully powered on so that both can perform computations.

Previously, Nvidia's switching did allow both cards to be on; I used to run OpenCL tasks across both a 9400M and a 9600M GT at the exact same time.

You could also do all your OpenCL work on the Iris and your OpenGL on the Nvidia. That would let each card do what it's good at if you're doing something like video editing where you use both at the same time.

True, software developers have to implement the preference that lets you pick your cards. But it wouldn't surprise me if a lot of developers pinned OpenCL to the Iris and let graphics switching take OpenGL over to the Nvidia card; I actually wouldn't be surprised if this ends up being the default behavior. A rough sketch of that kind of device selection is below.
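To illustrate what "pinning OpenCL to the Iris" could look like on the developer's side, here is a hedged sketch in C that walks the GPU device list and picks the Intel one by vendor string. This is just the mechanism; whether any shipping app actually does it this way is speculation, and the vendor-string match is a simplification I'm assuming for the example.

Code:
/* Sketch: pick the Intel iGPU (e.g. Iris Pro) as the OpenCL compute device,
 * leaving graphics switching free to hand OpenGL to the discrete card.
 * Compile with: clang pin_igpu.c -framework OpenCL -o pin_igpu
 */
#include <OpenCL/opencl.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    cl_platform_id platform;
    cl_device_id devices[8];
    cl_uint count = 0;

    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 8, devices, &count);

    cl_device_id igpu = NULL;
    for (cl_uint i = 0; i < count; i++) {
        char vendor[128];
        clGetDeviceInfo(devices[i], CL_DEVICE_VENDOR, sizeof(vendor), vendor, NULL);
        if (strstr(vendor, "Intel")) {   /* simple heuristic: match by vendor string */
            igpu = devices[i];
            break;
        }
    }
    if (!igpu) {
        fprintf(stderr, "No Intel GPU found\n");
        return 1;
    }

    /* Build a context and queue only on the iGPU; everything enqueued here
     * runs on that device regardless of which GPU is driving the screen. */
    cl_int err;
    cl_context ctx = clCreateContext(NULL, 1, &igpu, NULL, NULL, &err);
    cl_command_queue queue = clCreateCommandQueue(ctx, igpu, 0, &err);

    clReleaseCommandQueue(queue);
    clReleaseContext(ctx);
    return 0;
}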
 
In principle, yes, but only if the developer of the OpenCL application implemented that feature; it does not happen automatically. Furthermore, if OS X uses Nvidia's Optimus technology for switching between cards, a powered-down dGPU is indeed turned off and doesn't do anything (but it also doesn't consume any energy). I am not sure whether Optimus allows both GPUs to be fully powered on so that both can perform computations.

Apart from all of that, you can always turn off the dGPU with gfxCardStatus and "emulate" the base model for free. When the dGPU is turned off it consumes 0 watts. I would always get the dGPU if it doesn't cost me more.

True, but what if you're using Boot Camp to run a CAD application? Then you are stuck with the weaker (in the case of CAD) GT 750M card... gfxCardStatus is only a Mac OS X app!
 