Oh dear, this thread has a lot of info going around, which can get confusing. Let's see if I can get straight answers to a couple of specific questions.

Does the top end model 15" use a less powerful Iris Pro compared to the base model?

Can I use that graphic card switcher program to keep the dGPU turned off the majority of the time, only switching it on when I want it on?

In a way, I see no reason to get the base model MacBook Pro if the top-end model uses the same iGPU as the base model but includes a free dGPU that you can keep off at will (it would increase the resale value, obviously). The only problem would be the dGPU running constantly in Windows, but I believe it is better to run Windows as a VM in Parallels anyway; you get better battery life than running Windows natively.
 
Does the top end model 15" use a less powerful Iris Pro compared to the base model?
No, it's the same Iris Pro. Apple is basically giving you a "free" GeForce on a similarly specced rMBP (2.3GHz, 16GB RAM, 512GB SSD).


Can I use that graphic card switcher program to keep the dGPU turned off the majority of the time, only switching it on when I want it on?
You could, but I've heard there are some limitations (you'll have to read up on the software's limitations), and it only works in OS X. There's no Windows version, and as of now there's no way to turn on the iGPU in Windows because Apple disables it in Boot Camp itself.

In a way, I see no reason to get the base model MacBook Pro if the top-end model uses the same iGPU as the base model but includes a free dGPU that you can keep off at will (it would increase the resale value, obviously). The only problem would be the dGPU running constantly in Windows, but I believe it is better to run Windows as a VM in Parallels anyway; you get better battery life than running Windows natively.
Not really. A VM works well in some situations, but native Windows will be leaps and bounds better than a VM when you need the performance.

I guess the problem is that the price is the same, and some people would want their notebook to run cooler or have slightly longer battery life (I'm guessing they're hoping for an additional hour of battery life), or the Iris Pro would perform certain tasks faster than the dGPU in Windows.
 
Good discussion

I'm on the fence as well. Online, a 2.3GHz/16/512 Iris Pro model costs $2599 and the 2.3GHz/16/512/Nvidia model costs $2599! I can probably walk out the door that day with the dGPU model, but would have to order an upgraded base model.

Two things to consider: the 128MB eDRAM of the Iris Pro is supposed to be usable by the CPU when it's not being used by the iGPU, and you should be able to use the iGPU for the display while using the dGPU as a CUDA coprocessor. So the dGPU option might be more flexible in the long run.
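For what it's worth, the coprocessor idea is easy to picture with a few lines of host-side CUDA. This is just an illustrative sketch, assuming the NVIDIA driver and CUDA toolkit are installed, and it isn't tied to any particular app: the display can stay on whichever GPU the OS has active while compute work is pointed at the GeForce.

Code:
// Illustrative only: host-side CUDA (compiled with nvcc, CUDA toolkit assumed).
// The Intel iGPU never appears here, since CUDA only enumerates NVIDIA hardware.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("No CUDA-capable device found.\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        std::printf("Device %d: %s, %zu MB VRAM\n",
                    i, prop.name, prop.totalGlobalMem >> 20);
    }
    cudaSetDevice(0);  // target the GeForce; kernels launched afterwards run there,
                       // regardless of which GPU is driving the display
    return 0;
}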

Is it possible to simply turn the dGPU off and completely power it down all the time, and turn the top end model into the low end model? Then you get all the benefits of the base with the ability to convert up when you need it.

I use Bumblebee on Linux to completely power off Optimus dGPUs, so it should be possible.
 
Is it possible to simply turn the dGPU off and completely power it down all the time, and turn the top end model into the low end model? Then you get all the benefits of the base with the ability to convert up when you need it.

Yes, with gfxCardStatus, you can. I think doing so would be crazy, since the dynamic switching works reasonably well, but you could certainly do this.

Too many people are getting caught up in small anecdotal complaints about dynamic switching not being optimal and losing sight of the big picture.

----------

World of Warcraft runs significantly worse on the 750M model. How is that possible? And why is that old game being used?

Because it's about social networking, as well as the fact that people get invested in their characters and achievements. The quality of the graphics isn't all that relevant, aside from FPS considerations in arena.
 
No, it's the same Iris Pro. Apple is basically giving you a "free" GeForce on a similarly specced rMBP (2.3GHz, 16GB RAM, 512GB SSD).



You could, but I've heard there are some limitations (you'll have to read up on the software's limitations), and it only works in OS X. There's no Windows version, and as of now there's no way to turn on the iGPU in Windows because Apple disables it in Boot Camp itself.


Not really. A VM works well in some situations, but native Windows will be leaps and bounds better than a VM when you need the performance.

I guess the problem is that the price is the same, and some people would want their notebook to run cooler or have slightly longer battery life (I'm guessing they're hoping for an additional hour of battery life), or the Iris Pro would perform certain tasks faster than the dGPU in Windows.

Let me rephrase it for my own personal situation then:

I won't be using it for heavy gaming, maybe just some Terraria, Minecraft or similar games on Steam. I should be able to expect reasonable performance from those in Parallels, shouldn't I?

If I really needed the extra horsepower of the dGPU, I could boot into Boot Camp and go from there; I'd have a power outlet handy anyway.

As long as the Iris Pro is the SAME in both models, I think it makes more sense to go for the dGPU model at the same cost. If you're going to run Windows in Boot Camp, you're probably not that worried about battery life anyway. Does that sound better?
 
I'm interested in some enlightenment here as well. My understanding with the previous-gen rMBP was that the dGPU was beneficial to applications such as Aperture and Photoshop. There have been some posts about the new rMBP saying the dGPU will actually perform worse than Iris Pro in similar applications. This doesn't make sense to me. Price is not an issue, as I am ordering the maxed-out model anyway; speed of my workflow is my primary (only) concern. I do not game or do 3D rendering. Is there truth to the claim that the Iris Pro-only model would actually be faster for photos (processing multiple large RAW files) and some video? I still have time to cancel and reorder my BTO.

This makes me sad :( Almost none of your post is correct, and it's because so much bad information gets passed around. The things you mention either don't use the GPU or use it in a trivial fashion. In those cases, just buy whatever is cheapest, assuming it meets your other needs.

I'll give you an example. 3D rendering doesn't use the GPU under most circumstances. A few renderers in the wild are specifically GPU-based; the rest don't use it, or use it only for previews. Most of those are still CUDA, in which case your only option is Nvidia; Intel won't do it. A problem with rendering on GPUs is memory: a lot of the CG images you see rely on gigabytes of textures, which won't fit into the hardware framebuffer.

GPU use is trivial in Photoshop. It looks great in benchmarks, but making gigantic swipes in Liquify on a large image as a test is highly illogical. It takes an exorbitant amount of time to calculate without the massively parallel architecture because of bad mesh topology, not because it represents a realistic usage pattern.
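To make the memory point concrete, here's a rough, hypothetical sketch of the kind of check a GPU render path has to make before it can commit; the 6GB texture working set is an arbitrary number for illustration, not taken from any real renderer.

Code:
// Hypothetical sketch: decide whether a GPU render path is even feasible.
// The 6 GB "texture working set" is an arbitrary number for illustration.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    size_t freeBytes = 0, totalBytes = 0;
    if (cudaMemGetInfo(&freeBytes, &totalBytes) != cudaSuccess) {
        std::printf("No usable CUDA device; fall back to the CPU renderer.\n");
        return 1;
    }
    std::printf("VRAM: %zu MB free of %zu MB total\n", freeBytes >> 20, totalBytes >> 20);

    const size_t textureWorkingSet = 6ULL << 30;  // 6 GB of scene textures (assumed)
    if (textureWorkingSet > freeBytes) {
        // A 2 GB card like the 750M lands here immediately; the scene has to be
        // tiled, textures paged, or the whole job pushed back to the CPU renderer.
        std::printf("Textures (%zu MB) exceed free VRAM.\n", textureWorkingSet >> 20);
    }
    return 0;
}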

I will add that there are cases for a strong GPU if you use 3D apps, though in most of those situations the 750M will be woefully underpowered anyway. If you are animating or dealing with 3D paint programs, GPUs make a huge difference. Large framebuffers help with 3D paint work; 2GB isn't that large for the price of an rMBP, although that's a notebook issue. With animation, the ability to compute point positions on an animatable mesh that will play back at 24fps without constantly resorting to playblasts is a good thing.

Last thing: do not get sucked in by the silly marketing. Next year there will be a new debate on GPUs, as they currently see far bigger generational jumps than x86 cores.

Argh, I keep having to edit. To append the statements on Photoshop, Aperture, and some of the other semi-mass-market media editing apps: lack of bugs and feature support can be important. Some Intel generations did have issues, but they were typically a matter of what was actually supported rather than raw performance when it came to lighter tasks.


In a way, I see no reason to get the base model MacBook Pro if the top-end model uses the same iGPU as the base model but includes a free dGPU that you can keep off at will (it would increase the resale value, obviously). The only problem would be the dGPU running constantly in Windows, but I believe it is better to run Windows as a VM in Parallels anyway; you get better battery life than running Windows natively.

It depends on how much RAM you have and what you do in Windows.
 
That isn't true at all with Aperture. It's BARELY true with Photoshop.

What's not true at all? That the dGPU is beneficial to applications such as Aperture and Photoshop, or that the Iris Pro-only model would actually be faster for photos (processing multiple large RAW files) and some video? That wasn't clear to me from reading your post.
 
What's not true at all? That the dGPU is beneficial to applications such as Aperture and Photoshop, or that the Iris Pro-only model would actually be faster for photos (processing multiple large RAW files) and some video? That wasn't clear to me from reading your post.

Blah I reworded a bit (several times). I still probably could have written it better; it came through too much that I'm grumpy today. I've looked for evidence of a speedup in raw photo processing through Aperture and have yet to find any. The GPU isn't used that way in Photoshop through Camera Raw either. It is used for Iris Blur, Liquify rendering, and a few other things; I can't remember if they added it to Puppet Warp. The OpenGL drawing doesn't have much of an effect, nor should it: it involves a lot of pixels, yet not much in the way of shading calculations. There are no normal vectors involved or anything that really tends to take a long time, since you're talking about planar raster images without lighting calculations. Barefeats and a couple of other sites tend to show comparisons of those features, yet they should never be that slow in real-world use. My comment on topology referred to Liquify.

The rendering thing always irks me because everyone assumes it works that way. Most offline renderers don't. More of them would if GPUs contained insane amounts of RAM. Older techniques, like file formats that can be read directly into swap space, don't work there.
 

It depends on how much RAM you have and what you do in Windows.

Well, being the top-spec model, expect 16GB of RAM. :)
 
Blah I reworded a bit (several times).

Thanks for the clarification. I don't mean to be pedantic about this topic, but I am hung up on understanding the truth before ordering.


One unrelated question, as you seem to be knowledgeable on the topic: is the dGPU beneficial for driving a Thunderbolt display (or two) in a multiple-display environment?
 
Thanks for the clarification. I don't mean to be pedantic about this topic, but I am hung up on understanding the truth before ordering.


One unrelated question, as you seem to be knowledgeable on the topic: is the dGPU beneficial for driving a Thunderbolt display (or two) in a multiple-display environment?

Ah, that is a good question, and I wish I had a good answer. It isn't always a matter of raw performance; it's typically one of supported features and lack of bugs. There was a discrepancy on the 2011 models, and a bug early on with the 2012 integrated graphics. I would have to look up whether the Iris Pro chipset supports two Thunderbolt displays, but I would be surprised if either of the 15" models lacked a feature like that. The Iris Pro used in the 15" is miles ahead of the HD 3000 from 2011, but if you want to be cautious, wait for user feedback; it can reveal latent bugs at times.
 
Yes, with gfxCardStatus, you can. I think doing so would be crazy, since the dynamic switching works reasonably well, but you could certainly do this.

If the 5200 and 750 are roughly the same speed, there isn't much need to run the 750, so dynamic switching eats battery for no reason. But since 16/512GB models of either are the same price it makes no sense not to get the one with the dGPU.

If the benchmarks are mistaken and the 5200 doesn't perform as well as the 750 in the real world, then obviously, the dGPU model and dynamic switching are the best way to go.
 
If the 5200 and 750 are roughly the same speed, there isn't much need to run the 750, so dynamic switching eats battery for no reason. But since 16/512GB models of either are the same price it makes no sense not to get the one with the dGPU.

If the benchmarks are mistaken and the 5200 doesn't perform as well as the 750 in the real world, then obviously, the dGPU model and dynamic switching are the best way to go.

It can be inferred that the 750M will perform better, due to the simple fact that it's offered as an upgrade option.
 
A large and often overlooked part of displaying very large textures/images is memory bandwidth.

The 128MB cache in the Iris Pro makes a huge difference in performance.

Jittery, jumpy movement and scrolling with tear lines is often due to hitting the bandwidth limits of GPU memory. These issues can also be the result of hitting the bandwidth limits of the bus connecting the GPU to RAM or to processor caches, depending on the architecture.

You can avoid these issues by increasing memory bandwidth or increasing the VRAM to a size that is much less likely to require paging things in and out.
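As a rough illustration of the numbers involved (all figures below are approximate and assumed, not measured): just refreshing the 15" Retina panel already costs over a gigabyte per second of bandwidth, and real scrolling and compositing multiply that several times over.

Code:
// Back-of-the-envelope only; every figure below is approximate/assumed.
#include <cstdio>

int main() {
    const double pixels        = 2880.0 * 1800.0;          // 15" Retina panel
    const double bytesPerFrame = pixels * 4;                // 32-bit color, ~20.7 MB
    const double scanoutGBps   = bytesPerFrame * 60 / 1e9;  // ~1.24 GB/s just to refresh

    const double ddr3GBps  = 25.6;  // dual-channel DDR3L-1600, shared with the CPU (approx.)
    const double gddr5GBps = 80.0;  // GT 750M's dedicated GDDR5 (approx.)

    std::printf("Scanout alone: %.2f GB/s of a %.1f GB/s (DDR3) or %.1f GB/s (GDDR5) budget\n",
                scanoutGBps, ddr3GBps, gddr5GBps);
    // Each composited layer, scroll redraw or texture upload multiplies the scanout
    // figure, which is how large images end up bandwidth-bound rather than compute-bound.
    return 0;
}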

Given the 128MB cache in the Iris Pro, and how Mavericks can vary the size of the VRAM allotted to it depending on requirements, I would judge the Iris Pro to be more than enough for photo editing, graphic design and video editing.
 
Anybody else experiencing smoother scrolling in Chrome with gfxCardStatus set to "integrated only"? Or is it just my imagination?
 
Two things to consider: the 128MB eDRAM of the Iris Pro is supposed to be usable by the CPU when it's not being used by the iGPU, and you should be able to use the iGPU for the display while using the dGPU as a CUDA coprocessor.

Could you expound on this? Or anyone else chime in? For the past year it's been nothing but Iris Pro vs. dGPU ad nauseam.

I want to find out more about how the Iris Pro (or just its eDRAM) works with the dGPU. I haven't read anything about this yet, and it seems to be a significant performance component unique to the iGPU+dGPU models.
 
Which should I buy?

I'm a college student and I already bought a mid-2012 MBP, but I lost it :( Now that the new late-2013 rMBP is out, I'm willing to buy it for my daily college work, such as Photoshop, Final Cut, After Effects, etc. I'm a gamer too: I had Boot Camp installed on my lost MBP and played demanding games such as BF3, CoD: Black Ops 2 and Crysis 3, though not at ultra settings. I don't use AA, FXAA or anything else; only texture, shadow and resolution are set to high.
My question is: should I buy the rMBP, some other gaming laptop, or the high-end rMBP that comes with the Nvidia 750M GPU?
Sorry for my typos, I'm Indonesian :D
Thanks in advance
 
I'm looking forward to getting an answer to that question from an experienced photographer :) I'm in exactly the same situation!

I'm a part-time photographer too (PS: I'm just 18, my friend), so take my words with a grain of salt.

I don't need the best graphics, because I don't do video editing with Adobe AE or FCP X either. I don't even use iMovie.

All I use is Lightroom and Photoshop CS6, and they run just fine on my Haswell MacBook Air with Intel HD 5000.

But if you want to play games and drive 4K displays like a boss, there's no harm in getting the GT 750M with its 2GB of VRAM.
 