I would be able to swallow the absence of a dGPU if one (or a combination) of the following happened:

-The new machine is significantly cheaper.
-The thermals are greatly improved.
-The battery life is improved.
 
I have to say I would like to see the rMBP use the iGPU. My rMBP can run really hot when I play games. The fan noise is so loud that I worry about the computer a little bit.
 
Compared to the 384 execution units in the 750M, that's almost competitive!

Intel execution units are 16-wide, Nvidia cores are 2-wide. Do the math. :cool:

And how many execution units there are in the GPU shouldn't matter. If we go by that logic, the 650M with 384 shader units should completely demolish HD 5200 with only 40 units.


So, I guess I'm canceling my Titan order then. It turns out that I'd better settle for GT 650, 2688 cores wouldn't matter anyway! :eek:
 
Since Intels EUs are 16 wide vector units one would have to compare 384 vs 640 (40*16) to make a somewhat fair comparison.

Nope. If you want to compare vector units, nVidia's are 2 vector units in 1 (pixel and vertex), so nVidia would be 768. That's 768 vs 640.
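
A quick back-of-envelope on that lane math, in Python - hedged: this just multiplies the unit counts and clocks being quoted in this thread (16-wide EUs, "2-wide" CUDA cores, 900MHz vs an assumed ~1.3GHz Iris turbo) and ignores scheduling, occupancy and memory entirely:

# Crude lane-count comparison as argued above (not a real performance model).
IRIS_EUS = 40            # HD 5200 execution units, as quoted in this thread
IRIS_LANES_PER_EU = 16   # "Intel EUs are 16-wide"
IRIS_TURBO_GHZ = 1.3     # assumed max turbo clock

NV_CORES = 384           # GT 650M CUDA cores
NV_LANES_PER_CORE = 2    # "2 vector units in 1", per the post above
NV_CLOCK_GHZ = 0.9       # 900MHz, the rMBP clock discussed here

iris_lanes = IRIS_EUS * IRIS_LANES_PER_EU   # 640
nv_lanes = NV_CORES * NV_LANES_PER_CORE     # 768

print(iris_lanes, iris_lanes * IRIS_TURBO_GHZ)  # 640 lanes, ~832 "lane-GHz"
print(nv_lanes, nv_lanes * NV_CLOCK_GHZ)        # 768 lanes, ~691 "lane-GHz"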

Nvidia is better at loading its units at max efficiency, but it has fewer of them and they don't even turbo as high. So by that same dumb logic, the Iris is more powerful.

No, read above.

Adding some nearby cache like that L4 is for the most part a power-saving feature. It is just the cheapest way to handle the bandwidth requirements with minimal power use. It is not supposed to be as big as a full 2GB of VRAM; it only needs to be big enough to reduce the load on the two 64-bit DDR3 channels. There is a reason smartphone GPUs worked differently: if they could keep data close, they could save a lot of power.

Smartphones share system memory with graphics memory. Some smartphones (Android) have dedicated video memory, though.

They obviously cannot use GDDR5 like the PS4, because that would kill CPU performance: GDDR is bandwidth-optimized and bad at latency. Game developers may be willing to program around that problem in games, but for everyday desktop applications it wouldn't be good.
Putting that L4 in place adds a lot of bandwidth for a third of the power cost of anything else.

No one said the CPU had to use GDDR5 for system memory. There is still a memory controller for DDR3 integrated into the CPU die, you know...

That said, they could have gone dedicated GDDR5 for Iris Pro. But they decided not to.

Reason: it would make the die size much bigger and add even more heat since they need to implement a separate memory controller just for the iGPU.

If they don't integrate it onto the CPU die, then the design of their motherboard wouldn't make sense since the motherboard would have to incorporate some sort of video memory... that's only used if you put in a CPU with any Iris GPU.

In the end, the cost (GDDR5 does cost more) and disadvantages (bigger die size, more heat, increased TDP) are just not worth it.

The thing about all this is that iGPUs simply have more options for improving power efficiency. Today it is similar to somewhat lower performance, but each generation it will be harder for Nvidia to keep up. They will be pushed more and more into the higher TDP classes to make a case for their GPUs.
The 2010 MBP was basically a 73W notebook. Now we are at 100W. With Haswell it will go down to about 60-70W. (10W screen, Turbo) To be worth it, an added 40W dGPU should deliver more than just somewhat faster performance, especially for Apple with their crap automatic switching that turns the dGPU on unnecessarily way too often.

Nope. Install gfxCardStatus on Mavericks and you'll see.

The dGPU kicks in ONLY when Photoshop or something else is running.

It's all iGPU otherwise.

That's how Mavericks increased battery life so much.

If one complains about OpenGL performance, I would really look more at OS X itself. Drivers are a big part of that, and Apple will probably try to make sure there isn't a big difference (versus the 650M) to complain about.

Intel's drivers on Linux are still far faster than Intel's drivers on OSX when it comes to OpenGL performance. And yeah, that applies even now. Unless Apple does a 180 turn, I don't expect OpenGL performance for Iris Pro on OSX to even match 650M, which is pretty much on par with its Windows counterparts now because nVidia has been stagnant with their drivers for a while.

Also Iris Pro has far too many hardware limitations (TDP limiting Turbo, memory bandwidth not high enough, only 128MB of high-bandwidth memory, etc...) for it to match 650M. Those limitations already showed up on benchmarks at high resolutions. I don't think those will change unless Apple purposefully cripple the 650M to make Iris Pro look better.

-Bandwidth is only an issue for the faster GPUs. Intel got the L4 cache to break the barrier of slow DDR3.

Note: assuming 100% efficiency between the 2 channels of DDR3 and L4 cache, Iris Pro does not even have as much bandwidth as last year's 650M with GDDR5.

And if you're in the tech industry, you'd know 100% efficiency is a pipe dream. Especially when you're considering you have 3 different buses (2 DDR3 channels and 1 L4 cache) to worry about.
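
A rough back-of-envelope on those buses, in Python - hedged: these are peak theoretical numbers using commonly quoted figures (dual-channel DDR3-1600, ~50GB/s each way for the Crystalwell eDRAM, a 128-bit GDDR5 bus on the 650M), and sustained bandwidth is lower on all of them:

# Peak theoretical bandwidth in GB/s (ballpark, not measured).
ddr3_channel = 1600e6 * 8 / 1e9      # DDR3-1600, 64-bit channel -> 12.8
ddr3_dual = 2 * ddr3_channel         # 25.6 for both channels, shared with the CPU
edram_l4 = 50.0                      # ~50 GB/s each way, commonly quoted Crystalwell figure

gddr5_650m = 4000e6 * 16 / 1e9       # 4.0Gbps effective, 128-bit bus -> 64.0
gddr5_650m_rmbp = 5000e6 * 16 / 1e9  # rMBP's higher memory clock (assumed 5Gbps) -> 80.0

print(ddr3_dual + edram_l4)          # 75.6, and only if every byte hits the right bus
print(gddr5_650m, gddr5_650m_rmbp)   # 64.0 80.0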

-Architecture changed a lot. About 5-7 years ago Intel started hiring lots of GPU experts. It takes a while to get results, but this new GPU architecture is made by people who used to work at Nvidia/AMD/others. It has nothing to do with that little GMA-like afterthought of a GPU.

Yeah, and even now, for true power saving, Intel still has to rely partially on PowerVR GPUs for their super low power Atom chips.

Their project Larrabee was cancelled because its performance was not up to par.

I don't think Intel has had any real GPU success story to boast about up to this point. And judging from benchmarks, Iris Pro may break the norm, but it is still not comparable even to last year's dGPU.

The HD 5200 will definitely not be as fast as the current-gen dedicated alternatives, but the real issue is whether that difference justifies the power difference. Nobody complained about the 6750M being crap, and the Iris Pro will definitely be better than that one.
I think this generation, a notebook would have to have a 765M or faster to show a big enough performance difference to really be worth it.

Even the 650M in the old rMBP is fast enough to display a difference.

Please keep in mind that Apple did overclock the 650M in the rMBP to match or exceed 660M performance. It's well over 20% faster than stock clocks (750MHz stock vs 950MHz on the rMBP).

If the stock 650M can leave Iris Pro behind on benchmarks, then I'm sure you'll see an even more pronounced difference with an overclocked 650M.

Granted, it's not crap, but it's still a regression in graphics performance no matter how you want to spin it. And obviously, even with an overclocked 650M, the rMBP still craves more performance.

Where power saving is concerned, as noted, Mavericks changed things significantly and the dGPU isn't kicked in unless very specific professional applications (Photoshop, AutoCAD, Maya) are open.

In that case, I'd think that for power saving, HD 4600 would make a lot more sense (considering now HD 4000 is enough to handle the desktop smoothly), and then couple it with a 765M or something for those intensive moments.
 
Nope. If you want to compare vector units, nVidia's are 2 vector units in 1 (pixel and vertex), so nVidia would be 768. That's 768 vs 640.
Yes, it still looks a lot different from the number 40, and accounting for clocks, the Nvidia at 900MHz still comes out behind.
Smartphones share system memory with graphics memory. Some smartphones (Android) have dedicated video memory, though.
The point about smartphones is that they use(d) tile-based rendering, which can reach better performance with limited bandwidth.
No one said the CPU had to use GDDR5 for system memory. There is still a memory controller for DDR3 integrated into the CPU die, you know...

That said, they could have gone dedicated GDDR5 for Iris Pro. But they decided not to.

Reason: it would make the die size much bigger and add even more heat since they need to implement a separate memory controller just for the iGPU.

If they don't integrate it onto the CPU die, then the design of their motherboard wouldn't make sense since the motherboard would have to incorporate some sort of video memory... that's only used if you put in a CPU with any Iris GPU.

In the end, the cost (GDDR5 does cost more) and disadvantages (bigger die size, more heat, increased TDP) are just not worth it.
Once you implement a separate GDDR5 memory controller, you effectively have a dedicated GPU. That would be completely pointless. Desperately trying to put that on one package would shrink the logic board, but it would probably be less efficient than a CPU + dGPU system (given that it would be an Intel dGPU, from a company that has never dealt with GDDR5 before).
Sony/AMD went all-GDDR5 for a reason: using both is just in no world a good idea. No resource sharing, which means no load balancing, and a huge die with terrible efficiency.

Nope. Install gfxCardStatus on Mavericks and you'll see.

The dGPU kicks in ONLY when Photoshop or something else is running.

It's all iGPU otherwise.

That's how Mavericks increased battery life so much.
What about when an external display is on? I use an external almost all the time when the notebook is on my desk. All I do with two screens is browse, with playing videos being the most demanding task, but the fans still have to deal with a needlessly active dedicated GPU.
Also, when you do presentations with a projector attached, a dGPU is a waste of power.
I have yet to read a review of Mavericks. If they have a whitelist approach now, that would finally be something, at least when no external display is connected.
Intel's drivers on Linux are still far faster than Intel's drivers on OSX when it comes to OpenGL performance. And yeah, that applies even now. Unless Apple does a 180 turn, I don't expect OpenGL performance for Iris Pro on OSX to even match 650M, which is pretty much on par with its Windows counterparts now because nVidia has been stagnant with their drivers for a while.
That surprises me. If they go all-Intel, I assume they'll have to do what they can. Why would they want to do it otherwise? It is not like adding a 760M would be impossible. If MSI can do it at 22mm thickness with only one fan, Apple's two fans should be able to handle it, even without dropping to a 37W CPU.
I am just not pessimistic enough.
Also Iris Pro has far too many hardware limitations (TDP limiting Turbo, memory bandwidth not high enough, only 128MB of high-bandwidth memory, etc...) for it to match 650M. Those limitations already showed up on benchmarks at high resolutions. I don't think those will change unless Apple purposefully cripple the 650M to make Iris Pro look better.
The only Iris Pro benchmarks I have seen so far are from Anandtech, and there is only one case where high settings let Iris Pro lose ground, and that is Battlefield 3. That could have all sorts of reasons. The other benchmarks look just fine.
Note: assuming 100% efficiency between the 2 channels of DDR3 and L4 cache, Iris Pro does not even have as much bandwidth as last year's 650M with GDDR5.

And if you're in the tech industry, you'd know 100% efficiency is a pipe dream. Especially when you're considering you have 3 different buses (2 DDR3 channels and 1 L4 cache) to worry about.
Who needs 100% efficiency? The only goal is to get a high enough L4 cache hit rate so the 2 DDR3 channels can handle the remaining load, meaning they aren't actually running at 100% load. If they did, chances are the processor would quite often be starved for input data.
The 650M with DDR3 memory is not all that much slower which suggests that a GPU at that performance level really doesn't need all the bandwidth it has. 50GB/s for that L4 cache should be plenty for Iris Pro.
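
To make that hit-rate argument concrete, a small sketch - hedged: made-up hit rates, and it treats the 25.6GB/s dual-channel DDR3 and 50GB/s L4 figures from above as hard ceilings, which real hardware never reaches:

# Total GPU memory traffic the system could absorb if a fraction `hit`
# of it is served from the L4 cache instead of DDR3.
DDR3_GBPS = 25.6   # dual-channel DDR3-1600, shared with the CPU
L4_GBPS = 50.0     # eDRAM figure quoted above

def sustainable_traffic(hit):
    # DDR3 caps the miss traffic, the L4 caps the hit traffic;
    # whichever ceiling is reached first limits the total.
    from_ddr3 = DDR3_GBPS / (1.0 - hit) if hit < 1.0 else float("inf")
    from_l4 = L4_GBPS / hit if hit > 0.0 else float("inf")
    return min(from_ddr3, from_l4)

for hit in (0.0, 0.3, 0.5, 0.66, 0.8):
    print(hit, round(sustainable_traffic(hit), 1))
# 0.0 -> 25.6, 0.3 -> 36.6, 0.5 -> 51.2, 0.66 -> 75.3, 0.8 -> 62.5

So the L4 doesn't need to replace VRAM; it just needs a decent hit rate so the DDR3 channels aren't the bottleneck.
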
Yeah, and even now, for true power saving, Intel still has to rely partially on PowerVR GPUs for their super low power Atom chips.

Their project Larrabee was cancelled because its performance was not up to par.

I don't think Intel has had any real GPU success story to boast about up to this point. And judging from benchmarks, Iris Pro may break the norm, but it is still not comparable even to last year's dGPU.
To be fair, Atom is really behind in just about everything: it is 32nm, has no integrated memory controller, and uses a GPU from someone else. It was more of a prototype to work out whether an x86 core can be power-efficient enough. The real smartphone Intel CPU will be Merrifield, which is 22nm with an Intel GPU and a fully integrated SoC.
Larrabee was a completely new architecture that turned out to be not as universally useful as they hoped. It was just not power-efficient to have all those tiny vector x86 cores. Nvidia's 32-wide warps are just much more efficient for most workloads. Basically, before the 32nm Atom, everybody used to say that x86 cores could not be that power efficient.
Intel now focuses on a dedicated graphics architecture and fast single-threaded performance rather than lots of cores. So they learned their lesson: their shiny x86 cores cannot do everything.
Please keep in mind that Apple did overclock the 650M in the rMBP to match or exceed 660M performance. It's well over 20% faster than stock clocks (750MHz stock vs 950MHz on the rMBP).
As mentioned, Anand actually compares against the 900MHz 650M, so I'm not sure what you're trying to point out with that. If he had compared the 650M at default clocks, Iris Pro would probably look quite favorable. Good thing he didn't, since the 650M was rarely ever used at standard clocks; almost all high-end notebooks clocked it at 900MHz or even higher.
If the stock 650M can leave Iris Pro behind on benchmarks, then I'm sure you'll see an even more pronounced difference with an overclocked 650M.
See above.
Granted, it's not crap, but it's still a regression in graphics performance no matter how you want to spin it. And obviously, even with an overclocked 650M, the rMBP still craves more performance.

Where power saving is concerned, as noted, Mavericks changed things significantly and the dGPU isn't kicked in unless very specific professional applications (Photoshop, AutoCAD, Maya) are open.

In that case, I'd think that for power saving, HD 4600 would make a lot more sense (considering now HD 4000 is enough to handle the desktop smoothly), and then couple it with a 765M or something for those intensive moments.
A 765M and a proper Optimus-like driver would definitely be better. If Apple believed in options, they would offer a low-end Intel-only model with great thermals and battery life, plus something with as much speed as can possibly be cooled. Apple usually doesn't believe in confusing its customers with too many options, and given that 4950HQ Geekbench result, it seems quite certain they think Iris Pro is enough for what people use their MacBook Pros for. The MacBook Pro has never been about the best possible performance.
 
Anand was using the rMBP, if you mean those benchmarks.

Oh yeah, my bad.

Though it's not clear if Anand's rMBP suffered from the EFI bug. I hope it doesn't.

Also, on a side note, the only times Iris Pro was able to catch up to the 650M or surpass the GT 640M were when the TDP was raised to 55W. At the stock 47W, it isn't really anything to write home about.

Yes, it still looks a lot different from the number 40, and accounting for clocks, the Nvidia at 900MHz still comes out behind.

You're just splitting hairs here.

Clocks aren't everything, but the number of processors does indicate performance... in almost every case.

The point about smartphones is that they use(d) tile-based rendering, which can reach better performance with limited bandwidth.

Tile-based rendering actually uses the CPU for the first part of the composition process to split tiles, so in essence, CPU cache is used as a buffer for the rendering.

https://en.wikipedia.org/wiki/Tiled_rendering

The technique does indeed make better use of low-latency low-bandwidth memory. But what do smartphones have to do with Apple using Iris Pro? They aren't trying to run mobile applications on their computers...
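
For anyone unfamiliar with the idea, here's a toy sketch of the binning step a tile-based renderer does - hedged: a simplified software illustration, not how PowerVR (or any real GPU) implements it; real hardware bins and shades each tile in on-chip memory so only finished tiles get written out:

# Toy tile binning: assign each triangle to the screen tiles its bounding box
# touches, so each tile can later be shaded entirely in small on-chip memory.
TILE = 32  # tile size in pixels

def bin_triangles(triangles, width, height):
    tiles_x = (width + TILE - 1) // TILE
    tiles_y = (height + TILE - 1) // TILE
    bins = {(tx, ty): [] for tx in range(tiles_x) for ty in range(tiles_y)}
    for tri in triangles:  # tri = [(x0, y0), (x1, y1), (x2, y2)] in screen space
        xs = [p[0] for p in tri]
        ys = [p[1] for p in tri]
        for tx in range(int(min(xs)) // TILE, int(max(xs)) // TILE + 1):
            for ty in range(int(min(ys)) // TILE, int(max(ys)) // TILE + 1):
                if (tx, ty) in bins:
                    bins[(tx, ty)].append(tri)
    return bins

bins = bin_triangles([[(5, 5), (60, 10), (20, 70)]], 1920, 1080)
print(sum(1 for tris in bins.values() if tris))  # tiles touched by that one triangle: 6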

Once you implement a separate GDDR5 memory controller, you effectively have a dedicated GPU. That would be completely pointless. Desperately trying to put that on one package would shrink the logic board, but it would probably be less efficient than a CPU + dGPU system (given that it would be an Intel dGPU, from a company that has never dealt with GDDR5 before).

Nope. If they "integrate" GDDR5 memory into the die, it'll just be called on-chip memory or embedded memory, much like what the Xbox 360 has (10MB of eDRAM).

"Dedicated" means the GPU is on its own... outside of the CPU.

And yeah, that's the only distinction. Otherwise, iGPU and dGPU are treated the same. The only reason why iGPUs have been treated like third-rate performers is because they are always slower than the dGPUs of the time.

Sony/AMD went all-GDDR5 for a reason: using both is just in no world a good idea. No resource sharing, which means no load balancing, and a huge die with terrible efficiency.

Huge die size or small die size makes no difference as long as the thermal profile is reasonable and the heatsink design is good enough.

Seriously, take an Intel Pentium 4 and compare it to the die size of an Iris Pro and then tell me if adding GDDR5 would be terrible efficiency.

What about when an external display is on? I use an external almost all the time when the notebook is on my desk. All I do with two screens is browse, with playing videos being the most demanding task, but the fans still have to deal with a needlessly active dedicated GPU.

The dGPU is forced on because all external display connectors are routed to the dGPU. But there's no extra heat or fan noise compared to the integrated GPU, because the dGPU is barely stressed playing videos and browsing. It's only when you start playing a video game that the fans start to rev up.

Seriously, the dGPU doesn't have to run at full speed all the time.

Also, when you do presentations with a projector attached, a dGPU is a waste of power.

If you have to plug your computer into a monitor or a projector, then I'm sure there is no reason you can't plug your power connector in as well.

And even with the dGPU, you can still reach 6-7 hours of battery life if you just turn off the internal display.

The only Iris Pro benchmarks I have seen so far are from Anandtech, and there is only one case where high settings let Iris Pro lose ground, and that is Battlefield 3. That could have all sorts of reasons. The other benchmarks look just fine.

You must be joking...

Because in Anand's benchmark, the only benchmark where Iris Pro actually matches the 650M is Grid 2.

Here's Battlefield 3:

[AnandTech Battlefield 3 benchmark charts]


And Bioshock Infinite:

[AnandTech BioShock Infinite benchmark charts]


And Sleeping Dogs:

[AnandTech Sleeping Dogs benchmark charts]


It's the same everywhere except for Grid 2, because this happens with Grid 2 for some reason:

[AnandTech Grid 2 benchmark charts]


Same as everything else, but then:

[AnandTech Grid 2 chart at higher settings]


That suggests that either nVidia's drivers are poorly optimized for the game at those settings, or Intel is omitting something in their drivers to achieve higher performance.

The 650M with DDR3 memory is not all that much slower which suggests that a GPU at that performance level really doesn't need all the bandwidth it has. 50GB/s for that L4 cache should be plenty for Iris Pro.

Really?

Benchmarks say otherwise:

3DMVantage GPU score:

DDR3 29671
GDDR5 35334
Difference +19%

3DM11 GPU score:

DDR3 2145
GDDR5 2156
Difference +0.5%

Heaven 2.5

DDR3 750
GDDR5 777
Difference +3.6%

Games:

Street Fighter 4

DDR3 136fps
GDDR5 163fps
Difference +19.4%

Resident Evil 5

DDR3 66.5fps (weaker CPU bound)
GDDR5 121.4fps
Difference +83%

Lost Planet 2

DDR3 26.6fps
GDDR5 31.8fps
Difference +19.5%

A 20% performance drop is kind of a big deal, considering that's actually close to the difference between the 650M and the HD 5200.

In fact, Resident Evil 5 showed a whopping 83% difference, suggesting that the game made heavy use of texture streaming, and DDR3 couldn't cope.

I suspect that a part of the difference between HD 5200 and 650M in Anand's benchmarks is also due to this reason.

You shouldn't underestimate memory bandwidth. Especially not for higher resolutions.
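
To put a number on the resolution point, a crude estimate - hedged: this only counts raw color + depth framebuffer traffic at an assumed overdraw of 3, and ignores texture fetches and compression, which is where most of the real bandwidth goes:

# Rough framebuffer traffic: bytes touched per frame times target fps.
def fb_traffic_gbps(width, height, fps, overdraw=3.0, bytes_per_pixel=4 + 4):
    # 4 bytes color + 4 bytes depth per pixel, each touched `overdraw` times on average
    return width * height * bytes_per_pixel * overdraw * fps / 1e9

for res in ((1366, 768), (1920, 1080), (2560, 1600)):
    print(res, round(fb_traffic_gbps(*res, fps=60), 1), "GB/s")
# (1366, 768)  -> ~1.5 GB/s
# (1920, 1080) -> ~3.0 GB/s
# (2560, 1600) -> ~5.9 GB/s, and that's before a single texture is sampled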

To be fair, Atom is really behind in just about everything: it is 32nm, has no integrated memory controller, and uses a GPU from someone else. It was more of a prototype to work out whether an x86 core can be power-efficient enough. The real smartphone Intel CPU will be Merrifield, which is 22nm with an Intel GPU and a fully integrated SoC.

Atoms came with Intel's GPUs as well. But it was painfully obvious that Intel just sucked at producing low-power GPUs for their CPUs, so the really low power Atom chips had to use PowerVR GPUs.

As mentioned, Anand actually compares against the 900MHz 650M, so I'm not sure what you're trying to point out with that. If he had compared the 650M at default clocks, Iris Pro would probably look quite favorable. Good thing he didn't, since the 650M was rarely ever used at standard clocks; almost all high-end notebooks clocked it at 900MHz or even higher.

And also, as I just noted, the benchmarks where Iris Pro came close to the 650M were the ones where its TDP was adjusted to 55W.

At 47W, it was consistently behind GT 640M. Again, see charts up there.

A 765M and a proper Optimus-like driver would definitely be better. If Apple believed in options, they would offer a low-end Intel-only model with great thermals and battery life, plus something with as much speed as can possibly be cooled. Apple usually doesn't believe in confusing its customers with too many options, and given that 4950HQ Geekbench result, it seems quite certain they think Iris Pro is enough for what people use their MacBook Pros for. The MacBook Pro has never been about the best possible performance.

True, the MacBook Pro has never been about the best possible performance. But I haven't seen any year in which they announced a regression in graphics performance across the board.
 
True, the MacBook Pro has never been about the best possible performance. But I haven't seen any year in which they announced a regression in graphics performance across the board.

It isn't a regression - it's a huge improvement over the HD 4000, which is what most rMBPs today run on most of the time. That's a day-to-day kind of improvement.

It's only a regression if you're intent on heavy gaming on a platform that isn't optimized for gaming. That's just not what most people use MBPs for. :rolleyes:
 
Clocks aren't everything, but the number of processors does indicate performance... in almost every case.

And how many execution units there are in the GPU shouldn't matter.

Now that's confusing.

Tile-based rendering actually uses the CPU for the first part of the composition process to split tiles, so in essence, CPU cache is used as a buffer for the rendering.

As far as I'm aware, in PowerVR it is a hardware implementation.


You must be joking...

Because in Anand's benchmark, the only benchmark where Iris Pro actually matches the 650M is Grid 2.


It's pretty close in Crysis Warhead:

[AnandTech Crysis: Warhead benchmark chart]



It's the same everywhere except for Grid 2, because this happens with Grid 2 for some reason:

[AnandTech Grid 2 benchmark charts]

Same as everything else, but then:

[AnandTech Grid 2 chart at higher settings]

That suggests that either nVidia's drivers are poorly optimized for the game at those settings, or Intel is omitting something in their drivers to achieve higher performance.


No need to "suggest" anything. Grid 2 is optimized for Haswell and Intel's Gen7 graphics.
Also, when it's running on an Intel GPU, it uses some special effects that are not possible on AMD/Nvidia hardware, so a direct comparison is simply not feasible. But it shows what the architecture is capable of.

However, are you saying that GPU manufacturers actually keep an eye on what is out there and adjust their drivers accordingly? So true! Then you should also know that game developers do the same - they tweak with the underlying architecture in mind. Now it becomes interesting: looking at the HD 4000 scores, do you think the Iris predecessor was a worthy target for mid-to-high settings optimization?


Really?

Benchmarks say otherwise:

3DMVantage GPU score:

DDR3 29671
GDDR5 35334
Difference +19%

3DM11 GPU score:

DDR3 2145
GDDR5 2156
Difference +0.5%

Heaven 2.5

DDR3 750
GDDR5 777
Difference +3.6%

Your bad for bringing synthetics:

[AnandTech benchmark charts]


And before you say that's with the 55W part - it's still far less than a discrete GPU with GDDR5 memory.
 
No.

I am with icedragon on this one for sure. They may as well bring back the MacBook line; I consider these new Retina gadgets MacBooks anyway. They have a really, really, really long way still to go before they earn the PRO part, a title the 17" MBP still holds.

No matter what anyone wants to think, or however they justify chasing after the latest and greatest trends out there... the 17" MBP is the only MacBook Pro because it always performs well and most efficiently. The Retina MacBook gadgets are a great addition for super fast but super small tasks - for sure - but not something to depend on for performing huge tasks in any timely fashion.
 
I am with icedragon on this one for sure. They may as well bring back the MacBook line; I consider these new Retina gadgets MacBooks anyway. They have a really, really, really long way still to go before they earn the PRO part, a title the 17" MBP still holds.

No matter what anyone wants to think, or however they justify chasing after the latest and greatest trends out there... the 17" MBP is the only MacBook Pro because it always performs well and most efficiently. The Retina MacBook gadgets are a great addition for super fast but super small tasks - for sure - but not something to depend on for performing huge tasks in any timely fashion.

Wow, some of the most unqualified asinine comments I've heard yet on this thread. Unsubscribing.
 
I am with icedragon on this one for sure. They may as well bring back the MacBook line; I consider these new Retina gadgets MacBooks anyway. They have a really, really, really long way still to go before they earn the PRO part, a title the 17" MBP still holds.

No matter what anyone wants to think, or however they justify chasing after the latest and greatest trends out there... the 17" MBP is the only MacBook Pro because it always performs well and most efficiently. The Retina MacBook gadgets are a great addition for super fast but super small tasks - for sure - but not something to depend on for performing huge tasks in any timely fashion.

The maximum scaled resolution of the 15" rMBP is 1920x1200, just like the 17" MBP.
 
Ya, you too.

Wow, some of the most unqualified asinine comments I've heard yet on this thread. Unsubscribing.

Mutual over here. Like you contribute quality content / perspective yourself. Ignored/Blocked.
 
So, the "Pro" label is for dGPU users...:rolleyes:

So you would pay over two grand (we're going by dollars here not euros so over in Spain it would be even more) for a quad-core processor and just integrated graphics? For me, no thanks.
 
So you would pay over two grand (we're going by dollars here not euros so over in Spain it would be even more) for a quad-core processor and just integrated graphics? For me, no thanks.

I am in the UK and looking to get the 15" Haswell equivalent of the 2.7/16/512, which is £2300. If they release it at roughly the same price, it is just a piss-take if it only has an iGPU.
 
So you would pay over two grand (we're going by dollars here not euros so over in Spain it would be even more) for a quad-core processor and just integrated graphics? For me, no thanks.

Yeah, I know how you feel! That's why I'm still keeping my good old PIII PC. It has a dedicated memory controller, and to me, the integrated one in those new CPUs is just a toy. But I may change my mind... I am - maybe, just maybe - considering getting one of those shiny new Haswell MacBooks, but only, and I mean ONLY, if Apple includes a discrete GPU with fewer flops than Intel's, 'cause everybody knows that more flops are for *******! :D
 
Honestly, with the way they're moving with the Mac Pro, it wouldn't be surprising if they created an ultra-thin laptop with external graphics capabilities. It's not too far out of reach. I use that approach with the current 13" rMBP for games.
 
For the past few years Intel has been threatening to make discrete GPUs obsolete with its march towards higher performing integrated GPUs. Given what we know about Iris Pro today, I'd say NVIDIA is fairly safe. The highest performing implementation of NVIDIA's GeForce GT 650M remains appreciably quicker than Iris Pro 5200 on average. Intel does catch up in some areas, but that's by no means the norm. NVIDIA's recently announced GT 750M should increase the margin a bit as well. Haswell doesn't pose any imminent threat to NVIDIA's position in traditional gaming notebooks. OpenCL performance is excellent, which is surprising given how little public attention Intel has given to the standard from a GPU perspective.

Nuff said.

http://www.anandtech.com/show/6993/intel-iris-pro-5200-graphics-review-core-i74950hq-tested/20
 
A MacBook Pro without a dGPU is the same as an overpriced MS ultrabook, or a netbook for that matter. Don't settle for less, and don't give your hard-earned cash to a company that prances around demand and makes you pay for a totally different system that leaves a bigger hole in your wallet, people!
 
Haswell doesn't pose any imminent threat to NVIDIA's position in traditional gaming notebooks.

QFT - with the emphasis on the point that everyone is missing. The rMBP is NOT a traditional gaming notebook. ;)

No, but the point is that Iris Pro is not nearly as fast as the current 650M. A change to Iris Pro would mean a step backwards in the graphics department.

This is a pro notebook, not a "I need something to update my MySpace" notebook.
 