2016 should be pretty big for Nvidia and AMD (if they last): a 20nm die shrink, stacked memory, a much-awaited solid update even without a forced Skylake delay. Skylake itself will perhaps bring a boring clock speed bump, but nothing to write home about.

CPU improvements aren't that special anymore and Intel's main goal for 14nm is to reduce power consumption and enter the small devices market with cost-effective SoC solutions. Everything else is kind of tangential to their real motives.
 
Nvidia and AMD will be in business in 2016 if Intel lets them be. It may sound harsh, but it is true. I mean, if the Intel Iris Pro series is good enough and meets Apple's standards, there may be no room for either of them.

The key here is how much better the new Intel Iris Pro is over the last generation, and whether Apple is ready to let go of the dGPU on the rMBP. Power savings, efficiency and reliability are characteristics Apple wants in every product, but even more so in laptops. Honestly, the dGPU is holding them back, especially in the mobile department.

I am saying all this as a pro-dGPU user. I like having a dGPU for my casual gaming on Windows; the current iGPU is not good enough for me.
 
Intel has confirmed Skylake will be released between July and December this year:

http://seekingalpha.com/article/282...-earnings-call-transcript?all=true&find=intel

I used to work for AMD during the MHz wars, and we were forever being criticized for not shipping products on time. I found this odd because Intel hasn't shipped a processor on time (other than on paper) since at least 1995. Maybe 20 years is the charm, but I will stick to the trend.

Release dates, like all prognostications, are highly unreliable and should not be trusted. Why believe a future promised to you after it has been approved by both the marketing and legal departments? After all, if you can trust lawyers and salesmen, who exactly isn't trustworthy in your world?
 
I'm on an Ivy Bridge retina MacBook Pro, and my next MBP is definitely going to be based on Skylake!

I really want Thunderbolt 3...

Would love them to release a 17" version too...

Just curious: what are you using that you need Thunderbolt 3 for?
 
Are you going to elaborate?

1. There is plenty of business for Nvidia outside of Apple, even if Apple decides Iris Pro (whatever version) is good enough. But driving multiple 4K displays will further tax the video side, so integrated graphics will still not match dedicated.

2. AMD makes the graphics for the Xbox, PS4, and Wii U, so I'm pretty sure they will be around a bit longer in some form or other. Those consoles sell more graphics parts than Apple does.

3. The death of AMD is predicted routinely every January. I am sure they will be right EVENTUALLY. Even stars die.

4. You are still counting on Intel to be on time, something it isn't very good at with major chip releases.

5. 14nm processing is about power consumption more than anything (for end users - it is also a cost savings in a developed process for the manufacturer). We have been parked between 2.5 and 4GHz for 8 years now. I don't expect a new MHz war anytime soon. The biggest thing Intel will have to offer with Skylake is battery life, and AMD and Nvidia are both working on that too. Process shrinks aren't the only ticket to saving power.
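To put a rough number on that last point, here is a minimal sketch using the textbook dynamic-power relation (P roughly proportional to C * V^2 * f); the 30% capacitance and 10% voltage reductions below are made-up illustrative figures, not Intel data:

```c
/* First-order CMOS dynamic power: P ~ C * V^2 * f.
 * The process-shrink numbers below are invented for illustration only. */
#include <stdio.h>

int main(void) {
    double c = 1.0, v = 1.0, f = 1.0;               /* normalized older-process baseline */
    double p_base = c * v * v * f;

    /* hypothetical shrink: ~30% less switched capacitance, ~10% lower voltage,
       identical clock, so performance stays flat while power drops            */
    double p_shrink = (0.7 * c) * (0.9 * v) * (0.9 * v) * f;

    printf("relative dynamic power: %.2f -> %.2f\n", p_base, p_shrink);  /* 1.00 -> 0.57 */
    return 0;
}
```

The point being that a shrink plus a small voltage drop buys far more battery life than holding or chasing clocks ever would.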
 
Seriously? Please read my post again. I am talking about the possibility that, in this generation, Apple may let go of the dGPU (Nvidia or AMD) in favour of Intel Iris Pro on the rMBP. As for everything else you suggest I am implying, I have to tell you that you have a very vivid imagination.

1. I never mentioned any other business Nvidia or AMD may have outside Apple; I explicitly stuck with the rMBP. What I am trying to say is that if the next generation of the Intel Iris Pro series turns out to be good enough for Apple, there will be no room for AMD or Nvidia on the rMBP. I have seen slides where Intel claims even the lower Intel Iris Pro 6000 models may be able to drive 4K displays.

2. I never mentioned Xboxes or PlayStations, so there is nothing to say here.

3. What do you want me to say? Competition is good for the market. I wish them good luck.

4. I agree with that, except I don't think you have to wait until Skylake to see some improvements in battery life. Broadwell will bring those to the table; Skylake might improve things a bit, but it certainly isn't going to be a significant improvement over Broadwell, since they both share the same 14nm process. As far as I know, Skylake will support DDR4. That's something new.
 
Seriously? Please read my post again. I am talking about the possibility that, in this generation, Apple may let go of the dGPU (Nvidia or AMD) in favour of Intel Iris Pro on the rMBP. As for everything else you suggest I am implying, I have to tell you that you have a very vivid imagination.

1. I never mentioned any other business Nvidia or AMD may have outside Apple; I explicitly stuck with the rMBP. What I am trying to say is that if the next generation of the Intel Iris Pro series turns out to be good enough for Apple, there will be no room for AMD or Nvidia on the rMBP. I have seen slides where Intel claims even the lower Intel Iris Pro 6000 models may be able to drive 4K displays.

but you said

Nvidia and AMD will be in business in 2016 if Intel lets them be. It may sound harsh, but it is true. I mean, if the Intel Iris Pro series is good enough and meets Apple's standards, there may be no room for either of them.

That very much talks about their ability to stay in business AT ALL, not just business with Apple. Even if Intel is good enough this go-round AND comes out on time, what happens if AMD releases the HAL 10000 the year after and blows Intel away? What if CUDA becomes so ridiculously fast on Nvidia that doing FP math any other way just makes no sense? Look at Octane Render for an example of some seriously fast rendering brought to you by a GPU. Intel doesn't come close to matching Nvidia on something like that yet.


2. I never mentioned Xboxes or PlayStations, so there is nothing to say here.

Well, yes, you did, when you talked about AMD being out of business if Intel so chooses. Also, the FTC might have a word or two about Intel getting too harsh. Monopoly regulations in the States and all.

(just going to skip 3 - nothing of note there)


4. I agree with that, except I don't think you have to wait until Skylake to see some improvements in battery life. Broadwell will bring those to the table; Skylake might improve things a bit, but it certainly isn't going to be a significant improvement over Broadwell, since they both share the same 14nm process. As far as I know, Skylake will support DDR4. That's something new.

You are speculating. 14nm isn't the only control over battery life. It is just the easiest one to explain to the public. Transistor design, conductors, insulators, local interconnects, implants, and a TON of other things will affect battery life. Yes, Broadwell will bring some of that, because that is something everyone is working on due to the mobile sector.

But that also means that Skylake will be a more mature 14nm process with better design rules and manufacturing. All of this equally applies to Apple, AMD, Nvidia, and Samsung.

As low as the CPU draw is getting, the savings in the rest of the computer are really starting to loom large. How much does that WiFi card draw? How much does the RAM need? The fans? The display?

If Apple (or whoever) saves enough power on the other systems, maybe they use AMD processors for better onboard graphics and save power while running games or other video apps, while breaking even at idle? Or maybe it is a cost deal. Or maybe Apple just wants leverage against Intel being a single supplier (which is how AMD got into x86 in the first place).
 

I'll elaborate a bit more on this since you seem to understand what the other person glossed over.

AMD is struggling financially, and thus their long-term future is always foggy, but Intel is not going to take either one of them out anytime soon. Intel can't in the graphics department, but it might give Nvidia more competition in the HPC market with Xeon Phi and chip away at the bottom segment, which really wouldn't have been Nvidia/AMD's segment anyway. Hard to say if it fares well against Tegra, but time will tell. "Good enough for Apple to drop them entirely" is not the reason for either one's theoretical demise. AMD is simply not very profitable, and the other is moving more into HPC, mobile, and cars. Differentiation is nothing new.

The IGP will still not cut it for the most intensive tasks; it's fine for casual to intermediate use, but Intel and Apple aren't writing great drivers. Intel has had this problem for years, and artificial segmentation creates a crappy, inconsistent experience. Apple paired the Mac Pro with AMD, but the professional market is mostly Nvidia's, and I feel they fell short by pinning too much on OpenCL and not enough on actual stability and custom-tailored performance. Tangential points, but for such control, these two companies put up mediocre efforts.

Iris Pro is pretty expensive. For the performance compromise, it really makes no sense unless one has to meet a certain thermal design threshold. The price difference is around what a mid-range dedicated GPU would cost, which, although it introduces another point of failure, will definitely boost overall performance as screen density quickly rises. OpenCL and such is nice, but for most, you get more value and performance from a mid-range dedicated chip and the experienced software teams working on its drivers. It doesn't matter that Nvidia pushes CUDA--they support it and will back it up.

The GTX 980M performs at around half the level of the desktop 980, from a mobile chip. Now, that's using a lot of power, but the architectures have been tweaked, and Intel isn't going to match that with Skylake or anything else for a while. It'll be Iris Pro in the base models, I am sure, but for high-end configurations there will be a dedicated offering. Die shrinks and stacked memory could keep the same general TDP range but scale up results.

Intel's problem is getting people to actually upgrade. Desktop enthusiasts can get by just fine with a Sandy Bridge. For workstations, most will drop in a dedicated card. The low-hanging fruit is picked, and the needs of a casual user can easily be met by tablets and smaller all-in-one devices. That's the market Intel sees, and despite the process and technical advantage, it hasn't been a great ride for them. So this is what 14nm is really for: having a process small enough to compete with ARM, TSMC, and Samsung on SoC devices. Processor performance doesn't increase the way it used to, but for the majority of users that hasn't been an issue for years, neither the clock speed nor the number of cores. Does the average person count the seconds it takes to render something? No, but there is a noticeable gap between 6 and 18 hours of battery life.

Apple would be unwise to drop Nvidia or AMD. If they did, they would really be assuming professional users are stupid chumps with nowhere else to turn. Fatal thinking there. There's really nothing that needed to be said about graphics, since Intel can coexist with them just fine, but since it was brought up...
 
So in the Skylake timeframe (H2 2015 to early 2016 say) - what are the likeliest options for a discrete GPU, and how is it likely to compare to the current NVIDIA GeForce GT 750M with 2GB of GDDR5 in the highest end 15"?
 
It's hard to tell because, as with the 5K iMac, Apple used a Tonga chip that hadn't been released before. Who knows what deal they can come up with behind the curtains with AMD. AMD has a compute/OpenCL advantage, not to mention their willingness to settle for any sort of secure cash flow.

Nvidia has announced some of their Maxwell chips, and the one that seems to fit is the GTX 950M, although I expect a lower-clocked version due to heat constraints. It should be power-efficient though, with nice perf/watt.

http://www.techpowerup.com/gpudb/2527/geforce-gt-750m-mac-edition.html
http://www.techpowerup.com/gpudb/2642/geforce-gtx-950m.html

Probably something right in between. A 25-40% increase from before? Hovering around the 900-1000 GFLOPS mark, or not too far from it. Take everything with several grains of salt. If it slips into 2016, both NV and AMD should have 20nm chips by then and another lineup to pick from.
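For the back-of-the-envelope math behind that range, here is a small sketch assuming the usual shaders x 2 FLOPs-per-clock peak estimate and approximate public specs for those two chips (ballpark figures, not official numbers):

```c
/* Peak FP32 throughput estimate: GFLOPS ~ shader count * 2 (FMA) * clock in GHz.
 * Shader counts and clocks below are approximate public specs, for illustration only. */
#include <stdio.h>

static double gflops(int shaders, double clock_ghz) {
    return shaders * 2.0 * clock_ghz;   /* one fused multiply-add counts as 2 FLOPs */
}

int main(void) {
    double gt750m  = gflops(384, 0.925);   /* GT 750M Mac Edition: ~384 shaders @ ~925 MHz */
    double gtx950m = gflops(640, 0.914);   /* GTX 950M: ~640 shaders @ ~914 MHz base       */

    printf("GT 750M  : ~%.0f GFLOPS\n", gt750m);                 /* ~710  */
    printf("GTX 950M : ~%.0f GFLOPS\n", gtx950m);                /* ~1170 */
    printf("750M +30%%: ~%.0f GFLOPS\n", gt750m * 1.3);          /* ~924, i.e. the 900-1000 range */
    return 0;
}
```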

But who knows, they could go for the Skylake 72 EU + eDRAM GT4e and call it enough. It'd be an increase, but I wouldn't want it as the highest build option in any professional machine.
 
Pretty much in agreement with all of the above.

Points I would argue are splitting hairs in this discussion, so I won't bother.

As for when Skylake will ship: well, Itanium was supposed to ship in 1998 (after it was supposed to ship in 1995) and didn't land until 2001. So what does the magic 8-ball tell you?
 
but you said



That very much talks about their ability to stay in business AT ALL, not just business with Apple. Even if Intel is good enough this go-round AND comes out on time, what happens if AMD releases the HAL 10000 the year after and blows Intel away? What if CUDA becomes so ridiculously fast on Nvidia that doing FP math any other way just makes no sense? Look at Octane Render for an example of some seriously fast rendering brought to you by a GPU. Intel doesn't come close to matching Nvidia on something like that yet.

I think it was quite clear what I was trying to say after my last post. I said it before and I'll say it again: I am talking about the business Nvidia or AMD may or may not have inside the Apple ecosystem, specifically concerning the rMBP. They may be out of business with Apple if the Iris Pro 6000 series is good enough for the rMBP. They have no say here; it is all in Intel's hands. Apple will go to Intel and see what performance the Iris Pro 6000 series can offer, and if Apple considers it good enough, I am afraid there will be no more Nvidia or AMD inside the rMBP.

Of course this is my opinion, based on what Apple did in 2013 with the dGPU. I could be wrong, but I think they have a plan and they are going to stick to it.



Well, yes, you did, when you talked about AMD being out of business if Intel so chooses. Also, the FTC might have a word or two about Intel getting too harsh. Monopoly regulations in the States and all.

This can be answered with the above paragraph.


(just going to skip 3 - nothing of note there)




You are speculating. 14nm isn't the only control over battery life. It is just the easiest one to explain to the public. Transistor design, conductors, insulators, local interconnects, implants, and a TON of other things will affect battery life. Yes, Broadwell will bring some of that, because that is something everyone is working on due to the mobile sector.

But that also means that Skylake will be a more mature 14nm process with better design rules and manufacturing. All of this equally applies to Apple, AMD, Nvidia, and Samsung.

As low as the CPU draw is getting, the savings in the rest of the computer are really starting to loom large. How much does that WiFi card draw? How much does the RAM need? The fans? The display?

If Apple (or whoever) saves enough power on the other systems, maybe they use AMD processors for better onboard graphics and save power while running games or other video apps, while breaking even at idle? Or maybe it is a cost deal. Or maybe Apple just wants leverage against Intel being a single supplier (which is how AMD got into x86 in the first place).

Yes, I am speculating. In fact, everything I have said so far is my personal opinion. I do not work for Apple, Intel, AMD, or Nvidia. I cannot give you real facts, sorry. But unless you tell me otherwise, I take it for granted that you are speculating too.

I was talking about the power savings that a Skylake processor might have over a Broadwell processor. Now you are bringing up a whole computer with updated parts, which had better have more battery life. Well, I'd say the performance-per-watt ratio is not going to be much different. But of course I am not the engineer working on them.

Apple makes good computers overall. I have little experience with AMD processors, but I think the general opinion is that Intel's are better. Apple chooses good-quality components for its products and wants people to think that. Apple products are reliable, yet you pay a premium for them; I feel that, at least. And yes, Apple does not want to rely on a single supplier; we are already seeing this with the iPhone.

If there is a dGPU in the next rMBP, I'd say AMD has a slight edge over Nvidia.
 
Here's the situation so you can see the larger picture:

  • 13" Retina and below: all Intel. HD 6000 or Iris 6100 is the most likely direct upgrade. Typical lifecycle refresh.
  • 15" Retina - I bet there will be a base model with only integrated graphics, but not as the sole option across the line, since users who spend $2500 will want more power than anything Skylake can output. I have no concerns about it editing some GoPro footage on one screen, streaming some high-resolution movies in the background, and loading up Photoshop layers for the intermediate Lightroom user, but hook it up to a few 4K externals and I doubt it'll be pleasant to do any work on them.
  • Also up for renewal: the iMacs, both the regular 21.5" and the 27", with dedicated options of their own due for updates. I don't know if the normal 27" will stick around, but it's still a very powerful option if they simply follow the annual refresh lineup. These use mobile chips, so take that into account.

Intel controls nothing. They would've gotten the annual upgrade-cycle contract anyway, seeing as they're the only player in town. Their CPUs come with the IGP whether anyone likes it or not. AMD and/or Nvidia are not going anywhere: if you need performance, you pay for their offerings. And why do you think more manufacturers don't use Iris Pro instead of offering a comparable discrete? Pricing and performance reasons.

Apple chooses good-quality components for its products and wants people to think that.

Apple can sell a lot of things. One thing they can't sell is replacing dedicated graphics on the high-end machines with integrated and expecting no critical outrage. Some might see no option and accept reluctantly, some will not care, but those who are aware will never praise them after such a fiasco.
 
Thank you for your explanation, I appreciate it. I really do, and I hope you are right too. I would like to see a dGPU as an option in the rMBP.

Nevertheless, just because I appreciate it doesn't mean I can't form my own opinion, or that I have to accept as dogma whatever someone throws at me.

Maybe there will be one this year, maybe even the next. However, I do not see a long future for the dGPU in the rMBP. I would need some facts from Apple; they would need to back the dGPU somehow, and they are doing just the opposite: isolating it. I am trying to explain that not everybody has to agree with you, even if you think you have your facts straight.
 
There's no dogma. You can disagree on many things, even hold the opinion that discrete GPUs suck and should be dropped, but that doesn't explain away why these solutions exist in the first place. You're looking through a very narrow window and ignoring the larger context of things.

Intel will slowly chip away at the low-hanging fruit (home PCs, average student and parent laptops, that sort of thing), while demanding users will need more as resolution scales up: from 1080p for a long time to suddenly 3-5K panels, and editing content to boot. Or in academic research, a single GPU like the Titan is fairly affordable on funded endeavors and gives a hell of a lot of compute power. It's hard for normal users to imagine what is going to process all the data bound to flood in when bandwidth expands. I'm not saying you don't know, but in general the background work to get things up and running is mysterious and opaque. It's one thing to show a static picture but another to manipulate it smoothly.

Broadwell, I know, is a dud, outputting 4K or 3K at only 30Hz; Skylake should do 4K at 60Hz, but that's not going to be more than running a movie and some browser tabs. Intel promises a lot, but there are always qualifiers like "up to X" and "based on the HD4400". Not a concern for most of their Mac shipments.
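A quick pixel-bandwidth calculation shows why the 30Hz/60Hz distinction matters; this is a rough sketch counting active pixels only (blanking and link-encoding overhead are ignored, so real link requirements are higher):

```c
/* Raw active-pixel bandwidth for a 3840x2160 panel at 24 bits per pixel.
 * Blanking intervals and link encoding overhead are ignored, so actual
 * cable/link requirements are higher than these figures. */
#include <stdio.h>

static double gbit_per_s(int w, int h, int hz, int bpp) {
    return (double)w * h * hz * bpp / 1e9;
}

int main(void) {
    printf("4K @ 30 Hz: %.1f Gbit/s\n", gbit_per_s(3840, 2160, 30, 24));  /* ~6.0  */
    printf("4K @ 60 Hz: %.1f Gbit/s\n", gbit_per_s(3840, 2160, 60, 24));  /* ~11.9 */
    /* An HDMI 1.4-class link tops out well below the 60 Hz figure, which is
       why "supports 4K" so often comes with an "at 30 Hz" qualifier. */
    return 0;
}
```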

I don't expect the 13" rMBP to gain a discrete GPU. There's no need to. But on the 15", I highly doubt Apple will drop Nvidia's future Pascal or AMD's HBM parts, where performance is said to scale exponentially. Apple is one of the few OEMs that actually offered Iris Pro, Iris, and the decent HD 5000 in their last generation instead of copping out to something lesser. They're thinking ahead, and dropping discrete would be regressive in every way for the top configurations. I'm sure a lot of buyers are fine on the current Iris Pro, but in the whole scheme of things it looks like the weakest link. The SSDs are fast, the memory is quite good at the default 16GB, the CPU is dandy--but then you see this integrated solution and wonder if it'll meet all demands down the road. So you end up paying for the discrete option that's currently available.

They could go all-out on power efficiency and ditch the system anchors, but then the Pro designation would carry no meaning except as a deceptive marketing gimmick. I know it already echoes that vibe, but things can be rectified easily with some promising refreshes.

I don't believe any engineer at Apple could look at a future maxed out 15" MBP with only an IGP and consider it the best flagship workstation they could have made. There's obviously enough of a thermal system to allow one in there. The option of a discrete is really important to have. Custom config is not a problem for those who will gladly pay. Taking it away will hurt.
 
So in the Skylake timeframe (H2 2015 to early 2016 say) - what are the likeliest options for a discrete GPU, and how is it likely to compare to the current NVIDIA GeForce GT 750M with 2GB of GDDR5 in the highest end 15"?

I don't share plastictoy's opinion on this. In my opinion, the most likely option for a discrete GPU in the Skylake MBP is none at all. The Iris Pro 7200 will outperform the 750M.
 
It seems you didn't read anything I wrote. I would find it a backwards step if Apple were to omit a discrete card in their 15". For the base model, not expected; but like the current configuration, there should be something else to drive high-resolution workflows. If not, a lot of professionals looking for a replacement will be considering other options.
 
I read what you wrote and I found it unconvincing. If the 750M is fast enough to drive a 15" Retina display, then the Iris Pro 7200 (which will be faster) will also be fast enough.

Also, a minor point, but there are no discrete graphics cards in rMBPs, only discrete graphics chips.
 
That's pointless semantics you're arguing. Chip, card, you know what I mean, and it doesn't detract from the information presented.

So you think Apple will offer nothing higher than their current $1999 model this year, and that Iris Pro itself will drive a 4K+ workflow fine for years to come? One that will likely include external monitors of equal native resolution and increasingly larger media files.

I don't know if you even understand what performance means but simply driving a display is not enough. Back in 2013 with this 2015 IGP, it'd be a different story. But going into 2016 and beyond? Demanding users would be very disappointed.
 
From the stats, the dGPU on the current rMBP really helps speed up things like FCP X, After Effects, etc. compared to Intel's GPU - and that's the upper end of the MBP line, the most expensive model.

Isn't this highest-end rMBP the professional's portable work machine (video editing, compositing, 3D rendering, etc., where the GPU matters)? I guess the specific chip and whether it will have a discrete GPU are separate questions - but something like FCP was built to run faster with a decent GPU.

I'm sure some gamers are wondering too - DirectX 11 (and by extension possibly DirectX 12) and decent fps (eternal hope?) would benefit from having a decent dGPU.

I wonder how sales of the highest-end MBP compare to the MBP line as a whole.
 
DirectX is Windows-only. OS X runs OpenGL, and OpenCL for GPGPU acceleration. AMD has better performance for the price in this area. Nvidia cripples their non-pro cards in compute and wants people to use proprietary CUDA. I feel Apple would've gotten better driver support had the Mac Pro used Quadros, but the price probably would have gone up by a thousand. And OpenCL gives them the flexibility to change vendors without affecting long-term software development.
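To illustrate the vendor-neutral angle, here is a minimal sketch using the standard OpenCL device-query calls; the same code lists whatever GPUs are present, Intel, AMD, or Nvidia alike. The file name and build command are just examples; on OS X you would compile against the OpenCL framework:

```c
/* Lists every OpenCL-visible GPU by name, regardless of vendor.
 * On OS X: cc list_gpus.c -framework OpenCL && ./a.out  (file name is arbitrary) */
#include <stdio.h>
#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif

int main(void) {
    cl_platform_id platform;
    cl_device_id devices[8];
    cl_uint ndev = 0;

    if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS)
        return 1;                                   /* no OpenCL platform available */

    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 8, devices, &ndev);
    for (cl_uint i = 0; i < ndev; i++) {
        char name[256] = {0};
        clGetDeviceInfo(devices[i], CL_DEVICE_NAME, sizeof(name), name, NULL);
        printf("GPU %u: %s\n", i, name);            /* e.g. "Iris Pro" and/or "GeForce GT 750M" */
    }
    return 0;
}
```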

FCPX should be really good on GPUs. Premiere Pro accelerates on CUDA, but Adobe added OpenCL and performance is about a tie. Stability is important, and I believe CUDA is much better in that regard.

One thing I find interesting is that the Windows version of Illustrator 2014 uses GPU Preview, and on my low-end card it was noticeably smoother to pan and zoom. Newer Mac apps like Affinity Designer leverage the GPU, and although I can't say whether they will make full use of a discrete option, it's important for developers to know the whole mobile platform is not restricted to Intel's discretionary graphics budget.

I don't think the MBP with discrete graphics sells worse than the regular one. With discounts, the price difference I've seen can be as small as $50. At retail pricing, the same configuration sans dGPU is only $100 apart, arguably a small cost for those who know they need that extra horsepower tucked away inside.
 