What are you talking about? Iris Pro is a lot slower than the 650M for games. The base model was a regression in GPU performance. Apple justified the move by reducing the price by $200.

Next gen will have a die shrink, so the new Iris Pro will likely be faster than a 650M (not a surprise that next gen will finally beat a 2 year old GPU, but still).
 
Next gen will have a die shrink, so the new Iris Pro will likely be faster than a 650M (not a surprise that next gen will finally beat a 2 year old GPU, but still).

Ok, the next Iris Pro will beat the 650M. At the same time, the 860M, with the same 45W TDP as the 650M, will crush the next Iris Pro with roughly two times better performance. How much time will Intel need to beat that one? Another 2-3 years?
 
Sorry to sound like a total noob, but will these ever make it to a 13" rMBP? I've heard talk of how Broadwell 13" rMBPs could be quad-core, so is a better GPU really that far off?

There's no guarantee they'll make it to the 15" rMBP either. A lot of people are substituting wishful thinking for deductive and inductive reasoning.
 
It's all Apple's decision. They can optimise the architecture and software for integrated graphics and let the dedicated graphics run like crap.

So who cares if one specific part performs better in theory?

I can't be bothered to analyse every specific part on its own. I judge the product as a whole and see if it meets my needs.
 
Some people are obviously not understanding the difference between a "chip" and a "microarchitecture."

GM107 is the internal NVIDIA codename for the Maxwell architecture - the implementation of a method of processing instruction sets in a processor.

GTX 750 Ti is the model name for a specific GPU - it is the first of the GM107 cards, but it will be nothing like the GM107 that may find its way into the MacBook Pro.

There are significant hardware differences - video memory size, memory bus width, number of cores, voltage regulator circuitry, etc. - basically everything that makes the GTX 750 Ti the GTX 750 Ti will be different from what you will find in a mobile version of the card.

And the "2X performance for half the wattage" promise is in theory - it will not scale up to higher end cards like the GTX 880M. Also, NVIDIA is going to shrink power consumption - while performance may be ~20% higher, power consumption will shrink significantly.

Maxwell is a mobile-oriented architecture - don't get your hopes up for drastic performance improvements. NVIDIA is aiming for mobile efficiency, which means increasing the performance/watt ratio and cutting down on power usage.
 
Maxwell is a mobile-oriented architecture - don't get your hopes up for drastic performance improvements. NVIDIA is aiming for mobile efficiency, which means increasing the performance/watt ratio and cutting down on power usage.

This sounds like a contradiction to me. Increasing the performance/watt ratio also means having more performance at the same thermal profile. And while you are perfectly correct that Maxwell might not be able to scale in the high end (after all, the new high-end cards are all Kepler-based), I am not concerned with the high end at all. I am interested in what it can deliver at a 35-45W TDP.

Anyway, the leaked 860M scores and the like show real-world performance improvements of around 50% at the same TDP as the predecessor. This is nothing short of stellar. I don't remember such an increase in efficiency ever being witnessed in the industry.
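To put that in numbers, here's a minimal sketch (Python, arbitrary performance units; the 1.5x factor is just the ~50% uplift from the leaks, not a confirmed figure) of why a better perf/watt ratio at a fixed thermal budget means more speed:

```python
# Minimal sketch: performance = (perf per watt) * TDP, so at a fixed
# thermal budget, a higher perf/watt ratio means proportionally more speed.
def performance(perf_per_watt: float, tdp_watts: float) -> float:
    return perf_per_watt * tdp_watts

baseline = performance(1.0, 45.0)  # Kepler-class part at 45 W (arbitrary units)
maxwell = performance(1.5, 45.0)   # assumed +50% efficiency at the same 45 W
print(maxwell / baseline)          # -> 1.5, i.e. +50% performance at equal TDP
```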

----------

GTX 750 Ti is the model name for a specific GPU - it is the first of the GM107 cards, but it will be nothing like the GM107 that may find its way into the MacBook Pro.

There are significant hardware differences - video memory size, memory bus width, number of cores, voltage regulator circuitry, etc. - basically everything that makes the GTX 750 Ti the GTX 750 Ti will be different from what you will find in a mobile version of the card.

What do you base that on? Usually, Nvidia only has three or four chips which they use for both desktop and mobile products (with different clocks). Again, the leaked data on the 860M suggests that it's the same chip as the 750 Ti, only with a lower clock.
 
This sounds like a contradiction to me. Increasing the performance/watt ratio also means having more performance at the same thermal profile. And while you are perfectly correct that Maxwell might not be able to scale in the high end (after all, the new high-end cards are all Kepler-based), I am not concerned with the high end at all. I am interested in what it can deliver at a 35-45W TDP.

Anyway, the leaked 860M scores and the like show real-world performance improvements of around 50% at the same TDP as the predecessor. This is nothing short of stellar. I don't remember such an increase in efficiency ever being witnessed in the industry.

The GTX 860M is physically massive. It is physically similar to the GTX 750 Ti, but this means that it comes on an MXM daughterboard. It also consumes ~45-50W of power, while the GT 750M 1) doesn't come on an MXM board and 2) consumes 40W of power. While this bodes fantastically for GM107 as an architecture, expecting a 50% performance boost from the GT 850M is unrealistic.

The fact that shrinking the GTX 750 Ti results in a separate daughterboard means that we won't be seeing anywhere near this power level - the mobile 850M will get fewer cores, lower clock speeds, and less performance. I estimate a 20-30% boost.

What do you base that on? Usually, Nvidia only has three or four chips which they use for both desktop and mobile products (with different clocks). Again, the leaked data on the 860M suggests that it's the same chip as the 750 Ti, only with a lower clock.

NVIDIA does that for its desktop cards because size is not a constraint. For example, the GTX 770, 780, and Titan all used the same PCB (and therefore chip) but with certain components laser- or firmware-disabled.

Mobile chips look like CPU dies. The ones that will be inside the MacBook Pro aren't going to be anywhere near as large as MXM boards, so it makes logical sense that they won't be sharing components with a desktop GM107 card.

So yes, the GTX 860M looks awesome and probably will be. Gaming laptops with the power, cooling, and size capacity will probably increase power and voltage and overclock these cards, resulting in massive performance increases.

However, the MacBook Pro has never been and never will be able to support such power, cooling, and size demands. It will use a completely mobile-oriented chip that bears no resemblance to desktop GM107 cards.
 
Ok, the next Iris Pro will beat the 650M. At the same time, the 860M, with the same 45W TDP as the 650M, will crush the next Iris Pro with roughly two times better performance. How much time will Intel need to beat that one? Another 2-3 years?

As I said, Apple only cares about year-over-year performance increases, not comparative industry speeds. Not having to source NVIDIA GPUs for a small niche market is Apple's main goal.
 
The GTX 860M is physically massive.

The die size of GM107 is 148 mm², as opposed to GK107's 118 mm². While that sounds like a big difference, for a roughly square die it works out to only about 1.3 mm of extra length on each side. Surely there is enough space on a mainboard to accommodate such a chip.
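For what it's worth, here's a quick back-of-the-envelope check of that per-side figure, assuming roughly square dies (areas as quoted above):

```python
import math

# Assuming square dies, side length = sqrt(area); the areas are the quoted ones.
gk107_area, gm107_area = 118.0, 148.0  # mm^2
gk107_side = math.sqrt(gk107_area)     # ~10.9 mm
gm107_side = math.sqrt(gm107_area)     # ~12.2 mm
print(f"extra length per side: {gm107_side - gk107_side:.1f} mm")  # ~1.3 mm
```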

It is physically similar to the GTX 750 Ti, but this means that it comes on an MXM daughterboard.

Whether it is used as an MXM module or soldered to the mainboard is completely up to the computer's manufacturer. AFAIK, MXM-based 750M modules also exist.

It also consumes ~45-50W of power, while the GT 750M 1) doesn't come on an MXM board and 2) consumes 40W of power. While this bodes fantastically for GM107 as an architecture, expecting a 50% performance boost from the GT 850M is unrealistic.

Well, ok, then take the 850M. Based on all the leaks, it's the same chip as the 860M, but with the same TDP as the 750M. So I expect it to show a similar performance increase.

----------

Mobile chips look like CPU dies. The ones that will be inside the MacBook Pro aren't going to be anywhere near as large as MXM boards, so it makes logical sense that they won't be sharing components with a desktop GM107 card.

The 640M, 650M and 660M (as well as their 7xxM counterparts) are all based on GK107 - which is the same chip as in some desktop cards. I think it's reasonable to assume that Nvidia will keep the naming strategy. Of course, they might change it and use different chips for the 860M and 850M, but why would they? Also, all the leaks I have seen state that the 850M and 860M have the same package codename. So I assume that they are the same chip, maybe with different clocks/some shader units disabled.
 
The die size of GM107 is 148 mm², as opposed to GK107's 118 mm². While that sounds like a big difference, for a roughly square die it works out to only about 1.3 mm of extra length on each side. Surely there is enough space on a mainboard to accommodate such a chip.

You don't understand the difference between die size and the supporting voltage, memory, power, etc. components. The trade-off of making the 860M an almost identical shrink of a desktop card is that the supporting components are larger - necessitating the use of a separate daughterboard.

Whether it is used as an MXM module or soldered to the mainboard is completely up to the computer's manufacturer. AFAIK, MXM-based 750M modules also exist.

Yes, NVIDIA and AMD don't directly release cards that are only MXM or only soldered. But the reason I strongly suspect the 860M will be an MXM-only component is its lineage from desktop cards. MXM is useful because it allows manufacturers to isolate and upgrade GPU circuitry independently of the motherboard, allowing for more configurations and more efficient cooling.

Well, ok, then take the 850M. Based on all the leaks, it's the same chip as the 860M, but with the same TDP as the 750M. So I expect it to show a similar performance increase.

I'm not sure what you mean by "all leaks," but the only leak is from Clevo. It appears that not only will the GTX 850M not come with GDDR5 memory (the 860M comes with GDDR5), but it might not even be GM107. It might be a Kepler refresh.

The 640M, 650M and 660M (as well as their 7xxM counterparts) are all based on GK107 - which is the same chip as in some desktop cards. I think it's reasonable to assume that Nvidia will keep the naming strategy. Of course, they might change it and use different chips for the 860M and 850M, but why would they? Also, all the leaks I have seen state that the 850M and 860M have the same package codename. So I assume that they are the same chip, maybe with different clocks/some shader units disabled.

Those package codes don't necessarily indicate architecture. For example, the GT 620M is N13M-GS, but the GT 640M LE is N13P-LP, seemingly indicating a shift from Fermi to Kepler. No, both are still Fermi.
 
You don't understand the difference between die size and the supporting voltage, memory, power, etc. components. The trade-off of making the 860M an almost identical shrink of a desktop card is that the supporting components are larger - necessitating the use of a separate daughterboard.

What I don't understand is why an 860M would need more supporting components than, say, a 760M/750M, which are also shrinks of desktop cards and have a comparable TDP (so the requirements on the power components should be comparable, right?). I understand very well that a 'big' desktop card needs more space for its power converters and supporting circuitry - but here we are talking about components with a TDP of 30-40W, which have been successfully integrated into laptops without the need for any extra boards. A benefit of placing the GPU directly on the logic board is that you can avoid the additional power conversion altogether and just have one system-wide power unit. After all, you don't need to deal with any connectors and the compatibility limits they imply.

But the reason I strongly suspect the 860M will be an MXM-only component is its lineage from desktop cards.

The 660M/650M/640M are all 'based' on a desktop card, and they are not exclusively MXM. The 780M in the iMac is based on a desktop card and it's not MXM.

I'm not sure what you mean by "all leaks," but the only leak is from Clevo. It appears that not only will the GTX 850M not come with GDDR5 memory (the 860M comes with GDDR5), but it might not even be GM107. It might be a Kepler refresh.

I was under the impression that there were many more leaks than that by now. I have even seen leaked benchmarks of the 850M in a thread discussing Maxwell (it's quite long, though, so I can't come up with a quote).

About the 850M and DDR3 - most 650M/750M parts are also DDR3-only versions. It's very likely that Clevo would want to differentiate the performance by limiting the 850M to slower VRAM, as manufacturers usually do with the 650M/660M. So far, the x50M/x60M have always been the same chip with a slight difference in clocks. I don't see why Nvidia would change the logic behind its naming at this point (but of course, they might).

Those package codes don't necessarily indicate architecture. For example, the GT 620M is N13M-GS, but the GT 640M LE is N13P-LP, seemingly indicating a shift from Fermi to Kepler. No, both are still Fermi.

There are two versions of the 640M LE: one is Fermi and the other is Kepler. The N13P-LP code refers to the Kepler one only.

P.S. To avoid misunderstandings: all I write here is speculation and conjecture. We won't know for sure until official information comes out. Still, I think that, based on the information available now and the previous history, what I wrote here is the most likely (as in - probable) hypothesis.
 
I have even seen leaked benchmarks of the 850M in a thread discussing Maxwell (it's quite long

I was going to link that again if you hadn't :D

But yeah, overall it's true that the die size is larger on the 860M, but I believe it's possible they could make room for it. Broadwell will be stepping down its size a tiny bit, and they may be able to shift things around to make room if need be, otherwise possibly going from 8 to 16 RAM chips, but I doubt that.

My thought process is that if Apple didn't care about performance at all, they wouldn't have even bothered to overclock the 750M so much that it is almost a 755M.

However, given Apple's logic, I think it is very likely they'd stick with the 850M and take the 10W lower TDP (effectively keeping the MacBook cooler and more efficient during peak GPU usage), even though the performance gain is marginal.

I still think there are no constraints that would completely prevent them from popping an 860M in there this refresh, though. It's definitely possible (given that it's the same TDP as the current 750M); I'm just uncertain they're willing to do it.

Regardless, the 860M is going to ***** on the Iris Pro successor - so that's what I'm hoping gets put in there.
 
The main thing here is not the technical possibility, but the fact that the 860M will cost Apple more than the 750M. Given their love of margin hunting, I'd rather bet on an 850M (factory-overclocked) + GDDR5 (they still purchase it from Hynix and solder it to the PCB themselves) than on a pure 860M.
BTW, is there any known technical info about the 850M? Is it the GPU with 512 or 640 CUDA cores?
 
The main thing here is not the technical possibility, but the fact that the 860M will cost Apple more than the 750M. Given their love of margin hunting, I'd rather bet on an 850M (factory-overclocked) + GDDR5 (they still purchase it from Hynix and solder it to the PCB themselves) than on a pure 860M.
BTW, is there any known technical info about the 850M? Is it the GPU with 512 or 640 CUDA cores?

I'm pretty sure there is some known info on it. I'll try and dig up some leaks. Yeah, forgot about the price too. I thought I read somewhere that the 850M wasn't getting GDDR5, which is another reason I was thinking maybe the 860M this time around. But it's easily possible that they could get a GDDR5 850M, as I think that's exactly what they did with the 750M.

----------

Here's what I got (Cloudfire's comment at the bottom - #320):

http://forum.notebookreview.com/gaming-software-graphics-cards/745074-new-details-about-nvidia-s-maxwell-32.html#post9577854

----------

Speculation confirmed by a few veterans (plausible estimate):

840M: 512 cores @ 64-bit DDR3
845M: 512 cores @ 128-bit DDR3
850M: 640 cores @ 128-bit DDR3
860M: 640 cores @ 128-bit GDDR5

The memory bus width for all of those is confirmed (unofficially).
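To illustrate why the DDR3-vs-GDDR5 split in that list matters, here's a rough theoretical-bandwidth sketch from those bus widths. The effective data rates are my assumptions (typical for the era: ~2000 MT/s for DDR3 VRAM, ~5000 MT/s for GDDR5), not leaked figures:

```python
# Theoretical peak bandwidth = (bus width in bytes) * (effective transfer rate).
def bandwidth_gb_s(bus_width_bits: int, effective_mt_s: int) -> float:
    return bus_width_bits / 8 * effective_mt_s / 1000  # GB/s

print(bandwidth_gb_s(64, 2000))   # 840M-style  64-bit DDR3  -> 16.0 GB/s
print(bandwidth_gb_s(128, 2000))  # 850M-style 128-bit DDR3  -> 32.0 GB/s
print(bandwidth_gb_s(128, 5000))  # 860M-style 128-bit GDDR5 -> 80.0 GB/s
```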

----------

More of the same (plus leaked TDPs, this time):

http://forum.notebookreview.com/gaming-software-graphics-cards/745074-new-details-about-nvidia-s-maxwell-21.html#post9573708



From what I see based on most of the leaks, the 850M and 860M are essentially the same card. The 850M is gimped a little bit more to save power. That's probably the card we'll get :(
 
Based on the latest offerings, this whole discussion is moot. If Intel can provide a 50-90% boost in GPU performance over the Iris Pro, that will be more than enough for Apple to kill the dGPU entirely.

Hopefully Intel continues this push to shake off the stigma of iGPUs being mediocre.
 
Based on the latest offerings, this whole discussion is moot. If Intel can provide a 50-90% boost in GPU performance over the Iris Pro, that will be more than enough for Apple to kill the dGPU entirely.

Hopefully Intel continues this push to shake off the stigma of iGPUs being mediocre.

If Intel once again has the Broadwell iGPU match the 840M (as Iris Pro matched the 640M), then I will again be very impressed. However, even then I'll take the high end, with the 850M/860M :D

----------

My thoughts regarding this are that the only way Intel will be able to provide a substantial boost in performance (gaming-wise) would be to offer even more eDRAM cache in the Broadwell iGPU, and that will fall apart at resolutions as high as 1080p, just like it does with Iris Pro. So regardless, I'm going to prefer the dGPU.
 
Intel will increase the power of iGPUs by UP TO 40%.

Only 8 more cores on Iris Pro. That cannot increase the power of Intel HD graphics by a big amount. The only thing is that there is a possibility we will see a 48-core Iris Pro (let's say an HD 5800) in lower-end CPUs.


Looking at the numbers, the iGPU will get around 3500 pts in 3DMark 11, which is still quite far behind the Maxwell GPUs (~5.3k). In games and everything else, the gap will only get bigger.
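A quick sanity check of those numbers (the current Iris Pro score here is my assumption of roughly 2500 pts in 3DMark 11; everything else is the estimates above):

```python
# All figures are rough estimates, not benchmarks: a "+40%" Broadwell uplift
# on an assumed ~2500-pt Haswell Iris Pro lands near the 3500 pts cited above.
iris_pro_now = 2500                    # assumed 3DMark 11 score for Iris Pro
broadwell_igpu = iris_pro_now * 1.40   # "UP TO 40%" -> 3500 pts
maxwell_860m = 5300                    # the ~5.3k figure quoted above
print(broadwell_igpu)                  # 3500.0
print(maxwell_860m / broadwell_igpu)   # ~1.51 -> still a ~50% gap
```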

Skylake will bring a new level of performance to iGPUs.
 
Intel will increase the power of iGPUs by UP TO 40%.

Only 8 more cores on Iris Pro. That cannot increase the power of Intel HD graphics by a big amount. The only thing is that there is a possibility we will see a 48-core Iris Pro (let's say an HD 5800) in lower-end CPUs.


Looking at the numbers, the iGPU will get around 3500 pts in 3DMark 11, which is still quite far behind the Maxwell GPUs (~5.3k). In games and everything else, the gap will only get bigger.

Skylake will bring a new level of performance to iGPUs.

Yeah, that's what I figured. So the gap between the low-end 15" rMBP and the high-end 15" rMBP will be much larger this time around. I'm liking that :) :apple:
 
Only time will tell. At this point, NVIDIA and AMD must be scrambling to keep ahead of Intel.

The future (and the money) is in tablets and laptops, so if NVIDIA and AMD are pushed out, they are doomed.

Also, let's face it, most of the population is buying $500-$700 laptops, not laptops with GTX 780Ms. Thus it is vital for NVIDIA to make sure it can keep up performance in the mid-range category, where Intel is really starting to kick in.
 
Only time will tell. At this point, NVIDIA and AMD must be scrambling to keep ahead of Intel.

The future (and the money) is in tablets and laptops, so if NVIDIA and AMD are pushed out, they are doomed.

Also, let's face it, most of the population is buying $500-$700 laptops, not laptops with GTX 780Ms. Thus it is vital for NVIDIA to make sure it can keep up performance in the mid-range category, where Intel is really starting to kick in.

I don't agree with your assessment overall.

I do not think NVIDIA is scrambling at all. Both AMD and NVIDIA are finding their way into the tablet/laptop markets the best way they can (NVIDIA Shield, AMD with gaming consoles, etc.). There is a place for Intel and both of those companies. However, it is true that Intel will inevitably eat into their market by some margin.

I also think people have actually been buying more expensive laptops these past few years than we have seen before.

Lastly, Intel hasn't touched NVIDIA's mid-range graphics yet. The Iris Pro competes with their low-end mobile graphics, specifically the 640M/750M. The MBP will never have a true mid-tier graphics card, so Intel will likely not make gains in that area for several years, if it does at all.
 
I don't agree with your assessment overall.

I do not think NVIDIA is scrambling at all. Both AMD and NVIDIA are finding their way into the tablet/laptop markets the best way they can (NVIDIA Shield, AMD with gaming consoles, etc.). There is a place for Intel and both of those companies. However, it is true that Intel will inevitably eat into their market by some margin.

I also think people have actually been buying more expensive laptops these past few years than we have seen before.

Lastly, Intel hasn't touched NVIDIA's mid-range graphics yet. The Iris Pro competes with their low-end mobile graphics, specifically the 640M/750M. The MBP will never have a true mid-tier graphics card, so Intel will likely not make gains in that area for several years, if it does at all.

Yes, people may be buying more expensive laptops, but nowhere near the range of laptops carrying 8970Ms and 780Ms - still within my $500-$700 range.

And I'm pretty sure the 640M/650M/750M are known mid-range cards. Every site I've seen says they're mid-range. Come on, give me one place that says they're low-end cards. The 650M was actually used in entry-level gaming laptops, and in aspects other than gaming, the Iris Pro even matches or beats a 650M.

Sorry for the reality check, but the fact is Intel iGPUs are catching up fast. Unless AMD and Nvidia can keep ahead of Intel in the mid-range, they are going to lose out. The simple fact that they are rushing to get into the console space already smells of their insecurity.
 
Nvidia Maxwell Performance - Wow!

So while we may not know what Apple's plans are for the inevitable Late 2014 refresh, we can always speculate and make educated guesses. Plus, it's highly likely that at least the top-end versions will continue to include a dedicated GPU, as Intel's iGPUs are still nowhere near capable enough in my opinion.

Here are the improvements we can expect - a 60% boost while retaining the same TDP!!! I don't even know if this is done on TSMC's older 28nm process or their new, much more efficient 20nm process, which should be ready when Apple updates its lineup.
When I Boot Camp and play most of my games on high/ultra, I get around 35 FPS; a 60% boost should see that rise to around 56 FPS - a huge improvement. Couple this with a lower-power Broadwell chip and the rMBP could become a powerhouse gaming machine. :D
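That arithmetic holds under the (optimistic) assumption that the games are fully GPU-bound, as in this minimal sketch:

```python
# Hedged sketch: FPS scales linearly with GPU speed only if the game is
# entirely GPU-bound; CPU or memory limits would eat into the 60% uplift.
base_fps = 35
gpu_uplift = 0.60
print(base_fps * (1 + gpu_uplift))  # -> 56.0 FPS
```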

http://www.gizmodo.co.uk/2014/03/nvidias-new-laptop-cards-are-battery-saving-scorchers/
 
According to this: http://www.pcgameshardware.de/Nvidia-Geforce-Grafikkarte-255598/Specials/Geforce-GTX-800M-Kepler-Maxwell-1113162/

(It's a German Hardware site)

The 850M and 860M are similar to the GTX 750 Ti, with 640 CUDA cores built from Maxwell SMMs (streaming multiprocessors), but the 850M has a lower GPU clock speed.

Now for the interesting part. Performance-wise, the 850M is almost exactly like the GTX 580M, while its efficiency is 30% higher.

Here's a review of the 580M: http://www.notebookcheck.net/NVIDIA-GeForce-GTX-580M.56636.0.html


Also interesting: the high-end models, the 870M and 880M, are just Kepler refreshes with slightly higher clock speeds, etc.
 
Razer just updated the Blade 14 to include a GTX 870M and a 3200x1800 10-point multi-touch IGZO display.
 