Not really. The cost of the GPU isn't only the core, but also the routing of the memory bus on the mobo, which is quite expensive. There's a reason only one mobile GPU uses a 192-bit bus and two use 256-bit; most use 128-bit, like the 650M.

Aside from that, notebook parts cost more because they are higher-binned cores.

No, a 192- or 256-bit bus requires a larger chip because it needs a larger memory controller on the GPU die (which makes the GPU chip more expensive). The data connection between the GPU and the CPU is PCI Express, so the memory bus width does not affect it. The only thing the bus width affects is the data path to the VRAM chips (which can increase complexity and cost).

The reason few mobile GPUs use wider buses is that they are usually too weak to need the extra bandwidth. The 650M does not need anywhere near the 80 GB/s it is given. Even the 680M with a 256-bit bus only gets about 96 GB/s (and is generally about twice as fast). For comparison, the desktop 650 Ti has a 128-bit bus and similar bandwidth to the rMBP's 650M, yet it is considerably faster.
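For anyone curious how those bandwidth numbers fall out, it's just bus width times effective memory data rate. A minimal sketch; the effective GDDR5 rates of 5000 and 5400 MT/s below are my assumptions for these parts, not figures from this thread:

```python
# Theoretical peak memory bandwidth = bus width in bytes * effective data rate.
# The effective GDDR5 rates below are assumed reference values, not official specs.
def bandwidth_gb_s(bus_width_bits: int, effective_mt_s: int) -> float:
    return bus_width_bits / 8 * effective_mt_s / 1000

print(bandwidth_gb_s(128, 5000))  # rMBP 650M, 128-bit GDDR5 -> 80.0 GB/s
print(bandwidth_gb_s(128, 5400))  # desktop 650 Ti, 128-bit  -> 86.4 GB/s (similar bandwidth, much faster card)
```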

The actual notebook chip costs more, but not needing a PCB, fans, etc. cuts costs quite a bit. For example, EVGA buys the GK107 chip from Nvidia, buys VRAM, a PCB, and a fan from other suppliers, and manufactures a graphics card that it sells at a profit to Newegg, which sells it at a profit to you for around $100. Cut out EVGA's and Newegg's profits plus the associated costs (shipping, warehousing, handling), and the actual 650M plus VRAM costs much less than $100.
 
You have to understand that the clocks of those mobile parts are high relative to a MUCH lower voltage, and that increases costs; not every chip that comes off the line performs the same.

And actually the 680M comes on its own PCB, so the wiring of the memory bus is there; the board connects directly to an MXM slot, which is a PCIe interface.

When you solder the GPU onto the mobo, you don't have to do the wiring for the bus? That's news to me, especially in the very cramped space that is a notebook mobo.

The 680MX gets 160 GB/s.

The 680M gets 115 GB/s.

The 650M gets up to 80 GB/s, and only the GDDR5 versions actually get that.

And given that the 650M scores around 2500 in 3DMark11 and the 680M around 6700, and that one of the best ways to raise the 680M's performance is to overclock the VRAM rather than the core, I don't see any reason not to call it bandwidth limited.

Given that the likely 780M is a rebadged 680MX, I want to see how that behaves.

And FYI, all of the midrange GPUs are bandwidth limited; it has been like that for a long time. It's the cost × performance × heat trade-off that complicates deploying them.

And for cost, core >>> PCB >>> everything else: the core is expensive, the PCB less so since they are standardized (though you can do your own, at your own risk), and the other parts cost cents.
 
Mobile clocks are low, along with low voltage. (GK107 tops out at 950 MHz in the 660M, compared to 1058 MHz in the desktop 650; GK106 tops out at 600 MHz, GK104 at 720 MHz.)

You do have to do the wiring for the bus, but it's not as expensive as you think (cheaper than two separate parts for low-end chips). Either way you have this cost to use a dGPU; the bus must connect the GPU to the VRAM whether it sits on the mobo or on a separate board.

Most midrange chips are not bandwidth limited (the 680M is more than twice as fast as the 650M but does not have anything like twice the bandwidth; you are right about 115 GB/s). The 670MX and above start to become bandwidth limited.
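Just to put the two sides of this argument next to each other, here's the arithmetic with the rough numbers quoted in this thread (approximate 3DMark11 scores of ~2500 and ~6700, bandwidths of 80 and 115 GB/s; treat them as ballpark figures):

```python
# Ballpark figures quoted earlier in this thread -- rough, not official benchmarks.
score_650m, score_680m = 2500, 6700   # approximate 3DMark11 GPU scores
bw_650m, bw_680m = 80.0, 115.0        # approximate bandwidth in GB/s

print(f"performance ratio: {score_680m / score_650m:.2f}x")  # ~2.68x
print(f"bandwidth ratio:   {bw_680m / bw_650m:.2f}x")         # ~1.44x
# The 680M delivers ~2.7x the performance on ~1.4x the bandwidth,
# i.e. much higher performance per GB/s than the 650M.
```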
 
So have you noticed the voltage? It's a lot lower.

All midrange GPUs are bandwidth limited; I guess you are not used to overclocking.

The 670MX is pretty problematic; with just 128-bit it's abysmal, and even the 675MX with 192-bit isn't what it could be.
 
The voltage of mobile parts is always quite low (the voltage on my 660M is 0.937 V). That's because the core clock is significantly lower than on the desktop chips.

The 670MX is 192-bit (but the memory clock is slower, 700 MHz, so the bandwidth is less than a 128-bit bus at 1250 MHz).

The 675MX is 256-bit.
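To spell out the parenthetical above: assuming GDDR5's usual 4x effective data rate per memory-clock MHz (my assumption here, not something stated in the thread), the wider bus at the lower clock really does end up with less bandwidth.

```python
# GDDR5 transfers 4 bits per pin per memory-clock cycle, so effective rate = 4 * clock.
# Clocks below are the ones mentioned above; treat them as approximate.
def gddr5_bandwidth_gb_s(bus_width_bits: int, mem_clock_mhz: int) -> float:
    return bus_width_bits / 8 * (4 * mem_clock_mhz) / 1000

print(gddr5_bandwidth_gb_s(192, 700))   # 670MX: 192-bit @ 700 MHz  -> 67.2 GB/s
print(gddr5_bandwidth_gb_s(128, 1250))  # 660M:  128-bit @ 1250 MHz -> 80.0 GB/s
```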
 
I mixed up the bus widths of those GPUs. :p

Still, the voltage is a lot lower for the corresponding clocks; those are higher-binned chips.
 
This, but a dedicated GPU will always perform better for pro applications.

That is due to memory access and drivers, not just core clock. It also relies on the application actually making real use of the GPU. Much of what people think of as video and graphics work doesn't heavily leverage the GPU, although the applications that do use it in some meaningful way can add a lot of value. In the past Apple has relegated things described as pro features to the most expensive units, figuring that if people need them, they'll pay for them. I could see them going the same route today.

A discrete GPU is expensive, and so is VRAM. It also makes the design of the cooling system more involved. If Apple wants to shave $100-$200 off the 15" entry-level prices, this would be a way to do it, possibly without losing graphics performance compared to the current models. It could also mean significant power savings for graphics-intensive applications.

$100 isn't terribly meaningful. I don't think they will do such a thing unless they can get the price low enough to EOL the cMBP. I could see them doing the same with the lowest-end iMacs if it brings the starting price close to $1000 at Apple's desired margins.


The only reason we have two GPUs in the MBP is that the integrated GPU is not powerful enough and the discrete GPU consumes too much electricity.

If Iris provides nearly the same performance as a discrete GPU, I cannot see what the advantages of keeping the discrete one are.

For Apple, it's a simpler design that frees up room, reduces heat, and saves them money: no need to purchase a discrete GPU from another vendor.

Am I missing something?

The IGP would technically be there even if it were switched off permanently; it's part of the chip. This same discussion came up with Ivy Bridge, and I think it's an ongoing one about when the discrete GPU will disappear. They pulled the discrete GPU from the mini in favor of a quad-core CPU. It's just a matter of when they deem Intel graphics "good enough", but even then I don't think it has to disappear from the design top to bottom in the same generation. Apple hasn't changed case designs very often. If it disappears from the bottom unit, that's a pretty good indication that whatever design comes next, potentially 3-4 years out, will go IGP only.
 
I hope Apple releases a cheaper 15" with just the integrated Iris graphics. That would be more than enough for what I do, and I may consider getting the 15" this time around if that is the case. We shall see; it heavily depends on what happens with the 13" laptops as well.

They have done that once before with the 15" to drop the price, so that is definitely a possibility this time around given the performance increase.

This is a market that Apple doesn't have covered; some of us want a 15-inch retina screen but don't require full-on 'pro' specs. Some of us want a 15-inch retina Air.

If they ditch the dGPU and can therefore offer the machine X amount thinner/lighter and X dollars cheaper, that'd be great for me and my market segment!
 
One option I started to consider after seeing the specs is an upgrade to a 760M.
http://www.pcgameshardware.de/Grafi...hot-mit-Spezifikationen-der-GTX-760M-1068500/
It is clocked quite a bit lower, but it is a GK106 with double the shaders, so it should end up quite a bit faster.
Given that the rMBP clocked its 650M at 900 MHz, which is higher than the 660M spec, I see the 760M as a possibility.
It is still a stretch, as the 650M seemed to require very little cooling and overclocked well. Who knows whether the 760M has equally conservative TDP ratings; it looks a lot hotter as a chip, at least. It should be at least doable with slightly reduced clocks. More problematic is probably that the 760M, with all those shaders, does quite badly on battery life under light load, which makes Apple's muxed switching even worse. I think a GPU like the GTX 760M really should only be used under medium to high load or shut down entirely.
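As a very rough sanity check on "lower clocked but double the shaders": the 760M shader count and clock below are my assumptions based on the leaked specs linked above, not confirmed figures, and this ignores bandwidth, boost behaviour, and everything else.

```python
# Rough throughput comparison: shader count * core clock (ignores bandwidth, boost, etc.).
# 760M figures (768 shaders @ ~657 MHz) are assumed from early spec leaks, not confirmed.
shaders_650m, clock_650m = 384, 900   # rMBP 650M as shipped
shaders_760m, clock_760m = 768, 657   # assumed GTX 760M

ratio = (shaders_760m * clock_760m) / (shaders_650m * clock_650m)
print(f"~{ratio:.2f}x the raw shader throughput")  # ~1.46x
```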

The thing about dedicated GPUs is that as we advance, IGPs get more and more power efficient, and that counts most in notebooks. Sharing a memory interface and dynamically allocating resources just yields more performance at the same efficiency. Closer and more integrated is usually better too. That is why dedicated GPUs will inevitably die out wherever power efficiency matters. Intel and AMD APUs will keep eating into the pie until dedicated GPUs aren't worth it in anything other than 200W gaming notebooks.
We aren't there yet, though.
With the HD 4000, the low-end entry-level GPUs pretty much died out; Nvidia and AMD both decided there isn't really any reason for 64-bit GPUs to exist anymore.
128-bit GPUs will be next. The low end will be retired by Iris Pro; the high end will battle it out with Broadwell and the two generations after it. I assume it will survive Broadwell by a hair and lose to the generation after.
Everything above that might last longer, because at that point moving some heat away to a different chip is more important than the best power efficiency.

For a dedicated GPU to lose and become unnecessary, it doesn't have to lose a benchmark; it just has to stop winning by a sufficient margin to justify all the extra space, heat, and power. 20nm might manage that against the 40% Broadwell increase that Intel promises, but only if Nvidia does more than the standard die-shrink efficiency gain. Maxwell needs to be better than Kepler in more than just manufacturing.
 
Discrete GPUs will stick around as options for quite a few years yet. Intel will need to create an iGPU that can at least perform as well as a ~40W dGPU before integrated graphics start being used in the top-end MBPs. Intel could probably do it now, but the costs would be prohibitive. Tying a large number of GPU cores, a large CPU, AND a couple of GB of super-fast RAM all to the same piece of silicon is just begging to have the whole thing bricked by a single small flaw.

If Intel can keep up the pace, they will be able to kill off the discrete GPU market, as having the CPU, RAM, and GPU on the same chip has obvious, colossal bandwidth benefits, and eventually it will have cost and power consumption benefits too. But I still think that's a long time away.

Also, I don't understand Apple's choice of GPU, clock speeds, and the designations they give them anymore. My 6750M can be overclocked by 30% and undervolted by 7% to run cooler and much faster than stock. So why do I have to do it? Why doesn't Apple do it at the factory? Furthermore, if the 650M in the new MBPs is clocked higher than the 660M and is otherwise identical, why doesn't Apple just say it's a 660M? Or even factory overclock it further and call it a 665M?

It's like they are purposefully underselling their already artificially under-performing hardware...
Makes no sense.
 
It is because not all chips are created equal. The 650M ran at all sorts of clock speeds quite well. Asus runs it at 900 MHz too. Some run it at the stock 732 MHz, some higher, some lower.
The 650M with GDDR5 is by default clocked so low that it performs about the same as a 650M with DDR3. It has quite a lot of headroom. Nvidia also sells a more expensive GTX 660M, which is the exact same chip, but you are only allowed to market the chip as a GTX 660M in the specs if you paid for it. Nvidia wants the money, and it doesn't want to sell one chip at a single price point that anyone can use as they please, which would also confuse customers about what they are actually getting.
Apple is just cheap. They didn't want to pay for the GTX 660M because they didn't think the name was worth the publicity, especially since they might have to follow up and a 760M might be too hot next round. It is all marketing and cost. They don't care about informing the customer about what they are actually buying. The 330M in 2010 they significantly underclocked, to 500 MHz (-15%).

The 650M was popular and obviously ran much cooler than many assumed. Since many review sites compare benchmark scores, partly to see whether the cooling is good enough to keep performance up, that is probably what started all this overclocking.

Generally, not all chips are equal, and no company will do a factory overclock or undervolt unless it can apply it to every chip it sells without stability issues.

Intel will, for now, end up about 50% slower than midrange 650M-class GPUs. With 14nm they can add transistors again. I doubt they will increase core counts; 2 cores is enough for anything non-intensive, and I don't really see going to 6 cores as an option in the mobile space either. They seem quite content with the size of their L3 cache and didn't increase it with 22nm. I guess it just doesn't pay unless AMD were to magically catch up and challenge CPU performance.
So they can dedicate almost the whole transistor increase to more GPU processing arrays, i.e. more EUs. From 40 they will probably go to at least 60, and potentially even more.
Kepler was a huge boost in energy efficiency; Fermi was terrible in comparison. Nvidia claimed a 100% increase in efficiency with that new architecture, and it isn't far off. They went from being quite a bit worse than AMD to being better. The only thing Kepler sucks at is GPGPU (OpenCL, CUDA, etc.).
I don't see them pulling off the same again with Maxwell. Who knows but them. If it is only 20nm (still no FinFET) and there's no magic rabbit in the architecture's efficiency, they will lose ground.
Intel doesn't have the good drivers or the perfected architecture, so they have way more room to improve. They also only need to compete with 14nm 3D transistors against a 20nm dGPU that suffers from having to deal with its own memory controller and VRAM. DDR4 is coming too. Intel doesn't even need to win benchmarks; it only needs to make dGPUs obsolete.

Unless Intel deliberately limits itself to low, sub-28W TDPs and/or loses interest in GPU development, I doubt it will take "quite a few years".
 
In my case, a discrete GPU isn't really needed. I don't do tasks that require the capability it provides, so offering a model without a discrete GPU at a lower price would be great. However, they should also bump up the spec of the offered GPU(s) for users who really do need that power. The 15" is the top of the line and should be offered with hardware specs for users who require a high-end, fast machine for productivity.

Deliberately removing the dGPU option would really be a disservice to users who need the power. Having it available both ways would simply satisfy more users and increase market share and profit, but only offering it without discrete graphics would be a huge mistake.
 
Yeah, it makes sense and does sound like Apple. But the MacBook Pro wouldn't be considered a Pro laptop if one of its most essential features (i.e., a dedicated graphics card) were taken away.

 
Not "Iris", "Iris Pro". Plain Iris is not acceptable even with a discrete GPU.
 
MacBook Pros don't even have a discrete sound card, and they are supposed to be used by many music makers. That says it all about what Apple thinks "PRO" means: beautiful package outside, subpar/integrated components inside, super high price tag, maximum profit.
 
What?! There's no need for Apple to do that!

I don't know any music producer who doesn't use a professional sound card, which has the proper outputs for monitors (jack/XLR), pre-amps for microphones, etc.

A 3.5mm output isn't used by anyone other than hobby producers, so it wouldn't make sense for Apple to use a better sound card (and I don't know of any laptop that does!).
 
Removing the dGPU would allow Apple to make the rMBP even thinner (as the Iris requires less cooling than the 650M).

Indeed.

But I do not think they would make the 15" thinner. If Apple moves this way, wouldn't they be creating competition for their own super-thin MBA? As far as non-technical people go, the only difference between the MBA and the rMBP that I ever hear them talk about is the retina display.

What do you guys think?
 
I'm sure it won't be much thinner. The 13" will maybe be a hair thinner, so that it's as thin as the 15" rMBP. It can get a little thinner, but it will never be as thin as the Air.

But Apple really has to make a clear difference between the Air and the rMBP, especially if/when they discontinue the cMBP.

The "Pro" in the cMBP's name also makes it more difficult for customers to distinguish between the three different MacBook models.

So a lineup with only the Air and the MacBook Pro (I think they'll eventually drop "Retina" from the name once the old cMBP is discontinued) draws a clear line between the two, so that people can tell them apart:

Air: The mainstream laptop. Thin, good battery life, but not that powerful.
Pro: For the "pro" market, with its retina screen and performance, but a little thicker and with worse battery life. It also has two Thunderbolt ports and HDMI out (the MBA only has one TB port and no HDMI).

I don't see the Air getting a retina screen in the next two years.
 
I knew someone would come out with that argument. I know music producers ALWAYS buy an external pro sound card; I was referring to audiophiles who would love to have a good-quality DAC built in for the price of an MBP, rather than simply Intel integrated sound. That would be value added for the price we pay for machines that are supposed to stand out from the rest.
 
Eventually they have to make the Air retina too. With IGZO displays coming, efficiency isn't as much of a concern anymore, and most of the Windows competition in that price range has full-HD displays.

I don't think the retina screen will be a differentiating factor for long. Next iteration, the Air should get a retina display too.
 