Simplest answer. Two versions:

One with ONLY the GT4e iGPU, and a second with GT2 + an AMD/Nvidia dGPU.

One thing that is apparent from this roadmap is that no matter when the update appears, we will be waiting a very long time for the next hardware update with more performance. Moore's Law, for god's sake ;).

P.S. The MacBook Air can get Kaby Lake CPUs. The MBP 13 and 15 - Skylake CPUs.


Intel GPUs have such substandard performance that the only reason they are used is the "thin and light" rabbit hole that Apple and all the other "ultrabook"/tablet vendors have been chasing, at the expense of battery life.

There's also only one more die shrink expected (5 or 7nm); after that, we're not going to see any more, because that's it. We've already seen that NAND memory can't shrink below 24nm and retain P/E cycles. A similar problem exists for DDR4 SDRAM at 20nm, where increased error correction is required.

So we might not actually see a 5 or 7nm process, because SDRAM and NAND/NOR flash memory can't use the older 14nm process to repurpose those fabs. We might just see GPUs on those fabs.

Which goes back to this entire problem with Intel GPUs. Intel's GPU parts are weak, super-weak. They might have been fine for what passed for a "netbook" before WebGL was a thing, but the bar has been raised. Now a "minimal" computer cannot get away with iGPU-class performance unless it has parity with a $100 desktop GPU. Intel's fastest iGPU is slower than an Nvidia 675M from 2012; a Passmark score of 1950 is where the highest-end Iris Pro 580 sits.

Intel cannot stick a GPU that performs well into a mobile part; it just consumes too much power. Add the fact that the only benefit you get from an iGPU on a desktop is the broken Quick Sync, and it becomes readily apparent that Intel doesn't care about competing on GPU performance, only about providing a crufty GPU to OEMs who want to flog rubbish-grade ultrabooks/subnotebooks. One is better off buying an iPad Pro than a 15W laptop/MacBook Air.
 
Three reasons: power, reliability, and price. AMD and NV discrete GPUs use as much power as the CPU itself, and more under significant load. This murders battery life, it makes for an unpleasant user experience due to heat and noise, and most importantly to Jony, the thermal constraints significantly limit their options on radical industrial design.
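The power argument above can be made concrete with back-of-the-envelope arithmetic. All of the wattage and capacity figures below are illustrative assumptions, not measurements from any specific machine:

```python
# Rough battery-life arithmetic for the iGPU-vs-dGPU trade-off.
# Every figure here is an illustrative assumption, not a measured value.

BATTERY_WH = 75.0      # assumed 15" laptop battery capacity
CPU_W = 25.0           # assumed average CPU package draw under load
IGPU_EXTRA_W = 5.0     # assumed extra draw with the iGPU active
DGPU_EXTRA_W = 35.0    # assumed extra draw with a discrete GPU active

def runtime_hours(battery_wh: float, draw_w: float) -> float:
    """Hours of runtime at a constant average power draw."""
    return battery_wh / draw_w

igpu = runtime_hours(BATTERY_WH, CPU_W + IGPU_EXTRA_W)
dgpu = runtime_hours(BATTERY_WH, CPU_W + DGPU_EXTRA_W)
print(f"iGPU only: {igpu:.1f} h, with dGPU active: {dgpu:.1f} h")
```

Even under these generous assumptions, a dGPU that adds 30-40 W under load roughly halves runtime, which is exactly the trade-off being described.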

But if someone wants the most powerful GPU in a laptop... do they really care about battery life?

For general computing tasks... sure. Which is why it was nice to have a switchable iGPU/dGPU.

But if you need power... you literally need power.
 
Neither do you, because that would be 100% dependent on how Apple deals with things.

Anyway, dGPUs mean less battery life, bigger form factors or throttling, and higher risk of damage. Macs shouldn't have components from AMD or Nvidia.
Like Apple is concerned about throttling. Their laptops already thermal throttle like crazy.
 
Three reasons: power, reliability, and price. AMD and NV discrete GPUs use as much power as the CPU itself, and more under significant load. This murders battery life, it makes for an unpleasant user experience due to heat and noise, and most importantly to Jony, the thermal constraints significantly limit their options on radical industrial design.

As for reliability, AMD and NV's graphics stacks have always been garbage written by the C-teams at their respective companies. They're one of the primary sources of crashes, panics, and product delays, and always have been.

And buying third party GPUs significantly drives up the bill of materials cost for the configs that use them. Apple can have its own graphics silicon fabricated for far less than the premium it would pay either vendor.

Apple's own GPUs from iOS hardware could easily beat either of these companies on performance, power, and price. It's just a matter of time before they do. I think we'll see ARM Macs by 2018, and possibly Apple GPUs even earlier.

That may piss off a lot of folks who rely on Mac-only or Mac-optimized software. And there's no guarantee that the major software players (e.g. Adobe, MS, Autodesk) would port versions of their software to ARM-based Macs.
 
But if someone wants the most powerful GPU in a laptop... do they really care about battery life?

For general computing tasks... sure. Which is why it was nice to have a switchable iGPU/dGPU.

But if you need power... you literally need power.

Agreed. And the cMBP really wasn't that thick, and it had plenty of room for a battery and a decent keyboard.

I would ask if someone wants the most powerful GPU in a laptop, do they really care about the thinnest possible computer?
 
Do you have evidence to back that up?


Probably not, but it does seem logical that they would, considering the direction they are headed. Just as they had a build running on Intel before the switch from PowerPC.

It would be a massive, massive move to make and it would instantly destroy basically everything that currently exists. Unless they have a trick up their sleeve.

There would be no Windows compatibility obviously, though virtualisation could take care of that.

It would also be ridiculously quick and easy to port over existing iOS apps, remade with a suitable desktop interface of course and there's no shortage of developers or apps there.

There is of course the possibility of losing some desktop apps that we currently rely on. However, alternatives would naturally appear, and if sales of the hardware are good enough, it would be rather foolish for the big players not to port over their applications. Just as they did after the Intel switch.

When you consider just how much performance Apple can squeeze out of their A-series chips in something as limited as an iPhone, the possibility of significant performance increases given the power and thermal headroom of a laptop is certainly interesting. I wouldn't mind seeing what they could achieve.
 
yes, but hackintosh :p

I'm currently running a Haswell-based i5 with a 290X in OS X, but I'd never recommend it for production use. So far it's been fairly reliable, but I'm terrified to update to Sierra. OS X hackintoshes with graphics cards can be difficult to set up and get working right.

But it alone outperforms virtually every Apple computer in their product lineup, save maybe the highest-end i7 iMac. And my computer cost me < $1500, including a 34" ultrawide display.
It may even outperform the K, since Apple uses the unlocked variant to underclock the chip.
 
Seems smart to just return to dedicated graphics. Seems like most people would prefer it anyway.
The problem is that dedicated graphics chips aren't always the big win many think they are these days, at least not the low-end integrated-GPU replacements. For things like OpenCL processing, Intel's integrated chips can often be a big win for the user. A high-end mobile GPU chip is a different story, but Apple won't have a range of computers if they only support one GPU model.

Speaking of GPUs, while Intel is getting a lot of the blame for the lack of an update (they deserve it), the lack of mobile 14nm GPUs is also a factor. From an engineering standpoint, Apple really needs the process-shrunk GPUs to deliver a worthwhile performance increase. People can whine all they want, but without the right components Apple can't do much for upgrades.
 
I've been eagerly anticipating Zen. AMD keeps promising Intel-level performance out of it. Got a link to any early benchmarks? I haven't stumbled on any yet. And what's the price point we can expect? Will they be similarly priced to Intel (if equal in performance), or will they keep with the "bang for the buck" route and be cheaper?

I'm also hesitant. The last time I remember a hype train over AMD CPUs was Bulldozer, and it turned out to be a fairly big dud.

The one thing I like about going AMD for an APU or GPU is their OpenCL implementation and GPU compute, which seem to be more widely supported and more open than CUDA.
I believe the lead designer for Zen is the same person who designed the Athlon 64, which was a massive hit.

Hopefully he can pull a rabbit out of his hat again with Zen.
 
It may even outperform the K, since Apple uses the unlocked variant to underclock the chip.

Yup! I remember when I was looking at it as an upgrade. I saw the i7-K and thought, "finally, Apple is putting in some real power!"

Then I looked at the thermals.
Then I looked at the core clock speeds under load.

And I realized (with confirmation online from other testers) that the reason Apple went with the K was so that they could underclock the CPU under load to fit into the thermal envelope they demanded when they made the iMac super thin (because we all know that's what people look for in a desktop computer: super thinness).
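The underclocking story is consistent with the first-order CMOS dynamic power model, P ≈ C·V²·f: lowering the clock usually also allows a lower voltage, so power drops faster than linearly. The voltage/frequency pairs below are made-up illustrative numbers, not any iMac's actual V/f curve:

```python
# First-order CMOS dynamic power model: P ~ C * V^2 * f.
# The capacitance and the voltage/frequency pairs are illustrative assumptions.

def dynamic_power(capacitance: float, volts: float, freq_ghz: float) -> float:
    """Relative dynamic power at a given voltage and clock."""
    return capacitance * volts ** 2 * freq_ghz

C = 1.0  # lumped switched capacitance (arbitrary units)

stock = dynamic_power(C, 1.20, 4.0)          # assumed stock turbo: 1.20 V @ 4.0 GHz
underclocked = dynamic_power(C, 1.05, 3.3)   # assumed underclock: 1.05 V @ 3.3 GHz

# The ratio falls well below the pure frequency ratio 3.3/4.0 = 0.825
# because the voltage term enters squared.
print(f"power ratio: {underclocked / stock:.2f}")
```

Because voltage enters squared, a roughly 17% clock reduction paired with a modest voltage drop can cut dynamic power by over a third, which is why buying an unlocked part and running it below stock is an effective way to squeeze into a tight thermal envelope.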
 
Agreed. And the cMBP really wasn't that thick, and it had plenty of room for a battery and a decent keyboard.

I would ask if someone wants the most powerful GPU in a laptop, do they really care about the thinnest possible computer?

Exactly.

A lot of comments here are suggesting things like "Apple won't put a GTX 1060 in a laptop because it will murder battery life."

But what if battery life isn't your primary concern? What if GPU horsepower is?
 
I'd be absolutely willing to give AMD CPUs another shot. I had some really good history during the Athlon XP, X2, and 64 days, when Intel was still trying to cram NetBurst down our throats. It would be nice if they could return to legit high-end competition; I'm just afraid of getting my hype train up.

I had my hype train up for Polaris, and while they're good GPUs, there's no flagship product yet that really showcases how far Polaris can be pushed. The 480 is a good card, but it's on par with the 290X/380X in performance, which, while using less power and being cheaper, isn't really showcasing forward performance gains for single-card solutions.
I'm still of the opinion that AMD is waiting for Zen to release their "490X", or at least waiting for the "new card hoopla" to die down, so they can come out of left field with a flagship product when it's unexpected.
For the Polaris architecture, the RX 480 is the flagship. It is the only design that will appear from this line. Zen APUs will use the Vega architecture, and the upcoming high-end and enthusiast lineup of GPUs is also based on the Vega architecture. The RX 490 is based on a Vega GPU.
 
Exactly.

A lot of comments here are suggesting things like "Apple won't put a GTX 1060 in a laptop because it will murder battery life."

But what if battery life isn't your primary concern? What if GPU horsepower is?

That's what I don't get.

Sure, MacBook Air, MacBook: focus on thinness and battery life as your #1 priority if they can achieve what 80% of the world needs them for (Facebook, web, etc.).

But when you call something "PRO", I expect it to focus on performance FIRST, not thinness and "pretty". I'm OK with losing a couple hours of battery life for a full-powered laptop. I'm NOT OK with losing power just to make it 2mm thinner.
 
It's not about "courage", it's about the engineering. Intel FORCES companies to take the iGPU now; you can't buy a mobile chipset without it. So you're always going to be carrying mobile graphics... there's no way to turn it "off" in terms of battery life. Your only option is to add an expensive and large third-party GPU if you want better. You won't get Apple's simplicity of computer design then... it's simply not an option for the tiny notebooks now.

I'm definitely leaning toward a macOS ARM-based machine. Intel has backed PC makers into a corner on the graphics issue for almost 10 years now. AMD simply doesn't cut it for mobile workstations... so going to an AMD solution with better graphics isn't an option either. We haven't seen what the A10X looks like yet... with extra graphics or cores it would be a monster and would sip battery.

I'm talking about the high-power Iris and Iris Pro GPUs. The PCs that these Iris/Iris Pro iGPUs were aimed at continued to use AMD and Nvidia dGPUs. Every 15" rMBP competitor, for example, uses high-end dGPUs and ignored the Iris Pro altogether. As a result, Intel put the Iris Pro on the back burner.
 
Intel GPUs have such substandard performance that the only reason they are used is the "thin and light" rabbit hole that Apple and all the other "ultrabook"/tablet vendors have been chasing, at the expense of battery life.

There's also only one more die shrink expected (5 or 7nm); after that, we're not going to see any more, because that's it. We've already seen that NAND memory can't shrink below 24nm and retain P/E cycles. A similar problem exists for DDR4 SDRAM at 20nm, where increased error correction is required.

So we might not actually see a 5 or 7nm process, because SDRAM and NAND/NOR flash memory can't use the older 14nm process to repurpose those fabs. We might just see GPUs on those fabs.

Which goes back to this entire problem with Intel GPUs. Intel's GPU parts are weak, super-weak. They might have been fine for what passed for a "netbook" before WebGL was a thing, but the bar has been raised. Now a "minimal" computer cannot get away with iGPU-class performance unless it has parity with a $100 desktop GPU. Intel's fastest iGPU is slower than an Nvidia 675M from 2012; a Passmark score of 1950 is where the highest-end Iris Pro 580 sits.

Intel cannot stick a GPU that performs well into a mobile part; it just consumes too much power. Add the fact that the only benefit you get from an iGPU on a desktop is the broken Quick Sync, and it becomes readily apparent that Intel doesn't care about competing on GPU performance, only about providing a crufty GPU to OEMs who want to flog rubbish-grade ultrabooks/subnotebooks. One is better off buying an iPad Pro than a 15W laptop/MacBook Air.
For every two opinions that you might share with someone else, there will be another two from people who do not want dGPUs in their Macs. Keep this in mind.

IMO Apple's best choice would be betting on two setups of the 15-inch MBP: one with the GT4e GPU, and one with a GT2 + dGPU setup.
 
(because we all know that's what people look for in a desktop computer: super thinness)
I feel like I could stick an Intel NUC with some duct tape to the back of an extra-thin Dell monitor and have a much better computer than most of Apple's lineup for a tenth of the price.

And I wouldn't have to replace the entire thing if I wanted to upgrade or if the monitor got damaged.
 
For the Polaris architecture, the RX 480 is the flagship. It is the only design that will appear from this line. Zen APUs will use the Vega architecture, and the upcoming high-end and enthusiast lineup of GPUs is also based on the Vega architecture. The RX 490 is based on a Vega GPU.

Interesting. Has AMD given us a timeframe to expect Vega yet? I need a GPU upgrade, and AMD currently doesn't really have a clearly defined one from the 290X. The 290X was fine for 1080p gaming on Ultra, but I'm now running 2560x1080, and the extra pixels seem to be too much for it. And I can't justify $500+ for a Fury / 390X for only a minor FPS bump. If I'm going to spend that sort of money on an upgrade, it had better give significant performance gains. Right now, my only upgrade path is a 1070 or 1080, which are insanely priced in Canada (599 CAD for the 1070, and 899 CAD for the 1080).
 
Interesting. Has AMD given us a timeframe to expect Vega yet? I need a GPU upgrade, and AMD currently doesn't really have a clearly defined one from the 290X. The 290X was fine for 1080p gaming on Ultra, but I'm now running 2560x1080, and the extra pixels seem to be too much for it. And I can't justify $500+ for a Fury / 390X for only a minor FPS bump. If I'm going to spend that sort of money on an upgrade, it had better give significant performance gains. Right now, my only upgrade path is a 1070 or 1080, which are insanely priced in Canada (599 CAD for the 1070, and 899 CAD for the 1080).
Fury GPUs, because they have been EOL'ed in the retail/consumer channel, are seeing very large discounts around the world. I think I saw on Anandtech that someone mentioned 319 USD for an Asus Strix Fury, which is the bargain of the century considering the performance.
 
I just looked at the best available Skylake chip for the MacBook Pro, and what surprises me is that despite the shift from 22 to 14 nm fabrication, the clock speeds are no better than what was available for the 4th-generation chip.

In fact, it's actually worse. The old 4980HQ Crystal Well can turbo to 4 GHz, while the 6970HQ reaches only 3.7. Base clock speeds are the same, and maximum TDP is only slightly less for the Skylake chip at 45 W vs 47 W for the Crystal Well. Skylake supports faster RAM, and its integrated graphics are probably much faster, but the high-end rMBP has dedicated graphics anyway.
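Putting the figures cited above side by side (a quick sketch; the spec numbers are the ones quoted in this post):

```python
# Generation-over-generation comparison of the chips discussed above:
# the Haswell 4980HQ "Crystal Well" vs the Skylake 6970HQ.
# Turbo clocks and TDP are the figures quoted in the post.

haswell = {"turbo_ghz": 4.0, "tdp_w": 47}
skylake = {"turbo_ghz": 3.7, "tdp_w": 45}

# Percent change from Haswell to Skylake for each spec.
turbo_change = (skylake["turbo_ghz"] - haswell["turbo_ghz"]) / haswell["turbo_ghz"] * 100
tdp_change = (skylake["tdp_w"] - haswell["tdp_w"]) / haswell["tdp_w"] * 100

print(f"turbo clock: {turbo_change:+.1f}%")  # -7.5%
print(f"TDP:         {tdp_change:+.1f}%")
```

A ~7.5% turbo regression against a ~4% TDP reduction is why the 14 nm shrink looks so underwhelming next to what Nvidia got out of its 28-to-16 nm jump.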

Compare this to what Nvidia achieved with its 10-series graphics cards after a shrink from 28 to 16 nm. They achieved drastic clock speed and performance boosts for the same power compared with previous generation cards.

So why the difference? Is it that the new Intel chips don't perform much better except in graphics and offer only marginal improvements to efficiency? If so, I can see why Apple was in no rush to use them. But more worryingly, Apple seems to have lost its sway and its ability to get what it wants from Intel, like Iris Pro chips in a timely fashion.

I think the Skylake chips will be good for the lower-end models without dedicated graphics, as they'll get a graphics performance boost that takes them near the dGPUs shipped in recent high-end 15" rMBPs. But without a big boost in the dGPU's performance, the high-end model is looking less exciting.

Here's hoping the die shrink in discrete graphics can offer something on the order of the 70% boost we were promised last time but never really got (except for one cherry-picked, AMD-optimised outlier).
 
Apple is quite anti-GPU these days.
Apple's attitude isn't any different than it has been for years now. Discussions about poor GPU performance have gone on for years. The only strange thing here is that the iPhone/iPad people seem to get it, offering very good GPU support.
If an Intel CPU doesn't have adequate integrated graphics, it will be skipped, because Apple won't reverse the path taken against discrete GPUs.
You never know! Seriously, the interest in augmented reality and virtual reality may very well drive Apple to better GPU performance in at least some machines.
I'd really wish that some major Apple customer would request a multimillion-dollar shipment of Macs equipped with cutting-edge Nvidia GPUs, but that kind of customer has already moved to Linux.
Nobody in their right mind would be demanding Nvidia.
The main Mac user wears a watchie and chats with a phonie... that's the sad reality.

Well, that I agree with. The general stupidity of computer users seen in Mac forums highlights this issue to no end.
 
"… the "GT2" tier, which typically has about half the raw power of the GT3 tier, to be launched in the next-generation "Kaby Lake" processor family."

"The situation is a bit better for the 13-inch MacBook Pro and the MacBook Air, which use 28-watt and 15-watt versions of the "U-series" processors respectively. The leaked roadmap indicates that Kaby Lake versions of these chips with GT3e graphics are scheduled to launch in the first quarter of 2017 …"
—per 'MacRumors'




It would seem that Apple's likely release of new MacBook Pros in October with Skylake chips... will be effectively obsolete on day one.

That is, in comparison with the same machines released later in 2017 with Kaby Lake and twice the GPU performance.

Unless you adopt Apple's current strategy and wait another three or four years for an upgrade. But in any event, obsolete in comparison to the broader market.
 
For every two opinions that you might share with someone else, there will be another two from people who do not want dGPUs in their Macs. Keep this in mind.

IMO Apple's best choice would be betting on two setups of the 15-inch MBP: one with the GT4e GPU, and one with a GT2 + dGPU setup.

This is certainly an approach Apple could take, like how the 4K 21.5" iMac stuck with Broadwell for the Iris Pro 6200.

This fall will be Skylake, with the 6770HQ, 6870HQ, and 6970HQ.

When quad core Kaby Lake arrives in 2017, Apple could have the base model continue to use the 6770HQ with Iris Pro, then have the high-end model update with a 7700HQ and a new dGPU.

Knowing Apple though, they will probably just skip Kaby Lake in the 15" altogether.
 
This is likely true across the entire Apple product lineup.

The 21" iMacs use mobile parts and are significantly slower than desktops.
The 27" iMacs that use desktop parts suffer from thermal throttling. Most noticeably, the underclocked i7-K model they use was purposely picked so they could limit the thermals by underclocking and thermal throttling.

The iMac 21.5" Retina uses desktop chips! The only Broadwell consumer desktop chips Intel has.
 