I don't take it personally :) I just think you are wrong.

"but if Intel finally manages to catch nvidia/amd higher-end mobile gpus" - that will probably never happen, as new GPUs on 14/16nm will arrive this year

"(which has already done with the middle-end)" - if mid-range means the 940M DDR3 then yes, but for me mid-range is the 960M and high-end is the 970M and 980M (comparing the Iris 550 to the 950M GDDR5, we can be sure the Iris 580 will definitely be slower than the 950M GDDR5)

"the Intel iGPUs are much better performing (OpenCL, etc) and have more brute force than the gaming nVidias" - that's news to me, can you give me some links to benchmarks?

No, not this year, but if they managed to reduce the enormous gap there was between iGPUs and dGPUs, that shows their progression curve is much faster than the nVidia/AMD one. So expect them to be on par in 2-3 years. And why do you think Apple will go for the GDDR5 version?
Don't use Zeno's paradoxes (the tortoise one) to argue that iGPUs will never catch dGPUs.

There was one guy here who explained it well (the brute-force difference between iGPUs and dGPUs for graphics computing); I don't know where the post is. But here are a few floating-point performance numbers, from known GPUs that are supposed to be on par:
* GTX750m (GDDR5) = 722.7 GFLOPs
* Iris Pro 5200 = 832 GFLOPs

Well, I said "on par", but Apple put a worse version of this dGPU (the GT750M) in its top-of-the-range model. Just search for the floating-point performance of the GT750M and the Iris Pro 5200 that the current MBP 15" has, and you'll see how the average dGPU is not the best choice for floating-point calculation. This of course doesn't mean it performs worse in games, since other factors come into play.
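For context, those headline numbers are just peak FP32 throughput: execution units × FLOPs per unit per clock × clock speed. A quick sketch that reproduces both figures (the unit counts and clock speeds are my assumptions from public spec sheets, not numbers from this thread):

```python
# Peak FP32 throughput = units * FLOPs-per-unit-per-clock * clock (GHz).
# Core counts and clocks below are assumptions from public spec sheets.

def peak_gflops(units, flops_per_unit_per_clock, clock_ghz):
    return units * flops_per_unit_per_clock * clock_ghz

# GT 750M: 384 CUDA cores, 2 FLOPs/core/clock (FMA), ~0.941 GHz boost
gt_750m = peak_gflops(384, 2, 0.941)

# Iris Pro 5200: 40 EUs, 16 FP32 FLOPs/EU/clock, ~1.3 GHz max turbo
iris_5200 = peak_gflops(40, 16, 1.3)

print(f"GT 750M:       {gt_750m:.1f} GFLOPs")   # ~722.7
print(f"Iris Pro 5200: {iris_5200:.1f} GFLOPs")  # ~832.0
```

That the same formula lands exactly on both quoted figures suggests they are theoretical peaks, not measured benchmark results.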
 
"not this year, but if they managed to reduce the enormous gap there was between iGPUs and dGPUs, that shows their progression curve is much faster than the nVidia/AMD one. So expect in 2-3 years to be on par" - no, the gap is smaller than before only because dGPUs are still on the very old 28nm process while Skylake is on 14nm (that's a huge difference), but this year the gap will be very big once again as new 14/16nm GPUs arrive

"
* GTX750m (GDDR5) = 722.7 GFLOPs
* Iris Pro 5200 = 832 GFLOPs"
and the 950M is ~1.3 TFLOPs (a mid-range mobile GPU), and of course no link to any benchmarks

"and you'll see how average dGPU is not the best thing to use in floating point calculation" - do you mean 32-bit or 64-bit floating-point operations? GPUs are built for 32-bit floating-point operations ;) 64-bit is only good on the Tesla editions, as it's only used for scientific purposes, and I'm sure nobody uses Macs to calculate doubles ;)
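On that FP32/FP64 point: consumer GeForce parts do deliberately throttle double precision. A rough sketch of the gap, using throughput ratios I believe are correct from public specs (treat them as assumptions, not thread data):

```python
# Approximate FP64:FP32 throughput ratios (from public specs; these
# are my assumptions, not figures from the thread).
fp64_ratio = {
    "GeForce (Maxwell GM107, e.g. 950M)": 1 / 32,
    "Tesla (Kepler GK110, e.g. K40)":     1 / 3,
}

fp32_gflops = 1300  # the ~1.3 TFLOPs 950M figure quoted above
for name, ratio in fp64_ratio.items():
    print(f"{name}: ~{fp32_gflops * ratio:.0f} GFLOPs FP64")
```

So a ~1.3 TFLOPs FP32 consumer mobile GPU would manage only about 40 GFLOPs of FP64, which is why doubles are a Tesla-class workload.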
 
when we get an iGPU performance boost of around 30-40% every year and dGPUs get only 10-15%, it's mathematically certain that sometime soon we will get the same performance from a top iGPU and a mid-size dGPU
i bet if the iGPU from Cannonlake gets another 40% boost and the nvidia 1050M card gets only a 10-15% boost, then we will have top iGPU > 1050M
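That compounding argument is easy to make concrete. A minimal sketch, assuming (illustratively) a 2x starting gap and the growth rates quoted above:

```python
# Years until an iGPU growing 40%/yr catches a dGPU growing 10%/yr,
# assuming (purely for illustration) the dGPU starts out 2x faster.
igpu, dgpu = 1.0, 2.0
years = 0
while igpu < dgpu:
    igpu *= 1.40   # +40% per generation
    dgpu *= 1.10   # +10% per generation
    years += 1
print(years)  # 3
```

With those inputs parity arrives in about 3 years; the whole argument of course stands or falls on whether the 40%/10% growth rates actually hold.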
 
Pascal will get approximately a 70% boost compared to Maxwell
 
So again. Intel demoed the HD 580 in Just Cause 3 at 1080p, and it got a minimum framerate of 30 FPS. What they did not say is the detail level.

The GTX 950M on high, not ultra, just high at the same resolution averages 30 FPS. So its minimums are lower at that resolution.

Nvidia released the desktop GTX 950 (768 CUDA core model) with a 75W TDP to counter the HD 580. Why do you think they did this? Because it is that rubbish? Or because the HD 580 is such a big threat to anything below the GTX 950 in performance?

The GTX 750 Ti and the GTX 950M/960M/850M/860M use the same core - GM107. There is good reason to believe the HD 580 is at least as fast as the GTX 950M. But will it be like that? We have to wait and see what the reviews say.

About Cannonlake. CNL will bring 4-core, 6-core and 8-core mainstream offerings - a much wider range of core counts. Iris Pro in Cannonlake may have, and this is my safe estimate, 24 more cores than Kaby Lake, but with the new architecture that will be introduced in Kaby Lake. So think about 50-60% more performance than Skylake's Iris Pro.
Not exactly ;). GP108, which will be viable for the MBP due to TDP, is slated for early 2017. GP107 may have too tight a thermal envelope to fit in the MBP, but it should be a 1024 CUDA core GPU with an architecture similar to Maxwell (because Pascal IS Maxwell, but with FP64, Unified Memory and NVLink). So think of GP107 as having desktop GTX 960 levels of performance.

AMD is a bit of a different story. What we know is that at the beginning of the year AMD demoed a small Polaris GPU running Star Wars Battlefront at 1080p medium settings, and it averaged 60 FPS while consuming around 20W, with a 0.85V voltage and an 850 MHz core clock. We know the architecture is quite different; however, on the software side it will behave and report just like the Tonga/Amethyst GPUs (R9 395/395X from the iMac). By the looks of things it will have 1280 GCN cores, but more graphical horsepower than previous versions of those GPUs. So it will be a pretty steep mountain to climb for Nvidia and Intel. Of course, this is all based on rumors and what has been available in terms of architecture technology so far.
 
Just bought a number of Precision 5510 laptops for my design department. Apple has been telling pro users to go away for a while now, and we've completed our transition.

We were once Apple top to bottom. Xserves, Mac Pros, MBP's.

First the xserves were abandoned.

The workstation market has been abandoned.

I had been holding out for a Skylake MBP, but even if one ships, then what? It'll hang around for at least a year without an update, have too little memory, too few ports, etc.

We'll keep one or two around for the occasional Mac-only software. And I plan to still use them at home, where performance isn't much of a concern.

I haven't had a windows laptop in over 10 years but I'm excited to try!
What version of the MBP were you on? What was the issue with your existing Mac Pros that you moved from them to another OS/manufacturer? Were your competitors getting jobs over your company due to your hardware limitations?
 
Pascal will get approximetly 70% boost comparing to maxwell
So you think the 1050M will have a 70% performance boost over the 950M?
That's very hard to achieve these days.
So I don't think we will get more than 20%.
 
At least 50% more performance, because of the transition to the 14nm node - that is what we can expect from Pascal. We do not know what to expect from AMD, because... well, the core count will not reflect performance the way it did in past generations of GPUs. A 2560-core GCN4 GPU may be as fast as a 3584-core GCN3 GPU. This is only an analogy; we have to wait for the end results to understand what is happening. However, forum members on AnandTech have explained a lot about the new architecture, and brought a lot of understanding about Pascal as well.

The easiest way to explain all of the differences: in the last generation of GPUs, AMD hardware was 2 years ahead of Nvidia hardware, but Nvidia software (drivers) was 2 years ahead of AMD's. Right now it looks like AMD's software has caught up, and Nvidia's hardware is starting to lag behind AMD's.
 
the transition to 14nm certainly means more power efficiency, but not really a lot more performance
but hey, if you have a link to prove me wrong, please post it - I would enjoy being wrong :)
 
http://www.tsmc.com/english/dedicatedFoundry/technology/16nm.htm

TSMC said:
TSMC's 16FF+ (FinFET Plus) technology can provide above 65 percent higher speed, around 2 times the density, or 70 percent less power than its 28HPM technology.

In other words, in the same package on TSMC's 16nm process (which Nvidia will use) you can pack 2 times more transistors and make the whole core count twice as big compared to 28nm, with an even better power envelope.
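Worth noting that TSMC's claims are alternatives (higher speed OR more density OR less power), so combining them is optimistic. Still, the back-of-the-envelope arithmetic behind "twice the cores in the same power envelope" looks like this:

```python
# TSMC's 16FF+ vs 28HPM claims from the quote above. These are
# marketing alternatives (speed OR density OR power), so multiplying
# them together as below gives an optimistic upper bound.
density_gain = 2.0               # ~2x transistor density
power_per_transistor = 1 - 0.70  # up to 70% less power per transistor

# Same die area -> roughly 2x the cores; total power relative to 28nm:
relative_power = density_gain * power_per_transistor
print(round(relative_power, 2))  # 0.6 -> 2x cores at ~40% less total power
```

Real chips land somewhere between the individual claims, since speed, density, and power trade off against each other.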
 
Look at GPU history: every time Nvidia brings out a new generation of cards on a new production process, it's approximately 1.7x faster than the previous one.
 
so you say the Nvidia 1050M will be 70% faster than the Nvidia 950M?
It will be Nvidia's choice. Kepler was 1.8x faster than Fermi (780 Ti vs GTX 580), and Fermi was 1.64x faster than Tesla (GTX 580 vs GTX 285) at similar power consumption. So Nvidia can deliver a 1050M with the power consumption of the 950M that is 1.7x faster, but it will depend on Intel GPU performance and Polaris performance, because Nvidia also wants to keep the gap between desktop and mobile GPUs.
 
I don't want a separate power-hungry GPU. I want a sleek, integrated, low-power solution.

If you want to play computer games (game fans seem to be getting older and older), buy a proper computer and connect it to a high res monitor.

I don't play games.

If I want to program a GPU, I will use an NVIDIA Tesla card. Or an Intel Xeon Phi, depending on what I'm doing.
 
13 inch is the way to go. Or base model 15 inch. Why do you have to make yourself more important than others?
 

This individual may come off as arrogant, but they have made some good points (despite presenting them poorly).
 
I'll probably never understand why some people want a laptop that is only average in power.
For me, having a computer that is capable of handling different tasks is really important.
Especially if it's a computer that costs 2000 €.
Do I really need the most powerful dGPU on the market right now?
Probably not, but if next year I want to play a heavy game, maybe I won't have to worry about it.
I want durability in my 2000 € working machine, at the cost of having to carry 300 grams more of aluminum and batteries.
And I want to have the possibility to choose between different categories of power.

And by the way, a dGPU isn't always about gaming. There are other tasks that need a dGPU to execute.
 

Well, look at it this way. The trend is thinner/lighter, we know this. And saying it is a "Pro" machine is hardly a point of argument anymore. Look what they did to the Mac Pro: they took out expandability for form factor. What makes you think the MacBook Pro won't meet a derivative of this fate? I would imagine with Apple's eagerness to add USB-C to the 12-inch MacBook, surely that means they are trying to gut the machine.

But why? Well, so you can presumably use external GPUs/adapters to supplement whatever you had done prior. In my opinion, you would be delusional to think Apple won't shrink the MacBook "Pro" and kill some ports along with it. Look at their entire line. The iMac: why on EARTH did they shrink it? It is a DESKTOP computer, for Pete's sake; they did it for aesthetic purposes. Look at the Mac Pro: yeah, it is now a cylinder for some strange reason. Now look at the MacBook Pro Retina... why would they not shrink a MOBILE laptop? Your "Pro" level needs can be supplemented with peripherals.
 

What tasks? I'm sorry, but you will be able to drive a 5K display (probably two) from the new MBP's integrated GPU. You will be able to achieve an impressive GFLOPs figure for matrix manipulation operations on the iGPUs mentioned.

What are you doing on your portable computer?
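For what it's worth, the "GFLOPs of matrix manipulation" claim is easy to sanity-check yourself: an n×n matrix multiply costs roughly 2n³ floating-point operations, so dividing by wall time gives achieved throughput. A quick sketch using NumPy on the CPU purely as a stand-in (the same accounting applies when the work runs on an iGPU via OpenCL):

```python
import time
import numpy as np

# Estimate achieved throughput of a matrix multiply:
# an n x n matmul costs ~2*n^3 floating-point operations.
n = 512
a = np.random.rand(n, n).astype(np.float32)
b = np.random.rand(n, n).astype(np.float32)

t0 = time.perf_counter()
c = a @ b
elapsed = time.perf_counter() - t0

achieved_gflops = 2 * n**3 / elapsed / 1e9
print(f"{achieved_gflops:.1f} GFLOPs achieved on a {n}x{n} FP32 matmul")
```

Achieved numbers always land well below the theoretical peaks quoted earlier in the thread, since memory bandwidth and occupancy limit real workloads.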
 

...so you're saying it won't be a "pro" and that's fine?
 

What I'm saying is it will be called by whatever moniker they choose, but I believe it is inevitable that Apple makes their entire mobile lineup thinner and kills certain ports along with it in order to do so. EVENTUALLY they will kill off the 3.5mm audio jack as well - again, EVENTUALLY. Now? Probably not. It just seems like the writing is on the wall.

Who is to say that a "Pro" level laptop cannot be redefined so as to include more suitable external solutions while keeping the laptop/notebook itself in a slimmer design? Heck, Apple thought so with the Mac Pro desktop.
 

For the tiny, tiny number of these GPU people (whoever they are), they can connect an external GPU over USB-C. The benefits (power and latency) of an integrated GPU with shared memory are obvious. Really, these GPU users would be better off buying a Tesla or running whatever it is they're doing on a GPU farm. This idea that a portable machine has to provide top-end performance irrespective of user requirements is plain silly.
 