I've learned my lesson not to argue with some types of people about graphics hardware.

But...
[attached compute benchmark charts]

In compute, the HD 5200 was faster than the GT 650M. A person who complains that software is not able to use the hardware, and blames the hardware for it, is either stupid or uneducated.

The HD 580 will only increase the difference.

GT 650M core config: 384:32:16 cores/TMUs/ROPs
HD 5200: 320:40:4 - as you can see, the ROP count is the bottleneck here; that's why the eDRAM helps performance a lot.
GTX 960M: 640:40:16 - the same ROP count as Kepler, a 25% higher TMU count, and a much higher core count.
HD 580: 576:72:9. If you ask me, the ROPs are still the bottleneck here because of the nature of an iGPU, but look at the leap in TMUs. It will not be bottlenecked by texture fillrate, nor by memory (eDRAM). The graphics performance of the HD 580 is higher than GM107's, but it lacks a bit in the number of execution units. That core count is also the very reason the HD 580 is rated at 1.15 TFLOPs of compute power.
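For reference, that 1.15 TFLOPs figure falls straight out of core count times clock. A rough sketch of the arithmetic; the clocks below are assumptions, roughly the boost clocks of the parts in question:

```python
def peak_gflops(cores, clock_ghz, flops_per_core_per_clock=2):
    """Theoretical peak: each core retires one FMA (2 FLOPs) per clock."""
    return cores * clock_ghz * flops_per_core_per_clock

print(peak_gflops(576, 1.00))   # Iris Pro 580 @ ~1.0 GHz -> 1152 GFLOPs, i.e. ~1.15 TFLOPs
print(peak_gflops(384, 0.90))   # GT 650M @ ~900 MHz      -> ~691 GFLOPs
print(peak_gflops(640, 1.10))   # GTX 960M @ ~1.1 GHz     -> ~1408 GFLOPs
```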
 
Thank you for the CAD and Photoshop benchmarks ;) And what dinosaur is the 650M?
 
The bars will change depending on what specific task it is you're doing.

For a general purpose computer, I don't want/need/care for a dGPU. It's a complete waste of valuable battery and costs a lot.

Especially when you look at the graphics capability already available. Connecting two 5k screens over USB-C will be possible. What more do people want???

If I want to optimize my computation, I'll do it on a Xeon Phi, an NVIDIA Tesla, a Xeon with AVX-512, or whatever. Not on my portable computer. It's stupid to even try.
 
You compare the SPs of Maxwell to Kepler? Do you even have basic GPU knowledge?
 
AMD Polaris 11. That is a completely different story ;).

There is a strong possibility at this point, indeed. I wouldn't have said it a few months back, but now the odds of getting AMD Polaris are high. Basically, many people are pointing in that direction. In my case that could be the thing that tips the balance in favor of buying the rMBP. AMD Polaris may be the right choice.
 
Yes, I know, and I can explain it to you, but that would be a big off-topic tangent.
Then explain why. And no matter what you say, I will be able to correct it.

Because you did not completely understand the point of the post you are arguing about. I am waiting for your explanation of why Maxwell GPUs with a lower core count are faster than Kepler. It will also be a good education for the people reading this thread.
 
I understand that you don't have knowledge about GPUs, and that's why you compare different architectures by comparing the number of processing units. If you are so interested in why the Maxwell architecture is better than Kepler: increased L2 cache, a rewritten scheduler, a change from 192 to 128 SPs per SM, color compression, and increased clocks.
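To make that 192-to-128 change concrete, here is a rough sketch using two desktop reference parts (GK104 and GM204 figures, purely for illustration):

```python
# SM layout of two desktop reference parts, purely for illustration
kepler_gtx680  = {"sms": 8,  "cores_per_sm": 192}   # GK104, Kepler SMX
maxwell_gtx980 = {"sms": 16, "cores_per_sm": 128}   # GM204, Maxwell SMM

for name, gpu in (("GTX 680", kepler_gtx680), ("GTX 980", maxwell_gtx980)):
    print(name, gpu["sms"] * gpu["cores_per_sm"], "CUDA cores")
# GTX 680 1536 CUDA cores
# GTX 980 2048 CUDA cores
```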
 
[two attached architecture comparison charts]

Just two simple charts. The cores are exactly the same; they just do more in one clock. The registers are the same, everything is the same. The same number of cores simply gets more done per clock.
All of this meant that 128 Maxwell cores had 90% of the performance of 192 Kepler cores, which NVIDIA said themselves.
You still have completely misunderstood the nature of that post. Read it again, look at the benchmarks, take in its context, and I hope you will understand it.

None of the architectural improvements increased the theoretical compute performance of Maxwell GPUs. They did not make the architecture pull more compute power out of thin air.

A 768-CUDA-core, 1 GHz Kepler GPU would still have exactly the same compute performance as a 768-CUDA-core, 1 GHz Maxwell GPU. The same will be the case with Pascal.
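Putting numbers on that claim, a minimal sketch; the 1 GHz clock is just the hypothetical from the post above:

```python
def peak_tflops(cores, clock_ghz):
    # FMA counts as 2 FLOPs per core per clock on Kepler, Maxwell and Pascal alike
    return cores * clock_ghz * 2 / 1000

print(peak_tflops(768, 1.0))   # 1.536 TFLOPs for the Kepler part and the Maxwell part alike

# NVIDIA's figure quoted above: 128 Maxwell cores ~ 90% of 192 Kepler cores,
# so each Maxwell core does roughly 0.9 * 192 / 128 = 1.35x the effective work
# per clock, even though the theoretical peak per core is unchanged.
```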
 
Hahaha, you made my day. They will have the same maximum theoretical peak performance, which has very little to do with performance in real-life scenarios. By the way, do you even know that in the Kepler era NVIDIA had two different architectures for the big and the small Keplers?
 
So explain to me: why was the HD 5200 faster than Kepler if you say the Intel iGPU was rubbish? Why does Vegas, for example, show that the iGPU was faster?

Why does compute performance matter in video editing, for example? At this point you are trying to prove to me that:
A) I do not know anything about GPUs.
B) You are right, and I am wrong.

But in all of this you completely miss the point of iGPUs and dGPUs and their compute performance.
 
You show some OpenCL benchmarks. As far as I know, NVIDIA develops its CUDA and doesn't put much effort into OpenCL. Also, the little Kepler is strictly a gaming card, not a compute card.
 
For god's sake...

Even if a GPU is not compute focused, it has compute performance regardless, because that is how GPUs are currently developed. OpenCL vs CUDA: it's obvious why NVIDIA develops CUDA; it's their own proprietary API, which locks people into NVIDIA hardware. OpenCL is open source and can be used on any hardware, on any platform. It focuses ONLY on the compute performance of the GPU. There is no magic here.
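As an aside, a minimal PyOpenCL sketch (assuming the pyopencl package and a working OpenCL driver) shows why OpenCL numbers are comparable across vendors: the same kernel runs unchanged on an Intel iGPU, an AMD GPU, or an NVIDIA card.

```python
import numpy as np
import pyopencl as cl

a = np.random.rand(1024).astype(np.float32)
b = np.random.rand(1024).astype(np.float32)

ctx = cl.create_some_context()          # picks whatever OpenCL device is present:
queue = cl.CommandQueue(ctx)            # Intel iGPU, AMD or NVIDIA, no code change

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

prg = cl.Program(ctx, """
__kernel void add(__global const float *a,
                  __global const float *b,
                  __global float *out) {
    int gid = get_global_id(0);
    out[gid] = a[gid] + b[gid];
}
""").build()

prg.add(queue, a.shape, None, a_buf, b_buf, out_buf)   # run the kernel

out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)                   # read back the result
print(np.allclose(out, a + b))                         # True on any vendor's device
```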

P.S. CPU + iGPU: 1.15 TFLOPs of compute power in a 45 W thermal envelope.
CPU + dGPU: 1.3 TFLOPs of compute power in a 95 W thermal envelope. You could park a second CPU with an iGPU in that thermal envelope, with a much better result. That is efficiency.
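Taking those two figures at face value, the perf-per-watt gap works out as follows (a quick sketch using only the numbers quoted above):

```python
# GFLOPs per watt, taking the two figures above at face value
cpu_plus_igpu = 1150 / 45    # ~25.6 GFLOPs per watt
cpu_plus_dgpu = 1300 / 95    # ~13.7 GFLOPs per watt
print(round(cpu_plus_igpu, 1), round(cpu_plus_dgpu, 1))
```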

At this moment there is absolutely no point in looking at any dGPU option for the MacBook Pro unless it brings a huge benefit to the platform. If we want something that brings a benefit over the HD 580, we would have to look at Pascal/Polaris GPUs.
 
GTX 980 (Maxwell): 4.98 TFLOPs
GTX 780 Ti (Kepler): 5.34 TFLOPs
Maybe now you will understand something and stop comparing raw numbers across different architectures ;)
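For anyone following along, both of those figures come straight from core count times clock (reference boost clocks assumed), which is exactly the point about peak numbers:

```python
def peak_tflops(cores, boost_mhz):
    return cores * boost_mhz * 2 / 1e6   # FMA = 2 FLOPs per core per clock

print(peak_tflops(2048, 1216))   # GTX 980 (Maxwell):    ~4.98 TFLOPs
print(peak_tflops(2880,  928))   # GTX 780 Ti (Kepler):  ~5.35 TFLOPs
# The Kepler card has the higher peak figure, yet the Maxwell card wins most
# real-world benchmarks: peak TFLOPs alone don't decide delivered performance.
```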
 
It's actually held up surprisingly well. I can play most video games on it. And I use it for game development, video editing and image editing. The fact that the Iris Pro will blow it out of the water is what makes me think that an iGPU isn't completely crazy as an idea.
 
So, any chance of getting only the HD 580? Or maybe Polaris will be out in time for a June-July 15" MBP?
Will there be a big enough difference between the HD 580 and Polaris to make it worth putting in the Mac?
 
The difference can be really big, but nobody knows when mobile Polaris will arrive.
 
The benefit of having a dGPU and Iris Pro together is that DX12.1+ will be able to run the two in tandem, SLI-style (explicit multi-adapter).

While this doesn't affect macOS yet, if you Boot Camp you'd be able to take advantage of the performance boost. Of course, that would have to be an AMD card, not Pascal, unless NVIDIA catches up and actually implements DX12.1+ in hardware rather than as a software equivalent.
 