Being 30% slower than a dGPU is not bad by any measure for an iGPU. The thing is, having Iris Pro means a lower CPU clock to compensate, and Apple cares more about CPUs than GPUs.

30% slower than a dGPU means nothing on its own, and 30% slower than a mainstream dGPU doesn't mean much either.

In the end it's all the same thing: the iGPUs are still at the low end, as they should be. It's a big performance gain, and with driver updates it should be more stable all around; those downward spikes aren't really justified.

What's been missing here is that the GK107 was never supposed to be mainstream. It's a good performance boost over Fermi, but at the mainstream level this is the first time since the 300M series that we've seen an actually good improvement.

My point here is that the mainstream market for mobile dGPUs has been a joke for so long that even Intel can do something about it. Update those drivers and we might get something closer to a 20% performance deficit.
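A quick back-of-envelope sketch of how that could work (the ~14% driver uplift is an assumed figure, just to show the arithmetic behind going from 30% slower to roughly 20% slower):

```python
# Normalize the dGPU's performance to 1.0; the iGPU starts 30% behind.
dgpu = 1.0
igpu = 0.70 * dgpu

# Hypothetical ~14% gain from driver updates (assumption, not a benchmark).
driver_uplift = 0.143
igpu_updated = igpu * (1 + driver_uplift)

# Remaining deficit relative to the dGPU.
deficit = 1 - igpu_updated / dgpu
print(f"remaining deficit: {deficit:.0%}")  # roughly 20%
```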
 
This thread was mostly about Trinity, Richland, and Iris Pro. With the benchmarks out now, that one graphic was seriously off. Even at desktop TDPs, AMD's APUs don't stand a chance against Iris Pro.

Most GPUs scale really well with TDP. Mainstream mobile has been a joke relative to desktop because mobile parts have to work within roughly a 30W TDP versus 150W and more.

Low-end GPUs no longer have any point in mobile because they are less efficient than an IGP at similar performance. Mainstream GPUs, whether they are a joke from a desktop perspective or not, are the next to go. In many cases the dGPU won't be worth it anymore even if it is still faster.

A 30% difference can easily be negated by a bit of detail tuning. It is no longer a question of decent gaming versus no gaming, or gaming only at the lowest settings.
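To put a number on that claim (pure arithmetic; the 30 fps starting point is just an illustrative example):

```python
# A card that is 30% slower needs about a 43% fps uplift to catch up,
# since 1 / 0.70 is roughly 1.43.
fps_fast = 30.0
fps_slow = 0.70 * fps_fast          # 21 fps: 30% slower

needed_uplift = fps_fast / fps_slow - 1
print(f"uplift needed: {needed_uplift:.0%}")  # about 43%

# Dropping a detail level or two can plausibly gain that much,
# which is the sense in which tuning "negates" the 30% gap.
```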

Even between the current-generation AMD and Nvidia parts there are 30% differences in some games that simply favor one card significantly more.

What's been missing here is that the GK107 was never supposed to be mainstream. It's a good performance boost over Fermi, but at the mainstream level this is the first time since the 300M series that we've seen an actually good improvement.
With Kepler, Nvidia also claimed a 100% efficiency improvement, which is the same claim Intel made for Sandy Bridge to Ivy Bridge per EU.
Granted, Fermi was hot and really inefficient, with AMD clearly in the lead. It is easier to claim a 100% improvement if you start out that badly. Still, the die shrink should only be responsible for some 30-40% of the gains; the rest is a more efficient architecture. On the other hand, Kepler was and is bad at general-purpose compute.
Improvements like the jump from the 500M to the 600M series we won't see again for a while. The last time, I think, was the move to the unified DX10 architecture with the 8000 series.
If history is any indication, GPUs usually only get small improvements and grow with die shrinks, with big boosts only every three generations or so.

Also, Intel is slimming down notebooks with its Ultrabook initiative, which means fewer and fewer notebook designs that want to add hot GPUs. That is one reason, I think, why both AMD and Nvidia moved their numbering up, with AMD already calling the 8800 sort of mainstream and Nvidia going to the 650M and now the 760M. It is all, I think, to give the most important customers a feeling of getting good stuff.
 