@bill-p
I don't know where you found the graphic, but I think it is a bit misleading, as those are desktop Trinity numbers.
A 100W A10 does quite well in games; a 35W Trinity is nowhere near those benchmarks. Since we're discussing notebooks here, one should compare like with like.
Richland won't get anywhere near a 650M in the appropriate TDP class. On the desktop, the GPU side of the APU has far more power budget to work with.

The 7660G is actually quite unimpressive in the mobile TDP variants.
http://www.anandtech.com/show/5831/amd-trinity-review-a10-4600m-a-new-hope/6
The 5800K is only the 100W desktop version with the 7660G, and in theory it has almost the same max clock as the mobile part. When paired with a CPU and forced to stay within a 35W power envelope, it is a lot worse than what your graphic suggests.
 

The 7660G is still comparatively faster than HD 4000, and it's about on par with 640M in some cases. I'm not saying that AMD's next-gen part must be on par with 650M, but it's clear that if anyone is closer to 650M performance, then AMD is "it".

And if you want to talk about appropriate TDP class, try a 35W Ivy Bridge coupled with a 15W GT 650M. That's hardly comparable to a single 35W Trinity CPU. Even comparing to the 10W GT 640M, Trinity is more power efficient.

If AMD had more headroom (maybe a 55W Trinity?), I don't doubt they'd be able to at least reach 640M performance for their IGP parts.
 
The thing is, Intel intends to change nothing about the TDP classes and promises about double the HD 4000 performance.
In Anandtech's simple HD 4000 vs. 7660G comparison you can easily see where that will land it.

AMD will still be at 32nm. Their CPUs won't make any significant jump in efficiency any time soon, so they will need roughly the same power for them. We also know that all that might change on the GPU side is a move to GCN, and we know roughly what that means for efficiency from the dedicated parts.
Now add to that that AMD's Trinity is already hugely memory starved and won't get any embedded VRAM. DDR3-1866 won't save the day; it is a necessity. So the performance increase will be smaller rather than bigger compared to dedicated solutions.
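Just to put rough numbers on how little headroom faster DDR3 actually buys, here is a back-of-envelope sketch (theoretical dual-channel peaks, not measured figures):

```python
# Theoretical peak bandwidth for dual-channel DDR3 (2 x 64-bit), not measured numbers.
def ddr3_peak_gbs(mt_per_s, channels=2, bus_bits=64):
    return mt_per_s * 1e6 * (bus_bits / 8) * channels / 1e9

for speed in (1600, 1866):
    print(f"DDR3-{speed}: {ddr3_peak_gbs(speed):.1f} GB/s")
# DDR3-1600: 25.6 GB/s, DDR3-1866: 29.9 GB/s -- only ~17% more, and it is shared with the CPU.
```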

A 650M is at around 30W, not 15. The 640M is at least 20W.
What I mean by comparable TDP classes is: what does it have to go up against? You won't see Richland getting a special 65W rating to go up against Intel CPU + dGPU combos. It will be a 35W, maybe 45W, quad and has to compete with notebooks with or without dedicated GPUs, and it has to compare to the equivalent Intel-only notebooks.
How a 100W desktop Trinity performs simply doesn't matter and only distorts the picture. Intel's desktop HD 4000 barely runs any faster than the mobile version.
What will you actually be able to buy that you can compare?
That is why Anandtech's benchmarks here are useful: they come from actual notebooks, with real, existing CPU/GPU combinations, and show how real games fare. If the CPU can't cut it, the GPU won't save the day, and vice versa. He also tested the most useful medium settings, which are actually usable, rather than irrelevant ultra-low settings or unplayable slideshow settings.

From everything I can make out, AMD won't be closer to the 650M. I think Intel will. Their roughly 2x increase seems likely, and with the vastly more efficient GPU and 22nm they simply have more room. AMD is still on 32nm, has no significant architecture change that promises vastly more efficient processing on either the CPU or the GPU side, and has no fix for the memory bus. What AMD needs is 22/20nm and seriously high-bandwidth DDR4 or some embedded VRAM.
My prediction is that AMD's Richland may just about even out with Intel's GT3e in the 35W and 17W TDP classes. Save this thread. I would bet money that I am right.

There is also just so much more room for Intel to improve. They have ****** drivers now, their hardware could still get much more efficient before it came close to what Nvidia or AMD would manage at 22nm, and they promise embedded on-package VRAM to take stress off the main DDR3 memory.
They can pull off a doubling of performance. AMD cannot, because every one of these points works against them.
Even your graphic suggests a very mediocre increase in performance over Trinity compared to what would be necessary, unless Haswell falls far short of its promise.
 
Actually, they did change TDP classes. In fact, they "invented" SDP because TDP "wasn't an adequate benchmark for power consumption".
http://www.theverge.com/2013/1/9/3856050/intel-candid-explains-misleading-7w-ivy-bridge-marketing

Look at the TDP for the upcoming Haswell quad parts that I posted on the last page. It's +2W for just the HD 4600 (which is supposed to be about 25-30% faster than HD 4000). GT3 is obviously going to consume a lot more (hence why it's not coupled with the high-end quad-core mobile chips).

I don't think Intel can work magic and squeeze out 2x performance in the same TDP package. You're wishing for too much.

By now, all information has pointed to Haswell falling very far off its target point. I think we have had this discussion since December of last year, but somehow there are still these wishes...
 
SDP is simply a new, fancier name for the cTDP (custom TDP) that they introduced quite a while back. Nobody used it, so they gave it a new name.

The slightly higher TDP is, according to Intel, due to the on-package VR, which does add quite a bit of heat in that spot. Overall it is probably better.

Intel will still be sticking to the TDP classes and adding more on the 10W end. I doubt they will add a 45W dual core.
GT3e probably consumes more than the HD 4000, but it has the embedded RAM with shorter paths, which could add lots of bandwidth while working far more efficiently. Today even a rather slow HD 4000 already benefits greatly from faster DDR3 memory. If they actually go from 16 to 40 EUs and the eRAM works, they can clock them way lower while still getting double the performance.
Intel also has the option of eating into the CPU power budget while gaming, since its cores, unlike AMD's, are generally up to the task. If they shift more power toward the GPU, like AMD does, that can still yield results, especially at higher settings where the CPU is rarely the problem. They don't have the CPU bottleneck.
Today the HD 4000 simply has a rather low peak power, but GT3e should change that.
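A rough sketch of that EU math, assuming raw throughput scales roughly with EUs times clock (the resulting clock target is just arithmetic, not a leaked spec):

```python
# HD 4000: 16 EUs, topping out around 1.25 GHz. GT3: 40 EUs.
hd4000_eus, hd4000_mhz = 16, 1250
gt3_eus = 40

# Clock the 40 EUs would need for ~2x the HD 4000's raw EU*clock throughput.
gt3_mhz_needed = 2 * hd4000_eus * hd4000_mhz / gt3_eus
print(f"GT3 clock for ~2x: {gt3_mhz_needed:.0f} MHz")   # 1000 MHz, well below the HD 4000 peak
```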

They said they can squeeze out equal performance at about half the power, so it seems reasonable.
If you put an Nvidia Kepler GPU at 22nm into Haswell, I think performance would double. Intel has more money than anybody else in the industry and has been recruiting GPU specialists for over 5 years. The basics of transistor design and the algorithms aren't exactly a secret in the industry. They also made great strides from HD 3000 to HD 4000. I don't think double performance is so unreasonable.

Also, since Richland seems to offer about a 30-40% efficiency gain over Trinity, all Intel needs is about 60% over the HD 4000 to be roughly even. That should be easy. Everything past that would put them ahead. If they fixed the drivers, I think that alone would close a lot of the big gaps where the 7660G pulls ahead; Civ V, for example, seems broken on Intel.
 
They screwed up while trying to make SDP sound like they improved power efficiency, hence the article. Also Intel itself has stated that SDP is nothing like TDP. Or please feel free to point out otherwise.

And not all of the voltage regulator is on the CPU.
http://www.anandtech.com/show/6355/intels-haswell-architecture/2

Some parts will remain on the motherboard. It makes sense because the chipset and other components are still on the motherboard. It's not like those components can be fed directly from the PSU.

Intel's move to a (partially) integrated VR is more about controlling CPU voltage on a finer-grained scale, and thus effectively squashing any third-party attempt at tweaking CPU voltage for a broader overclock.

They already attempted to squash overclocking efforts on desktop significantly, and now they're just doubling down to make sure only specific processors (designated K) in their lineup can be overclocked.

Another immediate drawback: since the TDP of the CPU has increased, there is less headroom for Turbo Boost to kick in. And don't forget, Turbo Boost headroom is also a necessity for HD 4000 to reach those 1.1GHz - 1.25GHz frequencies.

I don't think Intel will add a 45W dual-core, but it's clear to me they won't be able to push out a dual-core Haswell part that's on par with what we have in the 13" MacBook Pro but with 2 times the graphics processing power, and all the while making it look like the integrated voltage regulator doesn't make a dent in the overall heat output.

I'm not saying 2 times the graphical processing power is impossible, but that it's impossible to expect that much improvement at the same power consumption number, even taking into account a die shrink. It's not like they're going from 32nm to 16nm with 100% efficiency.

Also, info says Intel is planning 37W, 47W, and 57W processors, so it may turn out that we won't even see a 35W-equivalent dual-core part with GT3. Either that, or the 37W part is actually a ULV-equivalent part coupled with GT3. But I don't really see much room for Intel to squeeze GT3 into the standard-voltage lineup.
 
That is cTDP.
http://www.anandtech.com/show/4764/ivy-bridge-configurable-tdp-detailed
It is virtually the same thing. The manufacturer buys a CPU and later decides whether to run it at full speed or run it cooler with a less capable cooling system in a thinner design.
From what I've read about those SDP parts, one could forgo the 7W mode and just run it like a 13W part and actually get the promoted speeds.

Think about it this way: the HD 4000 takes a max of 12W or so of the total power budget, and a 17W ULV still does pretty okay in terms of gaming. If you now give a GT3e room for 20W of power consumption, you still have 15W left for the CPU, which should be plenty in first-person shooters and other games without too much CPU load. Then you end up with a 35W chip pretty much doubling graphics performance, if that is joined by some efficiency improvement.
The truth is CPU performance is rarely really an issue if you run at native resolution with mid/high settings at a just-fluid 30-40 fps. No reason not to beef up the GPU side.
They also claim that at 8W they can push the 3DMark score of a 17W Ivy Bridge. If they aren't lying across the board, they should be able to reach at least some 60% improvement. Maybe not double across the whole range, but who knows; I wouldn't rule it out. I'd guess at least 60%; those 40 EUs have got to do something. They will most likely not run at 1100 MHz but quite a bit lower, which should still make for decent performance.
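A crude sketch of that power-budget split, using the rough numbers from the paragraph above (none of these are official figures):

```python
# Crude TDP partitioning with the rough numbers from the paragraph above.
tdp_total   = 35   # standard-voltage mobile package
gt3e_share  = 20   # assumed GPU share for GT3e
hd4000_peak = 12   # rough peak GPU share in today's ULV chips

print(f"CPU budget left: {tdp_total - gt3e_share} W")          # 15 W
print(f"GPU budget grows: {gt3e_share / hd4000_peak:.1f}x")    # ~1.7x
```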

It is 22nm, and the last two generations both made big strides. Look at the 650M's Kepler architecture: if you cut away roughly the third of it needed for the memory subsystem (which Intel has shared with the CPU) and add a 30% efficiency gain from Intel's 22nm, you would sit on a GPU with almost double the speed of what Intel promises for Haswell, at 15-20W consumption (if Nvidia built it). There is still a big gap, so I see a lot of room for Intel to make another big jump in architectural efficiency for its GPU.
If they are so good with CPUs and have this much money, why should they continue to be so bad in the GPU department?
 
You said Custom TDP, not Configurable TDP.

And no, not all features that Intel "announced" are readily available in the same generation.

Otherwise, integrated voltage regulator AND northbridge should have happened for Core 2 Duo.

See this:
http://www.anandtech.com/show/1770

And it is actually worse than you may think. Look at it another way: Intel is basically saying their chips don't just push out 17W of heat. They can consume a lot more power and push out a lot more heat than the rated TDP when the situation allows it. See what happened there?

And how is it that Intel is so good with CPU but so bad with GPU? Well, maybe it's because they only know how to build CPUs?

Have you ever heard of Larrabee?
http://en.wikipedia.org/wiki/Larrabee_(microarchitecture)

They amassed GPU engineers in order to design Larrabee, and the project was ultimately terminated just a bit over 2 years ago due to technical limitations that would not have allowed it to be competitive with traditional GPUs at the time.

I don't doubt Intel can come out with a good GPU solution eventually, but... here are some facts:

1) Up to this year, a huge majority of their GPU solutions (especially the PowerVR-based GMA 500/600 series) still don't have proper drivers. Even the flagship HD 4000 doesn't have proper drivers (driver limitations clearly show in the benchmarks you provided)

2) Up to this year, they still have yet to have a viable integrated GPU solution compared to the competition (AMD is always a step or two ahead)

3) They have had to resort to dubious comparisons (GT3 vs 650M), pre-rendered movies, redefining TDP, etc... in order to convince the crowd that Haswell is all that. If you need a source:
http://www.anandtech.com/show/5333/intel-sort-of-demonstrates-ivy-bridge-graphics-at-ces-2012
http://www.anandtech.com/show/5391/intel-demos-dx11-running-on-ivy-bridge-ultrabook

By the way, their demo unit was only able to push Medium settings in F1 2011... at 1366 x 768. Now... it's unclear if that was HD 4600 or HD 5200 (the computer shows it's running the same drivers as the HD 4000), but even if it was HD 4600, what are the chances that HD 5200 could do much better?
 
Here we go again.
Actually, the difference between the stock GDDR5 and DDR3 versions is pretty small, at about 5-7%. The DDR3 version makes up for it with a higher core clock speed.

@Op I think people said that a 650M can run the game at those settings at around 45 fps. Intel probably picked settings that were high for promotional value but still just fluid, probably 25-30 fps. Therefore the HD 5200 should end up at at least roughly 60% of the performance of a 650M.
If those assumptions are true, the 5200 should end up at around 630M levels. Still pretty damn good and significantly better than AMD's APUs.
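Spelling that estimate out with the assumed frame rates above (45 fps for the 650M, 25-30 fps for the demo; both are guesses from the post, not measurements):

```python
# Ratio of the assumed demo frame rates to the assumed 650M frame rate.
gt650m_fps = 45
for demo_fps in (25, 30):
    print(f"{demo_fps} fps -> {demo_fps / gt650m_fps:.0%} of the 650M")
# 25 fps -> 56%, 30 fps -> 67%, hence the "at least ~60%" estimate.
```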

At the same time, Nvidia won't release anything significantly new until Maxwell at 20nm in 2014, so Intel will come within arm's reach. Half the performance used to be the difference between the upper mid-range, i.e. the 650M, and all the cheaper lower derivatives. There would really be no reason to add a dedicated GPU unless one reserves at least 30W of TDP for it. Since Intel won't put GT3 into the quad cores, as the leaked roadmaps show, the GT3 parts will be 35W CPUs with a GPU that beats setups that today need a 35W CPU plus a 15-20W GPU.
Anything that isn't at least as fast as a 650M doesn't really have a reason to exist anymore outside of cheap AMD notebooks. The 650M will stay top of its class in the 30W TDP range until 20nm hits, and that won't be until 2014.

The 650M can run that game at 60 fps at the demoed settings (80 fps at 1050p). The HD 4000 can run the game at 768p and medium at 40 fps.

[F1 2011 benchmark chart]


Incidentally, it's one of the few games where the HD 4000 manages to beat the A10, which is probably why Intel chose it.

There's only one Intel HD 4000 part. There's no variant that has more or fewer compute units. In that sense, it's just a matter of how slow/fast Intel chooses it to be.

And GT3 is not guaranteed for the higher-end mobile CPUs. The leaked specifications show that the highest-end quad-core i7 chip just gets HD 4600:

[Leaked Intel Haswell mobile CPU lineup table]

Those are not final specifications. Remember all the Kepler leaks that turned out to be wrong?

Did you even try to read the table?

http://en.wikipedia.org/wiki/Ivy_Bridge_(microarchitecture)#Mobile_processors

ULV CPU (i5-3337U) max GPU frequency - 1.1 GHz (http://ark.intel.com/products/72055/)
'normal' CPU (i7-3820QM) max GPU frequency - 1.25 GHz (http://ark.intel.com/products/64889)

Yes, the ULV will be considerably slower (because both CPU and GPU are slower and because it is not a quad core), but there is no 50% difference in clocks.

----------



Agreed. I would just find it logical if they put the best IGP options into the mobile CPUs, where they are needed the most. Your leaked roadmap is surprising though; I really expected to find the GT3 core in the mobile CPUs. Maybe they will at least integrate it into their ULV CPU line...



Agreed again. Still, Intel is getting there. Slowly, but they are catching up. And I suppose it's at least something. I would appreciate having a reasonable minimum GPU performance spec to code against.

There is a fairly significant difference, because many times the ULV processor does not have the thermal headroom to run at those speeds. Sometimes in this test the CPU throttled to 1.6 GHz, which may have an effect.

[benchmark chart]


I think the reason why they're not in every product is because they consume quite a bit more power than good ol' HD 4000, so putting them on high-end quad-core chips would squash any chance for the OEM to couple those chips with a discrete GPU.

And I think discrete GPUs still make more sense for the higher end of the market.



Yeah, but while Intel is playing the catch-up game, the competition just widens the gap further. Here's AMD's upcoming IGP solution:

[image: AMD's upcoming IGP solution]

On a side note, I think AMD has very good chances of outperforming the GT 650M with their IGP, since they're already matching the GT 670M performance with their HD 7870M mobile GPU, while keeping its TDP to the same package as the GT 650M.

?? AMD would have to more than double their GPU performance, which is not likely. It's not going to happen. AMD is not going to manage to cram their 40-watt 7870M into a 35-watt CPU for a while, especially when their CPU is so bad (about Ivy Bridge i3 level). The current 7660G is weaker than the 630M in almost every tested game (with the exception of Civ 5, which seems to hate any GPU that isn't AMD) by about 30%. The logic behind this argument is like saying that Boeing is going to make a really good car engine because they make good jets. Discrete GPU =/= IGP.


The 7660G is still comparatively faster than HD 4000, and it's about on par with 640M in some cases. I'm not saying that AMD's next-gen part must be on par with 650M, but it's clear that if anyone is closer to 650M performance, then AMD is "it".

And if you want to talk about appropriate TDP class, try a 35W Ivy Bridge coupled with a 15W GT 650M. That's hardly comparable to a single 35W Trinity CPU. Even comparing to the 10W GT 640M, Trinity is more power efficient.

If AMD had more headroom (maybe a 55W Trinity?), I don't doubt they'd be able to at least reach 640M performance for their IGP parts.

The 7660G is not on par with the 640M in almost any case. Generally the 7660G is about 50% the speed of the 640M. Civ 5 seems to hate everyone but AMD and statistically should be tossed as an outlier when looking at the general performance of the 7660G; you should probably also toss Batman, because the HD 3000 is not close to the 7660G there.

http://www.anandtech.com/show/5831/amd-trinity-review-a10-4600m-a-new-hope/6

Please note that the 640M is using a ULV CPU and running early drivers (one of the first Kepler devices). The 650M is using a lot more power than 15 watts (30-40W), and the 640M is using more than 10 watts (25-32W-ish).
 
Also thought I'd say this.

[Hitman benchmark chart]
The A10 is 20 fps at 1080p.

[DiRT Showdown benchmark chart]
The A10 is 35.8 fps at 1080p.

[Skyrim benchmark chart]
The A10 is 45 fps at 1080p.

[World of Warcraft benchmark chart]
The A10 is ~60+ fps at 1080p.

Memory Bandwidth

[Sandra memory bandwidth chart]

[Sandra cache bandwidth chart]


The Haswell engineering sample on pre-release drivers is looking good IGP-wise. You can definitely see that it is being hurt a little in games at 1080p (especially DiRT Showdown) because of the loss in bandwidth. If they fix the bandwidth issue here (probably a chipset/mobo issue), they are probably going to get at least another 10% at release with better drivers (which they need to do, because Intel drivers often suck).
Almost 1 TB/s of L1 cache bandwidth can't hurt either.


No, they can't beat desktop Trinity (let alone the incremental Richland). But if an SV i5 mobile Haswell chip runs the HD 4600 at those clocks (1.15-1.2 GHz), which is what I strongly suspect, then Trinity's 7660G will be trounced (and that's not even looking at Kaveri or the GT3 version, which will probably be even more powerful: 20 EUs @ 1.2 GHz vs. 40 EUs at 800 MHz is roughly a 30% speed bump). Richland (Trinity with efficiency optimizations and a slight CPU and GPU speed bump) may tie with GT2, but AMD looks set to lose the IGP crown for at least a little while (until Kaveri).
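For reference, that EU/clock aside written out, assuming raw throughput roughly tracks EUs times clock:

```python
# GT2 (20 EUs @ 1.2 GHz) vs. a hypothetical GT3 at 800 MHz (40 EUs),
# assuming throughput roughly scales with EUs * clock.
gt2 = 20 * 1200
gt3 = 40 * 800
print(f"GT3 / GT2 = {gt3 / gt2:.2f}")   # ~1.33, i.e. roughly the ~30% bump mentioned above
```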
 
Those are not final specifications. Remember all the Kepler leaks that turned out to be wrong?

Intel leaks are almost always more right than wrong.

And I'm sure Intel should have had final specs for Haswell by now if they plan on doing any unveiling by June.

?? AMD would have to more than double their GPU performance, which is not likely. It's not going to happen. AMD is not going to manage to cram their 40-watt 7870M into a 35-watt CPU for a while, especially when their CPU is so bad (about Ivy Bridge i3 level). The current 7660G is weaker than the 630M in almost every tested game (with the exception of Civ 5, which seems to hate any GPU that isn't AMD) by about 30%. The logic behind this argument is like saying that Boeing is going to make a really good car engine because they make good jets. Discrete GPU =/= IGP.

The 40W 7870M is actually comparable to 670M, not 650M. The 7850M is a closer match to the 650M since it's comparable to 660M, but in reality, it's still faster than 650M, while being at the same (slightly lower) power consumption package.

So no, reaching 650M (stock 650M) performance isn't that big a task.

And there's no argument here. You're reading too deep into it. I'm merely stating a fact: without seeing any actual numbers for HD 5200, the only thing I can say for sure is that AMD is the closer one to providing an IGP solution that is closer to 650M performance.

It's definitely not even "close" to being on par with 650M, but just like you said, IGP =/= discrete.

Same reason why expecting HD 5200 to be that fast and efficient at the same time is unreasonable.

The Haswell engineering sample on pre-release drivers is looking good IGP-wise. You can definitely see that it is being hurt a little in games at 1080p (especially DiRT Showdown) because of the loss in bandwidth. If they fix the bandwidth issue here (probably a chipset/mobo issue), they are probably going to get at least another 10% at release with better drivers (which they need to do, because Intel drivers often suck).
Almost 1 TB/s of L1 cache bandwidth can't hurt either.

No, they can't beat desktop Trinity (let alone the incremental Richland). But if an SV i5 mobile Haswell chip runs the HD 4600 at those clocks (1.15-1.2 GHz), which is what I strongly suspect, then Trinity's 7660G will be trounced (and that's not even looking at Kaveri or the GT3 version, which will probably be even more powerful: 20 EUs @ 1.2 GHz vs. 40 EUs at 800 MHz is roughly a 30% speed bump). Richland (Trinity with efficiency optimizations and a slight CPU and GPU speed bump) may tie with GT2, but AMD looks set to lose the IGP crown for at least a little while (until Kaveri).

It remains to be seen, but honestly, there are a lot of "ifs" to put there.

For instance, a 10% memory bandwidth improvement may not automagically translate into a 10% GPU performance improvement.

HD 4600 benchmarks already showed the expected performance improvement over HD 4000. It may actually be the GPU that's holding back performance at 1080p. If HD 4600 was bandwidth-starved, it would actually show less of a lead over HD 4000 at 1080p, but I'm not seeing that.

And Intel's drivers up until just the end of last year still suck. Unless they do a 180 change, I think you can expect the usual.

Also you're underestimating how fast Richland really is.

http://www.cpu-world.com/news_2013/2013010801_AMD_discloses_performance_of_future_mobile_APUs.html

It's actually about 20-40% faster than Trinity in terms of graphics performance.

If we're pitting 37W Core i5 against 35W Richland in terms of performance, then I know for sure which one is going to come out looking better. And that's not to mention we're comparing 22nm against 32nm...
 

Intel leaks are usually correct but have been wrong before.

It is a massive task to fit a 40-watt GPU into a 35-watt total GPU + CPU package, especially since AMD would need to increase CPU performance to keep up with a 7870M (the GX60, an A10 + 7970M, is bottlenecked by its CPU to only about 5-10% better than an i7 + 660M). Just because they have a more efficient discrete GPU does not mean that their IGP is going to be better.

The 40W 7870M is actually comparable to 670M, not 650M. The 7850M is a closer match to the 650M since it's comparable to 660M, but in reality, it's still faster than 650M, while being at the same (slightly lower) power consumption package.

So no, reaching 650M (stock 650M) performance isn't that big a task.

This argument is like saying that because the GTX 680 is slightly more efficient than the 7970 GHz Edition, Nvidia will have no trouble putting that level of performance into Tegra 4.


Please don't ever use 3DMark 11 benchmarks as a baseline for AMD APU performance. AMD performs disgustingly well in 3DMark relative to its performance in real-world games.

[3DMark 11 benchmark chart]


Generally the 6630M is about 20% better in games despite a lower 3DMark 11 score, just like 3DMark 06 is a bad performance indicator of Intel IGP performance (because they score so well on the CPU tests). Much of this difference is because of the stronger CPU the 6630M is paired with, so for AMD, increasing CPU performance is a must. I'd also like to point out that Anandtech got a very similar 3DMark 11 score to the one AMD's labs reported, using A10 release drivers and 4 GB of 1600 MHz CL11 RAM. I would expect a higher 3DMark 11 score for the A10.

And yes, you can see the bandwidth problems with the GT2 Haswell. Look at DiRT Showdown:
30/51 = 59% (1080p performance is 59% of 768p performance), compared to 24/34 = 70.6% for an Ivy Bridge IGP (and similar for the HD 3000).
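The same resolution-scaling ratios, written out with the frame rates quoted above:

```python
# Fraction of 768p performance retained at 1080p, from the frame rates quoted above.
haswell_gt2_ratio = 30 / 51
ivy_hd4000_ratio  = 24 / 34
print(f"Haswell GT2 sample: {haswell_gt2_ratio:.0%}")   # ~59%
print(f"Ivy Bridge HD 4000: {ivy_hd4000_ratio:.0%}")    # ~71%
# The larger drop on the Haswell sample is what points to a bandwidth limit.
```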

I didn't say this, but in my previous post the A10 is the DESKTOP version. The mobile A10 is significantly slower than the desktop A10 and will be worse than the HD 4600.

Against Llano, Trinity is universally faster, but the smallest gap is in Mafia II (3%) while the largest gap is in StarCraft II (30%). On average, looking at these games Trinity is only 18% faster than Llano. What’s not entirely clear from the above chart is whether we’re hitting CPU limitations, memory bandwidth limitations (remember that Llano and Trinity share bandwidth with the rest of the system), or perhaps both. At our chosen settings, what is clear is that Trinity’s “up to 56% faster” graphics never make it that high.

AMD has claimed BS before. Even in synthetics the 7660G never reached that 56% claim (only 45% in 3DMark 11 or Vantage). So we got a 36% gain in 3DMark that translated into an average real-world gain of 18%. That makes any gain Richland promises based on 3DMark 11 scores highly speculative.
 
And there's no argument here. You're reading too deep into it. I'm merely stating a fact: without seeing any actual numbers for HD 5200, the only thing I can say for sure is that AMD is the closer one to providing an IGP solution that is closer to 650M performance.
I just don't see that.
AMD is already underpowered on the CPU side because they push the GPU too much, and they have nothing really game-changing coming on the CPU architecture. How are they going to fix that? Intel could just work on their drivers alone and probably make up quite a lot of the difference in some games. Dirt 3 looks like the drivers simply got better.
Even with all the promises, I see Richland not coming out ahead. The lower you go in performance, the lower the settings go, and the lower the settings, the more the CPU matters. Richland may look good in DX11 benchmarks, but those don't assess very well how the CPU vs. GPU balance actually plays out when it has to be one or the other. I suspect a ULV APU is even more CPU-bound than the faster ones at the graphics settings that actually matter.
I also have a feeling that the better showing of the ULV parts in those tests is due to them being less memory-bandwidth bound.

It remains to be seen how the HD 5200 actually looks in mobile and what clocks it comes with, but I still think it will propel Intel GPUs to a new performance level.
They didn't dump all their engineers into Larrabee. The HD 3000/4000 was a completely new GPU architecture that had practically nothing to do with Larrabee. From design to market takes 5 years, and the first serious effort was Intel HD. Since then lots has changed. There are clearly a serious number of people working on the GPUs now. In money alone, Intel can outspend their competition with ease. Money doesn't shorten the 5-year time to market all that much, but at some point it will pay off. Some people have decided that Intel sucks at graphics and will forever do so. I also wish AMD were more competitive and would stop Intel's monopoly pricing, but realistically I just don't see it. The CPU side is lacking too much, and the GPU know-how advantage is shrinking away.
 
Intel leaks are usually correct but have been wrong before.

Yeah, but the thing is that it's already somewhat confirmed that there are mainstream mobile parts (quad-core included) that have GT2.

Why do we need to insist that GT3 be included with every mobile chip?

It is a massive task to fit a 40-watt GPU into a 35-watt total GPU + CPU package, especially since AMD would need to increase CPU performance to keep up with a 7870M (the GX60, an A10 + 7970M, is bottlenecked by its CPU to only about 5-10% better than an i7 + 660M). Just because they have a more efficient discrete GPU does not mean that their IGP is going to be better.

Yeah, but I'm not saying they absolutely have to fit a 7870M into a CPU package.

That and the 7870M is faster than GT 650M, so they would be overshooting.

This argument is like saying that because the GTX 680 is slightly more efficient than the 7970 GHz Edition, Nvidia will have no trouble putting that level of performance into Tegra 4.

No, I'm not saying that.

I'm saying the 7870M is not comparable to 650M. If you have to compare anything to 650M, try 7850M. Or in fact, try the 7770M.

Please don't ever use 3DMark 11 benchmarks as a baseline for AMD APU performance. AMD performs disgustingly well in 3DMark relative to its performance in real-world games.

You do realize 3DMark 11 has a separate score just for the GPU, yes?

And yes, you can see the bandwidth problems with the GT2 Haswell. Look at DiRT Showdown:
30/51 = 59% (1080p performance is 59% of 768p performance), compared to 24/34 = 70.6% for an Ivy Bridge IGP (and similar for the HD 3000).

I didn't say this, but in my previous post the A10 is the DESKTOP version. The mobile A10 is significantly slower than the desktop A10 and will be worse than the HD 4600.

7660G is about 20% faster than HD 4000.

If Richland (8650G) is indeed a 20% improvement over 7660G, then that means 8650G is about 44% faster than HD 4000, which would actually put it ahead of HD 4600.
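That 44% is just the two 20% gains compounded; against the roughly 25-30% uplift claimed for HD 4600 earlier in the thread, it would come out ahead:

```python
# Compounding the claimed gains (rough, and assumes they stack multiplicatively).
gain_7660g_over_hd4000 = 1.20
gain_8650g_over_7660g  = 1.20
total = gain_7660g_over_hd4000 * gain_8650g_over_7660g
print(f"8650G vs. HD 4000: +{total - 1:.0%}")   # ~+44%, vs. the ~25-30% claimed for HD 4600
```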

Anandtech said:
Overall, it's a 20% lead for Trinity vs. quad-core Ivy Bridge

Source: http://www.anandtech.com/show/5831/amd-trinity-review-a10-4600m-a-new-hope/6

AMD has claimed BS before. Even in synthetics the 7660G never reached that 56% claim (only 45% in 3DMark 11 or Vantage). So we got a 36% gain in 3DMark that translated into an average real-world gain of 18%. That makes any gain Richland promises based on 3DMark 11 scores highly speculative.

It's AMD APU vs AMD APU. It's not like I'm pushing Richland's 3DMark 11 scores against Intel's 3DMark 11 scores.

Unless you're saying AMD somehow cheats 3DMark even against their own platform...

And it's not like Intel didn't claim BS with their Haswell announcements. Do you need some links?

I just don't see that.
AMD is already underpowered on the CPU side because they push the GPU too much, and they have nothing really game-changing coming on the CPU architecture. How are they going to fix that? Intel could just work on their drivers alone and probably make up quite a lot of the difference in some games. Dirt 3 looks like the drivers simply got better.
Even with all the promises, I see Richland not coming out ahead. The lower you go in performance, the lower the settings go, and the lower the settings, the more the CPU matters. Richland may look good in DX11 benchmarks, but those don't assess very well how the CPU vs. GPU balance actually plays out when it has to be one or the other. I suspect a ULV APU is even more CPU-bound than the faster ones at the graphics settings that actually matter.
I also have a feeling that the better showing of the ULV parts in those tests is due to them being less memory-bandwidth bound.

It remains to be seen how the HD 5200 actually looks in mobile and what clocks it comes with, but I still think it will propel Intel GPUs to a new performance level.
They didn't dump all their engineers into Larrabee. The HD 3000/4000 was a completely new GPU architecture that had practically nothing to do with Larrabee. From design to market takes 5 years, and the first serious effort was Intel HD. Since then lots has changed. There are clearly a serious number of people working on the GPUs now. In money alone, Intel can outspend their competition with ease. Money doesn't shorten the 5-year time to market all that much, but at some point it will pay off. Some people have decided that Intel sucks at graphics and will forever do so. I also wish AMD were more competitive and would stop Intel's monopoly pricing, but realistically I just don't see it. The CPU side is lacking too much, and the GPU know-how advantage is shrinking away.

Basically, you're saying you don't want to see it because in your mind, AMD CPU sucks and they will forever suck?

How's that different from someone saying Intel's GPU sucks and it will forever suck?

I'm not saying that AMD as a whole package is more appealing.

All I'm saying is simple: AMD got the lead in GPU performance up until last year. That may change this year, but I am not seeing that change yet. And it doesn't look like AMD will lose the lead on desktop, or if the comparison was against HD 4600.
 
AMD doesn't suck in my mind, but currently the benchmarks just don't look good.
You say:
Yeah, but I'm not saying they absolutely have to fit a 7870M into a CPU package.

That and the 7870M is faster than GT 650M, so they would be overshooting.
But what would happen if you ran those same benchmarks with an A8 ULV, which is essentially what an IGP-class 7870 would end up being paired with?
When AMD makes a dedicated GPU with 2GB of GDDR5 memory on a 128-bit interface, paired with an Intel Core i5/i7, they are doing great. An IGP is not the same. It is limited to DDR3, which it has to share with the CPU, and it is paired with a much less efficient CPU that is practically forced to stay within ULV power requirements.
The 7660G is great on the desktop and blows Intel away, but in Anand's real mobile notebook comparisons it doesn't look so good compared to the HD 4000. Going by those benchmark scores, an HD 4600 will trounce it in all but the games where Intel's drivers suck.
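To put rough numbers on that dedicated-vs-shared memory gap (the GDDR5 data rate is an assumed typical figure for this class of card, not the spec of any particular model):

```python
# Theoretical peak bandwidth: dedicated 128-bit GDDR5 vs. shared dual-channel DDR3.
def peak_gbs(data_rate_mts, bus_bits):
    return data_rate_mts * 1e6 * bus_bits / 8 / 1e9

gddr5 = peak_gbs(4000, 128)   # assumed ~4 GT/s effective on a 128-bit bus
ddr3  = peak_gbs(1600, 128)   # dual-channel DDR3-1600 (2 x 64-bit), shared with the CPU
print(f"GDDR5: {gddr5:.0f} GB/s vs. DDR3: {ddr3:.1f} GB/s")   # 64 vs. 25.6 GB/s
```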

I have no doubt that if AMD had Intel-class CPUs and some embedded RAM or DDR4 coming to fix the bandwidth issue, they would already lead even without 22nm. But they don't have one problem, they have three, and Richland won't solve any of them. The GPU architecture and drivers are better, but looking at the available benchmarks, that won't be enough.

All I'm saying is simple: AMD got the lead in GPU performance up until last year. That may change this year, but I am not seeing that change yet. And it doesn't look like AMD will lose the lead on desktop, or if the comparison was against HD 4600.
I don't think AMD will lose the lead on the desktop either. AFAIK it isn't even certain that there will be any desktop GT3e, and I don't think all mobile parts will be GT3e. Yet AMD only really wins on mobile with their top chip; the rest already fall quite a bit short.
Unless Intel's GT3e ends up being really underwhelming and barely any better than the HD 4600, I see no way AMD can keep their lead. They have too many bottlenecks.
Already the 7660G shows that faster DDR3 helps a lot. Now make the GPU faster still, while the DDR3 memory goes maybe from 1600 to 1866.

Not all mobile chips will be GT3, but since they all cost $200-400 anyway, there isn't much reason not to go all out if you care at all.
 
On a side note, I had the same discussion regarding CPU limitations earlier, so I agree with you. Recent games are more CPU intensive than they let on. That's why sometimes a HD 4000 coupled with a quad-core CPU may be able to win out even against some discrete GPUs (like the GT 330M and GT 9600M in the old MBP). I have no doubt the same would happen with GT3.

However, that's just because the CPU is holding the platform back. Not because the GPU isn't capable.

Plus if GT3 is clearly better than Richland, then power consumption and TDP have to suffer. In that case, Intel can either introduce higher TDP packages, thus forsaking battery life and efficiency, or they can reduce CPU performance in order to match TDP with their mainstream GT2 counterparts. But then they'll run into the same CPU limitation trap discussed above.

And then you'll have to wonder which parts Apple would choose for their upcoming MacBooks. I know for a fact that dear ol' Apple has never pushed graphics performance aggressively...

And personally, I would choose more battery life and CPU performance over more graphics performance coupled with a mediocre CPU.
 
The thing I would emphasize is that in the performance range where these IGPs operate, CPUs matter relatively more.
The higher the quality settings you use in games, the less the CPU usually matters; it is usually fast enough and the GPU is the limit. With IGPs, what you want is to be able to play any game at low/medium, maybe medium/high settings. First you need a CPU that can pull off a decent baseline, because graphics settings are usually much more adjustable, whereas if your CPU is too slow, especially in RTS games, there is not much you can do.

Even if the HD 5200 doubles the HD 4000, it will still not be a GPU that allows high settings on anything but older games. What would be nice from an IGP is to allow medium settings on most games, including newer ones.
 
On a side note, I had the same discussion regarding CPU limitations earlier, so I agree with you. Recent games are more CPU intensive than they let on. That's why sometimes a HD 4000 coupled with a quad-core CPU may be able to win out even against some discrete GPUs (like the GT 330M and GT 9600M in the old MBP). I have no doubt the same would happen with GT3.

However, that's just because the CPU is holding the platform back. Not because the GPU isn't capable.


Plus if GT3 is clearly better than Richland, then power consumption and TDP have to suffer. In that case, Intel can either introduce higher TDP packages, thus forsaking battery life and efficiency, or they can reduce CPU performance in order to match TDP with their mainstream GT2 counterparts. But then they'll run into the same CPU limitation trap discussed above.

And then you'll have to wonder which parts Apple would choose for their upcoming MacBooks. I know for a fact that dear ol' Apple has never pushed graphics performance aggressively...

And personally, I would choose more battery life and CPU performance over more graphics performance coupled with a mediocre CPU.

Very true. But an APU (both Intel's and AMD's) is a CPU + IGP. If either sucks, then the whole chip is affected. It doesn't really matter if AMD manages to increase their IGP performance if their CPU is holding it back. What I mean by this is that it is perfectly all right to say Intel's IGP is better if it is better only because AMD's CPU is holding theirs back. The IGP is part of the package, and if your CPU sucks too much, then your IGP will suck too, regardless of whether it is technically better.

Intel is currently in a better position than AMD efficiency-wise (22nm vs. 32nm for Trinity/Richland and 28nm for Kaveri). If they throw the same amount of power at the IGP, Intel will come out ahead (evidenced by the fact that Intel is roughly equal to Trinity at ULV power levels with much better CPU performance). I think Intel demoed a ULV Haswell chip running Unigine Heaven at the same level as an Ivy Bridge chip at half the power usage.

Also, a better IGP may not hurt battery life at all. Battery life of mobile devices is mostly determined by idle or light usage; as long as that stays similar, battery life should not really be affected.


GT3 won't be included in every chip.

Yes, 3DMark does have a separate GPU test, and AMD would lead by even more there, because in the general test there is a section on CPU performance and an i7 will wipe the floor with the A10 in CPU physics. All I'm saying is that 3DMark is not even a good performance indicator for AMD APUs, period. Trinity increased 3DMark scores by 36%, yet games only improved by 18%. Not a single game improved by the same amount as the 3DMark score did. If Richland continues this trend, then the 20% gain in 3DMark scores will only equate to about 10% in real-world applications.
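That projection, written out (assuming Richland's real-world gains scale down from synthetics the same way Trinity's did):

```python
# Trinity: +36% in 3DMark translated into roughly +18% in real games.
synthetic_to_real = 0.18 / 0.36          # ~0.5 carry-over from synthetic to real world
richland_synthetic_gain = 0.20
print(f"Projected real-world gain: {richland_synthetic_gain * synthetic_to_real:.0%}")  # ~10%
```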

And anything within 10% is pretty insignificant (I'd say they're the same), except in the odd CPU-heavy game where AMD will get killed (GW2, Hitman: Absolution), where Trinity is already CPU-limited. That will push Intel into the higher position. Intel also has a significantly better memory controller.

AMD getting their desktop APU performance to match a 7870M is possible, but only with GDDR5. Getting 40-watt GPU performance into a 35-watt chip (and they would need to improve their CPU substantially to keep up with a 7870M) is not going to happen for several years.
 
I think Intel will squeeze double the performance of the HD 4000 out of the HD 5200, as they claim.

However, remember that all the technical deep dives revealed the Nvidia 650M in the rMBP is overclocked. The HD 5200 will still be far away in pure performance from the 650M in the rMBP.
 
UPDATE: Anandtech released a benchmark of the 5200, showing it to be 30-40% slower than the 650M.
 
I think Iris Pro might be a better GPGPU than something sitting on PCI Express for general purposes.
 
UPDATE: Anandtech released a benchmark of the 5200, showing it to be 30-40% slower than the 650M.

Then that must at least be confirmation that the 13" rMBP will be getting the 5200 Iris Pro graphics, right... if it's that bad...
 
Then that must at least be confirmation that the 13" rMBP will be getting the 5200 Iris Pro graphics, right... if it's that bad...

30% slower than a dGPU is not bad by any measure for an iGPU. The thing is, having Iris Pro means a lower CPU clock to compensate, and Apple cares more about CPUs than GPUs.
 
30% slower than a dGPU is not bad by any measure for an iGPU. The thing is, having Iris Pro means a lower CPU clock to compensate, and Apple cares more about CPUs than GPUs.

Yes, but with the eDRAM, there should be more opportunity to use the GPU for computing than with a discrete card.
 
30% slower than a dGPU is not bad by any measure for an iGPU. The thing is, having Iris Pro means a lower CPU clock to compensate, and Apple cares more about CPUs than GPUs.

What are you anticipating in terms of CPUs and GPUs for the MacBook Pro Retina line-up for 13" and 15"?
 