Really? I'm pointing out that the 9600 GT (and by extension the 330M) is not consistently 50-100% better than the HD 4000. I'm not cherry-picking results; there are only a few benchmarks there and I've listed all of the relevant ones (AnandTech only has StarCraft and Half-Life 2 benches). I even made a note that I was only looking at two games. If there were more, I would have used them.

They are not consistently 50-100%, but that's about the average.

For instance, just because my rMBP 15" matches the rMBP 13" in general usage (Safari, etc...) doesn't mean it's not faster, right?

I believe that in most cases, when a gpu runs out of memory it uses system RAM instead.

No. When a dedicated GPU runs out of memory, it simply stalls until more memory is available for it to access things. System memory is used only as a cache, and not as work memory (memory that the GPU can actively address, read from and write to). The situation is different with integrated GPUs, where video memory is shared from system memory. This is highly dependent on drivers, of course, but the simple version is: since there are VRAM-bound situations, it's clear the dedicated GPU does not have enough memory.
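The stall-versus-spill behavior described above can be sketched as a toy model. This is purely illustrative (the function, numbers, and cost-per-MB figure are all made up, and real drivers are far more complicated), but it shows why a working set that overflows a 256MB card tanks frame time:

```python
# Toy model (illustrative only, not how real drivers work) of the point
# above: resources that don't fit in dedicated VRAM force the GPU to
# stall while they are shuffled in from system memory every frame.

def frame_time(working_set_mb, vram_mb, base_ms=10, stall_ms_per_mb=0.5):
    """Hypothetical frame time: any overflow past VRAM adds stall time."""
    overflow = max(0, working_set_mb - vram_mb)
    return base_ms + overflow * stall_ms_per_mb

# 256 MB card with a 400 MB working set: 144 MB must be swapped per frame.
print(frame_time(400, 256))   # 82.0 ms -- frame time balloons
print(frame_time(400, 1024))  # 10.0 ms -- everything fits
```

The exact costs are invented, but the shape of the curve (flat until VRAM runs out, then a cliff) is the behavior being argued about.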

Looking at your StarCraft 2 results, you are still proving my point: the 330M is not in a league of its own. It's exactly 25% faster than the HD 4000, indicating somewhat comparable performance. (Not to mention that is a ULV HD 4000, and as you said yourself, the HD 4000 in other configurations can be much faster, so the HD 4000 in a standard i5 Ivy might be directly comparable to the 330M.)

Because StarCraft 2 is also a CPU-bound game. It just happens to be VRAM-bound as well when at higher graphics settings.

http://www.techspot.com/review/305-starcraft2-performance/page13.html

Looking at it as a CPU-bound game, it'd also explain why the HD 4000 can somewhat match the 6490M and 330M. StarCraft 2 only takes advantage of 2 cores, so it only needs 2 fast cores rather than 4 cores. The quad-core advantage of the MBP 2011 is then equalized, and it's just a matter of which CPU can Turbo Boost higher at that point.

It also explains why the GT 330M with the slower CPU was running slower than the other two computers.

You say yourself "Even if HD 4000 can get 1000 GFLOPS, it'd still be limited by memory bandwidth since it has to leech off of system RAM. Meanwhile, the GT 330M enjoys more RAM bandwidth with GDDR3."

Okay, so the HD 4000 has more VRAM, but it's also much slower VRAM.

Faster VRAM doesn't help when there is not enough of it to store things.

But when there is enough room to store things, then faster VRAM will be just that: faster.

In a situation where both have the same amount of VRAM and it's not enough, the one with faster VRAM will obviously still be faster.

The question ultimately comes down to "which is more playable?" when looking at graphical performance. I used high-resolution, high-setting comparisons because Intel also tanks when settings and resolution are turned up.

Yeah, then if you want to compare, use something other than a MacBook's GPU. Base-model GPUs like the 6490M, GT 330M, and 9600M GT were all limited to 256MB VRAM.

Look at my slide of 900p High StarCraft and your slide of 800p Medium StarCraft. Look at the difference between the 6490M with 256 MB VRAM and the 6750M with 512 MB VRAM. The delta between the two is much smaller at the High 900p settings than at the Medium 800p settings, indicating that VRAM is not a problem (i.e. 256 MB of VRAM is not limiting the 6490M at 900p High; the difference between the two is what one would expect).

It's simple, really. The 6750M is CPU-bound at High settings.

For StarCraft 2, when you crank up graphics, you also crank up physics and particle simulation.

It's easy to see: compare the 6750M and the Retina MacBook at Medium settings, then compare the same at High settings. Suddenly the delta is much larger, indicating that the 6750M is running into a CPU-bound situation.

At least AnandTech does their benches in the same area for their comparisons. I have no idea whether the StarCraft benchmarks on NotebookCheck are at the beginning of a scene or in the middle of a heavy battle, because a lot of them are user-submitted. Stare at a wall: 60 fps, great! Turn around: 30 fps, wtf? When NotebookCheck does a review of a game themselves, the benchmarks should be comparable, but when the results are user-submitted, I'd take them with a grain of salt.

All of the results on notebookcheck are done by themselves on multiple laptops and then the average is taken. I don't see where you get "user-submitted".

NotebookCheck said:
The following benchmarks stem from our benchmarks of review laptops. The performance depends on the used graphics memory, clock rate, processor, system settings, drivers, and operating systems. So the results don't have to be representative for all laptops with this GPU. For detailled information on the benchmark results, click on the fps number.

I'm pretty sure that the 330m used about 20-25 watts. Arrandale standard voltage has a TDP of 35 watts.

Considering that a ULV i5 Ivy gets better performance than a standard-voltage Arrandale, and its iGPU is about 70-80% of the 330M, it's nothing short of amazing how far power efficiency has come: that level of performance now fits in about 20 watts total.

Note: the 330M in the MacBook Pro 2010 was underclocked to approximately 70% its original performance. So even if it was using 20W - 25W at its max performance settings, it's down to about 15W - 17W in the MBP 2010. That's how Apple conserves battery life.

So efficiency has improved, but not by as much as you may think.
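The power figures above can be sanity-checked with some quick arithmetic. The linear power-scales-with-clock assumption is mine (a rough approximation; real power scaling is not linear), but it reproduces the quoted 15W-17W range:

```python
# Rough check of the estimate above, assuming (a big assumption) that
# power scales roughly linearly with the ~70% underclock in the MBP 2010.
full_power_w = (20, 25)   # claimed power range for a stock GT 330M
scale = 0.70              # the MBP 2010 underclock factor quoted above
low, high = (round(p * scale, 1) for p in full_power_w)
print(low, high)          # 14.0 17.5 -- in line with "about 15W - 17W"
```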
 
http://www.anandtech.com/show/5772/mobile-ivy-bridge-and-asus-n56vm-preview/6

http://www.notebookcheck.net/Intel-HD-Graphics-4000-Benchmarked.73567.0.html

Look at the hd 4000 and the 6630. They are quite competitive. Not to mention a 6630 is essentially a downclocked 6750m.

There are games there (skyrim, bf3, dirt 3, batman) where the hd 4000 is definitely in striking distance of the 6630 (yes some others are significantly worse shogun2, civ 5) but for some games such as skyrim, the hd 4000 looks to be better than the 6490m.

The NotebookCheck review compares it to the 7540M, which is essentially a 6490M at 700 MHz instead of 800 MHz (note this is the DDR3 version paired with an A6), and it wins almost every time.

In some games it's almost competitive with the 630M (Metro, Skyrim).
 
I don't know if this has been mentioned or not already, but the GT 650M can come in a GDDR3 AND GDDR5 variants. GDDR5 offers significantly higher memory bandwidth which translates into much higher performance.

It is entirely possible that Intel tested their GT3 against a lower clocked and GDDR3 variant of the GT 650M GPU.
 
If it's possible, then I suspect that's what they did. In my travels, when Marketing and Sales start to get involved in things, those types of tactics seem to take a front seat, especially when it will give the product in question a better showing.

With that in mind, I look at any sort of demonstration from manufacturers with a certain degree of skepticism ... Apple, Intel, nVidia, AMD, etc. included.
 
Those MW3 numbers are slower given the resolution difference.
I never tried Crysis 2, only the original Crysis, and that only worked with almost everything on low except for medium textures, I think.

The 330M is default at 575/1265/1066 according to NBC.
In the MBP it is 500/1100/790, which is 87%. So not quite that bad.
It runs fine at 600/1300/890 though. The memory is really the biggest problem. I could run it stable at 648/1675/..., which is about the max overclock. Memory was the least stable and I never actually stayed above a 900 MHz overclock. I don't know what poor GDDR3 they put in there. Even the max setting was only some 980. Afaik most GDDR3 runs fine up to 1100 and beyond.
Usually the extra just isn't worth the heat and stability risk. Since I don't play any MW3 at the moment I don't even bother with the 600 MHz overclock.
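The underclock ratios implied by those clock figures (core/shader/memory, in MHz, as quoted above) can be checked quickly:

```python
# Checking the "87%" figure above from the quoted clocks.
stock = {"core": 575, "shader": 1265, "memory": 1066}  # NBC reference clocks
mbp   = {"core": 500, "shader": 1100, "memory": 790}   # MBP 2010 clocks
for domain in stock:
    print(f"{domain}: {mbp[domain] / stock[domain]:.0%}")
# core: 87%, shader: 87%, memory: 74% -- the memory clock took the
# biggest cut, which fits the observation that memory is the weak spot.
```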

Overall it is a terrible GPU that is way under-specced in the VRAM department. 256MB excludes it from many new games or makes them unusable. It also really destroys performance with some settings that shouldn't mean more than a 5-15% performance difference.

If I ever get a new notebook with a dGPU, I won't settle for anything less than 2GB. As games get higher and higher resolution textures, they need the space, and the minimum constantly grows. That stuff is so cheap (compared to the rest) that the manufacturers should find the little space on the logic board. Loading textures is just not very demanding for the GPU but offers more detail than many demanding shader settings.

What some of this discussion also shows is that when you deal with hungry games, it is often better to go for more CPU performance than GPU, as low settings are often CPU-bound. Total War especially is one where it would still be fun at lowest settings if the CPU would just keep up.

All in all, Intel delivers way more CPU speed than Trinity and even equals Trinity with its lower-TDP offerings. Sure, Intel runs on 22nm and AMD probably could do better at 22nm, but for a non-philosophical debate it only matters what one can buy today. AMD is much worse on the CPU side and isn't even significantly better on the GPU side, except for its top 35W offering, and that's not a game-changing advantage either.
Kepler on 22nm on-die with embedded high-throughput VRAM would be awesome, but given the actually available, non-fictional alternatives, Intel is offering a lot out of the bunch. Kepler over Fermi was also a huge jump in efficiency. There is still much headroom for Intel to improve. If they deliver better drivers and keep at it, Broadwell might be another significant efficiency jump, not just from 14nm and whatever comes after.
With Atom adopting the Ivy Bridge GPU in the next iteration, they may focus quite a bit on power efficiency in the whole thing. After all, that GPU has to compete with tile-based rendering in the ultra-low-power market. Those are, as far as I've read, simply more power efficient because they avoid memory accesses, which are relatively expensive (in power) on those SoCs. Haswell is also promoted as basically delivering the same performance at 8W, including the CPU.
 
http://www.anandtech.com/show/5772/mobile-ivy-bridge-and-asus-n56vm-preview/6

http://www.notebookcheck.net/Intel-HD-Graphics-4000-Benchmarked.73567.0.html

Look at the hd 4000 and the 6630. They are quite competitive. Not to mention a 6630 is essentially a downclocked 6750m.

There are games there (skyrim, bf3, dirt 3, batman) where the hd 4000 is definitely in striking distance of the 6630 (yes some others are significantly worse shogun2, civ 5) but for some games such as skyrim, the hd 4000 looks to be better than the 6490m.

The NotebookCheck review compares it to the 7540M, which is essentially a 6490M at 700 MHz instead of 800 MHz (note this is the DDR3 version paired with an A6), and it wins almost every time.

In some games it's almost competitive with the 630M (Metro, Skyrim).

The truth is... most of those tests are CPU-bound. That's why the HD 4000 appears so close to 630M and 6630M. Let's walk through some of those.

Batman: Arkham City:

http://www.neoseeker.com/Articles/Games/Features/Batman_Arkham_City_performance/2.html

Batman: Arkham City shows scaling across four processor cores, which is currently the standard and the maximum threshold of the Unreal Engine. This means there will be a noticeable drop in performance when using a processor with less than four active cores. Additionally, the game displayed a strong performance for higher clock frequencies: in our results we observed a 20% frame rate increase when our Intel processor was overclocked to 3.6GHz from 2.66GHz.

6630M was coupled with a dual-core CPU in those benchmarks. The HD 4000 was coupled with a quad-core CPU.

Next... Battlefield 3

[Chart: Battlefield 3 fps vs. number of active FX-8150 cores]


Given sufficient CPU performance, there shouldn't be a difference... but if the CPU is sufficiently slow, then the difference will show. Dual-core compared to quad-core can result in that much of a drop. And again, as mentioned, it was 6630M + dual-core CPU vs HD 4000 + quad-core CPU.

Next... Skyrim...

Again, a massive CPU-bound situation:

[Chart: Skyrim fps vs. CPU clock speed]


Skyrim loves pure per-core CPU performance. The scaling is so bad that you'd lose almost 1-2fps per 100MHz difference regardless of GPU.

Next... Dirt 3...

http://www.techspot.com/review/403-dirt-3-performance/page7.html

It just loves more cores.

It was also interesting to see that Dirt 3 really dislikes dual-core processors as the Phenom II X2 560 averaged 54fps, making it effectively twice as slow as the Phenom II X4 980. Another fun fact: Dirt 3 seems to prefer hexa-core processors over their quad-core counterparts as the higher clocked Phenom II X4 980 performed worse than the Phenom II X6 1100T.

Like I said, HD 4000 comes in many flavors. The ones in the quad-core chips are not just clocked higher, they also have the advantage of being strapped to quad-core chips.

Try comparing HD 4000 strapped to dual-core CPUs and see what happens?

What some of this discussion also shows is that when you deal with hungry games it is often better to go for more CPU performance than GPU as low settings are often CPU bound. Especially Total War is one such where it is still fun at lowest settings if the CPU would just keep up.

Yeah. Like I just showed, 4 of the more recent games are much more CPU-bound even at higher settings. That's probably why your MBP 2010 can't compete with HD 4000: it has a slower CPU. Even compared to a MacBook Air, I don't think your MBP 2010 would win any award in the CPU department.

But its pure GPU performance is still ahead, if and when game developers decide to not batter the CPU so much anymore.
 

God, I don't know why I'm doing this. None of those GPUs are powerful enough to be bottlenecked by the CPU (an i7 mobile Ivy is equal to a desktop i3 in single-thread, better in multi-thread). The only exception is the A6 (which is a slow quad but should be able to keep up with the 7450), except in the rare really CPU-heavy game (GW2, Hitman).

Skyrim WAS bad when it came out, but it has since been patched. Here are the benchmarks of a $650 PC using an i3-2120 at 3.3 GHz and a 6950.

[Chart: Skyrim, High, no AA]

[Chart: Skyrim, Ultra, 8x MSAA]


After patch 1.4 came out in early February, much of the CPU bottleneck disappeared (the NotebookCheck review was done at the end of April, same with the AnandTech review). Much of the CPU-bound stuff occurs when details are high (like you said, physics on Ultra in SC2). Skyrim on an i3 is CPU-bound at ~85 fps on High and 65 fps on Ultra (both of which are far away from 40 fps on Low).

No, I don't know where you get this stuff from. If the AnandTech Skyrim benchmarks were CPU-bound, you'd think the i7 ULV + 640M would lose to the i7 quad + HD 4000. Instead the ULV is more than 50% better (note Kepler drivers for the 640M have since improved).

[Chart: AnandTech Skyrim benchmark, i7 ULV + 640M vs. i7 quad + HD 4000]


You do realize that the BF3 chart there is GPU-bound at 70+ fps? You are showing me an exact case of a COMPLETELY GPU-BOUND game, because when you change the CPU you get no difference in fps. BF3 multiplayer is CPU-heavy; singleplayer is not. The benchmarks from NotebookCheck are low quality and low fps (singleplayer), showing no lack of CPU power.

This pretty much says that BF3 is completely GPU-bound. There is basically no difference between an i7 and a Phenom X2, and both are still capable of a 60+ fps average at High, 1050p.

[Chart: Battlefield 3 CPU comparison]


Dirt 3 is again not demanding enough to be CPU-bound. The Athlon X2 265 still gets 51 fps average at 1080p max settings. Our GPUs are in the 40-50 fps range, and at Medium (where CPU demand is lower). It looks like L3 cache is the culprit.

[Chart: Dirt 3 CPU scaling]


Batman is hardly cpu bound. (I can't load the cpu image for some reason).

[Chart: Batman benchmark]


Yes, some of these charts deal with desktop CPUs/GPUs, but a mobile i7 is a powerful piece of hardware: better than an i5 Ivy at multi-thread and about 10-20% slower at single-thread. NotebookCheck has the 7450 with an A6, which should be fine for the majority of the tests, but we can throw that out if you want to.

The 630M uses an i7-2670QM; the HD 4000 uses an i7-3820QM or an i7-3610QM. All of these processors should have no problem driving a 580M in almost every single game without any CPU bottlenecks. They are all within pretty much 20% of each other. "Bottleneck" is a pretty loose term: if you throw enough GPU power at any game, you can make it CPU-bottlenecked. The thing is, the HD 4000 and 6630M are nowhere near powerful enough to expose a CPU deficiency (unless paired with a chronically bad processor, which any modern Intel is not).
 
Yeah. Like I just showed, 4 of the more recent games are much more CPU-bound even at higher settings. That's probably why your MBP 2010 can't compete with HD 4000: it has a slower CPU. Even compared to a MacBook Air, I don't think your MBP 2010 would win any award in the CPU department.

But its pure GPU performance is still ahead, if and when game developers decide to not batter the CPU so much anymore.
Unfortunately I cannot buy myself any ice cream with that.
The Air is in a few instances faster, sometimes slower, sometimes equal. My Arrandale can go all-out while gaming, while the ULV IB needs to give some TDP headroom to the GPU, which limits its clock speed quite a bit. I guess the two are at least equal. Games usually also only need fast integer performance and fast memory access. The IB core is much wider but not so much faster at simple things. Its memory access is faster, but clock speed should easily make up for the difference.

Anyway, at the end of the day, the performance you get is what matters. If my GPU wins an award for having more theoretical punch, that is worthless.
The 256MB VRAM thing was totally unnecessary, and I let myself be talked into the fantasy that it doesn't matter. Truth is, developers of engines can do more if they know they have VRAM to work with. Games like Arma 2 or GTA with wide outdoor viewing distances simply need VRAM.
I think even a 650M is under-specced at 512MB and I wouldn't buy one. At 1GB it is also crippled compared to what you could do with it 2 years down the road in some games.
 

1GB can feel kind of slim in certain OpenGL apps. Performance can be great up to a certain poly count, then it just drops off a cliff. It's quite annoying. 2011 and 2012 GPUs are quite powerful in terms of potential, yet I've observed these kinds of problems consistently. There are obviously workarounds (hiding meshes, wireframe modes), but you can hit the same kinds of problems on a Mac Pro with a 5870 in OS X.
 
I think it is also so unnecessary, as VRAM really isn't very expensive compared to all the other stuff around it. I think for Apple it is really only a question of space on the logic board. They didn't want to find that little extra room for more chips and more than 1GB. They also want the cheaper models to look worse, so they cut the top offering in half regardless of where it sits.
OS X doesn't need much when you are just browsing, but when you need VRAM you're at a loss. The notebooks sell for $2000+, but the 20 bucks for some extra RAM cannot be found.
 
None of those gpu's are powerful enough to be bottlenecked by the cpu (an i7 mobile ivy is equal to a desktop i3 in singlethread, better in multithread). The only exception is the a6 (which is a slow quad but should be able to keep up with the 7450) except on the rare really cpu heavy game (gw2, hitman)

It actually doesn't take a powerful GPU for the CPU to become a bottleneck.

Given a sufficient amount of decoding, decrypting, calculations (A.I, physics, etc...) that need to be done on the CPU, things will still be delayed enough. It's just that you see it more clearly when the GPU is more than powerful enough for fps to be clipped as the GPU significantly outperforms the CPU.

Or look at it another way, if the CPU wasn't the limiting factor, then clearly, you shouldn't see better performance with the same GPU by increasing CPU count or by increasing CPU frequency, right?

But this happens:

[Charts: HD 4000 game benchmarks paired with different CPUs]

Source: http://www.notebookcheck.net/Intel-HD-Graphics-4000-Benchmarked.73567.0.html

See how HD 4000 coupled with 3820QM is so much faster? If the CPU wasn't the limiting factor, that shouldn't happen!

And that's what I've been trying to say. HD 4000 performance is very dependent on which CPU it's paired with. Or in other words, most modern games are actually more CPU-bound than we think. Slower CPUs don't necessarily clip fps, but they do cause slow-downs more so than faster CPUs. If you believe that HD 4000 is at least comparable to other GPUs, then you must also see that the same thing would apply to other GPUs as well.
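The argument above can be sketched as a deliberately crude model (the function and all numbers are hypothetical; real pipelines overlap CPU and GPU work): per-frame cost has a CPU part and a GPU part, and the slower of the two sets the frame rate, so a faster CPU can raise fps even when the GPU looks like the weaker component on paper.

```python
# Minimal bottleneck sketch: the slower stage per frame limits fps.
def fps(cpu_ms_per_frame, gpu_ms_per_frame):
    return 1000 / max(cpu_ms_per_frame, gpu_ms_per_frame)

# Same hypothetical GPU (25 ms/frame), two different CPUs:
print(fps(30, 25))  # slow CPU dominates: ~33 fps
print(fps(20, 25))  # faster CPU: now GPU-limited at 40 fps
```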
 

That has nothing to do with CPU power. It's because the HD 4000 has a max turbo of 1250 MHz on the 3820 while the 3610 has a turbo of 1100 MHz. The GPU clocks are different; the 3820 is more powerful than the 3610 in terms of graphics performance, and so scores for the higher-end i7 are better by about 10% because of the higher-clocked GPU.

Edit: It also doesn't explain why the Sandy Bridge quad consistently beats the 3820 while having worse CPU performance.
 
That has nothing to do with CPU power. It's because the HD 4000 has a max turbo of 1250 MHz on the 3820 while the 3610 has a turbo of 1100 MHz. The GPU clocks are different; the 3820 is more powerful than the 3610 in terms of graphics performance, and so scores for the higher-end i7 are better by about 10% because of the higher-clocked GPU.

Uh, it's actually 17% faster in some cases. Even if the clock speed is that much higher, performance doesn't transfer linearly.

Edit: It also doesn't explain why the Sandy Bridge quad consistently beats the 3820 while having worse CPU performance.

Slower CPU just means lower performance. It doesn't necessarily mean fps is completely clipped by the CPU.

Or in other words, would you expect to see, for instance, more fps in Battlefield 3 with a better CPU coupled to that 630M? Even at low settings?

Regardless of your answer, the reality is yes.

Here's the 630M strapped to a 2670QM.

http://www.notebookcheck.net/Review-Acer-Aspire-5755G-Notebook.68113.0.html

Low: 34.3 fps / Medium: 22.6 fps / High: 16.4 fps

And here's the very same 630M strapped to a 3630QM.

http://www.notebookcheck.net/Review-HP-Envy-dv7-7202eg-Notebook.86978.0.html

Low: 42.4 fps / Medium: 28.3 fps / High: 21.6 fps

Edit: and before you throw the "it's a different driver" response, please note that the performance improvement is roughly 24% at low settings, 25% at medium, and 31% at high. I don't recall nVidia claiming such huge performance improvement in any recent driver update.
 

The 3820 also has 8 MB of cache which the gpu has access to, speeding up certain games where it matters.

There is no way the 3630QM bottlenecks the HD 4000. People are pairing the 3630QM with a 680M/7970M and getting far higher performance.

[Chart: Battlefield 3 with high-end mobile GPUs]


The 3820 with a 7970M stomps the 3820 with the HD 4000 (NotebookCheck's High is 768p), indicating that there is no CPU bottleneck. Look at that chart there; there is clearly no CPU bottleneck (BF3 is a game whose SINGLEPLAYER requires a notoriously strong GPU but can run on a weak CPU).

I don't think you saw this, but the 630M in the Envy is clocked at 800 core / 900 memory while the Acer is running at 672 core / 800 memory. That is a 19% overclock on the core and a 12.5% overclock on the VRAM. NotebookCheck also says that the Acer suffers from high temperatures. We are looking at driver versions 285.64 and 306.97 respectively. Some variation between notebooks is expected (I think any variation needs to be over 5% to be statistically significant, especially when comparing different laptops). Taking this into account, the results are statistically explainable.

There is very little clock difference between those CPUs as well. 2.2 GHz Sandy turbo to 3.1 vs 2.4 GHz Ivy turbo to 3.4 is at most a 16% increase in CPU power (3.4 GHz Ivy is equal to a 3.6 GHz Sandy; 3.6/3.1 ≈ 16%), far smaller than the 24%-plus increase in frame rate. (if you think this is still the reason, look at the chart above, which clearly shows no cpu bottleneck even with much more powerful gpus)
 

Again, performance does not transfer over linearly. It's still a 30% difference even if you want to take into account boost, drivers, and every other little thing.

If you do the math, a 15% faster CPU plus a 19% faster GPU gives a theoretical 37% overall improvement. Factor overhead in and that's closer to that 30% figure, unless you're saying you can get 30% more performance by just overclocking the GPU by 19%.
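The compound figure comes from multiplying the two independent speedups rather than adding them:

```python
# Independent speedups multiply: 15% CPU gain on top of 19% GPU gain.
cpu_gain = 1.15   # ~15% faster CPU
gpu_gain = 1.19   # ~19% higher GPU clock
print(f"{cpu_gain * gpu_gain - 1:.0%}")  # 37%
```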

And if you want to bring up temperature, do note that 630M + any random CPU is still going to put out more heat than HD4000 + any random CPU, so chances are the 630M will throttle sooner than HD 4000.

(if you think this is still the reason, look at the chart above, which clearly shows no cpu bottleneck even with much more powerful gpus)

How many times must I say this until you understand? A CPU-bound situation does not necessarily mean fps is clipped. What you're trying to describe is an absolute extreme case of CPU-bound, where the GPU is significantly faster than the CPU and the CPU cannot keep up.

A slower CPU may still cause lower performance even if the GPU is not that powerful.

[Chart: Battlefield 3 fps vs. number of active FX-8150 cores]


Running the game with only 2 cores obviously causes performance issues even on a CPU that's capable of running it much faster. That's just how it is.

Here, I found a graph that doesn't have that weird 81fps limit:

[Chart: Battlefield 3 CPU scaling (SweClockers)]


Link in case you can't see it:
http://www.sweclockers.com/artikel/14650-prestandaanalys-battlefield-3/5

I know Battlefield 3 is CPU-intensive because I have played the game. On my MBP 2011, my Mac Mini, the MacBook Air, and now my rMBP. It's much more CPU-intensive than it seems. Also quad-core performs much better than dual-core.
 

Temperature varies across laptops, so that is a valid point: some laptops can take the additional heat, while others cannot and therefore throttle.

That same BF3 chart is not proving your point. Yes, there is a slight CPU bottleneck with two cores; however, on High at 1050p we are still getting over 70 fps. You didn't hear me when I said that, given enough GPU power, any game can have a CPU bottleneck; the problem is that the HD 4000 is nowhere near powerful enough to cause this with any processor it's paired with. Any modern Intel mobile processor can beat or match Bulldozer at 3.6 GHz (since Bulldozer has at most about 60% of Ivy Bridge's IPC, that works out to around 2.2 GHz).

You are also forgetting that those Notebookcheck benchmarks are done in the singleplayer game, which is much less CPU-intensive (I said singleplayer does not require a good CPU, while multiplayer does). And we are getting about 40 fps max. There is no CPU bottleneck.

If you really think the CPU is so important in BF3, play singleplayer, then deactivate the GT 650M in your rMBP and use the HD 4000; I guarantee you will notice a difference.

A game is CPU-bound if changing the CPU power results in changes in fps. If there is no change in fps, then the game is not CPU-bound. (The bottleneck is the slowest part of the system and is what drags down fps; if the CPU is not the slowest part in the system, it will have no effect on the fps. Bottlenecks change depending on which part of a given system is the slowest.) GW2 is notoriously CPU-bound, but given an i7-3820 and a 540M you would be hard pressed to find situations where the CPU would be holding you back at 1080p.
You keep saying that losses in fps are due to poor CPU performance, so obviously we are talking about worst-case scenarios.
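The test described above can be sketched as a toy model (all fps numbers here are hypothetical): the delivered frame rate is capped by the slower component, so changing CPU throughput only shows up when the CPU is the limiting part.

```python
def delivered_fps(cpu_fps, gpu_fps):
    """Toy bottleneck model: the frame rate is set by the slower component."""
    return min(cpu_fps, gpu_fps)

# GPU-bound (think HD 4000 paired with any reasonably fast CPU):
# halving CPU throughput changes nothing, so the game is not CPU-bound here.
assert delivered_fps(cpu_fps=120, gpu_fps=25) == delivered_fps(cpu_fps=60, gpu_fps=25)

# CPU-bound (powerful GPU, weak CPU): now CPU changes do move the fps.
assert delivered_fps(cpu_fps=45, gpu_fps=140) < delivered_fps(cpu_fps=90, gpu_fps=140)
```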

GPU: +19% core, +12% VRAM speed, plus better drivers, a different system, and margin of error can add up to about a 30% difference.
 
If you really think the CPU is so important in BF3, play singleplayer, then deactivate the GT 650M in your rMBP and use the HD 4000; I guarantee you will notice a difference.

I'm not sure if you have ever used a MacBook with a dedicated GPU, but that's not possible under Bootcamp, and there is no Battlefield 3 for Mac.

A game is CPU-bound if changing the CPU power results in changes in fps.

And that's what my chart shows. BF3 does get more fps by increasing CPU power. Especially going from dual-core to quad-core. Please look back at the original discussion and notice that we do have dual-core processors in the comparison.

You keep saying that losses in fps are due to poor CPU performance, so obviously we are talking about worst-case scenarios.

Nope. Most benchmarks you have shown are with desktop processors, which have much better thermal profiles (having much better coolers), and which are also made to work at much higher default frequencies without any Turbo Boost. So those are more "ideal" situations for laptops.

To be fair, Battlefield 3 and many of the game titles mentioned were not made to run on laptops, but that's precisely why laptops are actually "worst case scenarios" for them.

GPU: +19% core, +12% VRAM speed, plus better drivers, a different system, and margin of error can add up to about a 30% difference.

I don't know if you're just intentionally being this way, but again, performance does not add linearly, nor does it translate linearly.

You don't get a performance boost addition by overclocking core and then memory. Put that way, I should have a 100% improvement by boosting core by 50% and then memory by 50%, right?
 
Okay, why do you think people buy dedicated GPUs to play games such as BF3 anyway? Why does your quad-core Ivy + 650M get better fps than the 3820 + HD 4000? I don't care whether BF3 can be run on a Mac or not; you are nitpicking away from the main issue. In games under OS X, why do you use the 650M? Because it runs better.

Your chart is correct. However, you are drawing incorrect conclusions from it. The Bulldozer CPU does appear to be having a slight effect on framerate when there are only two cores; however, the CPUs being discussed (the 3610 and 3820) are both quad-core with much higher IPC under Turbo. That chart is using a very powerful GPU (probably a 680 or 7970, etc.). If the GPU were instead something much weaker, such as a 7770 GHz Edition, you would see no CPU bottleneck, because the 7770 GHz Edition could only do something like 40-50 fps at most at those settings and any CPU deficiency would be masked. The HD 4000 is far weaker still, so your game would never be restricted by a CPU bottleneck, because the GPU would always bottleneck first. (The HD 4000 might be capable of putting out 10 fps to that screen and the CPU might be capable of 60 fps, but the game will run at 10 fps because it is limited by the GPU.)

[Image: Battlefield 3 laptop benchmark chart]


There is clearly a large difference between the Alienware with the 3720 and the 680M and the Clevo with the 3720 and the 650M. Despite having the same CPU, BF3 runs far better on a more powerful GPU; hence the CPU is not limiting its performance in any way. If I were to give you a laptop with a CPU that was 10 times as powerful but a 650M, you would only game as well as the Clevo, because your computer would lack GPU grunt.

Performance can add.

[Image: World of Warcraft average fps vs. memory speed]


There is Trinity on the desktop. By increasing the memory from 800 MHz to 1066 MHz (+33%), we can increase the framerate from 37 to 46 fps (+24%).
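A quick check of how much of that memory overclock translated into frames (numbers taken from the chart above; the sketch itself is just arithmetic):

```python
def scaling_efficiency(clk_old, clk_new, fps_old, fps_new):
    """Fraction of a clock increase that shows up as an fps increase."""
    clock_gain = clk_new / clk_old - 1  # relative memory-clock increase
    fps_gain = fps_new / fps_old - 1    # relative frame-rate increase
    return fps_gain / clock_gain

# 800 -> 1066 MHz memory, 37 -> 46 fps:
print(f"{scaling_efficiency(800, 1066, 37, 46):.0%}")  # about 73%
```

Sub-linear, but still a clear bandwidth-driven gain.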

[Image: World of Warcraft at 1920x1080]


Yet given the same RAM and timings, Trinity is clearly faster than Llano.

It even gets slightly less bandwidth.

[Image: SiSoftware Sandra memory bandwidth]


So we have a situation where increasing core power AND increasing memory bandwidth both give an advantage to the GPU. For the WoW test, an increase in core power at a given bandwidth gives an increase, and an increase in bandwidth at a given core speed gives an increase. Increasing either one increases the fps. Increasing both increases the fps even more.

Overclocking the chip (gpu from 800 to 1083 mhz).

[Image: 3DMark graphics score]


The CPU is also overclocked, but this is the graphics test, which relies only on the GPU (the physics/CPU test actually shows a smaller % improvement).

In any test I would expect a variation of about 5% between different laptops; a difference needs to be more than that to be statistically significant (systematic error).

Different drivers (game patches? I don't know, because I don't play the game) could account for a 5-10% variation.

I would also say that +/- 5% is effectively identical because of experimental error (run 3DMark a couple of times and it varies by a couple of percent).

Core increase of 19% + RAM increase of 12%: because of the slight increase in RAM speed, I would not be surprised to see an increase in fps of about 19%, i.e. the same as the core increase. Add 5% (experimental error) and 5-10% (drivers, different systems, etc.) and yes, we can easily see 30%. Would I expect it? No. Is it possible? Yes (19+8+4=31%, for example). Perhaps one notebook (the Acer) throttled the GPU a couple of times.

To answer your question: depending on your GPU, by overclocking your core speed by 50% and your VRAM by 50%, you could see an increase of up to and possibly slightly over 50% (but not 100%; I never said that). I would not expect this to happen often.
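That ceiling can be illustrated with a toy frame-time model (hypothetical millisecond figures), assuming each frame takes as long as the slower of shader work and memory traffic. Scaling both clocks by 1.5x then shortens the frame by 1.5x at most, i.e. about a 50% fps gain, never 100%:

```python
def fps(core_ms, mem_ms):
    """Toy model: a frame takes as long as its slowest stage."""
    return 1000.0 / max(core_ms, mem_ms)

base = fps(core_ms=20.0, mem_ms=18.0)              # core-limited baseline
oced = fps(core_ms=20.0 / 1.5, mem_ms=18.0 / 1.5)  # +50% core AND +50% memory
print(f"gain: {oced / base - 1:.0%}")              # 50%, not 100%
```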

Please explain why using a faster gpu can more than double fps in bf3, given the same processor (from the chart above).
 
Look at that Battlefield 3 graph you posted yourself.

[Image: Battlefield 3 laptop benchmark chart]


See how the Samsung Series 7 outperforms the Clevo with both having a 650M?

I'll only say this the last time: the Clevo has a lower thermal profile than the Samsung Series 7, and therefore, the CPU gets throttled earlier.

By that, the Clevo actually doesn't have enough CPU power to feed Battlefield 3, and the end result is that even a 3720QM would lose to a 3615QM.

If you still don't get it after this, then... I give up. It's been a lot of back and forth for nothing at all. If you want to believe that the HD 4000 is as good as the other GPUs you are comparing it to (330M GT or 6630M), then sure, please go ahead.

I've already used the chip, and I know enough that it doesn't quite compare to anything else. And that's all that matters to me.
 
Sorry, I meant to change that, but you beat me to it. The Clevo uses DDR3 VRAM while the Samsung uses GDDR5.

Still, compare the Samsung to the Alienware.

For the last time: it's not the CPU in these games.
 
The Clevo uses GDDR3, but it also has higher core clocks (950 MHz boost / 835 MHz base) vs the Samsung's (835 MHz boost / 745 MHz base).

Unless you're going to say the GPU is now not the bottleneck but that VRAM bandwidth is.

Honestly, go play the game. I've played it. I get more performance on my MBP 2011 with the 6490M because my Mac Mini 2011 has a dual-core CPU. You won't see the limitation on those benchmarks because they compare mostly with desktop chips that are all quad-core, but try the game on a dual-core computer vs quad-core and it's pretty apparent.

Here's a video that someone else made:
http://www.youtube.com/watch?v=1F3t2ek1dwA

So, is the GeForce 650M with 512MB memory still better than the HD 4000 or even the 4600?

I think it's safe to say "yes", the 650M is still faster than HD 4000 and even HD 4600. But some people would like to believe Intel integrated graphics are not that bad... since it's clear Apple won't put dedicated graphics into anything but its 15" MacBooks.

I don't want to be smug about it, but the reality is just that integrated graphics are... at the end of the day, just integrated graphics. If you want good graphics performance on a Mac for gaming or 3D modeling, the only way is to buy a Mac with dedicated graphics.
 
All the laptops in that chart are quad-core; dual-cores have nothing to do with that BF3 AnandTech chart.

The GDDR3 can certainly be a bottleneck; it has less than half the bandwidth of the GDDR5 version.
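For a rough sense of the gap: peak bandwidth is bus width times transfer rate. Assuming the commonly listed GT 650M configurations, a 128-bit bus with DDR3 at about 1800 MT/s versus GDDR5 at about 4000 MT/s (vendor-dependent figures, assumed here purely for illustration):

```python
def bandwidth_gb_s(bus_bits, mt_per_s):
    """Peak bandwidth: bytes per transfer times transfers per second."""
    return (bus_bits / 8) * mt_per_s * 1e6 / 1e9

ddr3 = bandwidth_gb_s(128, 1800)   # ~28.8 GB/s
gddr5 = bandwidth_gb_s(128, 4000)  # ~64.0 GB/s
print(f"{ddr3:.1f} vs {gddr5:.1f} GB/s, ratio {ddr3 / gddr5:.2f}")
```

With those assumed clocks the DDR3 card ends up at well under half the GDDR5 card's bandwidth.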

It's BF3 SINGLEPLAYER, not MULTIPLAYER. Multiplayer requires a strong CPU; singleplayer does not.

By the way, does your Mac Mini have the 6630M or only the HD 3000?
 
The dual-core comment was for the HD 4000 comparison. We're still on that one, right? Or is it just a general discussion on how Battlefield 3 is not CPU-bound now?

And again, I have played BF3. It makes no difference whether it's Singleplayer or Multiplayer. The game is just very CPU-bound if it only has 2 cores to deal with.

And my Mac Mini has the 6630M.
 