So wait, nVidia delivers within the same power constraints, at similar prices and in volume (Macs haven't jumped in price since moving from the 320M to the Intel stuff), but with much greater performance...

Tell me now, which priorities exactly are you talking about?

The business market isn't impacted at all by getting a same-cost nVidia chip that performs much better within the same power constraints for a given generation.

Why do people feel the need to apologize for Intel? The facts are plain as day: Intel can't do graphics. At all.

----------



Intel isn't catching up. They're still as far behind now as they were before. By definition, catching up means you'll eventually get there. After 14 or so years, I have no illusions of Intel ever "getting there".

And the 320M does as many FPS as the HD 3000 in OpenGL in what? CPU-bound benchmarks? Of course, it has a big CPU to help it. GPU-bound benchmarks? Oops, no, not there. The HD 3000 hasn't caught up to the 320M. You can't say something as generic as "FPS in OpenGL". There is no such thing as a static FPS count for OpenGL; it all depends on what API calls you're making, what you're rendering, and what part of your pipeline is the bottleneck.
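
To make the bottleneck point concrete, here is a toy sketch (all numbers are invented for illustration, not measurements from either machine): per-frame time is set by whichever of the CPU or GPU stage takes longer, so a faster CPU only lifts FPS while the CPU is the limiting stage.

```python
# Toy model of a frame-rate bottleneck. All numbers are hypothetical and
# only illustrate CPU-bound vs GPU-bound behaviour.
def fps(cpu_ms_per_frame, gpu_ms_per_frame):
    # A frame can't finish before the slower of the two stages does.
    return 1000.0 / max(cpu_ms_per_frame, gpu_ms_per_frame)

# CPU-bound scene: a faster CPU raises FPS even with a weaker GPU.
print(fps(cpu_ms_per_frame=25, gpu_ms_per_frame=15))  # 40 FPS
print(fps(cpu_ms_per_frame=15, gpu_ms_per_frame=15))  # ~67 FPS

# GPU-bound scene: the same CPU upgrade changes nothing.
print(fps(cpu_ms_per_frame=25, gpu_ms_per_frame=40))  # 25 FPS
print(fps(cpu_ms_per_frame=15, gpu_ms_per_frame=40))  # still 25 FPS
```

That is why a CPU-heavy benchmark can make the HD 3000 machine look like it has "caught up" to the 320M, while a GPU-heavy one shows no such thing.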

The Cinebench 11 OpenGL benchmark has nothing to do with the CPU, as far as I can tell. I can get the same numbers when using the 3000HD in a 2011 MBA and when using the 3000HD in a 2011 Mac Mini with a quad core CPU. I consider it to be a pretty good comparison tool.
 
Guys, you need to see the matter from Apple's point of view if you want to guess why they chose Intel HD over nVidia (or whether they should care about FPS in some games):

In 2011 they delivered an MBA that was faster than the previous (2010) model in all the key points that matter for an ultrabook. The 2011 model is faster on desktop graphics (you don't need benchmarks for this, the difference is night and day) thanks to the much faster CPU, while maintaining the same battery life, size and weight. It performs day-to-day tasks much faster than the 2010 model. Job done, nothing else to see here. No one cares about raw graphics benchmarks for an ultra-portable computer. Or at least no one should care. If there are people who bought an MBA expecting to use it as a gaming rig, they are definitely on the wrong boat.

Mind, also, that the 2011 model is also faster in some of the latest games (e.g. Portal 2), since it seems that the C2D CPU is starting to be the bottleneck for the latest games (or some of them at least). So, in the long term, if someone (for some odd reason) had to choose an MBA even to play games, the 2011 model is still the better choice.
 
Guys, you need to see the matter from Apple's point of view if you want to guess why they chose Intel HD over nVidia (or whether they should care about FPS in some games):

In 2011 they delivered an MBA that was faster than the previous (2010) model in all the key points that matter for an ultrabook. The 2011 model is faster on desktop graphics (you don't need benchmarks for this, the difference is night and day) thanks to the much faster CPU, while maintaining the same battery life, size and weight. It performs day-to-day tasks much faster than the 2010 model. Job done, nothing else to see here. No one cares about raw graphics benchmarks for an ultra-portable computer. Or at least no one should care. If there are people who bought an MBA expecting to use it as a gaming rig, they are definitely on the wrong boat.

Mind, also, that the 2011 model is also faster in some of the latest games (e.g. Portal 2), since it seems that the C2D CPU is starting to be the bottleneck for the latest games (or some of them at least). So, in the long term, if someone (for some odd reason) had to choose an MBA even to play games, the 2011 model is still the better choice.
This is my view of the matter as well.
 
Guys, you need to see the matter from Apple's point of view if you want to guess why they chose Intel HD over nVidia (or whether they should care about FPS in some games):

Hmm, this discussion wasn't about Apple per se. It was about Intel. We know why Apple "chose" Intel over nVidia. Hint: they didn't. Intel forced themselves back into relevance by suing nVidia and refusing to license them to make chipsets for the Core iX line-up.

Litigate if you can't innovate.

Mind, also, that the 2011 model is also faster in some of the latest games (e.g. Portal 2),

In what mystical world is this?

[attached benchmark chart: 39947-4e3175c-intro.png]
 
Yeah, cuz that seems extremely trustworthy. Right, because the 2.3 i7 MBP achieves like 4x better than the 2.7 i7 MBP with the same GPU :rolleyes:;)

Probably a 15" or 17" MBP with a 2.3 using the dedicated GPU vs one using the internal GPU.

Still, I'd rather believe Ars than OSXdaily news. ;)
 
Yeah, cuz that seems extremely trustworthy. Right, because the 2.3 i7 MBP achieves like 4x better than the 2.7 i7 MBP with the same GPU :rolleyes:;)

The 2.7 i7 is a 13" with dual core and HD3000 IGP.

The 2.3 i7 is a 15" with discrete GPU.

But as KWRX points out, osxdaily isn't a very reliable source. I was just being facetious.
 
At the end of the day, most people are not gaming on Macs. Your average Apple consumer doesn't care what chip powers their notebook. Their eyes would grow wide at the part numbers mentioned.

From Apple's point of view, using Intel's IGP instead of slapping another discrete chip onto there helps conserve power and increase battery life. Most people prefer better battery life over increased performance.

Outside of gaming, are there many mainstream applications for graphics cards besides Folding@Home and Bitcoin mining? Unless you're sequencing genomes for a university or cracking passwords, graphics chips are rather limited in scope and use.

While I agree that Intel's chips are terrible, they have come a long way since the "Intel Extreme Graphics" days. The HD 3000 is quite good for older games, offering performance similar to a midrange graphics card from 2004 or 2005. That is pretty impressive, given Intel's record in graphics.

I don't see how this argument is relevant for many of you guys. Do you guys play a lot of video games? If so, why not get a MacBook Pro 15-inch and call it a day? It's not as portable, but we haven't reached that point where high-end discrete cards can be stuffed into a MacBook Air chassis without overheating.
 
At the end of the day, most people are not gaming on Macs. Your average Apple consumer doesn't care what chip powers their notebook. Their eyes would grow wide at the part numbers mentioned.

Great, so we could get good GPUs and they wouldn't know or care! I really don't see how this is even an argument since we're not discussing Apple. :rolleyes:

We're discussing Intel's lack of graphics competence.
 
Great, so we could get good GPUs and they wouldn't know or care! I really don't see how this is even an argument since we're not discussing Apple. :rolleyes:

We're discussing Intel's lack of graphics competence.

Intel's integrated graphics aren't great for gaming, sure.

But from Apple's point of view, it's clear that Intel's graphics are "good enough" to render the Mac OS desktop and play back 1080p Flash videos. And that is what most consumers care most about. The performance in gaming is moot, since most people aren't gaming on MacBook Airs.
 
We're discussing Intel's lack of graphics competence.
I think what you fail to comprehend is what the Intel GPU is supposed to be.
You don't understand anything about how GPUs are built. I assume you read some Anandtech articles but cannot put two and two together.

If Nvidia or AMD build a fast GPU, they just put lots and lots of shaders on the die, accompanied by a 128-bit memory interface for any GPU that is really worth mentioning.
Now, did Intel want to do that? Maybe crank up the EUs to 32 and add a third 64-bit memory channel to free up more bandwidth for the EUs? The GPU already needs more space than two cores, so why should it take even more?
That GPU goes on every freakin die; only an idiot would put a huge GPU there just so the 5% of users who actually need that much speed are happy. A big GPU would inevitably have more leakage and need more power, and for what?
Those who want more speed should get a dedicated GPU, and the fools who think an ultraslim notebook is for gaming, well, let them be fools.
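
For a rough sense of what an extra 64-bit channel would buy, peak DRAM bandwidth is just channels x bus width x transfer rate. A back-of-the-envelope sketch (DDR3-1333 is assumed purely for illustration):

```python
# Peak theoretical DRAM bandwidth = channels x bus width (bytes) x transfer rate.
# DDR3-1333 (1333 MT/s) is assumed here purely for illustration.
def peak_bandwidth_gb_s(channels, bus_width_bits=64, transfers_per_s=1333e6):
    return channels * (bus_width_bits / 8) * transfers_per_s / 1e9

print(peak_bandwidth_gb_s(2))  # dual channel:   ~21.3 GB/s, shared by the CPU cores and the IGP
print(peak_bandwidth_gb_s(3))  # triple channel: ~32.0 GB/s, at the cost of extra pins, package and power
```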

Intel did the only sensible thing: create as power-efficient a GPU as possible and make it just fast enough that it lacks no speed for the intended purpose.
They succeeded in everything. A lot of fixed-function units may not be perfect for every purpose, but it works perfectly in this use case. The GPU is more than fast enough for any 2D stuff and just fast enough that you can (if you really have to) play most games at low settings.
It handles everything concerning video encode/decode as well as or better than AMD/Nvidia, and it needs very little power for the whole job.

It is simply ridiculous to say they don't know what they are doing. They know perfectly well; they simply tried to build the perfect IGP and not some gaming GPU.

Now for comparisons to Nvidia. You claim Intel sucks, probably because that is tradition.
But this GPU is much faster than the 9400M, and nobody complained about that one. Yet nobody, aside from some fool who thinks he has a gaming notebook, does anything different graphics-wise on a notebook than back then with the 9400M.
Before the 9400M, most IGPs were regarded as utterly useless.
The 320M is only slightly better than the HD 3000, and in almost all cases where the HD 3000 fails to offer a decent 30+ frame rate, the 320M usually fails too. Intel's GPU is clearly optimized for maximum performance at low settings and loses more ground as detail increases, which is a perfectly reasonable target: any optimization toward the high end would be wasted, since most games won't run fluidly at those settings anyway.

And did you look at some OS X benchmarks of the 320M and the Intel HD 3000? Intel's OS X OpenGL driver seems to be much better than its Windows version. The 320M does better in Windows, but in OS X it is the other way around. Not that it matters with the useless junk of game ports that exist for OS X.
A great deal comes from drivers; the GPU is good as it is.

It also doesn't matter that the 320M is older and Nvidia might have some 520M-like version now, which would still be a GPU more or less useless for gaming, just that bit faster than a 320M. Look at the 520M, which might be in the same power budget and could make a 320M successor: it is hardly worth it, already at a 17W TDP with 48 shaders. They just cannot compete at 40nm with Intel's 32nm, power-efficiency-wise.
http://www.notebookcheck.com/Welche-Spiele-laufen-auf-Notebook-Grafikkarten-fluessig.13827.0.html

Again, to sum it up:
an integrated GPU on the CPU die is supposed to

be fast enough for any 2D stuff, including multiple monitors :check:
accelerate video playback to save power :check:
give some limited gaming experience :check:

Everything beyond that is always a trade-off. Do I want that baggage?
The baggage is not just power consumption but also plain die area. That costs money, and if 95% of people don't need a gaming GPU, it just makes the die more expensive for nothing. The 5% who need it should get a dedicated GPU, as always; that is the point of an IGP.

It is no lack of competence on Intel's side; it is the failure of some people to get into their heads what Intel's objectives are.
 
The performance difference is present in Windows. In OS X it's a wash between the 2010/2011 graphics, and as many have said, the 2011 is an all-around better OS X machine, so Apple doesn't see a benefit in swapping the Intel GPU for something that pulls more power.

I have to say, as an IGP on a low-voltage chip that is powerful and has long battery life, the HD 3000 is pretty remarkable... OS X games run fine and the desktop feels like Pro performance. Intel could certainly do better games-wise, but it looks like they are really allocating resources to improving, based on how much CES exposure they gave the HD 4000. It's a step in the right direction to have a decent default GPU component alongside the CPU; bigger laptops really benefit from being able to switch off a high-performance card when it's not needed.

The interesting thing about this is the idea that the Thunderbolt Display could in future have a decent card in it, which would allow an Air (or 13" Pro) to plug into pro graphics on a large screen. I forget the name of the Sony laptop that is built on this concept... but an Air plus a large screen would be my perfect desktop replacement.

----------

I think what you fail to comprehend is what the Intel GPU is supposed to be.... *words*

Said it much better than I did, nice.
 
I'm really impressed with the Intel 3000 chip. It has served me very well having an extremely small computer that is capable of playing brand new games!
 
From Apple this, From Apple that. Not the topic.

From Apple's point of view, they have no choice since Intel basically litigated nVidia away. I know this, we all do. That's not what we're discussing.

Would you rather have discrete graphics included with the MacBook Air, then? For example, the Nvidia GT 520M is twice as fast as the HD 3000, yet has a TDP of 17 watts. There is no place on the MBA's motherboard for such a chip, and if Apple decided to include one, the MacBook Air's chassis would need to be thicker to accommodate a better cooling system and the chip.

Furthermore, the HD 3000 does what it needs to do - render the desktop and play back video. These two things the HD 3000 does very competently.

Yes, it would be wonderful to have better graphics around, but the HD 3000 is good enough for most consumers. Yes, it is also a pity that Intel shut down Nvidia's integrated graphics business, but what can be done about that?

What was this conversation about anyways? That the Intel HD 3000 is too slow for games? That everyone knows. MacBook Airs were never meant to game, and Intel has created the perfect graphics solution for non-gaming activities within the thermal envelope that Apple has demanded for the MBA.

Oh, and before you claim that the 320M is better than the HD 3000, the performance difference is mostly minimal, at least on the OS X side.
 
Would you rather have discrete graphics included with the MacBook Air, then? For example, the Nvidia GT 520M is twice as fast as the HD 3000, yet has a TDP of 17 watts. There is no place on the MBA's motherboard for such a chip, and if Apple decided to include one, the MacBook Air's chassis would need to be thicker to accommodate a better cooling system and the chip.

Furthermore, the HD 3000 does what it needs to do - render the desktop and play back video. These two things the HD 3000 does very competently.

Yes, it would be wonderful to have better graphics around, but the HD 3000 is good enough for most consumers. Yes, it is also a pity that Intel shut down Nvidia's integrated graphics business, but what can be done about that?

What was this conversation about anyways? That the Intel HD 3000 is too slow for games? That everyone knows. MacBook Airs were never meant to game, and Intel has created the perfect graphics solution for non-gaming activities within the thermal envelope that Apple has demanded for the MBA.

Oh, and before you claim that the 320M is better than the HD 3000, the performance difference is mostly minimal, at least on the OS X side.

I see this stated time and time again, but I have racked up about 60+ hours of game time in Skyrim (settings at high/medium), about 20+ hours in Rage (that game runs REALLY WELL!), some time in Deus Ex: Human Revolution, some Assassin's Creed: Brotherhood... I've caused some destruction in Battlefield: Bad Company 2... I don't know where people are getting the impression that the MBA can't game... it actually does a fantastic job, and it continues to impress me! Sure, there are some poorly coded console ports that don't run well, but those only run well on the highest-end systems anyway. The games that were made for PC are incredibly playable!

Before criticizing its ability to play games, go check out YouTube.
 
I see this stated time and time again, but I have racked up about 60+ hours of game time in Skyrim (settings at high/medium), about 20+ hours in Rage (that game runs REALLY WELL!)...

Yeah, it's fine for people like us who don't mind low-to-medium graphics... but it's generally people used to gaming desktops who are the yardstick in these conversations. I mean, Skyrim will run near 60fps on low no problem (still playable), or maybe 25fps average on medium... but a PC gamer does not consider that performance acceptable ;)

P.S. If you are getting high fps above medium in Skyrim, I would love to know your cfg file, OS and driver combo; you have some kind of wizardry going on?
 
I think what you fail to comprehend is what the Intel GPU is supposed to be.
You don't understand anything about how GPUs are built. I assume you read some Anandtech articles but cannot put two and two together.

You know what they say about assumptions. :rolleyes:

Let's just say I've been gaming on PC since the days of ISA cards and FPUs. The rest of your post repeats the same crap I've constantly shown to be wrong in this thread: AMD and nVidia can ship no-compromise IGPs. Intel can't. That's a competence problem, not a "compromise is required" problem.
 
Yeah, it's fine for people like us who don't mind low-to-medium graphics... but it's generally people used to gaming desktops who are the yardstick in these conversations. I mean, Skyrim will run near 60fps on low no problem (still playable), or maybe 25fps average on medium... but a PC gamer does not consider that performance acceptable ;)

P.S. If you are getting high fps above medium in Skyrim, I would love to know your cfg file, OS and driver combo; you have some kind of wizardry going on?

I average about 25-30 fps (frame-limited to 30) in Skyrim, usually closer to 30. I am using TESVAL, the latest Intel drivers (October, I think) and a frame limiter. I added some of the memory tweaks to the cfg and disabled mouse smoothing. I get really nice results. One of these days I will shoot some footage to prove it :).

I have a pretty beefy desktop at home that runs everything at 60fps plus, but on the go, with a laptop the size of an iPad, I am perfectly happy running a little lower. Plus, the screen is small enough and clear enough that disabling AA not only gives a major performance boost... it also looks perfectly fine. If you were to crank the graphics all the way down to the lowest setting I'm sure you could get 60fps... but why do that? It looks great and performs great!
 
5%?
A +10 FPS increase (from 20-25 FPS with the HD 3000) is a lot more than just 5%, even when comparing the C2D models with the i5/i7. Who knows how the 320M would compare when paired with a better CPU.

And I had the chance to play Skyrim on both the 2010 and 2011 models (Boot Camp). It is quite a difference.

Eh, I HIGHLY doubt the 320M is so freakin' powerful that it was being bottlenecked by a Core 2 Duo. The virtual reach-arounds some people give the 320M are pretty comical, like it's the holy grail of IGPs.
 
You know what they say about assumptions. :rolleyes:

Let's just say I've been gaming on PC since the days of ISA cards and FPUs. The rest of your post repeats the same crap I've constantly shown to be wrong in this thread: AMD and nVidia can ship no-compromise IGPs. Intel can't. That's a competence problem, not a "compromise is required" problem.
I read most of it, and you haven't really shown anything to be wrong.
You are aware that an MCP89, aka 320M, would require about 50% of the Sandy Bridge quad-core die space just for the GPU.
Now, Intel's HD 3000 takes less than half that space for about the same performance. Sure, it is 32 vs 40nm, but that step would imply only some 62% shrink in area, and if you cut away the unnecessary stuff in an on-die GPU, they would probably get about equal speed from equal die space.
At the end of the day Intel still holds a 32/22 vs 40/28 process lead, and that by itself makes the GPU better.
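
The shrink figure is simple geometry: under ideal scaling, area goes with the square of the feature-size ratio, so a roughly 100 mm² GPU block at 40nm would land around 64 mm² at 32nm. A quick sketch of that arithmetic (ideal scaling assumed; real layouts scale worse):

```python
# Ideal area scaling between process nodes: area scales with the square of the
# linear feature-size ratio. Real designs scale worse than this ideal.
def scaled_area_mm2(area_mm2, node_from_nm, node_to_nm):
    return area_mm2 * (node_to_nm / node_from_nm) ** 2

print((32 / 40) ** 2)                # 0.64, close to the ~62% figure quoted above
print(scaled_area_mm2(100, 40, 32))  # ~64 mm^2 for a 100 mm^2 GPU block moved from 40nm to 32nm
```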

Explain to us: where is the competence problem here?
All I see is that they didn't want to devote as much die space to the GPU as you'd wish, unlike AMD Fusion, which has a much bigger GPU relative to the rest of the CPU.
Is it incompetence not to grow the entire chip by 30% and dedicate all the additional space to more EUs and such? Or might it just be a design decision made in full knowledge of all the implications?
 
You are aware that a MCP89 aka 320M would require about 50% of the Sandy Bridge Quad Core Die space just for the GPU.

You are aware that the 320M is more than just a GPU, right?

I'll leave it at that. The rest of your post is gibberish in relation to this quite major detail.
 
I average about 25-30 fps (frame-limited to 30) in Skyrim, usually closer to 30. I am using TESVAL, the latest Intel drivers (October, I think) and a frame limiter. I added some of the memory tweaks to the cfg and disabled mouse smoothing. I get really nice results. One of these days I will shoot some footage to prove it :)

Hey man, can I trouble you to post a detailed reply on how to go through these optimizations? I would love to implement them myself, and I'm sure other forum members would too!

Thank you in advance!
 
You are aware that the 320M is more than just a GPU, right?
I am very much aware of that. I was only referring to the GPU part of the 320M, which would require close to 100 mm², which in turn is about half a Sandy Bridge die at 216 mm².
The entire MCP89 is much bigger, at about 180 mm², almost as big as an entire Sandy Bridge quad-core die.
I was only talking about the GPU part of the die, not including all the surrounding southbridge stuff as well as northbridge stuff like PCIe, the dual-channel IMC and so forth.
I'll leave it at that. The rest of your post is gibberish in relation to this quite major detail.
In that gibberish are some valid questions, and you seem to overlook some major details if you thought even for a minute that I was talking about the entire MCP89 die. Look at a 13" MBP logic board from 2010 and you should see how ridiculous that assumption is. That die is huge compared to the tiny 82 mm² Penryn dual-core next to it, and nowhere near just 100 mm². It looks to be about as big as a quad-core Sandy Bridge, and it is not far off.
[attached image: 13_macbookpro_logicboard_ifixit.jpg (2010 13" MacBook Pro logic board, via iFixit)]
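
For anyone who wants to sanity-check those ratios, here is the arithmetic with the approximate die areas quoted in this thread (rough estimates, not official figures):

```python
# Rough ratio check using the approximate die areas quoted above (mm^2).
sandy_bridge_quad = 216  # full Sandy Bridge quad-core die (CPU cores + HD 3000)
mcp89_full        = 180  # entire MCP89 / 320M chipset die
mcp89_gpu_part    = 100  # estimated GPU portion of the MCP89, per the post above
penryn_dual       = 82   # Penryn dual-core CPU die

print(mcp89_gpu_part / sandy_bridge_quad)  # ~0.46: "about half a Sandy Bridge die"
print(mcp89_full / sandy_bridge_quad)      # ~0.83: "almost as big as a quad-core die"
print(mcp89_full / penryn_dual)            # ~2.2:  "huge compared to the Penryn next to it"
```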
 