It certainly won't be the exact same one. MBAs have always used low-voltage / ultra-low-voltage CPUs, and I can't see that changing, considering the much smaller batteries in them.

True, but even if we get the 1.6GHz Sandy Bridge Core i5, it should be very impressive in CPU-intensive tasks, if the new MacBook Pro is any indication. The base 2.3GHz Core i5 in the 13" is nearly as fast as the previous 2.8GHz Core i7 used in the old 17" Pro.

The Intel HD 3000 is sufficient for Angry Birds, which is about as intense as many Air buyers get when it comes to gaming. Most of us would be happy with the massive CPU boost.
 
True, the iCores will be a big upgrade!

By the way, there is no Sandy Bridge ULV i5 running at 1.6GHz; that one's an i7! So that will make the upgrade path to the Ultimate more interesting than just a 200MHz boost.

The ULV i5 is 1.4GHz; the 1.5GHz and 1.6GHz ULV parts are both i7s.
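
For reference, here's the 17W ULV lineup as I understand it; the model numbers are from memory, so treat them as assumptions rather than gospel:

```python
# Sketch of the 17W ULV Sandy Bridge dual-core lineup as discussed above.
# The model numbers are from memory and should be treated as assumptions;
# the base clocks (1.4 / 1.5 / 1.6 GHz) are the ones mentioned in the thread.
ulv_sandy_bridge = {
    "Core i5-2537M": 1.4,  # the only ULV i5
    "Core i7-2617M": 1.5,
    "Core i7-2657M": 1.6,
}

for model, base_ghz in ulv_sandy_bridge.items():
    print(f"{model}: {base_ghz} GHz base, 17 W TDP")
```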
 
I want an 11.6" Air, but I don't plan on doing any graphically intense tasks at all; nothing more intense than 720p video streaming/playback. Considering the 1.4 GHz clock speed in the low-end Air, I feel like a Sandy Bridge update, with a processor that can boost to over 2 GHz, will provide a considerable improvement for most tasks without a noticeable sacrifice in graphics (for me, at least).

I don't see as much of a benefit for the 13", as its clock rate is already either 1.86 or 2.13 GHz, and I don't see any real gains happening there.

This is my thinking, under the assumption that the 17W TDP processors are what go into the Airs next: http://en.wikipedia.org/wiki/Sandy_Bridge#Mobile_processors
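
Some very rough back-of-the-envelope math on that, treating single-thread speed as simply proportional to clock and ignoring Sandy Bridge's per-clock improvements (which would only widen the gap); the ~2.3 GHz turbo figure is my assumption:

```python
# Back-of-the-envelope single-thread comparison, assuming performance scales
# roughly with clock speed. Sandy Bridge's per-clock (IPC) gains are ignored,
# so the real-world difference should be even larger.
c2d_11in_ghz = 1.4          # base clock of the low-end 11" Air (Core 2 Duo)
c2d_13in_ghz = 1.86         # base clock of the low-end 13" Air
sb_turbo_ghz_assumed = 2.3  # assumed single-core turbo for a ULV Sandy Bridge part

print(f'11" gain from clock alone: ~{sb_turbo_ghz_assumed / c2d_11in_ghz - 1:.0%}')
print(f'13" gain from clock alone: ~{sb_turbo_ghz_assumed / c2d_13in_ghz - 1:.0%}')
# The 13" gains noticeably less, which matches the point above.
```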
 
I think if they did, they would have put them in the 2011 13" MacBook Pros.

Exactly. That's why I think the MacBook Air will get the Sandy Bridge chip this fall. It is never going to be the first to get a new chip, and Ivy Bridge won't be out until next year. At some point the Core 2 Duo (particularly the 1.4GHz version) will struggle to keep up with the netbook chips that are coming out, and the supply will continue to dwindle, so I think the ULV i5/i7 chips will make their way to the Air by the end of the year.
 
Actually, now that I think about it, it's possible the next MBA is going to have a chip made by Apple. It's possible Apple is building support for it into Lion.
 
ULV SB processors are dropping in June, along with the next MBA update. The refresh pattern so far:

10/08 -> 6/09

10/10 -> 6/11

No major redesign, but an SB processor bump along with Thunderbolt is almost guaranteed.
 
Question: does Thunderbolt even fit in a 2010 MBA? ;) I mean, this machine is just a tiny bit bigger than a USB port.

Does the current MBA have a DisplayPort? Thunderbolt would be great for the Air.
 
The Intel HD 3000 is sufficient for Angry Birds, which is about as intense as many Air buyers get when it comes to gaming. Most of us would be happy with the massive CPU boost.

What? I am playing Mass Effect 2 and Black Ops fine on my 11" Air. You should do a little research before making that statement.
 
Intel HD 3000 vs GeForce 320M...

OK, folks, just a simple reality check. First of all, *any* chip combining CPU and graphics will have severe memory bottlenecks, so it's hard to see it ever competing with a discrete graphics card. But let's look at what you get:

Intel HD 3000: 12 execution units, operating at:

ULV: 350 MHz
LV: 500 MHz
Mainstream: 650 MHz
High end desktop: 1100 MHz

Most websites I've seen that have looked at application performance say the high-end desktop variant of the HD 3000 is roughly on par with the GeForce 310M, with the ULV version about a third of the speed. Now here's where it gets tricky:
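
That one-third figure falls straight out of the clocks, since every HD 3000 variant has the same 12 execution units. Here's a crude sketch (base clocks only, no graphics turbo, treating throughput as proportional to EUs x clock, which is a simplification):

```python
# Crude relative-throughput estimate: every HD 3000 variant has the same
# 12 EUs, so at base clock throughput scales with clock speed alone.
# (Graphics turbo is ignored here.)
eus = 12
base_clock_mhz = {"ULV": 350, "LV": 500, "Mainstream": 650, "High-end desktop": 1100}

desktop = eus * base_clock_mhz["High-end desktop"]
for variant, mhz in base_clock_mhz.items():
    print(f"{variant}: {eus * mhz / desktop:.2f}x the desktop part")
# ULV comes out at ~0.32x, i.e. roughly the "one third the speed" figure above.
```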

According to an AppleInsider thread, the "320M" integrated chipset was supposedly a version of the GT216 core created just for Apple, which also did not have its own memory. Quoting: the 320M has 48 cores and is not to be confused with the "GeForce GT 320M". Apple rated it as 80% faster than the 9400M, but at about half the speed of the (genuine) GeForce GT 330M.

While it's hard to say just how fast the 320M really is, I've not seen a thread anywhere that says it's *slower* than the HD 3000. It generally ranges from "on par" to around 3x faster...

But honestly, if you care about GPU performance, go with a discrete part. GPUs are by definition bandwidth intensive, and don't do well being hobbled in a CPU socket...
 
Been 365 days, bro, but A+ information nonetheless.
 
Yeah, that boat left a year ago. Both the HD 3000 and the 320M are plenty good enough for day-to-day computing and watching HD videos. Both can do light gaming, and neither is good enough for any serious gaming. The 320M will just suck a little less in that scenario, but still suck nonetheless.
 
This is kinda irrelevant because any GPU needs a CPU with enough horsepower to throw geometry at it.

The Core 2 series in the older machines is long overdue for retirement.

If you want Sandy Bridge, you don't get the 320M.

PS: your HD 3000 clock speeds are off, and they don't include "turbo" speeds. The reality is that in general use it's as fast as a 320M in most situations, and it comes paired with a massively faster CPU. It's a no-brainer.
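
To put a number on that turbo point: the graphics turbo clock below is an assumption for illustration, not a spec I'm quoting, but it shows why base-clock comparisons undersell the ULV part.

```python
# Redo the earlier base-clock comparison with an assumed graphics turbo clock
# for the ULV HD 3000. The 1000 MHz figure is an assumption for illustration,
# not a quoted spec.
ulv_base_mhz = 350
ulv_turbo_mhz_assumed = 1000
desktop_base_mhz = 1100

print(f"ULV at base clock:    {ulv_base_mhz / desktop_base_mhz:.2f}x the desktop HD 3000")
print(f"ULV at assumed turbo: {ulv_turbo_mhz_assumed / desktop_base_mhz:.2f}x the desktop HD 3000")
# Under load the ULV part sits much closer to the bigger variants than the
# base clocks suggest, which is the point about "turbo" speeds above.
```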
 