At the heart of Sandy Bridge is an essentially new processor microarchitecture, the most sweeping architectural transition from Intel since the introduction of the star-crossed Pentium 4.

*sigh*
In fact, it is NOT a new architecture.
It's the same marketing jamboowamboo as when the Core 2 CPUs were introduced, and those weren't a new architecture either.

Just like Merom was a refinement of its older Pentium M-derived sibling (Yonah), Sandy Bridge is a refinement of its Nehalem predecessor. The Pentium 4 WAS an entirely new architecture in its time, but it was abandoned.

The only noteworthy thing about Sandy Bridge's architecture is a new set of instructions called AVX. AVX won't be usable for a while, since it needs the not-yet-released Service Pack 1 for Windows 7 and an as-yet-unannounced Mac OS X release.

However, be warned: according to current Intel plans, AVX will be very short-lived. A replacement based on Larrabee's architecture has already been announced for a coming CPU generation, so it is highly questionable whether AVX will ever see the same level of adoption as SSE.
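For anyone curious what AVX actually buys you: it widens the SIMD registers from SSE's 128 bits to 256 bits, so one instruction operates on eight floats instead of four. A minimal sketch using the standard intrinsics (the function name and the multiple-of-8 length assumption are mine):

[CODE]
#include <immintrin.h>  /* AVX intrinsics; needs an AVX-aware compiler, e.g. gcc -mavx */

/* Add two float arrays eight elements at a time in 256-bit YMM registers.
   For brevity, n is assumed to be a multiple of 8. */
void add_arrays_avx(const float *a, const float *b, float *out, int n)
{
    for (int i = 0; i < n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);                /* load 8 floats */
        __m256 vb = _mm256_loadu_ps(b + i);
        _mm256_storeu_ps(out + i, _mm256_add_ps(va, vb));  /* add and store all 8 at once */
    }
}
[/CODE]

The OS dependency exists because the kernel has to save and restore the wider YMM registers on every context switch, which is why Windows 7 needs SP1 before AVX code can run.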
 
Great, so buy SB now but expect 1% of software to be written to take advantage of it in five years' time... greaaaaat!!!

Many would say the big difference today is OpenCL: AVX, GPU, and CPU all addressed through the same framework. There's still a lot of software to be written to take advantage of this, but the big change is that people can no longer wait for their single-threaded software to get quicker with clock speed. We've already hit that ceiling, and the dies have spread into multicore instead.
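That's the OpenCL pitch in a nutshell: CPUs and GPUs are just "devices" behind one API. A rough sketch of enumerating them (assumes an OpenCL SDK is installed; error checking omitted for brevity):

[CODE]
#include <stdio.h>
#include <CL/cl.h>   /* on Mac OS X the header is <OpenCL/opencl.h> */

/* List every OpenCL device on the first platform --
   CPUs and GPUs show up side by side in the same API. */
int main(void)
{
    cl_platform_id platform;
    cl_device_id devices[8];
    cl_uint ndev;
    char name[256];

    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 8, devices, &ndev);

    for (cl_uint i = 0; i < ndev; i++) {
        cl_device_type type;
        clGetDeviceInfo(devices[i], CL_DEVICE_NAME, sizeof(name), name, NULL);
        clGetDeviceInfo(devices[i], CL_DEVICE_TYPE, sizeof(type), &type, NULL);
        printf("%s (%s)\n", name,
               (type & CL_DEVICE_TYPE_GPU) ? "GPU" : "CPU");
    }
    return 0;
}
[/CODE]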

Big things are afoot in the computing industry.
 
Too bad we have to wait until May for new MacBooks. I am fairly sure Apple will follow the same release schedule as last year: first it is time for iPad 2, then MacBooks. May it is.

Uh, why May? Apple has never waited longer than 10 months to update the MBPs, and the last update was in April (not counting the small BTO option added a few months ago), so that means February at the latest. (Hopefully.)
 
Cool. Now Apple just needs to come out with my dream laptop: a 17" MacBook Air. I want something thin, light, and big.
 
*sigh*
In fact, it is NOT a new architecture.
It's the same marketing jamboowamboo as when the Core 2 CPUs were introduced, and those weren't a new architecture either.

Yes yes, SB is a simple and minor augmentation of the previous architecture. It's been a slow and profitable development cycle for Intel, moving from the FSB architecture to what they have now. Talk about milking it to the max.
 
Uh, why May? Apple has never waited longer than 10 months to update the MBPs, and the last update was in April (not counting the small BTO option added a few months ago), so that means February at the latest. (Hopefully.)

I think they will be updated when they are ready. iPhones and iPods seem to be the only devices released on a regular schedule.
 
Yes yes, SB is a simple and minor augmentation of the previous architecture. It's been a slow and profitable development cycle for Intel, moving from the FSB architecture to what they have now. Talk about milking it to the max.

Further to my ARM comments, this approach brings baggage, and it is that baggage that will stop Intel in its tracks at some point.

@Speedy2 - Larrabee will see the light of day as Knights Corner/Knights Ferry. We're a long way from seeing this hit mainstream compute, and even longer for graphics. 2013 minimum is my hunch.
 
I think they will be updated when they are ready. iPhones and iPods seem to be the only devices released on a regular schedule.

True :( I, like many others, am tired of waiting for the new MBP lol. Perhaps Apple will update the MBPs without an event?
 
The Intel/Nvidia dispute previously hindering Apple has been resolved, meaning that the next revision will likely feature another Nvidia-based chipset/discrete GPU combo.

Nvidia has already announced they are exiting the chipset business. So unless they are re-arming with AMD/ATI, it's not going to happen.

Another 2-steps-forward, 1-step-back chip.
 
Wow. I was one of those waiting for Arrandale in the MBP. Does everyone remember that wait? That was painful.

I'm wondering what everyone will be doing on the MBP forum now, waiting for the new SB architecture to come out.... I'm still too traumatized to go back in there and find out!
 
You are of course right that it is disappointing that they can't beat 2009 tech; however, the important question is how much progress they have made in the last year. Intel claimed it wanted to take graphics seriously, so even if they haven't caught up with Nvidia's 2009 parts, perhaps this shows promise for the future? I suspect the improvements in GPU tech are always big at first and incremental later on.

Intel has been promising good graphics since the i740. If you're old enough to remember those, you know by now that any promise Intel makes about graphics is pure crap.

I'm wondering what everyone will be doing on the MBP forum now, waiting for the new SB architecture to come out.... I'm still too traumatized to go back in there and find out!

And then all the 13" would-be buyers will be crying themselves to sleep at night when they get a big helping of Intel graphics slower than the 320M.

A nice new fast CPU with slower graphics than the old model. How pathetic. If only Intel would let someone who knows a thing or two about graphics design the thing, instead of always shipping their sub-par offerings.
 
The overall performance is what matters. You cannot pair the 320M with Sandy Bridge or other Core iX CPUs, so we don't know how fast the 320M would be with a better CPU.

If you compare GPUs, there is no 'overall' performance. You can't just stick in a faster CPU and say 'this GPU is faster.'

So basically, the 320M GPU is about as fast as the Intel graphics, which means that in theory you at least won't be downgrading when Apple switches to Sandy Bridge.

The HUGE issue, however, is that Intel has no proper OpenCL driver, and it's highly questionable whether they'll be able to get a decent one out within a year. All the performance tests were done in Windows using DirectX.
 
Yes yes, SB is a simple and minor augmentation of the previous architecture. It's been a slow and profitable development cycle for Intel, moving from the FSB architecture to what they have now. Talk about milking it to the max.

I think they have been trying. Intel just can't seem to get dense execution units right; that is why the Atom and their GPU efforts have all been failures. Both ARM chips and GPUs are mostly execution units with very little instruction-scheduling logic. Chips that need advanced instruction scheduling to execute a single stream of instructions as fast as possible seem to be the only thing Intel is good at. Hopefully this is changing, or it will probably kill Intel.

The future is parallel processing and making instruction scheduling as irrelevant as possible. The only reason that isn't a reality yet is that our software isn't designed for that paradigm, but work that Apple is doing may make it one. This has been predicted for the last couple of decades, so it isn't as if Intel should be unprepared; it is the only obvious path after hitting the limits of current technology. Honestly, I'm surprised that Intel is having such a hard time with this transition. I guess you can't always just throw money at a problem.
 
Further to my ARM comments, this approach brings baggage, and it is that baggage that will stop Intel in its tracks at some point.

Indeed; it will certainly be interesting to see how this plays out. Once ARM starts ramping up the cores, Intel may have a huge mobile market pulled out from under its feet. I see Intel has made some moves into ULV to mitigate ARM's segment, but it may very well turn out to have been too little, too late. We'll see.
You make some good points.
 
*sigh*
In fact, it is NOT a new architecture.
It's the same marketing jamboowamboo as when the Core 2 CPUs were introduced, and those weren't a new architecture either.

Just like Merom was a refinement of its older Pentium M-derived sibling (Yonah), Sandy Bridge is a refinement of its Nehalem predecessor. The Pentium 4 WAS an entirely new architecture in its time, but it was abandoned.

The only noteworthy thing about Sandy Bridge's architecture is a new set of instructions called AVX. AVX won't be usable for a while, since it needs the not-yet-released Service Pack 1 for Windows 7 and an as-yet-unannounced Mac OS X release.

However, be warned: according to current Intel plans, AVX will be very short-lived. A replacement based on Larrabee's architecture has already been announced for a coming CPU generation, so it is highly questionable whether AVX will ever see the same level of adoption as SSE.

It doesn't matter. It is just there for (hopefully) decent GPU and OpenCL support. It's not as if you will be writing assembler for this, or that it will noticeably speed up an ordinary program, so the instructions themselves don't really matter. The goals are different, so it doesn't need widespread adoption: SSE was designed to speed up ordinary programs, so it had to be widespread, whereas AVX just needs to be adopted by the driver writers at Intel and Apple. That shouldn't be too much to ask.
 
Sandy Bridge does not support GPU OpenCL

It's in the GeForce 9400M and the GeForce 320M, and both are IGPs. Though it is Intel we're talking about here, so yeah, I share your skepticism there for sure.

Intel's Sandy Bridge does not support OpenCL in its integrated GPU. Its GPU is not programmable like the GPUs in the nVidia GeForce 9400M and GeForce 320M.

Intel still hasn't been able to develop programmable GPUs, which are required to run OpenCL.

Instead, and very disappointingly, the Sandy Bridge processors will use drivers that allow the CPU to run OpenCL.

OpenCL is supposed to run on BOTH the CPU and GPU, allowing both to work in parallel. This allows significant acceleration of certain tasks, such as video processing, since it can utilize the many processing units of modern GPUs.

Sandy Bridge processors will only have the CPU running OpenCL tasks; the Sandy Bridge GPU will sit idle.

This means they are going to be MUCH SLOWER at running OpenCL-dependent applications than a combination of an Intel processor PLUS a capable GPU, such as the nVidia GeForce 320M.
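To make the consequence concrete: an OpenCL application typically asks for a GPU device first and falls back to the CPU. On Sandy Bridge the GPU request simply fails, so everything lands on the CPU device. A hypothetical sketch (pick_device is my name; error handling trimmed):

[CODE]
#include <CL/cl.h>   /* <OpenCL/opencl.h> on Mac OS X */

/* Prefer a GPU device, falling back to the CPU if the platform has none --
   which is what happens on Sandy Bridge's integrated graphics. */
cl_device_id pick_device(cl_platform_id platform)
{
    cl_device_id dev;
    if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &dev, NULL) != CL_SUCCESS)
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &dev, NULL);  /* CPU-only fallback */
    return dev;
}
[/CODE]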

This is one reason Apple decided to stick with the older Core 2 Duo processors plus the GeForce 320M in the MacBook rather than use an i3 CPU with integrated graphics like other PC makers. The Nvidia graphics, plus the OpenCL capability, just blew away the newer processors with integrated graphics, such as the i3 line.

I am sorely disappointed in Intel. They are badly lagging in GPU technology.

Sure, some sites are happy that OpenCL at least runs on the Sandy Bridge processors. But they miss the point: if Intel's GPU were programmable, the Sandy Bridge processors would be so much faster than they are now.

We're still waiting for an Intel processor with an integrated programmable GPU that can run OpenCL on the GPU.

Disappointing.
 
Who cares?

These chips are intended for cheap netbooks and laptops, not Apple's MacBook Pros. Who cares if they deliver better performance than Intel's previous iterations? Intel's GPUs are rubbish (they always have been!) and they have no place inside Apple's computers.

I used to own a 2nd-gen BlackBook, which I got rid of almost immediately after I compared its graphics capabilities to my 2001 GeForce graphics card.

Some people treat this like good news, but they really need to get their facts right before they start posting here... :mad:
 
Here's a question for those in the know...

Does this Sandy Bridge technology lift the current RAM limitations on the mobile platforms? Could we see MacBook Pros capable of 16GB of RAM? (Provided that RAM manufacturers deliver, of course.)
 
The HUGE issue, however, is that Intel has no proper OpenCL driver, and it's highly questionable whether they'll be able to get a decent one out within a year.

Is that true, though? http://www.techeye.net/software/intel-announces-opencl-sdk. Admittedly, I think they're pushing toward an x86-based Larrabee GPU future, so I take your point. But Intel is very publicly backing OpenCL right now (mainly as a counter to NVIDIA's CUDA).

The future is parallel processing. The only reason that isn't a reality yet is that our software isn't designed for that paradigm, but work that Apple is doing may make it one. This has been predicted for the last couple of decades, so it isn't as if Intel should be unprepared; it is the only obvious path after hitting the limits of current technology. Honestly, I'm surprised that Intel is having such a hard time with this transition. I guess you can't always just throw money at a problem.

They, like many others, have thrown a lot at this. They've been active with MPI, OpenMP, and now OpenCL support, not to mention Intel's Ct and early TBB approaches.

The problem here is that the clock-speed free ride the software industry has been enjoying is over. No one is that surprised, but no one really has it solved right now. Interestingly, the gaming and scientific/research sectors are the most aware of this; hence the drive from the research community for OpenCL/CUDA and whatever replaces MPI.
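To be fair, the entry bar on the CPU side has gotten pretty low. Spreading a loop across all cores with OpenMP is a one-pragma affair; a minimal sketch (the function is made up for illustration):

[CODE]
#include <omp.h>   /* build with e.g. gcc -fopenmp */

/* Scale an array in parallel: OpenMP splits the loop iterations
   across however many cores the machine has. */
void scale(float *data, int n, float factor)
{
    #pragma omp parallel for
    for (int i = 0; i < n; i++)
        data[i] *= factor;
}
[/CODE]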

Apologies for veering off topic here, but this is all part of the picture for Intel right now.
 
These chips are intended for cheap netbooks and laptops, not Apple's MacBook Pros. Who cares if they deliver better performance than Intel's previous iterations? Intel's GPUs are rubbish (they always have been!) and they have no place inside Apple's computers.

I used to own a 2nd-gen BlackBook, which I got rid of almost immediately after I compared its graphics capabilities to my 2001 GeForce graphics card.

Some people treat this like good news, but they really need to get their facts right before they start posting here... :mad:

Even if you have a discrete chip, having good graphics support in the CPU allows the system to shut off the discrete chip to save power. Some of us professional users would love to hit a 10-hour runtime. You may also be able to use one chip for OpenCL and the other for graphics, or combine them for more power. Offloading data from the CPU to an embedded GPU is probably much faster than offloading to a discrete GPU that can't share memory. So (as someone who is not an expert) I would say that a discrete chip may actually be inferior for OpenCL.
 
Intel has been promising good graphics since the i740. If you're old enough to remember those, you know by now that any promise Intel makes about graphics is pure crap.

And then all the 13" would-be buyers will be crying themselves to sleep at night when they get a big helping of Intel graphics slower than the 320M.

A nice new fast CPU with slower graphics than the old model. How pathetic. If only Intel would let someone who knows a thing or two about graphics design the thing, instead of always shipping their sub-par offerings.

Well said!! Thank you!!
 
Intel's Sandy Bridge does not support OpenCL in its integrated GPU. Its GPU is not programmable like the GPUs in the nVidia GeForce 9400M and GeForce 320M.

Intel still hasn't been able to develop programmable GPUs, which are required to run OpenCL.

This is indeed the crux. As I say, the Intel GPU *at this stage* is just a spoiler for NVIDIA's low-end GPU market.

The other point we're missing here is that the GPU needs to be on the socket for future performance. The Intel/NVIDIA spat means that NVIDIA is likely unable to go that route (though some say their Transmeta purchase gives them a route in, as they are now technically an x86 licensee).

Intel would have this sewn up if they could just make better GPUs or had another plan. Hmm, say, Larrabee. Thing is, I suspect they're badly delayed on it, and that is causing them trouble, since in the meantime they are stuck with these non-performing integrated GPUs on the die.
 
Sandy Bridge? Who the heck is naming these? Why not just call it Wet Dog or something?
 
Eventually they may catch up with or even surpass discrete chips. Even if you have a discrete chip, having a good GPU in the CPU allows the system to shut off the discrete chip to save power.

Only Intel DOES NOT offer a good GPU inside the CPU. AMD does...


Some of us professional users would love to hit a 10-hour runtime.

10 hours of doing what, exactly? Spreadsheets?
I'm an architect, and both Vectorworks and Cinema 4D, which I use regularly on my MacBook Pro, require a decent GPU WITH GPGPU capabilities...
 
Intel bashing seems to be a bit of a tradition.

Yes, the 320M is still the faster GPU, and testing at the lowest detail settings (where the CPU impact is highest) with a much faster CPU doesn't seem fair, but it is reasonable: that is IGP performance, and in practical use SB gives you pretty decent, usable performance.

Intel may have a bad track record in GPUs, but Intel has always had loads of money and has produced great work when it really wanted to; Core 2 Duo after Pentium 4 is one example. They only started the whole GPU push a few years ago, and you cannot do this from one day to the next. A new architecture usually needs a few years until it is available, and the SB GPU is the first GPU to come from this new focus.

The most noteworthy thing, I think, is that this SB IGP is probably the best possible solution for notebooks one can think of. It gets its speed from different things than the 320M does and has a lot of fixed-function units. That may be bad for OpenCL and maximum programmability, but it is very good for battery life: much of the performance comes not from many execution units but from low cache latency and the like, which means the performance per watt under load is probably a lot better.
The direct connection to the LLC, aka L3, might also negate some of the fixed-function drawbacks. It is not a high-performance GPU, but it is designed to excel in its field. If someone needs massive GPGPU processing power and CPU + AVX is not enough, then he/she will use a dedicated GPU. Why design an IGP for anything but its intended purpose if it hurts battery life? Most people don't need OpenCL. They might need some encode acceleration, and Intel dedicated about 3 mm² of chip space to that, where it performs better than some high-end GPUs.
It can do everything an average user needs at the lowest possible energy consumption. I'd say that makes it the perfect GPU for notebooks. Everything else just needs an additional dedicated GPU monster, which can now, even in a quad-core CPU, be disabled to get great battery life.

When you look at the overall energy consumption, it sounds like it would be worth doing a small redesign to make the 15" and 17" quad-core compatible. The battery life could be advertised as the same, while the performance would almost double. But history tells us Apple prefers the slim form to speed, so it is not a very likely possibility. It would be a nice surprise, though.
 