Umm, not to rain on anyone's parade, but NVidia and Windows have been doing this for a while now. Not sure about ATI though.
nVidia came up with CUDA, an alternative to OpenCL. I was not aware that Windows was using it, though. ATI, on the other hand, was looking at OpenCL but had yet to implement full support for it.
 
Wow, that's impressive. GPU performance is a resource, and we couldn't have found a better time to tap it, in my humble opinion. :D

That doesn't necessarily mean too much. Assuming you can feed it the info fast enough, it is going to be amazing, but any task that utilizes it will have to A) be highly parallelizable (not true for most tasks) and B) be limited by processing power rather than by the hard drive, RAM, user input speed, etc.

This will be great for some tasks (HandBrake, compiling code, video decoding), but don't expect it to make Safari run any faster.
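To make "highly parallelizable" a bit more concrete, here's a rough, purely illustrative sketch in plain C (the function names are made up for this post, not any real API):

    /* Embarrassingly parallel: every pixel is independent, so a GPU can
       hand each one to a different shader. */
    void brighten(unsigned char *pixels, int count, int delta)
    {
        for (int i = 0; i < count; i++) {
            int v = pixels[i] + delta;
            pixels[i] = (unsigned char)(v > 255 ? 255 : v);
        }
    }

    /* Inherently serial: each step needs the previous result, so hundreds
       of shaders don't help at all. */
    double iterate(double x, int steps)
    {
        for (int i = 0; i < steps; i++)
            x = 4.0 * x * (1.0 - x);   /* step i depends on step i-1 */
        return x;
    }

The first loop is the kind of thing this is aimed at; the second is the kind of thing that stays on the CPU no matter how many shaders you have.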

Also, this stuff has been going on in Windows for at least a couple of years. Folding@Home utilizes the GPU like this, and things like PS CS4 are supposed to already be utilizing the GPU on Windows (not sure how that is going on Macs).
 
While it isn't very amazing, the Intel GMA X3100 does have hardware shaders whose computational abilities can be taken advantage of.

You'll get a few more on the GMA X4500. GPU-based decoding is still a little up in the air right now.
 
You mean Media Me iTunes X? ;):D

  • A Core 2 Duo at 2.5 GHz is 20.00 GigaFLOPS using SSE.
  • 2 quad-core Xeons at 3.2 GHz is 102.4 GigaFLOPS using SSE.
  • A GeForce 8600M GT is 91.20 GigaFLOPS.
  • A GeForce 8800 GT is 504.0 GigaFLOPS.
  • The Larrabee (Intel GPU using x86 cores) is expected to reach at least 960 GigaFLOPS.
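For what it's worth, those figures are just theoretical peaks: number of units × FLOPs per cycle × clock speed. A quick sanity check in C, assuming 4 single-precision SSE FLOPs per core per cycle and 3 FLOPs per G8x shader per cycle (the 950 MHz and 1.5 GHz shader clocks are my assumption from the cards' published specs, not from the list above):

    #include <stdio.h>

    /* Peak throughput = execution units * FLOPs per cycle * clock (GHz). */
    static double peak_gflops(int units, double flops_per_cycle, double clock_ghz)
    {
        return units * flops_per_cycle * clock_ghz;
    }

    int main(void)
    {
        printf("Core 2 Duo 2.5 GHz      : %6.1f GFLOPS\n", peak_gflops(2, 4, 2.5));   /* 20.0  */
        printf("2x quad-core Xeon 3.2   : %6.1f GFLOPS\n", peak_gflops(8, 4, 3.2));   /* 102.4 */
        printf("GeForce 8600M GT (32 SP): %6.1f GFLOPS\n", peak_gflops(32, 3, 0.95)); /* 91.2  */
        printf("GeForce 8800 GT (112 SP): %6.1f GFLOPS\n", peak_gflops(112, 3, 1.5)); /* 504.0 */
        return 0;
    }

Real-world throughput is of course well below those peaks; memory bandwidth usually gets in the way first.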

Larrabee is 20x more efficient in terms of performance per watt than a Core 2 Duo, despite having half the single thread performance.

AMD's current HD 4870 gets 1.2 TFLOPS, today, at a quite affordable price. Quite a lot of power consumption too, but a lot less than Larrabee is rumoured to consume.
 
I'm hoping that this means that Apple will move to discrete graphics across the line again later this year in order to boost performance.

Could be a part of the "products with features at price points that our competition can't match" deal.
 
As I see it, Apple could just as easily use Intel's upcoming GMA X4?00 (I don't remember what the ? is, but I know it's a number) in future MacBooks, which will probably work much better with OpenCL than the GMA X3100 does.

The X4500 would be far outclassed, especially since it only has 10 unified shaders, compared to 32 for the 8600M GT. You may be correct on it being better than the X3100, although it shouldn't be by much; the X3100 has 8 unified shaders.
 
Seeing all this nice stuff about GPUs is great and all, but you have to remember that the GPU is suited to particular applications and tasks. It will be a combination of CPUs and GPUs working together, so it's not going to be the be-all and end-all of processing power. The deciding factors in this waiting game will be NVIDIA's track record and Apple's fantastic offering of GPUs we have all come to know and love, so we will see if this technology delivers, or maybe we will see another MobileMe repeat?
 
While it isn't very amazing, the Intel GMA X3100 does have hardware shaders whose computational abilities can be taken advantage of.

You'll get a few more on the GMA X4500. GPU-based decoding is still a little up in the air right now.

The big issue is that, in modern graphics cards, the shaders have additional functionality that allows them to work as generic calculation engines rather than just for graphics. I think AMD made that leap between the HD2000 and the HD3000 series, although even before that the hardware was good enough for Folding@Home, which had a client written directly for AMD hardware using AMD's CTM (lower level than CUDA, but the same idea, and probably harder to get to grips with).
 
The X4500 would be far outclassed, especially since it only has 10 unified shaders, compared to 32 for the 8600M GT. You may be correct on it being better than the X3100, although it shouldn't be by much; the X3100 has 8 unified shaders.
Keep in mind that not all shaders are created equal. More shaders are still better, but I wouldn't compare different manufacturers on shader count alone.
 
As I see it, Apple could just as easily use Intel's upcoming GMA X4?00 (I don't remember what the ? is, but I know it's a number) in future MacBooks, which will probably work much better with OpenCL than the GMA X3100 does.
But their use of system RAM is not that good for this kind of setup.
 
But their use of system RAM is not that good for this kind of setup.
True - integrated GPUs aren't that powerful to begin with. However, Intel's QuickPath Interconnect technology will reduce the penalty integrated graphics must pay to access system RAM - this will make a performance difference, but how big remains to be seen.
 
True - integrated GPUs aren't that powerful to begin with. However, Intel's QuickPath Interconnect technology will reduce the penalty integrated graphics must pay to access system RAM - this will make a performance difference, but how big remains to be seen.
AMD's available IGP solutions are quite passable with their integrated memory controller usage. Then again, some of them also have a small amount of dedicated DDR3 RAM.
 
What about Toshiba's Spurs Engine?

I would hate to leave out Toshiba's Spurs Engine multimedia powerhouse. Toshiba has put it into one of their laptops, so why not Apple, who makes some serious multimedia applications? Didn't Apple mention some product transitions? It seems that you could have a board with integrated graphics, an Intel CPU, and a Spurs Engine to speed up multimedia.
 
Someone in that article is using a misnomer. How is it that there are "hundreds of microprocessor cores" waiting to do my bidding, yet my processor is a "dual-core" one? That makes absolutely no sense. Part of me thinks someone means "transistors" or something else.
 
Someone in that article is using a misnomer. How is it that there are "hundreds of microprocessor cores" waiting to do my bidding, yet my processor is a "dual-core" one? That makes absolutely no sense. Part of me thinks someone means "transistors" or something else.
A unified GPU shader is a specialized processor. You can consider the many dozens to hundreds of them to be multiple cores.
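If it helps, here's a purely conceptual way to picture it in C (the shader_id / shader_count parameters are hypothetical stand-ins, not how any real driver exposes it): the same tiny routine runs on every shader at once, each working on its own slice of the data.

    /* Conceptual sketch only: each of the "hundreds of cores" runs this,
       processing every shader_count-th element of the array. */
    void run_on_one_shader(int shader_id, int shader_count, float *data, int n)
    {
        for (int i = shader_id; i < n; i += shader_count)
            data[i] = data[i] * 0.5f + 0.25f;   /* some per-element math */
    }

So "dual-core" describes the CPU, while the "hundreds of cores" in the article are these much simpler shader processors on the graphics card.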
 
What would we compare the GPU in the iPod touch / iPhone to?

ATI Rage vs. iPhone/iPod GPU?

Dude, like check out PowerVR chips, they r mad and I'm going to put one in my Mac Pro today!!!!

Umm, not to rain on anyone's parade, but NVidia and Windows have been doing this for a while now. Not sure about ATI though.

Yeah, that's definitely taken Vista places. And don't get me started on NVIDIA; they can't even make SLI worth the extra dollars due to their "fantastic" drivers.
 
I don't think anyone is claiming the technology is brand new or the idea exclusive, but the big difference is that Apple is pushing for what appears to be an open standard which will also be built into Mac OS X, and I assume it will also mean support is added to all the main compilers.

I'm guessing the other main part of Apple's plan is to have Mac OS X and all their own applications recompiled to make use of it on release, which will give their software and computers a big advantage over the competition if they don't keep up.
 
Keep in mind that not all shaders are created equal. More shaders are still better, but I wouldn't compare different manufacturers on shader count alone.

It would be safe to say that Intel shaders won't be as fast as either ATI or NVIDIA shaders. I can't seem to find a whole lot of info on their math capabilities (like you can for the others) on Beyond3D. I guess we will have to wait until some more comprehensive benchmarks take place. Of course, this would all be a moot point for Apple, since they run different drivers anyway. Performance in Windows doesn't necessarily translate to performance on the Mac.
 
If I could afford an 8-core Mac Pro, it wouldn't be such a big deal, since those monsters can run through a DVD faster than Mexican water through a first-time tourist. :D:p

You mean faster than a Mexican hopping the border fence into the states. ;)
 
nVidia came up with CUDA, an alternative to OpenCL. I was not aware that Windows was using it, though. ATI, on the other hand, was looking at OpenCL but had yet to implement full support for it.

It's not so much used by Windows as Windows being the platform it's used on. I think it's been available for Linux just as long, and I distinctly remember it not being available for Mac OS, but looking at the CUDA downloads page, apparently it is now.

There's also some talk floating around about CUDA being ported to Radeon cards, but I haven't been watching that closely.

Hopefully the OpenGL 3.0 spec will be finalized in time for Snow Leopard. That will put all these DX10 cards in Macs to work! It'll put me to work too; I'll finally take the time to learn OpenGL.
 