I'm hoping it will be the beginning of replaceable integrated graphics.
"replaceable integrated graphics" is an oxymoron. The only reason we call them "integrated" is because they ARE integrated into the motherboard and therefore are unable to be replaced with newer units. This allows
Apple to save money by having Intel build cheapo graphics directly into the motherboard.
Also, the GPU is a great place to look for hardware acceleration in video decoding. QuickTime X would be the perfect place for this, and such improvements would benefit iChat.
Yep, hardware-accelerated video decoding is already in place in all modern discrete graphics, although I'm unsure if Apple has been using it, or if that is indeed what QuickTime X is going to do for desktop OS X. I also believe Intel's integrated crap finally has hardware decoding as well with the new laptop platform.
Sounds great. But considering that all of NVIDIA's mobile GPUs are defective and will fail under their own massive thermal output, I hope Apple starts to go back to ATi...
That's a one-time manufacturing problem that is said to affect only a small number of the mobile GPUs out there. I've had 5+ over the years, and I haven't had an issue. I even recently replaced a GeForce Go 7900 with a mobile Quadro unit from eBay, and it has been working great.
AFAIK, considering nVidia doesn't even have a fab, it could be the fault of the Chinese/Taiwanese fabrication facility.
Maybe this will trickle up to notebooks and desktops and we'll no longer have a two processor CPU-GPU combo, but one GPGPU or one CPU that's great at graphics!
Well, that is already starting to happen. Although not technically "merging" into one common unit, both Intel and AMD have projects on tap for 2009 that integrate CPU and GPU cores into the same package, and eventually into the same die even.
Although the first versions of these will only have the capabilities of integrated graphics, I'm sure eventually they will completely combine and we'll no longer see separate high-end enthusiast graphics cards, as they'll be replaced by massively parallel hybrid processors that take care of all processing duties on the system.
Also, this stuff has been going on in Windows for at least a couple of years. Folding@home utilizes the GPU like this, and things like Photoshop CS4 are supposed to already be utilizing the GPU on Windows.
Umm, not to rain on anyone's parade, but NVidia and Windows have been doing this for a while now; not sure about ATI, though.
Both Nvidia and ATI have their own GPGPU software development kits, but Nvidia's CUDA has definitely been mentioned a lot more, probably because they've been the dominant force in GPUs recently, as ATI didn't have competitive product offerings for a long time.
The SDKs have been available for Windows/Linux, but there haven't been ANY commercial consumer applications using them, so I definitely wouldn't say "Windows has been doing this for a while". The Adobe Photoshop GPU presentation was an unofficial look into Adobe's research labs, not something currently available for Windows, and Folding@home is obviously a specialist scientific case. GPGPU is still largely something for research, although I have seen some examples of internal use at corporations.
Publicly at least, Apple is already way out in front of Microsoft, having formed the OpenCL standards body to streamline GPGPU processing into a standard framework.
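For anyone who hasn't seen what that kind of framework actually looks like, here's a rough, illustrative sketch of a trivial vector-add written against the OpenCL-style C API. Treat it as a guess at the shape of things, not anything Apple has shipped: error handling is stripped, "vadd" is a made-up example kernel, and the header path will differ by platform. The point is that the same kernel can target any vendor's GPU, or even a CPU device.

/* Hedged sketch: minimal OpenCL host program plus a tiny data-parallel kernel.
 * Illustrative only -- all error checks omitted, "vadd" is a made-up example. */
#include <stdio.h>
#include <CL/cl.h>   /* on OS X this would presumably be <OpenCL/opencl.h> */

static const char *src =
    "__kernel void vadd(__global const float *a,\n"
    "                   __global const float *b,\n"
    "                   __global float *c)\n"
    "{\n"
    "    int i = get_global_id(0);   /* one work-item per array element */\n"
    "    c[i] = a[i] + b[i];\n"
    "}\n";

int main(void)
{
    enum { N = 1024 };
    float a[N], b[N], c[N];
    for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

    cl_platform_id plat;  cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    /* ask for a GPU device; the same code could run on a CPU device too */
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    /* copy the input arrays to device buffers, make an output buffer */
    cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof a, a, NULL);
    cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof b, b, NULL);
    cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof c, NULL, NULL);

    /* build the kernel source at runtime, for whatever device we got */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "vadd", NULL);

    clSetKernelArg(k, 0, sizeof da, &da);
    clSetKernelArg(k, 1, sizeof db, &db);
    clSetKernelArg(k, 2, sizeof dc, &dc);

    size_t global = N;   /* launch N work-items, one per element */
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, dc, CL_TRUE, 0, sizeof c, c, 0, NULL, NULL);

    printf("c[10] = %f (expect 30)\n", c[10]);
    return 0;
}

The whole design choice is that only the small kernel is data-parallel; the host just sets up buffers and queues work. That separation is what would let the same code run on NVIDIA, ATI, or a plain multi-core CPU instead of being tied to one vendor's SDK.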
Keep in mind not all shaders are created equal. More shaders are still better, but I wouldn't compare different manufacturers on shader count alone.
Definitely! Intel's IGPs have come a long way, but they are still crap compared to anything from AMD/nVidia.
True - integrated GPUs aren't that powerful to begin with. However, Intel's QuickPath Interconnect technology will reduce the penalty integrated graphics must pay to access system RAM - this will make a performance difference, but how big remains to be seen.
The IGP in current platforms has a direct connection to the memory through the memory controller on the northbridge -- with Nehalem, the IGP will have to go from the northbridge over QuickPath to the CPU-integrated memory controller, which will then access the memory that is directly connected to the processor.
I can't imagine that this would improve the performance of the IGP, as the added latency would appear to cancel out any increase in memory bandwidth. I'm no expert, however, so I'll have to do some more research.
Would hate to leave out Toshiba's SpursEngine multimedia powerhouse. Toshiba has put it into one of their laptops, so why not Apple, who makes some serious multimedia applications? Didn't Apple mention some product transitions? It seems you could have a board with integrated graphics, an Intel CPU, and a SpursEngine to speed up multimedia.
The "Spurs engine" is stupid. First of all, they should have used a full CELL BE chip. Secondly, it doesn't matter anyways because who one wants a proprietary, hard-to-program co-processor? Just like the "Physx" add-in card, this will fail because it won't have any extensive community support. For speeding up parllel computations, You want a vendor-neutral standards-based approach that will be compatible with a variety of hardware AKA OpenCL.
Just how much extra can they squeeze from the GPUs? Doesn't just about everything (Quartz 2D, Core Image, Core Video) use the GPU already?
Indeed, but you are referring to graphics-related processing. The term "GPGPU" refers to using the GPU to do general-purpose calculations for tasks that are easily parallelized, like video processing/encoding/decoding/iDCT, audio encoding, digital image processing, and scientific simulations like fluid dynamics, protein folding, oil/gas geology, etc.
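To make "easily parallelized" concrete, here's a hedged, made-up example of the kind of kernel those tasks boil down to in OpenCL C -- a toy brightness adjustment where every pixel gets its own work-item (the kernel name and parameters are purely illustrative, not from any real framework):

/* Illustrative OpenCL C kernel: scale each pixel of an 8-bit image by "gain".
 * Enqueued as a 2D range (width x height), so each work-item handles one pixel. */
__kernel void brighten(__global const uchar *in,
                       __global uchar *out,
                       const int width,
                       const float gain)
{
    int x = get_global_id(0);                 /* pixel column */
    int y = get_global_id(1);                 /* pixel row */
    int i = y * width + x;
    float v = in[i] * gain;                   /* each pixel is computed... */
    out[i] = (uchar)clamp(v, 0.0f, 255.0f);   /* ...independently of all others */
}

Because no pixel depends on any other, the GPU can run thousands of these work-items at once, and that independence is exactly what makes video encoding, protein folding, and the other workloads above good GPGPU candidates.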
The difference is that Larrabee's 960 gigaFLOPS is double-precision (used in CPUs), while the HD 4870's 1.2 teraFLOPS is single-precision. Regular GPUs take a big hit on double-precision. In the HD 4870's case, it's 1/5 the SP speed (for the GTX 280, it's 1/12). In other words, 240 double-precision gigaFLOPS.
You are correct, but FP64/double precision is only really important for scientific and industrial simulation/computation since they need that level of accuracy. I don't think it will matter for most consumer use of GPGPU.
That doesn't change the fact that it will be integrated into Nehalem cores. It won't be a separate socket. There is supposed to be that option, but I doubt Apple would go that route.
There seems to be a common misunderstanding that Larrabee will be introduced as a CPU-integrated graphics chip. Larrabee is going to be a PCIe add-on board, like most GPUs. Intel's first products that combine CPU and GPU cores into the same package or die are going to use graphics technology from their current motherboard integrated graphics, NOT from Larrabee -- although this will no doubt change down the road sometime.