The CUDA SDK for Windows is awesome. It includes some small demos of CUDA implementations such as a real-time Mandelbrot generator, a particle simulation, a stable fluids model, and a bunch of other command-line tests. I ran the demos on an 8800 GTS, which sustained several hundred GFLOPS during the demos and tested at about 60 GB/s of internal memory bandwidth.
There is some extreme power in GPUs that's just waiting to be unlocked.
Damn you! I need one NOW! This may push me over the edge for a new Mac Pro with the 8800GT!
This GPGPU sounds like just another CPU. I mean, mathematical tasks such as image and sound processing? If they really want another processor, just add another CPU. It doesn't have to be from NVIDIA; it could be from Intel, no?
I don't really get it. Or is this just like what AltiVec used to be?
Think coprocessor. "GPGPU" is just a concept: it refers to programming the shaders (sort of like simple "cores") of a video card to carry out general-purpose, computationally intensive calculations instead of 3D graphics calculations. Because 3D graphics processing is highly parallel, the hardware built for it is also very efficient at other kinds of highly parallel calculations, the sort generally seen in the high-performance computing (supercomputer) arena. Think of uses such as digital signal processing, digital imaging, ray tracing, digital audio and video processing, and scientific simulations such as molecular dynamics, computational chemistry, weather modeling, neural networks, etc.
The main article is sort of misleading in making it appear as if a "GPGPU" is a discrete item separate from existing graphics cards. Granted, NVIDIA is now making separate "GPGPU" cards that are basically an 8800 GTX without a DVI port, plus some other tweaks. My point is that "GPGPU" is just a concept, and it can be done on existing high-end NVIDIA (and ATI) graphics cards, namely the 8800 series. Originally, people tried to adapt the shaders in GPUs to process general data using GPU shading languages, which was incredibly difficult. Now both NVIDIA (with CUDA) and ATI (with Close To Metal) offer SDKs for programming the GPU more simply in a C-like language.
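To make the "C-like language" point concrete, here's a minimal sketch of what a CUDA kernel looks like. The function and variable names are just illustrative, not from any SDK sample:

// Minimal CUDA kernel sketch: each thread scales one element of an array.
// This is the kind of data-parallel work that used to require a shader hack.
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)
        data[i] *= factor;
}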
However, GPGPU won't be replacing your Core 2 Duo anytime soon, since it isn't capable of the general-purpose tasks your processor handles now. More likely it will be used as a kind of coprocessor onto which specialized applications off-load their data processing.
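The off-load pattern looks roughly like this on the host side (a simplified sketch with no error checking, reusing the hypothetical scale kernel from above):

// Host-side off-load pattern: copy data to the GPU, run the kernel, copy back.
#include <cuda_runtime.h>

void offload_scale(float *host_data, int n, float factor)
{
    float *dev_data;
    size_t bytes = n * sizeof(float);

    cudaMalloc((void **)&dev_data, bytes);                            // allocate GPU memory
    cudaMemcpy(dev_data, host_data, bytes, cudaMemcpyHostToDevice);   // send data to the card

    int threads = 256;
    int blocks  = (n + threads - 1) / threads;
    scale<<<blocks, threads>>>(dev_data, factor, n);                  // launch the kernel

    cudaMemcpy(host_data, dev_data, bytes, cudaMemcpyDeviceToHost);   // fetch the results back
    cudaFree(dev_data);
}

The CPU keeps doing the general-purpose work; only the heavy, parallel number crunching gets shipped over to the GPU and back.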
So wouldn't my 8800 GT technically be capable of running as a GPGPU in my Mac Pro, assuming Apple issues the correct software update?
In theory, NVIDIA would just have to release its CUDA SDK for OS X.
Sorry Intel, but it sounds like the days of the x86 instruction set CPU are coming to an end. In a few years these babies may be providing most of the horsepower for general-purpose computing.
I would definitely not go that far. GPUs can't do anything other than extremely parallel calculations. You'll still need an x86 for all the general processing tasks.
The only issue I have is that I think the current GF8 series of cards only support FP16 (16-bit floating point, a.k.a. single precision). That may not be enough for most apps. It would be great for test runs, but I know that for scientific calculations FP16 isn't sufficient; they need FP32.
Single precision is 32-bit; double is 64-bit. I believe only the new "dedicated" GPGPU cards from NVIDIA support double-precision CUDA.
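FWIW, in CUDA you pick the precision simply by using float or double in the kernel. Double precision only works on the newer compute-capability-1.3 hardware and needs the right -arch flag at compile time; on the GF8 series, doubles are silently demoted to floats. A rough sketch, with the kernel names being purely illustrative:

// Single vs. double precision in CUDA: precision is chosen by the C type.
// Note: double requires compute capability 1.3+ hardware and nvcc -arch=sm_13
// (or higher); on GF8-class parts, doubles get demoted to single precision.
__global__ void axpy_float(float a, const float *x, float *y, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];      // 32-bit (single precision) math
}

__global__ void axpy_double(double a, const double *x, double *y, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];      // 64-bit (double precision) math
}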