zakatov said:
I have no background in programming whatsoever, but from what you just said it seems that "last year," the CI would let the OS handle the GPU stuff without special calls, whereas now you have to specifically get the GPU involved. Am I understanding this correctly?
Not quite.
Disclaimer 1: I'm a newbie to the platform, but here's my understanding of it.
Disclaimer 2: I'm taking a bit of license with some details to make this understandable to non-programmers.
Core Image is presented as an API (application programming interface); think of that as a set of functions that programmers can make use of. It's a new API available in Tiger. There's a good overview of it here:
http://www.apple.com/macosx/features/coreimage/
Now, there are several layers of API available for graphics and media. At the highest level are the Cocoa APIs (NS). Then there's Core Graphics (CG), which is for 2D; Core Image (CI), which is new and does image processing on the GPU; and OpenGL (GL), which is for 3D and also runs on the GPU. (There are others too, such as CM.)
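To make those prefixes a bit more concrete, here's a tiny sketch in today's Swift (which didn't exist in the Tiger era, so take it as illustrative only) touching one type from each of those layers:

```swift
import AppKit        // the Cocoa layer (NS-prefixed classes)
import CoreGraphics  // the Core Graphics layer (CG prefix)
import CoreImage     // the Core Image layer (CI prefix)

// One type from each layer, just to show the naming convention.
let cocoaImage = NSImage(size: NSSize(width: 64, height: 64))  // Cocoa (NS): high-level app objects
let quartzColor = CGColor(gray: 0.5, alpha: 1.0)               // Core Graphics (CG): 2D drawing
let filter = CIFilter(name: "CIGaussianBlur")                  // Core Image (CI): GPU image processing
```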
3D stuff (GL) runs on the GPU. You see this in the genie effect when you minimize or restore an app, or when you use Exposé (for example). This hardware acceleration (3D on the GPU) has been marketed as Quartz Extreme.
Core Image is a new API for image processing that runs on the GPU. It ships with a bunch of built-in filters and transitions, plus a plugin architecture called Image Units. It's a much simpler, higher-level API for hardware-accelerated (GPU-based) image processing.
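To give a flavour of what "higher-level" means in practice, here's a rough sketch in today's Swift (CISepiaTone is one of the real stock filters; the file path is made up):

```swift
import Foundation
import CoreImage

// Load an image, push it through a built-in filter, and render the result.
// The GPU does the per-pixel work; the programmer just wires filters together.
let url = URL(fileURLWithPath: "/tmp/photo.jpg")   // hypothetical input file
guard let input = CIImage(contentsOf: url) else { fatalError("couldn't load image") }

let sepia = CIFilter(name: "CISepiaTone")!         // one of the stock filters
sepia.setValue(input, forKey: kCIInputImageKey)
sepia.setValue(0.8, forKey: kCIInputIntensityKey)  // 0.0 = no effect, 1.0 = full sepia

let context = CIContext()                          // renders on the GPU where possible
let output = sepia.outputImage!
let rendered = context.createCGImage(output, from: output.extent)
```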
We didn't have this before, so apps that make use of it leave us better off than we were, even without Quartz 2D Extreme. Since not all 2D drawing is hardware accelerated (that's what Quartz 2D Extreme would add), programmers using CI simply need to be aware of what runs on the GPU and what doesn't.
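For the curious, that awareness shows up right in the code: the context you render with determines where the work happens. A minimal sketch in today's Swift (the option is real; whether the GPU actually gets used still depends on the hardware):

```swift
import CoreImage

// A CIContext decides where a filter chain actually executes.
let gpuContext = CIContext()  // default: uses the GPU when a capable card is present
let cpuContext = CIContext(options: [.useSoftwareRenderer: true])  // forces the CPU fallback
```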