I was perusing Apple Developer tech notes from 2005 and came across the following clarification about GPU acceleration on the PB 12" (1GHz+) and the stock FX 5200 GPU in the 2003-2004 G5s.
I ran some non-scientific tests of my own with Core Image and OpenGL, and my results on a PowerBook G4 12" 1.5GHz with 1.25GB RAM confirm it:
Explicitly requesting that the GPU perform a Core Image operation results in:
1. Low(er) CPU usage, but longer rendering times (approx 1.5x duration with my custom filter stack).
2. Lower app memory usage, as the GPU's VRAM is used [in addition to] system memory.
3. Hotter running temps - the GPU's cooling isn't as effective as the CPU's. The single cooling fan is farther from the GPU, and the GPU makes only indirect contact with the heatsink via a thermal silicone pad.
Switching back to the default "Software" (CPU) Core Image rendering mode reveals:
1. Full CPU load, but a quicker render.
2. Substantially larger App memory usage.
3. Cooler running temps.
I am curious whether Apple performed a finely tuned balancing act with this GPU to give the "effect" of hardware acceleration throughout OS X on the PB12" (and G5s). Given the hotter running temps, I would guess that Leopard was more GPU-biased than Tiger in this balance.
The FX 5200 (Go) seems to be a poor choice of GPU for what were considered pro machines. The Radeon Mobility 9700 and 9600 used in the 15" and 17" models were far better graphics processors for their time.
I personally prefer the idea of using the hardware for what it was designed for; i.e. the GPU for graphics acceleration. But I am a little stumped on this one.
If one were hypothetically building an app which made use of Core Image, would it be considered better to force GPU acceleration on the PB12" / FX 5200 or stick with "Apple knows best" and allow the CPU to take the load?
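For anyone experimenting with this, Core Image does expose the choice directly: QA1416's answer hinges on the kCIContextUseSoftwareRenderer option key passed when creating a CIContext. Below is a minimal sketch using the modern Swift spelling of that key (CIContextOption.useSoftwareRenderer); the makeContext helper name is my own, and on Tiger-era Objective-C you would pass kCIContextUseSoftwareRenderer in the options dictionary instead:

```swift
import CoreImage

// Hypothetical helper: build a CIContext that forces the software (CPU)
// renderer when useCPU is true; when false, Core Image applies its own
// rules (Table 1 in QA1416) to pick a processor.
func makeContext(useCPU: Bool) -> CIContext {
    let options: [CIContextOption: Any] = [
        .useSoftwareRenderer: useCPU  // true = CPU path, false = let Core Image decide
    ]
    return CIContext(options: options)
}

let cpuContext = makeContext(useCPU: true)   // explicit software rendering
let autoContext = makeContext(useCPU: false) // default processor selection
```

Note that on a 5200-equipped machine, leaving the option unset already resolves to the CPU per Table 1 below, so this key mainly matters when you want to override Apple's default in one direction or the other.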
Feel free to discuss...
Technical Q&A QA1416
Specifiying [sic] if the CPU or the GPU should be used for rendering.
Q: Which processor will Core Image use for rendering, and how can I specify it?
A: Core Image can either use the system's CPU or an ARB fragment-capable GPU for rendering. Unless specified, Core Image will use a simple set of rules to determine the best processor for rendering on the current system. Table 1 lists the rules and the order in which they are evaluated.
Table 1: Rules, in order, that Core Image uses to determine the best processor for rendering
Code:
If the GPU is...                                           Default Processor
GeForce 5200 series                                        CPU (see note)
ARB fragment capable HW (except the GeForce 5200 series)   GPU
non-ARB fragment capable HW                                CPU
Note: By default, Core Image uses the CPU for rendering on systems with a GeForce 5200 series card because, for most benchmarks, the 5200 can be slower than the CPU on currently shipping hardware.
.....
Posted: 2005-08-16