
AphoticD

macrumors 68020
Original poster
Feb 17, 2017
I was perusing Apple Developer tech notes from 2005 and came across the following clarification about GPU acceleration on the PB 12" (1GHz+) and the stock FX 5200 GPU in the 2003-2004 G5s.
Technical Q&A QA1416
Specifiying [sic] if the CPU or the GPU should be used for rendering.

Q:
Which processor will Core Image use for rendering, and how can I specify it?
A: Core Image can either use the system's CPU or an ARB fragment-capable GPU for rendering. Unless specified, Core Image will use a simple set of rules to determine the best processor for rendering on the current system. Table 1 lists the rules and the order in which they are evaluated.

Table 1: Rules, in order, that Core Image uses to determine the best processor for rendering
Code:
If the GPU is                          Default Processor
GeForce 5200 series                    CPU (See note)
ARB fragment capable HW
(except for the GeForce 5200 series)   GPU
non-ARB fragment capable HW            CPU

Note: By default, Core Image uses the CPU for rendering on systems with a GeForce 5200 series card because, for most benchmarks, the 5200 can be slower than the CPU on currently shipping hardware.

.....

Posted: 2005-08-16
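The rule order in Table 1 can be sketched as a small decision function. A minimal sketch in Python; the function name and its two inputs are hypothetical stand-ins for whatever Core Image actually queries from the driver:

```python
def default_core_image_renderer(gpu_name: str, arb_fragment_capable: bool) -> str:
    """Mirror Table 1's rules, evaluated in order.

    gpu_name / arb_fragment_capable are stand-ins for the driver
    capabilities Core Image really inspects; this is illustrative only.
    """
    # Rule 1: GeForce 5200 series falls back to the CPU regardless of its
    # ARB fragment support, since it often benchmarks slower than the CPU.
    if "5200" in gpu_name:
        return "CPU"
    # Rule 2: any other ARB-fragment-capable hardware gets the GPU.
    if arb_fragment_capable:
        return "GPU"
    # Rule 3: everything else renders in software on the CPU.
    return "CPU"

print(default_core_image_renderer("GeForce FX 5200 Go", True))   # CPU, despite ARB support
print(default_core_image_renderer("Radeon Mobility 9700", True)) # GPU
```

Note that rule 1 wins even though the 5200 is ARB-fragment capable, which is exactly the special-casing the note describes.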

I ran some of my own non-scientific tests with Core Image and OpenGL, and my results on a PowerBook G4 12" 1.5GHz with 1.25GB RAM confirm:

Explicitly requesting that the GPU perform a Core Image operation results in:
1. Low(er) CPU usage, but longer rendering times (approx. 1.5x the duration with my custom filter stack).
2. Lower app memory usage, as the GPU's VRAM is used in addition to system memory.
3. Hotter running temps. The GPU cooling isn't as effective as the CPU's; the single cooling fan is further from the GPU, and the chip makes only indirect contact with the heatsink via a thermal silicone pad.

Switching back to the default "Software" (CPU) Core Image rendering mode reveals:
1. Full CPU load, but a quicker render.
2. Substantially larger App memory usage.
3. Cooler running temps.

I am curious whether Apple performed a finely tuned balancing act with this GPU to give the "effect" of hardware acceleration throughout OS X on the PB12" (and G5s). Given the hotter running temps, I would guess that Leopard was more GPU-biased than Tiger in this balance.

The FX 5200 (Go) seems to be a poor choice of GPU for what were considered pro machines. The Radeon Mobility 9700 and 9600 used in the 15" and 17" models were far better graphics processors for their time.

I personally prefer the idea of using the hardware for what it was designed for, i.e. the GPU for graphics acceleration. But I am a little stumped on this one.

If one were hypothetically building an app which made use of Core Image, would it be considered better to force GPU acceleration on the PB12" / FX 5200 or stick with "Apple knows best" and allow the CPU to take the load?
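One hedged middle ground, rather than hard-coding either choice: time a representative filter run on both renderers at launch and cache the winner. A rough sketch in Python; `render_with` is a hypothetical callback standing in for running the app's actual Core Image filter stack on a given renderer:

```python
import time

def pick_renderer(render_with, candidates=("CPU", "GPU")):
    """Run one warm-up plus one timed pass per renderer; keep the fastest.

    render_with(renderer) is a hypothetical callback that executes the
    app's real filter stack on the named renderer.
    """
    timings = {}
    for renderer in candidates:
        render_with(renderer)          # warm-up (texture uploads, shader setup)
        start = time.perf_counter()
        render_with(renderer)
        timings[renderer] = time.perf_counter() - start
    return min(timings, key=timings.get)

# Toy stand-in mirroring the observation above: the GPU path takes ~1.5x as long.
costs = {"CPU": 0.010, "GPU": 0.015}
best = pick_renderer(lambda r: time.sleep(costs[r]))
print(best)  # CPU on an FX 5200-like machine
```

On an FX 5200 machine this would land on the CPU anyway (matching Apple's default), but it would automatically flip to the GPU on a 9600/9700 without any per-model special-casing.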

Feel free to discuss...
 
It just seems the FX 5200 was a bad choice from the start (along with inadequate cooling). To give Apple some benefit of the doubt, though, it wouldn't have been designed to accommodate Leopard.
It's an absurd kind of logic at play here: any graphics-intensive software will slow down if you throw it at the card, which wouldn't be good for games. Graphics apps, maybe, but then again, the name of the game back then was speed.
I've often wondered how the graphics in the iBook 12" compare directly; sometimes mine 'feels' superior to the PowerBook.
 
Hmm... this got me thinking- the old MAME OS X was written to take advantage of Core Video and Core Image, and this was the first app with shared source code that popped into my head. I wonder if it could be modified and recompiled to test your theory...?

Link: http://mameosx.sourceforge.net

I've got a 12" 1.33GHz iBook G4 that I'm willing to bet would test very close to the final 12" PB G4, especially considering the GPU and higher memory capacity of the iBook...
 

Yes, totally backwards logic.

1. Plan and develop Core Image, a highly capable GPU-based image processing framework which bridges Quartz and OpenGL to achieve amazing real-time results, entirely on the GPU.
2. Release a Mac with an underpowered GPU.
3. Hack OS X to fall back on the CPU for Core Image only on this particular GPU. Call the FX 5200 "Core Image Supported" to sell more PowerBooks and pretend everything is still fully accelerated.

Although it wasn't mentioned in the tech note, I believe the Radeon Mobility 9200 found in the iBooks and Mac mini works on a similar basis (except Apple does specifically state that Core Image is Not Supported on those).

The last-revision iBook's 9550, on the other hand, did have full Core Image acceleration like the 9600/9700 (at least I believe this is the case), so those units should theoretically outperform the final PB12.


Cool, thanks. I'm downloading the source now. I'll dig through it and see what jumps out.
 

That's true, but then Apple gave the 9550 half the VRAM of the FX 5200... making it harder to decide which is the better option...
 
The OpenMark scores show nearly double the benchmark result for the 9550 over the 5200, but this doesn't take texture sizes into account.

At the very least, the PowerBook 12" can run Doom 3 and the iBook can't. It does mean playing at the lowest settings and the lowest resolution (640x480), preferably in Panther, but it's still quite playable.

Interesting. They haven't explicitly specified the hardware renderer for Core Image, which means that on the FX 5200, MAME's real-time Core Image filters (Gaussian Blur, Bump Distortion, Bloom, etc.) are processed on the CPU.
 

So he took the "simple" way out, although I am curious whether he experimented with forcing GPU acceleration. The first build (in fact, the whole of MAME OS X) came about to use those new-at-the-time features, and I'm willing to bet that he played around with them a bit...
 
I think the point is Apple made a clever kludge: it renders most of Motion on the CPU, with the GPU providing only the final image mixing and output. That would be the only explanation. My mind dredges up the fact that Core Image uses the full 32-bit floating-point precision GPUs can provide, but the FX 5200 only works quickly at half precision, meaning AltiVec ironically had to pick up the slack.
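The half- vs single-precision gap is easy to see with Python's struct module, which can round-trip a float through 16-bit ('e') and 32-bit ('f') storage. This only illustrates the precision difference itself, not anything about actual FX 5200 behaviour:

```python
import struct

def roundtrip(fmt: str, x: float) -> float:
    """Store x at the given precision and read it back."""
    return struct.unpack(fmt, struct.pack(fmt, x))[0]

x = 1.0 / 3.0
half_err = abs(roundtrip('<e', x) - x)    # 16-bit half: the FX 5200's fast path
single_err = abs(roundtrip('<f', x) - x)  # 32-bit single: Core Image's working precision
print(half_err > single_err)  # True: half precision loses noticeably more detail
```

Half precision keeps only 11 significand bits against single precision's 24, which is why a framework built around 32-bit pixels gains nothing from a card that is only fast at 16-bit.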
 
A clever kludge indeed.

Maybe Apple happened to make a good deal on Nvidia’s “Go” chips at the time and decided to shoehorn this underpowered GPU into a “pro” Mac for its feature set rather than its performance.

Despite all this, the PBG4 12” is still hands down my favourite little PowerPC Mac :apple:

(Typing on one of ‘em right now)
 
Yeah, I once owned a 12-inch PowerBook with the first FX 5200. It worked well. I forget now, but I think I handed it off to my cousin, who didn’t have a computer at the time. She used it for a while before switching to a MacBook Pro and then a gaming PC.
 