You rely on a few tests from one URL as an example of a dual CPU's overall worth. The only 10-20% boost you refer to is from software not optimized for dual CPUs.
Photoshop is threaded reasonably well. Obviously the others are at least threaded, or there would be no performance gain.
A dual will be faster even when the software isn't optimized, and far faster when it is.
Unless the machine is so bandwidth-starved that it doesn't help much at all. Or the application is single-threaded, written in C or another language with no built-in threading support.
I have owned 3 dual Macs in my day
That's nice..
with the dual 1.42 MDD being the fastest. I base my every word on actual experience
Great. So you've run a machine with two 1.42 GHz processors, each outfitted with a full 1 MB of cache, sharing 1.33 GB/s of bandwidth, vs. putting two 2 GHz (or 1.8 GHz) chips sporting only 512 KB of cache in a machine sharing 1 GB/s of bandwidth. Fantastic comparison. You do realize the reason Apple started putting 2 MB of SRAM L3 cache in their G4s was to make up for the horrific memory/FSB bandwidth bottlenecks, right? Shame there was never really a PPC7470.
and the "proof" is all around you but I guess you're too limited in your thinking to see it.
Mkay. I could claim the opposite and say the proof was "all around you," but that wouldn't make what I said any more true. Unless you're going to provide actual evidence, people aren't going to buy what you're saying, especially when you're arguing with someone who actually backs up what they say with demonstrable evidence.
In your first post of our exchange you say XBench gives you 300 MB/sec, then in your 2nd post you say it's 200 MB/sec.
Um. Actually, if you'd read my first post a bit more carefully, you'd notice I said XBench gives me ~300 MB/s, and GeekBench (and a couple of others) put me in the 200+ MB/s range.
How did you lose this 100MB/sec overnight?
Computer got tired. But don't worry, I gave it coffee.
And about that copy/fill test in XBench.. my G4 gets 5.7 GB/sec; in that same test you get 1.1 GB/sec. That's over 5x faster memory on a 100 MHz bus Sawtooth.
Maybe your best is 5.7... mine's ~1.6 GB/s. The lowest I score is around 800 MB/s. The Fill test, as I mentioned, is somewhat meaningless.. or at least has no real relationship with memory speed. Obviously the buffer is small enough to fit in cache, so that's where the test actually runs.
Given that the 1 GHz 7455 in my G3 has 1 MB of backside L2 cache in the form of two 512 KB SRAM chips running at 250 MHz (256 bits wide), it's going to have much higher latency and a far slower clock than what I can only assume must be in your G4: a 7447 or 7448 (likely a 48), with on-die L2 cache running at 1:1 with the CPU.
Obviously neither my machine nor yours is capable of the scores it posts-- the Beige G3 has a maximum theoretical memory bandwidth of 533 MB/s, and the Sawtooth 800 MB/s. This is measuring cache speed, not memory bandwidth. If they'd made a 7448 for the G3, I'd be able to hit 5 GB/s+ too.
The differences you claim to not be so different are in fact extremely so.
As I've explained, no. Fill measures cache speed. If you want accurate memory bandwidth tests, check the Fill in Geekbench.
Any AGP G4 board would belittle any G3 board no matter what CPU it had.
>_> Throw a 500 MHz chip back in your Sawtooth and see how it stacks up. Hell, why don't you just post your XBench subscores here? Especially the memory tests.
You can't even put dual upgrades in any G3s because the boards simply can't cope with it. A G4 can easily.
Erm... there was actually a Dual 500 MHz G4 upgrade for a short time from DayStar for Beige G3s. What exactly is it about the G3 logic board that makes you think it... "can't cope with it"?
Say whatever you want but I base everything I say on things I have actually seen and experienced and not just a flood of jargon like you throw out.
I can plainly see that you haven't.
You get lost in bus mhz etc. like a wintel guy and seem to completely forget how RISC architectures get things done.
PowerPC ceased to be RISC in any pure sense a long time ago... and the CISC vs. RISC argument was a bit silly to begin with. You do realize extra pipeline stages were added to that G4+ you've got in there in order to ramp up the clock speed, right? RISC was never really inherently "superior" to CISC in the first place. *That's* jargon.
If everything could be run out of cache, then sure, memory bandwidth wouldn't really matter. But that's not how things work.