Mac G4s are no supercomputers - official

Discussion in 'General Mac Discussion' started by sr, Mar 11, 2002.

  1. sr macrumors newbie

    Sep 21, 2001
    Below is copied from MacUser: something to get the Mac cultists' knickers in a twist.

    An independent computer performance tester has concluded that Apple's Power Mac 'is no supercomputer', contrary to claims that Apple has consistently made.
    Using a series of tests - designed to be as platform independent as possible - the Standard Performance Evaluation Corporation (SPEC) concluded that compared to a 1GHz Pentium processor, the Motorola Power PC 1GHz processor currently employed in Apple's top Power Mac, 'is far less suited for scientific applications', despite the fact that the Power PC FPU, 'with its 32 registers ought to have been superior to the x86 [Intel Pentium] FPU with its antiquated stack structure and eight registers only.'
  2. CHess macrumors regular

    Dec 13, 2001
    San Francisco Bay Area
    Interesting comment. Who is SPEC and where do they publish their data?
  3. sr thread starter macrumors newbie

    Sep 21, 2001
  4. AmbitiousLemon Moderator emeritus


    Nov 28, 2001
    down in Fraggle Rock
    Can anyone say flame bait?

    This was already posted once, and many people pointed out the problems with the test. I'm not sure why you felt you needed to post it again.

    And what's with this "official" thing? What makes the G4 a "supercomputer" is that it performs a gigaflop: a billion floating point operations per second. That is what the US government defines a supercomputer as. It's clearly a dated definition, but as far as "official" goes, that's what makes the G4 a supercomputer. Perhaps the threshold needs to be raised to a teraflop, but until that happens, any computer that can perform a gigaflop is considered a supercomputer.
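    If it helps, here's roughly what "performing a gigaflop" means in code. Below is a toy C sketch (my own, nothing official; real gigaflop ratings are quoted from peak theoretical throughput, not from a timed loop like this) that times a run of multiply-adds and reports the rate:

    ```c
    /* Toy FLOPS estimator -- a rough sketch, not a rigorous benchmark.
       Times a loop of single-precision multiply-adds, then divides the
       operation count by the elapsed time. */
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        const long n = 100000000L;      /* 100 million iterations */
        float a = 0.999999f, b = 0.5f;
        long i;
        double secs;
        clock_t start = clock();
        for (i = 0; i < n; i++)
            b = a * b + 0.000001f;      /* one multiply + one add = 2 flops */
        secs = (double)(clock() - start) / CLOCKS_PER_SEC;
        /* printing b keeps the compiler from optimizing the loop away */
        printf("b=%f  ~%.2f gigaflops\n", b, (2.0 * n / secs) / 1e9);
        return 0;
    }
    ```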
  5. sr thread starter macrumors newbie

    Sep 21, 2001
    If you had read the article, you'd have seen that the definition issue was dealt with in it. I didn't know that this had been posted. But it is a new article on MacUser, hence I posted.
  6. Pants macrumors regular

    Aug 21, 2001
    "this was already posted once and many people pointed out the problems with the test im not sure why you felt you needed to post it again."

    irrespective of percieved 'problems' with this test, it is extremely bad publicity -

    for all the thoughts leveled at this test for not using the right apple compilers, equal arguements could be made for the intel test (its a p3 or crying out loud, not even a p4). Apple have been making a *big* play for scientific researchers, and yet this kind of benchmark is what we suspected all along. I dont see this as flame bait, but another bit of apple hype.
  7. OSeXy! macrumors regular

    Jan 17, 2002
    London (or virtually here)
    Depressing as that piece is, it becomes interesting when it talks about how slow OS X is. Interesting because it implies that there is a lot of scope for improving it, if Apple can get enough clever engineers working on it.

    It also seems clear that the tests did not enable AltiVec or the dual-processor capability. Still, the results are surprising. Anyway, even if Moto doesn't give us faster chips soon, at least Apple could give us a faster OS! It is in full control there.
  8. Taft macrumors 65816


    Jan 31, 2002
    Two other threads.

    As AmbitiousLemon said, this has been posted to the site already...twice!

    And talk about a sensationalistic headline! No supercomputer. This article IS generating bad publicity and much of it is *unfounded*.

    Double-precision FP is the one area where the G4 doesn't do as well. This has been written about all over the web already. In the other areas the G4 holds its own.

    And, though I don't want to beat a dead horse (look at the other threads), this test can be put under some suspicion. SPEC is new on OS X. Compilers are also an issue that the testers fail to appropriately address (and couldn't -- the c't article makes some reference to these problems). In short, the bench is fraught with potential pitfalls.

    Overall, I *do* think Apple needs speed boosts in their processors. However, the bad press contained in these articles is far out of proportion to the problem itself. The Register even severely attacks OS X for having too much legacy code that can't be optimized properly.

    The Register blames OS X's "speed problems" on this supposed fact. The fact is, OS X is *apparently* slow because of Aqua and the Quartz engine. It is interface lag, pure and simple. I rarely get the spinning cursor of death, and latencies have not been a problem for me, even doing graphics stuff (though I haven't tested audio to any extent).

    The bottom line is that I find OS X much faster for everyday tasks than OS 9. The only exceptions to this are interface related and I see those tasks getting better with each subsequent release. Copying files, playing media files, multitasking...they are all much better. And while OS X doesn't beat LinuxPPC, it doesn't appear to lag far behind in speed.

    These allegations against OS X are the first of their kind I've encountered from the "mainstream" press, and I believe them to be without much merit -- or at the very least without proof.

  9. OSeXy! macrumors regular

    Jan 17, 2002
    London (or virtually here)
    That's interesting, Matthew. I'm not a programmer, so I can't comment. But I believe you in good faith - your posts seem well informed and balanced.

    So you think the core of the OS is running smoothly and that the bottlenecks and potential optimisations lie in the added layers (Quartz, Aqua, et al.). Why do you think The Register points to the core of the OS, then? Maybe they know there is also quite a lot of work to do there?

    What is Apple's best way forward? Where should Apple be concentrating its efforts to improve the operation of the OS? More and more people seem to be getting frustrated with the progress on this front.

    We Mac types tend to be pretty tolerant. But potential converts to the platform will be put off by the 'interface lags' you mention -- even if they can complete their main tasks more quickly than on (almost) any other platform. Perception of something's speed is often based more on banal, minor, repeated tasks, such as opening, shifting and re-sizing windows... No matter how fast something 'actually' is, if it 'feels slow' that will make as much of an impression.
  10. Taft macrumors 65816


    Jan 31, 2002
    Their source seems to be an ex-Apple employee. And I won't claim the statements are completely baseless. But we have to look at the facts.

    First, not all of the API sets are NeXT-based. This means that even if the NeXT code were so legacy-ridden and sloppy as to make it hopelessly slow, Apple had a chance to recover some of the speed loss with its own code. This goes all the way down to the kernel level, where *extensive* work has been done, and if memory serves me, the kernel may not even be the same as the kernel used in NeXT.

    Second, speed comparisons between Mac OS X and Classic/LinuxPPC are very revealing. While I don't have a link to exact numbers (I'll try to find some and post them when I get home from work), Mac OS X performs better than OS 9 in some areas, and worse in others. Take rendering a web page. Anecdotal (and observed) evidence suggests that OS X web browsers don't render any slower than in OS 9. Now try scrolling through that page or resizing the window. Slower right? I've also done some small amount of testing with iTunes and mpg123 (alex_ant did more extensive testing). These small (and less than expansive, I'll admit) tests showed that while OS X utilized more CPU/resources than OS 9/Linux it wasn't (in my opinion) an unreasonable amount more.

    These facts point to one conclusion: at lower levels OS X performs very well (maybe even better than OS 9 and not too shabby compared to Linux), while on higher levels it does not perform as well.

    Thus, SPEC results, which I can only assume run at *lower* levels (there is absolutely no way they have been ported to the Foundation APIs under OS X), would not be affected by this slowdown at higher levels. Also, I have seen no mention of SPEC test comparisons under OS 9, which would be interesting (and, I think, comparable).

    Now there are a few different ways we can explain the slowness of the higher-level operations (i.e. windowing ops). Legacy/sloppy NeXT code could certainly be one of those explanations. But there is some evidence against that:

    1) Quartz. It is a very advanced graphics engine requiring much more horsepower than simpler engines with less sophisticated display abilities. Think about it... transparency available to just about every object, PDF everywhere, high levels of anti-aliasing in everything. You can tell Quartz's quality just by looking at an OS X screen. There is a Quartz implementation of an asteroids game out there that runs very slowly compared to others in its genre. I have no doubt that this is because of the vast number of abilities Quartz has.

    2) Objective-C/Java. Objective-C is a very good object-oriented language. One of its advantages is late binding (aka runtime binding), which allows decisions about objects to be made at runtime rather than at compile time (C++ also allows this, but it is not as ubiquitous in C++ as it is in Obj-C). This makes for very flexible objects and very cool coding techniques. The problem is that you sacrifice some speed for that flexibility (see the sketch after this list). Also, in many circumstances that flexibility is just not needed but gets used anyway due to sloppy/lazy programming. I'm sure this kind of situation is present in OS X code, so it probably needs to be cleaned up to some extent, but the use of the language itself comes at some speed sacrifice.

    Java has similar speed problems in terms of its byte-code limitations (though as far as I know, no OS X code that the user interacts with is written in Java).

    3) Compilers. Finally, optimizing the executables created from Obj-C code is probably not an easy task (this may in fact be what the ex-Apple employee was referring to with "serialization of objects"). The language's extensive use of objects (it's based on Smalltalk, for God's sake) and late binding presents some interesting problems for compiling effectively. Also, compilers probably don't take advantage of AltiVec without code modifications.
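    To make the late-binding cost in item 2 concrete, here's a toy C sketch (my own illustration; the real Objective-C runtime is far more elaborate and caches its lookups) of the difference between a call bound at compile time and a lookup made at runtime:

    ```c
    /* Why late binding costs cycles, in plain C. An Obj-C message send is
       (very roughly) a runtime lookup like msg_send below, where a direct
       C call is a fixed address bound at compile time. This mimics the
       idea only; the real Objective-C runtime is far more elaborate. */
    #include <stdio.h>
    #include <string.h>

    typedef void (*IMP)(void);      /* borrowed name; just a function ptr */

    static void draw(void)   { puts("drawing"); }
    static void resize(void) { puts("resizing"); }

    /* a toy "method table" searched at runtime */
    static struct { const char *sel; IMP imp; } methods[] = {
        { "draw", draw }, { "resize", resize },
    };

    static void msg_send(const char *sel) {
        size_t i;
        for (i = 0; i < sizeof methods / sizeof methods[0]; i++)
            if (strcmp(methods[i].sel, sel) == 0) { methods[i].imp(); return; }
    }

    int main(void) {
        draw();             /* early binding: one jump to a known address */
        msg_send("draw");   /* late binding: search, compare, then jump */
        return 0;
    }
    ```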

    In summary, I think the bench that this thread references reflects solely on the PowerPC processor. OS X itself probably contributed very little to the poor floating point scores of the G4. Go do a search on Google for G4 bench tests and technical reviews. You'll find it doesn't do well in double-precision FP calcs mainly because the processor wasn't designed to handle them well. It is one of the G4's weak points.

    Don't blame OS X for this. And don't think that these results make the G4 an irrelevant processor. It's just one area of the chip, and one that won't affect most users in the least.

  11. gbojim macrumors 6502

    Jan 30, 2002
    Excellent points by mrtrumbe and all true.

    One thing I find ridiculous about this whole thing is that the testers said they wanted to keep everything even and therefore did not employ AltiVec -- so really, all they tested was the floating point unit on the PPC. That does not make a fair comparison of capability, nor does it mean systems using PPC chips are not supercomputers. Motorola did not create AltiVec just to make the chip bigger and hotter. AltiVec is what makes the PPC very fast.

    One big problem is that you cannot optimize code to take advantage of AltiVec in the same way on Intel, because the Pentium architecture is designed to accelerate multimedia work only -- not anything you want that can work with vectors. So the benchmarking solution is to not optimize anything. OK. But all that means is that a crippled PPC is slower than a PIII (see the sketch below for what vectorizing actually looks like). I certainly don't see Apple, Cisco, Ford or anyone else refusing to utilize AltiVec because it gives Motorola an unfair advantage over competing Intel chips. Cisco is using the 8540 because of AltiVec.
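    To show what "optimizing for AltiVec" actually involves at the source level, here's a hedged sketch. The vec_ld/vec_st/vec_madd calls are the standard <altivec.h> intrinsics (gcc -maltivec, or -faltivec on Apple's compiler); the kernel itself is just an illustration, with arrays assumed 16-byte aligned and n a multiple of four:

    ```c
    /* Scalar vs. AltiVec versions of the same loop -- a sketch, not
       production code. */
    #include <altivec.h>

    /* scalar version: one float at a time through the FPU */
    void scale_add_scalar(float *dst, const float *src, float k, int n) {
        int i;
        for (i = 0; i < n; i++)
            dst[i] = src[i] * k + dst[i];
    }

    /* AltiVec version: four floats per trip, using fused multiply-add */
    void scale_add_vector(float *dst, const float *src, float k, int n) {
        float ks[4] __attribute__((aligned(16))) = { k, k, k, k };
        vector float vk = vec_ld(0, ks);
        int i;
        for (i = 0; i < n; i += 4) {
            vector float s = vec_ld(0, &src[i]);
            vector float d = vec_ld(0, &dst[i]);
            vec_st(vec_madd(s, vk, d), 0, &dst[i]);
        }
    }
    ```

    There is simply no way a benchmark compiled without modifications like these will ever exercise the vector unit.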

    The thing I find really funny is that the early "supercomputers," like the Crays, achieved that status for one reason only: the on-board vector processing functionality, which was basically an early version of AltiVec.
  12. evildead macrumors 65816


    Jun 18, 2001
    WestCost, USA
  13. mischief macrumors 68030


    Aug 1, 2001
    Santa Cruz Ca
    Isn't that a bit like comparing a GMC to a Ford, but because the GMC uses some cool new valve tech, we'll just hook the starter motor to the transmission?

    Gee. The starter motor couldn't move the truck!:eek: :confused: :p :rolleyes: ;)
  14. King Cobra macrumors 603

    Mar 2, 2002
    It is not as if an affordable supercomputer PC is around the corner...

    Right now, even the slowest G4, running at 350MHz, is a supercomputer. It can process over a billion floating point operations in one second and costs less than $1000. The last time I checked, the Pentium 4 at 2000MHz could not even process 0.3 gigaflops. (I am afraid I have no reference for this right now, but I did see it somewhere.) When this Pentium came out it cost... around $2000, maybe?
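    The rough arithmetic behind that claim (my own back-of-the-envelope, not an Apple figure): AltiVec works on four single-precision floats per instruction, and its fused multiply-add counts as two operations per element. So even at 350MHz, 350 million cycles x 4 floats x 2 ops = 2.8 billion flops theoretical peak; even without the multiply-add trick, 4 x 350 million = 1.4 gigaflops clears the one-gigaflop bar. Sustained rates are far lower, of course, but that headroom is what the "supercomputer" label rests on.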

    My point is that the article gives a bad name to Apple and the G4 computers, in contrast to PCs, which have failed to produce an affordable supercomputer.
  15. Rower_CPU Moderator emeritus


    Oct 5, 2001
    San Diego, CA
    SPEC benchmark slanted to favor Intel chips - official

    Here's an article from Appleturns:

    Sigh, we hate it when the rest of the industry can't stay in character. Look, people, we're the soap opera, right? So when an allegedly technically astute German computer magazine starts publicly lambasting Apple for calling the Power Mac G4 a "supercomputer" because its testing using the industry standard SPEC benchmarks reveals that, for floating point operations, a 1 GHz G4 processor is a total dog on quaaludes compared to even a relatively ancient 1 GHz Pentium III, our role is to bask in the drama, wring our hands, and milk it for all it's worth and then some. Meanwhile, it's up to the "real" technical press to notice and report that said German magazine's methodology had more holes in it than a colander made out of Swiss cheese. In this manner, you, the viewer, would get both an entertaining dose of hysterical melodrama and a serious technical grounding that would let you enjoy our hysterics while realizing you'd have no reason to run screaming into traffic. See? It's all about demarcation.

    Unfortunately, as faithful viewer Timothy Ritchey pointed out, it looks like everyone else out there forgot their lines. Even The Register, normally way better about this sort of thing, surprised us by running a headline of "Benchmarks demolish Apple speed boasts"-- sheesh. So now, much as we hate to do it, we're forced to backtrack and mention a couple of obvious reasons why c't's benchmarking results aren't really a reason to start drafting a suicide note.

    First and foremost, c't itself admitted that the G4 should have mopped the floor with "the x86 FPU with its antiquated stack structure and eight registers only"-- so why, when the G4 was shown to be half as fast as the Pentium III, did the magazine just say "gee, we guess the G4's no supercomputer" and then saunter away, hands in pockets, whistling a jolly tune? Doesn't anyone think it's strange that they failed to mention that the SPEC2000 test, as compiled, utterly ignores the G4's Velocity Engine registers, which are what gives that chip its supercomputer-class, greater-than-gigaflop floating point performance? What c't did is tantamount to forcing you to write with your toes and then telling you that your handwriting sucks.

    What's more, while the industry just loves SPEC benchmarks, faithful viewer Mark Davis reminds us that they've always been biased towards Intel processors, in part because the SPEC code just floods the chip with a constant stream of perfect instructions and lets it work at peak efficiency, which is nothing like how real software is processed. As you may recall from Jon Rubinstein's "Megahertz Myth" spiel, Intel's recent chips take a speed hit from the recurring need to clear and refill those extra-long pipelines due to incorrect predictive branching-- it's that whole "pipeline tax" thing. With the SPEC test, there are no data dependency bubbles, and therefore no pipeline tax, so Intel's chips perform better than they would in actual battle conditions.

    Apple itself obliquely refers to this problem on its G4 page: "Another aspect of speculative operation worth noting is that it is possible to create (for testing purposes) a contrived set of instructions that can make the processor guess correctly much more often than it would under real-world conditions. Thus a 'benchmark' with no relation to actual performance can be crafted to cleverly avoid the bubble problem and thus indicate unrealistically high performance." No one's mentioned by name, but all signs point to SPEC-- which was never meant to test real-world performance.

    Good enough, people? So don't panic, and just remember that it's totally ludicrous to think that a 1 GHz G4 would perform only half as well as a 1 GHz Pentium III at real-world tasks. Chip-level benchmarks like SPEC mean nothing when it comes to getting your work done; for cryin' out Pete's sake, c't even disabled one of the G4 processors in that dual rig for the sake of measuring the performance of a single chip, but you're not going to turn one off to run Photoshop, right? Right. Now, back to the drama; we just hope that the other sites can remember their roles next time, because we're supposed to be rattling on about black turtlenecks and Reality Distortion Fields, consarn it, not analyzing benchmark data. Seriously, if we ever have to say "incorrect predictive branching" in an episode again, we're gonna have to bust some heads.

    So there!
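    The "pipeline tax" bit is a real, demonstrable effect, by the way. Here's a small C program (my own toy demo, nothing to do with SPEC) that times the same branchy loop twice: over random data, where the branch predictor guesses wrong about half the time, and over sorted data, where it almost always guesses right. Exact ratios vary by chip and compiler flags:

    ```c
    /* Demo of branch misprediction cost: identical work, different
       branch predictability. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    static int cmp(const void *a, const void *b) {
        return *(const int *)a - *(const int *)b;
    }

    static double time_sum(const int *data, int n) {
        long sum = 0;
        int pass, i;
        double secs;
        clock_t start = clock();
        for (pass = 0; pass < 200; pass++)
            for (i = 0; i < n; i++)
                if (data[i] >= 128)   /* hard to predict on random data */
                    sum += data[i];
        secs = (double)(clock() - start) / CLOCKS_PER_SEC;
        printf("sum=%ld  %.3f seconds\n", sum, secs);
        return secs;
    }

    int main(void) {
        enum { N = 1 << 16 };
        int *data = malloc(N * sizeof *data);
        int i;
        for (i = 0; i < N; i++) data[i] = rand() % 256;
        time_sum(data, N);    /* random order: many mispredicted branches */
        qsort(data, N, sizeof *data, cmp);
        time_sum(data, N);    /* sorted: branches become predictable */
        free(data);
        return 0;
    }
    ```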
  16. Onyxx macrumors regular

    May 5, 2001
    Aqua and Quartz

    The comments about OS X's layers causing the bottlenecks are very true. If you want proof, install XFree86 and some sort of window manager and run those in place of Quartz and Aqua. Wow, doesn't that puppy haul when it comes to the interface! But it's still the same when running non-interface stuff. Compiling on the Darwin kernel isn't all that fast, but it's not extremely slow. (All of this is based on my own tests on a G3 400 PowerBook running GNU-Darwin with XFree86 and a base version of GNOME.)

    If Apple revamps the Aqua code (does anyone know what it's written in? Cocoa, or just a Carbon app?) to make it more efficient, I would be much happier with OS X. The Quartz rendering engine is a layer that shows much promise but is too advanced for current hardware. If that layer were optimized, or perhaps cut down a notch as far as features go, perhaps I wouldn't see CPU usage spikes from 0% to 37% just for moving a window.
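    Those spikes make sense when you look at what a compositing engine has to do per pixel. Here's the textbook "over" blend in C (a generic sketch, not Apple's actual Quartz code); multiply it by every pixel of a translucent window, every frame it moves:

    ```c
    /* The classic "over" compositing step a Quartz-style engine performs
       per pixel: out = src*alpha + dst*(1 - alpha), once per channel. */
    typedef struct { unsigned char r, g, b; } Pixel;

    Pixel blend_over(Pixel src, Pixel dst, float alpha) {
        Pixel out;
        out.r = (unsigned char)(src.r * alpha + dst.r * (1.0f - alpha));
        out.g = (unsigned char)(src.g * alpha + dst.g * (1.0f - alpha));
        out.b = (unsigned char)(src.b * alpha + dst.b * (1.0f - alpha));
        return out;
    }
    /* A 1024x768 screen is ~786,000 of these per full redraw; a classic
       opaque blitter just copies the bytes instead. */
    ```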

    We also have to remember to take a step back sometimes and realize that OS X is just in its fledgling stage, with signs of enormous potential. What other consumer OS out there has a Unix core, full symmetric multiprocessing support, etc.? (Not including Linux; Linux still has a long way to go before it can be considered user friendly.) Give it time.
  17. maclamb macrumors 6502


    Jan 28, 2002
    Northern California
    I must agree that the OS X interface feels sluggish:
    Ti 667, G4/400, iBook 500
    No doubt that my Dell UI under Win2k is more responsive.
    I don't do video/sound, so "business user" is how I use my machine.

    So, why do I use a Mac?
    1st (and this has ALWAYS been the primary reason, since 1988 when I first used an Apple II) - The Look
    There is something psychologically and emotionally more pleasing about looking at a Mac (9 or 10, but 10 is way better than 9). I feel better looking at it, and I feel better using it.
    2nd - Its Unix core
    3rd - Ease of use/setup/maintenance - I'm pretty Windoze-smart, so really, config is not a big thing for me, but the Mac is easier, no doubt
    4th - Well, the Mac is cool

    So, flame away...

    BTW - I just finished installing XP Pro into VPC 5.02 on my Ti 667. Not fast, but usable enough, which I think says a lot for this thread's point that the CPU is fast and Aqua/Quartz is slow.
    I assume that with a blazing CPU and tuned OS code (can we say G5/10.3? 4? 5? 11.1?) this will be a system that is totally amazing...
    Until then I'm looking at lots of potential, but side by side my PC does respond faster (UI only; yes, I know I am in the scrolling minority, but not all of us do 3D animation...)

  18. buffsldr macrumors 6502a


    May 7, 2001
    Yo, tech mag freaks... "supercomputer" is a marketing ploy to sell comps. And since Apple already has us in its court, the Germans could reveal that Apples weren't computers after all and we'd still buy them.
  19. Gelfin macrumors 68020


    Sep 18, 2001
    Denver, CO
    While I'm thinking about it, it may amuse you to note that at one time SPEC had some degree of notoriety in benchmarking circles, because it was common to use in-house compilers for the testing, and a popular approach to improving one's SPEC scores was to write targeted solutions manually in assembly code, then write your compiler such that it recognized when one of the SPEC modules was fed to it and just brainlessly dumped the hand-tuned code out to disk.

    I don't want to name names, since I'm not a hundred percent sure, but I believe one of the primary adherents of this approach "back in the day" was a company whose name rhymes with "Blintel."

    I'm pretty sure this isn't the case anymore, at least not like it was. But I have no doubt that the compiler still does some selective targeted optimization.
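    For the curious, the trick would look something like this in spirit -- a purely hypothetical C sketch (invented names and hash value throughout; no real compiler's code): fingerprint the incoming source, and on a match with a known SPEC kernel, skip compilation and dump the canned hand-tuned output instead:

    ```c
    /* Hypothetical sketch of the benchmark-recognition trick described
       above. Everything here is made up for illustration. */
    #include <stdio.h>

    /* toy fingerprint: a checksum of the source text (a real offender
       might match function names or code structure instead) */
    static unsigned long checksum(const char *src) {
        unsigned long h = 5381;
        while (*src) h = h * 33 + (unsigned char)*src++;
        return h;
    }

    #define KNOWN_SPEC_KERNEL_HASH 0xDEADBEEFUL   /* made-up value */

    int compile(const char *src, const char *out) {
        if (checksum(src) == KNOWN_SPEC_KERNEL_HASH) {
            /* "brainlessly dump the hand-tuned code out to disk" */
            printf("recognized benchmark kernel; writing canned %s\n", out);
            return 0;
        }
        printf("compiling %s normally...\n", out);
        return 1;
    }

    int main(void) {
        compile("for (i = 0; i < n; i++) a[i] = b[i] * c[i];", "kernel.o");
        return 0;
    }
    ```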
