The MHz figures seem rather like the 0-60 times or fuel-consumption figures quoted in the motor industry. Only Schumacher can achieve the acceleration times, and the set-up for the optimum fuel figure is probably a challenge even for Schumacher's mechanics.

Browser speed is equally slippery: Safari and Opera make instant contact with sites and then hang, while Camino seems idle and then gives you the whole screen at once. Which is faster depends on a load of factors.

Why aren't the speed tests in Photoshop and other real-world apps the figures Apple is really pushing? Real users don't need to be rocket scientists to compare how quickly an app works, and it is real users who are going to be buying computers. Make it easy, rather than pandering to the go-faster-stripe brigade who are successfully turning the new release into an ugly, no-win slanging match.

My tuppenny bit.
 
This controversy is infuriating to me. Benchmarks are extremely useful, but *only* if you possess the technological knowledge to know what they prove and (more importantly) what they don't prove. Apple/Veritest's benchmarks are just fine, as they only purport to illustrate certain things and the testing methodology is fully disclosed.

However, if you take them to mean something they don't, then your conclusion could be way off-base, even completely contradictory of the truth. If you reason, for example, "this bar is only twenty percent longer than that bar, so this machine must be only twenty percent faster than that machine," then you are absolutely dead wrong. No benchmark test can tell you that, and the truth might be the opposite. By the same token, you can criticize *any* test for the assumptions it makes. That doesn't mean all criticisms are invalid. But if the test is narrowly focused on a specific objective, then you cannot reasonably (or correctly) extrapolate to another objective.

As an example, we can fairly easily compare floating-point computation between two chips. As soon as you put those chips into computers, however, the speed difference might get bigger, smaller, or flip completely. As soon as you add an operating system, the results can change in any way; as soon as you add application programs, they can change again; and as soon as you impose measuring parameters, they can change yet again.
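To make that last point concrete, here is a minimal sketch (my own toy illustration, nothing to do with Apple's or VeriTest's actual methodology): the exact same floating-point arithmetic timed two ways, where the problem size `n` is an arbitrary "measuring parameter". Change `n` and the relative gap between the two timings shifts, even though the work is identical.

```python
# Toy illustration: identical floating-point work, two implementations,
# timed under different "measuring parameters" (the problem size n).
# The arithmetic results are always equal; the timing ratio is not stable.
import time

def flops_loop(n):
    """Sum of squares via a plain Python loop."""
    total = 0.0
    for i in range(n):
        total += float(i) * float(i)
    return total

def flops_genexpr(n):
    """Same computation via sum() over a generator expression."""
    return sum(float(i) * float(i) for i in range(n))

def time_it(fn, n):
    start = time.perf_counter()
    result = fn(n)
    return result, time.perf_counter() - start

if __name__ == "__main__":
    for n in (1_000, 1_000_000):
        r1, t1 = time_it(flops_loop, n)
        r2, t2 = time_it(flops_genexpr, n)
        assert r1 == r2  # identical arithmetic every time...
        # ...but the "which is faster, and by how much" answer varies with n
        print(f"n={n}: loop {t1:.5f}s, genexpr {t2:.5f}s")
```

The same effect, magnified, is what happens when you stack a real OS, real applications, and a chosen workload on top of a chip: the raw floating-point comparison stops predicting the outcome.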

When a company such as Apple commissions a reputable testing firm, it has some say as to the methodology. If, however, the testing firm believes that the methodology would unfairly bias the results, it will refuse to do the test. To do otherwise would damage its reputation, and reputation is *everything* to these firms.

As one poster said, the Photoshop test is still very useful, because we can assume *generally* similar real-life results using that application. Other tests are more useful to developers, as they indicate particular strengths and weaknesses in *specific* areas. These tests can also provide *very general* guidance to consumers, but nothing more specific than, say, "This machine is really fast" or "This machine is definitely faster than the G4." In the latter case, the claim holds merely because the difference was substantial. In the case of dual G5 vs. dual Pentium 4, it is not possible to *conclusively* prove that the G5 is faster, but Apple's claim, based on the indicated tests, could be viewed as *reasonable* or even *likely.*

Person X can run further than Person Y. Does that mean that Person X is better looking?

elo
 