Yet, despite running at only 1.6GHz, the Pentium M performs just as well as the 2.2GHz Pentium 4.
Originally posted by Mudbug
So a big question I have is: if they are the same speed, why have the 2.2 GHz version at all, instead of using the 1.6 GHz version for the same output with less power consumption, and less heat as well, I would assume?
Is it just because 2.2 GHz sounds faster when spoken aloud?
Clearly you know nothing about benchmarking and how it is abused. The Intel world has for years been plagued with benchmark abuse. A common cheat is to write a compiler that recognizes specific benchmark routines and programs. As these benchmarks have known results, the results are coded into the compiler. This type of compiler relieves the processor of most of the work required to compute the benchmark's results. By using a third-party compiler, the processor is forced to execute the benchmark program rather than look up its results.
Originally posted by MorganX
....
This is just absolutely ridiculous and stupid. Benchmarking a CPU using code not optimized for it will not measure anything except how slowly a CPU can run software not written for it. This is utter stupidity.
Comparative benchmarks should be run using the best available compiler for each CPU. This will give the best indication of real world performance as each platform will run software optimized for it.
....
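The compiler cheat MisterMe describes can be sketched in miniature: a "compiler" that fingerprints a known benchmark kernel and substitutes its precomputed answer instead of doing the work. This is a hypothetical illustration (all names invented), not any real vendor's compiler:

```python
# Hypothetical sketch of a benchmark-recognizing "optimization" pass.
# A real cheating compiler would match on the compiled form of a known
# SPEC kernel; here we simply fingerprint the source text.
import hashlib

KNOWN_KERNELS = {}  # fingerprint -> precomputed answer

def register_kernel(source: str, answer):
    KNOWN_KERNELS[hashlib.sha256(source.encode()).hexdigest()] = answer

def compile_and_run(source: str, fallback):
    """If the source matches a known benchmark, return the canned answer
    instead of computing it; otherwise honestly execute the program."""
    key = hashlib.sha256(source.encode()).hexdigest()
    if key in KNOWN_KERNELS:
        return KNOWN_KERNELS[key]   # the cheat: no computation happens
    return fallback(source)         # honest path: actually run the code

# The "benchmark": sum of squares below one million.
bench_src = "sum(i*i for i in range(1_000_000))"
register_kernel(bench_src, 333332833333500000)

result = compile_and_run(bench_src, lambda s: eval(s))
```

Switching to a third-party compiler defeats the trick precisely because the fingerprint table lives in the vendor's compiler, not in the benchmark.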
Initial Power Mac G5 Configuration for all SPEC CPU2000 Testing
The following items were initially performed on the Apple Power Mac G5 system before starting the testing. The configuration described below was used for all SPEC CPU2000 testing.
Installed BootROM version 5.0.0b5
Installed Mac OS X version 10.2.7 build 6S43
Installed the Tachyon development environment version 6K452. This provides the appropriate development tools for generating the SPEC binaries and installs Apple's version of the GCC compiler (version 3.3, build 1379) on the test system.
Installed the NAGWare Fortran 95 compiler 4.2(500). This is required to build the SPEC binary files for the SPECfp_base and SPECfp_rate_base testing.
Installed the Computer Hardware Understanding Development (CHUD) kit version 3.0.0b19. This tool is designed to simplify performance studies of PowerPC Macintosh systems running Mac OS X by providing a set of tools for developers to analyze their applications. CHUD will be available for download after June 23, 2003 at http://developer.apple.com/tools/performance.
Using the Reggie tool available in CHUD, modified CPU registers to enable memory Read Bypass. Read requests are speculatively sent to the memory controller, eliminating the need to wait for the snoop response required in a multiprocessor configuration and thus reducing the time required for a read request.
Used the command hwprefetch -8 to enable the maximum of eight hardware pre-fetch streams and disable software-based pre-fetching.
Installed a high performance, single threaded malloc library. This library implementation is geared for speed rather than memory efficiency and is single-threaded which makes it unsuitable for many uses. Special provisions are made for very small allocations (less than 4 bytes). This library is accessed through use of the lstmalloc flag during program linking.
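A speed-over-space, single-threaded allocator of the kind described in that last item can be sketched as a bump allocator: allocation is a pointer increment and free() does nothing. This is a hypothetical illustration of the trade-off, not Apple's actual library:

```python
# Hypothetical sketch of a speed-first, single-threaded allocator:
# a bump allocator that never reuses freed memory. Fast because malloc
# is a pointer increment; memory-inefficient because free() is a no-op.
class BumpAllocator:
    def __init__(self, capacity: int):
        self.buffer = bytearray(capacity)
        self.offset = 0

    def malloc(self, size: int) -> int:
        size = (size + 15) & ~15   # round up to 16-byte alignment
        if self.offset + size > len(self.buffer):
            raise MemoryError("arena exhausted")
        addr = self.offset
        self.offset += size        # the entire "algorithm"
        return addr

    def free(self, addr: int) -> None:
        pass  # never reclaimed: speed over memory efficiency

heap = BumpAllocator(1 << 16)
a = heap.malloc(24)   # rounds up to a 32-byte slot
b = heap.malloc(4)    # even tiny allocations cost a full aligned slot
```

No locking and no free-list bookkeeping is what makes such a library unsuitable for general multithreaded use, exactly as the test notes caution.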
Originally posted by Cubeboy
"The x86 CPUs on the other hand have very high power consumption due to the old, inefficient architecture as well as all the techniques used to raise the performance and clock speed."
"PowerPCs seem to have no difficulty reaching 1GHz without compromising their performance or generating much heat - how? "
Is that so? Aren't you forgetting about the Pentium M? Right now it offers superior performance to mobile G4s and G3s while consuming equal or less power than either, and producing less heat, and we're talking about the top-of-the-line models, not the LV models.
"However the x86 floating point unit is notoriously weak and SSE is now used for floating point operations."
I suppose you didn't like talking about the triple FPU of the Athlon/Opteron, which beats the crap out of the G4's pathetic single FPU and is probably equal to or better than the G5's double FPU.
"The difference in power consumption is greater than 10X for a 1GHz G4 (7447) compared with the 3GHz Pentium 4."
First of all, compare the top of the line for BOTH the P4 and G4. Secondly, the 10X is wrong (don't tell me, you got it by adding 30W to the P4's TDP); TDP is the maximum power dissipated while running anything that does useful work, and Intel's TDP is the equivalent of other manufacturers' max power. Lastly, the 1 GHz G4 dissipates 22W, not 10W. Higher-clocked G4s will no doubt dissipate even more heat. Link below:
http://e-www.motorola.com/files/32bit/doc/data_sheet/MPC7457EC.pdf
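The arithmetic behind the "10X" dispute is easy to check. Taking the 22 W figure cited above for the 1 GHz 7457 and a roughly 82 W TDP for a 3 GHz Pentium 4 (the P4 number is an assumption for illustration, not from a datasheet in this thread), the gap comes out well under 10X:

```python
# Rough power-ratio sanity check. The G4 figure is the one cited in the
# post; the Pentium 4 figure is an illustrative assumption.
g4_power_w = 22.0   # 1 GHz MPC7457, per the datasheet linked above
p4_power_w = 82.0   # ~3 GHz Pentium 4 TDP, assumed for illustration

ratio = p4_power_w / g4_power_w
print(f"P4/G4 power ratio: ~{ratio:.1f}x")   # ~3.7x, nowhere near 10x
```

The 10X claim only works if you compare the hottest P4 against the 10 W figure for an unreleased low-power G4, which is exactly the cherry-picking being objected to here.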
I'm not even going to begin on SPEC, I've already repeated myself enough.
Originally posted by MisterMe
Clearly you know nothing about benchmarking and how it is abused. The Intel world has for years been plagued with benchmark abuse. A common cheat is to write a compiler that recognizes specific benchmark routines and programs. As these benchmarks have known results, the results are coded into the compiler. This type of compiler relieves the processor of most of the work required to compute the benchmark's results. By using a third-party compiler, the processor is forced to execute the benchmark program rather than look up its results.
Originally posted by Mr. MacPhisto
Frankly, I'll be surprised if Intel can get their chips past 4GHz without sacrificing performance too much.
Originally posted by ouketii
I found this site on my own a few days ago and read the whole thing; very interesting if you want to know about the core differences between x86 and PPC. Gotta say, PPC seems like the winner. And with Linux presumably getting more attention, people will no longer be forced to use Intel, as it runs on PPC also.
Originally posted by ddtlm
I read that article and found it poorly argued. Sounds like the guy just went to Arstechnica and read everything, but would be unable to elaborate on anything he argued. A few highlights of his argument that I found very bothersome:
1) Comparing the heat output of a G4 to a P4 while comparing the performance of a G5 to a P4. The 10x heat-output difference is just stupid; I'd give him 2x maybe, but we know Intel can do better, as the Pentium M has shown.
2) Selecting a best-case G4 to compare to a worst-case P4, and then assuming that Intel's figures were even worse than they claim.
3) Ignoring AMD's offerings when it was time to talk about "short and fat". The Athlon-family is shorter and fatter than even the G5.
4) Hand-wavy dismissal of the Pentium M rather than dealing with it as an efficient, fast x86 chip.
5) Mythic-scale Alpha worship.
The author seems to have started with the conclusion and made an article to support it. It is typical partisan propaganda. People predicting the demise of x86 are every bit as ignorant as those predicting the demise of Apple. (Edit: Oh, I see ewinemiller beat me to making that claim by 15 minutes!)
"Intel has clearly specified that they intend to push Itanium and attempt a transition to it in desktop form."
It is not known if even Intel can kill their x86 creation. People want stuff that works, not some whizbang new Itanium. Classic anti-Apple complaints like "there is no games" will also apply to desktop Itaniums.
"AMD is really the only manufacturer offering a future for x86 and their chips run hotter than Intel's."
Why focus on what they make today? The demise of x86 is of course going to happen sometime; heck, the Sun will turn into a red giant sometime, but x86 will be alive and kicking well after AMD has had plenty of time to make something better than the Athlon.
"As for the Pentium M, he DID address it. Why is it efficient? CPU Cache."
He didn't address it! He waved his hands at it and said "you scare me". Blaming everything on the 1MB L2 is ridiculous... just look at how far that got AMD's Opteron. (Actually, the Opteron is made on a power-saving process compared to the Athlon, too.) The Pentium M is far more significant than most PPC folks are comfortable with. It is a chip that performs something like an equal-clocked Athlon and sucks power something like, dunno, a lower-clocked Pentium III. I really need to look up some hard power-dissipation figures on it to make better comparisons, but anyway it's pretty darn good.
"(and ignored because you seem to be in love with x86 - as you imply by previous posts salivating over Intel and its design)"
No need for you to put that "salivating" spin on things. I ignored your posts, as well as those of many others, because I think the claims are wrong, and I don't have time to argue about every little thing.
"the cache is what makes it more efficient"
The latest popular claim, but quite untrue. You should be able to think of lots and lots of chips with huge amounts of cache that are not even slightly efficient, such as the Power4, which of course I chose as an example because the PPC970 became more efficient than it despite having less cache and clocking higher.
"Not only this, but some of the G4s access this cache at half their clock rate."
Explain why off-die cache at half speed is less energy efficient. Conventional wisdom suggests the lower clock speed of the memory would in fact make less heat.
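The "conventional wisdom" here follows from the first-order dynamic-power relation P ≈ C·V²·f: at the same voltage, cache clocked at half speed dissipates roughly half the dynamic power. A minimal sketch, with made-up cache figures and leakage ignored:

```python
# First-order dynamic power model: P = C * V^2 * f (leakage ignored).
def dynamic_power(c_farads: float, v_volts: float, f_hz: float) -> float:
    return c_farads * v_volts**2 * f_hz

# Hypothetical cache capacitance and voltage, for illustration only.
full_speed = dynamic_power(1e-9, 1.3, 1.0e9)   # cache at core clock
half_speed = dynamic_power(1e-9, 1.3, 0.5e9)   # cache at half the core clock

print(half_speed / full_speed)   # 0.5: half the clock, half the dynamic power
```

This is exactly why half-speed cache, whatever its performance cost, is an odd thing to blame for extra heat.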
"It is a known fact that PPC processors don't need to do as much work to attain similar results."
Yes, now tie that fact back to the claims about x86 being doomed. The Pentium M is a great example of an efficient x86 chip.
Well, I guess you could claim that this is the point. I just re-read it, and it seems that his objective was to justify his advocacy of PPC chips over x86. (As a side note, he claims to have believed these things before doing any research.) But really this was never up for argument about best-case efficiency, because everyone already knows that x86 chips pay overhead for translating their instructions. The question is how heavy the overhead is, and the author did a terrible job providing and justifying an answer.
He concentrated on performance-first x86 designs like the Pentium 4 when he wanted to talk about their power use, he concentrated on the G5 when he wanted to talk about PPC performance, and he concentrated on the G4 when he wanted to talk about PPC power use. He fudged power-use figures of the P4 to suit his goals, he used a power-use figure for the unreleased 7457 (10W @ 1GHz) to suit his conclusions, he dismissed low-power x86 designs as low-performance or cache-assisted without compelling justification, and he generally packed in irrelevant architectural, historical, and benchmark FUD to cast x86 in a bad light. (Who cares if it is 40 years old? Who cares if it has to use rename registers?)
On one hand he dismissed all the benchmarks that show x86 as being fast as biased, and on the other he pretty much expected the G5 to perform better than the benchmarks available; he even went ahead and mentioned that a G4 can outperform a P4 by something like 3.5 times when running AltiVec-optimized code. That claim was of course a load of crap, because he used it to suggest that 3.5x is a possible performance gain from an auto-vectorizing compiler, rather than the truth, which is that the gain was on one specific application where the AltiVec code was written by hand. (Why not invite some x86 proponents to write some x86 assembly to show some "benchmark" where a P4 or Athlon goes as fast as can be?) This guy is biased.
"The point of this article is to set aside market differences and make this conclusion: If x86 and PowerPC shared a completely equal footing in clockrate, cache, bus speed, RAM, and hardware help then there would not be any competition as to which is better."
Like I said, he's a typical partisan spewing propaganda.
"As IBM develops the POWER5 derivative it will become clear which chip is superior."
Do you believe in IBM like it's a religion? Do you have faith in them?
"But the market will determine if the x86 stays around."
A line of wisdom that utterly refutes the "x86 is dead" nonsense, regardless of power-usage arguments.
"Several businesses are going to Linux, so it is possible that the Intel-MS hegemony may be in its final years."
That seems quite optimistic. The same thing that keeps people using arcane flavors of Unix like Irix (and worse) will keep Windows entrenched in many of the corporate places that have chosen it, and that ignores the fact that MS is by no means in decline.
"I don't think the x86 will continue to have the stranglehold when it is surpassed in performance."
A while back no one believed that x86 would ever be in a position where RISC chips wished they could surpass its performance. Heck, a while back people were already predicting the demise of x86... that's why Intel made IA64.
"And it will be, especially if AMD withdraws from the race and joins IBM, Moto, and Apple - which is a possibility."
Maybe they'll use the same fabs, but x86 is serious money compared to PPC. Huge market.
Originally posted by Mr. MacPhisto
I wouldn't have predicted its demise until now. Intel has clearly specified that they intend to push Itanium and attempt a transition to it in desktop form. AMD is really the only manufacturer offering a future for x86 and their chips run hotter than Intel's. While I acknowledge the power of the Opteron, it is still quite expensive and quite hot.
Originally posted by Mr. MacPhisto
As for the Pentium M, he DID address it. Why is it efficient? CPU Cache. As stated previously by myself (and ignored because you seem to be in love with x86 - as you imply by previous posts salivating over Intel and its design), the cache is what makes it more efficient - and slightly more expensive than higher-clocked chips. The G4 is at 180nm compared to the Pentium M's 130nm. The current G4 has 256K of L2 compared with 1024K on the Pentium M. Not only this, but some of the G4s access this cache at half their clock rate.
Originally posted by Mr. MacPhisto
So, do you think the vaunted Pentium M could compete with a G4 that was designed to have the same amount of cache, same RAM, same bus speed, etc.? What if we added the hardware assists that the x86 uses so that it can keep up (they're coming too, and that is when x86 will be obsolete)? The point of this article is to set aside market differences and make this conclusion: If x86 and PowerPC shared a completely equal footing in clockrate, cache, bus speed, RAM, and hardware help then there would not be any competition as to which is better. As IBM develops the POWER5 derivative it will become clear which chip is superior.
Originally posted by Mr. MacPhisto
But the market will determine if the x86 stays around. Several businesses are going to Linux, so it is possible that the Intel-MS hegemony may be in its final years. And yes, I could be wrong. But even if x86 doesn't die - the future appears to be a threeway split between the IBM/Moto PPC, AMD/VIA x86, and Intel's Itanium. I don't think the x86 will continue to have the strangle hold when it is surpassed in performance. And it will be, especially if AMD withdraws from the race and joins IBM, Moto, and Apple - which is a possibility.
Originally posted by Cubeboy
Intel hasn't specified anything; all plans for Itanium still have it remain strictly for servers, blades, and possibly high-end workstations (like Alpha workstations). And who's to say AMD won't pursue RISC or VLIW CPUs in the future? After all, their Athlons and Opterons are basically RISC CPUs. Current prices list an Opteron 240 at below $300, an Opteron 242 slightly above $700, and an Opteron 244 at $800, making them quite viable for high-end desktops and workstations of any kind.
Wrong. An improved branch predictor that reduced mispredicted branches by 20%, dedicated stack management, and micro-op fusion, along with the 400 MHz FSB and 1 MB L2 cache, give Centrino its efficiency. It seems you forgot to mention that the G4 has a 2 MB L3 cache; the L2 cache itself is 256K, and whether it operates at the full speed of the CPU depends on the core.
The 1.6 GHz Centrino performs like a 2.4 GHz P4 overall and like a 2.66 GHz P4 in office. Might I also mention that there is a 1.7 GHz Centrino which, assuming linear scaling, would perform like a 2.53 GHz Pentium 4 overall and a 2.8 GHz Pentium 4 in office. The G4? Let's see: a 1.25 GHz model got the crap beaten out of it by a 2 GHz Pentium 4 running Jet3d despite the G4 running optimized code, and a DUAL 1.42 GHz G4 performs anywhere from 50% to over 200% worse in games despite some games being threaded, 250% slower in Cinebench OpenGL, and significantly slower in most other apps despite having dual processors.
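The "assuming linear scaling" step is just a clock ratio. As a sanity check, using the equivalence figures claimed in the post (the poster's numbers, not independent measurements):

```python
# Linear clock-scaling estimate, as used in the post. This is an
# approximation: real performance rarely scales perfectly with clock.
base_clock, base_overall, base_office = 1.6, 2.4, 2.66  # 1.6 GHz Pentium M
new_clock = 1.7                                         # faster model

scale = new_clock / base_clock          # 1.0625
overall = base_overall * scale          # ~2.55 GHz P4-equivalent overall
office = base_office * scale            # ~2.83 GHz P4-equivalent in office
print(f"overall ~{overall:.2f}, office ~{office:.2f}")
```

The ratio gives roughly 2.55 and 2.83, close to the 2.53 and 2.8 quoted; the small difference is just rounding in the original figures.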
More BS, I see. Even with dual processors and 2 megs of L3, the fastest G4 wasn't able to keep up with the fastest P4s and Athlons. A SINGLE Centrino, on the other hand, can keep up with the Pentium 4 pretty well overall and might even surpass it in some areas like office. Do you really think a faster G4 with more L2 cache, no benefit in bandwidth (since the L3 provides 5.4(?) GB/s), and an entire processor fewer will do any better?
So far, this has proven to be nothing but BS. Why do you think x86 currently holds the performance crown? Gee, could it be because the differences between CISC and RISC have become so subtle as to be nearly indistinguishable? You do know that the current Pentium 4s and Athlons only have one external layer of CISC? How about the fact that, other than some legacy code, the Athlon is a full-fledged RISC CPU? So far you've failed to provide a shred of evidence for any of your claims.
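The "one external layer of CISC" point refers to the decode stage: modern x86 front ends crack each variable-length x86 instruction into fixed-format, RISC-like micro-ops, and everything behind the decoder executes those. A toy illustration (the x86 mnemonics are real; the micro-op format is invented, since real decoders emit an internal, undocumented format):

```python
# Toy model of an x86 decoder cracking CISC instructions into RISC-like
# micro-ops. The micro-op syntax here is made up for illustration.
def decode(insn: str) -> list[str]:
    op, _, args = insn.partition(" ")
    if op == "add" and args.startswith("["):
        # add [mem], reg -> load / add / store: one CISC op, three micro-ops
        mem, reg = args.split(", ")
        return [f"load t0, {mem}", f"add t0, t0, {reg}", f"store {mem}, t0"]
    if op == "mov":
        dst, src = args.split(", ")
        return [f"move {dst}, {src}"]   # simple ops map 1:1
    raise NotImplementedError(insn)

uops = decode("add [rax], rbx")
```

Only this thin translation layer is CISC-specific; the execution core behind it is free to look much like a RISC design, which is the point being made about the Athlon.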
Originally posted by MisterMe
Clearly you know nothing about benchmarking and how it is abused. The Intel world has for years been plagued with benchmark abuse. A common cheat is to write a compiler that recognizes specific benchmark routines and programs. As these benchmarks have known results, the results are coded into the compiler. This type of compiler relieves the processor of most of the work required to compute the benchmark's results. By using a third-party compiler, the processor is forced to execute the benchmark program rather than look up its results.
Originally posted by sturm375
This is why no-one takes Steve Jobs seriously anymore when it comes to benchmarks.
Originally posted by Cubeboy
Intel hasn't specified anything, all plans for Itanium still have it remain strictly for Servers, Blades, and possibly high end workstations (like Alpha Workstations).