
Fender2112

macrumors 65816
Aug 11, 2002
1,135
374
Charlotte, NC
Nice article.

I did not know the x86 architecture was that old. You have to figure that Intel and AMD will eventually have to drop x86, or at least revamp it. There will definitely be some growing pains. But that's part of growing up.
 

Mudbug

Administrator emeritus
Jun 28, 2002
3,849
1
North Central Colorado
Yet, despite the fact it is only 1.6GHz, the Pentium M performs just as well as the 2.2GHz Pentium 4.

So a big question I have is: if they perform the same, why have the 2.2 GHz version at all, and not instead use the 1.6 GHz version for the same output with less power consumption, and less heat as well, I would assume?

Is it just because 2.2 GHz sounds faster when spoken aloud?
 

themadchemist

macrumors 68030
Jan 31, 2003
2,820
0
Chi Town
I have a question... The author noted that the compiler Dell used took specific advantage of certain Intel features. It mentioned auto-vectorization, though.

Wouldn't that help both the PPC and Intel processors? Or was the Dell compiler optimized for the way that Intel does vectorization, as opposed to AltiVec?

Thanks guys.
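For anyone wondering what auto-vectorization actually operates on, here is a minimal sketch (my own illustration, not code from the article) of the kind of loop it targets. In principle the very same C source can be turned into SSE code on x86 or AltiVec code on a G4/G5; the question is whether the compiler being used actually does it.

/* Minimal illustration: a loop an auto-vectorizing compiler can turn
 * into SIMD code. Four independent float operations per iteration fit
 * one 128-bit SSE or AltiVec register; the C source is identical for
 * either target. */
#include <stddef.h>

void saxpy(float a, const float *x, float *y, size_t n)
{
    size_t i;
    for (i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];   /* y = a*x + y */
}

As far as I know, the GCC of that era does not auto-vectorize at all, while Intel's compiler does for SSE targets, which is part of why the choice of compiler skews the comparison.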
 

MorganX

macrumors 6502a
Jan 20, 2003
853
0
Midwest
Originally posted by Mudbug
So a big question I have is: if they perform the same, why have the 2.2 GHz version at all, and not instead use the 1.6 GHz version for the same output with less power consumption, and less heat as well, I would assume?

Is it just because 2.2 GHz sounds faster when spoken aloud?

The main reason for the performance difference, which no one mentions because it does not negate the MHz myth, is that the Pentium M has 1MB of on-die L2 cache. Take away the L3 from G4 Power Macs and you have a slow-aZZ iMac.

There is no secret. The smaller the manufacturing process, the more on-die cache you can add, and the cheaper it gets. That is where Intel is going across the board, and that is why x86 isn't even close to maxing out performance yet. They've been ruling the roost with minimal cache in their desktop CPUs.
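To make the cache point concrete, here is a rough, hypothetical C sketch (my numbers, not a real benchmark): the same number of memory touches slows sharply once the working set no longer fits in cache, which is exactly the gap a 1MB on-die L2 papers over.

/* Rough, hypothetical sketch of why on-die cache matters: the same
 * number of memory touches runs far slower once the buffer outgrows
 * the cache. Buffer sizes must be powers of two for the mask trick. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double touch(const int *buf, size_t n_ints, size_t total)
{
    size_t mask = n_ints - 1;          /* n_ints is a power of two */
    volatile int sink = 0;
    size_t k;
    clock_t t0 = clock();
    for (k = 0; k < total; k++)
        sink += buf[(k * 16) & mask];  /* hop one 64-byte line per access */
    (void)sink;
    return (double)(clock() - t0) / CLOCKS_PER_SEC;
}

int main(void)
{
    size_t sizes[2] = { 256 * 1024, 16 * 1024 * 1024 };  /* 256KB vs 16MB */
    size_t total = 64UL * 1024 * 1024;                   /* same work both times */
    int s;
    for (s = 0; s < 2; s++) {
        size_t n = sizes[s] / sizeof(int), i;
        int *buf = malloc(n * sizeof(int));
        if (!buf) return 1;
        for (i = 0; i < n; i++)        /* touch every page up front */
            buf[i] = (int)i;
        printf("%5lu KB buffer: %.2f s\n",
               (unsigned long)(sizes[s] / 1024), touch(buf, n, total));
        free(buf);
    }
    return 0;
}

Hardware prefetching will blunt the difference somewhat, but the basic shape holds.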
 

MorganX

macrumors 6502a
Jan 20, 2003
853
0
Midwest
>>If SPEC marks are to be a useful measure of CPU performance they should use the same compiler; an open source compiler is ideal for this, as any optimisations added for one CPU will be in the source code and can thus be added to the other CPUs also, keeping things rather more balanced.<<

This is just absolutely ridiculous and stupid. Benchmarking a CPU using code not optimized for it will not measure anything except how slowly a CPU can run software not written for it. This is utter stupidity.

Comparative benchmarks should be run using the best available compiler for each CPU. This will give the best indication of real world performance as each platform will run software optimized for it.

What good is it to use a Sh**y open source compiler when you're never going to run Sh**y open source software on a given CPU?
 

Mr. MacPhisto

macrumors 6502
Jan 16, 2003
281
0
Originally posted by Mudbug
So a big question I have is if they are the same speed, why have the 2.2 Ghz version at all, and not instead use the 1.6 Ghz version for the same output with less power consumption, and less heat as well, I would assume.

Is it just because 2.2 Ghz sounds faster when spoken aloud?

I think it is cost. The 1.6 would be slightly cheaper than the 2.2 if left alone, but the Pentium M's speed is improved by increasing the L2 cache on the chip and adding L3 (I think; I know the L2 is larger). This means the CPU is more efficient and has less idle time, but processor cache ain't cheap, so the 1.6 ends up costing more than the 2.2.
 

MisterMe

macrumors G4
Jul 17, 2002
10,709
69
USA
Originally posted by MorganX
....

This is just absolutely ridiculous and stupid. Benchmarking a CPU using code not optimized for it, will not measure anything except how slow a CPU can run software not written for it. This is utter stupidity.

Comparative benchmarks should be run using the best available compiler for each CPU. This will give the best indication of real world performance as each platform will run software optimized for it.

....
Clearly you know nothing about benchmarking and how it is abused. The Intel world has for years been plagued with benchmark abuse. A common cheat is to write a compiler that recognizes specific benchmark routines and programs. As these benchmarks have known results, the results are coded into the compiler. This type of compiler relieves the processor of most of the work required to compute the benchmark's results. By using a third-party compiler, the processor is forced to execute the benchmark program rather than look up its results.
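To make the kind of cheat MisterMe describes concrete, here is a purely hypothetical C illustration (not an accusation about any specific compiler): when a benchmark's input never changes, a compiler that recognizes the routine can simply emit the precomputed answer instead of doing the work being measured.

/* Hypothetical illustration of the cheat described above. The kernel's
 * input is fixed by the benchmark, so its result is known in advance;
 * a "benchmark-aware" compiler could substitute the hard-coded answer
 * and skip nearly all of the work the test is supposed to measure. */
#include <stdio.h>

/* What the benchmark source asks for: sum of squares 1..1000. */
static long benchmark_kernel(long n)
{
    long i, sum = 0;
    for (i = 1; i <= n; i++)
        sum += i * i;
    return sum;
}

/* What a cheating compiler could effectively emit instead. */
static long recognized_version(void)
{
    return 333833500L;   /* the known result for n = 1000 */
}

int main(void)
{
    printf("%ld %ld\n", benchmark_kernel(1000), recognized_version());
    return 0;
}

A third-party compiler that has never seen the benchmark has no such shortcut, which is the point being made.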
 

Cubeboy

macrumors regular
Mar 25, 2003
249
0
Bridgewater NJ
"The x86 CPUs on the other hand have very high power consumption due to the old, inefficient architecture as well as all the techniques used to raise the performance and clock speed."

"PowerPCs seem to have no difficulty reaching 1GHz without compromising their performance or generating much heat - how? "


Is that so? Aren't you forgetting about the Pentium M? Right now it offers superior performance to mobile G4s and G3s while consuming equal or less power than either, and generating less heat, and we're talking about the top-of-the-line models, not the LV models.

"However the x86 floating point unit is notoriously weak and SSE is now used for floating point operations."

I suppose you didn't like talking about the triple FPU of the Athlon/Opteron, which beats the crap out of the G4's pathetic single FPU, and is probably equal or better than the G5's double FPU.

"The difference in power consumption is greater than 10X for a 1GHz G4 (7447) compared with the 3GHz Pentium 4."

First of all, compare the top of the line for BOTH the P4 and G4. Secondly, the 10X is wrong (don't tell me, you got it by adding 30W to the P4's TDP); TDP is the maximum power dissipated running anything that does useful work, and Intel's TDP is the equivalent of other manufacturers' max power. Lastly, the 1 GHz G4 dissipates 22W, not 10W. Higher-clocked G4s will no doubt dissipate even more heat. Link below:

http://e-www.motorola.com/files/32bit/doc/data_sheet/MPC7457EC.pdf

I'm not even going to begin on SPEC, I've already repeated myself enough.
 

sturm375

macrumors 6502
Jan 8, 2002
428
0
Bakersfield, CA
Something I only read about in one article concerning the G5 benchmarking (if I find it again, I'll post a link to it). The article above accuses Dell of modifying the compiler to perform better on the SPEC tests. I am sure that is true. However, the following is taken from the VeriTest PDF documenting the testing of the G5:


Initial Power Mac G5 Configuration for all SPEC CPU2000 Testing
The following items were initially performed on the Apple Power Mac G5 system before starting the testing. The configuration described below was used for all SPEC CPU2000 testing.
• Installed BootROM version 5.0.0b5
• Installed Mac OS X version 10.2.7 build 6S43
• Installed the Tachyon development environment version 6K452. This provides the appropriate development tools for generating the SPEC binaries and installs Apple’s version of the GCC compiler ( version 3.3 build 1379 ) on the test system
• Install the NAGWare Fortran 95 compiler 4.2(500). This is required to build the SPEC binary files for the SPECfp_base and SPECfp_rate_base testing.
• Install the Computer Hardware Understanding Development kit ( CHUD ) version 3.0.0b19. This tool is designed to simplify performance studies of PowerPC Macintosh systems running Mac OS X by providing a set of tools for developers to analyze their applications. CHUD will be available for download after June 23, 2003 at http://developer.apple.com/tools/performance.
• Using the “Reggie” tool available from CHUD, modify CPU registers to enable memory Read By-pass. As Read requests are speculatively sent to the memory controller, this eliminates the need to wait for the snoop response required in a multiprocessor configuration thus reducing the time required for a read request.
• Used the command “hwprefetch -8” to enable the maximum of eight hardware pre-fetch streams and disable software-based pre-fetching.
• Installed a high performance, single threaded malloc library. This library implementation is geared for speed rather than memory efficiency and is single-threaded which makes it unsuitable for many uses. Special provisions are made for very small allocations (less than 4 bytes). This library is accessed through use of the –lstmalloc flag during program linking.

All of the items I boldfaced are special optimizations used in the compiler and system setup to get a better SPEC score. No such optimizations were done on either of the Dell computers. Also, Apple used OS 10.2.7, which I am sure has a few optimizations that we may or may not see when it is released to the public. Follow the links from the G5 information pages on http://www.apple.com, download the PDF from VeriTest yourself, and read.
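On the malloc point specifically, here is a rough sketch (purely illustrative, not Apple's library) of why a single-threaded, speed-first allocator like the -lstmalloc one described above can look so much faster than a general-purpose malloc: it hands out memory with a pointer bump and skips locking and per-block bookkeeping entirely.

/* Purely illustrative sketch of a single-threaded "bump" allocator,
 * the general idea behind speed-over-safety malloc replacements like
 * the -lstmalloc library mentioned in the VeriTest report (this is NOT
 * that library's code). No locks, no free-list management, no
 * thread safety: allocation is just a pointer increment. */
#include <stddef.h>
#include <stdlib.h>

#define POOL_SIZE (64 * 1024 * 1024)   /* one big 64MB arena */

static char  *pool = NULL;
static size_t used = 0;

void *fast_malloc(size_t size)
{
    void *p;
    if (pool == NULL)
        pool = malloc(POOL_SIZE);      /* grab the arena once */
    size = (size + 15) & ~(size_t)15;  /* keep 16-byte alignment */
    if (pool == NULL || used + size > POOL_SIZE)
        return NULL;
    p = pool + used;
    used += size;
    return p;
}

void fast_free(void *p)
{
    (void)p;   /* individual frees are ignored; memory is never reused */
}

That trade-off is why it's unsuitable for general use, as the report itself notes.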

This is why no-one takes Steve Jobs seriously anymore when it comes to benchmarks.
 

Mr. MacPhisto

macrumors 6502
Jan 16, 2003
281
0
Originally posted by Cubeboy
"The x86 CPUs on the other hand have very high power consumption due to the old, inefficient architecture as well as all the techniques used to raise the performance and clock speed."

"PowerPCs seem to have no difficulty reaching 1GHz without compromising their performance or generating much heat - how? "


Is that so? Aren't you forgetting about the Pentium M? Right now it offers superior performance to mobile G4s and G3s and while consuming equal or less power than either the G4 or G3 as well as less heat and we're talking about the top of the line models, not the LV models.

"However the x86 floating point unit is notoriously weak and SSE is now used for floating point operations."

I suppose you didn't like talking about the triple FPU of the Athlon/Opteron which beats the crap out of the G4s pathetic single FPU, and probably is equal or better than the G5's double FPU.

"The difference in power consumption is greater than 10X for a 1GHz G4 (7447) compared with the 3GHz Pentium 4."

First of all, compare the top of the line for BOTH the P4 & G4, secondly, the 10X is wrong (don't tell me, you got it by adding 30W to the P4's TDP), TDP is the maximum power you can dissipate running anything that does useful work. Intel's TDP is the equivalent to other manufacturers' Max power. Lastly, the 1 GHz G4 dissapates 22W, not 10W. Higher clocked G4's will no doubt dissapate even more heat. Link Below:

http://e-www.motorola.com/files/32bit/doc/data_sheet/MPC7457EC.pdf

I'm not even going to begin on SPEC, I've already repeated myself enough.

He addressed the Pentium M. The reason for the Pentium M's speed is increased cache on the chip. If a G4 had 1024KB of L2, was manufactured at 130nm, and also had an FSB equivalent to that of the Pentium, my bet would be with the G4. If Moto and/or IBM start driving up the cache and the FSB speed, then we'll see the PPC hold its own quite well against Intel. Just remember, comparing a 180nm chip to a 130nm chip for power consumption isn't a fair comparison. The article compared architecture and reached the conclusion that many have reached: PowerPC processors are more efficient processors based on newer technology, while x86 processors are jury-rigged and old, but also cheaper to manufacture.

And the 10x is not wrong. I'll wait for the 7457 to test it out myself, but I remember reading that a 7457 running at 1.5GHz dissipates less than 20W. The Pentium M can only do better because it is a Pentium III with tons of cache. Put 1024KB of L2 and 2048KB of L3 coupled with a 400MHz or faster FSB and dual-channel DDR400 RAM and I'd say the G4 would beat, or at least hold its own against, the Pentium M. As I've said all along, PPC is superior to the x86 instruction set in almost every way. With IBM working harder on the desktop versions of these chips, it's only a matter of time before they push them past the point where Intel can compete. Frankly, I'll be surprised if Intel can get their chips past 4GHz without sacrificing performance too much.
 

ewinemiller

macrumors 6502
Aug 29, 2001
445
0
west of Philly
Originally posted by MisterMe
Clearly you know nothing about benchmarking and how it is abused. The Intel world has for years been plagued with benchmark abuse. A common cheat is to write a compiler that recognizes specific benchmark routines and programs. As these benchmarks have known results, the results are coded into the compiler. This type of compiler relieves the processor of most of the work required to compute the benchmark's results. By using a third-party compiler, the processor is forced to execute the benchmark program rather than look up its results.

Except I don't think this is what the Intel compiler does; it's just very good at making the best of the Intel chips. A couple of years back, I was going to add SIMD support to one of my products. At the time Visual Studio did not support it, so I had to pick up the Intel compiler. Of course the first thing I did was a recompile using the Intel compiler and run it through a few of my test scenes. I saw a 40% jump in render speed, and my product was not the only thing in the scene, so it probably increased the performance of my product by somewhere around 50-60% (just a guess, since I had no way of isolating that specific part) with just a recompile. At first I thought they were doing some sort of auto-vectorization, but I saw the same jump on Pentium IIs, which don't have SSE. This wasn't a benchmark or any code that Intel's engineers could have possibly foreseen; it was straight C++ code for a small commercial product with lots of heavy math. The Intel compiler is just that good!
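For contrast with the plain recompile described above, here is a sketch of what "adding SIMD support" by hand looks like on x86 (an illustration, not ewinemiller's actual product code): the scalar loop is what a generic build runs, while the SSE version processes four floats per instruction and needs a Pentium III or newer.

/* Illustration of hand-written SSE versus plain scalar code. The SSE
 * version assumes dst/src are 16-byte aligned and n is a multiple of 4. */
#include <xmmintrin.h>   /* SSE intrinsics */

/* Generic scalar version: one multiply-add per iteration. */
void scale_add_scalar(float *dst, const float *src, float k, int n)
{
    int i;
    for (i = 0; i < n; i++)
        dst[i] += k * src[i];
}

/* Hand-written SSE version: four multiply-adds per iteration. */
void scale_add_sse(float *dst, const float *src, float k, int n)
{
    __m128 vk = _mm_set1_ps(k);
    int i;
    for (i = 0; i < n; i += 4) {
        __m128 s = _mm_load_ps(src + i);
        __m128 d = _mm_load_ps(dst + i);
        d = _mm_add_ps(d, _mm_mul_ps(vk, s));
        _mm_store_ps(dst + i, d);
    }
}

Since the Pentium II has no SSE units, the 40% gain from a recompile must have come from better scheduling, inlining, and the like rather than vectorization, which supports the point above.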
 

ouketii

macrumors regular
Mar 6, 2003
103
0
I found this site on my own a few days ago. Read the whole thing; very interesting if you want to know about the core diffs b/t x86 and PPC. Gotta say, PPC seems like the winner. And with Linux presumably getting more attention, people will no longer be forced to use Intel, as it runs on PPC also.
 

ewinemiller

macrumors 6502
Aug 29, 2001
445
0
west of Philly
Originally posted by Mr. MacPhisto
Frankly, I'll be surprised if Intel can get their chips past 4GHz
without sacrificing performance too much.

I'd be careful about making predictions like that. People have been saying the x86 chips are going to hit a wall for about the same amount of time people have been touting the impending doom of Apple. I seem to remember hearing folks say the same kinds of things about x86 when the PowerPC was introduced. Apple is still here doing great and Intel consistently keeps upping the performance of the x86 line. I suspect neither one of those will change any time soon.
 

ddtlm

macrumors 65816
Aug 20, 2001
1,184
0
I read that article and found it poorly argued. Sounds like the guy just went to Ars Technica and read everything, but would be unable to elaborate on anything he argued. A few highlights of his argument that I found very bothersome:

1) Comparing the heat output of a G4 to a P4 while comparing the performance of a G5 to a P4. The 10x heat output difference is just stupid; I'd give him 2x maybe, but we know Intel can do better, as the Pentium M has shown.

2) Selecting a best-case G4 to compare to a worst case P4, and then assuming that Intel's figures were even worse than they claim.

3) Ignoring AMD's offerings when it was time to talk about "short and fat". The Athlon-family is shorter and fatter than even the G5.

4) Hand-wavy dismissal of the Pentium M rather than dealing with it as an efficient, fast x86 chip.

5) Mythic-scale Alpha worship.

The author seems to have started with the conclusion and made an article to support it. It is typical partisan propaganda. People predicting the demise of x86 are every bit as ignorant as those predicting the demise of Apple. (Edit: Oh, I see ewinemiller beat me to making that claim by 15 minutes!)
 

hvfsl

macrumors 68000
Jul 9, 2001
1,867
185
London, UK
The Pentium M is faster than the G4 clock for clock. The only thing the G4 is faster at is Photoshop, because it has been heavily optimised for AltiVec. It is not because of the cache that the Pentium M is faster, but the overall design. The G4 is only slightly younger than the P3, which has long since been discontinued by Intel.
 

hvfsl

macrumors 68000
Jul 9, 2001
1,867
185
London, UK
Originally posted by ouketii
i found this site on my own a few days ago. read the whole thing, very interesting, if you want to know about the core diffs b/t x86 and ppc. gotta say, ppc seems like the winner. and with linux presumably getting more attention, people will no longer be forced to use intel as it runs on ppc also

Running Linux on PPC instead of x86 is silly, since one of the main points of Linux is that it is cheap, and x86 hardware is half the cost of PPC hardware at the same speeds. A P4 3GHz PC with a Radeon 9800 Pro can be had for under $1500, while the comparable Mac costs over $3000. It actually costs more to make a Dual G5 Mac than you can buy a top-of-the-range Dell machine for.
 

Mr. MacPhisto

macrumors 6502
Jan 16, 2003
281
0
Originally posted by ddtlm
I read that article and found it poorly argued. Sounds like the guy just went to Arstechnica and read everything, but would be unable to elaborate on anything he argued. A few highlights of his arguement that I found very bothersome:

1) Comparing the heat ouput of a G4 to a P4 while comparing the performance of a G5 to a P4. The 10x heat output difference is just stupid, I'd give him 2x maybe, but we know Intel can do better as the Pentium M has shown.

2) Selecting a best-case G4 to compare to a worst case P4, and then assuming that Intel's figures were even worse than they claim.

3) Ignoring AMD's offerings when it was time to talk about "short and fat". The Athlon-family is shorter and fatter than even the G5.

4) Hand-wavy dismissal of the Pentium M rather than dealing with it as an efficient, fast x86 chip.

5) Mythic-scale Alpha worship.

The author seems to have started with the conclusion and made an article to support it. It is typical partisan propaganda. People predicting the demise of x86 are every bit as ignorant as those predicting the demise of Apple. (Edit: Oh, I see ewinemiller beat me to making that claim by 15 minutes!)

I wouldn't have predicted its demise until now. Intel has clearly specified that they intend to push Itanium and attempt a transition to it in desktop form. AMD is really the only manufacturer offering a future for x86, and their chips run hotter than Intel's. While I acknowledge the power of Opteron, it is still quite expensive and quite hot.

As for the Pentium M, he DID address it. Why is it efficient? CPU cache. As stated previously by myself (and ignored, because you seem to be in love with x86, as you imply by previous posts salivating over Intel and its design), the cache is what makes it more efficient, and slightly more expensive than higher-clocked chips. The G4 is at 180nm compared to the Pentium M's 130nm. The current G4 has 256K of L2 compared with 1024K on the Pentium M. Not only this, but some of the G4s access this cache at half their clock rate.

This article talks about architecture, plain and simple. It is a known fact that PPC processors don't need to do as much work to attain similar results.

So, do you think the vaunted Pentium M could compete with a G4 that was designed to have the same amount of cache, same RAM, same bus speed, etc.? What if we added the hardware assists that the x86 uses so that it can keep up? (They're coming too, and that will be when x86 becomes obsolete.) The point of this article is to set aside market differences and make this conclusion: if x86 and PowerPC shared a completely equal footing in clock rate, cache, bus speed, RAM, and hardware help, then there would not be any competition as to which is better. As IBM develops the POWER5 derivative it will become clear which chip is superior.

But the market will determine if the x86 stays around. Several businesses are going to Linux, so it is possible that the Intel-MS hegemony may be in its final years. And yes, I could be wrong. But even if x86 doesn't die, the future appears to be a three-way split between the IBM/Moto PPC, AMD/VIA x86, and Intel's Itanium. I don't think the x86 will continue to have the stranglehold when it is surpassed in performance. And it will be, especially if AMD withdraws from the race and joins IBM, Moto, and Apple, which is a possibility.
 

ddtlm

macrumors 65816
Aug 20, 2001
1,184
0
Mr. MacPhisto:

Intel has clearly specified that they intend to push Itanium and attempt a transition attempt to it in desktop form.
It is not known if even Intel can kill their x86 creation. People want stuff that works, not some whizbang new Itanium. Classic anti-Apple complaints like "there are no games" will also apply to desktop Itaniums.

AMD is really the only manufacturer offering a future for x86 and their chips run hotter than Intel.
Why focus on what they make today? The demise of x86 is of course going to happen sometime (heck, the Sun will turn into a red giant sometime), but x86 will be alive and kicking well after AMD has had plenty of time to make something better than the Athlon.

As for the Pentium M, he DID address it. Why is it efficient? CPU Cache.
He didn't address it! He waved his hands at it and said "you scare me". Blaming everything on the 1MB L2 is ridiculous... just look at how far that got AMD's Opteron. (Actually, the Opteron is made on a power-saving process compared to the Athlon, too.) The Pentium M is far more significant than most PPC folks are comfortable with. It is a chip that performs something like an equal-clocked Athlon and draws power something like, dunno, a lower-clocked Pentium III. I really need to look up some hard power dissipation figures on it to make better comparisons, but anyway it's pretty darn good.

(and ignored because you seem to be in love with x86 - as you imply by previous posts salivating over Intel and its design)
No need for you to put that "salivating" spin on things. I ignored your posts, as well as those of many others, because I think the claims are wrong, and I don't have time to argue about every little thing.

the cache is what makes it more efficient
The latest popular claim, but quite untrue. You should be able to think of lots and lots of chips with huge amounts of cache that are not even slightly efficient, such as the Power4. I chose it as an example because the PPC970 managed to become more efficient despite having less cache and clocking higher.

Not only this, but some of the G4's access this cache at half their clock rate.
Explain why off-die cache at half speed is less energy efficient. Conventional wisdom suggests the lower clock speed of the memory would in fact make less heat.

It is a known fact that PPC processors don't need to do as much work to attain similar results.
Yes, now tie that fact back to the claims about x86 being doomed. The Pentium M is a great example of an efficient x86 chip.

The point of this article is to set aside market differences and make this conclusion: If x86 and PowerPC shared a completely equal footing in clockrate, cache, bus speed, RAM, and hardware help then there would not be any competition as to which is better.
Well, I guess you could claim that this is the point. I just re-read it, and it seems that his objective was to justify his advocacy of PPC chips over x86. (As a side note, he claims to have believed these things before doing any research.) But really this was never an argument about best-case efficiency, because everyone already knows that x86 chips pay overhead for translating their instructions. The question is how heavy the overhead is, and the author did a terrible job providing and justifying an answer.

He concentrated on performance-first x86 designs like the Pentium 4 when he wanted to talk about their power use, he concentrated on the G5 when he wanted to talk about PPC performance, and he concentrated on the G4 when he wanted to talk about PPC power use. He fudged power use figures of the P4 to suit his goals, he used a power figure for the unreleased 7457 (10W @ 1GHz) to suit his conclusions, he dismissed low-power x86 designs as low performance or cache-assisted without compelling justification, and he generally packed in irrelevant architectural, historical, and benchmark FUD to cast x86 in a bad light. (Who cares if it is 40 years old? Who cares if it has to use rename registers?)

On one hand he dismissed as biased all the benchmarks that show x86 being fast, and on the other he pretty much expected the G5 to perform better than the benchmarks available; he even went ahead and mentioned that a G4 can outperform a P4 by something like 3.5 times when it's running AltiVec-optimized code. That claim was of course a load of crap, because he used it to suggest that 3.5x is a possible performance gain from an auto-vectorizing compiler, rather than the truth, which is that the gain was on one specific application where the AltiVec code was written by hand. (Why not invite some x86 proponents to write some x86 assembly to show some "benchmark" where a P4 or Athlon goes as fast as it can?) This guy is biased. Like I said, he's a typical partisan spewing propaganda.

As IBM develops the POWER5 derivative it will become clear which chip is superior.
Do you believe in IBM like it's a religion? Do you have faith in them?

But the market will determine if the x86 stays around.
A line of wisdom that utterly refutes the "x86 is dead" nonsense regardless of power usage arguments.

Several businesses are going to Linux, so it is possible that the Intel-MS hegemony may be in its final years.
That seems quite optimistic. The same thing that keeps people using arcane flavors of Unix like Irix (and worse) will keep Windows entrenched in many of the corporate places that have chosen it, and that ignores the fact that MS is by no means in decline.

I don't think the x86 will continue to have the strangle hold when it is surpassed in performance.
A while back no one believed that x86 would ever be in a position where RISC chips wished they could surpass its performance. Heck, a while back people were already predicting the demise of x86... that's why Intel made IA64.

And it will be, especially if AMD withdraws from the race and joins IBM, Moto, and Apple - which is a possibility.
Maybe they'll use the same fabs, but x86 is serious money compared to PPC. Huge market.
 

Cubeboy

macrumors regular
Mar 25, 2003
249
0
Bridgewater NJ
Originally posted by Mr. MacPhisto
I wouldn't have predicted it's demise until now. Intel has clearly specified that they intend to push Itanium and attempt a transition attempt to it in desktop form. AMD is really the only manufacturer offering a future for x86 and their chips run hotter than Intel. While I acknowledge the power of Opteron, it is still quite expensive and quite hot.

Intel hasn't specified anything; all plans for Itanium still have it remaining strictly for servers, blades, and possibly high-end workstations (like Alpha workstations). Who's to say AMD won't pursue RISC or VLIW CPUs in the future? After all, their Athlons and Opterons are basically RISC CPUs internally. Current prices list an Opteron 240 at below $300, an Opteron 242 slightly above $700, and an Opteron 244 at $800, making them quite viable for high-end desktops and workstations of any kind.

Originally posted by Mr. MacPhisto
As for the Pentium M, he DID address it. Why is it efficient? CPU Cache. As stated previously by myself (and ignored because you seem to be in love with x86 - as you imply by previous posts salivating over Intel and its design), the cache is what makes it more efficient - and slightly more expensive than higher clocked chips. The G4 is at 180nm compared to the Pentiums at 130nm. The current G4 has 256K of L2 compared with 1024K on the PentM's. Not only this, but some of the G4's access this cache at half their clock rate.

Wrong. A branch predictor that cut mispredicted branches by 20%, dedicated stack management, and micro-op fusion, along with the 400 MHz FSB and 1 MB L2 cache, give Centrino its efficiency. It seems you forgot to mention that the G4 has a 2 MB L3 cache; the L2 cache itself is 256K, and whether it operates at the full speed of the CPU depends on the core.
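To illustrate why fewer mispredicted branches matter so much, here is a rough, generic C sketch (nothing to do with Centrino specifically, just the general effect): the same loop runs far faster when its branch is predictable than when it is effectively random, because every misprediction flushes the pipeline.

/* Rough sketch of branch prediction cost: counting elements above a
 * threshold is much faster on sorted data (predictable branch) than on
 * unsorted data (essentially random branch). */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 1000000

static int cmp_int(const void *a, const void *b)
{
    return (*(const int *)a > *(const int *)b) -
           (*(const int *)a < *(const int *)b);
}

static double count_big(const int *v, int reps)
{
    volatile long hits = 0;
    clock_t t0 = clock();
    int r, i;
    for (r = 0; r < reps; r++)
        for (i = 0; i < N; i++)
            if (v[i] > RAND_MAX / 2)   /* taken roughly half the time */
                hits++;
    return (double)(clock() - t0) / CLOCKS_PER_SEC;
}

int main(void)
{
    int *v = malloc(N * sizeof(int));
    int i;
    if (!v) return 1;
    for (i = 0; i < N; i++)
        v[i] = rand();

    printf("unsorted (unpredictable branch): %.2f s\n", count_big(v, 50));
    qsort(v, N, sizeof(int), cmp_int);
    printf("sorted   (predictable branch):   %.2f s\n", count_big(v, 50));

    free(v);
    return 0;
}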

The 1.6 GHz Centrino performs like a 2.4 GHz P4 overall and like a 2.66 GHz P4 in office apps. Might I also mention that there is a 1.7 GHz Centrino, which, assuming linear scaling, would perform like a 2.53 GHz Pentium 4 overall and a 2.8 GHz Pentium 4 in office apps. The G4? Let's see: a 1.25 GHz model got the crap beaten out of it by a 2 GHz Pentium 4 running Jet3d despite the G4 running optimized code, and a DUAL 1.42 GHz G4 performs anywhere between 50% and over 200% worse in games despite some games being threaded, 250% slower in Cinebench OpenGL, and significantly slower in most other apps despite having dual processors.

Originally posted by Mr. MacPhisto
So, do you think the vaunted Pentium-M could compete with a G4 that was designed to have the same amount of cache, same RAM, same bus speed, etc? What if we added the hardware assists that the x86 uses so that it can keep up (they're coming too, and that will be when x86 will be obsolete). The point of this article is to set aside market differences and make this conclusion: If x86 and PowerPC shared a completely equal footing in clockrate, cache, bus speed, RAM, and hardware help then there would not be any competition as to which is better. As IBM develops the POWER5 derivative it will become clear which chip is superior.

More BS, I see. Even with dual processors and 2 MB of L3, the fastest G4 wasn't able to keep up with the fastest P4s and Athlons. A SINGLE Centrino, on the other hand, can keep up with the Pentium 4 pretty well overall and might even be able to surpass it in some areas like office apps. Do you really think a faster G4 with more L2 cache, no benefit in bandwidth (since the L3 provides 5.4(?) GB/s), and an entire processor less will do any better?

Originally posted by Mr. MacPhisto
But the market will determine if the x86 stays around. Several businesses are going to Linux, so it is possible that the Intel-MS hegemony may be in its final years. And yes, I could be wrong. But even if x86 doesn't die - the future appears to be a threeway split between the IBM/Moto PPC, AMD/VIA x86, and Intel's Itanium. I don't think the x86 will continue to have the strangle hold when it is surpassed in performance. And it will be, especially if AMD withdraws from the race and joins IBM, Moto, and Apple - which is a possibility.

So far, this has proven to be nothing but BS. Why do you think x86 currently holds the performance crown? Gee, could it be because the differences between CISC and RISC have become so subtle as to be nearly indistinguishable? You do know that the current Pentium 4s and Athlons only have one external layer of CISC? How about the fact that, other than some legacy code, the Athlon is a full-fledged RISC CPU? So far you've failed to provide a shred of evidence for any of your claims.
 

Mr. MacPhisto

macrumors 6502
Jan 16, 2003
281
0
Originally posted by Cubeboy
Intel hasn't specified anything, all plans for Itanium still have it remain strictly for Servers, Blades, and possibly high end workstations (like Alpha Workstations). Who's to say AMD won't pursue RISC or VLIW cpus in the future, after all, their Athlons and Opterons were basically RISC cpus. Current prices list a Opteron 240 at below $300, a Opteron 242 slightly above $700, and a Opteron 244 at $800, making them quite viable for high end desktops and workstations of any kind.



Wrong, a branch predictor which resulted in a reduction of mispredicted branches by 20%, Dedicated Stack Management, Micro-ops Fusion, along with the 400 MHz FSB and 1 MB L2 cache give Centrino it's enfficiency. It seems you forgot to mention that the G4 has a 2 MB L3 cache, the L2 cache itself is 256k, whether it operates at the full speed of the CPU has to do with the core.

The 1.6 GHz Centrino performs like a 2.4 GHZ P4 overall and like a 2.66 GHz P4 in office. Might I also mention that their is a 1.7 GHz Centrino which assuming linear scaling would perform like a 2.53 GHz Pentium 4 overall and a 2.8 GHz Pentium 4 in office. The G4, let's see, a 1.25 GHz model got the crap beaten out of it by a 2 GHz Pentium 4 running Jet3d despite that the G4 was running optimized code, a DUAL 1.42 GHz G4 performs anywhere between 50% to over 200% worse in games despite some games being threaded, 250% slower in Cinebench OpenGL, and significantly slower in most other apps despite having dual processors.



More BS I see, even with Dual Processors and 2 megs of L3, the fastest G4 wasn't able to keep up with the fastest P4's and Athlons. A SINGLE Centrino on the other hand, can keep up with the Pentium 4 pretty well overall and might even be able to surpass it in some areas like office. Do you really think a faster G4 with more L2 cache, and no benefit in bandwidth (since the L3 provides 5.4(?) GB/S) and a entire processor less will do any better?



So far, this has proven to be nothing but BS, why do you think x86 currently holds the performance crown? Gee, could it be because the differences between CISC and RISC have become so subtle as to be nearly indistinguishable? You do know that the current Pentium 4s and Athlons only have one external layer of CISC? How about that other than some legacy code, the Athlon is a full fledged RISC CPU? So far you've failed provided a shred of evidence for any of your claims.


OK, let's see if we can see the point of the article: PPC architecture is superior AND has a better future outlook. He did make a good point about the law of diminishing returns, which can be seen in Intel's case.

Now, answer THE question, oh ye who worships at the altar of Intel: if the Moto 74xx had branch prediction, cache, FSB, EVERYTHING equivalent to the Intel chip, who would win in raw performance? The answer is obvious: the PPC. The fact that the G4 could compete with it quite well given 130nm, more cache, and a faster FSB shows the potency of PPC, because, admittedly, the G4 has not been pushed forward very much by Motorola. And as for the omission of the L3, not all G4s are equipped with L3 cache.

I've used Centrino tech and it's not bad. I'm impressed that Intel got a little more life out of their aging architecture. But if those same (or better) additions are made to the 32-bit PPC processor, then the Centrino is not the top dog anymore.

If IBM and Motorola decide to step it up on the consumer end to create a viable option to the x86, then they will surpass anything Intel can come up with. The G4 could have some mods made before it moves to 0.09 micron (if Moto is to be believed) to increase performance greatly, and increase power efficiency.

And that's the point of the article. Note, he talks about using high-end chips that are RISC. It's not a Mac vs. PC thing. It's a RISC vs. CISC thing: x86 vs. PPC. The PPC needs more attention than it has had, but I think that is now on the way. With proper attention, x86 dominance will end and so will the Intel-MS hegemony. The new world of computing will feature two or three different platforms with fairly even splits, with none of them dominant. IBM has pledged to kill the Itanium and wants PPC architecture to become more dominant, and I believe IBM will succeed, even on the consumer platform.

So, I shouldn't have said x86 will die, not instantly anyway. But it will begin to fade away if the PPC is kept up properly by Moto and IBM (more IBM). I think Motorola is finally getting their butts in gear on this end as well. IBM's 970 may have been a wake-up call.
 

MorganX

macrumors 6502a
Jan 20, 2003
853
0
Midwest
Originally posted by MisterMe
Clearly you know nothing about benchmarking and how it is abused. The Intel world has for years been plagued with benchmark abuse. A common cheat is to write a compiler that recognizes specific benchmark routines and programs. As these benchmarks have known results, the results are coded into the compiler. This type of compiler relieves the processor of most of the work required to compute the benchmark's results. By using a third-party compiler, the processor is forced to execute the benchmark program rather than look up its results.

Benchmark abuse has nothing to do with my comment, unless you are suggesting that compilers cannot be optimized without specific benchmark cheats.

Apple used several optimizations for the G5s; do you consider them cheats, or optimizations?

Like I said, if I were going to base my departmental purchases on benchmark results, I would want results using optimized software, whether running a benchmark or real software.

Cheating is a separate issue, not part of my comment or much of the article. Maybe I should reread it.
 

MorganX

macrumors 6502a
Jan 20, 2003
853
0
Midwest
Originally posted by sturm375
This is why no-one takes Steve Jobs seriously anymore when it comes to benchmarks.

I see no reason not to take his G5 benchmark seriously, but I wouldn't give much credibility to his PC benchmark results. I don't have a problem with optimizing the G5 code; I have a problem with not optimizing the x86 code.

Using the potential-abuse argument as an excuse not to optimize is meaningless. If you're going to cheat, you can cheat regardless of what software you use. Given the description of the malloc library, that is the only optimization I would call a cheat made just to inflate benchmark results.
 

MorganX

macrumors 6502a
Jan 20, 2003
853
0
Midwest
Originally posted by Cubeboy
Intel hasn't specified anything, all plans for Itanium still have it remain strictly for Servers, Blades, and possibly high end workstations (like Alpha Workstations).

Actually, they do plan a low-power Itanium for Q4 '03, priced about 20% higher than the 3.2GHz P4. You should be able to get a workstation for about $2500-3500.
 