ksz said:
Yeah, right. There are several pages of benchmarks here published by Tom's Hardware.
That's a game benchmark. We're talking about dual-core CPUs, in particular Paxville, which is not in there. So what are you talking about?
AMD's processors have a nice performance edge almost across the board, but the difference is "not very significant". Your link shows 2x improvement over Intel, but that's not evident in Tom's benchmarks. You can always twist benchmarks just as you can twist a "study" to suit your foregone conclusions.
Well, this _may_ be due to the fact that Tom's Hardware is notoriously Intel/nVidia-friendly, more so than is acceptable, which IIRC is also the reason Anand split off!...
It may also be due to the fact that dual dual-core is a totally different ballgame than single dual-core (see Tom's Hardware). Intel is especially weak at dual dual-core (in contrast to the Opteron or 970MP) because they squeeze EVERYTHING over one shared FSB! Intel's last NetBurst Xeon (Dempsey), coming mid-2006, has a Cinebench score of 883 on four 3.2 GHz cores. The dual 2.7 GHz G5 does 709, the quad 2.5 GHz G5 does 1150, and a dual Opteron 280 (four cores at 2.4 GHz) does 1104!
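For what it's worth, the Cinebench figures quoted above can be normalized per core and per GHz to make the shared-FSB scaling argument concrete. A quick sketch; the scores are the ones quoted in this thread, and the normalization is plain arithmetic of mine, not any official metric:

```python
# Per-core, per-GHz Cinebench efficiency, computed from the scores quoted above.
# Format: name -> (Cinebench score, core count, clock in GHz)
systems = {
    "Dempsey Xeon (4 x 3.2 GHz)":      (883, 4, 3.2),
    "Dual G5 (2 x 2.7 GHz)":           (709, 2, 2.7),
    "Quad G5 (4 x 2.5 GHz)":           (1150, 4, 2.5),
    "Dual Opteron 280 (4 x 2.4 GHz)":  (1104, 4, 2.4),
}

for name, (score, cores, ghz) in systems.items():
    per_core_ghz = score / (cores * ghz)
    print(f"{name}: {per_core_ghz:.1f} points per core per GHz")
```

By this crude measure the shared-FSB Dempsey delivers roughly 69 points per core per GHz, versus about 115 for both the quad G5 and the dual Opteron 280 and about 131 for the dual G5, which is exactly the dual dual-core penalty described above.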
What I quoted is a friggin' Apache benchmark. If that's not a proper measure of an enterprise-level pro CPU, I don't know what is!
You may also look one page earlier to see some media-encoding benchmarks of Paxville, and one page later to see ScienceMark benchmarks. They show the same picture: Intel is getting its butt handed to it by AMD!
Intel developed two generations of the Pentium M as well as an ultra-low voltage version of the chip. They introduced HyperThreading and will have Virtualization Technology on the desktop with Yonah.
Wow, so Yonah is a desktop CPU now? I never knew; when did Intel change this? ;-)
HyperThreading is NOT, I repeat, NOT in the Pentium M. Neither is it in Yonah. Furthermore, Intel has stayed mum on the topic even with Merom! The word on the street is that the Pentium M uses its resources so efficiently that HyperThreading yields no improvement, so Intel will probably abandon it...
Intel has also pushed aggressively on new process technologies and have had better success than IBM.
Aha, so that's why Dell had to pull the Prescott from its lineup at launch, until further notice?
Gosh, I remember the HUGE uproar when Apple downclocked the G4 by 50 MHz at launch because Moto couldn't deliver 500 MHz parts in volume! Strangely enough, nobody seems to remember Intel's Prescott disaster with Dell!...
To their credit, IBM has produced noteworthy advances (alone or in partnership) in process technology including the copper damascene process, strained silicon, SOI, double-gated FinFETs, improved junction properties with high-k dielectrics, etc. etc.
...and FC-BGA, and SSDOI, and copper, and dual-core. In retrospect, IBM pioneered most of the innovations in chip manufacturing (and design) of the last 7 years. Intel has only ever been at the forefront of die shrinks. And even that lead lasted only a few months at 90nm (hard-earned months, if you ask Dell!); let's see how it turns out with 65nm!
The problem the industry as a whole encountered 2-3 years ago was power management and high leakage currents. These two issues brought conventional scaling to a virtual standstill as the average power density increased to about 13 Watts / cm^2. A steam iron, by comparison, dissipates 5 Watts per cm^2. The industry as a whole rode the CMOS power curve up to its very limits, and is actively searching for new materials and techniques to continue to increase both performance and packing density while managing heat dissipation and leakage. This is a very difficult problem, which is a key reason for the paradigm shift away from raw MHz to increased function.
In effect, if you cannot continue to jam more speed, then you must jam more features. This is the driving force behind dual and multi core processors, VT, additional FPUs, improved vector units, more L1 cache, etc. More features are going on-chip because the customer is not going to pay top dollar without a good reason. Clock speed has been the historical justifier for top dollars, but that's changing.
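The power-density comparison above is easy to check numerically. A minimal sketch: the 13 W/cm^2 industry average and the steam iron's 5 W/cm^2 are the figures quoted above, while the die area and wattage I plug in are illustrative assumptions of mine, not numbers from the post:

```python
# Average power density in W/cm^2 from a chip's dissipation and its die area.
def power_density_w_per_cm2(watts: float, die_area_mm2: float) -> float:
    # 1 cm^2 = 100 mm^2, so convert the die area before dividing.
    return watts / (die_area_mm2 / 100.0)

# Hypothetical example: ~20 W dissipated on a 150 mm^2 die lands right at
# the ~13 W/cm^2 industry average quoted above; the steam iron sits at 5.
cpu = power_density_w_per_cm2(20, 150)
iron = power_density_w_per_cm2(5, 100)
print(f"CPU:  {cpu:.1f} W/cm^2")
print(f"Iron: {iron:.1f} W/cm^2")
```

Note how little headroom the arithmetic leaves: doubling the dissipation on the same die doubles the density, which is why the quote above frames more features, rather than more clock, as the way to keep charging top dollar.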
Here, let me sum up what you're saying in one sentence: "You're right - singling out IBM's problems was wrong of me."
I don't fault IBM for technological incompetence. I do fault them for problems with execution in the time to ramp yield, in the time to introduce more differentiation based on features, and in the time to introduce low-power mobile models.
Low-power models are a CUSTOM DESIGN, because IBM itself has no use for them! Everyone but Apple knows that if you want a custom design done for you, you have to pay for it. Sony/Toshiba did it. Even M$ did it, and they're friggin' M$! Only El Turtleneck didn't want to pay, so here we are, looking forward to the mediocrity that is Intel!
Btw: IBM isn't a company that thrives on chip sales alone, the way Intel and AMD do. Back when the G5 was introduced, it was clear that with a single OEM holding 3.5% market share worldwide, we would NOT see new generations and variations of the chip released on a quarterly basis, as is the case with Intel and AMD! Be realistic!