Discussion in 'Current Events' started by szark, Aug 18, 2003.
From the Inquirer:
And that's at 90-nm!
I'm not surprised... our new 2.8 GHz Dell overheats and smells like something is burning after 10 minutes if someone accidentally bumps the door of the computer desk closed... so a higher-clocked Pentium... geez. My PII used to overheat and melt parts in my computer all the time on my old Gateway, too; I went through 5 modems in that damn thing before I switched to Apple.
Hot Hot hot!
It's getting hot hot hot!
--Couldn't help myself.
Damn... I think they are gonna start to equip these things with Water Cooling!
Cool, now they definitely could team up with Hasbro to upgrade the Easy Bake Oven to Pentium Power.
Crunch numbers and bake a cake at the same time.
Oven? More like a space heater. Nothing like a dual purpose appliance in your room.
-Holy Mother of effing Pizza!
Sorry, just felt good saying that.
Well, I guess this is what happens when you overbuild a CISC processor.
Yes, yes, I know that even the P4 has RISC execution capability - but it's a bolt-on. It's still an early-'90s 586 at heart.
The only thing "bolted on" is the x86 front end.
Actually 90nm hasn't done as well as Intel was thinking, and based on that I'm thinking that IBM may find themselves in the same situation (not 103W, but more dissipation than they expected). Everyone should avoid laughing at Intel until we know how well IBM does...
Not true. It's the foundation.
Otherwise MS would have to start code-new.
BTW- This one isn't laughing at Intel - they are smart guys. I'm shaking my head because they are held back from truly innovating because they need to maintain old MS software.
Yes, it sounds like bias, but it's just fact.
MS and Intel's greatest strength is also their greatest weakness. They have dominance, but they have also built in inertia against change.
You do have to admit, it must take Intel some serious ability to exploit x86 technology to the point where it is now.
I absolutely agree, and my hat is off to Intel for continually increasing the performance of the x86 architecture, and to Microsoft for keeping Windows functional.
Both are based on '70s tech - keeping them competitive this long used to be considered impossible. Well done, gents.
BTW- This is honest, no sarcasm.
It makes you wonder...if they weren't limited by Windows legacy x86 code and Intel could fully go RISC or do whatever they wanted with their processor technology, how much could they wipe the floor with everyone else?
It's called Itanium. It's VLIW, kinda next-gen RISC.
Hasn't that processor proven to be a huge letdown for Intel, though? I suppose it's more the fault of the PC industry not being able to move beyond the normal Windows way of doing things, but still, it seems like they aren't having much luck.
Yeah, but it is probably still the fastest processor out there. (Seemingly due as much to impressive fabrication as anything else.)
The x86 ISA is not the foundation of the P4 nor any other modern x86 processor, it is only the language that the front-end translates to some back-end RISC ops. There is nothing 70's about the P4 except for its front-end (which is what makes MS happy).
Not quite. The x86 instruction set is just a "frontend" and has little to do with the implementation of the chip. They can radically replace the design and still keep the same frontend. For a radical example of this, see the Transmeta processor designs. Their design has nothing to do with the original Pentium, yet MS didn't have to start "code-new".
It's a bit like saying that just because Mac OS X, HP-UX, Solaris, and Linux all let you type "ls" and "cd" in the terminal, they are all the same.
However, I agree with what you said about inertia. Definitely the main reason for the Itanium's failure.
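The front-end idea in the posts above can be sketched in a few lines of Python. Everything here (the instruction syntax, the micro-op names) is made up for illustration - no real decoder works this way - but it shows the point: the x86 "language" is translated at the front door, so the back-end can change completely (Transmeta being the extreme case) without anyone having to "code-new".

```python
# Toy illustration (not a real decoder): a front-end maps a complex
# x86-style instruction onto simple RISC-like micro-ops, so the
# back-end never has to know it is "running x86" at all.

def decode(insn):
    """Translate one x86-style instruction into a list of micro-ops."""
    op, *args = insn.split()
    if op == "ADD" and args[0].startswith("["):       # memory operand
        addr = args[0].strip("[]")
        # One CISC instruction becomes three micro-ops: load, add, store.
        return [f"uLOAD tmp, {addr}",
                f"uADD tmp, tmp, {args[1]}",
                f"uSTORE {addr}, tmp"]
    if op == "ADD":                                    # register operand
        return [f"uADD {args[0]}, {args[0]}, {args[1]}"]
    raise ValueError(f"unhandled instruction: {insn}")

print(decode("ADD [mem0] eax"))
```

Swap the back-end micro-op set entirely and old software keeps working, as long as `decode` still speaks x86 on its input side.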
Just start building the computers into their own micro-refrigerator cases, no worries about overheating!
Exactly, result: nice technology, terrible market share (a la Apple?)
From The Register:
"Dell managed to eke out 54 system sales in the fourth quarter of 2002 but saw shipments fall to 14 units in the most recent period. That's right. 14.
IBM boasted 34 sales in Q4 but saw Itanic servers disappear altogether in Q1. What do we mean by disappear? Zero servers shipped."
Obviously, slow and steady evolution, rather than revolution, is the way to increased market share.
My 2.6 gets really hot. But that system is as quiet as an iMac.
Heh, I thought the jury was still out on the Itanium being nice technology. I won't say anything positive about it other than "its pretty fast".
I am not sure how much power/heat the 2.8 GHz P4 puts out, but it is a whole lot. Our Dell system runs so hot that after a month of use it has already warped the wood around the desk it sits in, and the fan sounds like it's ready to spin right out of the case if you play games or do anything demanding.
next-gen? It's a direct descendant of HP Precision Architecture, which goes back to the mid-1980s. The only thing PA-RISC did for Hewlett-Packard was to unify their hardware lines.
Itanium proved a failed implementation, but Itanium 2 looks much better. By the next generation, it may actually be used in more than 200 machines.
It doesn't compare so well to Power4+. Core for core it does fine, but the key is that Power4+ packs dual cores on a similar sized die to Itanic's single core die, so in the end Itanic gets left behind.
Still, Itanic certainly leaves UltraSPARC in the dust, as well as the abandoned Alpha, PA-RISC, and MIPS lines, FWIW.
I don't think the 1.7 GHz Power4+ compares well performance-wise core-vs-core to the 1.6 GHz Itanium 2, and I don't think dual-core or die-space considerations matter. It may cost Intel a lot more to fab two Itanium 2s than it costs IBM to fab a Power4+, but I don't think Intel cares at this point (still in the massive bleeding-of-cash stage).
There is no such thing as a 1.6 GHz Itanium 2, so I am sure that a Power4+ compares very well to it since it scores 0 on all benchmarks!
On the single-threaded SPEC benchmarks, the 1.5 GHz Itanium 2 scores either 1075 or 1300 on SPECint2000 (depending on whether you want to use SGI's, Dell's, or HP's numbers) and between 1875 and 2100 on SPECfp2000. The 1.7 GHz Power4+ scores 1100 in SPECint2000 and 1700 in SPECfp2000. So a single Power4+ core is about 5% slower in integer and 15% slower in floating point than an Itanium core, according to SPEC, although you should note that the Itanic scores appear to reflect the publicized "179.art cheat" that Sun found, whereas the Power4+ scores appear to be playing it straight.

Nevertheless, the Itanic is slightly faster in single-threaded code, but the key word there is slightly. I'm sure that the two cores will trade off leads in single-threaded benchmarks with each new revision (note that Itanic was revised more recently than Power), as they have been for the past couple of years. I would personally consider both cores to be pretty competitive with each other on a strictly core-to-core comparison, and I will say the same thing when Power is slightly ahead of Itanic, but perhaps you consider 10% to be a much larger gap than I do.
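For what it's worth, the rough gaps can be sanity-checked from the numbers quoted above. Taking the midpoint of the vendor range for Itanium 2 is my choice, not SPEC's; pick a single vendor's score instead and the percentages shift a few points either way.

```python
# Sanity-check the "roughly 5-15% slower" claim from the quoted
# SPEC CPU2000 scores (vendor-reported, single core).
itanium2_int = (1075, 1300)   # SPECint2000 range across vendors
itanium2_fp = (1875, 2100)    # SPECfp2000 range across vendors
power4_int, power4_fp = 1100, 1700

def gap(power, itanium_range):
    """Power4+ shortfall vs the midpoint of the Itanium 2 range, in %."""
    mid = sum(itanium_range) / 2
    return (mid - power) / mid * 100

print(f"integer gap: {gap(power4_int, itanium2_int):.0f}%")  # about 7%
print(f"fp gap: {gap(power4_fp, itanium2_fp):.0f}%")         # about 14%
```

Against the midpoints the gaps come out around 7% and 14% - same ballpark, and small enough either way to call the cores competitive.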
The important question, however, is whether these chips are used to run just one thread. The answer is no. These are first and foremost server chips, and nobody is going to run a server with just one thread active. The chips also see some use in workstations. There it is possible that you could care only about single threaded performance, but still unlikely, as evidenced by the fact that over 80% of HP's Madison workstation configurations come with dual processors (see http://www.hp.com/workstations/itanium/zx6000/reseller.html). So, in summary, SPEC CPU2000 is clearly *not* the benchmark you want to use to evaluate these chips.
Wow, then you had better tell IBM, Sun, Intel, and HP this, because their long term plans are all heavily focused on producing multicore chips!
If Intel doesn't care about costs, then why don't they just double the L3 cache from 6 MB to 12 MB? I'm sure it's technologically possible, and it would undoubtedly crank those SPECfp scores up a bit higher. Sure, it might cost more, but costs are irrelevant, right?
Look, in the end it is all about trade-offs. If IBM wanted to, they could replace the second core on Power4+ with more cache, and then the single threaded SPECfp2000 scores would shoot up even higher (that benchmark, as you probably know, is highly dependent on bandwidth). But they (and by implication their customers) feel that those transistors are better used on a second core, because they don't want to just run one thread at a time. Implicitly Intel/HP are also thinking this, since they are trying pretty hard to get a dual-core Itanic chip out the door sooner rather than later.
There is also a subtle point that many people miss when comparing single-threaded SPEC scores between Itanic, which is a VLIW processor, and a more "conventional" RISC processor like Power4. Specifically, Itanium tries to wring everything it can out of instruction-level parallelism, whereas Power4 focuses on getting performance from thread-level parallelism. But since SPEC only runs one thread, Itanium is allowed to exploit its parallelism in this benchmark, whereas Power4 is restricted from exploiting its parallelism.

A better way to look at it is from the standpoint of the problem. To the extent that a task is inherently non-parallelizable, neither Power4 nor Itanium is probably going to do too well at it. But to the extent that the task is parallelizable, Itanium can use its ILP advantage in SPEC, while Power4 is not allowed to use its thread-level advantage. In reality, of course, to the extent that the problem is parallelizable, any decent programmer would be trying to utilize both ILP and thread-level parallelism.
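To make the distinction concrete, here is a minimal Python sketch of the same parallelizable task run single-threaded versus split across threads. It is a structural illustration only - in CPython the GIL keeps CPU-bound threads from actually running concurrently - but it shows where thread-level parallelism lives in the code, and why a single-threaded benchmark never exercises it.

```python
# A parallelizable task: summing squares over a range. A SPEC-style
# run uses one thread, so all parallelism must come from ILP inside
# the core; splitting the same work across threads is how a
# TLP-oriented chip like Power4 would actually be exploited.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(lo, hi):
    return sum(i * i for i in range(lo, hi))

N = 1_000_000

# Single-threaded, the way SPEC CPU2000 runs it.
single = partial_sum(0, N)

# The same work split into four chunks on a thread pool.
chunks = [(i * N // 4, (i + 1) * N // 4) for i in range(4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    threaded = sum(pool.map(lambda c: partial_sum(*c), chunks))

assert single == threaded  # same answer either way
```

The point of the post stands: SPEC only ever runs the first version, so a core built around running the second version well gets no credit for it.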
Completely unrelated question: did you say at one point that you are running OS X off of a RAID Level 0 two disk array?
I was considering getting a second 160 GB drive for my G5 (when I eventually get it) to stripe, but I had read that you can't boot OS X off of a striped array, which would obviously throw a wrench in my plans.
Do you actually get any noticeable performance improvements out of it? Honestly, it is not like I am editing huge media files on a regular basis... I was just thinking of getting some extra storage, and if I have it then I was thinking I might as well stripe it. But perhaps it is more trouble than it's worth.
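For what it's worth, the striping idea itself is simple enough to sketch. This is a toy model only - the block mapping is generic RAID 0, nothing specific to OS X's disk utility, and the disk count and block granularity are illustrative:

```python
# Toy model of RAID 0 striping across two disks: logical blocks
# alternate between disks, so a long sequential read keeps both
# disks busy at once - which is where the throughput gain comes
# from. Small random reads see little or no benefit.
DISKS = 2

def locate(logical_block, disks=DISKS):
    """Map a logical block number to (disk index, block within that disk)."""
    return logical_block % disks, logical_block // disks

# A sequential run of 8 logical blocks alternates disk 0, 1, 0, 1, ...
layout = [locate(b) for b in range(8)]
print(layout)

# Each disk serves only half the blocks of the read.
per_disk = [sum(1 for d, _ in layout if d == i) for i in range(DISKS)]
print(per_disk)  # [4, 4]
```

That halving is also why the speedup only shows up on big sequential transfers (large media files) - which matches the intuition that it may not be worth the bother for ordinary use, especially since a striped array fails if either disk fails.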