AMD has basically put an emulation layer on top of their chips to be x86 compatible. So when you quote Windows numbers, that's not really accurate. Much of their gain is from memory architecture.
My point isn't that they should change now, but that they might have made the wrong choice last year.

Also, related to Virginia Tech, name one project that has been done on it. Gee, there isn't one, because it was all a marketing decision, not a technical one.

The biggest reason (and this was the logic behind my original post) was that IBM runs many businesses and THEY MUST MAKE THE NUMBERS EVERY QUARTER. Chip-making is R&D-intensive, and if they're going to cut, this is a place where they will cut, whereas AMD MUST PRODUCE THE BEST CHIPS IN ORDER TO MAKE THE NUMBERS.


maverick13 said:
Why should they have done this? By switching to AMD, Apple would abandon the PowerPC platform and throw compatibility out the window, plus require a port of Mac OS X to x86. Moreover, they would need a rewrite of ALL the current applications for the new hardware.
I believe that switching to IBM was the wisest thing Apple has done in the last few years. They kept their PowerPC platform, progressed to a chip with great performance (the G5's floating-point performance is better than the Opteron's; see the flop/s ratings and the Virginia Tech interview), and placed their bet on IBM, which uses POWER processors for most of its systems. So IBM will definitely support and advance the architecture, and it can always apply this progress to future PowerPC chips.
I am not worried about this; I am only worried about whether we will actually see the PowerPC 975 (if it is named like that) this year and a PowerBook G5 by the first quarter of 2005.

Maverick
 
river_jetties said:
AMD has basically put an emulation layer on top of their chips to be x86 compatible. So when you quote Windows numbers, that's not really accurate. Much of their gain is from memory architecture.
My point isn't that they should change now, but that they might have made the wrong choice last year.

Someone who knows a lot more about this than I do can go into more depth on this one, but I am under the impression that modern Intel chips also have an "emulation layer" to make them x86-architecture compatible.

Also, related to Virginia Tech, name one project that has been done on it. Gee, there isn't one, because it was all a marketing decision, not a technical one.

Of course it was all marketing. Aren't most of these supercomputers built to be just that: bragging rights for the institution that built them and the companies that supplied the parts? How many people can, off the top of their head, name projects run on any of the top ten supercomputers in the world?

The biggest reason (and this was the logic behind my original post) was that IBM runs many businesses and THEY MUST MAKE THE NUMBERS EVERY QUARTER. Chip-making is R&D-intensive, and if they're going to cut, this is a place where they will cut, whereas AMD MUST PRODUCE THE BEST CHIPS IN ORDER TO MAKE THE NUMBERS.

I am sure that IBM's chip department is expected to produce. Otherwise, why be a chip manufacturer at all? We are not talking about a product you can sell at a loss to make money in the future. I would think IBM expects to see the best chips rolling out of its plants.
 
pjkelnhofer said:
Someone who knows a lot more about this than I do can go into more depth on this one, but I am under the impression that modern Intel chips also have an "emulation layer" to make them x86-architecture compatible.
You're thinking of Intel's Itanium line of processors. The Pentiums (including Pentium 4 and Pentium M) are THE original x86 processors (no emulation).
 
Since the Pentium Pro, the Pentiums have been emulating x86

wrldwzrd89 said:
You're thinking of Intel's Itanium line of processors. The Pentiums (including Pentium 4 and Pentium M) are THE original x86 processors (no emulation).

This isn't correct (and the 8086, 286, 386, and others are more truly the originals).

Starting with the Pentium Pro, the Pentium chips have had a RISC-like core that has a translation layer to emulate the x86 instruction set.

The x86 instructions are broken down into "micro-ops" that the core executes.

http://www.intel.com/design/Pentium4/prodbref/index.htm

In addition to the data cache, the Pentium 4 processor includes an Execution Trace Cache that stores up to 12-K decoded micro-ops in the order of program execution.

This increases performance by removing the decoder from the main execution loop and makes more efficient usage of the cache storage space since instructions that are branched around are not stored.

The result is a means to deliver a high volume of instructions to the processor's execution units and a reduction in the overall time required to recover from branches that have been mis-predicted.
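To make the micro-op idea above concrete, here is a minimal, purely illustrative sketch in Python. The mnemonics and the exact decompositions are invented for the example; real decoders are hardware, and the actual micro-op encodings Intel and AMD use are not public.

```python
# Toy illustration of x86-to-micro-op decoding.  The micro-op names and the
# mappings below are invented for this example, not Intel's or AMD's real ones.
def decode(insn):
    """Break a complex x86-style instruction into simpler RISC-like steps."""
    table = {
        # register-to-register add: already simple, maps to a single micro-op
        "add eax, ebx": ["uop.add eax, eax, ebx"],
        # memory-to-register add: split into a load plus an add
        "add eax, [ebx]": ["uop.load tmp0, [ebx]",
                           "uop.add  eax, eax, tmp0"],
        # register-to-memory add: load, add, then store the result back
        "add [ebx], eax": ["uop.load  tmp0, [ebx]",
                           "uop.add   tmp0, tmp0, eax",
                           "uop.store [ebx], tmp0"],
    }
    return table.get(insn, [insn])  # anything unlisted passes through 1:1

if __name__ == "__main__":
    for uop in decode("add [ebx], eax"):
        print(uop)  # the RISC-like core only ever sees these simple operations
```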
 
AidenShaw said:
This isn't correct.

Starting with the Pentium Pro, the Pentium chips have had a RISC-like core that has a translation layer to emulate the x86 instruction set.

The x86 instructions are broken down into "micro-ops" that the core executes.
I didn't know that...I learn something new every day on these forums :)
 
AidenShaw said:
...and how long ago were they ordered ????

Not exactly sure. I'm divisional support; a department just got them. I know they were evaluating G4s before the G5 was announced, so they probably ordered them pretty early.
My point was really that they are finally shipping in quantity.
 
AidenShaw said:
This isn't correct.

Starting with the Pentium Pro, the Pentium chips have had a RISC-like core that has a translation layer to emulate the x86 instruction set.

The x86 instructions are broken down into "micro-ops" that the core executes.

See, I knew that someone smarter than me would explain this properly.
 
AidenShaw said:
This isn't correct.

Starting with the Pentium Pro, the Pentium chips have had a RISC-like core that has a translation layer to emulate the x86 instruction set.

The x86 instructions are broken down into "micro-ops" that the core executes.

The Athlon family also uses RISC-like micro-ops. This is one of the big reasons why x86 hasn't died like many were predicting in 1994 when the PPC came out.
Even the G5 breaks PPC instructions down into simpler micro-ops. This is pretty common these days.

I don't see how this has anything to do with how one evaluates performance though.
 
ffakr said:
The Athlon family also uses RISC-like micro-ops. This is one of the big reasons why x86 hasn't died like many were predicting in 1994 when the PPC came out.
Even the G5 breaks PPC instructions down into simpler micro-ops. This is pretty common these days.

I don't see how this has anything to do with how one evaluates performance though.

It doesn't really have anything to do with evaluating performance. It started as a question of Apple going with IBM vs. AMD.
 
river_jetties said:
AMD has basically put an emulation layer on top of their chips to be x86 compatible. So when you quote Windows numbers, that's not really accurate. Much of their gain is from memory architecture.
My point isn't that they should change now, but that they might have made the wrong choice last year.

Also, related to Virginia Tech, name one project that has been done on it. Gee, there isn't one, because it was all a marketing decision, not a technical one.

The biggest reason (and this was the logic behind my original post) was that IBM runs many businesses and THEY MUST MAKE THE NUMBERS EVERY QUARTER. Chip-making is R&D-intensive, and if they're going to cut, this is a place where they will cut, whereas AMD MUST PRODUCE THE BEST CHIPS IN ORDER TO MAKE THE NUMBERS.
Paying $4.5 million for marketing reasons? :D :D :D
Sorry, but while marketing reasons may have played into their decision, I don't believe they were a major factor. I am at a university right now, and I don't think the university professors did this for marketing reasons.
As for naming a project, can you name me many projects for the other top 9? :rolleyes: The system is offline now due to the upgrade. They are expecting the Xserves in June (due to the delays); we'll see the projects after it is set up (again).

Maverick
 
qubex said:
But having just recently introduced the POWER4-derived 970, doesn't it seem odd that IBM would run off and introduce the POWER5-derived "975" (or whatever) a mere twelve months later?

Actually, it doesn't seem at all odd to me, because the 970FX looks to be moving in one direction, and the potential 975/980 in another. The former is undergoing massive power optimizations to have dropped 50% of its heat in a single generational revision, and I'll readily grant that (though I still think it's too hot for a portable). It's also ideal for the growing market of blade servers, where IBM could cram two on a card and sell them as 7U racks with 48 processors (or however many) in the space where 1U servers would only fit 14 at the moment. In those environments, heat is a concern, but you're already spending a lot of money on powering and cooling the whole rack.

On the other hand, I can see the 975/980 being positioned as a server- and workstation-class chip, where you want as much performance as possible from a smaller number of processors. If you fit two to four of them in a chassis and kit them out with a very different kind of system - PCI-Express and so on - then you come up with the kind of system that Apple needs to sell, and which IBM could also market under their own brand. They can't be happy about being beholden to Intel and Microsoft, and having a serious contender would allow them some freedom to pursue their own agenda, which seems to tie in neatly with that Office application they just announced.

It seems to me that would instantly nullify all the hard work and money invested in the 970: in other words, retrospectively, investing all that hard work in the 970 would look like a very bad idea if they were already planning to introduce the 975 a year ago.

Segmentation of the market, man. IBM can easily afford to roll out a new generation that's more powerful than their embedded 750 line while not cannibalizing it too badly (power consumption REALLY matters to some). They're going to need to answer the new efforts from Freescale or risk losing business to the relative newcomer in the current market.
 
prototype vs. production

pjkelnhofer said:
It is nice to dream, isn't it? But there is absolutely no indication that IBM, who cannot even mass-produce the 970 or 970fx chips with a good yield, is secretly producing a chip that is clocked 50% higher and even faster than that in overall performance. Maybe we will be surprised, but I doubt that they could keep the existence of a chip that would be essentially twice as fast as what is out there right now this much of a secret.

I've got to echo this. Building a prototype in a lab and mass producing a product are two completely different games, and unless IBM was cranking out thousands of 975s or whatever back when Steve made his projection, it was a foolish prediction to make (even if it does come true).

I think as chip manufacturing continues to approach a near-atomic level, we're going to see more and more of a discrepancy between making a prototype chip in a lab and reliably mass-producing it. Predictions and roadmaps will become less meaningful until serious breakthroughs in materials engineering take place. Just a thought.

One last thing: in response to me pointing out that IBM announced the 970 a full year before it shipped in volume as the G5, several people have written off the "secrecy" around the 975 as a result of Apple's request, saying that Apple is probably the sole financier of the project and therefore has more control over what information is released. That's a pretty weak argument considering that NO ONE else (other than IBM itself) uses the 970. If someone else does, please point it out. After the G5 was released, the joint venture with IBM was always described in general terms as "Apple hired IBM to make the 970". There's no doubt that Apple had a SIGNIFICANT investment, if not the entire investment, in the 970, and yet it was still announced a year before it shipped for Apple. If anything, between the 970 and the 975, the 970 should have been kept more secret, as it was an entire generational leap in processors for Apple. Why secrecy now and not then? If there is an argument for secrecy now, it sure isn't "Apple financed this processor". Keep trying.
 
jakemikey said:
One last thing: in response to me pointing out that IBM announced the 970 a full year before it shipped in volume as the G5, several people have written off the "secrecy" around the 975 as a result of Apple's request, saying that Apple is probably the sole financier of the project and therefore has more control over what information is released. That's a pretty weak argument considering that NO ONE else (other than IBM itself) uses the 970. If someone else does, please point it out. After the G5 was released, the joint venture with IBM was always described in general terms as "Apple hired IBM to make the 970". There's no doubt that Apple had a SIGNIFICANT investment, if not the entire investment, in the 970, and yet it was still announced a year before it shipped for Apple. If anything, between the 970 and the 975, the 970 should have been kept more secret, as it was an entire generational leap in processors for Apple. Why secrecy now and not then? If there is an argument for secrecy now, it sure isn't "Apple financed this processor". Keep trying.

Some of the answers you have been given had nothing to do with Apple's investment. To repeat them briefly: before Apple officially announced the G5 at WWDC 2003, linking the 970 to Apple was just guessing. Now that Apple and IBM have said they will be working together closely on new PPCs for the foreseeable future, a publicly announced 975 would be linked to Apple with absolute certainty, which would destroy Apple's secrecy. Keep listening.
 
the future said:
Some of the answers you have been given had nothing to do with Apple's investment. To repeat them briefly: before Apple officially announced the G5 at WWDC 2003, linking the 970 to Apple was just guessing. Now that Apple and IBM have said they will be working together closely on new PPCs for the foreseeable future, a publicly announced 975 would be linked to Apple with absolute certainty, which would destroy Apple's secrecy. Keep listening.

"just guessing"? The G4's were way behind in performace, and IBM comes out with a 64-bit processor, a PowerPC processor (!), with the specs and heat ratings suitable for a desktop, and you're telling me it was "just guessing"? How many other computer manufacturers use the PowerPC platform? How often does IBM deviate from making beastly server processors into making desktop processors just for fun?
 
pjkelnhofer said:
Of course it was all marketing. Aren't most of these supercomputers built to be just that: bragging rights for the institution that built them and the companies that supplied the parts? How many people can, off the top of their head, name projects run on any of the top ten supercomputers in the world?
These computers are mostly used for science projects, I think: computationally intensive tasks such as calculating protein folding, weather forecasting, running climate models, modelling nuclear bombs, and so on. Of course it's great for a scientific institution and the suppliers to have the fastest supercomputer, but the scientific output is the most important part. The supercomputers around the world are very often paid for (or leased) by various national research councils. There is usually very little to be gained from having the best supercomputer unless you have scientists using them...
 
are you using a Pentium for your math?

thatwendigo said:
The [970fx] is undergoing massive power optimizations to have dropped 50% of its heat in a single generational revision, and I'll readily grant that (though I still think it's too hot for a portable).

It's also ideal for the growing market of blade servers, where IBM could cram two on a card and sell them as 7U racks with 48 processors (or however many) in the space where 1U servers would only fit 14 at the moment.

First of all, you can fit seven 1U servers in a 7U space - that should be pretty obvious.

The IBM BladeCenter chassis is 7U, and holds 14 dual CPU blades. (http://www-1.ibm.com/servers/eserver/bladecenter/chassis/more_info.html)

IBM is currently shipping dual 3.2GHz Xeon blades, so they've obviously solved the heat problem (the two 8-inch squirrel cage blowers in the BladeCenter chassis pull a *lot* of air over the heat sinks). BTW, the chassis has 7200 watt power supplies (4*1800w) for the 7U unit...
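To put rough numbers on that density difference, here is a quick back-of-the-envelope sketch; the 42U rack height and the dual-CPU 1U boxes used for comparison are my assumptions, not figures from IBM's spec sheet.

```python
# Rough CPU-density comparison: 7U BladeCenter chassis vs. plain 1U servers.
# Assumes a standard 42U rack and dual-CPU 1U servers (illustrative only).
RACK_U = 42

# IBM BladeCenter: 7U chassis holding 14 blades, each with 2 CPUs
bladecenter_cpus_per_chassis = 14 * 2                       # 28 CPUs in 7U
bladecenter_cpus_per_rack = (RACK_U // 7) * bladecenter_cpus_per_chassis

# Conventional 1U servers: 2 CPUs per 1U box
oneu_cpus_per_rack = RACK_U * 2

print(f"BladeCenter: {bladecenter_cpus_per_chassis} CPUs per 7U, "
      f"{bladecenter_cpus_per_rack} per 42U rack")
print(f"1U servers:  14 CPUs per 7U, {oneu_cpus_per_rack} per 42U rack")
```

In other words, the blades roughly double the processor count per unit of rack space compared with dual-CPU 1U boxes.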

 
Zappa said:
These computers are mostly used for science projects, I think: computationally intensive tasks such as calculating protein folding, weather forecasting, running climate models, modelling nuclear bombs, and so on. Of course it's great for a scientific institution and the suppliers to have the fastest supercomputer, but the scientific output is the most important part. The supercomputers around the world are very often paid for (or leased) by various national research councils. There is usually very little to be gained from having the best supercomputer unless you have scientists using them...

I shouldn't have said it was all marketing, but a big part of getting a computer onto the "fastest supercomputer" list is about having bragging rights, not to mention excellent PR when someone is trying to find a place to do the kind of project that requires teraflops of computing. Like you said, they are often leased, so it is important to be able to get your computer used and thus make money for the institution that owns it.

I also understand what kind of projects are normally done on these things, but I was trying to point out to the original poster (who was implying that the Big Mac was a fraud because we couldn't name something done on it) that outside of the people actually working on the projects, 99.9% of them are unknown. These are not the kind of thing that gets reported on the evening news. They go on silently, probably doing amazing work that is benefiting everyone, but with most people having no idea that the work is being done at all.
 
jakemikey said:
"just guessing"? The G4's were way behind in performace, and IBM comes out with a 64-bit processor, a PowerPC processor (!), with the specs and heat ratings suitable for a desktop, and you're telling me it was "just guessing"? How many other computer manufacturers use the PowerPC platform? How often does IBM deviate from making beastly server processors into making desktop processors just for fun?

Hindsight makes guessing that much easier, doesn't it? Maybe you don't remember the heated discussions, here on MacRumors and elsewhere, but there were a great many doubters who were not convinced at all that the 970 would become the G5.

Oh how I wish that in hindsight it will be so very obvious that WWDC 2004 would see the introduction of the PPC 975 as Rev. 2 of the G5...
 
blades....AIX workstations and small servers

jakemikey said:
How often does IBM deviate from making beastly server processors into making desktop processors just for fun?

IBM was showing PPC970 chips in their BladeCenter boards before the Apple G5 announcement.

IBM also has a confused set of desktop and small server systems running various older POWER chips - simplifying that product line with the PPC970 would make a lot of sense.

Those were both good reasons for the 970, without any involvement with Apple.
 
pessimism/realism

AidenShaw said:
IBM was showing PPC970 chips in their BladeCenter boards before the Apple G5 announcement.


Are you sure about that? I seem to remember reading that announcement a few months after WWDC - on MacRumors, no less. I could be mistaken.

Anyway, I'm glad we've got the two camps on this thread. Keeps things interesting. I guess I'm in the pessimist/realist camp because I think it'd be more interesting to see what Apple does under the pressure of not delivering a 3 GHz machine. Apple's got a lot of talent, and talent goes a lot further under extreme pressure. Of course I guess they didn't do a whole lot during the G4 lag - except market the idea of a "megahertz myth".

Just another thought--I wonder what the long-term precedent has been as far as the gap between a chip announcement from the manufacturer and its implementation in an actual product. Just off-hand I remember Moto announcing their 1.5 GHz chips a good month before they hit the PowerBooks. I can't think of anything else off-hand, but can anyone help me out? What's the precedent for Apple announcing a breakthrough product with a brand-new chip with no prior manufacturer announcement? Just curious - I honestly have no clue other than the G4 1.5 and the 970.
 
AidenShaw said:
IBM is currently shipping dual 3.2GHz Xeon blades, so they've obviously solved the heat problem (the two 8-inch squirrel cage blowers in the BladeCenter chassis pull a *lot* of air over the heat sinks). BTW, the chassis has 7200 watt power supplies (4*1800w) for the 7U unit...

That wattage is misleading, because those are redundant power supplies. Typically (I don't have one of these to confirm it) you can run the server with about half of the redundant power supplies. It wouldn't be unusual to require at least 3 of the 4 power supplies to actually boot the system, mainly because all the drives spin up at once. Since I don't own one, I can't (easily) confirm this, but it is probably 2 of 4 or 3 of 4 required to run, or at worst 3 of 4 required to run and 4 of 4 required to boot.
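For what it's worth, here is the arithmetic under a few possible redundancy schemes; as noted above, which scheme the BladeCenter actually uses is a guess, so this only shows why the installed 7200 W overstates the usable capacity.

```python
# Usable power for a chassis with four 1800 W supplies, under guessed
# redundancy schemes -- which scheme BladeCenter actually uses is not
# confirmed here; this just illustrates installed vs. usable wattage.
modules = 4
watts_each = 1800

schemes = {
    "no redundancy (4 of 4 carry load)": 4,
    "N+1 (3 of 4 carry the load)":       3,
    "N+N (2 of 4 carry the load)":       2,
}

for name, active in schemes.items():
    print(f"{name}: {active * watts_each} W usable "
          f"of {modules * watts_each} W installed")
```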

One thing I haven't seen a lot of notice of...
AFAIK, the typical wattage quoted for a 2 GHz 970 or 970fx is the typical wattage at the rated speed. Both CPUs bus-slew; the 2 GHz 970s slew down to a CPU clock of 1300 MHz.
The Centrino chips (Dothan and Banias) have very low typical thermal ratings because they are designed to run at 600 MHz unless under load, at which point they clock up. I've seen much higher numbers for the Dothan when the CPU is running at full speed. Intel reports a "Thermal Guideline" of 21.0 W for a 2.0 GHz Dothan. I'm not precisely sure what they mean by "Thermal Guideline", but I'm assuming it is the upper thermal range that OEMs need to deal with when building 2 GHz Dothan systems. This number (low 20s) is consistent with what I've seen in online reviews of Pentium-Ms.

Saying that a PPC 970fx can't run in a notebook based on a typical rating of 24.5 watts at 2 GHz ignores many facts:
* Who says it has to be 2 GHz?
* What would the average wattage be if it ran at the bus-slewed 1300 MHz? (A rough estimate is sketched below.)
* PC manufacturers already make notebooks with Pentium 4 desktop processors and Athlon 64s, which use MUCH more than 24.5 watts. Sure, their battery life sucks unless they de-clock themselves, but thermally, it has been done.
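Here is a rough estimate for the bus-slewing question in the second point above, assuming dynamic power scales roughly with frequency (and with voltage squared if the core voltage also drops). The 24.5 W figure is the typical rating quoted earlier; the voltage values are purely illustrative guesses.

```python
# Rough dynamic-power scaling estimate for a bus-slewed 970fx.
# P_dynamic is roughly proportional to f * V^2.  The 24.5 W typical figure
# comes from the discussion above; the core voltages are illustrative only.
typical_watts_2ghz = 24.5

def scaled_power(p0, f0, f1, v0, v1):
    """Scale dynamic power from (f0, v0) to (f1, v1) using P ~ f * V^2."""
    return p0 * (f1 / f0) * (v1 / v0) ** 2

# Frequency-only slewing, 2.0 GHz -> 1.3 GHz at constant voltage
print(round(scaled_power(typical_watts_2ghz, 2.0, 1.3, 1.0, 1.0), 1))  # ~15.9 W

# If the core voltage also dropped, say from 1.3 V to 1.1 V (a guess)
print(round(scaled_power(typical_watts_2ghz, 2.0, 1.3, 1.3, 1.1), 1))  # ~11.4 W
```

Even the frequency-only case lands well under 20 W typical, which is in the same neighborhood as the Dothan numbers discussed above.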

jmho,
ffakr

edit: Intel's 2GHz Dothan info:
http://processorfinder.intel.com/sc...cFam=942&PkgType=ALL&SysBusSpd=ALL&CorSpd=ALL
 
jakemikey said:
"just guessing"? The G4's were way behind in performace, and IBM comes out with a 64-bit processor, a PowerPC processor (!), with the specs and heat ratings suitable for a desktop, and you're telling me it was "just guessing"? How many other computer manufacturers use the PowerPC platform? How often does IBM deviate from making beastly server processors into making desktop processors just for fun?

IBM has produced PowerPC chips from the very beginning (PowerPC 601). Apple always knew they COULD go to IBM; they just didn't for a long time. I think he's right to say that when Moto failed on their version of the G5, Apple went to IBM and said "slap some AltiVec on a desktop-ified POWER4." Now that IBM delivered on that promise, and (allegedly?) delivered prototype 3 GHz chips last year, Jobs said we'll have desktops running at 3 GHz in one year. Not sure what those chips were based on, but there are a couple of things we know for sure:

1) IBM had prototype PowerPC chips running at 3 GHz last year at this time. They could've been 970s, 975s, 980s, etc., but they had them. That's the basis for Jobs saying "in one year we'll be at 3 GHz." This is pretty widely reported.

2) IBM is shipping POWER5-based servers in two weeks, on June 11th.

3) IBM is building the successor to the 970 off the POWER5 design simultaneously. This is the opposite of how they did the POWER4/970.

4) IBM hit snags with the 90nm/SSOI process but is largely past them now.

Take it for what you will, but I still believe the 970 successor is coming soon.
 