macsrus said:
First off, I'm not saying that Apple is definitely overclocking the CPUs.

Second, in my company's negotiations with IBM over a cluster composed of their JS20 Blade Servers (PPC 970), it was brought up why Apple was currently selling 2.0 GHz 970s in the Xserve while IBM was only selling 1.6 GHz in their Blades.

IBM told us, and I quote, that running the CPU at 2.0 (these were still 130nm at the time) "was more of a risk for thermal problems than IBM was willing to take."
They also said that "if Apple wanted to clock theirs to 2.0 and take the chance of burning them up then that was Apple's problem."

I would suspect that IBM feels the same way about the current batch of PPC chips they are selling to Apple.


Sounds like a salesman lying to you to try to get your business.

IBM stamps the CPU in the xServe at 2.0 GHz. That means that the people who make and sell the chips stand behind it at that clock speed. They're not going to do that if they don't think 2.0 is a reasonable speed.

His allegations otherwise are pure FUD.

Why don't you ask him if he knows of a single example of an xServe 'burning up'? If he can't provide one, he needs to shut up or he's open to a libel suit.
 
Am I missing something here?

The way people are talking about overclocking chips, it seems like there is no limit to the speed that can be achieved. Does that mean a 1.6 GHz G5 could be run at 2.5 GHz if it's kept cool enough?! :confused:

ps- please see my above post about WWDC expectations.
 
Soire said:
Am I missing something here?

The way people are talking about overclocking chips, it seems like there is no limit to the speed that can be achieved. Does that mean a 1.6 GHz G5 could be run at 2.5 GHz if it's kept cool enough?! :confused:

ps- please see my above post about WWDC expectations.

I believe technically, IBM should really only be producing one chip for Apple--these chips are then "relabeled". There are three kinds of chips that can be yielded: consumer-level (can't clock that high; these are the 1.6-1.8s), mid-level (can do 2.0 or 2.5 with substantial help) and high-level (can do 2.5 without breaking a sweat).

When I say it can hit a clock speed "easily", I mean that it can do it with LESS voltage and is STABLE. Not every chip that comes off a wafer can hit the highest clock speed--it is usually ONLY the chips right from the center of the wafer (at the 'sweet spot'). So technically, if IBM were to mislabel a 2.5 as a 1.6, that 1.6 could hit 2.5 easily (provided the ROM or BIOS equivalent allows you to supply slightly more vCore, if it is needed).

This is the basic concept behind the legendary Athlon 1700+ (they originally run at 1467 MHz; with a simple BIOS change they run at 2200 MHz and are stable). However, the chances of a "mislabeling" happening with IBM are pretty slim, so don't get your hopes up. I'm pretty sure IBM does not share the SAME overclocking reputation that AMD has -_-.
 
Mav451 said:
I believe technically, IBM should really only be producing one chip for Apple--these chips are then "relabeled". There are three kinds of chips that can be yielded: consumer-level (can't clock that high; these are the 1.6-1.8s), mid-level (can do 2.0 or 2.5 with substantial help) and high-level (can do 2.5 without breaking a sweat).

When I say it can hit a clock speed "easily", I mean that it can do it with LESS voltage and is STABLE. Not every chip that comes off a wafer can hit the highest clock speed--it is usually ONLY the chips right from the center of the wafer (at the 'sweet spot'). So technically, if IBM were to mislabel a 2.5 as a 1.6, that 1.6 could hit 2.5 easily (provided the ROM or BIOS equivalent allows you to supply slightly more vCore, if it is needed).

This is the basic concept behind the legendary Athlon 1700+ (they originally run at 1467 MHz; with a simple BIOS change they run at 2200 MHz and are stable). However, the chances of a "mislabeling" happening with IBM are pretty slim, so don't get your hopes up. I'm pretty sure IBM does not share the SAME overclocking reputation that AMD has -_-.

This is exactly how it is done, with one addendum. When a company gets an exceptionally good yield (as in lots of high-end chips) but needs to fill orders for lower-end chips, they will simply stamp them and ship them out as the lower-end chip.

It is this practice that often makes "overclocking" a PC not such a big deal. Later in the process, when yields have improved, there may be a majority of high-end chips getting produced but still a need to fill the lower-end orders. I doubt that IBM is at that point yet.
 
pjkelnhofer said:
This is exactly how it is done, with one addendum. When a company gets an exceptionally good yield (as in lots of high-end chips) but needs to fill orders for lower-end chips, they will simply stamp them and ship them out as the lower-end chip.

It is this practice that often makes "overclocking" a PC not such a big deal. Later in the process, when yields have improved, there may be a majority of high-end chips getting produced but still a need to fill the lower-end orders. I doubt that IBM is at that point yet.

Yeah, that's what I'm thinking. And since selling to Macs is nowhere near the sheer volume that AMD puts out, I have a feeling it will be quite a while until there are that many "low-end" orders to warrant that step. Oh well *_*
 
macsrus said:
There is no doubt that Apple always uses the same 4 or 5 commercial apps
as their benchmark... there is good reason for this... because they're the only freaking apps that run on a Mac.

Let's face it, Apple really only markets to other Apple users... PC users don't really switch because they can't run the apps they are used to running...
And there really aren't any Macintosh equivalents...

I know I will get flamed for this post, but a walk into any software/computer store will prove me right... 1000s of PC apps... few or no Mac apps on the shelf.

That being said, I still like my Mac...


Which apps that the "average" computer user uses have no Mac equivalents? Office? Well, there's Office for Mac, as well as better productivity apps, like RagTime (free for individuals) or AppleWorks. Presentations? PowerPoint, Keynote (a fraction of the cost of PP). Publishing? Granted, Publisher only comes for Windoze, but MacPublisherPro ($15) and RagTime are equally powerful.

Digital home movies? Nothing I have seen in Windoze comes close to iMovie. Add SoundStudio for a pittance and you have a fair home audio/video production station, the equivalent of which I have not seen in Windoze.

Finance? Quicken and TurboTax are cross-platform.

There used to be a dearth of CAD programs for Macs, but the newer X-based apps are getting rave reviews.

If you are talking about FrontPage, also Windoze-only, I will agree with you that there is no Mac equivalent... thankfully. However, with $20 for Joe Burns' HTML Goodies and a weekend, any Mac user can produce sound web sites that are devoid of all the crap code FrontPage users produce.

"1000s of PC apps"? Who uses them? There are thousands of Mac apps available, too. Oh, you must be talking about games!

So, let's see: for the average home user, the cost of switching is a new Mac plus about $60 for SoundStudio, $100 for Toast 6 (the one hole in the Mac toolkit), broadband access to download RagTime, and a couple of weekends.

...and they won't need a teenage computer geek to explain the hookup!

No flames here, just don't buy the propaganda. For most people, everything they use is available on Macs!
 
jragosta said:
Sounds like a salesman lying to you to try to get your business.

IBM stamps the CPU in the xServe at 2.0 GHz. That means that the people who make and sell the chips stand behind it at that clock speed. They're not going to do that if they don't think 2.0 is a reasonable speed.

His allegations otherwise are pure FUD.

Why don't you ask him if he knows of a single example of an xServe 'burning up'? If he can't provide one, he needs to shut up or he's open to a libel suit.

That's funny... Here it is nine months later and IBM still doesn't offer faster than 1.6 GHz PPC in their JS20...
And again, here is why...

First, IBM did not volunteer the info I gave earlier. They gave it to us when we were drilling them as to why we couldn't get 2.0 GHz processors instead of the 1.6s.
They stated that they would not use the 2.0s (130nm, used in the Power Mac) or the original 2.0 (90nm, used in the xserve) in their JS20 because those CPUs didn't have power/thermal control built in...
i.e., fan failure leads to thermal failure.

They said they wouldn't upgrade their system until the new series with those power/thermal controls was made.

So this was not IBM trying to sow FUD, as you made out... just them giving their reasons as to why they made the business decisions they did.

IBM is much more cautious when they design systems than others who resell their technology usually are.
That's one of the primary reasons why IBM's reliability is better than the rest (i.e., IBM builds systems with five-nines reliability).
I figure some will take exception to my last statement, but anyone who has had large IBM systems knows I speak the gospel.

Now:

The newer 90nm chips that have power control added (i.e., those currently being produced) will be offered at 2.2 GHz for the IBM JS20 in late July/early August.
During this same time frame, Apple will be putting the same CPU at 2.3 GHz in their xserves. (The 2.5s that Apple is using are also this newer design.)
 
Mav451 said:
I believe technically, IBM should really only be producing one chip for Apple--these chips are then "relabeled". There are three kinds of chips that can be yielded: consumer-level (can't clock that high; these are the 1.6-1.8s), mid-level (can do 2.0 or 2.5 with substantial help) and high-level (can do 2.5 without breaking a sweat).

When I say it can hit a clock speed "easily", I mean that it can do it with LESS voltage and is STABLE. Not every chip that comes off a wafer can hit the highest clock speed--it is usually ONLY the chips right from the center of the wafer (at the 'sweet spot'). So technically, if IBM were to mislabel a 2.5 as a 1.6, that 1.6 could hit 2.5 easily (provided the ROM or BIOS equivalent allows you to supply slightly more vCore, if it is needed).

I'm not sure I understand you correctly, but I think you have it right.

IBM makes a batch of 970FX chips. They then test them. The ones that make it to 2.5 GHz go into the 2.5 bin - until they have enough to meet orders for the 2.5. Then, the ones that make 2.0 go into the 2.0 bin - until they have enough to make their orders. The ones that are left over go into the 1.8 bin. These either failed at 2.0 or were never tested above 1.8.
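That bin-until-the-orders-are-filled flow can be sketched in a few lines of Python. All pass rates and order quantities below are invented purely for illustration; nothing here reflects IBM's actual yields.

```python
# Hypothetical sketch of speed binning: test each die at descending
# clock targets, fill the fastest bin until its order book is
# satisfied, then fall through to slower bins.
import random

random.seed(42)

SPEEDS = [2.5, 2.0, 1.8]              # GHz, fastest first
ORDERS = {2.5: 5, 2.0: 10, 1.8: 50}   # hypothetical order book

def max_stable_speed():
    """Pretend silicon lottery: most dies only pass at lower clocks."""
    r = random.random()
    if r < 0.05:
        return 2.5
    if r < 0.30:
        return 2.0
    return 1.8

def bin_wafer(n_dies):
    bins = {s: [] for s in SPEEDS}
    for die in range(n_dies):
        ceiling = max_stable_speed()
        # Stamp the die at the fastest speed it passes AND that still
        # has unfilled orders -- a fast die can end up stamped slower
        # once the fast orders are met; dies that fit no bin are rejects.
        for speed in SPEEDS:
            if speed <= ceiling and len(bins[speed]) < ORDERS[speed]:
                bins[speed].append(die)
                break
    return bins

bins = bin_wafer(100)
for speed in SPEEDS:
    print(f"{speed} GHz bin: {len(bins[speed])} chips")
```

Note that in this toy model a "1.8" is never deliberately a down-stamped 2.5 unless the 2.5 and 2.0 orders are already full, which matches the scarcity argument above.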
 
macsrus said:
That's funny... Here it is nine months later and IBM still doesn't offer faster than 1.6 GHz PPC in their JS20...
And again, here is why...

First, IBM did not volunteer the info I gave earlier. They gave it to us when we were drilling them as to why we couldn't get 2.0 GHz processors instead of the 1.6s.
They stated that they would not use the 2.0s (130nm, used in the Power Mac) or the original 2.0 (90nm, used in the xserve) in their JS20 because those CPUs didn't have power/thermal control built in...
i.e., fan failure leads to thermal failure.

They said they wouldn't upgrade their system until the new series with those power/thermal controls was made.

So this was not IBM trying to sow FUD, as you made out... just them giving their reasons as to why they made the business decisions they did.

IBM is much more cautious when they design systems than others who resell their technology usually are.
That's one of the primary reasons why IBM's reliability is better than the rest (i.e., IBM builds systems with five-nines reliability).
I figure some will take exception to my last statement, but anyone who has had large IBM systems knows I speak the gospel.

Now:

The newer 90nm chips that have power control added (i.e., those currently being produced) will be offered at 2.2 GHz for the IBM JS20 in late July/early August.
During this same time frame, Apple will be putting the same CPU at 2.3 GHz in their xserves. (The 2.5s that Apple is using are also this newer design.)


Your original statement said that the 2.0 was too hot for server use. Since Apple is using them without difficulty, that's pure FUD.

IBM is free to say that they're not good enough at thermal management to put the 2.0 into a blade and that might be true. But that's not what you said originally. You said that the 2.0 would burn up in server use - which is outright false. Apple has no problem with it.
 
gloftis said:
Which apps that the "average" computer user uses have no Mac equivalents?

You obviously must not live or work in the real world...

As a former IT manager at three different companies (the smallest of which had 350 employees), I found Macs were the most troublesome computers on our network...
Not because of reliability...
Not because of ease of use...
And not because Apple doesn't build a great computer... (even though all Mac OSes before OS X looked like crappy Windows 3.1)

BUT because of their lack of business applications compatible with the rest of the world...

And no, I'm not talking about FREAKING word processors and the like.

Almost every piece of business software we used at any of the companies I have worked at would not run on a Mac.

WAIT...

I TAKE that back.

If I installed Virtual PC on them and ran a COPY of Windoze, then I could get my users access to the apps they needed...

Actually, in most cases users that had Macs also had to have PCs.

It would take me hours to list all the apps that Macs are not compatible with...

Again, this isn't a fault of the Mac per se... it is just a fact of life in this monopoly world we live in.
 
pjkelnhofer said:
That is also someone from IBM sales who most certainly does not want you buying XServes instead of Blades.
Also, the XServe never had a 130nm 970 in it; it did not go to G5 until the 970FX was available, and that has been the only chip ever in the G5 XServe.

I never meant to imply that the 970 used in the xserve was 130nm.

Check my other post; I explain it better there.

Also, I never said that we bought the blades instead of the xserves :)
 
jragosta said:
IBM is free to say that they're not good enough at thermal management to put the 2.0 into a blade and that might be true. But that's not what you said originally. You said that the 2.0 would burn up in server use - which is outright false. Apple has no problem with it.

Actually, due to the newness of the xserves, the jury is still out...

Although I think Apple has done an adequate job of cooling them... the problem comes in when the fans fail...

The 970FX that is currently in the xserve does not have power/thermal control... so when fans die, thermal problems will happen...

Now you may say that the same thing will happen to the 1.6s...

Ahh, that's the whole enchilada there... The 1.6s are cool enough without the fans running... That was IBM's point.

By the way, the 1.6s IBM used in their JS20 are the identical 2.0 GHz 90nm 970FX used in the original xserve... they're just underclocked.
 
Au contraire

macsrus said:
You obviously must not live or work in the real world...

As a former IT manager at three different companies (the smallest of which had 350 employees), I found Macs were the most troublesome computers on our network...
Not because of reliability...
Not because of ease of use...
And not because Apple doesn't build a great computer... (even though all Mac OSes before OS X looked like crappy Windows 3.1)

BUT because of their lack of business applications compatible with the rest of the world...

And no, I'm not talking about FREAKING word processors and the like.

Almost every piece of business software we used at any of the companies I have worked at would not run on a Mac.

WAIT...

I TAKE that back.

If I installed Virtual PC on them and ran a COPY of Windoze, then I could get my users access to the apps they needed...

Actually, in most cases users that had Macs also had to have PCs.

It would take me hours to list all the apps that Macs are not compatible with...

Again, this isn't a fault of the Mac per se... it is just a fact of life in this monopoly world we live in.

I was a technical writer for 16 years and worked for at least 4 companies with 300+ employees. We used Word, FrameMaker, Photoshop, Powerpoint, Excel, Dreamweaver, Director, Authorware, Corel, and Illustrator, all of which are--or were--cross-platform compatible. The only kink we had was with on-line help, but we resolved that at the last two by dumping the inherently buggy Word/Robohelp "solution" in favor of FrameMaker/WebWorks single sourcing. In every case, I was able to work evenings on my Mac at home, even when all we had was PCs at work.

There was an issue with Project, but it was easy to sell management on FastTrack Schedule, which is cheaper than Project and more intuitive.

While I find it credible that the "business software" you were using was not Mac compatible, I suspect your company's software purchase decisions were driven by the IT manager instead of the users.

I ran into this phenomenon repeatedly. Several times I had to convince non-writers that it was cheaper in the medium-to-long run to move from "publishing" with Word, which is a huge exercise in futility (in the mid-1990s, Microsoft actually built its technical pubs with FrameMaker, then ported them to Word for publishing), to investing in FrameMaker, a true document publishing system that was, until May of this year, Mac/Windows/UNIX based.

On the other hand, people are, by nature, afraid of new things, so most users are not even aware of alternative apps, let alone eager to try them!

And, although the difference in capabilities is a myth, the myth will live as long as people can't get their Wordstar 2000 for Mac!

Perception is reality for the imagination-challenged.
 
macsrus said:
Actually, due to the newness of the xserves, the jury is still out...

Although I think Apple has done an adequate job of cooling them... the problem comes in when the fans fail...

The 970FX that is currently in the xserve does not have power/thermal control... so when fans die, thermal problems will happen...

Now you may say that the same thing will happen to the 1.6s...

Ahh, that's the whole enchilada there... The 1.6s are cool enough without the fans running... That was IBM's point.

By the way, the 1.6s IBM used in their JS20 are the identical 2.0 GHz 90nm 970FX used in the original xserve... they're just underclocked.

All of that is speculation - and most of it is wrong.

The xServe has been out for months. If there were going to be problems, they would have appeared by now. Most thermal failures occur in the first days or weeks. In fact, a chip that is problematic at a certain speed will often 'settle down' after time and work smoothly.

While I don't know whether the 970FX has thermal management, the xServe certainly does. If a fan fails, the system won't overheat.

As for the 1.6 IBMs being underclocked 2.0s, this has been explained to you several times. You might learn something if you'd listen.

At the time the xServe G5 was announced, here's the way it worked. IBM made a batch of 970FX chips. They tested them one at a time. A very tiny number might have passed at some speed > 2.0 GHz. These were presumably used for R&D or held until there were enough to sell. A modest number passed at 2.0 GHz. These would be stamped at 2.0. A greater number would have passed at 1.8 GHz. These would be stamped 1.8 GHz and so on.

Given that the 2.0s were very scarce and they couldn't make enough to meet demand, there's no way that they would be selling the scarce 2.0s as 1.6s- which are essentially the plentiful batch rejects.

You're 0 for 3 in this post. Congratulations.
 
jragosta said:
All of that is speculation - and most of it is wrong.
While I don't know whether the 970FX has thermal management, the xServe certainly does. If a fan fails, the system won't overheat.

As for the 1.6 IBMs being underclocked 2.0s, this has been explained to you several times. You might learn something if you'd listen.

EACH of your points was speculation...

But instead of trying to educate you on the facts... I'll just agree to disagree with you.
 
macsrus said:
EACH of your points was speculation...

But instead of trying to educate you on the facts... I'll just agree to disagree with you.

Wrong. My statements were based on knowing how the chips were made.

If IBM has so many 2.0 GHz 970FX chips, why is it that Apple is just now getting enough of them to get rid of the xServe backlog?

You should learn a little bit about how chips are made.
 
macsrus said:
By the way, the 1.6s IBM used in their JS20 are the identical 2.0 GHz 90nm 970FX used in the original xserve... they're just underclocked.

There seems to be some discussion in this thread about how chips are made. Just to clarify:

All chips, as long as they are the same core stepping, no matter what speed, come off the same silicon wafer. After they are made they are tested; say they test at 2 GHz. All that pass are marked and sold at that speed; those that fail are tested at the next lower benchmark, and so on. This doesn't mean they are underclocked, it just means they failed at the higher benchmarks even though they are the same chip.

Occasionally, what will happen is that demand for lower-end chips will be so high that the manufacturer has to sell high-speed chips as low-end chips just to satisfy demand. On the PC side this has happened with the AMD Athlon XP 2500+ (Barton core) and the Intel P4 2.4C (Northwood core), which are favourites with overclockers as you can run them at higher MHz without any problems. I have the Barton chip and I run it at 3200+ speed (2.2 GHz) with no change to core voltage, and it's not any hotter than running it at 2500+ (1.8 GHz). I haven't had ANY stability problems either. Serious overclockers have got this chip up to 2.4-2.5 GHz, but they had to change voltage and look at cooling.

Other than satisfying specific demand, there is no point for a manufacturer to intentionally underclock. It costs the same to make a 2 GHz chip as it does to make a 1.6 GHz chip, the difference being you can charge more for the 2 GHz chip.
 
jragosta said:
Wrong. My statements were based on knowing how the chips were made.

If IBM has so many 2.0 GHz 970FX chips, why is it that Apple is just now getting enough of them to get rid of the xServe backlog?

You should learn a little bit about how chips are made.


Everyone already knows the process that silicon manufacturers use to produce their chips... That is nothing new.

And to speculate as you do about where the CPUs are going, based on the process of CPU sampling, is just that... pure speculation.

You seem to think that IBM would sell the best parts to a competitor instead of using them themselves... (Even though they really could do just that, it isn't very likely.)


Consider this: IBM makes very little money from selling CPUs to Apple...
Believe it or not, Apple doesn't sell many computers...
I know for a fact that IBM uses more PPC 970s and 970FXs than Apple does.


I have a question for you... Do you own xserves? Do you own JS20 blade servers?

We do... I have seen the CPUs... I am not speculating.

I also know why there is an xserve backlog; it isn't just because of CPU yields (but before long everyone else will know the real reason for the backlog).

But again, I will just agree to disagree.
 
Bigheadache said:
There seems to be some discussion in this thread about how chips are made. Just to clarify:

All chips, as long as they are the same core stepping, no matter what speed, come off the same silicon wafer. After they are made they are tested; say they test at 2 GHz. All that pass are marked and sold at that speed; those that fail are tested at the next lower benchmark, and so on. This doesn't mean they are underclocked, it just means they failed at the higher benchmarks even though they are the same chip.

Occasionally, what will happen is that demand for lower-end chips will be so high that the manufacturer has to sell high-speed chips as low-end chips just to satisfy demand. On the PC side this has happened with the AMD Athlon XP 2500+ (Barton core) and the Intel P4 2.4C (Northwood core), which are favourites with overclockers as you can run them at higher MHz without any problems. I have the Barton chip and I run it at 3200+ speed (2.2 GHz) with no change to core voltage, and it's not any hotter than running it at 2500+ (1.8 GHz). I haven't had ANY stability problems either. Serious overclockers have got this chip up to 2.4-2.5 GHz, but they had to change voltage and look at cooling.

Other than satisfying specific demand, there is no point for a manufacturer to intentionally underclock. It costs the same to make a 2 GHz chip as it does to make a 1.6 GHz chip, the difference being you can charge more for the 2 GHz chip.

I have no disagreement with this... I have been an avid overclocker since the 486SX/20... overclocked mine to a massive, mind-boggling 33 MHz.
Man, those were the days...

I would add one change to your description of how CPU batches are tested...
Namely that in the early runs it is pretty much as you said, but as time progresses chips are no longer sorted during testing: if X number of CPUs fail at a certain clock speed, then the whole batch is retested at a lower speed until it passes. If fewer than X fail, then just the failed CPUs are removed and rebatched...
Also, even later on, the process generally gets so refined that virtually no CPUs fail the clock tests and they are just packaged (stamped at whatever speed) as demand requires.


As I said in my last post, I am not speculating on IBM using some 2.0s clocked at 1.6 in their JS20... we own a rack of them (6 BladeCenters, 14 blades each, 2 CPUs per blade). I pulled the heat sinks on a blade; they were 2.0s. I can't say all of them are, but the two I looked at were.
 
macsrus said:
I have no disagreement with this... I have been an avid overclocker since the 486SX/20... overclocked mine to a massive, mind-boggling 33 MHz.
Man, those were the days...

I would add one change to your description of how CPU batches are tested...
Namely that in the early runs it is pretty much as you said, but as time progresses chips are no longer sorted during testing: if X number of CPUs fail at a certain clock speed, then the whole batch is retested at a lower speed until it passes. If fewer than X fail, then just the failed CPUs are removed and rebatched...
Also, even later on, the process generally gets so refined that virtually no CPUs fail the clock tests and they are just packaged (stamped at whatever speed) as demand requires.

Yes, you are right on speed binning; I was trying to simplify. On late steppings there is no need to speed-bin individual CPUs. I don't have a P4, but I understand that is why nearly every P4 2.4C (Northwood core) can overclock to 3 GHz straight out of the box. By the time Intel got to this stepping they had ironed everything out.
 
macsrus said:
Everyone already knows the process that Silicon manufactures use to produce their chips.... That is nothing new.

And to speculate as you do about where the CPU's are going based on the process of CPU sampling is just that... pure speculation

You seem to think that IBM would sell the best parts to a competitor instead of using them themselves....(Even though they really could do just that... it isnt very likely


Consider this IBM makes very little money from selling CPU's to Apple...
Believe it or not Apple doesnt sell many computers....
I know for a fact that IBM uses more PPC970 & 970fxs than Apple does.


I have a question for U.. Do you own xserves? Do you own JS20 blade servers?

We do... I have seen the CPUs.... I am not speculating.

I also know why there is an xserve backlog it isnt just because of CPU yields(but before long everyone else will know the real reason for the backlog)

But again I will just agree to disagree.


You seem to _enjoy_ being consistently wrong.

Let's address each of your points:

-that I am speculating on how the parts are tested. No, it's not speculation. That's standard industry practice. EVERYONE in the industry does it that way. You're implying that IBM does something different from the entire world. Seems to me that you'd have to have evidence for such a silly claim. You don't.

-that IBM makes very little money from selling CPUs to Apple. Wrong again. IBM sells about 1 million G5 CPUs to Apple (that's not counting G3 or other chips). At an average of about $300, that's $300 million. That's hardly 'little money'. And if you check, you'll find that Apple's CPU usage is an order of magnitude greater than what IBM uses in their own systems.

-that IBM keeps the best parts for themselves. If so, why is Apple having no problem selling 2 GHz xServes while you admit that IBM isn't any higher than 1.6 GHz? Given IBM's low usage, they could easily find enough chips for their own needs if they were willing to use 2.0 GHz. (And, BTW, IBM doesn't consider Apple to be a competitor for their blade servers - nor does anyone else I know.)

-xServe deliveries were not due to G5 availability. That's odd. Everyone in the industry acknowledges that the delays were due to G5 availability. Even IBM and Apple admitted this. But you know of some mysterious reason that IBM and Apple are trying to cover up enough that they'll lie about why the xServes are late. Riiiiiiggght.

-You've been arguing that IBM uses 2.0 GHz chips in their 1.6 GHz blades. And you are now claiming to have seen this. I say you're lying. Post a screen shot of the chip. There's no way IBM is using 2.0 GHz chips in their 1.6 GHz blades on any regular basis. There's no advantage to doing so. You're making a wild claim - prove it.

The bottom line is simple. There's no chip difference. IBM is using 1.6 GHz chips in their 1.6GHz servers. The reason IBM can't get to 2.0 GHz blade servers (yet) has nothing to do with chip quality. It's simply a matter of cooling. The heat density for a stack of 2.0 GHz blades is greater than the heat density for a stack of 1.6 GHz blades. That's simple physics. IBM apparently can't provide enough cooling to handle 2.0 blades. There's nothing wrong with that - that's a very difficult thing to do. But your inane conspiracy theories to try to avoid this issue are getting absurd.
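The heat-density argument can be made concrete with the standard first-order model of dynamic CPU power, P = C·V²·f. The capacitance and voltage figures below are invented purely for illustration; the point is only the ratio between the two clock speeds.

```python
# Rough sketch of why a chassis full of 2.0 GHz blades is harder to
# cool than the same chassis full of 1.6 GHz blades. All constants
# are hypothetical.

def dynamic_power(c_eff_nf, v_core, f_ghz):
    """Approximate dynamic power in watts: P = C * V^2 * f."""
    return c_eff_nf * 1e-9 * (v_core ** 2) * f_ghz * 1e9

C_EFF = 30.0    # nF, hypothetical effective switched capacitance
V_CORE = 1.3    # volts, hypothetical

p16 = dynamic_power(C_EFF, V_CORE, 1.6)
p20 = dynamic_power(C_EFF, V_CORE, 2.0)

# 14 blades x 2 CPUs per BladeCenter chassis (per the JS20 figures above)
chassis_watts_16 = p16 * 14 * 2
chassis_watts_20 = p20 * 14 * 2

print(f"1.6 GHz: {p16:.1f} W/chip, {chassis_watts_16:.0f} W/chassis")
print(f"2.0 GHz: {p20:.1f} W/chip, {chassis_watts_20:.0f} W/chassis")
print(f"extra heat per chassis: {chassis_watts_20 - chassis_watts_16:.0f} W")
```

Even at identical core voltage, power (and therefore heat) scales linearly with clock, so the 2.0 GHz chassis dissipates 25% more; in practice higher clocks often need more voltage as well, which makes the gap worse.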
 
jragosta said:
You seem to _enjoy_ being consistently wrong.

Let's address each of your points:

-that I am speculating on how the parts are tested. No, it's not speculation. That's standard industry practice. EVERYONE in the industry does it that way. You're implying that IBM does something different from the entire world. Seems to me that you'd have to have evidence for such a silly claim. You don't.

-that IBM makes very little money from selling CPUs to Apple. Wrong again. IBM sells about 1 million G5 CPUs to Apple (that's not counting G3 or other chips). At an average of about $300, that's $300 million. That's hardly 'little money'. And if you check, you'll find that Apple's CPU usage is an order of magnitude greater than what IBM uses in their own systems.

-that IBM keeps the best parts for themselves. If so, why is Apple having no problem selling 2 GHz xServes while you admit that IBM isn't any higher than 1.6 GHz? Given IBM's low usage, they could easily find enough chips for their own needs if they were willing to use 2.0 GHz. (And, BTW, IBM doesn't consider Apple to be a competitor for their blade servers - nor does anyone else I know.)

-xServe deliveries were not due to G5 availability. That's odd. Everyone in the industry acknowledges that the delays were due to G5 availability. Even IBM and Apple admitted this. But you know of some mysterious reason that IBM and Apple are trying to cover up enough that they'll lie about why the xServes are late. Riiiiiiggght.

-You've been arguing that IBM uses 2.0 GHz chips in their 1.6 GHz blades. And you are now claiming to have seen this. I say you're lying. Post a screen shot of the chip. There's no way IBM is using 2.0 GHz chips in their 1.6 GHz blades on any regular basis. There's no advantage to doing so. You're making a wild claim - prove it.

The bottom line is simple. There's no chip difference. IBM is using 1.6 GHz chips in their 1.6GHz servers. The reason IBM can't get to 2.0 GHz blade servers (yet) has nothing to do with chip quality. It's simply a matter of cooling. The heat density for a stack of 2.0 GHz blades is greater than the heat density for a stack of 1.6 GHz blades. That's simple physics. IBM apparently can't provide enough cooling to handle 2.0 blades. There's nothing wrong with that - that's a very difficult thing to do. But your inane conspiracy theories to try to avoid this issue are getting absurd.

Not worth the time to address your misrepresentations of what I've said.

Since you seem to like to attack me personally, this will be my last response to you.
 
macsrus said:
Not worth the time to address your misrepresentations of what I've said.

Since you seem to like to attack me personally. This will be my last response to you.

I hope that you will still continue to give us your opinion. After all, this is a rumor site.

Getting facts is very important to all of us.
 
You don't know IBM, do you?

jragosta said:
-You've been arguing that IBM uses 2.0GHz chips in their 1.6GHz blades. And you are now claiming to have seen this. I say you're lying. Post a screen shot of the chip. There's no way IBM is using 2.0 GHz chips in their 1.6 GHz blades on any regular basis.

There's no advantage to doing so. You're making a wild claim-prove it.

Ever heard of the term "safety margin"?

I believe macsrus 100% if he says that the chip is stamped 2.0 - that's SOP for IBM, to go for reliability over absolute top performance. (Or to over-spec for the same reason - to have a comfortable safety margin.)

I wouldn't be surprised if IBM ran the JS20 at full load in a 50°C room with half the fans disabled. If they found that it worked at 1.6 GHz and smoked at 2.0 GHz, they'd underclock to 1.6 for the additional margin of safety.
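That kind of worst-case qualification amounts to a simple derating rule: pick the fastest speed grade that still stays under the die's thermal limit in the hot-room, dead-fans scenario. Here is a toy sketch; every number in it (power draws, thermal resistance, temperature limit) is invented for illustration.

```python
# Toy worst-case derating check: choose the highest clock that keeps
# the die under its thermal limit with a 50 C room and degraded airflow.

def die_temp(ambient_c, power_w, theta_ca):
    """Steady-state die temperature: ambient + power * thermal resistance."""
    return ambient_c + power_w * theta_ca

def derated_clock(clocks_and_power, ambient_c, theta_ca, t_max_c):
    """Return the fastest clock that survives the worst case, else None."""
    for clock, power in sorted(clocks_and_power, reverse=True):
        if die_temp(ambient_c, power, theta_ca) <= t_max_c:
            return clock
    return None

# Hypothetical power draw at each speed grade: (GHz, watts)
GRADES = [(2.0, 55.0), (1.8, 48.0), (1.6, 40.0)]

# Worst case: 50 C room, half the fans dead (high case-to-ambient resistance)
safe = derated_clock(GRADES, ambient_c=50.0, theta_ca=0.9, t_max_c=90.0)
print(f"clock qualified for worst case: {safe} GHz")
```

With these made-up numbers the 2.0 and 1.8 grades both exceed the limit in the worst case and only 1.6 GHz qualifies, which is exactly the "stamped 2.0, shipped at 1.6" outcome described above.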
____________________

Here's another one that you can verify for yourself....

Go to IBM.com and look at the eServer xSeries 445 (the up-to-32-way Xeon MP server).

Look at the memory options - PC2100 (266MHz) DDR ECC SDRAM.

Look at the memory controller - it has a 400 MHz FSB. Dig a little deeper, you'll see that the memory is effectively clocked at 200MHz (100MHz with DDR).

So, why does IBM require 266MHz DIMMs for a 200MHz memory controller?

SAFETY MARGIN !!


BTW - did you know that a Pentium 4 does not even need a heat sink? The thermal management is so effective that you can remove the heat sink and the Pentium 4 will continue to run.

SAFETY MARGIN !!

(http://www6.tomshardware.com/cpu/20010917/heatvideo-01.html)
 
AidenShaw said:
BTW - did you know that a Pentium 4 does not even need a heat sink? The thermal management is so effective that you can remove the heat sink and the Pentium 4 will continue to run.

SAFETY MARGIN !!

(http://www6.tomshardware.com/cpu/20010917/heatvideo-01.html)

It's a bit misleading to say that the Pentium 4 "doesn't need" a heatsink, because the core ramps down until it's not running fast enough to generate the waste heat that would harm the chip. Also, the part you're talking about is a much, much older and cooler Pentium 4 (running at 2.0 GHz) that puts out between 55 and 75 watts, depending on whether it's a 130nm or 180nm part.

It's a matter of performance, and not just the safety margin.
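That ramp-down behaviour can be sketched with a crude first-order thermal model plus a throttle governor that backs the clock off whenever the die crosses a trip point. All constants here are invented; this is the general idea, not Intel's actual Thermal Monitor logic.

```python
# Crude simulation of clock throttling without a heatsink: the die
# heats toward the steady-state temperature for its current clock,
# and the governor cuts the clock whenever it crosses the trip point.

T_TRIP = 85.0          # C, hypothetical throttle threshold
T_AMBIENT = 25.0       # C
THETA = 2.0            # C per watt; high because there is no heatsink
WATTS_PER_GHZ = 27.5   # hypothetical linear power model

def simulate(f_ghz, steps=200):
    """First-order thermal relaxation with a back-off-only governor."""
    temp = T_AMBIENT
    for _ in range(steps):
        power = f_ghz * WATTS_PER_GHZ
        target = T_AMBIENT + power * THETA   # steady-state temp at this clock
        temp += 0.2 * (target - temp)        # relax toward steady state
        if temp > T_TRIP:
            f_ghz *= 0.95                    # governor: back off the clock
    return f_ghz, temp

final_clock, final_temp = simulate(2.0)
print(f"settled at {final_clock:.2f} GHz, {final_temp:.1f} C")
```

Because this toy governor only backs off (it never ramps back up as the die cools), it settles somewhat below the break-even clock, which is the "not running fast enough to generate the waste heat" regime described above.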
 