Why don't you admit that you were completely wrong?

jragosta said:
IBM has supported my position.

Funny, I read the same Trekkie post and I thought that he said that you were FOS about your claim that blades have to be underclocked because of the chip density....

The only thing around here that's "dense" is you!
 
jragosta said:
Macsrus never claimed to have seen a 2.0 GHz chip on an IBM system sold as 1.6.

I re-read his posts, and I think he was *exactly* claiming that the blades that IBM is selling as 1.6 GHz contain CPUs with 2.0 GHz stamped on them.


jragosta - what alternative universe do you inhabit?
 
AidenShaw said:
I re-read his posts, and I think he was *exactly* claiming that the blades that IBM is selling as 1.6 GHz contain CPUs with 2.0 GHz stamped on them.


jragosta - what alternative universe do you inhabit?

You really need to learn to read carefully.

He claimed that the 1.6 GHz IBM blades have 2.0 GHz chips in them. BUT HE NEVER CLAIMED TO HAVE ACTUALLY SEEN ONE. He made all sorts of allegations like 'have you ever looked under the heat sink', but if you read for content (instead of just imagining things), you'll see that he never once claimed to have actually seen a 2.0 chip in a 1.6 server.

AND since the IBM rep says simply that IBM does not underclock (or overclock) chips, that makes you and macsrus the ones living in a fantasy world.
 
AidenShaw said:
Funny, I read the same Trekkie post and I thought that he said that you were FOS about your claim that blades have to be underclocked because of the chip density....

The only thing around here that's "dense" is you!

That's because I never made that claim. I never said that they underclock chips. That's macsrus - whom you're siding with.

I said that it's much harder to cool a blade server, particularly when the chip density gets high and it is therefore common for a blade server to use less than the fastest chip available. Your own URLs supported that - the systems you provided showed that if you have 2 CPUs in a case, you can use the fastest chip. When you have 4 CPUs in a case, the systems you cited DON'T use the fastest chip.

But I never claimed that they underclock. Never.
 
macsrus said:
Actually, I believe IBM's product manager 100% on not underclocking or overclocking the Intel CPUs in their HS20 and HS40 product lines.
Also, I believe 100% that IBM never overclocks a CPU used in ANY system they have ever sold.

The real question is whether or not they have ever used a CPU that is marked at a higher rated speed as a slower part....
I know for a fact that is true, as I personally have a JS20 blade with 2.0 CPUs on it.....


As I said - I don't believe you. IBM says that they don't do that. Common sense says that they don't do that. All we have is your word - which IBM specifically says is wrong.

Post a picture.
 
jragosta said:
You really need to learn to read carefully.

He claimed that the 1.6 GHz IBM blades have 2.0 GHz chips in them. BUT HE NEVER CLAIMED TO HAVE ACTUALLY SEEN ONE. He made all sorts of allegations like 'have you ever looked under the heat sink', but if you read for content (instead of just imagining things), you'll see that he never once claimed to have actually seen a 2.0 chip in a 1.6 server.

AND since the IBM rep says simply that IBM does not underclock (or overclock) chips, that makes you and macsrus the ones living in a fantasy world.


I need to clarify that. Macsrus also says that 'he knows for a fact' that there's a 2.0 GHz chip in his 1.6 server. He never says how he knows that. He also refuses to provide any evidence - such as a picture - or even a screen shot of a utility showing that to be true.

Since IBM says it doesn't happen, his unsupported claim (especially when he doesn't specifically claim to have seen the markings on the chip) doesn't carry much weight.
 
someday we'll anchor those goalposts in concrete

jragosta said:
But I never claimed that they underclock. Never.


I think that we use somewhat different definitions for "underclock" and "overclock". You (and some others) seem to use a strict definition based on what is stamped on the CPU cover.

Other people are using a looser definition, where "underclock" means a clock slower than the chip is capable of for a larger safety margin, and "overclock" means running faster with less safety margin than usual. Since chip speeds are a continuum, and different speeds are stamped on the same chip, the looser definition has some merit.

For example, I'll say that 1.6 GHz in the JS20 is underclocked, period.

You'll call it underclocked if the case is stamped 2.0, and not if the case is stamped 1.6.

But what if IBM itself took a batch of 2.0 chips, rubbed off (erased) the ink that said "2.0", and put a new "1.6" stamp on the same chip? Does the new arrangement of the ink molecules on the case suddenly change the way the chip is clocked relative to its capabilities?
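To make the two usages concrete, here's a minimal sketch of both definitions side by side (the function names and clock figures are mine, chosen for illustration, not actual IBM/PPC970 data):

```python
def underclocked_strict(running_ghz, stamped_ghz):
    """Strict definition: running slower than the speed stamped on the chip."""
    return running_ghz < stamped_ghz

def underclocked_loose(running_ghz, capable_ghz):
    """Loose definition: running slower than the silicon is capable of."""
    return running_ghz < capable_ghz

# A die capable of 2.0 GHz, stamped "1.6", and run at 1.6 GHz:
running, stamped, capable = 1.6, 1.6, 2.0

print(underclocked_strict(running, stamped))  # False - it matches its stamp
print(underclocked_loose(running, capable))   # True  - below its capability
```

Under the strict definition the chip is running exactly as labeled; under the loose one it is underclocked. The ink-stamp thought experiment above is precisely the case where the two definitions disagree.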

_________________________________________
jragosta said:
I said that it's much harder to cool a blade server, particularly when the chip density gets high and it is therefore common for a blade server to use less than the fastest chip available.

No, what you said is:
jragosta said:
[post #702]
You just can't run a blade server at as high a clock speed as a conventional server. It just can't be done.

There are too many chips generating too much heat in too small an area. You're forced to use a chip below the top end. That's true of every blade vendor.

You didn't use words like "common", you said "It just can't be done". You compared "blade servers" to "conventional servers", not blades to extreme gamer desktops or any other system.


______________________
jragosta said:
I said that it's much harder to cool a blade server, particularly when the chip density gets high and it is therefore common for a blade server to use less than the fastest chip available.

Your own URLs supported that - the systems you provided showed that if you have 2 CPUs in a case, you can use the fastest chip. When you have 4 CPUs in a case, the systems you cited DON'T use the fastest chip.

Yet again, your reading comprehension falls short.

Dual processor blades and dual processor conventional systems use the dual-processing capable "Xeon" CPU. The fastest version of this chip is 3.2 GHz, and it's found both in blades and conventional servers. This chip does not contain the logic needed for more than 2-way systems.

Quad processor blades and quad processor conventional systems use the multi-processing capable "Xeon MP" CPU. This chip does have the logic for running in systems with more than 2 CPUs. The fastest version of this chip is 3.0 GHz, and it's found both in quad blades and quad conventional servers.

So, again we show that your statement "You just can't run a blade server at as high a clock speed as a conventional server" is wrong. Don't insult us by claiming that you said something else - your words are back there on post #702.

Blades and conventional servers are running at the same clock rate. Dual CPU blades are the same top speed as dual CPU conventional servers. Quad CPU blades are the same top speed as quad CPU conventional servers.

If you are interested in understanding the two models of Xeon server chips, please surf over to http://www.intel.com/products/server/processors/index.htm?iid=ipp_home+server_proc&
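The dual/quad comparison above boils down to a four-entry table; a tiny sketch (using just the clock figures quoted in this post) makes the symmetry explicit:

```python
# Top clock speeds cited in this post, by CPU model and form factor (GHz).
top_clock_ghz = {
    ("Xeon",    "blade"):        3.2,
    ("Xeon",    "conventional"): 3.2,  # dual-capable part, 2-way systems max
    ("Xeon MP", "blade"):        3.0,
    ("Xeon MP", "conventional"): 3.0,  # has the extra logic for 4-way systems
}

# The point of the post: for each CPU model, the blade's top clock
# equals the conventional server's top clock.
for model in ("Xeon", "Xeon MP"):
    assert top_clock_ghz[(model, "blade")] == top_clock_ghz[(model, "conventional")]

print("same top clock in blades and conventional servers, per CPU model")
```

The apparent "quad systems use slower chips" effect is a property of the Xeon MP part itself, not of the blade form factor.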
 
Something else about this whole argument/discussion/disagreement that we seem to be having....

I had intentionally left some details out.... (I was trying to provide some info about a couple of topics.... i.e. a rumor..... without violating a couple of NDAs.)
I think that within a month or so the things covered under those NDAs will become public, and some of my posts will make more sense...

So, as I'm saying, if you were to go back to my original posts you would understand how this all got started.

Obviously the tangent we are on now will never make sense to some people.

But I will again try to clarify it.

It really didn't have much to do with the original posts, other than IBM saying that the chips they originally sold to Apple were unsuitable
to put into their blade server (at speeds greater than 1.6).
i.e. we asked why we couldn't get 2.0 PPC970s in their blade servers, and they told us that "the CPU didn't have power/thermal control built in."
We then asked what was different about the ones Apple was using in their Xserves (this was before Apple had started shipping Xserve G5s.... I believe they were publicly announced at this time, though... it's hard to remember). IBM said, "they were the same ones." They said the chip was originally designed without the thermal/power control at "Apple's request," and IBM said that "Apple told them they were not concerned about the lack of thermal/power control of the CPU."
We pressed IBM further about getting these chips clocked at 2.0 instead of 1.6 in their blades. IBM said that they were too concerned about the lack of thermal/power control properties of the CPU, and said that if Apple wanted to take the chance of burning up the CPUs in their Xserve, that was their problem. (Now obviously that last statement is to be taken with a grain of salt, because I'm sure that IBM certified the CPU to run at that speed (2.0) under certain temperature parameters, and I'm sure Apple had determined that they could properly cool them in a 1U.)
I'm also sure that IBM never intended to imply that Apple was doing something stupid. (I do think they meant to imply that IBM was more cautious.)
Lastly, IBM said that when the second generation 970s/970FXs were produced, they would have thermal/power control in the CPU, and IBM would add them to their blade servers at that time.

Now... Some people seem not to understand whether or not I claim to have actually seen 2.0 CPUs clocked at 1.6 on a JS20 blade.

Let's put that to rest once and for all......

I HAVE PHYSICALLY SEEN 2.0 CPUs CLOCKED AT 1.6 ....

MY COMPANY OWNS SEVERAL JS20 BLADE SERVERS AND ONE OF THE BLADES HAS 2.0 CPUS ON IT.

I CANNOT say that ALL of our BLADES have 2.0s on them because we only looked at one....(We pulled the heat sinks out of curiosity)

NOW some of you will never believe me on that....without some proof
Our system is currently being used for production runs as a cluster.
When the next shutdown takes place I will be happy to provide a PHOTO of the CPUs.

ONE QUESTION THOUGH

AFTER I provide it

Will you promise not to Say ......

(HOW DO WE KNOW MACSRUS DIDN'T FAKE THESE PHOTOS?)
 
AidenShaw said:
I think that we use somewhat different definitions for "underclock" and "overclock". You (and some others) seem to use a strict definition based on what is stamped on the CPU cover.

Other people are using a looser definition, where "underclock" means a clock slower than the chip is capable of for a larger safety margin, and "overclock" means running faster with less safety margin than usual. Since chip speeds are a continuum, and different speeds are stamped on the same chip, the looser definition has some merit.

There's half the problem. You insist on making up definitions.

There is a definition used by nearly everyone in the industry. Overclocking means running the chip faster than its stamped and rated speed. Underclocking means running the chip slower than its stamped and rated speed.

If you'd stick to using the definitions that the real world uses, perhaps you wouldn't look so foolish.
 
jragosta said:
If you'd stick to using the definitions that the real world uses, perhaps you wouldn't look so foolish.

OK, I will....


SO, MACSRUS CONFIRMS THAT IBM IS INDEED UNDERCLOCKING THE PPC970 CHIPS IN THE JS20.
 
AidenShaw said:
OK, I will....


SO, MACSRUS CONFIRMS THAT IBM IS INDEED UNDERCLOCKING THE PPC970 CHIPS IN THE JS20.

No, he asserts it.

So far, he's provided no evidence other than his assertion.

And, anyone wanting to evaluate this intelligently will realize:

1. IBM says that they don't do this.
2. It doesn't make any logical sense to do it. The 2.0 chip is more valuable than the 1.6, and there's no gain in reliability.
3. He argued about how IBM did it for several days before he magically 'had proof'. If he had really seen that 2.0 mark on the chip, why didn't he say so up front - before I hammered him on it?
4. He's demonstrated consistently that he doesn't know what he's talking about.

Basically, I don't believe him for the reasons given above.
 
jragosta said:
And, anyone wanting to evaluate this intelligently will realize:

1. IBM says that they don't do this.
You misquote the IBM guy.....

Trekkie said:
IBM in *no way* underclocks, overclocks, or uses 'old processors' in their BladeCenter servers (The HS20 and HS40) or any server we sell.

While I do not manage the JS20 (PowerPC 970) directly, we announced this product in November of 2003. At that time 1.6GHz was where we wanted to be. This in no way means we're done and don't want to 'stand behind' a 2.0GHz version of said product. While I cannot publicly comment without an NDA I believe I can say we're 'not done yet' with this product and will continue to do other things with it. Also the PowerPC 970 at 1.6GHz in our chassis consumes the same power/puts out about the same heat as a 2.8GHz processor from Intel. Heat/Power of the 2.0 was not the issue, when we announced it was.

While he does include the phrase "or any server we sell," anyone who carefully reads the couple of posts before Trekkie's would understand that he was really addressing your and AidenShaw's tangent about the HS20s and HS40s.

He did come back and talk about the JS20s too, as I have shown above. And his comment about "Heat/Power of the 2.0...." completely agreed with what I had been saying about those CPUs.



jragosta said:
2. It doesn't make any logical sense to do it. The 2.0 chip is more valuable than the 1.6, and there's no gain in reliability.

While I will agree that, from a logic standpoint, it doesn't make any sense to put a CPU capable of a faster speed in a system that is being clocked at a slower speed... they in fact did just that. At least for the one blade that we looked at.

jragosta said:
3. He argued about how IBM did it for several days before he magically 'had proof'. If he had really seen that 2.0 mark on the chip, why didn't he say so up front - before I hammered him on it?

This is false....

I did say so from the start...

My first post that started this whole argument was post 687
macsrus said:
By the way, the 1.6s IBM used in their JS20 are the identical 2.0 90nm 970FX used in the original Xserve..... they're just underclocked

I said in post 693
macsrus said:
I have a question for you... Do you own Xserves? Do you own JS20 blade servers?

We do... I have seen the CPUs.... I am not speculating
The time lapse between those two posts was about 4 hours.
Not several days, as YOU CLAIM

jragosta said:
4. He's demonstrated consistently that he doesn't know what he's talking about.

Actually, the only thing that has been demonstrated is
that you can't read.... and that you constantly misquote and misrepresent what people say... while constantly calling them liars.

jragosta said:
Basically, I don't believe him for the reasons given above.

Basically, I couldn't care less what you believe.....
If you physically came to my office and saw it yourself, you still wouldn't believe it.
 
give up on the ideologue

macsrus said:
Basically, I couldn't care less what you believe.....

If you physically came to my office and saw it yourself, you still wouldn't believe it.

Leave it at that....

Don't bother taking a blade down when the current run is over; as you've guessed, you'll only be accused of "photoshopping" the "2.0" onto the chip.

jragosta isn't worth the effort; anyone reading this thread sees that he's more interested in spouting contradictions than in understanding the whole story. When you prove him wrong, he'll quickly change the subject without admitting that he was wrong.

(For example, several times I've pointed out how completely wrong he is with the "blades can't run the fastest chips" argument - yet he still hasn't been able to simply admit that he made an error on that point.)

Drop it - even if you show him a perfectly sharp photograph of a 2.0 chip in a JS20 - he'll accuse you of playing games with Photoshop or pulling a chip out of a 2.0 GHz Mac and putting it in the JS20 for the photo.

Since he doesn't listen to reason, you can't win. Cut your losses now....
 
one minor disagreement

macsrus said:
While I will agree that, from a logic standpoint, it doesn't make any sense to put a CPU capable of a faster speed in a system that is being clocked at a slower speed... they in fact did just that.

Yes, I think that it does make sense.

SAFETY MARGIN !!
 
Army Buys 1566 Xserve Cluster

jragosta said:
If IBM has so many 2.0 GHz 970FX chips, why is it that Apple is just now getting enough of them to get rid of the xServe backlog?

macsrus said:
I also know why there is an Xserve backlog; it isn't just because of CPU yields (but before long everyone else will know the real reason for the backlog)

jragosta said:
-xServe deliveries were not due to G5 availability. That's odd. Everyone in the industry acknowledges that the delays were due to G5 availability. Even IBM and Apple admitted this. But you know of some mysterious reason that IBM and Apple are trying to cover up enough that they'll lie about why the xServes are late. Riiiiiiggght.

Hmmmmm, looks like I was right on this one too.....

Apple has/had several large cluster deals in the works, with more coming.
Just keep watching the news.

I heard from a friend about this last one... and a couple more.
But I couldn't say any more about it than what I said at the time.... to protect my friend.
 
macsrus said:
Hmmmmm, looks like I was right on this one too.....

Apple has/had several large cluster deals in the works, with more coming.
Just keep watching the news.

I heard from a friend about this last one... and a couple more.
But I couldn't say any more about it than what I said at the time.... to protect my friend.

Let's see if I can make this simple enough for you.

This unit has a couple thousand processors.

Apple sells several HUNDRED THOUSAND PowerMacs per year.

The number of processors in the Army's computer doesn't amount to a hill of beans in the big picture. Blaming them for Apple being late is absurd.

Both IBM and Apple have admitted that IBM's problems making the chips are the reason for the delays.

But I guess you know more than both IBM and Apple now.
 
jragosta said:
Let's see if I can make this simple enough for you.

This unit has a couple thousand processors.

Apple sells several HUNDRED THOUSAND PowerMacs per year.

The number of processors in the Army's computer doesn't amount to a hill of beans in the big picture. Blaming them for Apple being late is absurd.

Both IBM and Apple have admitted that IBM's problems making the chips are the reason for the delays.

But I guess you know more than both IBM and Apple now.

Again you prove you don't know anything about what Apple sells....
While it is true that Apple sells about 200k Power Macs a QUARTER.....
NOT A YEAR
Apple DOES NOT break out Xserve sales from that total... i.e.
they don't say x number of Power Macs were desktops/notebooks and how many were Xserves....
Now, just so you can UNDERSTAND:

Apple has only been making and selling between 5k and 10k Xserves (whether G4 or G5) a quarter.

Now, when a customer orders 1500+ and another has ordered 1100+, and 2 others that I have been told about have placed large orders, plus 250 ordered by the NAVY, that brings their total of committed FIRM-date deliveries to 30 to 60 percent of what they are producing on the assembly lines....

Now these systems..... unlike users who have ordered 1 or 2 units at a time... are done through bid processes and have firm delivery dates that Apple is contractually obligated to meet. That postpones shipments of other Xserves for smaller orders....
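For what it's worth, that percentage range survives a quick back-of-the-envelope check using only the order sizes quoted above (the two unnamed large orders are omitted, since no sizes were given, so this is a lower bound):

```python
# Back-of-the-envelope check using only the figures quoted in this thread.
army, other_cluster, navy = 1500, 1100, 250   # "1500+", "1100+", and 250 units
firm_orders = army + other_cluster + navy     # 2850 units, a lower bound

# Quoted production range: 5k to 10k Xserves per quarter.
low_output, high_output = 5_000, 10_000

print(firm_orders / high_output)  # 0.285 - about 29% if output is 10k/quarter
print(firm_orders / low_output)   # 0.57  - 57% if output is 5k/quarter
```

Known orders alone come to roughly 29-57% of a quarter's output, consistent with the 30 to 60 percent range claimed, and the unnamed orders would only push it higher.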


Now again, if you actually knew how to read, you would have understood the following sentence.... and not continued to post your nonsense.
Post 693
macsrus said:
I also know why there is an Xserve backlog; it isn't just because of CPU yields (but before long everyone else will know the real reason for the backlog)

One other thing: the POWER MACS have been shipping pretty much all along... and again, it was the Xserves, not the Power Macs, that have had the big delays....
While CPU availability for Xserves had an effect, as I have shown, the orders they have gotten and Apple's manufacturing process have also been causing delays.....
By the way... according to Apple, all outstanding Xserve orders should ship within a month.... and they will be completely caught up....
Also, according to Apple, they have started another assembly line for building Xserves to better keep up with demand.
 
jiggie2g said:
Apple just loves to piss me off. This is why I am just going to build a PC. **** Apple; at least they could have put out something better in the lower models. A single/dual 2GHz and dual 2.2 low and mid end would have been great. And come on, $3000 for a comp and all we get is a 2nd-rate, soon-to-be-outdated-and-replaced Radeon 9600XT? That should be the minimum. A 9800XT should be standard on a 3K machine. I'd love to see Dell, HP or AlienWare try to pull this bull **** off. ALL APPLE HAS DONE IS UPDATE 1 MACHINE AND NOW IS TRYING 2 SELL US ITS OVERSTOCKED CRAP AT A DISCOUNT.

While I like the liquid cooling a lot, I'm disappointed to see it's only on the high end. NOTE TO APPLE: AMD NOW HAS AN ATHLON 64 3800+. They will have a 4000+ by Oct/Nov or sooner.

Also, why did they ever bother to put a PCI-X slot in? This is a complete waste, as PCI-X will be used exclusively for the server market. The PC industry has already picked the better and faster PCI Express as the new standard to replace PCI, making PCI-X a total waste unless you're running an Xserve. Doesn't Apple realize I can build myself a spanking-new 3800+/FX-53 system for under 2K? And don't get me started on those phony benchmarks where the new G5 2.5 beats the AMD 64 FX-53 by 93%..... LMAO, yeah right.


The test was measuring which computer could run the 45 filters at the same time, and that is the G5's strongest point: the ability to run several processes at the same time at high speed, unlike the AMDs, which cannot handle as many (although they are faster). The benchmarks are fair (if you overlook the fact that Mac OS has better RAM management than Windows). And there is no point in building your own PC, because you'll end up spending much more money than you first hoped, and once it is finished you'll only have to wait a few months for Windows to **** up your computer. Also consider that you will never be able to build your own machine that is as quiet as a G5, and getting a similar cooling system will cost you about another 200. And everyone who builds their own computer spends so much time on maintenance. But it is really your choice.
 