Macmaniac said:
I think Apple should release the 2.3 GHz Xserves to the general public. Apple should keep its servers updated on a frequent basis; that way it will be seen as a bigger IT competitor.
I think Apple's waiting to build up some supply of the 2.3 GHz processors before making them widely available. Apple doesn't want to announce it now and force customers to wait until November to get one. That would be a PR disaster.
 
duel G5s?

You've got to be careful with those duel G5 PowerMacs, especially when they start dueling with each other, fighting for bandwidth and other system resources. It'd seem hard for them to be productive at all with little if any system unity! :D
 
broken_keyboard said:
So the 2.3 G5 does not need liquid cooling, but the 2.5 does?

I don't think it's a matter of needing liquid cooling; I think the liquid cooling is just quieter than the fans that would otherwise be necessary.
 
Ahh... so the true reason for the previously odd-sounding upgrade emerges. Nice.
 
Remember, the dual 2.5 GHz tops out at about 105 degrees Celsius on die. Liquid cooling helps keep the fans running quieter, though I hear it can still get pretty loud. The PowerMac G5 could use a redesign for the next line of chips. Gonna need to keep that liquid cooling around for a while.
 
Correction: the dual 2.5 GHz G5s require the cooling. I read something about this... they got too hot with the 90 nm stuff, and it wasn't practical to add a bajillion more fans. So they went liquid.
 
broken_keyboard said:
So the 2.3 G5 does not need liquid cooling, but the 2.5 does?

The liquid cooling is to reduce the noise. You don't need as much noise reduction in a server environment as in a PC environment, which means they can use louder fans in the Xserves.
 
Not ridiculous - the towers were unusably unstable

Mantat said:
Just imagine the time wasted switching the computers for a 15% CPU increase. Very hard to believe. And if it's true, the guy leading this switch must have an angry bunch of scientists at his back. Don't forget that these clusters aren't there for bragging rights; they are supposed to do some science stuff...

It wasn't for the speed increase - it was because the lack of ECC memory made the PowerMac G5 cluster so unstable that it was unusable for doing any real work.

The "sugar-coated" version from the VAtech website is

Well with the concept proven we now had to make sure we had a system capable of conducting scientific computation.

We needed to upgrade the system to something with error correcting code (ECC) RAM.

The Power Macs did not support it and the XServes were coming.


And, I bet that you're right that there are a bunch of angry scientists. Angry that the university spent millions on a PR "proof of concept" that was unusable from the get-go. Angry that a year after the system was installed and announced there was still nothing usable by the users. Angry that all the press releases and advertisements didn't translate into computing resources usable by the people from the university.

Maybe a little less angry because they didn't ever have System X taken away from them - since it would never run long enough to have been given to them in the first place.

Do you think that it's a coincidence that the company that Dr. V. is with is pushing Xeons and Itania, including the world's fastest cluster (Itanium, not PPC970)? (http://www.californiadigital.com/)

Did you also see that Dell is selling pre-packaged 64-bit Xeon clusters based on the same Infiniband networking that VAtech is using? (http://news.com.com/Dell,+Topspin+tout+InfiniBand+clusters/2110-1010_3-5387085.html) The "cheap supercomputer" just got a lot cheaper!
 
AidenShaw said:
It wasn't for the speed increase - it was because the lack of ECC memory made the PowerMac G5 cluster so unstable that it was unusable for doing any real work.

That's patent nonsense. The original cluster was not "so unstable that it was unusable for doing any real work" by any stretch. The lack of ECC RAM merely meant that they had to run simulations more than once to verify each run and ensure that bit errors in memory hadn't skewed the results. With ECC, any simulation can be run once, without that double-checking for bit errors. Hence you get results much faster (even without the clock speed increase), because you don't have to run simulations more than once. That doesn't make the original cluster worthless, just less efficient.

It's also rather widely accepted that they were willing to live with certain trade-offs in the original cluster so they could 1) prove the concept and 2) make the all-important supercomputer "Top 100" for last year's deadline. Near as I can tell, the original cluster was a stop along the way to getting the "real" cluster going - time to shake down all of the software and all that - and those actually building the thing knew that, if not from the very beginning, then certainly by the time they were putting together 'BigMac I'. I don't see how they could have realistically put together a perfectly working cluster right off the bat - with any computer.

They were also able to sell off many of the original machines to pay for the new Xserves, so it's not like all of that money just disappeared into a pit. This was a research project in and of itself - its sole purpose was not simply to provide a big cluster for people to use; it was also to see if it could be done in the first place. Maybe they could have spent the money on something else, but for every funded science project, you can find someone who didn't get their project funded and thought theirs was 'better'. This isn't the first time, and it won't be the last.
 
So the Big Mac software doesn't work?

AidenShaw said:
It wasn't for the speed increase - it was because the lack of ECC memory made the PowerMac G5 cluster so unstable that it was unusable for doing any real work.

Do you know something we don't? I thought Varadarajan wrote error detection code that was integral to Big Mac? An un-clever algorithm would reduce the speed by half, hardly unusable.
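The halving he's describing is easy to picture. Here's a toy sketch of run-twice verification on non-ECC hardware (all names and the flip probability are hypothetical, just for illustration):

```python
import random

def run_simulation(seed, flip_prob=0.0):
    """Toy 'simulation': sums pseudo-random numbers. flip_prob is a
    made-up stand-in for the chance of a memory bit error corrupting
    the result."""
    rng = random.Random(seed)
    result = sum(rng.random() for _ in range(1000))
    if random.random() < flip_prob:
        result += 1.0  # simulated bit flip skewing the result
    return result

def verified_run(seed, flip_prob):
    """Without ECC: run the job twice and accept only if the runs agree.
    Roughly halves throughput, but makes an undetected error unlikely
    (two identical corruptions would still slip through)."""
    while True:
        a = run_simulation(seed, flip_prob)
        b = run_simulation(seed, flip_prob)
        if a == b:
            return a

# With no bit flips, the verified run matches a single clean run.
clean = run_simulation(7)
assert verified_run(7, flip_prob=0.0) == clean
```

With ECC (or working in-software error detection), each job runs once, so the same cluster delivers roughly twice the usable throughput.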
 
I was thinking of something today after I read this. What if the dual 2.5s are actually 2.3s, but liquid cooled because they're overclocked? I know, I know... if I read what I'm saying now, I wouldn't believe it either. I just wanted to say it before someone else eventually did (and meant it).
 
I Beg To Differ

Longey Nowze said:
The liquid cooling is to reduce the noise. You don't need as much noise reduction in a server environment as in a PC environment, which means they can use louder fans in the Xserves.

Cite your source, please.

While Apple discusses at length both the liquid cooling and the quiet operation of its new G5 towers on its web site, I do not recall any link being made between the two. I believe Apple was forced into liquid cooling by what is commonly known as "heat density". Please refer to the following, copied from someone very knowledgeable in this area:

The 970fx 2.0 chips run a LOT cooler than the 970 chips.

Speed    CPU     Process  Typ. power  Die size  Power/area
----------------------------------------------------------
1.8 GHz  970     130 nm   51 W        118 mm2   0.43 W/mm2
2.0 GHz  970     130 nm   66 W        118 mm2   0.56 W/mm2
2.0 GHz  970FX    90 nm   24.5 W       66 mm2   0.37 W/mm2
2.5 GHz  970FX    90 nm   50 W         66 mm2   0.76 W/mm2


What's important here is the value of W/mm2.

Now, then, the question is what type of chip the 2.3GHz is...
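For what it's worth, the W/mm2 column checks out. A quick sanity check of the table's arithmetic (power density is just typical power divided by die area):

```python
# Figures taken straight from the table above: (name, watts, die size in mm2).
chips = [
    ("1.8 GHz 970",   51.0, 118.0),
    ("2.0 GHz 970",   66.0, 118.0),
    ("2.0 GHz 970FX", 24.5,  66.0),
    ("2.5 GHz 970FX", 50.0,  66.0),
]

# Power density in W/mm2, rounded to two places as in the table.
densities = {name: round(watts / area, 2) for name, watts, area in chips}
for name, d in densities.items():
    print(f"{name}: {d} W/mm2")
```

The 2.5 GHz 970FX lands at 0.76 W/mm2 - about double the 2.0 GHz part on the same die - which is the heat-density argument in a nutshell.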
 
shamino said:
VT sold their G5 PowerMac cluster nodes. They were available through (I think) MacMall for a few months. So they didn't lose their investment. IIRC, they actually sold for close to Apple's original price - people were willing to pay that much because the computers were virtually new and included some certificate of authenticity stating that the computer was a VT cluster node. (Yes, VT didn't get the full purchase price, since the store took a cut, but VT also didn't pay full retail price - they paid Apple's educational institution price.)
As I stated before, this makes for an even more cost-effective solution.

Anyone using these clusters can stay current more effectively than with prior approaches: just sell the old systems and buy new ones. There's no need to upgrade the infrastructure each time, which saves resources (time and money).

Plus, the initial cost is much less as well.

Go Apple! :D

Sushi
 
Mantat said:
I really doubt this. If it's true, this is the most ridiculous thing I have seen this week. Just imagine the time wasted switching the computers for a 15% CPU increase. Very hard to believe. And if it's true, the guy leading this switch must have an angry bunch of scientists at his back. Don't forget that these clusters aren't there for bragging rights; they are supposed to do some science stuff...
You're joking, right? Fifteen percent is a significant improvement -- especially in the scientific community doing research.

Sushi
 
So I understand that ECC memory somehow protects all those little 'bits' in memory, but I'm wondering how often they are lost in non-ECC RAM? I mean, wouldn't our computers crash all the time if this actually happened a lot?
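For scale, here's the rough back-of-the-envelope estimate I tried. The FIT rate is just an assumed ballpark, not a measured figure - published DRAM soft-error rates vary by orders of magnitude:

```python
# 1 FIT = 1 failure per billion device-hours. 1000 FIT per Mbit is an
# assumed ballpark for DRAM soft errors, purely for illustration.
fit_per_mbit = 1000
gb_per_node = 4
mbit = gb_per_node * 8 * 1024            # 4 GB expressed in megabits

errors_per_hour = fit_per_mbit * mbit / 1e9   # expected flips per node-hour
print(f"one node: ~{errors_per_hour * 24 * 30:.0f} bit flips/month")
print(f"1100 nodes: ~{errors_per_hour * 1100:.0f} bit flips/hour")
```

On a single desktop most of those flips land in unused or soon-overwritten memory, which is why home machines don't visibly crash all the time - but multiply across ~1,100 nodes running week-long jobs and they start to matter.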
 
wrldwzrd89 said:
This is a good day for Mac supercomputing. Maybe Virginia Tech's cluster will hit 12 teraflops with this upgrade. Over time, I can see this growing to 15, 20, 30, 40 teraflops as Apple releases new XServes. By the time it hits 40 teraflops - bye bye #1 spot for the Earth Simulator!

Not anymore. IBM's Blue Gene succeeds the Earth Simulator as the fastest supercomputer in the world.

I read it here. http://www.nytimes.com/2004/09/29/technology/29computer.html
 
AidenShaw said:
Do you think that it's a coincidence that the company that Dr. V. is with is pushing Xeons and Itania, including the world's fastest cluster (Itanium, not PPC970)? (http://www.californiadigital.com/)
The company is pushing BOTH platforms. And the fact that the Itanium cluster is the world's fastest cluster might have something to do with the fact that it includes nearly 4100 Itaniums.
 
AidenShaw said:
Did you also see that Dell is selling pre-packaged 64-bit Xeon clusters based on the same Infiniband networking that VAtech is using? (http://news.com.com/Dell,+Topspin+tout+InfiniBand+clusters/2110-1010_3-5387085.html) The "cheap supercomputer" just got a lot cheaper!
Those clusters start at $55,000 for eight servers. Are you sure that's a lot cheaper? One DP Xserve and seven DP cluster nodes would be about $25,000 - less than half the cost!
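Checking that comparison, assuming roughly $4,000 for the dual-processor Xserve and $3,000 per cluster node (my guesses at the pricing being referenced; the exact figures may differ):

```python
# Hypothetical list prices, just to check the ratio being claimed.
xserve_dp = 4000
cluster_node_dp = 3000

apple_total = xserve_dp + 7 * cluster_node_dp   # eight Apple boxes
dell_start = 55000                              # Dell's eight-server start

print(apple_total)                        # 25000
print(round(dell_start / apple_total, 2)) # 2.2 - Dell at over twice the price
```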

(Sorry for the double post - didn't know how to add a second quote with your link when I tried to edit my last post.)
 