Re: Mac Clustering

Originally posted by cocoa_nut
XnavxeMiyyep:

The machines do indeed need to talk to each other. The required speed of this communication channel is totally dependent on the nature of the parallel application you intend to run. Some programs can be "parallel-ized" such that very little data need be transmitted between the nodes performing the computations, while other codes are structured such that high-bandwidth interconnects are required to pass data back and forth. So, yes you will need to network the computers in some way to form the cluster. :)

cocoa_nut
Ok. My Dad might give me his old G3 iMac, since I gave him a faster one. I'll test it with that.
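To make cocoa_nut's point concrete, here's a toy Python sketch (illustrative only - real cluster jobs via Pooch or MPI work differently) of an "embarrassingly parallel" workload: each worker crunches its own independent chunk and only the final counts travel back, so even a slow Ethernet link between an old iMac and a newer Mac would do fine.

```python
# Toy illustration of an "embarrassingly parallel" job: each worker
# computes independently, and only the small final results are
# communicated back. Tightly coupled codes would instead exchange data
# every iteration - that's where fast interconnects matter.
from multiprocessing import Pool

def count_primes(bounds):
    """Count primes in [lo, hi) - pure computation, no communication."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    # Split the range into independent chunks, one per "node".
    chunks = [(i * 25_000, (i + 1) * 25_000) for i in range(4)]
    with Pool(4) as pool:
        total = sum(pool.map(count_primes, chunks))
    print(total)  # count of primes below 100,000
```

The workers here are processes on one machine, but the shape of the problem is the same as on a cluster: since the chunks never talk to each other, the network only carries the job out and the answer back.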
 
offline thread about clusters:

Hey everyone....

Our cluster is no. 65 on the list... and yes, the timing is very important. We only had it on for about a week when the deadline for the list came around. We didn't have anything optimized properly, and many of the nodes were mis-wired (i.e., the management consoles are on 100Base-T and the compute nodes are on GigE, but many of them were plugged into the 100 switches instead of the gig switches).
Anyway, we got on at 65, although I have been told that we are now benchmarking fast enough to be in the top 30...

Anyway, I mentioned this VT rumor to some of the guys at work, and these were their responses. Thought you might be interested:

---
This puzzles me - they will have approx 2200 x 2 GHz CPUs and we
have 1024 x 2.? GHz CPUs, so they will have approx twice the raw CPU
power we have. Why then do they think they are going to get 5 times
the max throughput we get - 10 TFlops vs. approx 2 TFlops?

Is someone on crack, or is it all down to the interconnects
(InfiniBand, whatever that is)? It seems incredible that all that
speedup would be simply due to their CPUs being 64-bit while ours
are 32-bit. I'll be very interested to see how this project pans out
and whether their cluster ever gets even close to that 10 TFlop number
they have mentioned.

John (the pessimist)

---
and the response from our cluster admin:
---
The interconnects play a HUGE role in achieving those sorts of numbers. The latency of GigE is much higher than Myrinet or any other high-speed interconnect. I can achieve about 500 GFlops with the 128 Myrinet nodes; it takes over twice that number of GigE nodes to produce similar numbers. Also, the G5's floating-point performance is much higher than the Xeon's, and being able to address more memory via 64-bit is advantageous as well. The early Opteron benchmark numbers show this very well.

For your reference:
http://www.amd.com/us-en/Processors/ProductInformation/0,,30_118_8796_8800~70045,00.html
http://www.apple.com/powermac/

---
John's point is well taken, but Virginia Tech will have many more options with a 64-bit system than people limited to 32-bit, re: memory access per node. However, note that Cray is now using the AMD chip and a newly-developed multi-CPU backplane to put Sandia Labs back into Top500 contention by creating a system like TimeLogic and Paracel: 32-bit, standard cheap memory and 2 GB per CPU, but very few interconnects - everything is on 16 enormous backplanes with full-speed access to the memory pipeline. They have the potential to exceed 32K 32-bit processors and 64 TB of RAM per system (Paracel was at 18K CPUs at their last iteration), making Cray the lowest cost per clock cycle at roughly $16M (3x our price) per system and potentially exceeding 20 TF (10x our performance and 2x Virginia Tech, but more scalable). Also, the Cray system could be an Unreal server without modifications.
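The admin's latency argument can be put in back-of-the-envelope form. The numbers below are illustrative guesses (Myrinet-class vs. GigE-class latencies and a made-up compute/communication mix), not measurements from either cluster:

```python
# Rough model: if each timestep of a tightly coupled code does some
# computation and then a round of small message exchanges, interconnect
# latency eats directly into parallel efficiency. Latencies and the
# workload mix below are illustrative, not measured.

def efficiency(compute_us, messages, latency_us):
    """Fraction of wall time spent computing rather than waiting."""
    comm_us = messages * latency_us
    return compute_us / (compute_us + comm_us)

# Hypothetical job: 1 ms of compute per step, 20 small messages per step.
for name, lat in [("Myrinet (~7 us)", 7), ("GigE (~60 us)", 60)]:
    print(f"{name}: {efficiency(1000, 20, lat):.0%} efficient")
```

With these toy numbers, the Myrinet-class nodes spend most of each step computing while the GigE-class nodes waste roughly half of it waiting - which is exactly why it takes "over twice that number of GigE nodes" to match the benchmark.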
 
Re: offline thread about clusters:

Originally posted by neilt
Also being able to address more memory via 64bit is adventageous as well

Did you point out that OS X and YDL are 32-bit, and therefore applications cannot address more memory than Xeon systems?
 
Re: Re: offline thread about clusters:

Originally posted by AidenShaw
Did you point out that OS X and YDL are 32-bit, and therefore cannot address more memory than Xeon systems?

neilt makes some interesting points that highlight some of the statements about VT.

1. Very low latency interconnects and proximate co-location.

2. Maxed out memory capacity and speed.

3. Dualies on every box, thus effectively lowering the average latency.

4. YDL, which frankly is supportive of going 64-bit on as many drivers as the "customer" is willing to assist with - and wanna bet how much VT is willing to assist YDL? Not to mention any open-source aspects of OS X.

What is particularly interesting is the G5 has been on the street for a little over a week, the dualies have not arrived anywhere visible yet, but 64 bit versions of Panther have been seeded to "some folks". One wonders if places like VT will get 64 bit Panther in time for next year's list.

They certainly will get G5's, hyperinterconnect and maxed out internal resources in time for THIS YEAR.

10 Teraflops? Probably not. But well over 4 and approaching 6? Yep.

Rocketman
 
64-bit Panther? Where?

Originally posted by Rocketman
....but 64 bit versions of Panther have been seeded to "some folks". One wonders if places like VT will get 64 bit Panther in time for next year's list.

I haven't seen any news of "64-bit versions of 10.3" - please add a link to that info...

As far as VT and 64-bit, unless their application needs more than 4GiB per task, there's no benefit to going to 64-bit for them. Sticking with 32-bit could be the right thing for their apps.
 
Re: Re: Mac Clustering

Originally posted by XnavxeMiyyep
But I should ethernet them together, right? And will it improve multitasking, too?

It won't help multi-tasking, and it won't help any of your existing apps - they haven't been written for clustering.

It will help the programs that you download from the pooch site, though. Lots of fun with fractal toys....
 
Re: 64-bit Panther? Where?

Originally posted by AidenShaw
unless their application needs more than 4GiB per task, there's no benefit to going to 64-bit for them.

Is this true? Many of my applications do not require 4 GiB (a gross understatement), so I have no need for 64-bit?
 
FYI - It's supposed to run OS X:)

FWIW - From Macintouch:

Following up on the note we published in late July, a MacInTouch reader writes:
Several weeks ago I sent word to you about Virginia Tech building a supercomputer with G5s; well, the news is out and watching the forums I would like to clear up some things that people are saying.
[...] The cluster will indeed run Mac OS X, not Linux as many people are saying. They have purchased 1,100 dual 2GHz machines, each with 4 GB of RAM. They were purchased with no special discount from Apple. It is expected to be the 3rd or 4th fastest supercomputer in the world.
 
Re: Re: 64-bit Panther? Where?

Originally posted by cr2sh
Many of my applications do not require 4Gib (a gross understatement), so I have no need for 64bit?

In almost all cases you won't benefit from 64-bit until you *need* more than 4 GiB of RAM, and you're running large applications that need the 4 GiB all to themselves.

If you have 512MiB or 1 GiB of RAM on your system - almost certainly 64-bit will be of no use.

If you run many small applications, 32-bit should be OK.

10.2.7 / 10.3 are 32-bit, yet they can run about sixteen 2 GiB applications at the same time on a system with 8 GiB of RAM - each app gets its own 2 GiB. You don't need a 64-bit operating system or a 64-bit chip for this - Pentiums and G4 chips support up to 64 GiB of RAM on a 32-bit chip.
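The arithmetic behind those limits, for anyone following along (the per-process and physical-address figures are the ones cited in the post, just written out as powers of two):

```python
# 32-bit pointers address 2^32 bytes of virtual space per process,
# while PAE-style 36-bit physical addressing lets the machine as a
# whole hold far more RAM than any single process can map.

GiB = 2 ** 30

per_process_virtual = 2 ** 32   # what a 32-bit pointer can address
physical_with_pae = 2 ** 36     # 36-bit physical addressing (PAE-class)

print(per_process_virtual // GiB, "GiB of virtual space per process")
print(physical_with_pae // GiB, "GiB of installable RAM")
```

So the per-process ceiling (4 GiB, often split so an app keeps 2 GiB) and the machine-wide ceiling (64 GiB) are two different limits, which is why many 2 GiB apps can coexist on a 32-bit OS.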

You'll want the PPC970 because it's fast - but not because it is 64-bit. It shouldn't matter that OS X doesn't support 64-bit.

-as

ps: There's one minor exception to this - the 64-bit chip can do native 64-bit integer arithmetic even when it is running with 32-bit addresses. A very small number of programs spend a lot of time with large integers, and those programs could see an improvement on the PPC970. How much improvement would depend on the program, and on how much of its time is in 64-bit integer code.
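For the curious, here's roughly what that exception means: a 32-bit CPU has to build a 64-bit multiply out of several 32-bit partial products, while a 64-bit chip like the PPC970 does it in one instruction. A sketch emulating the 32-bit route (illustrative Python, not actual PPC code):

```python
# Emulate a 64-bit multiply (mod 2^64) the way a 32-bit CPU must do it:
# split each operand into 32-bit halves and combine the partial
# products. A 64-bit chip gets the same result in a single instruction.

MASK32 = (1 << 32) - 1
MASK64 = (1 << 64) - 1

def mul64_via_32bit(a, b):
    """64-bit product (mod 2^64) built from 32-bit partial products."""
    a_lo, a_hi = a & MASK32, a >> 32
    b_lo, b_hi = b & MASK32, b >> 32
    lo = a_lo * b_lo
    # The hi*hi term only affects bits above 64, so it drops out mod 2^64.
    mid = (a_lo * b_hi + a_hi * b_lo) << 32
    return (lo + mid) & MASK64

a, b = 0xDEADBEEFCAFEBABE, 0x0123456789ABCDEF
assert mul64_via_32bit(a, b) == (a * b) & MASK64
print(hex(mul64_via_32bit(a, b)))
```

The multi-step dance is why integer-heavy code (big-number crypto, some hashing) can speed up on a 64-bit chip even under a 32-bit OS.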
 
Not Important???

Originally posted by DrGonzo
why would you buy 1100 first revision g5s? I understand they want the machines ASAP... but still to spend at least $2mil on a bunch of machines that are brand spanking new with no small scale usage and certainly no large scale, and also were, for the first time, mass produced... just doesn't seem like a great way to do things. Then again, maybe this cluster isn't that important.

One of the top 5 fastest supercomputers in the world and you're saying it might not be that important?!?

Va Tech knows what they're doing in this respect. As others have stated, the machines can one day be replaced with faster models and then passed down to another department for individual workstation use. They are saving a boatload of $$ on this deal compared to other offerings. I'm sure the total cost of ownership for these machines knocks the socks off anything else.

BTW... the money for this project did come from a grant.
 
confirmed

September 2 - 17:21 EDT - Virginia Tech confirmed Tuesday that it will use 1,100 Power Mac G5s as part of a supercomputer cluster now under construction. "The new cluster is designed to make its way into the rankings of the world's largest supercomputers, a list that currently has no Macs," reports CNET News.com. "Virginia Tech will use the cluster to perform research on nanoscale electronics, chemistry, aerodynamics, molecular statics, computational acoustics and molecular modeling, among other tasks." The university said it has been working with Apple for months to set up the cluster. Interestingly, the school said dual 2GHz machines started coming off the manufacturing lines last month. The news was first reported by Think Secret.
http://www.macminute.com/2003/09/02/vt
 
Re: Re: Re: 64-bit Panther? Where?

Originally posted by AidenShaw
In almost all cases you won't benefit from 64-bit until you *need* more than 4 GiB of RAM, and you're running large applications that need the 4 GiB all to themselves. [...] You'll want the PPC970 because it's fast - but not because it is 64-bit.

yah learn somethin' new everyday...
 