narco said:
I don't get it -- what's the point of having one of the fastest computers? Bragging rights? What do they do on these computers, Photoshop tests?
In this particular case, according to the C|Net article, it is being used for hypersonic aerodynamic simulation. That is, a simulation of how aircraft parts behave at speeds much faster than the speed of sound.

Probably as a part of designing new vehicles/weapons that move at hypersonic speeds.
 
tortoise said:
Cluster interconnects like Myrinet and Infiniband don't have any more bandwidth than various flavors of Ethernet. What you are paying for is an order of magnitude reduction in latency versus Ethernet.
The latency issue is definitely what drives up the price. So is jitter (that is, per-packet variation in delay characteristics, which matters for some applications, like video).

But there is still a big bandwidth difference. According to the Infiniband FAQ, Infiniband's link speeds are 2.5, 10 and 30Gbps. (I also remember reading that 100G Infiniband is under development for the future.)
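
Those figures line up with InfiniBand's lane widths: each lane signals at 2.5 Gbps, and links are built as 1x, 4x, or 12x bundles. A quick sketch of that multiplication (signaling rates, not payload rates):

# InfiniBand SDR link widths: each lane signals at 2.5 Gbps; links are 1x, 4x, or 12x bundles
for lanes in (1, 4, 12):
    print(lanes, "x link ->", lanes * 2.5, "Gbps signaling rate")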

Ethernet, on the other hand, doesn't go that fast. The only commonly used speeds are 10M, 100M and 1G. There's a spec for 10G, but there are very few vendors shipping any devices at that speed.
 
macsrus said:
4th. Even though they selected Gig E, all Gig E isn't created equal. Some are much better than others, and surprisingly some have fairly low latency. For example, Extreme's BlackDiamond gets as low as 10 microseconds.
Now, they didn't say whose Gig E they used, but if they went with Extreme, that could help.
I hope they don't plan on using the 1000BaseT interface built into the motherboard. :eek:
 
The Red Wolf said:
Up until a few months ago when the Elk Grove CA manufacturing facility was closed, PowerMac G5s were produced just south of Sacramento. They're now produced in conjunction with a California-based independent contractor, within California.

That may have been only the case for PowerMac G5s bound for the US market. My 1.6GHz G5 (ordered in August 2003, arrived in October 2003) has a 'Made in China' notice on the system information panel located at the bottom of the case on the access panel side (I'm in Sydney).
 
shamino said:
I hope they don't plan on using the 1000BaseT interface built into the motherboard. :eek:

Actually, the Gig E built into the Xserve G5 motherboard is rather quick. It is vastly improved over the G5 desktop and the Xserve G4.

It should also perform better than any third-party adapter installed in the PCI-X bus, since the two Gig E chipsets on the G5 are not hanging off the southbridge; they have their own pipes to the HyperTransport link to the CPUs and memory.
 
shamino said:
But there is still a big bandwidth difference. According to the Infiniband FAQ, Infiniband's link speeds are 2.5, 10 and 30Gbps. (I also remember reading that 100G Infiniband is under development for the future.)

Ethernet, on the other hand, doesn't go that fast. The only commonly used speeds are 10M, 100M and 1G. There's a spec for 10G, but there are very few vendors shipping any devices at that speed.


There is as much Ethernet available as Infiniband at those respective bandwidth levels. 10G Ethernet can be had now, and can be driven at >100G using WDM, channel bonding, etc. That's the great thing about fiber Ethernet actually: there is no real limit to the maximum bandwidth you can buy if you want to spend some cash. To put it another way, if they can switch Infiniband that fast, then they can certainly switch Ethernet that fast with its looser constraints. The limit is the switch fabric. While you don't see local offices using 10G+ fiber Ethernet links, it is used by long-haul and metro fiber providers, particularly as Ethernet slowly becomes THE protocol standard for all backbone networking.

Different technologies for different purposes really. Infiniband, Myrinet, Quadrics, RDMA, etc are designed for low latency above all else, with bandwidth coming second. Ethernet is designed for extremely scalable and efficient bandwidth, but sacrifices average latency to do it. Ethernet, being as ad hoc as it is, can get closer to theoretical than anything else when it comes to real-world throughput.
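
A rough back-of-envelope way to see that tradeoff (the link numbers below are assumed round figures, not vendor specs): model a message as taking latency + size/bandwidth to deliver. At equal bandwidth, the low-latency fabric wins big on small cluster messages and barely matters for bulk transfers.

# Rough model: message time = latency + size / bandwidth
# The link parameters are illustrative assumptions, not measured figures.
def transfer_time(size_bytes, latency_s, bandwidth_bps):
    return latency_s + (size_bytes * 8) / bandwidth_bps

eth = dict(latency_s=50e-6, bandwidth_bps=10e9)   # assumed 10G Ethernet-class link
ib = dict(latency_s=5e-6, bandwidth_bps=10e9)     # assumed Infiniband-class link

for size in (64, 4096, 1000000):                  # tiny MPI message ... bulk transfer
    t_eth = transfer_time(size, **eth)
    t_ib = transfer_time(size, **ib)
    print(size, "bytes:", round(t_eth * 1e6, 1), "us vs", round(t_ib * 1e6, 1), "us")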

For my own supercomputing needs, I've been eyeballing the Octiga Bay gear (recently acquired by Cray). That is one hell of a fabric. Terabit scale fabrics with 1-us latency across hundreds of processors. Damn... I would love to see a benchmark from one of those systems, especially with the sick amount of floating point performance the new Pathscale compilers can squeeze out of that already fast architecture.
 
tortoise said:
For my own supercomputing needs, I've been eyeballing the Octiga Bay gear (recently acquired by Cray). That is one hell of a fabric. Terabit scale fabrics with 1-us latency across hundreds of processors. Damn... I would love to see a benchmark from one of those systems, especially with the sick amount of floating point performance the new Pathscale compilers can squeeze out of that already fast architecture.

Yes, the Octiga Bay stuff is really sweet. It was designed using FPGAs and ASICs.
The only problem is the question of scalability.
While Octigabay claims "those low latencies are one reason why the machine can scale to a ridiculously large 12,000 processors in a single machine", I talked to Cray about these and they were not as optimistic about its scalability.
Cray said they were only planning to offer it for smaller cluster sizes: 512 processors and below.
Using current Opterons, that will only get you into the 3 to 4 TFlop range, for quite a healthy price tag.
 
tortoise said:
There is as much Ethernet available as Infiniband at those respective bandwidth levels. 10G Ethernet can be had now, and can be driven at >100G using WDM, channel bonding, etc.
I'm going to have to call out your error here.

WDM and channel bonding are not Ethernet technologies. They're optical technologies that you can choose to layer Ethernet over. You can layer lots of other things over it as well.

If you use this as an argument for saying that Ethernet currently runs faster than 10G, then I can use that same argument to claim that V.90 dial-up runs faster than 10G. After all, I can use SONET muxes to merge millions of voice channels into a single OC-192 and then use WDM to mux dozens of those onto a single fiber.
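
The back-of-envelope on that, assuming the standard SONET channel counts (192 STS-1s per OC-192, 672 DS0 voice channels per DS3/STS-1):

# Rough sketch of the voice-channel muxing (standard SONET channel counts assumed)
sts1_per_oc192 = 192       # an OC-192 carries 192 STS-1s
ds0_per_sts1 = 672         # one DS3/STS-1 carries 672 64-kbps voice channels
wavelengths = 32           # assumed DWDM system with a few dozen lambdas

per_oc192 = sts1_per_oc192 * ds0_per_sts1    # 129,024 voice channels per OC-192
per_fiber = per_oc192 * wavelengths          # ~4.1 million channels on one fiber
print(per_oc192, per_fiber)
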
tortoise said:
That's the great thing about fiber Ethernet actually: there is no real limit to the maximum bandwidth you can buy if you want to spend some cash.
No. That's the great thing about fiber, period. I can do the same WDM and channel bonding with ATM, Frame Relay, POS, or any other fiber-based technology and hit those same bandwidth levels.

Your argument says nothing that's unique or special to Ethernet.

As for what service providers are using for their metro-area facilities, you know as well as I do that there is no universal consensus over what is "best" here. There are a lot of competing technologies and different providers are using different systems. Price and politics play as much a role in selection as the technical merits of any given system.
 
shamino said:
I'm going to have to call out your error here.

WDM and channel bonding are not Ethernet technologies. They're optical technologies that you can choose to layer Ethernet over. You can layer lots of other things over it as well.


No, you are somewhat incorrect here. First, channel bonding is not an optical technology. While in theory you could channel-bond most Layer-2 network protocols, in practice you can't. Every Ethernet fabric that is worth a damn, right up to the 10G terabit fabrics, has bonding built into it. It could be copper, fiber, wireless, WDM, or whatever; as long as it is an Ethernet fabric, you can bond arbitrary channels in the fabric. For most other protocols, channel bonding ends up being a Layer-3 protocol hack.

WDM is convenient primarily because it means you don't have to run a separate physical link to each port while still using the bonding capabilities of the fabric. That's more a convenience issue than anything else.


shamino said:
If you use this as an argument for saying that Ethernet currently runs faster than 10G, then I can use that same argument to claim that V.90 dial-up runs faster than 10G. After all, I can use SONET muxes to merge millions of voice channels into a single OC-192 and then use WDM to mux dozens of those onto a single fiber.
No. That's the great thing about fiber, period. I can do the same WDM and channel bonding with ATM, Frame Relay, POS, or any other fiber-based technology and hit those same bandwidth levels.


Again, not really true. With Ethernet fabrics there is no translation layer, but with most of the other things you mention there is, in practice, a pretty big bottleneck from the encapsulation and layer translation needed to do all that. A properly done high-performance Ethernet fabric is completely "flat" from edge to edge. To put it another way, bonded Ethernet channels are like two parallel highways that both go to the same endpoint, and traffic can take either one at the fork. For those other protocols, when they get to the fork for the parallel highways, they are forced to change vehicles.

The bandwidth of a given Ethernet network is the same as the bandwidth of its switch fabric. While you can bond other protocols, the consequences and broader interactions are a little different. (Not that anyone on this thread really cares about those details.)


shamino said:
As for what service providers are using for their metro-area facilities, you know as well as I do that there is no universal consensus over what is "best" here. There are a lot of competing technologies and different providers are using different systems. Price and politics play as much a role in selection as the technical merits of any given system.


Actually, a pervasive Ethernet Layer-2 is slowly becoming the consensus technology. There are a lot of reasons why, but primarily it is because it is superior to SONET, ATM, etc. by just about any metric you care to use (cost, latency, bandwidth, reliability). There are already companies that run a globally switched Layer-2 Ethernet network from edge to edge. Ethernet fabrics are ridiculously fast and extremely flexible if you don't jump into Layer-3.
 
Colsa to TRIPLE Xserve cluster by year's end

I just heard that Colsa plans to triple the size of the Army's cluster by year's end. It is supposed to grow to over 4,672 Xserves.

Can you imagine? That's over 74 TFlops.
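
For what it's worth, that figure matches the theoretical peak if you assume dual 2.0 GHz PowerPC 970s per Xserve at 4 floating-point ops per clock each (two FPUs doing fused multiply-adds):

# Back-of-envelope peak for the expanded cluster (assumed per-CPU figures above)
xserves = 4672
cpus_per_xserve = 2
clock_hz = 2.0e9
flops_per_cycle = 4

peak_tflops = xserves * cpus_per_xserve * clock_hz * flops_per_cycle / 1e12
print(peak_tflops)    # ~74.8 TFlops theoretical peak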

I also heard that they will be adding Infiniband to the complete system later this year.

Cool news, huh?
 
Even if the Army supercluster were scaled to 74 TFlops, it still wouldn't compare at all to a full-scale IBM Blue Gene/L run. Projected power: 340 TFlops (next year).

Early on, someone mentioned a $58 million cluster. Even that, assuming linear speed scaling, wouldn't top the Blue Gene/L. Sorry, but IBM's got Apple beat on this one.


P.S. - A 1/16-scale Blue Gene/L prototype landed #4 on the Supercomputer List. Now that's power. Of course, you COULD argue that it runs off of PowerPC processors. :p
 
iMook said:
Even if the Army supercluster were scaled to 74 TFlops, it still wouldn't compare at all to a full-scale IBM Blue Gene/L run. Projected power: 340 TFlops (next year).

Early on, someone mentioned a $58 million cluster. Even that, assuming linear speed scaling, wouldn't top the Blue Gene/L. Sorry, but IBM's got Apple beat on this one.


P.S. - A 1/16-scale Blue Gene/L prototype landed #4 on the Supercomputer List. Now that's power. Of course, you COULD argue that it runs off of PowerPC processors. :p

Yes, and you can also argue that the cost of Blue Gene/L will be over $350 million.
No one will be running BG/L exclusively (i.e., as a single user running a single job). It will more than likely be running hundreds of small jobs at the same time, each using maybe a few hundred processors.
One could argue: wouldn't it be better to buy 70 15-TFlop systems for the same money?
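
Taking the numbers in this thread at face value, $350 million spread over 70 clusters works out to roughly $5 million each, and the aggregate peak would be well above BG/L's projected 340 TFlops:

# Rough comparison using the numbers floating around this thread
bgl_cost = 350e6        # claimed Blue Gene/L cost, in dollars
bgl_tflops = 340        # projected BG/L performance quoted above
clusters = 70
tflops_each = 15

cost_each = bgl_cost / clusters         # ~$5 million per cluster
aggregate = clusters * tflops_each      # 1050 TFlops of aggregate peak
print(cost_each, aggregate, bgl_tflops)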
 
iMook said:
....assuming linear speed scaling, wouldn't top the Blue Gene/L. Sorry, but IBM's got Apple beat on this one.

Good. I wish they'd just quit. I ordered my decked-out G5 tower 2 weeks ago, and I want it before it becomes a liquid-cooled pod in a Matrix-like college basement. :)
 
shamino said:
In this particular case, according to the C|Net article, it is being used for hypersonic aerodynamic simulation. That is, a simulation of how aircraft parts behave at speeds much faster than the speed of sound.

Probably as a part of designing new vehicles/weapons that move at hypersonic speeds.

Wouldn't it make more sense for the Air Force to acquire something with this much computing power?
 
jhu said:
shamino said:
In this particular case, according to the C|Net article, it is being used for hypersonic aerodynamic simulation. That is, a simulation of how aircraft parts behave at speeds much faster than the speed of sound.

Probably as a part of designing new vehicles/weapons that move at hypersonic speeds.
Wouldn't it make more sense for the Air Force to acquire something with this much computing power?
What makes you think the Air Force doesn't employ contractors with supercomputer clusters already?

As for what the Army is doing with it, they're obviously not designing planes. The Army doesn't fly them. And a hypersonic helicopter sounds pretty ridiculous to me.

But there are other things that might be appropriate, like hypersonic bullets (perhaps fired from a rail gun) or missiles.
 
jetdesigner said:
I'm very glad to hear that Apple's technologies and products will help push the frontier of aeronautics. To get past the current stagnation in commercial aviation technologies, basic research like hypersonics and scramjet engines is needed. This kind of computing power gives researchers the capability to model and predict hypersonic aerodynamics (computational fluid dynamics) and match experimental data from flight test programs ($$$$$$$$) to theoretical data. This in turn leads to better tools that allow more accurate predictions and more optimized designs for next-generation vehicles. Imagine vehicles that can carry passengers from LA to Tokyo in about 1 hour.

From the replies above, it's obvious that some people feel that Apple's contract for providing this research tool to the military is morally wrong. However, I must point out that the kind of cost involved in aerospace research usually means that the private sector cannot or will not make the long-term R&D investment required. This is further exacerbated by the development of a stock-market-based economy where quarterly results determine the success or failure of a company. Companies are increasingly unwilling to make long-term R&D commitments with corporate funding because the payoff may not come for 10 years or more. Take a look at the history of aviation: all the major advancements that became commercially viable are due to military investment in the necessary basic research and technology development. Jet engines and rocket engines are just two examples. Large aircraft and the associated structural and materials technologies that make inexpensive air travel possible today are similarly due to the military's need for large transports and bombers.

Technology is always a two-edged sword. Any technology can be used to benefit society or to hurt it. It's not the technology or even the weapons that kill. It's the people who employ those weapons who bear the ultimate moral responsibility.

It's good to know that our favorite fruit company is not only helping with bio-technologies, but also aerospace technologies. And if the rumors of CATIA being ported to OS X turn out to be true, that would be fantastic news. Companies like Boeing use thousands of CATIA workstations. The 777 was developed using CATIA running on IBM workstations built on PowerPC processors and the AIX operating system. I would love to see some or all of these CATIA workstations in the aerospace industry replaced with OS X-based CATIA systems. It would be nice to see Apple make a serious comeback in the aerospace industry.

I couldn't have said this any better myself.
 