Originally posted by idea_hamster
I noticed that they make a point of noting that every one of these has a 48x CD drive and a -- you guessed it! -- 1.44MB floppy drive! "The floppy is dead! Long live the floppy!" Now, I can imagine that a floppy might be useful ... but I can't figure out how ...:confused:

Umm, no. Every BladeCenter chassis has a floppy and CD drive, not every blade. You push a button on a blade, and that blade becomes the one the floppy/CD/keyboard/video/mouse/etc. points to.

Each chassis holds up to 14 blades in 7U (1U = 1.75 inches of vertical height for rack mounting). The back has removable parts so you can have redundant power supplies, gigabit switches, and Fibre Channel. The front-to-back channelled airflow design (as I've mentioned before) probably influenced a certain aluminum tower from your favorite fruity computer company.
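The density win is easy to work out from those numbers. A minimal back-of-the-envelope sketch (the 14-blades-in-7U figure is from the post; the 1.75-inch rack unit is the standard definition, and the 1-server-per-1U baseline is an assumption for comparison):

```python
# Rack-density arithmetic for the BladeCenter chassis described above.
# 1U = 1.75 inches of vertical rack height (EIA rack-unit standard).

RACK_UNIT_INCHES = 1.75
CHASSIS_U = 7            # BladeCenter chassis height
BLADES_PER_CHASSIS = 14  # per the post

chassis_height_in = CHASSIS_U * RACK_UNIT_INCHES   # physical height in inches
blades_per_u = BLADES_PER_CHASSIS / CHASSIS_U      # servers per rack unit

# Assumed baseline: a conventional 1U rackmount = 1 server per U.
density_advantage = blades_per_u / 1.0

print(f"Chassis height: {chassis_height_in} in")            # 12.25 in
print(f"Blades per U: {blades_per_u}")                      # 2.0
print(f"Density vs. 1U rackmounts: {density_advantage:.0f}x")
```

So a full chassis packs twice as many systems per rack unit as conventional 1U servers, before even counting the shared power, switching, and KVM.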

It also makes an Xserve sound whisper quiet. ;)

I should know: we have one at work (just with P4 Xeons, not with 970s in them).

A floppy is useful in Linux installations (the main OS dropped onto these will be Red Hat Linux ES or SUSE Linux) for a boot disk. Boot disks can be useful for recovery and can also provide a minimal level of physical security. It might also be useful for a security admin to keep their GPG key on one.

Originally posted by usingmac
If I were Steve Jobs......and I didn't know IBM was releasing this machine....they are a competitor. This just gave Apple a reason to port OSX to Intel.

It's posts like these that make me understand why people roll their eyes when I say I've owned Macs for 18 years.

This is nothing new; IBM has demoed and talked about this for over a year, with a plan to release in Q1 2004 (as I've said many times on this forum). IBM's plan has always been to make the blades interchangeable with each other (P4, 970, Opteron, Itanium?).

In fact, the reason for developing the 970 was to make things like this, not for Apple. Apple was lucky to have IBM and to have some influence in the design (AltiVec/VMX/Velocity Engine), and IBM has a third party to show what this chip can do on the desktop (and in the supercomputer world, ironically).

Until a year and a half ago, Apple didn't have any offering in the enterprise/data center.

Originally posted by Tulse
I wonder how happy Big Blue will be, or whether there aren't already some sort of contractual limitations on future Apple server hardware that IBM demanded in exchange for pulling Job's cojones out of the fire with the 970.

As a sign of how strong the partnership is, look no further than the fact that these are 2x1.6GHz units coming out when Apple will probably have a 2x2.5GHz. Seems like they're giving their "competitor" their best parts as well as a six-month exclusive!

Competition? IBM has been competing with its partners for the last two decades! They had OS/2 for a long time while they continued to sell and support Windows. On their low-end Unix servers you get two flavors of Linux as well as their own AIX and Windows (they maintain something like six different operating systems and support nearly every other one). They license patents and technology to their competitors in the hard drive space (until Hitachi bought the division) and the microchip space ("flip chip", copper interconnects, etc. are all IBM inventions). How do you think AMD Athlons were the first chips to break 1GHz? (Answer: IBM's copper interconnect technology reached AMD via Motorola, while Intel had miscalculated and didn't realize that aluminum would hit a nearly unbreakable barrier at those frequencies.) The list goes on and on.

Originally posted by Rocketman
Correct me if I am wrong but Linux eliminates Altivec from the scene?
Correction: you're wrong. AltiVec optimizations can be done on Linux, but unlike ICC on x86 (Intel's compiler, which is rarely used by most but essential for IA64), the optimizations aren't done automatically.

However, IBM has a much stronger compiler division than Intel, and they have plans to put such things into XL C (IBM's compiler) and possibly into gcc (Apple uses a patched version of this) if the gcc team will take them (GCC prioritizes portability over speed).

BTW, the only reason Apple is #3 on the list and not IBM is that the IBM servers wouldn't be ready until Q1 2004. The machines Virginia Tech would have liked to buy were 1U 2x2GHz 970 rackmounts with PCI-X (or blades, if the BladeCenter could house a switch for InfiniBand). Check the talk if you don't believe me--it was their first choice.

Originally posted by G5orbust
Anyone notice that IBM crippled the main feature of the Apple G5


The thing to realize is that power and space are at a premium on a blade. This means that, depending on the vendor, some parts are swapped out with notebook parts (Transmeta CPUs in RLX, notebook hard drives...). In some extreme cases, they just use crappy parts and rip you off instead (Dell leaps to mind).

Having said that, I don't understand why they couldn't have put a maximum of 8GB of Chipkill RAM (like ECC on steroids) on these things.

Originally posted by wrylachlan
Just out of curiosity, how important is having a hard drive to a server you would use as part of a cluster??? IBM fits 7 blades in a 5U enclosure, but those blades have room for 2 40gig harddrives in each one. Would removing them shrink the blades enough to fit in 3U?
We use the hard drive for the operating system and to run the software. The database and other essentials are offloaded via Fibre Channel to a drive array. At least one hard drive is important; a second isn't needed since another blade can pick up the slack if one fails.

It won't shrink the blades at all. These things are very "deep"; the depth could be shrunk, but that would just leave more room in the back--that depth is already fixed by the chassis design, which I believe IBM and Intel are pushing as a standard.

IIRC, the drives are IBM/Hitachi 2.5" (notebook) drives anyway and are mounted directly on the blade, not those big 3.5" drives with hot-swap mountings you see on the Xserve.

Instead, the idea is that if any part on a blade fails, you simply pull out the whole blade (system failover takes care of the loss) and replace it.

But the nice thing about BladeCenters is their expandability and flexibility.

Take care,

terry
 
Originally posted by tychay
We use the hard drive for the operating system and to run the software. The database and other essentials are offloaded via Fibre Channel to a drive array. At least one hard drive is important; a second isn't needed since another blade can pick up the slack if one fails.

It won't shrink the blades at all. These things are very "deep"; the depth could be shrunk, but that would just leave more room in the back--that depth is already fixed by the chassis design, which I believe IBM and Intel are pushing as a standard.

IIRC, the drives are IBM/Hitachi 2.5" (notebook) drives anyway and are mounted directly on the blade, not those big 3.5" drives with hot-swap mountings you see on the Xserve.

Take care,

terry

I can't comment on the 5U form factor being a standard; the reason I was asking had more to do with rumors of a 3U Apple device and the possibility of it being a blade server. That being said, since Apple has no desire whatsoever to produce x86-compatible hardware, I'm not sure why they would have to conform to a standard that IBM and Intel are setting. Is there enough of an economy of scale in the blade server market that conforming to the 5U standard could save Apple money?

As for the OS residing on the hard drive--NetBoot makes this unnecessary. And as for the software running off the hard drive--my understanding is that clusters are used on really difficult problems which take a long time to compute, in which case the ratio of time spent loading the executable from network-attached storage to actual processing time is negligible. I mean, if a cluster is churning away on a simulation of a nuclear explosion for a day or so, then the difference between loading the program off a local hard drive versus loading it off an attached Xserve RAID is insignificant.
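The ratio argument above is easy to put numbers on. A minimal sketch, with all figures (binary size, network throughput, run length) being illustrative assumptions rather than measurements:

```python
# Load-time vs. compute-time ratio for a long-running cluster job.
# All three inputs are assumed, illustrative values:
EXECUTABLE_MB = 100     # assumed size of binary + libraries
NET_MB_PER_S = 50       # assumed effective throughput from network storage
COMPUTE_HOURS = 24      # assumed length of the simulation ("a day or so")

load_s = EXECUTABLE_MB / NET_MB_PER_S   # seconds to load over the network
compute_s = COMPUTE_HOURS * 3600        # seconds of actual computation
ratio = load_s / compute_s

print(f"Load time: {load_s:.0f} s, compute time: {compute_s} s")
print(f"Load/compute ratio: {ratio:.2e}")
```

Even with pessimistic throughput, the load phase is a few seconds against a day of computation, a ratio on the order of 10^-5, which is the "insignificant" claim in concrete terms.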

I would think that the coding necessary to work around the lack of a swap drive would be more problematic than the issues you brought up.

And as for the depth-versus-height thing: if you start out with the premise of 3U and design the board from the ground up around that, then obviously it doesn't matter where in a 5U blade the HD is, just the difference in overall space. To phrase it another way: if you removed the HD and associated chipset from a 5U blade, would the total volume of the blade be small enough that, if you laid it out differently, it could fit in 3U?

Why would this be an advantage, you ask? Well if apple wanted to build on the success of the VA cluster and really corner that market... I mean they already make a unit specifically for clustering...
 
Originally posted by wrylachlan
I can't comment on the 5U form factor being a standard, the reason I was asking had more to do with rumors of a 3U apple device and the possibility of it being a blade server. That being said, since Apple has no desire whatsoever to produce x86 compatable hardware, I'm not sure why they would have to conform to a standard that IBM and Intel are setting. Is there enough of an economy of scale in the blade server market that conforming to the 5U standard could save Apple money?
Er, yes - in a pretty big way. One thing to keep in mind is that these devices are only really of interest to the enterprise market - and the higher end of that market, to be honest (most people I work with are happier with fewer, larger SMP machines (32-64 way), for example).

What does that mean? Basically, that massive amounts of design work go into making these enclosures bulletproof. Remember, a blade is designed to fail gracefully (and for that to be very, very rare), but if your case goes out you've just lost a large number of systems. There is no point whatsoever in spending tons of money trying to redesign something that, quite frankly, doesn't need to be redesigned.

Besides, with a standard case, I can mix and match 970s, Xeons, Itaniums, whatever I need. Why would I move to a proprietary case? It's the same argument you could make against Apple releasing a 15" rack width instead of the standard 19" (but more so). Yeah, they could, and it would take up less space, but it would cost them more money and nobody would buy them in significant numbers.

Out of curiosity, what have you heard about this 3U device?

-Richard
 
Originally posted by rjstanford
Er, yes - in a pretty big way. One thing to keep in mind is that these devices are only really of interest to the enterprise market - and the higher end of that market, to be honest (most people I work with are happier with fewer, larger SMP machines (32-64 way), for example).

What does that mean? Basically, that massive amounts of design work go into making these enclosures bulletproof. Remember, a blade is designed to fail gracefully (and for that to be very, very rare), but if your case goes out you've just lost a large number of systems. There is no point whatsoever in spending tons of money trying to redesign something that, quite frankly, doesn't need to be redesigned.

Besides, with a standard case, I can mix and match 970s, Xeons, Itaniums, whatever I need. Why would I move to a proprietary case? It's the same argument you could make against Apple releasing a 15" rack width instead of the standard 19" (but more so). Yeah, they could, and it would take up less space, but it would cost them more money and nobody would buy them in significant numbers.

Out of curiosity, what have you heard about this 3U device?

-Richard

OK, maybe I'm not making myself clear. I'm not talking about an IBM blade; I'm talking about a potential Apple blade, which was hinted at in the 3U rumor a while back (maybe it was Page 2). If Apple were to make a blade server, which is the point I was asking about, they would have to design it themselves from scratch, since they don't currently offer a blade. So I'm not sure what your argument about redesigning something which already works has to do with it. Are you saying that IBM and Intel are making a 5U standard such that an IBM blade could work in, say, a Dell enclosure? Even so, why would Apple, which is all about integration, design a blade to go in an IBM enclosure? What is the benefit of standardization to someone entering the market?

My question about economy of scale was asking: "Are the parts used in the blades or enclosure less expensive if you go with 5U, because there are enough 5U blades out there to create an economy of scale on the various components?" If so, then I could see the cost benefit of going 5U, but my take on the number of blades shipped is that it isn't that many...

I'm sorry if I'm belaboring the point, but I feel as though you are very knowledgeably answering questions that are slightly tangential to what I'm asking. Let me ask it this way: "With the knowledge you have of blades, if you were in charge of designing an Apple blade specifically for the Apple market, what decisions would you make and why?"
 
Personally, I think the ideal would be to have IBM selling OS X-compatible blades and offering OS X as an option... with enterprises, actual functionality is not nearly as important as the name of the company selling the product. IBM is a NAME. Apple is a detriment. It's like cars: if you put the Toyota or Honda label on, people will buy it; even the Ford blue oval will put a car into high sales. The Chrysler label means you have to add $4,000 in incentives to an already-lower price to get sales. (And before anyone takes offense... I bought a Toyota... and a Chrysler. But to the average customer, Chrysler is somewhere below Hyundai, and to the average enterprise customer, Apple is somewhere below Packard Bell.)
 
Originally posted by wrylachlan
What is the benefit of standardization to someone entering the market?
Just that they could build the blade itself without having to design, manufacture, warehouse, ship and support all of the supporting hardware, such as the case. It's the same reason that Sony makes a car stereo but doesn't make a car to put it in (although not quite to that extent).
Let me ask it this way - "With the knowledge you have of blades, if you were in charge of designing an Apple blade specifically for the Apple market, what decisions would you make and why?"
The blade market is pretty small - you're generally going after people who have massively parallel needs. These folks are not going to be easy to convince to switch to unproven hardware in the first place. I would say that Apple needs 2-3 more years in the server marketplace before really addressing this market. So far, they've been unable to produce a decent server at a reasonable price with any kind of product roadmap (the current Xserve is, quite frankly, a bit of a joke).

If Apple proves that they've got what it takes to play in the server market, and has the solid uptime quality to back it up, people may start wanting denser solutions from them. But that's probably going to appeal to those wanting over 40 CPUs - until that point, we're only talking a single rack (including power, monitor, switch, et cetera), which isn't that big a deal. Below this point, there's no reason to pay the premium for a blade-type solution.

Finally, if I buy a standard solution from a vendor such as IBM, I don't have lock-in issues. I can drop in systems with 970s, Itaniums, Xeons ... whatever. Enterprise shops, while they tend to be loyal, also tend to value flexibility. Apple would really have to have a compelling solution from a pricing standpoint (without reducing the quality and SLAs available from IBM, et al) to gain significantly here.

Quite frankly, the best thing that Apple could do to gain headway in the enterprise would be to release a product roadmap. Show me guaranteed support times and desupport dates for the current hardware and software, and show me a couple of years worth of expected hardware product development, and I'll really get interested. Add some real hardware support, with decent SLAs, and I'm getting excited.

Right now, for example, if we had gone with Xserves and wanted to stay with the platform, we'd be paying $5,500 for a dual 1.3GHz G4 box. That's just plain uncompetitive. To put it into perspective, I can source a dual 3GHz Xeon from Dell with redundant power supplies (not even an option on the Xserve) for $4,200 if I drop the support down a couple of levels to match AppleCare's crappy coverage ("The hardware repair coverage provides onsite response within four hours during business hours, and next-day onsite response when you contact AppleCare after business hours"). That's 20% less cost for, conservatively, double the performance. IBM's prices are a little higher, but come with better SLAs and a more proven history (and still give better value than Apple's server). IBM also has a really great channel sales program, whereas I haven't seen anything from Apple encouraging vertical resellers.
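The arithmetic behind that comparison, using only the list prices quoted above (the "double the performance" figure is the post's own conservative estimate, not a benchmark):

```python
# Price/performance comparison from the figures quoted in the post.
xserve_price = 5500    # dual 1.3 GHz G4 Xserve, as quoted
dell_price = 4200      # dual 3 GHz Xeon from Dell, reduced support level

price_ratio = dell_price / xserve_price    # Dell's price as fraction of Xserve's
savings = 1 - price_ratio                  # fraction saved by buying Dell
PERF_RATIO = 2.0                           # the post's conservative 2x claim

# Performance per dollar, Dell relative to Xserve:
price_perf_advantage = PERF_RATIO / price_ratio

print(f"Dell costs {savings:.0%} less")
print(f"Price/performance advantage: ~{price_perf_advantage:.1f}x")
```

The exact savings come out closer to 24%, so the "20% less cost" claim is, if anything, understated; combined with the assumed 2x performance, that is roughly 2.6x the performance per dollar.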

Until they address some of these issues, Apple will continue to be a marginal player in the enterprise field. And, a few universities aside, those are the customers most interested in high-powered, space-dense solutions like blade servers.

-Richard
 
Originally posted by allpar
Personally, I think the ideal would be to have IBM selling OS X-compatible blades and offering OS X as an option... with enterprises, actual functionality is not nearly as important as the name of the company selling the product. IBM is a NAME. Apple is a detriment.
Forget the name; look at the track record. IBM has a great system with some really good people behind it. I can pick up the phone and get answers about future products, system tuning assistance, whatever support I need. No worries whatsoever. With Apple, when I look at their history, I see that they released the Xserve back in mid-2002, have provided a 30% speed increase over the past 15 months, and have absolutely no information about when, if ever, they'll update it (or even whether they'll bother to keep producing it). This, quite frankly, is why IBM is so much more appealing to people having to spend six or seven figures on business-critical systems.

-Richard
 
Originally posted by wrylachlan
Are you saying that IBM and Intel are making a 5U standard such that an IBM blade could work in, say, a Dell enclosure?

Note, the above post wasn't a reply to mine but you quoted mine earlier and I think I should clear up some ongoing misconceptions.

First, I never claimed that Apple would use IBM's enclosure. I just claimed that IBM's 970-based JS20 has been expected for a while now and, in fact, is later than expected.

I agree it is possible that Apple's 3U server is a blade center. Look at Dell's 6x2 P3s in a 3U configuration--honestly, is this the sort of configuration you're looking forward to? More likely is something like the Sun blades. (We considered purchasing those while they were still under development.)

But this sort of leaves you wondering: secrecy may be all well and good among consumers, but enterprise customers like to see a roadmap. You are talking about people committing their company to a product for the next 3-10 years. If Apple has a 3U offering in the offing, they should inform people about it, as well as other enterprise-class plans. But I guess Apple only gives such things to Pixar. ;)

No, the IBM enclosure is not 5U; it's 7U. It houses 14 blades and associated cabling (networking, KVM, and the like). We've had one at work for the last half year or so. The design is excellent.

Yes, IBM and Intel have partnered up to standardize the design. That is exactly what I've been saying. In fact, the blade Intel sells is just a rebadged IBM one.

No, IBM blades do not work in a Dell enclosure. Because of the R&D involved, Dell can't cost-compete with IBM/Intel or anyone else for that matter, so they are pushing their own standard based on PC-quality parts instead of PC-standard ones. What Dell has been doing for the last year is selling a three-year-old personal computer in a blade configuration and charging you blade prices. The supply chain advantages don't count for squat right now in the BladeCenter market.

They will someday; that's why IBM and Intel have allied. IBM figures the key will be software and services to manage the blades (and the flexibility to use different architectures in the same BladeCenter), which goes well with their business initiatives. And Intel figures commoditization of the blade market will sell more CPUs and, more importantly, finally get Itanium's foot in the Pentium door.

I don't know what your obsession is with removing the hard drive. There is no blade on the market that comes without one, and they all use notebook hard drives, which are quite small. Even the "Big Mac" cluster in Virginia finds the local ATA drives of its G5 compute nodes useful, and, if anything, a big locally accessible drive is preferable for certain things to centrally stored NAS or Fibre Channel.

There is no way Apple has a chance of "cornering the market" as long as IBM or Intel produce the processor being used. The big purchases of blades are for rendering, where the per-computer OS license is paramount (hence the prevalence of Linux). They can, however, be a decent niche player. Some SMEs like us like the blade configuration because it has a low-cost entry point but room to grow, and the flexibility to avoid vendor, platform, or OS lock-in. If Apple introduces its own non-compatible platform, though, it won't be considered by others like us.

Don't count out the possibility of Apple licensing their OS to IBM, though. IBM is a company that has rebadged many a thing (anyone remember Palm Vs badged as IBM WorkPads?), and they willingly sell and support competitors' products... (that "elephants can dance" thing).
 
Originally posted by wrylachlan
I'm sorry if I'm belaboring the point, but I feel as though you are very knowledgably answering questions that are slightly tangential to what I'm asking. Let me ask it this way - "With the knowledge you have of blades, if you were in charge of designing an Apple blade specifically for the Apple market, what decisions would you make and why?"

If I were Apple, I would not spend R&D dollars on a blade server design at this time. Apple does not have the credibility in the enterprise hardware market and Mac OS X Server currently does not solve any problems that Linux, Solaris and AIX do not.

If I were Apple, I would focus on building great dual- and quad-processor systems and innovative storage systems. The vast majority of the server market is dual- and quad-processor systems, and Apple can use them to build its brand name. In addition, the R&D dollars can be used to bring out dual- and quad-processor workstations using many common parts. They could even be the same products, only in vertically oriented cases instead of horizontal rackmount ones.

I would create two server products and three storage products, and improve the software offerings.

One server would be a 1U dual processor system using PowerPC 970 processors as soon as possible. The second server would be a 3U quad processor system using PowerPC 970 processors, also as soon as possible.

One storage product would be a 1U RAID array with Serial ATA drives and a Fibre Channel interface. The second storage product would be a 3U RAID array with Serial ATA drives and dual Fibre Channel interfaces. The third storage product would be a standard cabinet filled with individual 3U storage arrays and two or four Fibre Channel interfaces for the whole cabinet.

Finally I would work with companies like HP, IBM and Veritas to get their enterprise management software running under Mac OS X.

Only after accomplishing all of that would I even consider going after the blade market. Mac OS X still would not offer anything above Linux, Solaris, and AIX, but at least it would be close to parity, so Apple could use the lure of sexy Mac OS X workstations to tip the balance in its favor.
 