My blade cabinets all have multiple, redundant power supplies. I wouldn't buy blade cabinets that didn't. My blades that have no hard drives are still connected over 2 distinct fabric switches that can be hot-swapped out of the cabinet.

Everything on my blade systems is hot-swappable. Heck, the blades themselves are (of course that blade is going to be offline, but I don't need to bring down all the blades to service one).

Blade systems are really a bad example when you want to claim hot-swappable parts are a thing of the past...

Okay, now this is just getting tiresome because you're overtly obfuscatory and deceptive.

You clipped out the part where I explicitly said that hot-swappable drives aren't a thing of the past. The point is that redundancy moves around throughout the stack. Yes, your blade enclosures have redundant power so that the blades themselves do not have to. I said blades, not enclosures. The point is, there are very many different ways to achieve a similar end, and you are now again admitting that your initial claim: "EVERY SERVER must have all redundant components!" is just not true.

So, fin. The point has been made. :)
 
Apple is for Consumers

Stay tuned for "Introducing the iTunes Home Server" from Apple. Create your own cloud to store your movies, photos, TV shows, and apps, accessible from your Mac, PC, iPhone, or iPad.
 
Okay, now this is just getting tiresome because you're overtly obfuscatory and deceptive.

You clipped out the part where I explicitly said that hot-swappable drives aren't a thing of the past. The point is that redundancy moves around throughout the stack. Yes, your blade enclosures have redundant power so that the blades themselves do not have to. I said blades, not enclosures. The point is, there are very many different ways to achieve a similar end, and you are now again admitting that your initial claim: "EVERY SERVER must have all redundant components!" is just not true.

So, fin. The point has been made. :)

And you're arguing semantics and nitpicking. Blade systems are consolidated systems. They do have redundant power supplies; it's just that the cabinet provides this service to each individual blade.

I still consider each of those blades as having redundant power supplies.

The fact is, you still haven't addressed any of my arguments after calling me a nerd for daring to mention ERP in a discussion about the enterprise.
 
I am one of Apple's small business clients running OS X Server 10.5.8 on a 2008 Mac Pro. I had considered buying an Xserve, but settled on the tower because the closet I store the server in is basically just that, a closet.

I think too many OS X Server clients are like me. Basically, we only need one or two servers and our needs are basic: mainly file, print, and VPN services.

I'm sad to see the Xserve go, but am more concerned about the future of OS X Server. Once Apple has decided to end a product, it ends. If there is no dedicated server hardware, then I'm not sure there is much need for a server OS.

When OS X first came out, we had 10.0 Server on a 533 MHz G4 server. There was no Xserve, and it was a risk buying into the platform. I think that now that the product has failed, the risk is probably even higher.

I had planned to replace the server in the first quarter of next year with another OS X Server, but I think I'll hold off until I hear whether or not Apple announces OS X Server Lion.

I'm not so sure that will be the case.
 
And you're arguing semantics and nitpicking. Blade systems are consolidated systems. They do have redundant power supplies; it's just that the cabinet provides this service to each individual blade.

I still consider each of those blades as having redundant power supplies.

The fact is, you still haven't addressed any of my arguments after calling me a nerd for daring to mention ERP in a discussion about the enterprise.

The fact is, you made precisely one claim that was specific enough to confirm or refute: "All servers must have redundancy at the component level." And you have now admitted that it is inaccurate, and the right choice is sometimes redundancy at a different level. If you hadn't clearly been relishing the chance to browbeat someone, I wouldn't have cared. But since you keep trolling these forums abusing people for every mistake, real or imagined, you need someone watching over your shoulder for the same every now and then.

That is all.
 
My point since the beginning. And even if this is ultimately the plan, they just pissed all over it by discontinuing the hardware without announcing this [3rd party hardware] at the same exact time.

You don't leave enterprise customers hanging and waiting in secrecy for an announcement that might or might not come along.

The phrase "at the same exact time" is crucial.

Apple has burned the bridges with the enterprise - no future announcement of virtualization or 3rd party hardware support will be taken seriously.

At best, some companies might use the newly announced support to merely delay their move to a non-Apple environment. It would be career-threatening to trust Apple ever again.
 
Next on the hit list.......


  • Final Cut
  • Motion
  • Color
  • Logic
Basically, anything that falls into the "professional" category.

:(
 
Engage a little common sense....

Yes, your blade enclosures have redundant power so that the blades themselves do not have to. I said blades, not enclosures.

The fact is, you made precisely one claim that was specific enough to confirm or refute: "All servers must have redundancy at the component level."

I think that you're being pedantic here.

Since blades don't have power supplies, how can they possibly have redundant power supplies? Or, how can they have non-redundant power supplies? They have *no* power supplies.

The only reasonable interpretation is that redundant *chassis* power supplies are equivalent to redundant *server* power supplies when applied to blades. (BTW, I have some IBM blades where each chassis has quad power supplies with 3-phase 208-volt input - they use so much power that 3+1 redundancy is required. I know things about datacenter power needs that I wish that I didn't.)
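For what it's worth, that 3+1 sizing is just N+1 arithmetic. A minimal sketch with made-up wattages (the real figures depend on the chassis and PSU models):

```python
import math

# Illustrative numbers only; actual loads vary by chassis and PSU model.
chassis_load_w = 8000   # assumed worst-case chassis draw (watts)
psu_output_w = 2900     # assumed usable output of one power supply (watts)

needed = math.ceil(chassis_load_w / psu_output_w)  # PSUs required to carry the load: 3
installed = needed + 1                             # N+1: one extra unit for redundancy
print(f"{needed}+1 redundancy -> {installed} power supplies installed")
```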

It gets even more twisted when you look at DC-powered racks. None of the rackmount systems have power supplies - but the DC power is redundant.
 
I think that you're being pedantic here.

Since blades don't have power supplies, how can they possibly have redundant power supplies? Or, how can they have non-redundant power supplies? They have *no* power supplies.

The only reasonable interpretation is that redundant *chassis* power supplies are equivalent to redundant *server* power supplies when applied to blades. (BTW, I have some IBM blades where each chassis has quad power supplies with 3-phase 208-volt input - they use so much power that 3+1 redundancy is required. I know things about datacenter power needs that I wish that I didn't.)

It gets even more twisted when you look at DC-powered racks. None of the rackmount systems have power supplies - but the DC power is redundant.

It's useless, Aiden. This guy is just flinging ***** to cover the holes in his argument.
 
This. When you see someone who insists that you need a fully-redundant SAN with hot-swappable everything for their CMS... you're seeing someone who is a lot more interested in growing their budget than doing something sensible. People like this flourish in the much-ballyhooed "enterprise", because near as I can tell "enterprise" actually means "whatever definition is convenient for me in the current argument, but it definitely requires that there be a lot of money involved and ignoramuses with MBAs making the decisions."

Hah, I know what you're talking about... I had this conversation with our IT department.

I was asking why we have a five-year lease on computers that are then so far out of date that you end up paying like $2,000 per computer that was only $500 to begin with.
Well, they told me that if something breaks you just call Dell and they have to deal with it and fix it within 4 hours, blah, blah, blah.
To which I replied: you could give everyone Asus Atom nettops for $250 each, and if one breaks you just swap out the whole computer. You could replace all of the computers every year and still come out paying less than you do with the lease/support.
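To spell out that back-of-the-envelope math (using the figures above; real lease terms obviously vary):

```python
# Rough per-seat comparison using the numbers quoted above; all figures are illustrative.
lease_total_per_seat = 2000        # claimed cost per computer over the five-year lease
nettop_price = 250                 # assumed price of one Asus Atom nettop
years = 5

# Worst case: replace every nettop every single year.
replace_yearly_total = nettop_price * years   # 250 * 5 = 1250

print(f"Lease: ${lease_total_per_seat}, annual nettop replacement: ${replace_yearly_total}")
# Even replacing the whole machine every year comes in under the quoted lease cost.
```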

It's not like they have files on them; everything is stored on the servers. The computers are just nodes.

"But that's they way they want to do it, they want "support" and for it to be someone else's problem.

Bunch of B.S.
 
It's called "The Cloud"

Get used to it... It is the wave of the future. No more traditional data centers.
 
Hah, I know what you're talking about... I had this conversation with our IT department.

I was asking why we have a five-year lease on computers that are then so far out of date that you end up paying like $2,000 per computer that was only $500 to begin with.
Well, they told me that if something breaks you just call Dell and they have to deal with it and fix it within 4 hours, blah, blah, blah.
To which I replied: you could give everyone Asus Atom nettops for $250 each, and if one breaks you just swap out the whole computer. You could replace all of the computers every year and still come out paying less than you do with the lease/support.

It's not like they have files on them; everything is stored on the servers. The computers are just nodes.

But that's the way they want to do it; they want "support" and for it to be someone else's problem.

Bunch of B.S.

It's not BS, it's smart business practice. It's not always about having the cutting edge all the time. For IT departments it's more about having stability in management. If you get new computers every year, the ability to manage those computers drops drastically. A business computer and your home machine are two different things. IT departments need standards and roadmaps; most have the next 3-5 years already planned out and budgeted.
 
Hah, I know what you're talking about... I had this conversation with our IT department.

I was asking why we have a five-year lease on computers that are then so far out of date that you end up paying like $2,000 per computer that was only $500 to begin with.
Well, they told me that if something breaks you just call Dell and they have to deal with it and fix it within 4 hours, blah, blah, blah.
To which I replied: you could give everyone Asus Atom nettops for $250 each, and if one breaks you just swap out the whole computer. You could replace all of the computers every year and still come out paying less than you do with the lease/support.

It's not like they have files on them; everything is stored on the servers. The computers are just nodes.

But that's the way they want to do it; they want "support" and for it to be someone else's problem.

Bunch of B.S.

Not to butt in on your perception, but...

The reason companies keep their equipment (usually 5 years), purchased or leased, is due to their accounting rules. They are probably on a 5-year depreciation cycle.

Believe me, I have tried asking the accounting department to reduce that to 3 years because of the technology whirlwind, but it seems once these cycles are put into place they never change.
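For anyone wondering why accounting digs in: straight-line depreciation just spreads the purchase price evenly over the cycle, and retiring hardware early forces the remaining book value to be written off at once. A small sketch with made-up figures:

```python
# Straight-line depreciation sketch; the purchase price is purely illustrative.
purchase_price = 2000
salvage_value = 0
cycle_years = 5

annual_writeoff = (purchase_price - salvage_value) / cycle_years   # $400 per year
book_value = [purchase_price - annual_writeoff * y for y in range(cycle_years + 1)]
print(book_value)   # [2000.0, 1600.0, 1200.0, 800.0, 400.0, 0.0]
# Replacing the machine at year 3 would mean writing off $800 of book value immediately.
```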
 
Why do you need OS X instead of Linux/Unix servers?

Possibly because upper management has stated they want a single point of contact for the server (hardware and OS)?

The above describes my issue. There are plenty of people in my organization who would just as soon pull every MacBook, iMac, iPod, and iPad out of the field and replace them with Dell desktops, laptops, and netbooks.

The offset argument used in the past (which secured a presence for Apple products) was the single point of contact via an enterprise AppleCare subscription.

Additionally (as has been mentioned earlier), emulating many of the services incorporated in OS X Server is a dubious proposition.

I'm not saying that they cannot be replaced, but the polish that Apple has provided is a selling point for the desktop (think Podcast Producer). Can SUS be recreated under Linux? Sure, but what if your employer isn't sold on homebrew services for something as important as software updates? What if AD is controlled by another department that refuses to extend it, management has declared that OpenLDAP isn't allowed, and you can't get the budget for Centrify?

Look, I'm not the Mac apologist here -- my background is as an independent consultant more accustomed to digging into FOSS projects for client solutions than recommending closed options; but when a client or employer tells you, "You cannot do that," that's that. You can petition, or slowly try to make a change; but rarely do you get to tell your boss, "Hey man, quit whining. Suck it up and toss some Linux on that box or deal with filling up the server room with 12U towers that aren't servers, loser."

Go that route and you wind up figuring out your next job sooner than you planned, and (if you were right all along) they wind up pulling the platform once systems reach EOL instead of refreshing.

Some of you people kill me. "Well, I don't see a need for it, so quit complaining." And this from a crowd that generally wants to see more Macs in public spaces.

Wow.

Here's food for thought: many network admins feel exactly the same way about even having a Mac on their network -- and many support-level technicians feel the same way about having them on the desk. Your attitude is no different from theirs, so you shouldn't be too upset when someone comes along and tells you that you can't place your Apple-branded device on the network at all.

Next on the hit list.......


  • Final Cut
  • Motion
  • Color
  • Logic
Basically, anything that falls into the "professional" category.

:(

Final Cut and Logic are already threatened by the cancer of "it should be more mainstream and user-friendly..." I thought that's why we had iMovie and GarageBand...

My fear (as I've stated a few times today already) is that we're going to wind up with iPads, iPhones, and Airs running a hybrid OS X/iOS and everything else can go away.

Which, I suppose, might actually make things easier in the long run.

I'm personally tired of this fight today. I've spent most of my waking hours since Friday trying to find solutions and now I've got to begin preparing actual suggested courses of action. Budget plans don't like ambiguity.
 
Not to butt in on your perception, but...

The reason companies keep their equipment (usually 5 years), purchased or leased, is due to their accounting rules. They are probably on a 5-year depreciation cycle.

Believe me, I have tried asking the accounting department to reduce that to 3 years because of the technology whirlwind, but it seems once these cycles are put into place they never change.

Or, instead of just using consumer crap that might or might not have identical parts, they lease LTS-type hardware that is guaranteed to be identical each and every time, be it on first purchase or a year later, so that you don't have a proliferation of system images to administer.

That, and availability of parts in case of failures, and the fact that most employees have no clue what "storing files on the damn server" even means and keep saving their files to C:. :rolleyes:

The fact is, you made precisely one claim that was specific enough to confirm or refute: "All servers must have redundancy at the component level." And you have now admitted that it is inaccurate, and the right choice is sometimes redundancy at a different level. If you hadn't clearly been relishing the chance to browbeat someone, I wouldn't have cared. But since you keep trolling these forums abusing people for every mistake, real or imagined, you need someone watching over your shoulder for the same every now and then.

That is all.

But again, blades do have redundant power supplies at the component level; the chassis provides that service. Next you're going to be arguing that my VM images don't have 2 VMDKs mirrored for the boot drive and that both those VMDKs should be stored on separate datastores on different LUNs... :rolleyes:

You're basically multiplying redundancy for the sake of multiplying it. I'm arguing it just needs to exist, and that it's not "old ways" or "things of the past!" as you unilaterally claimed.

Seriously: insults, dismissal, and now you call me a troll? Good job, pot, you just called me a black kettle.
 
Or, instead of just using consumer crap that might or might not have identical parts, they lease LTS-type hardware that is guaranteed to be identical each and every time, be it on first purchase or a year later, so that you don't have a proliferation of system images to administer.

That, and availability of parts in case of failures, and the fact that most employees have no clue what "storing files on the damn server" even means and keep saving their files to C:. :rolleyes:



But again, blades do have redundant power supplies at the component level; the chassis provides that service. Next you're going to be arguing that my VM images don't have 2 VMDKs mirrored for the boot drive and that both those VMDKs should be stored on separate datastores on different LUNs... :rolleyes:

You're basically multiplying redundancy for the sake of multiplying it. I'm arguing it just needs to exist, and that it's not "old ways" or "things of the past!" as you unilaterally claimed.

Seriously: insults, dismissal, and now you call me a troll? Good job, pot, you just called me a black kettle.

I agree with you that enterprise components are better than something you can pick up at Best Buy. However, with the speed at which things change now, you cannot guarantee that your 4-year-old system will still be supported by software vendors. I am just saying I wish they would allow us a faster depreciation cycle.

Don't mean to criticize you, but do you really mirror your VMDKs? Weren't you saying you run on a SAN (implying RAID)?
 
Don't mean to criticize you, but do you really mirror your VMDKs? Weren't you saying you run on a SAN (implying RAID)?

No, of course we don't mirror the VMDKs, I was saying he would try to claim I should next.

The VMDKs are stored on the storage array on the SAN. The fabrics are already redundant, and the LUN is virtualized over dual-parity disk pools (or single-parity disk pools for C&B/DEV/QA-type systems). There's already so much redundancy in the underlying datastore itself that adding yet another layer just multiplies redundancy, exactly as his "blades don't have dual power supplies!" claim does.
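To illustrate the capacity trade-off behind those parity choices, here's a minimal sketch; the disk count and sizes are assumptions, not our actual pools:

```python
# Usable capacity of a parity-protected disk pool; all numbers are illustrative.
def usable_tb(disks: int, disk_tb: float, parity_disks: int) -> float:
    """Capacity left after reserving the equivalent of `parity_disks` drives for parity."""
    return (disks - parity_disks) * disk_tb

pool = dict(disks=12, disk_tb=2.0)
print(usable_tb(**pool, parity_disks=2))  # dual parity (RAID-6-style): 20.0 TB
print(usable_tb(**pool, parity_disks=1))  # single parity (RAID-5-style): 22.0 TB
# Mirroring every VMDK on top of this would roughly halve usable capacity again.
```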
 
No, of course we don't mirror the VMDKs, I was saying he would try to claim I should next.

The VMDKs are stored on the storage array on the SAN. The fabrics are already redundant, and the LUN is virtualized over dual-parity disk pools (or single-parity disk pools for C&B/DEV/QA-type systems). There's already so much redundancy in the underlying datastore itself that adding yet another layer just multiplies redundancy, exactly as his "blades don't have dual power supplies!" claim does.

Sorry, my bad; I got lost in all the back-and-forth...
 
Get used to it... It is the wave of the future. No more traditional data centers.

And where do you think that cloud is going to be stored and processed?

I agree with you that enterprise components are better than something you can pick up at Best Buy. However, with the speed at which things change now, you cannot guarantee that your 4-year-old system will still be supported by software vendors. I am just saying I wish they would allow us a faster depreciation cycle.

Don't mean to criticize you, but do you really mirror your VMDKs? Weren't you saying you run on a SAN (implying RAID)?

This contract from Dell says my system hardware is supported for 4 years. This statement from Microsoft says Windows XP SP2's support would end on July 13th, 2010, a hair under 6 years after SP2's release. These kinds of things let you plan for the future. So yes, you can plan 4 years in advance.

A "hey, we're gonna stop making these in 2 months" with no viable migration path at all (and the suggested alternatives aren't viable) is not how you deal with enterprise customers. Oh, by the way, we're not telling you whether you'll have a server version of our new OS either, or what its requirements will be.

A one-sentence response to enterprise customers, and a marketing department that keeps believing that if they tell us often enough that the Mac Pro is a viable alternative we'll just start to believe it, is outrageous.
 
My fear (as I've stated a few times today already) is that we're going to wind up with iPads, iPhones, and Airs running a hybrid OS X/iOS and everything else can go away.

Which, I suppose, might actually make things easier in the long run.

I'm personally tired of this fight today. I've spent most of my waking hours since Friday trying to find solutions and now I've got to begin preparing actual suggested courses of action. Budget plans don't like ambiguity.

Yep. I'm already wondering what our 5-year plan looks like.

The phrase "at the same exact time" is crucial.

Apple has burned the bridges with the enterprise - no future announcement of virtualization or 3rd party hardware support will be taken seriously.

At best, some companies might use the newly announced support to merely delay their move to a non-Apple environment. It would be career-threatening to trust Apple ever again.

Yep.

Apple nuked those bridges. And they have put a lot of doubt in the mind of someone like me who works with graphics and photo/video professionals.
 
Mac Mini and Mac Pro are not server-class systems.

Any one who has opened an Xserve can see the difference immediately.

And to the people saying 'omg the Mac Pro has a 2TB drive vs. the 160GB drive and lower CPU in the Xserve'... you guys need to focus on the details.

A 2TB consumer HD costs about £70... a RAID Edition 160GB SATA drive costs at least that, if not more... I usually buy Xserves with an SSD boot drive (just to keep all bays free) and 3 x 160GB drives (just to get the caddies), then swap them out for 3 x WD RE4 2TB drives in RAID 5 = a 4TB parity-protected data partition.
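(The 4TB figure is just the standard RAID 5 capacity formula; a quick sketch using the drive counts above:)

```python
# RAID 5 usable capacity: one drive's worth of space goes to distributed parity.
def raid5_usable_tb(drives: int, drive_tb: float) -> float:
    return (drives - 1) * drive_tb

print(raid5_usable_tb(3, 2.0))  # 3 x 2TB WD RE4 -> 4.0 TB usable
# The array survives any single drive failure via parity, not mirroring.
```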

With 2 x 1Gb Ethernet ports bonded as a single 2Gb trunk, I still never max out the 2.26GHz quad-core CPU (one only; no need for dual CPUs for most people).

Big, cheap, unreliable disks and a faster CPU do not a server make...

Sometimes I think Steve Jobs is too focused on the consumer to see the bigger picture. Apple can afford to lose a few $$$ to keep Enterprise and .Edu onboard IMO.
 
No, a good way is to announce EOL with at least 12 months of notice and to provide a migration path (be it a 3rd party hardware vendor for OS X Server or a partnership with VMware to run OS X Server on their ESXi or vSphere products).

This was not well executed at all.

Read the other threads; a Mac Pro doesn't replace an Xserve.


It's a good way for the company, not for the Xserve users. A migration path to 3rd party hardware is never going to happen with Apple; announcing the discontinuation ahead of time, maybe.
 

HLdan said:
OK, that's it. I'm sick and tired of your continued douchebaggery. If someone has a different opinion than you do, it must obviously mean that he's a moron who does not know a server from a hole in the ground!

Just because someone disagrees with you does NOT mean that the other person is incompetent or just plain stupid. So give it a rest already.

Thank you!!!! That's why he's on my ignore list. He treats a lot of people on the forum like they are stupid and he's the only smart one. Impossible to have a conversation with him when he's the only one that's right. Put him on ignore, it works wonders. :)

If only the ignore list blocked out quotes too, I'd use it...
 