More blanket statements. Yeesh. Depending on the type of service, "RAID and redundant hardware" has monumental costs in administration, because of the close coupling of the compute and the storage. It is very obviously not the best choice for everything, particularly if we are talking about "The Enterprise" as in high capacity/many users/other raw number metrics.

Look, we get you like the Google model of 10,000 cheapo boxes that you can afford to lose 10% to 15% at any one time. The fact is, it works for Google because they designed their software in-house to scale to such an installation. Try getting similar from Oracle, VMWare, IBM, HP.

I don't want to have to write my own J2EE application server in-house just because I want to run it off the surplus PCs that have accumulated over the years; in the end it's cheaper to just get a few ProLiant boxes with proper hardware redundancy and serviceability and run WebSphere, JBoss, or GlassFish. I'll still run them in an Active-Active or Active-Passive configuration on top of fully redundant, online-serviceable hardware, unless we're talking C&B, DEV, or QA systems.
 
And you run a pretty funny shop where you can afford to leave your redundant systems down for any length of time when a simple hot-swappable mirrored drive would've provided an added level of security.

No one is advocating hot-swap drives as the be-all, end-all. They are one tool in a big toolbox of redundancy and availability. Who cares that my mail services use Active-Active clustering over multiple nodes; it's never ideal to leave one of those nodes failed because I wanted to save $500, nor is it ideal to have to plan downtime on a node because I was too cheap, either.

Same for redundant power supplies, same for redundant nodes in a cluster, same for redundant fabrics in a fiber channel SAN, same for redundant mirrored storage arrays.

Each is part of a whole that makes sure your systems are available for as long as you can possibly plan for.

It's not ideal, but nothing ever is. Everything is a compromise. Saving $500 x hundreds or thousands of machines is extremely significant. Why would you spend thousands, tens of thousands, hundreds of thousands, millions of dollars to attempt to ensure that a redundant node can have a drive swapped without being taken offline? What is this drive being used for? If it's being used for persistent storage, then there is a line where the cost of the kind of assurance you're talking about becomes prohibitive compared to the cost of moving away from local storage. Everything is a continuum, everything is a compromise, and everything depends on the situation.

You keep talking in phrases that are way too general to say anything useful about them.
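The cost trade-off described above can be put in rough numbers. A minimal sketch in Python; only the $500-per-machine saving comes from the thread, while the fleet size, node price, and spare ratio are made-up figures for illustration:

```python
import math

# Hypothetical fleet economics: only the $500-per-machine saving comes from
# the discussion above; the fleet size, spare ratio, and node price are made up.
def fleet_savings(machines: int, saving_per_machine: float) -> float:
    """Total saved by dropping a per-node feature (e.g. hot-swap bays)."""
    return machines * saving_per_machine

def spare_node_cost(machines: int, spare_ratio: float, node_price: float) -> float:
    """Cost of providing redundancy one level up: keeping spare whole nodes."""
    return math.ceil(machines * spare_ratio) * node_price

saved  = fleet_savings(2000, 500)           # $1,000,000 across 2,000 machines
spares = spare_node_cost(2000, 0.05, 3000)  # 100 spare nodes at $3,000 each
net    = saved - spares                     # still well ahead after buying spares
```

The point of the sketch is only that the numbers scale with the fleet: per-node savings multiply across thousands of machines, while node-level redundancy is bought in proportion, so where the line falls genuinely depends on the situation.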
 
This person, and many angry people, need to realize that the new Mac Pro server replacing the Xserve is the same price, has a better processor (2.8 instead of 2.2), and has MUCH more storage (2TB vs. 160GB). I'm going with the Mac Pro on this one.

Taking only 1U of rack space matters far more than 2TB of internal disk on a server. Losing that is a HUGE hit, on the form factor alone.

Calculating BTU & space, especially in a datacenter where space is leased by the U, is going to be a major issue.

For my enterprise, with approximately 10,000 Macs, I now have my authentication strategy on end-of-life hardware. The fact that Apple thinks a ****ing tower server under a desk somewhere is going to be an adequate replacement for the XServe is crazy, terrible, and just... wow. It's really, really, really bad.

To me, this is a very bad sign for OS X Server, Directory Services, and Managed Preferences in the enterprise. This might work for small mom-and-pop design shops, but it'll never fly in the enterprise, especially in shops running a global Active Directory deployment that can't extend the schema to support Managed Preferences.
 

sportsfan said:
It's rough for enterprise users, but it's not like they killed off anything running OS X Server. It's just a differently shaped machine now....

Apple did kill off something... the mini and the Pro configured as a server may give you everything software-wise, but they are missing the redundant power supply and hot-swappable drives... two BIG/MAIN things required of a SERVER.

I also agree with the poster who said that this could cause sales of Mac client computers to go down as well, if these servers were being used to maintain/manage them....

Yet Google runs their servers with none of these. The need for hot swap parts is a sign of poor data center design. We are talking commodity hardware here, it is far cheaper to have an extra server or two take the load of any failed unit.

Often when listening to people talk about their data centers it is obvious they are wrapped up in the old ways. For what people spend on "server grade" hardware they could build multiple data centers with commodity parts.
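The "extra server or two" argument above can be made concrete with a little probability. A sketch, assuming independent failures and made-up uptime figures (95% for a cheap commodity box, 99% for a single "server grade" one):

```python
from math import comb

def availability_at_least(n: int, k: int, p_up: float) -> float:
    """P(at least k of n independent servers are up), each up with prob p_up."""
    return sum(comb(n, i) * p_up**i * (1 - p_up)**(n - i)
               for i in range(k, n + 1))

# Suppose the service needs capacity worth 10 machines. Four cheap 95%
# spares comfortably beat a single 99% "server grade" box:
single_server = 0.99
cheap_cluster = availability_at_least(14, 10, 0.95)  # ~0.9995
```

The figures are illustrative, not measured; the point is only that redundancy at the node level can exceed per-node reliability, which is the design trade the Google-style model makes (at the cost of software that tolerates node loss).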
 
Apple may only sell 10,000 Xserves, but maybe each one of those supports a lab of hundreds of Mac Pros and iMacs.

IT managers might be buying many fewer Macs because of this.

We use Xserves and we are going to be screwed!
 
Apple may only sell 10,000 Xserves, but maybe each one of those supports a lab of hundreds of Mac Pros and iMacs.

IT managers might be buying many fewer Macs because of this.

We use Xserves and we are going to be screwed!

That's exactly the issue. What are those 10,000 servers doing? Most of them are servicing larger Mac deployments. The Mac is dead; this is just a precursor.
 
Look, we get you like the Google model of 10,000 cheapo boxes that you can afford to lose 10% to 15% at any one time. The fact is, it works for Google because they designed their software in-house to scale to such an installation. Try getting similar from Oracle, VMWare, IBM, HP.

Nope. I've spent my life in the world you're talking about, not the Google world.

I'm talking about services running on racks of Sun gear, or HP or IBM (both commodity installs for running things like Oracle or web applications, and HPC clusters). Etc. You're right that very few operations other than Google could pull off what they do (and frankly, they get too much credit for what they do--their uptime and data-loss history on everything but search is not one I'd take over the similar services from, say, Yahoo or MS).

You claim that you aren't saying that hot-swappable drives are the be-all, end-all, but the very first post of yours that I was responding to said exactly that. That's all I've been addressing. Now that you've taken it back, I accept that we're more in agreement than your initial hyperbole made it seem. ;)
 
hmm

I think they were not selling well because of who Apple gears their stuff towards. They gear their stuff towards users who just need things to work. Big organizations, I think, would be using Linux and Windows for servers, and the Apples would be used for video production or by users who wanted an Apple.

I was running a G4 MDD with OS X 10.5 on a Windows Server 2008 network just fine.

I do not see the need for an Apple server. I can see Mac Pros and iMacs, but I see no need for an Apple server.

Small businesses that run all Macs I can see using Mac minis. Once you start getting to medium and large scale, I don't see these Xserves being popular.
 

Mattie Num Nums said:
The Mac is dead; this is just a precursor.

Where have I heard that before :rolleyes:
 
It's not ideal, but nothing ever is. Everything is a compromise. Saving $500 x hundreds or thousands of machines is extremely significant. Why would you spend thousands, tens of thousands, hundreds of thousands, millions of dollars to attempt to ensure that a redundant node can have a drive swapped without being taken offline? What is this drive being used for? If it's being used for persistent storage, then there is a line where the cost of the kind of assurance you're talking about becomes prohibitive compared to the cost of moving away from local storage. Everything is a continuum, everything is a compromise, and everything depends on the situation.

You keep talking in phrases that are way too general to say anything useful about them.

You are confusing client computers and servers.

If something needs to be up full time, it has to be running 99.9% of the time during the span it is required. For a 24/7 server, that means it can be down for at most about 8.76 hours per year. Those 8.76 hours include downtime for maintenance, and I can tell you nine hours per year is cutting it pretty close just on the maintenance time required. If things are really important, you start adding 9's to the 99.9, so the allowable window gets insanely small. As soon as you need 99.9% for any length of time, you need to be able to hot swap and keep on running.

For your workstations and employee desktops, that just means keeping a few spare desktops around that people can hop to. For your servers, you cannot afford to have spares running. The power requirements alone would kill you.
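The nines arithmetic above is easy to check (note that 99.9% of an 8,760-hour year works out to 8.76 hours of allowed downtime). A quick sketch:

```python
HOURS_PER_YEAR = 24 * 365  # 8760

def max_downtime_hours(availability: float) -> float:
    """Allowed downtime per year, in hours, for a given availability target."""
    return (1.0 - availability) * HOURS_PER_YEAR

three_nines = max_downtime_hours(0.999)    # 8.76 hours/year
four_nines  = max_downtime_hours(0.9999)   # ~53 minutes/year
five_nines  = max_downtime_hours(0.99999)  # ~5.3 minutes/year
```

Each added nine divides the budget by ten, which is why the window "gets insanely small" so quickly.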
 
You cannot pretend to sell servers if all you'll have is a single model.
 
You are confusing client computers and servers.

Not remotely. :) You need to read more closely, you're too eager for a fight.

If something needs to be up full time, it has to be running 99.9% of the time during the span it is required. For a 24/7 server, that means it can be down for at most about 8.76 hours per year. Those 8.76 hours include downtime for maintenance, and I can tell you nine hours per year is cutting it pretty close just on the maintenance time required. If things are really important, you start adding 9's to the 99.9, so the allowable window gets insanely small. As soon as you need 99.9% for any length of time, you need to be able to hot swap and keep on running.

You are simply, obviously, and demonstrably wrong. You clearly have no knowledge of data center design that uses redundancy at levels other than down at the drive bay or PSU. This is fine--there are a lot of things I don't know anything about, either--but you really shouldn't speak so authoritatively about the field.

For your workstations and employee desktops, that just means keeping a few spare desktops around that people can hop to. For your servers, you cannot afford to have spares running. The power requirements alone would kill you.

Ah, so you can conceive of some other level of redundancy, but apparently think it means having a duplicate of every machine... ? See above. ;)
 
Linux != Mac OS X Server

Many posts seem to suggest that a Linux server can easily stand in for a Mac OS X Server.
Nope.

Using Mac OS X Server is all about managing Macs in a Mac way, by Mac admins. Mac OS X Server uses Open Directory to polish the Macs on your network. NetBooting with DeployStudio gives you the best in image administration.
Ever tried setting up Podcast Producer on a non-Apple server?

Mac OS X Server *has* to run on Apple hardware... well, at the moment.

So, if you want to run Mac OS X Server in your server room, you needed an Xserve.

I just wonder what will happen next:
- Mac OS X Server to be virtualised?
- Mac OS X Server to be licensed to a 3rd-party server maker (like the Xserve RAID >> Promise VTrak move)? Ha... how about Mac OS X Server PPC_64 on a Power 6 by IBM... :p
 
It's not ideal, but nothing ever is. Everything is a compromise. Saving $500 x hundreds or thousands of machines is extremely significant. Why would you spend thousands, tens of thousands, hundreds of thousands, millions of dollars to attempt to ensure that a redundant node can have a drive swapped without being taken offline? What is this drive being used for? If it's being used for persistent storage, then there is a line where the cost of the kind of assurance you're talking about becomes prohibitive compared to the cost of moving away from local storage. Everything is a continuum, everything is a compromise, and everything depends on the situation.

And as I said earlier, that's why we moved away from local storage to a SAN based solution. Guess what, I'm not leaving a failed drive in the storage array just because it's not sitting in a server anymore and I'm still not using a Mac Pro as a server just because all the storage needs are met by the fiber channel adapter.

You keep talking in phrases that are way too general to say anything useful about them.

And you aren't? You never actually address any of my arguments; you keep simply dismissing them and insulting me. Yet you tout your big knowledge. At least I don't just dismiss others; I tell them why they are wrong. You may not like the manner in which I do it, but I still tell them exactly what is wrong with what they said.

You have yet to answer even one of my points. You keep on attacking me and just saying "it doesn't matter!"
 
And you aren't? You never actually address any of my arguments; you keep simply dismissing them and insulting me. Yet you tout your big knowledge. At least I don't just dismiss others; I tell them why they are wrong. You may not like the manner in which I do it, but I still tell them exactly what is wrong with what they said.

This is clearly silly, but I'm not going to argue it with you. I'm not going to change your mind about yourself.

You have yet to answer even one of my points. You keep on attacking me and just saying "it doesn't matter!"

I've answered all of your points, inasmuch as any of them were specific enough to answer. You say that hot-swap and multiple PSUs are mandatory, full stop, for any "server". I say you are quite clearly wrong (and you've now acknowledged it was an overreach, so we agree). I listed counterexamples, such as the various blade ecosystems, and the one you also have now brought up--moving (at a certain point) to SAN over local storage.

Filling up a server room with Mac Pros would be dumb, and I never said it wasn't (go back and look). I'm just continually annoyed by the blanket generalizations around here.
 
I suspect they'll have to revisit this. With the explosion of the iPhone, the iPad, and the associated web sales of apps for each, there's a pretty decent case to be made that a custom intranet-over-the-internet option for businesses is available that would be unrivaled by anyone. I suspect BlackBerry is doing this with their iPad rip-off hardware. So if that starts pushing Apple sales in the business sector down, and business journals start reporting that the BlackBerry iPad has more server-side options for enterprise admins, Apple will at least consider getting serious about selling an end-to-end app development/deployment/iPhone-iPad-MBA solution for the enterprise sector. The screwier thing is that Apple has been trying for YEARS to get significant numbers with their "Utilities/iWork" apps to partially displace some of MS Office's power, and the iPad/iPhone is probably the crowbar that will let them do it.
 
I've answered all of your points, inasmuch as any of them were specific enough to answer. You say that hot-swap and multiple PSUs are mandatory, full stop, for any "server". I say you are quite clearly wrong (and you've now acknowledged it was an overreach, so we agree). I listed counterexamples, such as the various blade ecosystems, and the one you also have now brought up--moving (at a certain point) to SAN over local storage.

My blades and SAN both have hot-swappable, online-serviceable parts, including the drive trays.

I don't see how either is an example of "hot-swap drives" being a thing of the past. Any server should at least have online-serviceable storage if it's even going to use local storage.

The Mac Pro is not a data-center-ready server, not even close. Heck, it's hardly a small-group-ready server; Dell has towers with hot-swap drive bays in them. I'll go even further: my home NAS, made by some Chinese company (QNAP), has hot-swappable drive trays... in a $400 package....

That is not surprising. The real question is how well Macs integrate in an Oracle environment.

Oracle is pretty client-agnostic. If you're talking about it as a server platform, they don't yet ship 11g, but 10g Release 2 is available.
 
As has already been posted repeatedly, they use Sun (now Oracle) servers and Solaris. Or, in other words, Apple has never eaten its own dog food. Now THAT should make one think -- obviously, Apple never believed that their own hardware and server operating system was up to the job, or else they would have used it.

When Microsoft bought Hotmail, they at least tried to migrate everything to Windows Servers. It appears that they show a different attitude towards their own products in Redmond.

This question was asked before the answer was given (that's how it usually works). You should read the threads before complaining like a girl.
 
Not remotely. :) You need to read more closely, you're too eager for a fight.



You are simply, obviously, and demonstrably wrong. You clearly have no knowledge of data center design that uses redundancy at levels other than down at the drive bay or PSU. This is fine--there are a lot of things I don't know anything about, either--but you really shouldn't speak so authoritatively about the field.



Ah, so you can conceive of some other level of redundancy, but apparently think it means having a duplicate of every machine... ? See above. ;)

Umm, I believe you are showing that you have zero clue. Knight has been ripping you to shreds, and all you've done is insult.

Also, I never said you need to have two computers for every employee. But at any company you can generally assume there are a few spare computers not in use at any moment in time. So you might only need one extra computer for every 10-15 employees, and at that point it's not uncommon to find an unused computer you can go to in a pinch.

I would suggest that you just give up instead of showing you are just another Apple apologist here.
 
My blades and SAN both have hot swappeable online serviceable parts, including the drive trays.

I don't see how either are an example of "hot swap drives" being a thing of the past. Any server should at least have online serviceable storage if it's even going to use the local storage.

Your "if" in that last sentence is the key thing. Of course your SAN has hot-swappable storage--that's the whole point. Many (most) blades do not, and most blades do not have redundant power themselves. In part because of that "if". I'm going to say again, we're not actually that far apart here. The issue was just your original blanket statements--in your haste to tell someone how stupid they were, you were yourself generalizing to the point of falsity. That's all I cared about.
 
Also, I never said you need to have two computers for every employee. But at any company you can generally assume there are a few spare computers not in use at any moment in time. So you might only need one extra computer for every 10-15 employees, and at that point it's not uncommon to find an unused computer you can go to in a pinch.

I would suggest that you just give up instead of showing you are just another Apple apologist here.

"For every employee?" What in the world are you talking about? Apparently you are the one with desktops on the brain, sir. :) We were talking about data centers.

Feel free to suggest anything you like, once you've caught up on the conversation.
 
I could buy 1 XServe, 10 copies of VMWare, 10 copies of OSX Server, and run 10 servers on one piece of hardware without voiding my EULA.

Do you mean "10 VM licenses"? Some VMware versions do not limit the number of VMs that you can run - you need one license. Others have limits on the number of active VMs, so you'd need one VMware copy with a 10 VM license.

I'm probably not up-to-date on all of the nuances of vSphere licensing - so please correct any mistakes that I may have made.


My last company had 2500 Xserves. The guys I talked to that still work at that location have told me that they just spent 10 million dollars in upgrades 2 months ago.

You say "still work at that location" - did a bunch of them get pink slips on Friday? :(


In this case, Apple makes money on the Macs which are supported in the Enterprise and .Edu environments by their Xserves.

With Apple's record revenues and profits, it really seems short-sighted to cancel the XServe's place as a "link" in the Apple OSX ecosystem.

Companies that are hemorrhaging red ink need to do that - Apple's bottom line is not affected by low XServe sales.


hahaha "the end of Xserve is a knight in our back" classic :)

Note that the sender of the message apologized for any mistakes, since he wasn't a native English speaker. It's hard to be perfect in a handful of languages.


I don't think a lot of people here have any idea what a server does or what Enterprise units do with Xserves.

Have them look at CNBC's graphics SAN using Apple Xserves, and maybe it will help them realize why Mac Pros and Mac minis can't substitute for a rackmount system.


You cannot pretend to sell servers if all you'll have is a single model.

True - and something that I've said here several times.

People would be cheering the "death" of the XServe if Apple had said "We're killing the XServe, but announcing that Apple OSX Server will be supported on certified configurations of ProLiant DL360, DL380 and DL580 systems (both native and under Hyper-V and ESX)".

For those unfamiliar with HP servers, the DL360 base models are 1U systems roughly similar to the XServe (but can be optioned far beyond an XServe).

DL380 systems are basically the same, but in a 2U rack cabinet with more memory and PCIe expansion.

The DL580 systems are 4U systems with quad sockets (up to 32 cores), RAM up to 1 TiB, 11 PCIe slots, 8 hot swap disk slots with 512 MiB or 1 GiB battery-backed write cache, 4 gigabit Ethernet ports with embedded offload engines for both TCP/IP and iSCSI,....

Instead of destroying all cred with the enterprise - Apple could have embraced the enterprise even while shedding the XServe team.
 
Your "if" in that last sentence is the key thing. Of course your SAN has hot-swappable storage--that's the whole point. Many (most) blades do not, and most blades do not have redundant power themselves. In part because of that "if". I'm going to say again, we're not actually that far apart here. The issue was just your original blanket statements--in your haste to tell someone how stupid they were, you were yourself generalizing to the point of falsity. That's all I cared about.

My blade cabinets all have multiple, redundant power supplies; I wouldn't buy blade cabinets that didn't. My blades that have no hard drives are still connected over two distinct fabric switches that can be hot-swapped out of the cabinet.

Everything on my blade systems is hot-swappable. Heck, the blades themselves are (of course that blade is going to be offline, but I don't need to bring down all the blades to service one).

Blade systems are really a bad example when you want to claim hot-swappable parts are a thing of the past...

People would be cheering the "death" of the XServe if Apple had said "We're killing the XServe, but announcing that Apple OSX Server will be supported on certified configurations of ProLiant DL360, DL380 and DL580 systems (both native and under Hyper-V and ESX)".

Instead of destroying all cred with the enterprise - Apple could have embraced the enterprise even while shedding the XServe team.

My point since the beginning. And even if this is ultimately the plan, they just pissed all over it by discontinuing the hardware without announcing this at the same exact time.

You don't leave enterprise customers hanging and waiting in secrecy for an announcement that might or might not come along.
 