Also, what does it mean that empty bays have a "non-functional" drive-module, so you can't put a drive in there afterwards?
I don't think that's what it means. I would take this to mean you can still install "qualified" third-party HDs.

(This being: "Drive bays not configured with a qualified drive module ship with a nonfunctional blank drive carrier and do not support installation of nonqualified third-party hard drives," from the tech specs.)
 
eSATA please - it's now a standard feature on many (maybe even most) systems, or an easy, cheap add-on.

The only problem with eSATA is that you can't plug it directly into a router and share the RAID array that way like you can with a fibre optic connection. You have to plug it directly into the machine itself, which limits its uses.
 
1U systems are not enterprise-worthy. We either buy blades for the density, management, and power savings, or we buy 4U systems that can scale to 32 CPUs and a boatload of memory. Apple makes neither of those types of systems. The 1U server is a dinosaur and not innovative at all.

4U servers are completely impractical in a datacentre. Blades have their own issues as well.

I would love to see a web host put lots of 4U servers in a datacentre and then promptly go bust because their density is too low and they need to pay for excessive rack space.
 
1U systems are not enterprise-worthy. We either buy blades for the density, management, and power savings, or we buy 4U systems that can scale to 32 CPUs and a boatload of memory. Apple makes neither of those types of systems. The 1U server is a dinosaur and not innovative at all.

In your opinion.

1U servers with 8 Nehalem cores running ESX 3.5, attached through fibre to a SAN, are very worthy.
Blades would have been great had we continued down the path of physical servers. And they still are for large data centers. But not for medium-sized businesses. The chassis are ridiculously expensive, and the cost per blade is barely cheaper than an equivalent 1U.
1U pizza boxes with ESX attached to a SAN are, in my opinion, the way to go.
 
The only problem with eSATA is that you can't plug it directly into a router and share the RAID array that way like you can with a fibre optic connection. You have to plug it directly into the machine itself, which limits its uses.

What are you calling "fibre optic", anyway?

I'm going to assume you mean Fibre Channel, since that makes the most sense.

I'll only say two things about FC.

  1. Dual-channel 4Gb Fibre Channel card [Add $600.00]
  2. Brocade 300 - switch - 8 ports - $2,659.99

A fibre SAN is a bit pricey - and the DroboPro is really better for a home server, it's hardly enterprise grade SAN storage.
 
Cool, if a bit expensive.

To go along with my new XServe, I'm ordering a NEW DroboPro rackmount 8-bay enclosure!!

I'm in need of a solid backup arrangement and was looking at Drobo recently. Nice, but a bit expensive. Still don't know which route I will take, as a home-built Linux server has a cost advantage. Drobo has a good reputation, so it is still a contender, and setting up one of those new Xserves to manage a backup volume has its own appeal. Decisions, decisions.

By the way, a Nikon/Mamiya user here.
 
Better than the previous Xserve's integrated 64MB ATI chip. :D

But I always figured that you could throw a Mac Pro-compatible graphics card in one of these things, since the Xserve is basically a "squished" Mac Pro, right? Or is there even any physical room for a full-sized graphics card? :confused:

Better than the previous Xserve's integrated 64MB ATI chip.


Define "better". A server's graphics card only has to be able to display the desktop well enough to let you admin the system. Anything else is a waste of money and resources, and as such the ATI graphics would still be fine in my book.

throw a Mac Pro-compatible graphics card in one of these things

You might be able to fit a Mac Pro graphics card in there if there is room for the card's cooler (previous servers have had 'full height' card spaces, so in that sense there is room for them on the back plate). Again though, why would you? If you need that kind of graphics running on that kind of hardware, then Mac Pros are over there to your left...

And mini display port on a server? Thanks Apple, a non-standard (in server room terms) display socket that I'll have to fiddle around with adapters to plug into my rack KVM is just what I needed :rolleyes:.

Can't see an option for 10Gb Ethernet either :-(

I'll be buying one because our G5-based FCP data server is looking about ready for a rest, but things like DisplayPort and having to pay this much extra for hardware RAID still seem a bit toytown on a server to me.
 
No kidding. Apple should license OS X Server to server OEMs and step out of that market themselves. OEM manufacturers can innovate and compete in this market segment in a way Apple clearly cannot.

1U systems are not enterprise-worthy. We either buy blades for the density, management, and power savings, or we buy 4U systems that can scale to 32 CPUs and a boatload of memory. Apple makes neither of those types of systems. The 1U server is a dinosaur and not innovative at all.

I don't think there's much wrong with 1U servers per se. Not so good as your sole server option though, that's for sure.
 
I don't think there's much wrong with 1U servers per se. Not so good as your sole server option though, that's for sure.

Why not?

It really depends on the system architecture as to whether or not solely 1U's are acceptable. As I said before, a few 1U's, a SAN, and VMWare ESX negates the need for large individual servers in a lot of cases, particularly in small to medium-sized businesses.
 
I have a very large family (let’s say 15 to 20 people in the house). I want anyone in the house to be able to go on any of our computers and use it as if it were their own. Let’s say I have 3 or 4 Mac minis and a couple of notebooks, all of them "fair game" for anyone in the house.

Would xserve be a decent option?

No. There's no need for that much power for a simple file server. A machine designated as a server running Mac OS X would probably meet your needs just fine, say a Mac Mini. If you want to go all out, buy Mac OS X Server and put that on there. An Xserve in that case (and most others) is just a waste of money and electricity. The Minis are incredibly light on electricity, and have more than enough power to handle a small network.

Can someone who gets one of these new Xserves do a quick test to see if you can install Linux on it? I know it's a pipe dream, but we have been talking to Apple for so long about this; it's a secret hope of mine that at some point they will add support for it in EFI.

You can install Linux on any other Intel Mac, I see no reason why you couldn't on an Xserve. The question is, why would you want to? Linux will run on any x86 or x64 server, which can be had for far less than the Xserve. The only reason to buy an Xserve is to run Mac OS X Server.

Xserve advantage over a Linux/Unix server.

I never really see an advantage of using an Xserve over a Linux/Unix server. Is there something that makes them really stand out from the others? OS X in my opinion is a really good desktop/laptop OS, but for server functions it is just as good as Linux/Unix systems are.

When they were PowerPC chips I could understand the value for those who would rather have PowerPC over Intel architecture, but now I don't see any advantage.

I would never use Mac OS X Server, but then I deal with web servers, where Linux is far and away the best choice. Hell, Mac OS X Server just runs Apache anyway.

By my understanding, Mac OS X Server is intended to compete with Windows Server rather than Linux. It's good for running file/netboot/administration services on a large corporate network, or for dealing with streaming media. If you run a web server on it, that would just be because you don't need to dedicate another box to the purpose.
 
After looking at the specs it looks like a nice upgrade.

Really, it looks like a nice, if not exactly innovative, upgrade to a server-class machine.

As for the people thinking about this as a better/cheaper alternative to a Mac Pro, it does have potential. Yes, they are noisy, but that can be dealt with. One way is to simply locate the machine in another room. How well this might work with a DisplayPort cable is an open question, as I don't know the maximum length permitted. It is intriguing to consider, as an Xserve in the cellar ought to isolate the noise to an acceptable extent.

One other thing I'd like more info on is the power budget in the expansion slots, to determine just what can go in there GPU-wise. Not so much for use as a Mac Pro replacement, but rather as an OpenCL platform. Supposedly they have a 750-watt power supply in the machine, so hopefully there are a few watts available for an OpenCL accelerator of the user's choosing. I'm actually surprised that Apple hasn't aggressively gone after the high-performance computing space here; of course OpenCL isn't ready yet, but they ought to be able to get the hardware out there.

I especially took notice of the following:

1.
The SSD boot drive is an excellent idea. Done right this becomes a drive that is primarily read and thus is a cheap way to beef up server performance.

2.
The Apple drive modules. Unfortunately this isn't something noted positively, as one doesn't want to have to pay excessively for storage. Considering recent firmware problems in the industry I do understand a bit, but it is an excessive restriction on customer flexibility in my mind. What if you want to throw the latest and greatest SSD tech in there three months down the road? It is a mixed bag, but alternative storage in these servers ought to be at the user's discretion.

3.
The units have a nice port allotment but I'd like to see more USB ports. Especially one more at the front. The thing here is that USB devices have completely replaced just about everything else for file transfer that doesn't involve the network. Plus you still need a port for mouse and keyboard.

4.
Lots of memory!

5.
Still running 10.5.x. This of course is both good and bad. It is hoped that Snow Leopard Server will dramatically improve performance on the new Intel processors. It makes the suggestion that people wait for Snow Leopard plausible.

Dave
 
Your perspective is not universal and frankly dated.


Better than the previous Xserve's integrated 64MB ATI chip.


Define "better". A server's graphics card only has to be able to display the desktop well enough to let you admin the system. Anything else is a waste of money and resources, and as such the ATI graphics would still be fine in my book.
Unfortunately the world does not read your book. The world has all sorts of uses for a 1U box that could benefit from serviceable graphics. Not every 1U box ends up being a web-serving slave.
throw a Mac Pro-compatible graphics card in one of these things

You might be able to fit a Mac Pro graphics card in there if there is room for the card's cooler (previous servers have had 'full height' card spaces, so in that sense there is room for them on the back plate). Again though, why would you? If you need that kind of graphics running on that kind of hardware, then Mac Pros are over there to your left...
Why would you? Well, how about use as a computational server, for one simple answer. I can see a lot of Xserves being configured this way once OpenCL hits in force. That is, of course, if the cards can fit in there and not otherwise compromise the system. This is still traditional server hardware stacking, just with a different sort of output versus a web server.

By the way, no, the Mac Pro would not do the job here. The question in my mind is whether Apple had the foresight to make sure the latest cards from ATI and Nvidia will work in the machine.

And mini display port on a server? Thanks Apple, a non-standard (in server room terms) display socket that I'll have to fiddle around with adapters to plug into my rack KVM is just what I needed :rolleyes:.
This makes about as much sense as the people that whine about HTML mail. The world moves forward my friend, take the attitude above and eventually you will get left behind.
Can't see an option for 10Gb Ethernet either :-(

I'll be buying one because our G5-based FCP data server is looking about ready for a rest, but things like DisplayPort and having to pay this much extra for hardware RAID still seem a bit toytown on a server to me.

I really would have to think about the viability of hardware RAID on such a server. If you really think you need hardware RAID, you ought to have an external box that can reasonably hold all the disks you need. Software RAID on a modern box works fine with three drives. This isn't a G5 box at all. Save the slots for the faster networking cards.

Dave
 
Fibre Optic.

How so? Fibre optic is a perfectly valid way of describing the fibre cards that Apple supply.

Great - your wiki link points to a page with a photo of a 1.5 Mbps TOSlink !!


"Fibre optic" is a generic description of the physical medium.

It's the same as saying that you've connected a disk with copper. "Copper" could be USB, 1394, SATA, SAS, GbE, parallel port, ...

"Fibre" could be TOS, FDDI, 10GbE, FC and quite a few other protocols. It's ambiguous...
 
4U servers are completely impractical in a datacentre. Blades have their own issues as well.

I would love to see a web host put lots of 4U servers in a datacentre and then promptly go bust because their density is too low and they need to pay for excessive rack space.

Don't know where you're coming from. We have many HP DL580s running VMware, with many web sites per system. We do similar virtualization for our SQL Server instances on the same type of system. We've consolidated over a hundred systems onto a few VMware clusters.

http://h10010.www1.hp.com/wwpc/us/en/sm/WF05a/15351-15351-3328412-241644-3328422-3454575.html

Blades are very mature and practical. Again I don't know where you're coming from.
 
I especially took notice of the following:

1.
The SSD boot drive is an excellent idea. Done right this becomes a drive that is primarily read and thus is a cheap way to beef up server performance.

Not really. It is far more effective at reducing energy costs when the bulk of your storage is off in the NAS/SAN (i.e. off the server). This way the OS drive, which isn't doing much (you shouldn't be heavily swapping most of the time), does nothing when there is nothing to do.

http://www.apple.com/xserve/performance.html [ go down to the disk section]

The SSD, if you want to pull/push a lot of data, is right in the same ballpark as the SATA hard drive; this is a SATA SSD, after all. You can get nice random I/O response, but your OS should NOT be doing tons of random probes at the disk.

SSDs are much better as a cache for the spinning drives, or for doing tons of random I/O with block sizes that match their read/write block size.
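To put the sequential-vs-random distinction in perspective, here is a back-of-envelope sketch. The IOPS figures are illustrative assumptions for a 7200rpm disk and an early SATA SSD, not measurements of Apple's drive modules:

```python
# Rough time to service a burst of random 4K reads, HDD vs early SATA SSD.
# Both IOPS figures are illustrative assumptions, not vendor specs.

def seconds_for_random_reads(num_reads: int, iops: float) -> float:
    """Time to complete num_reads random operations at a sustained IOPS rate."""
    return num_reads / iops

HDD_IOPS = 120    # ~8 ms per seek+rotate on a 7200rpm disk
SSD_IOPS = 5000   # a modest figure for a 2009-era SATA SSD

reads = 100_000   # e.g. walking lots of small files and metadata
print(f"HDD: {seconds_for_random_reads(reads, HDD_IOPS):.0f} s")  # ~833 s
print(f"SSD: {seconds_for_random_reads(reads, SSD_IOPS):.0f} s")  # 20 s
```

The gap is enormous for random access, but for big sequential pulls the SSD and the hard drive really are in the same ballpark, which is the point above.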


2.
The Apple drive modules. Unfortunately this isn't something noted positively, as one doesn't want to have to pay excessively for storage. Considering recent firmware problems in the industry I do understand a bit, but it is an excessive restriction on customer flexibility in my mind. What if you want to throw the latest and greatest SSD tech in there three months down the road? It is a mixed bag, but alternative storage in these servers ought to be at the user's discretion.

SSD on the PCI-e slot which isn't bounded by SATA/SAS speeds can have a bigger impact (if Mac OS had drivers for them).

http://news.cnet.com/8301-13924_3-10212989-64.html

so you may not want to put them into Apple drive modules. The IOPS of an SSD is in a different class from what is usually attached to a SATA/SAS bus. It's an open question whether you really want to use that bus for the latest/greatest SSD tech in the future. Packaging SSDs as SATA is nice, since that makes them a driverless option, but it puts limitations on the drive.


Dell and HP have similar market segmentation for the drives that go into their server class boxes also.

If the limitation leads to a substantially higher likelihood of server uptime, then most folks make that trade-off: stick to the smaller subset of drives that the hardware vendor says is OK.


3.
The units have a nice port allotment but I'd like to see more USB ports. Especially one more at the front. The thing here is that USB devices have completely replaced just about everything else for file transfer that doesn't involve the network. Plus you still need a port for mouse and keyboard.

In a temporary, need-a-physical-keyboard-for-some-deep-maintenance context, what is wrong with daisy-chaining the mouse off the keyboard?
In normal mode nothing should be hooked to the USB ports on the front. If you want to do KVM, that is better put in the back with the rest of the wires that permanently hang out of the back of the box.

As for storage: the DVD-ROM drive? If it is a specific file (or files) that all the machines need after being restored, burn a disc and walk it around.


5.
Still running 10.5.x. This of course is both good and bad. It is hoped that Snow Leopard Server will dramatically improve performance on the new Intel processors. It makes the suggestion that people wait for Snow Leopard plausible.

Dramatic improvements? Not very likely. Some incremental gains of a couple of percent, yeah. But when has a new OS been 25-50% faster than the last one (that wasn't a complete dog)?
 

Better than the previous Xserve's integrated 64MB ATI chip.


Define "better". A server's graphics card only has to be able to display the desktop well enough to let you admin the system. Anything else is a waste of money and resources, and as such the ATI graphics would still be fine in my book.

Currently yes, but wait when OpenCL comes...
 
Why not?

It really depends on the system architecture as to whether or not solely 1U's are acceptable.

Given that you only have one kind of building block, when this "depends" swings to an architecture that requires something other than a 1U, then solely depending on 1Us is a problem. If you narrow the scope solely to 1U problem/solution pairs, then having only a 1U is fine... but that is circular. All system architectures don't naturally fit into a 1U solution.

Just like in the consumer market, Apple is niched in the server market. For companies that want to buy from just one vendor no matter what the problem/solution, that is a limitation.



As I said before, a few 1U's, a SAN, and VMWare ESX negates the need for large individual servers in a lot of cases, particularly in small to medium-sized businesses.

Exactly how many active virtual machines do you get running on a 1U/ESX box? I can perhaps see consolidating stuff that was only at <10% utilization at max load anyway. However, if you had several 1U boxes at 40-50% capacity, how do you consolidate those into a single 1U box?
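The arithmetic behind that objection is simple enough to sketch. The 80% headroom cap and the utilization figures below are illustrative assumptions, not ESX defaults:

```python
# Can a set of loaded 1U boxes collapse onto a single host of the same size?
# The 0.80 headroom cap and all utilization figures are illustrative assumptions.

def fits_on_one_host(utilizations, headroom=0.80):
    """True if the summed utilization stays under the headroom cap."""
    return sum(utilizations) <= headroom

lightly_loaded = [0.08, 0.05, 0.10, 0.07]  # boxes idling under 10%: easy win
heavily_loaded = [0.45, 0.50]              # two boxes at 40-50%: no room

print(fits_on_one_host(lightly_loaded))  # True
print(fits_on_one_host(heavily_loaded))  # False
```

Consolidation pays off for the near-idle case; for already-busy boxes, the sums simply don't fit in one chassis of the same size.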
 
The only problem with eSATA is that you can't plug it directly into a router and share the RAID array that way like you can with a fibre optic connection. You have to plug it directly into the machine itself, which limits its uses.

The only problem with eSATA is that Apple totally ignores it. It would be perfectly usable in the Xserve, iMac, Mac Pro, even the Mac mini. Imagine plugging in an external storage box with extra drives and port-multiplier support. Apple are stupid to omit eSATA. FW is fine for camcorders but makes no sense as an eSATA alternative.

Let the numbers speak:
FW 800: 800 Mbit/s
eSATA: 3 Gbit/s (i.e. you can put 3 current SATA HDDs on one port through port-multiplier support and still not limit the performance of the HDDs by the interface).
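To spell those numbers out: here is a quick sketch assuming ~70 MB/s sustained per drive (an assumption for circa-2009 disks). Note the link rates are raw signalling rates, so real payload throughput is somewhat lower on both interfaces:

```python
# How many contemporary HDDs can each interface feed at full speed?
# 70 MB/s sustained per drive is an assumption; link rates are raw signalling
# rates, so actual payload throughput is lower on both interfaces.

FW800_LINK_MBIT = 800    # FireWire 800
ESATA_LINK_MBIT = 3000   # eSATA at 3 Gbit/s

def drives_at_full_speed(link_mbit: int, drive_mb_per_s: int = 70) -> int:
    """Whole drives a link can feed at full sustained speed (raw-rate basis)."""
    return int(link_mbit / 8 // drive_mb_per_s)

print(drives_at_full_speed(FW800_LINK_MBIT))  # 1 -> one drive already caps FW800
print(drives_at_full_speed(ESATA_LINK_MBIT))  # 5 by raw rate; ~3-4 after overhead
```

Either way you slice the overhead, a single fast drive saturates FW 800 while an eSATA port has headroom for a port-multiplied enclosure.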
 
Great - your wiki link points to a page with a photo of a 1.5 Mbps TOSlink !!


"Fibre optic" is a generic description of the physical medium.

It's the same as saying that you've connected a disk with copper. "Copper" could be USB, 1394, SATA, SAS, GbE, parallel port, ...

"Fibre" could be TOS, FDDI, 10GbE, FC and quite a few other protocols. It's ambiguous...

It may be ambiguous in a general sense, but when discussing the Xserve and hard-drive arrays (à la the old Xserve RAID) I assumed the meaning was implied. Sorry if you didn't see that.

Don't know where you're coming from. We have many HP DL580s running VMware, with many web sites per system. We do similar virtualization for our SQL Server instances on the same type of system. We've consolidated over a hundred systems onto a few VMware clusters.

http://h10010.www1.hp.com/wwpc/us/en/sm/WF05a/15351-15351-3328412-241644-3328422-3454575.html

Blades are very mature and practical. Again I don't know where you're coming from.

My point was that if you are offering 4U dedicated servers, you are wasting rack space when you could fit more servers into the same footprint.
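The rack-space point is easy to put a number on. The per-U colo price below is an illustrative assumption, not a real quote:

```python
# Monthly rack-space cost for 40 single-purpose servers, 1U vs 4U chassis.
# The $/rack-unit/month figure is an illustrative assumption, not a colo quote.

COST_PER_U_PER_MONTH = 30  # assumed colo price per rack unit per month

def monthly_space_cost(num_servers: int, units_each: int) -> int:
    """Total monthly rack-space cost for a fleet of identical servers."""
    return num_servers * units_each * COST_PER_U_PER_MONTH

print(monthly_space_cost(40, 1))  # 1200
print(monthly_space_cost(40, 4))  # 4800
```

Quadrupling the rack units per server quadruples the space bill, which is exactly the density problem a host offering dedicated 4U boxes runs into.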

The only problem with eSATA is that Apple totally ignores it. It would be perfectly usable in the Xserve, iMac, Mac Pro, even the Mac mini. Imagine plugging in an external storage box with extra drives and port-multiplier support. Apple are stupid to omit eSATA. FW is fine for camcorders but makes no sense as an eSATA alternative.

Let the numbers speak:
FW 800: 800 Mbit/s
eSATA: 3 Gbit/s (i.e. you can put 3 current SATA HDDs on one port through port-multiplier support and still not limit the performance of the HDDs by the interface).

eSATA would be good for desktop use, I agree. But as I stated before, it is not practical for pooled server resources because you cannot directly network it; you always need to connect it to a computer and then share the resources through that computer.
 
I think they have a little more room in a 1U than the Mac Pro chassis, so they can fit those 2 extra DIMMs.

Looks like a pretty nice server, now I just need something to stick on it first before I drop $3000 on a server.


No, not really. It is because in the Xserve the DIMM slots are perpendicular to the main motherboard. In the Mac Pro they are parallel (because they're off on a daughtercard that has limited space).

HP's Z-series boxes (http://www.hp.com/sbso/busproducts-workstations.html)
are about the same size as a Mac Pro and have 12 DIMM slots. They don't have a CPU/memory daughtercard architecture, though. It may be harder to do all the easy upgrades you can do with the Mac Pro without getting your hands into tight places or removing lots of stuff.
 
My point was if you are offering 4U dedicated servers you are wasting rack space when you can fit more servers into the same density.

Of course you can. The number of physical servers doesn't necessarily equal the number of deployed servers. You can put more virtual machines with moderate workloads on a 4U box than you can on a 1U box. De facto, you can turn a 4U box into somewhat of an equivalent of a blade chassis. What matters is whether you can run the 4U box at 80+% utilization all the time. If each server runs at <20% utilization, the problem is not 1U vs. 4U; it is consolidation.

Virtualization may collapse CPU load, but it does not collapse memory and/or I/O. Typically you can get more I/O and/or memory out of a 4U box than a 1U one, especially when you have multiple VMs going at it at the same time.
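A quick sketch of why memory, not CPU, is often the binding constraint. All the RAM figures here are illustrative assumptions (a 2009-era 1U box topped out far below a big 4U machine), and no memory overcommit is modelled:

```python
# Memory as the binding constraint when stacking VMs on one host.
# RAM ceilings and per-VM sizes are illustrative assumptions; no overcommit.

def max_vms(host_ram_gb: int, vm_ram_gb: int = 4, hypervisor_gb: int = 4) -> int:
    """VMs that fit after setting aside the hypervisor's own reserve."""
    return (host_ram_gb - hypervisor_gb) // vm_ram_gb

print(max_vms(48))   # 11 VMs on an assumed 1U RAM ceiling
print(max_vms(256))  # 63 VMs on an assumed 4U RAM ceiling
```

Even if the CPUs could time-slice many more guests, the smaller box runs out of DIMM slots first, which is the 4U advantage being argued here.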



Similarly, a datacenter isn't solely comprised of web servers. DB servers? Analytics? Data warehouses? Do those always fit in 1U boxes? (You can cluster these things to some extent, but if you're offering them up as services you have individualized workloads, so you may not necessarily be able to cluster just a single virtual instance.)


P.S. It's somewhat moot anyway, because Xserve boxes aren't really prime targets for ESX (and competitors). Somewhat of a catch-22: with no 4U (or big) boxes, Apple won't ever drum up a big enough market to make them a target.
 
FW is fine for camcorders but makes no sense as esata alternative.

FW would work if they deployed the latest standard. But that doesn't seem likely. FW 800 was made a standard in 2002. It isn't going to compete with the alternatives created in 2006 and later, any more than a server using a CPU from 2002 would compete with CPUs created in 2006+.

Apple probably is OK with letting Firewire die off. Makes Intel happier (USB 3.0 gets no competitors) and it is one less battle to champion.

But yeah, this is another capability of FireWire: it was a cheap clustering interconnect technology. It gets tossed for USB and SATA, which provide nothing of the sort.
 