http://www.solaris.com

Set up a ZFS array, enable iSCSI targets, or just use NFS. Just do a little reading outside of the Apple world and you can do most anything. I just set up a 6 x 1.5TB array and it hosts everything on my network, including Time Machine backups and the ZFS snapshots.
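
For a rough idea of how little is involved, the OpenSolaris side looks something like this (pool/dataset names and sizes here are just examples, not my exact setup):

    # share an existing ZFS filesystem over NFS
    zfs set sharenfs=on tank/media
    # carve out a 500 GB volume and expose it as an iSCSI target
    # (the OpenSolaris-era shareiscsi property; newer builds use COMSTAR instead)
    zfs create -V 500g tank/backups
    zfs set shareiscsi=on tank/backups
    # snapshot the share
    zfs snapshot tank/media@nightly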

I wouldn't say it's all that easy.

Monster Ars Technica ZFS NAS thread

The issue with Mac support is that you can use a free iSCSI initiator from GlobalSAN, but it can be a little flaky. The $$ initiators work better (like Xtend SAN from ATTO), but at a cost.

I'd be interested in knowing your hardware/software setup. Glad it's working well for you (especially the Time Machine part).
 
I think many people are reading this thinking wow that is awesome, wish I needed one!

+1 Yup.

Apple could make some money competing with Windows Server and all the HP MediaSmart and clone servers floating around. The Mini just might not be enough for everyone.

:apple:
 
If you need a lot of storage for your Xserve, you should look into Xsan; it's a fiber-optic storage solution that can get your Xserve up to 16 TB, but it's very expensive, although very high-performance and reliable.

I think you're referring to the Apple-endorsed Promise RAID, which replaced Apple's own Xserve RAID. The regular Xserve still only supports up to 3TB.

I've been doing some research on Xsan and SANs, in general. Xsan is freaking awesome. It supports up to 2PB!!

nuckinfutz mentioned iSCSI and I mentioned ATA over Ethernet earlier. I've also heard of a new IEEE 1394 interface update which would allow FireWire over Ethernet. I suppose these native updates to OS X would allow the non-Fibre user the ability to use Xsan. This would be awesome.

Question for anyone who uses Xsan:
Apple has a diagram on their site where they show an Xsan network setup. They show an Xserve (5) connected to a Promise RAID (1) via a Fibre switch (2). Then some clients (6) are connected to that Xserve (5) to access the SAN. My question is, couldn't you just use this portion of the setup but replace the Promise RAID (1) and Fibre switch (2) with an external (or internal) hard drive? You'd connect that external (or internal) drive to the Xserve (5) or another Mac Server running Xsan. Then connect clients to that server (5) and tada!: you've got an affordable SAN. Minus the price of Xsan, of course.

So is this possible or does the storage have to be connected via Fibre?
 
Question for anyone who uses Xsan:
Apple has a diagram on their site where they show an Xsan network setup. They show an Xserve (5) connected to a Promise RAID (1) via a Fibre switch (2). Then some clients (6) are connected to that Xserve (5) to access the SAN. My question is, couldn't you just use this portion of the setup but replace the Promise RAID (1) and Fibre switch (2) with an external (or internal) hard drive? You'd connect that external (or internal) drive to the Xserve (5) or another Mac Server running Xsan. Then connect clients to that server (5) and tada!: you've got an affordable SAN. Minus the price of Xsan, of course.

So is this possible or does the storage have to be connected via Fibre?

Interesting question; I'd love to hear the answer on the possibility from someone who uses Xsan. My question for you, however, would be why would you? A Drobo connected to a server with this kind of infrastructure seems like kind of a waste when compared to the capabilities of a huge SAN. It's often forgotten that the throughput of your connection is limited by the spindles your SAN can spin (to write/read). This product is aimed at high-throughput SANs, and it seems like it would be a waste to share a 1TB external storage array.
 
I think you're referring to the Apple-endorsed Promise RAID, which replaced Apple's own Xserve RAID. The regular Xserve still only supports up to 3TB.

I've been doing some research on Xsan and SANs, in general. Xsan is freaking awesome. It supports up to 2PB!!

nuckinfutz mentioned iSCSI and I mentioned ATA over Ethernet earlier. I've also heard of a new IEEE 1394 interface update which would allow FireWire over Ethernet. I suppose these native updates to OS X would allow the non-Fibre user the ability to use Xsan. This would be awesome.

Question for anyone who uses Xsan:
Apple has a diagram on their site where they show an Xsan network setup. They show an Xserve (5) connected to a Promise RAID (1) via a Fibre switch (2). Then some clients (6) are connected to that Xserve (5) to access the SAN. My question is, couldn't you just use this portion of the setup but replace the Promise RAID (1) and Fibre switch (2) with an external (or internal) hard drive? You'd connect that external (or internal) drive to the Xserve (5) or another Mac Server running Xsan. Then connect clients to that server (5) and tada!: you've got an affordable SAN. Minus the price of Xsan, of course.

So is this possible or does the storage have to be connected via Fibre?


Doesn't look likely.
StorNext's closest solution is to have a cluster gateway give the "normal" LAN access to the files:
http://searchstorage.techtarget.com/news/article/0,289142,sid5_gci1249894,00.html


It's somewhat likely that there is no abstraction between the client's virtual-volume software and the raw Fibre Channel devices (i.e., the client is specifically looking for Fibre Channel addresses for the blocks it is trying to get).

I'm not sure you're quite getting what the diagram is presenting. The Xserve boxes (the ones next to the 3 and the 5) just hold the metadata associated with the files, not the data in the files themselves. When you say you want the file "foo.txt", the client asks those server(s) where its bits are stored on the SAN. The location is sent back, and the client goes directly to fetch those bits itself. So the data inside the file isn't being served up by those servers; it comes directly off the SAN devices.

So the metadata for a file records where in the SAN network the blocks associated with that file are stored (plus whether the file is locked, access dates, etc.). For Fibre Channel networks, that leverages the FC protocols for what those addresses look like:
http://en.wikipedia.org/wiki/Fibre_Channel_network_protocols

So unless they built an abstraction layer into the software to allow one to plug in other fabric protocols, they are probably stuck with something that is very FC-specific.

There are several cluster/shared file systems that are tied to specific networks:
http://en.wikipedia.org/wiki/Shared_disk_file_system

iSCSI has some aspects of Fibre Channel's addressing/security protocols, but Mac OS X (as delivered by Apple) is a bit lacking in iSCSI capabilities.


Fibre Channel 4GFC is simply faster than 1 Gb Ethernet (http://www.fibrechannel.org/OVERVIEW/Roadmap.html): 800 MBps throughput. You can play tricks with bonding multiple 1 Gb ports into a faster channel, but the same is true of FC. Eventually it looks like some combination of 10 Gb Ethernet and InfiniBand will kill off FC, but right now, if you need to move 1 TB files around, you'll likely run into FC.
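
Rough numbers, ignoring protocol overhead: a single 1 Gb Ethernet link tops out around 125 MB/s per direction, while 4GFC carries about 400 MB/s per direction (the 800 MBps figure counts both directions). So moving a 1 TB file takes roughly 1,000,000 MB / 400 MB/s ≈ 42 minutes over 4GFC, versus about 2.2 hours over a single GbE link.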

I'm a bit dubious about ATA over Ethernet as a cluster technology. ATA is a single-host-to-device protocol, and ATA drives usually get hammered in multiuser situations. Some of the complexity that FC and iSCSI have is there for managing large collections of devices. ATA forgoes security, authentication, authorization, global naming, data checksums, etc., which is fine for the 10 disks attached to just one box. But with hundreds of disks from different pools/servers, is that really going to scale up?

Although bonded Ethernet on a switch that supports it isn't likely to be routed anywhere if you really need performance, so I can see the point of forgoing TCP/IP. You can also get around TCP/IP with iSCSI over InfiniBand (which is as fast as or faster than Fibre Channel).
 
Apple doesn't care, at all, about the enterprise

It is nice to see Apple still cares about the enterprise markets. They still don't have a blade solution like all of their competitors, but an updated Xserve at least signals they want to stay in the market.

The Xserve is a low end 1U server - it's a joke in the enterprise market.

The Xserve is good for building a render farm with rack-mount systems for a shop using FCP and other Apple tools. Some universities and labs use them as well for grid computing - although Linux owns this area overall.

The enterprise wants support, and server families. 4 hour 24x7 onsite support is required for some, for others same day or next day is OK.

Families are important - let's go from dual-core to 24-core using the same parts, the same tools, the same management infrastructure, the same everything. From 1 PCIe slot to 24 PCIe slots, from 8 GiB of RAM to 192 GiB of RAM - but the same infrastructure.

It also has to be non-proprietary. Any SAN or iSCSI disk array should be usable, not just one particular model from one minor vendor. Any FibreChannel or 10GbE or InfiniBand or iSCSI card should work.

To put it simply - at the high end the Xserve is useful for small Apple workgroups that need a small number of servers for a compute farm. At the low end, it's a Mac Pro in a 1U box - good for a web server or file server.

Apple is now the iGadget company - the Xserve is the ugly duckling....
 
The Xserve is a low end 1U server - it's a joke in the enterprise market.

The Xserve is good for building a render farm with rack-mount systems for a shop using FCP and other Apple tools. Some universities and labs use them as well for grid computing - although Linux owns this area overall.
... .

As long as Apple can use them for its own server farms and to sell vertical workgroup servers it will probably limp along.

It's one thing for Pixar/Disney to use Suns/Dells/HPs/whatever as their render farm, and quite another for Apple to have to use Sun/Dell/HP as their grid nodes. I'm sure Apple uses other folks' boxes for internal ERP and enterprise software, but their grid datacenter seems likely to have a stack of Xserve boxes in it. When they get to the point that they don't care about their own grid running on other folks' stuff... I'd bet the Xserve would be on much weaker footing. (For instance, other folks' storage products are probably pervasive in their datacenters.)

That said, there are lots of businesses/groups that don't need to grow to monster size. If you are a 4-person partnership (a doctor's clinic, a small specialized law firm, etc.), then 1U could be sufficient for your "enterprise"; less than half a rack of equipment is all that's needed to run it. Mac OS X Server is handiest where the company is so small there is no IT department, just as there are no other departments.

Any SAN or iSCSI disk array should be usable, not just one particular model from one minor vendor. Any FibreChannel or 10GbE or InfiniBand or iSCSI card should work.

That's not even true of AIX or Solaris boxes... and yet they are enterprise-ready. That mindset seems oriented toward Windows/Linux being the only enterprise solutions because they are pervasive. Those too have their problems penetrating the enterprise at the highest levels.




The Apple server market is a bit of a catch-22. If it were much larger, there would be an argument for a more diverse set of servers, and Host Bus Adapter (HBA) vendors would be motivated to write Mac OS X-specific drivers for their cards. But as long as it stays small, many IT shops will avoid it because it doesn't have them.

The other factor is that Apple has shot its partners in the head multiple times: wiping out the hardware vendors with Apple Stores, throwing curveballs at the developers from time to time, etc.


It would also be another thing if technology from the servers were trickling down into the cheaper systems over time. However, that doesn't seem to be the case. The Mac Pros get tech updates just as fast as the servers and are more flexible. (If you need a 2U Mac server, you effectively buy a Mac Pro; it just doesn't fit into a standard rack.)
 
P.S. just noticed something else ...

My question is, couldn't you just use this portion of the setup but replace the Promise RAID (1) and Fibre switch (2) with an external (or internal) hard drive? You'd connect that external (or internal) drive to the Xserve (5) .... Then connect clients to that server (5) and tada!: you've got an affordable SAN. Minus the price of Xsan, of course.

For that internal/external connection to a server, with the clients connecting directly to it: if Mac OS X had an iSCSI target mode and finished-up ZFS support, you'd be done.

ZFS allows for huge files. Only it isn't a cluster file system, so you'd need to put something else on top for multiple clients to connect to.


You could export the files via NFS to the clients from a server that has a ZFS file system (at least you can on Solaris; I would hope that works on Mac OS too, but it remains to be seen whether it can export ZFS files well). NFS file sizes are limited by the local file system (or 64 bits, which should be plenty).
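
A minimal sketch, assuming a Solaris/OpenSolaris server and the stock NFS client in Mac OS X (the hostname and dataset names are made up):

    # on the Solaris server: share the ZFS filesystem read/write
    zfs set sharenfs=rw tank/projects
    # on the Mac client: mount it
    sudo mkdir -p /Volumes/projects
    sudo mount -t nfs server.local:/tank/projects /Volumes/projects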

I don't think Apple is going to be in a hurry to enable Mac OS X Server to be a large file server. For now, an OpenSolaris box is a better low-cost SAN/NAS server than Mac OS X (yet another reason the market for the Xserve is smaller than for other servers). [OpenSolaris examples:
http://www.nexenta.com/corp/index.php?option=com_content&task=blogsection&id=4&Itemid=128
http://www.sun.com/storage/disk_systems/unified_storage/index_v3.html

]
 
Interesting question; I'd love to hear the answer on the possibility from someone who uses Xsan.

I found a simple, straightforward answer on Apple's discussions page:

To access the storage directly, you need to pull fiber. Xsan requires fibre channel access to the storage, period. It won't go over IP; it's not iSCSI based.

You could always front-end the storage with a server, via AFP or NFS. But then you have a NAS, not native fiber channel storage.


This makes sense. Once you connect via Ethernet, you're IP. Until native iSCSI, ATA over Ethernet, or FireWire over Ethernet is in Mac OS X, connecting to a SAN storage front-end will just be another network share. I wish there were screenshots of the desktops in Apple's diagram; I think that would've given me instant comprehension. Thanks, everyone, for your explanations.

My question for you, however, would be why would you?

Time Machine, for one. Yes, you can back up via Time Capsule or a network share; however, this uses the sparse image backup method: Time Machine creates an automatically resizable sparse image file on the Time Machine share, mounts this image on your desktop, and stores the backup files within this image.

If you could connect to a hard drive as a SAN, it would act as a local drive on all of the Macs connected to that SAN, and the backup process would occur normally.

Both backup methods work; however, it seems preferable to back up via the locally attached (or SAN-connected) method. This is just one example of an application that behaves differently on a local drive (or SAN, presumably) vs. a network share.
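
For the curious, the widely circulated recipe for pointing Time Machine at an arbitrary network share goes like this (the size, hostname, and MAC address below are placeholders):

    # let Time Machine offer unsupported network volumes as destinations
    defaults write com.apple.systempreferences TMShowUnsupportedNetworkVolumes 1
    # pre-create the sparse bundle Time Machine would otherwise make on the share,
    # named <computername>_<en0 MAC address>.sparsebundle
    hdiutil create -size 300g -type SPARSEBUNDLE -fs HFS+J \
        -volname "Time Machine Backups" mymac_001122334455.sparsebundle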
 
You're thinking hardware. Think licensing. $999 unlimited vs. OS + CALs for Windows. *Huge* savings, especially for SMBs.

Good point. But not as funny. ;) You are right, of course. A good license trumps the cost of hardware by a mile.
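
Ballpark math: Mac OS X Server is $999 flat for unlimited clients, while Windows needs the server OS license plus a CAL per user or device. Even at a hypothetical $30-40 per CAL, a 100-user shop pays $3,000-4,000 in CALs alone on top of the OS license, and the gap only widens with headcount.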
 
I wonder how much of a speed boost Virginia Tech's System X would get if they replaced their G5 Xserves with Nehalem Xserves. Add in Snow Leopard and wow!

System X has dropped off the Top500 list (but is apparently still in production: http://www.checs.eng.vt.edu/resources.php). They now have System G (built with Mac Pros).

They already have Macs with 2008-era Intel processors. However, they run Linux (probably why Apple doesn't explicitly use them as an example when this came online, and perhaps why VA Tech's press info doesn't explicitly name the OS; however, assuming they filled out their Top500 submission form correctly...):

http://top500.org/system/9833

(It is also down to 279 on the Top500 list. It is striving to be "green", so perhaps that's not so bad.)

http://www.vaeng.com/feature/new-system-g-supercomputer-introduced


It is using InfiniBand for the interconnect... like most newer HPC clusters these days. Mac OS X didn't fit the bill: the exact same hardware with Linux drivers for fast connectivity was the driving force. [There was a single smaller vendor, SilverStorm, that sold an InfiniBand card with Mac OS X drivers, but they were acquired by QLogic and that combo disappeared.]

All the cute veneer of the Finder (Cover Flow for your files, etc.) doesn't really make a difference if nobody has a one-on-one relationship with that single machine; it isn't really the OS, just an application that runs on top. If Apple were reaching out and giving leading vendors incentives to put high-speed cards into their boxes, perhaps these clusters would be running OS X. No InfiniBand means being out of the HPC market.
 
I wouldn't say it's all that easy.

Monster Ars Technica ZFS NAS thread

The issue with Mac support is that you can use a free iSCSI initiator from GlobalSAN, but it can be a little flaky. The $$ initiators work better (like Xtend SAN from ATTO), but at a cost.

I'd be interested in knowing your hardware/software setup. Glad it's working well for you (especially the Time Machine part).
It really is all that easy; ZFS is seriously brainless when it comes to setting up volumes. GlobalSAN seems to behave fine for me, but I admit I have looked at the paid initiators. The only thing you really need to pay attention to when setting up Solaris or OpenSolaris is hardware compatibility, especially video.

ASUS A8N-SLI PREMIUM
Opteron 165 @ 2.7GHZ
4GB OCZ DDR2
6 x 1.5TB WD Green (raidz zfs storage pool)
2 x 320 GB WD RE2 (mirrored zfs boot)
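
A sketch of the pool setup for a box like that (the device IDs below are placeholders; the "format" command lists the real ones):

    # six-disk single-parity raidz pool on the 1.5 TB drives
    zpool create tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0
    # mirror the boot pool onto the second RE2 drive
    # (plus installgrub on the new disk so it stays bootable)
    zpool attach rpool c1t0d0s0 c1t1d0s0
    zpool status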
 
It really is all that easy; ZFS is seriously brainless when it comes to setting up volumes. GlobalSAN seems to behave fine for me, but I admit I have looked at the paid initiators. The only thing you really need to pay attention to when setting up Solaris or OpenSolaris is hardware compatibility, especially video.

ASUS A8N-SLI PREMIUM
Opteron 165 @ 2.7GHZ
4GB OCZ DDR2
6 x 1.5TB WD Green (raidz zfs storage pool)
2 x 320 GB WD RE2 (mirrored zfs boot)

Sweet! Thanks for this. I think as long as you stick to hardware that OpenSolaris works well with, half the battle is over.
 
Nope. They use generic Unix/Linux boxen.

I wouldn't be surprised if there were a single Xserve to deal with Apple-specific issues, but all the rendering and content libraries are on generic, cheap(er) server hardware.

Rocketman

cites:

http://blogs.computerworld.com/pixars_rendering_software_big_on_linux_servers_not_mac

http://infotech.indiatimes.com/articleshow/1141476.cms

Even though Pixar was running RenderMan on Linux, that doesn't necessarily mean it wasn't on Mac Pros or Xserves. Once those switched to Intel processors, they can run Linux just as well. [And if the point is primarily to funnel money into Apple's coffers... it doesn't have to run Mac OS X after Apple sells it. The money would be just as green either way.] Compared with similar Dells and HPs, the Xserves are competitively priced. If Pixar went with a more generic vendor or with blades, that would be a different ballgame.
The Xserve would make their machine room "look nice". If that were the criterion, it would likely win. :)


On the second part: IBM will run anybody's hardware if you pay them to do it. IBM Services is not IBM Hardware. They certainly like to double-dip, but they'll run other folks' stuff if that's what the job entails. I'd also doubt you want to outsource management of the render farm for your movies to a lowest-cost outsourcing vendor. The very custom, business-critical stuff is often best kept in-house.
 