But the more I think about it, booting from the RAID is not really very important.
That's something that's always baffled me too. Your boot drive shouldn't really hold much more than the OS and apps, and is better served by mirrored internal drives. Striped Raptor boot drives are for those dorks who want their system to boot 15 seconds faster once a week and their five-hour session with Final Cut Pro to be 3 seconds shorter, and who don't care about cutting their MTBF in half. It's penny-wise and pound-foolish.

The areas where you're storing video and other data files should be on different file systems than you boot from. Since we've only got four disk slots in a Mac Pro, that means for anything non-trivial, they're going to be on an external array not used for booting.
 
"Yay, thanks Apple for leading us to Norco Technologies Inc in Santa Fe Springs CA and Areca in Taiwan and Fremont CA. Just what I wanted!"

Also, check out the 3ware Sidecar:

http://www.3ware.com/products/Ext_serial_ata2-9000.asp


PCIe SATA II hardware RAID controller, external InfiniBand connector, and a 4-drive external SATA cabinet. The controller can do RAID 0, 1, 5, 6, and 10 - though with only 4 drives, RAID 6 and RAID 10 give you essentially the same usable capacity.

Does the Apple card support 12 drives like the one above?

What kind of hole do you need to cut in your PowerMac to get those three InfiniBand cables out to the disk drives? :eek:
 
If it can sustain 300 MB/s then it is LIGHTYEARS ahead of your software RAID solution and more than twice what I have seen anything eSATA do.
What are you talking about? barefeats consistently gets 295+ MB/s from software RAID 0 arrays using four disks inside the Mac Pro. Until an independent third party reproduces 304 MB/s, my money says we'll see no measurable performance advantage over software RAID 0, based on Apple's history of exaggerating results.
 
What kind of hole do you need to cut in your PowerMac to get those three InfiniBand cables out to the disk drives? :eek:

Step 1: remove a PCI slot cover from one of the unused slots
Step 2: Pull cable through vacant slot
Step 3: Laugh at Aiden Shaw

------------------

Good call on buying the thing, Multimedia :)

As I was reading that review, I found the whole setup really enticing. You really need to let us know how much you end up piling in there.

That is like my all time dream setup right there. I don't even do video stuff anymore (hell, I'm retiring from computer work for good in 2 weeks).

That's just cool. You GOTTA do RAID 6 though, holy crap that'd be cool.
 
Has ANYONE seen ANY posts ANYWHERE asking Apple for a new RAID card for the Mac Pro? I mean, what were they thinking?!?!
I have yet to see posts on any forums where people are saying "Yay, thanks Apple. Just what I wanted!".
I think they are completely out of touch with their customers.

oh yeah! what were they thinking? wild-bill won't need one, so nobody else needs one. and wild-bill is so close to apple customer support that he knows exactly what the customers need...

--- jeeezus. gimme a break. won't that ever stop? apple can produce whatever they want. if you don't need it, you might not be the target audience. ever thought of that? guess what, they sell things to earn money, and if they see that there's a market for RAID PCI cards, they'll produce them. apple still has a very loyal pro audience that didn't start with an ipod as their first apple product.
 
You GOTTA do RAID 6 though, holy crap that'd be cool.

RAID 50 would probably be a better idea, and use two hot spares.

Unless performance (especially write performance) is important, then RAID 10 or RAID 0 depending on whether you'd mind losing all your data.

Performance or reliability - unfortunately you have to choose....

And of course get the BBM (battery backup module) - a UPS is not the same thing.
 
RAID 50 would probably be a better idea, and use two hot spares.
That depends on how quickly you can get replacements and what kind of read performance or storage size you're looking for. RAID 50 with two hot spares wastes 1/3 of your disks if you've got a 12 drive chassis. If you keep cold spares around, RAID 5 gives better performance and even RAID 6 only wastes 1/6.
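To make that 1/3-versus-1/6 comparison concrete, here's a quick sketch (the 1 TB drive size is an assumption; only the ratios matter):

```python
# Usable capacity on a 12-drive chassis; 1 TB per drive is just an
# assumption to make the ratios concrete.
DRIVES, SIZE_TB = 12, 1.0

def usable(total, parity, spares=0):
    # capacity left after parity drives and hot spares are set aside
    return (total - parity - spares) * SIZE_TB

raid50 = usable(DRIVES, parity=2, spares=2)   # two 5-drive RAID-5 sets + 2 spares
raid6  = usable(DRIVES, parity=2)             # one big RAID-6 set
raid5  = usable(DRIVES, parity=1)             # one big RAID-5 set

for name, cap in [("RAID 50 + 2 spares", raid50), ("RAID 6", raid6), ("RAID 5", raid5)]:
    print(f"{name}: {cap:.0f} TB usable, {1 - cap / (DRIVES * SIZE_TB):.0%} overhead")
```

That works out to 8 TB usable (1/3 overhead) for the RAID 50 setup versus 10 TB (1/6) for RAID 6 and 11 TB (1/12) for RAID 5.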

Unless performance (especially write performance) is important, then RAID 10 or RAID 0 depending on whether you'd mind losing all your data.

Performance or reliability - unfortunately you have to choose....
The answer isn't quite this simple because you're grouping read and write together. You can't.

Assuming more than four or five disks (say eight) and a workload that heavily favors reads over writes, both RAID 5 and RAID 6 have a good chance of outperforming RAID 10. As the number of disks and/or the proportion of reads increases, the difference becomes sizeable. With fewer disks or more writes, the reverse is true.
 
Has ANYONE seen ANY posts ANYWHERE asking Apple for a new RAID card for the Mac Pro? I mean, what were they thinking?!?!

I have yet to see posts on any forums where people are saying "Yay, thanks Apple. Just what I wanted!".

I think they are completely out of touch with their customers.
Are you kidding, RAID is EXACTLY what the MacPro needs. It should have ALWAYS been an option. I have RAID cards in all my machines at home, from desktops to fileservers. It's pretty damn convenient when a disk drive craps out. Not to mention the performance benefits from some of the higher RAID levels - like 10.
 
RAID 50 would probably be a better idea, and use two hot spares.

Unless performance (especially write performance) is important, then RAID 10 or RAID 0 depending on whether you'd mind losing all your data.

Performance or reliability - unfortunately you have to choose....

And of course get the BBM - a UPS is not the same thing.

I had to look up RAID 50 (though now I can see it was self-explanatory)

With two RAID-5 sets, I'm trying to figure out how this is better than a RAID-6

I can see it being better with 3 or more RAID-5 sets, but with smaller numbers of drives RAID 6 seems to have the same performance and is possibly even more reliable.

If 1 drive dies in a RAID 5, the performance of the whole thing goes down until you rebuild.

If 1 drive dies in a RAID 6, the performance stays the same.

If two drives die in the same RAID 5 set in a RAID 50, you lose all your data.

If two drives die in a RAID 6 set, everything is fine but performance will be hindered until the drive is replaced and the replacement is rebuilt.

So unless you're dealing with tons and tons of drives, RAID 50 doesn't seem like a better option.

The reason you might want RAID 50 for, say, 100 drives is that you could have up to 25 parity drives across the stripe. The likelihood of 2 drives failing in a small RAID-5 set would be almost zero, so composing your whole array of 3- or 4-drive RAID-5 sets would be good.
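For what it's worth, here's a rough sketch of that "almost zero" claim. The 3% annual failure rate and 1-day rebuild window are made-up numbers, and it treats failures as independent (which real drives from the same batch often aren't):

```python
# Rough odds that a second drive in the same RAID-5 set fails during a
# rebuild. AFR and rebuild window are assumptions; failures are treated
# as independent.
AFR = 0.03            # assumed 3% annual failure rate per drive
REBUILD_DAYS = 1.0    # assumed time to rebuild onto a replacement

def p_second_failure(drives_in_set):
    p_one = AFR * REBUILD_DAYS / 365          # per-drive chance during rebuild
    survivors = drives_in_set - 1
    return 1 - (1 - p_one) ** survivors       # chance any survivor fails

for n in (4, 6, 12):
    print(f"{n}-drive set: {p_second_failure(n):.4%} per rebuild")
```

The per-rebuild numbers come out tiny, which is the point being made, but note they grow with set size, and correlated failures or rebuild stress can push them far higher.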

I'm only going through all this so you can correct me where I'm wrong, as I seem to have no idea why RAID50 is better than 6 for, say, a 12 drive set.
 
Are you kidding, RAID is EXACTLY what the MacPro needs. It should have ALWAYS been an option. I have RAID cards in all my machines at home, from desktops to fileservers. It's pretty damn convenient when a disk drive craps out. Not to mention the performance benefits from some of the higher RAID levels - like 10.

Yes, but APPLE didn't need to make a RAID card. There are plenty out there that are better AND cheaper than this one.

The market has enough options for most users out there.

This new card is only for people who care WAY too much about the looks of their computer to have an external array.
 
I had to look up RAID 50 (though now I can see it was self-explanatory)

With two RAID-5 sets, I'm trying to figure out how this is better than a RAID-6

"RAID 50" is a stripe set (RAID-0) of two or more RAID-5 arrays.

Assume 12 drives in the RAID-6, and 6 drives in each RAID-5.

If you write one sector you need to:

RAID-6: Issue 9 reads and 3 writes
RAID-50: Issue 4 reads and 2 writes

RAID 5&6 write performance drops as the number of drives in a set increases - because more reads may be needed.
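Those I/O counts fall out of the "reconstruct write" strategy (read every other data sector in the stripe, then recompute parity). A quick sketch of the arithmetic - note that many controllers instead use a read-modify-write (read the old data plus old parity), which costs a constant number of I/Os regardless of set size:

```python
# I/O counts for a single-sector write under the "reconstruct write"
# strategy: read every other data sector in the stripe, recompute
# parity, then write the data sector and each parity sector.
def reconstruct_write_io(total_drives, parity_drives):
    data_drives = total_drives - parity_drives
    reads = data_drives - 1            # the other data sectors in the stripe
    writes = 1 + parity_drives         # the new data sector plus each parity
    return reads, writes

print(reconstruct_write_io(12, 2))     # 12-drive RAID-6 -> (9, 3)
print(reconstruct_write_io(6, 1))      # 6-drive RAID-5 set -> (4, 2)
```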


The likelihood of 2 drives failing in a small RAID-5 set would be almost 0, so having your whole array composed of 3-4-drive RAID-5's would be good.

This assumes that drive failures are independent.

Unfortunately, there's lots of evidence that the drives in a particular production batch will have similar failure characteristics. If one dies, the chances are greater that drives with nearby serial numbers will also die.

And what happens in a RAID set with a failed drive? Lots more reads (and stress on the other drives). What happens when you replace the failed drive? Lots more reads (and stress on the other drives).

It is unfortunately not that unusual for a second drive to fail during a rebuild. RAID-50 and RAID-60 with hot spares are the best insurance that you can find today - if you really don't want to lose your data.
_______________________

So, the combination of poorer write performance and increased exposure to multiple-disk-failure scenarios makes smaller arrays and RAID-10/50/60 a wiser choice than RAID-5 or RAID-6 volumes with many spindles.
 
RAID 5&6 write performance drops as the number of drives in a set increases - because more reads may be needed.



EDIT:
OK apparently it's just write access time that goes up the more drives you have. I didn't know that.

However, throughput on read/write also goes up the more drives you have.

So the RAID 50 is good because you can get better access time.
 
EDIT:
OK apparently it's just write access time that goes up the more drives you have. I didn't know that.

However, throughput on read/write also goes up the more drives you have.

So the RAID 50 is good because you can get better access time.

Write performance is the reason to get the battery backup.

The killer on RAID with parity write performance is the need to read n-2 disks in order to calculate the parity before doing the two (or more) writes (the actual data, and the parity).

With the battery backup, the controller can cache the write data, and return "OK" immediately. If, as is the case with long sequential writes, you end up writing an entire "stripe" -- then the controller can generate the parity from the data in the cache, and write all "n" disks without needing to do *any* reads.

If you don't write the stripe within a reasonable time, the controller can schedule a background task to do the reads - and once the stripe is in cache it can calculate the parity and do the writes.

Without battery backup in the controller cache itself, a power hiccup or reboot can lose these cached writes - and your data is gone.
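A tiny sketch of why the full-stripe case needs no reads: RAID-5 parity is just the XOR of the data chunks, so once the whole stripe is in cache the parity falls out for free:

```python
from functools import reduce

# With a full stripe in cache, RAID-5 parity is the byte-wise XOR of the
# data chunks: no reads from disk are needed before writing the stripe.
def stripe_parity(chunks):
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), chunks)

data = [b"\x01\x02", b"\x04\x08", b"\x10\x20"]   # three cached data chunks
parity = stripe_parity(data)                     # bytes 0x15, 0x2a

# The same XOR also rebuilds a lost chunk from the parity and survivors.
assert stripe_parity([parity, data[0], data[1]]) == data[2]
```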
 
Write performance is the reason to get the battery backup.

So if you had a battery, RAID 6 write access time would be unaffected by the number of drives.

Interesting, but I think we're now back to square 1: Why is RAID 50 with 12 drives better than RAID 6 even with a battery?

I guess there's that issue you raised about possible drive failure while rebuilding the array, but then wouldn't that be a problem no matter what?

You said the hot spare could be used to prevent this (I'm not sure how). Couldn't a hot spare be used with a RAID 6?

On a 12 Drive RAID, I'd only expect maybe 1 drive to go down at a time, maybe two. In the case of 1 drive going down, RAID 6 would be better because it wouldn't have performance loss prior to repairing the array. In the case of 2 drives going down, you could still rebuild.

Also, if a drive went down in a RAID 6, and ANOTHER went down while rebuilding, your data would STILL be intact. This would not be true in a RAID 5 (because the second failure would necessarily hit one of the drives the rebuild depends on).

With a RAID 50, you would pretty much have to rebuild immediately after the first drive failed as performance would be way down. Also, if 2 drives failed in the same RAID-5 sub-array, you would lose all your data.
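To quantify that last point, here's a quick enumeration of every possible 2-drive failure in a 12-drive chassis, comparing RAID-6 (which survives any two) against a hypothetical RAID-50 built from two 6-drive RAID-5 sets:

```python
from itertools import combinations

# 12 drives: RAID-6 survives any two failures; this RAID-50 (two 6-drive
# RAID-5 sets, drives 0-5 and 6-11) loses data whenever both failures
# land in the same set.
def raid50_loses(failed):
    sets = [set(range(0, 6)), set(range(6, 12))]
    return any(len(s & set(failed)) >= 2 for s in sets)

pairs = list(combinations(range(12), 2))       # all 66 two-drive failures
fatal = sum(raid50_loses(p) for p in pairs)
print(f"RAID-6: 0/{len(pairs)} fatal combinations; RAID-50: {fatal}/{len(pairs)}")
```

Roughly 45% of the two-drive failure combinations (30 of 66) would take out this RAID-50, while RAID-6 shrugs off all of them.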
 
So if you had a battery, RAID 6 write access time would be unaffected by the number of drives.

Contiguous sequential writes would be unaffected, other writes would not be helped (meta-data writes, filesystem log, directory updates, small or non-contiguous files,...)


I guess there's that issue you raised about possible drive failure while rebuilding the array, but then wouldn't that be a problem no matter what?

The probability of additional failures increases greatly as you add drives to the set. You not only have more drives, but the rebuild time increases - compounding your exposure to multiple failures.


You said the hot spare could be used to prevent this (I'm not sure how). Couldn't a hot spare be used with a RAID 6?

A hot spare means that the time between a disk failure and the start of rebuild approaches zero. Cold spares mean that the human has to notice the failure and take action.


On a 12 Drive RAID, I'd only expect maybe 1 drive to go down at a time, maybe two.

You'd be right most of the time, the rest of the time you'd lose all your data. :eek:

If you get related failures, the chances are much greater that you'll lose data. Unfortunately, buying a number of drives at the same time (nearby serial numbers) greatly increases your chance that you'll get related failures.

There's also the issue of "infant mortality" with disk drives. New drives fail at a much higher rate than drives after a few months of use (because some manufacturing defects cause problems right away).

Did you ever wonder why the XServe RAID is two mostly independent 7-drive controllers? It's simply that a 14-drive array would be a bad idea, and Apple doesn't want to give you the rope to hang yourself. (Apple recommends software RAID-50 to see all 14 drives as one array.)

In the case of 1 drive going down, RAID 6 would be better because it wouldn't have performance loss prior to repairing the array. In the case of 2 drives going down, you could still rebuild.

Also, if a drive went down in a RAID 6, and ANOTHER went down while rebuilding, your data would STILL be intact. This would not be true in a RAID 5 (because the second failure would necessarily hit one of the drives the rebuild depends on).

With a RAID 50, you would pretty much have to rebuild immediately after the first drive failed as performance would be way down. Also, if 2 drives failed in the same RAID-5 sub-array, you would lose all your data.

These are valid points, it might be wiser to use RAID-60 than RAID-50.

Or be really paranoid, and do RAID-66 (RAID-6 where each "disk" is actually a RAID-6 array) or RAID-61 (a RAID-1 mirror of RAID-6 arrays). Not a joke - high end storage systems actually use additional layers of RAID....
__________

Here are a couple of links to papers and stories on drive failure rates. I hope that these scare you. ;)

Disk drive failures 15 times what vendors say, study says
Disk failures in the real world: What does an MTTF of 1,000,000 hours mean to you?
Understanding and Coping with Failures in Large-Scale Storage Systems, Technical Report UCSC-SSRC-07-06, May 2007. [Petabyte-Scale Object-Based Storage]
 