Lots of people talk trash about RAID.

RAID 5 writes SUCK. I mean STINK BAD. Reads rock, but if you're writing files a lot, you'll pay.

You've been warned.

When using an Intel desktop chipset's RAID-5, you're right. That's because to write to a RAID-5, you have to calculate the parity data. Intel's desktop chipsets do this by sucking performance from the host processor, so writing to a RAID-5 slows your processor down. (Although any modern processor should be able to keep up with even the fastest SATA hard drive, so you won't be bottlenecking the hard drives.)

However, the Mac Pro (and the Xserve, for that matter) use a hardware RAID card for RAID-5. (OS X doesn't make RAID-5 available with the onboard SATA controller.) Hardware RAID cards contain dedicated chips for calculating the parity data. A 4-drive RAID-5 with a hardware RAID card can easily beat 'software' RAID 0+1, and should even come close to the same performance. (About 2.5x the read/write performance of a single drive.) In addition, the Mac Pro RAID card includes 256 MB of cache, so writes to the array are cached at full PCI Express speed, then written to the physical drives as fast as possible.
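
For anyone wondering what that parity calculation actually involves, here's a toy sketch (my own illustration, nothing specific to Apple's card): RAID-5 parity is just the XOR of the data blocks in a stripe, and XORing the surviving blocks with the parity rebuilds whatever was lost. A hardware card simply does this in a dedicated chip instead of on your CPU.

# Toy illustration of RAID-5 parity: parity = XOR of the data blocks in a stripe.
from functools import reduce

def xor_blocks(blocks):
    return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*blocks))

# A 4-drive RAID-5 stripe: three data blocks plus one parity block.
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# Pretend the second drive died; rebuild its block from the survivors plus parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
print("rebuilt block:", rebuilt)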
 
Alright, I can give you a pretty good data point: I JUST installed Tiger on RAID 0 using two Western Digital 250 GB 7200 RPM 16 MB cache hard drives.

Photoshop CS3 took around 20-25 seconds to load on a single drive before.

Now it loads in about 11 seconds.
 
When using an Intel desktop chipset's RAID-5, you're right. That's because to write to a RAID-5, you have to calculate the parity data. Intel's desktop chipsets do this by sucking performance from the host processor, so writing to a RAID-5 slows your processor down. (Although any modern processor should be able to keep up with even the fastest SATA hard drive, so you won't be bottlenecking the hard drives.)

However, the Mac Pro (and the Xserve, for that matter) use a hardware RAID card for RAID-5. (OS X doesn't make RAID-5 available with the onboard SATA controller.) Hardware RAID cards contain dedicated chips for calculating the parity data. A 4-drive RAID-5 with a hardware RAID card can easily beat 'software' RAID 0+1, and should even come close to the same performance. (About 2.5x the read/write performance of a single drive.) In addition, the Mac Pro RAID card includes 256 MB of cache, so writes to the array are cached at full PCI Express speed, then written to the physical drives as fast as possible.

Even hardware RAID-5 writes are still slow. Parity doesn't come cost-free. I've used hardware RAID-5 and RAID-6 solutions that are higher end than what the Mac has to offer, and writes are still a very noticeable weak spot. All I'm saying is do the research and match the setup to your needs before spending the cashola.
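
To put a rough number on why parity isn't free: the classic RAID-5 small-write penalty is that every small random write costs four disk operations (read old data, read old parity, write new data, write new parity). A back-of-the-envelope sketch, with made-up per-drive IOPS purely for illustration:

# Rough RAID-5 random-I/O estimate (illustrative, assumed numbers only).
drives = 4
iops_per_drive = 120      # assumed figure for a 7200 RPM SATA drive doing random I/O

read_iops = drives * iops_per_drive         # random reads are spread over all spindles
write_iops = drives * iops_per_drive / 4    # 4 ops per small write: read data, read parity,
                                            # write new data, write new parity
print(f"aggregate random reads : ~{read_iops:.0f} IOPS")
print(f"aggregate random writes: ~{write_iops:.0f} IOPS")

A big controller cache hides some of that for bursts, but sustained writes still pay the penalty, which is exactly the 'do the research first' point.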
 
When using an Intel desktop chipset's RAID-5, you're right. That's because to write to a RAID-5, you have to calculate the parity data. Intel's desktop chipsets do this by sucking performance from the host processor, so writing to a RAID-5 slows your processor down. (Although any modern processor should be able to keep up with even the fastest SATA hard drive, so you won't be bottlenecking the hard drives.)

However, the Mac Pro (and the Xserve, for that matter) use a hardware RAID card for RAID-5. (OS X doesn't make RAID-5 available with the onboard SATA controller.) Hardware RAID cards contain dedicated chips for calculating the parity data. A 4-drive RAID-5 with a hardware RAID card can easily beat 'software' RAID 0+1, and should even come close to the same performance. (About 2.5x the read/write performance of a single drive.) In addition, the Mac Pro RAID card includes 256 MB of cache, so writes to the array are cached at full PCI Express speed, then written to the physical drives as fast as possible.

I'm assuming that you mean it can easily come close to hardware RAID 0+1 speeds. Where does RAID 10 fall into this? I'm also assuming hardware 0+1 is faster than software 0+1.

I was going to ask this on another thread, but I will ask it here instead (since I think it's basically already been answered). Is there any real benefit to a hardware controller over software? I'm assuming there is, one reason being that it saves the processor from an additional load.

I'm currently trying to decide on a "working images" drive setup, one that will give me protection from drive failure and as much speed as possible, both read and write. This will of course be backed up to a JBOD enclosure.
I'm going to assume that if the RAID is hardware controlled, it is probably best to go with a RAID 5 (instead of 0+1 or 10), for the extra storage I will get.
EDIT: Maybe not, after reading the more recent post. Why can't I get any consistent info? I've heard it both ways now. Speed is most important to me.

Does the Apple RAID card work with external SATA enclosures? Are there any good external enclosures out there that have sufficient hardware RAID built in?
 
I have 3x 1 TB Samsung Spinpoint F1 drives, one on its own as the boot drive and the other two in RAID 0... I ran Blackmagic's Disk Speed Test and this is the result:

Non-RAID
Disk Read Rate 96.9 MB/s
Disk Write Rate 104.7 MB/s

RAID 0
Disk Read Rate 206.9 MB/s
Disk Write Rate 186.4 MB/s

And I do notice the difference big time... copying over 5 GB can take less than a minute, it just flies... they put my external FW800 drives to serious shame.
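
That lines up with the math, too - a quick sanity check, assuming the figures above are MB/s (which is what Disk Speed Test reports):

# How long should a 5 GB copy take at the measured write rates?
size_mb = 5 * 1024          # 5 GB expressed in MB
raid0_write = 186.4         # MB/s, from the RAID 0 result above
single_write = 104.7        # MB/s, from the single-drive result above

print(f"RAID 0      : {size_mb / raid0_write:.0f} s")    # roughly 27 seconds
print(f"single drive: {size_mb / single_write:.0f} s")   # roughly 49 seconds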
 
I have 3x 1 TB Samsung Spinpoint F1 drives, one on its own as the boot drive and the other two in RAID 0... I ran Blackmagic's Disk Speed Test and this is the result:

Non-RAID
Disk Read Rate 96.9 MB/s
Disk Write Rate 104.7 MB/s

RAID 0
Disk Read Rate 206.9 MB/s
Disk Write Rate 186.4 MB/s

And I do notice the difference big time... copying over 5 GB can take less than a minute, it just flies... they put my external FW800 drives to serious shame.

Man, this makes it VERY tempting to RAID my OS and USR files as well. But how scared should I be about the data security? More importantly, if I have two drives striped, will I still see the advantages of having my OS and USR files / Apps on separate drives (opening up the potential bottleneck of having only one SATA channel for virtual memory and applications)?
 
Lots of people talk trash about RAID.

RAID 5 writes SUCK. I mean STINK BAD. Reads rock, but if you're writing files a lot, you'll pay.

You've been warned.

That depends on quite a few factors. I have a Linux box with an Areca ARC-1220 RAID card. One volume is six 400 GB 7200 RPM disks in RAID 5.

> dd if=/dev/zero of=tstfile bs=4k count=1024k
1048576+0 records in
1048576+0 records out
4294967296 bytes (4.3 GB) copied, 24.0309 s, 179 MB/s

For my purposes, sustained 179 MB/s write speed is quite sufficient ;-) I'm curious how Apple's RAID card compares with comparable disks.
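
For reference, bs=4k count=1024k writes 1,048,576 blocks of 4 KiB, i.e. 4 GiB of zeros, and the rate dd prints is just size over elapsed time. Writing zeros with no sync flag can be flattered a bit by the controller and OS caches, so treat it as a best-case sequential figure.

# Reproduce the throughput figure dd reported above.
bytes_written = 4 * 1024 * 1024 * 1024    # bs=4k * count=1024k = 4 GiB
elapsed = 24.0309                         # seconds, from the dd output
print(f"{bytes_written / elapsed / 1e6:.0f} MB/s")    # ~179 MB/s (dd uses decimal MB)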

For those of you who do video editing on the new 8-core Macs, at what disk write speed do you transition from being I/O bound to CPU bound?
 
RAID 0 is asking for trouble in my opinion. The more drives in a RAID 0, the greater your mathematical odds of losing all of your data when a drive fails. And drive failures are inevitable given enough time.


Also, why the hell is it that you guys can carry on for two pages about RAID, but I can't get a decisive answer about alternative RAID controllers that will work in the Mac Pro? :rolleyes:
 
I have 3x 1 TB Samsung Spinpoint F1 drives, one on its own as the boot drive and the other two in RAID 0... I ran Blackmagic's Disk Speed Test and this is the result:

Non-RAID
Disk Read Rate 96.9 MB/s
Disk Write Rate 104.7 MB/s

RAID 0
Disk Read Rate 206.9 MB/s
Disk Write Rate 186.4 MB/s

And I do notice the difference big time... copying over 5 GB can take less than a minute, it just flies... they put my external FW800 drives to serious shame.

What block size did you use?
 
RAID 0 is asking for trouble in my opinion. The more drives in a RAID 0, the greater your mathematical odds of losing all of your data when a drive fails. And drive failures are inevitable given enough time.


Also, why the hell is it that you guys can carry on for two pages about RAID, but I can't get a decisive answer about alternative RAID controllers that will work in the Mac Pro? :rolleyes:

I've run RAID 0 on all of my PCs since 2000 and never had a drive failure; I've used Seagate and WD drives (Barracudas and Raptors). I did have an issue with a corrupt Windows install once. I was back up in 25 minutes, though, thanks to good backups and HDD images. So no, it's not asking for trouble. NOT backing up your data is asking for trouble; single drive or 35,000 drives in RAID 0, if you don't back up you're SOL.
 
RAID 0 is asking for trouble in my opinion. The more drives in a RAID 0, the greater your mathematical odds of losing all of your data when a drive fails. And drive failures are inevitable given enough time.

Your odds are not increased if you keep regular backups, which you should be doing anyway. So this argument holds no water.
 
Yeah, backups are dandy. The point is, who the hell wants to assume the extra risk of having the array collapse into oblivion in the first place? :rolleyes:

You seem to have some sort of misunderstanding of why people set up RAID arrays, especially striped arrays. It's about performance; your chances of encountering a drive failure are very slim to none, depending on the drive manufacturer. I've run RAID 0 for almost 9 years now on many PCs and never had any problems.

You seem to be trying to preach some bogus message here to those wanting to set up RAID 0 arrays. :rolleyes: Proper backups and good drives make RAID 0 as safe as having a single hard drive.
 
Well, I have another 750 GB Seagate Barracuda on the way to match the one I already have, both of which will be run in a software RAID 0 until I can afford a hardware controller.

However, I believe your assertion that a single drive is as safe as multiple drives could be mathematically incorrect. Four drives in one RAID 0 means roughly four times the probability that the array will be wiped out by a drive failure. The reputation of the drive manufacturer be damned; I intend to keep my Mac Pro forever, though I will probably retire it in three to five years - but I'll still keep it around. A drive failure is almost a certainty over such a length of time, and consequently I have little interest in seeing one drive sabotage all of the accumulated data - ever.
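
To put numbers behind that: if each drive independently has probability p of dying over some period, a stripe of n drives loses everything with probability 1 - (1 - p)^n, which for small p is roughly n times the single-drive risk. A sketch with an assumed failure probability, purely for illustration:

# Chance a RAID 0 array loses data, assuming independent drive failures.
def raid0_loss_probability(p_single, n_drives):
    return 1 - (1 - p_single) ** n_drives

p = 0.05    # assumed 5% chance that a given drive fails over the period you care about
for n in (1, 2, 4):
    print(f"{n} drive(s): {raid0_loss_probability(p, n):.1%} chance of losing the array")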
 
Well, I have another 750 GB Seagate Barracuda on the way to match the one I already have, both of which will be run in a software RAID 0 until I can afford a hardware controller.

However, I believe your assertion that a single drive is as safe as multiple drives could be mathematically incorrect. Four drives in one RAID 0 means roughly four times the probability that the array will be wiped out by a drive failure. The reputation of the drive manufacturer be damned; I intend to keep my Mac Pro forever, though I will probably retire it in three to five years - but I'll still keep it around. A drive failure is almost a certainty over such a length of time, and consequently I have little interest in seeing one drive sabotage all of the accumulated data - ever.

Mathematically, sure, but speaking from experience, no. I bought my two 36 GB WD Raptor drives in April of 2003 and they're still running in my file server. They have been running non-stop since that time, never any sort of drive failure. I have two Seagate Barracuda drives that are older than those WDs and they're still running too; they've also run day and night since late 2001 or so. So yes, while RAID 0 does have some risks, they're not much greater, if any (depends on the drive manufacturer), than with a single drive.

For speed + security, go with RAID 0+1. For performance on a budget, go with RAID 0 + backups.

BTW, awesome photographs on that blog!
 
Thanks (about the pics).

I have stuck with Seagates and Western Digitals mostly, and haven't suffered a drive failure yet, luckily enough. Could be a bit of paranoia, but I figure that with my external drives stored at another location, and eventually having an element of redundancy in the RAID configuration, it'll be an extra layer of protection. :cool:
 
Your odds are not increased if you keep regular backups, which you should be doing anyway. So this argument holds no water.

Mathematically, the odds do increase, so the argument does "hold water" ;)
aLoc is correct.
Doing regular backups does NOT change the math. I'm afraid you are wrong.
Having a backup just alleviates the pain ;)

I am quite surprised that this thread keeps going on and on - or even started in the first place - when you can do a simple search on Google and get all the "correct" answers! :confused:
 
Thanks (about the pics).

I have stuck with Seagates and Western Digitals mostly, and haven't suffered a drive failure yet, luckily enough. Could be a bit of paranoia, but I figure that with my external drives stored at another location, and eventually having an element of redundancy in the RAID configuration, it'll be an extra layer of protection. :cool:

My secret to no drive failures is finding RAID-class drives with MTBFs of 1,200,000 hours or better. My Seagates are 8 years old and are still nowhere close to the MTBF. Hopefully they last me a little longer, but I will probably retire them before they go bad.
 
Mathematically, the odds do increase, so the argument does "hold water" ;)
Doing regular backups does NOT change the math. I'm afraid you are wrong.

The odds of what? If you mean the odds of losing your data then no, they do not increase, because the odds no longer depend on the RAID but on the reliability of the backup.

If you mean the odds of the array as a whole failing then you are right, but big deal. The faulty HD would presumably have failed anyway if you had configured it as an individual drive. So you are doing a restore from backup no matter what. It will just be a bigger restore in the case of RAID.

So the question is, is the extra time to do the restore (something that will rarely happen) more than all the individual minutes you save every day from the faster disks? Probably not. So, purely on the numbers you should choose the RAID.
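
To make that trade-off concrete, here's a sketch with made-up numbers (the failure rate, restore time, and minutes saved are all assumptions you would plug in yourself):

# Expected yearly cost of a RAID 0 failure vs. yearly time saved by the speed-up.
p_array_fail_per_year = 0.10     # assumed chance the stripe dies in a given year
restore_hours = 4.0              # assumed time to rebuild the array and restore from backup
minutes_saved_per_day = 5.0      # assumed time saved daily by the faster disks

expected_loss = p_array_fail_per_year * restore_hours    # hours/year
expected_gain = minutes_saved_per_day * 365 / 60         # hours/year
print(f"expected restore cost: {expected_loss:.1f} h/year")
print(f"expected time saved  : {expected_gain:.1f} h/year")

With anything like those numbers, the daily savings dwarf the occasional restore.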
 
My secret to no drive failures is finding RAID-class drives with MTBFs of 1,200,000 hours or better. My Seagates are 8 years old and are still nowhere close to the MTBF. Hopefully they last me a little longer, but I will probably retire them before they go bad.

http://www.storagereview.com/guide2000/ref/hdd/perf/qual/specMTBF.html

Read about MTBF. Your drives will not last 137 years as the MTBF would suggest. A higher MTBF does indicate a more reliable drive, but it doesn't translate to a single drive's lifespan.

Think of it more like this: if you had 100 drives running all day, every day, the expected time to the first drive failure would be about 1.37 years. That's not to say that all of the drives would then start dropping dead; it only estimates when the first failure on any drive is likely to happen.
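
That 1.37-year figure falls straight out of the spec-sheet arithmetic (a quick sketch, treating MTBF the way the spec intends, as a population failure rate rather than a per-drive lifespan):

# MTBF as a fleet statistic, not a single-drive lifespan.
mtbf_hours = 1_200_000
fleet_size = 100
hours_per_year = 24 * 365

# Expected time until the first failure somewhere in the fleet:
print(f"first failure in fleet: ~{mtbf_hours / fleet_size / hours_per_year:.2f} years")

# The same spec read naively as one drive's 'lifespan' gives an absurd number:
print(f"naive single-drive read: ~{mtbf_hours / hours_per_year:.0f} years")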

On new drives, the MTBF isn't even statistically derived from those particular units. It's inferred from the MTBF stats for other similar units because the drive hasn't been around long enough to conduct meaningful tests.
 
http://www.storagereview.com/guide2000/ref/hdd/perf/qual/specMTBF.html

Read about MTBF. Your drives will not last 137 years as the MTBF would suggest. A higher MTBF does indicate a more reliable drive, but it doesn't translate to a single drive's lifespan.

Think of it more like this: if you had 100 drives running all day, every day, the expected time to the first drive failure would be about 1.37 years. That's not to say that all of the drives would then start dropping dead; it only estimates when the first failure on any drive is likely to happen.

On new drives, the MTBF isn't even statistically derived from those particular units. It's inferred from the MTBF stats for other similar units because the drive hasn't been around long enough to conduct meaningful tests.

Right, they won't last forever, but they will outlast most desktop drives.
 
Right, they won't last forever, but they will outlast most desktop drives.

You are absolutely correct. On average, the enterprise drive will outlast the desktop drive. The way they market them implies that they are bulletproof, so when you said "my secret to no drive failures", a little mathematician in my head said "I hope he means 'reduced drive failures'" and I was concerned that you had fallen into the marketers' trap.

That slightly misleading specification is why the occasional reviewer puts up an angry message on NewEgg talking about how his enterprise drive failed before his desktop drive and how "such and such brand is a bunch of lying cheats".
 
You are absolutely correct. On average, the enterprise drive will outlast the desktop drive. The way they market them implies that they are bulletproof, so when you said "my secret to no drive failures", a little mathematician in my head said "I hope he means 'reduced drive failures'" and I was concerned that you had fallen into the marketers' trap.

That slightly misleading specification is why the occasional reviewer puts up an angry message on NewEgg talking about how his enterprise drive failed before his desktop drive and how "such and such brand is a bunch of lying cheats".

That was an exaggeration, but I do believe that my drives have lasted this long due to most of them being enterprise-level vs. consumer-level drives. They obviously will not last 50 years - that just wouldn't be practical for the manufacturers - but they should last 5-6 years easily. From what I've seen, anyway.
 
I'm probably going to order a Caviar SE16 750. Any thoughts on it?
 