:rolleyes: When increased data security is required, administrators face some tough decisions. A RAID 1 http://en.wikipedia.org/wiki/Standard_RAID_levels#RAID_1 (mirroring) setup is the simplest solution in many cases. It is never a cut-and-dried decision, however, since RAID 1 offers few speed benefits over a single drive, and sacrificing half of the available disk space may not prove universally popular.

An increasing number of decision-makers in smaller organizations are thus going down the RAID 5 http://en.wikipedia.org/wiki/Standard_RAID_levels#RAID_5 road using ATA hard disks. The reasoning is that ATA hard disks are still considerably cheaper than their SCSI http://en.wikipedia.org/wiki/SCSI equivalents, the controllers are also reasonably priced, and performance is often not far behind that of the considerably more expensive SCSI solutions.

We look at this attractive new mid-priced RAID category and subject five representative products to our usual battery of lab tests: the 2410SA from Adaptec http://en.wikipedia.org/wiki/Adaptec , HighPoint's RocketRAID 1640, the ICP Vortex http://en.wikipedia.org/wiki/Vortex 8545RZ, the MegaRAID 150-4 from LSI Logic http://en.wikipedia.org/wiki/LSI_Corporation , and the FastTrak S150 SX4 from Promise.
RAID 5: Why And For Whom?

Unlike the now well-established RAID levels http://en.wikipedia.org/wiki/Standard_RAID_levels 0 and 1, which offer either faster performance or increased security in the event of hard disk failure, RAID 5 offers both performance and security benefits. RAID 5 needs some powerful logic processing capabilities to control the simultaneous operation of several hard disks and to write data and parity information across all disks in the stripe set. The latter function does not require a particularly complex architecture, but as transfer rates increase, the process of writing parity data on the fly increases CPU overhead accordingly.

We must differentiate here between software RAID, as employed by HighPoint, and hardware RAID, in which a RISC processor carries out all processor-intensive operations. Promise has included its own XOR processor in its controller; Adaptec, ICP and LSI Logic prefer to rely on tried and true chips from Intel.

To write data and parity blocks across all the drives in the array, a RAID 5 setup requires a minimum of three hard drives. The "wasted" disk space in that case is 33%, which is still rather high. The share of space devoted to parity decreases as drives are added, although the chance of a drive failing somewhere in the array increases. The worst-case scenario is, of course, two hard disks failing at the same time.
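To make the parity idea concrete, here is a toy sketch (my own Python, not from the article) of what lets a controller survive a failed drive: the parity block is simply the XOR of the data blocks in the stripe, so XOR-ing the surviving blocks regenerates whatever went missing.

```python
# Toy RAID 5 parity demo (not a real controller): with n drives, each
# stripe holds n-1 data blocks plus one parity block that is the XOR
# of the data blocks. Any single lost block can be rebuilt by XOR-ing
# the surviving blocks of the stripe.

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

# Three "drives": two data blocks and their parity block.
data1 = b"hello world 1234"
data2 = b"raid five parity"
parity = xor_blocks([data1, data2])

# Simulate losing the second drive: rebuild its block from the others.
rebuilt = xor_blocks([data1, parity])
print(rebuilt == data2)  # True
```

This is also why rebuilds are expensive: regenerating one dead drive means reading every block of every surviving drive.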

We can draw the following conclusions from the above: With four drives, you "sacrifice" ¼ of the available storage space, while maintaining a low likelihood of the worst-case scenario taking place. And a four-drive RAID 5 array is not restricted by the performance limits of the PCI bus with its 133 MB/s (32 bit, 33 MHz).
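For reference, the 133 MB/s figure is just the bus width times the clock rate (a quick back-of-the-envelope calculation, not from the article):

```python
# Peak bandwidth of classic 32-bit / 33 MHz PCI: one 4-byte transfer
# per clock cycle at best.
bus_width_bytes = 32 // 8        # 4 bytes per transfer
clock_hz = 33_330_000            # nominally 33.33 MHz
peak = bus_width_bytes * clock_hz
print(round(peak / 1_000_000))   # ~133 (MB/s)
```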

It is worth mentioning here that a RAID 5 array is hardly ever a viable solution for a desktop PC. Even with workstations, RAID 0+1 (striping + mirroring) or RAID 10 (mirroring + striping) remain the superior and fastest solutions, even though they give up more of your available hard disk capacity to redundancy.
TECHNICAL: RAID Level 5 stripes data at the block level across several drives and distributes parity among them. No single disk is devoted to parity, which avoids the dedicated-parity-disk bottleneck and can speed small writes in multiprocessing systems. Reads are served from all drives, but every small write pays a read-modify-write penalty, since the matching parity block must be recalculated and rewritten.

The usable storage is (n - 1)/n of the total in the disk array: about 75% with four drives, 80% with five. In other words, the storage penalty for redundancy is exactly one drive's worth of capacity. If one disk fails, the complete data set can be rebuilt, so no data is lost; if more than one drive fails, all the stored data is lost. This gives a fairly low cost per megabyte while still retaining redundancy.

An easy formula: usable space = (number of drives - 1) x drive size. Four 120GB drives == 360GB of storage; five 120GB drives == 480GB. You must have a minimum of 3 drives in a RAID 5 configuration; i.e., you lose one drive's worth of space.
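That formula in a couple of lines of Python (my own sketch; the drive sizes are just the examples above):

```python
# RAID 5 usable capacity: one drive's worth of space goes to parity,
# no matter how many drives are in the array (minimum of 3).
def raid5_usable_gb(num_drives, drive_gb):
    if num_drives < 3:
        raise ValueError("RAID 5 needs at least 3 drives")
    return (num_drives - 1) * drive_gb

print(raid5_usable_gb(4, 120))  # 360
print(raid5_usable_gb(5, 120))  # 480
```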
 
Nice post. I've been giving this quite a bit of thought, so I'll add a few notes.

If you include the cost of a RAID5-capable card, the $/GB changes quite a bit. The card alone (from Apple) is $800 (if I recall...). You can get very good 1TB drives for about $225. A RAID01 system is faster than a RAID5 system and, despite its 50% drive-space penalty, is safe, fast, and inexpensive.

Furthermore, once you have a giant RAID5 system, it can be difficult to migrate to new systems. Say you have a 4TB RAID5 system and in 5 years you need to migrate that data to another system so you can expand the capacity of your current one. Well, you need an equally large system to migrate to! Whereas with RAID01 you can simply back up to the new drives, set up the RAID01 again, and you are good to go.

Maybe I'm wrong, and feel free to convince me otherwise, but I feel like RAID5 is oversold. I know it is widely used in enterprise settings, but I don't think that is because of the $/TB; it is probably because it reduces the amount of time techs have to spend screwing with the arrays. Thoughts?
 
The Apple RAID card is expensive because it's A) a SCSI solution, and B) an Apple upgrade with an Apple markup. Other cards are not nearly as expensive, though I do not know their compatibility off hand. I run a little RAID0 striped pair on a high-end PC using on-board hardware, and have used software RAID in the past.

The benefit works as advertised and is extremely useful for speed, redundancy, or both. The only problem I could cite is that rebuilding an array from parity data takes forever.
 
I don't doubt that RAID5 works as advertised, but I'm not convinced that the $/TB argument is a good one, simply because of the added cost of the RAID card.
 
The cost/GB isn't particularly good, but it sure beats RAID1 obviously. If you need it, you need it.
 
As an enterprise user of RAID5 systems, I can confirm that it is a beast for performance if a good HW controller is used. We use it on large enterprise-scale database servers and it really does kick arse for read/write speeds.

I also use RAID5 in my NAS (4x1TB disks; after formatting and redundancy = 2.72TB) and it's very good. The array is only a SW one (using BusyBox in the NAS's OS), but it performs well, and I am happy that I have 2.72TB that can survive a disk failure.
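Quick sanity check on that 2.72TB figure (my own Python sketch, nothing from the NAS itself): three drives' worth of decimal terabytes reported in binary units comes out at about 2.73TiB, and the last sliver goes to filesystem overhead.

```python
# RAID 5 on 4 x 1 TB leaves 3 drives' worth of data space.
# Drives are sold in decimal TB (10**12 bytes), but most OSes report
# binary TiB (2**40 bytes), which is where the "missing" space goes.
usable_bytes = 3 * 10**12          # (4 - 1) drives x 1 TB
usable_tib = usable_bytes / 2**40  # convert to binary tebibytes
print(round(usable_tib, 2))        # 2.73
```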

If you want the ultimate in performance, I guess a pure striped array is good, but redundancy is nil. I used to use RAID10 in my old Windows system (4x250GB Seagates giving two striped/mirrored arrays with 500GB total space).

This was good for me, as I needed the redundancy of the mirror but the uber-performance of the stripe set. (I was using an Adaptec HostRAID controller on PCI-X, as at the time I couldn't afford a HW controller.) It still performed great for video capture.

So what am I actually saying here? RAID5 is good as long as you use a hardware controller (not the onboard or HostRAID kind supplied on nearly every PC motherboard and on RAID cards under £200 or so).
 
:rolleyes: When increased data security is required, administrators face some tough decisions....
Nice post. ... Detailed post Play4keeps!
It is customary, when cutting and pasting wholesale from another site, to credit the source -- in this case Tom's Hardware.

Passing off copy and paste as your own post is not honorable play.

(If you are Patrick Schmidt, the original author of the Tom's Hardware article from 2004, I apologise)
 