
aponsin

macrumors member
Original poster
Mar 28, 2009
I am buying a Mac Pro next month, and I am thinking about how I should configure the drives.
- I already bought the new Intel X25-M 160GB SSD (34nm) as a system disk, to go inside optical bay #2
- For the rest, what I need is two partitions:
- one as fast as possible, with no need for data security (Photoshop scratch)
- one fast and with good data security (storage/work space for big, fast ~1GB photo files)

I have been exploring the following options, with these prerequisites:
- I can't afford hardware RAID for now
- I can't afford the 74GB WD Raptor for now
- I don't need that much space (50GB scratch + 500GB storage/work)
- I am getting an outrageously good deal on a set of four barely used Hitachi 1TB 7,200rpm drives

So considering all this, I thought about:
- simple RAID 5 on 4 drives with 2 partitions => same speed on both, though overall write speed may not be optimal? Great total space (3TB)
- RAID 1+0 with 4 drives => good overall speed, good overall data security, can set up a small scratch partition; a rather big loss of capacity (2TB usable), but that's OK

I also considered these, but they don't hold up:
- RAID 0 with 3 drives + a standalone 4th => great speed on scratch, but scratch is way too big (3TB), and no speed or data-security gain on storage/work
- RAID 0 on 2 drives + RAID 1 on the other 2 => good speed on scratch, good data security on storage/work, but scratch is too big and there's no performance gain on storage/work
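The capacity trade-offs between the layouts above can be tabulated with a minimal Python sketch (the 1TB drive size and drive counts come from the post; the per-level formulas are the standard textbook ones, not measurements):

```python
# Rough usable-capacity comparison for the 4 x 1 TB Hitachi drives discussed.
DRIVE_TB = 1.0  # size of one drive, per the post

def usable(level: str, n: int) -> float:
    """Usable capacity in TB for common RAID levels on n identical drives."""
    if level == "raid0":
        return n * DRIVE_TB          # stripe: all capacity, no redundancy
    if level == "raid1":
        return DRIVE_TB              # mirror: capacity of a single drive
    if level == "raid5":
        return (n - 1) * DRIVE_TB    # one drive's worth is consumed by parity
    if level == "raid10":
        return (n // 2) * DRIVE_TB   # half the drives hold mirror copies
    raise ValueError(f"unknown level: {level}")

for level, n in [("raid5", 4), ("raid10", 4), ("raid0", 3), ("raid0", 2)]:
    print(f"{level} on {n} drives -> {usable(level, n):.0f} TB usable")
```

This matches the figures in the lists above: 3TB for RAID 5 on 4 drives, 2TB for RAID 1+0 on 4 drives, 3TB for the 3-drive RAID 0.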

I think I am leaning toward RAID 1+0 for the moment, but I am wondering if the performance would be better with RAID 5?
Any benchmarks I could look at?

Thanks for your help!

Alex
 
I think I am leaning toward RAID 1+0 for the moment, but I am wondering if the performance would be better with RAID 5?
Any benchmarks I could look at?

Thanks for your help!

Alex


There is no software RAID 5 in OS X, so your only option is RAID 10.
Which is nice, OK and everything, but.
Apple has apparently messed up software RAID a bit in Leopard and Snow Leopard.
People have had big problems rebuilding the array after a drive failure.
So if you go the RAID 10 route, be sure to test it before use!

Set up the RAID, put some media on it, take out a disk, erase it, put it back and rebuild.

I personally would definitely go with RAID 10 over RAID 0, as I value my working time.
Losing the array in the middle of the working day would be at least a huge hassle for me (even with a couple-of-hours-old backup),
but on most days it would be catastrophic... I probably wouldn't have time to finish the work before my deadlines...
RAID 10 allows me time to finish up the work and save it. And if necessary, finish the day and set it rebuilding overnight.
 
Well, RAID 5 is best, but like Macinposh said, it requires a hardware controller.

Otherwise RAID 10 is best, but when you set the array up, be sure not to set up RAID 0+1! RAID 10 is a stripe of mirrors, while RAID 0+1 is a mirror of stripes.
 
I think RAID 0 is best. For both price and performance, RAID 0 kicks ass on the other options. It's also the same relative level of security.

Factoid: all RAID levels need to be backed up on a daily basis. (With incremental backup software this usually takes only 3 to 10 minutes, even on mammoth RAIDs of 6 or 8TB.)

RAID 0 with a backup is about the same level of security as RAID 5, 6, or 10. Redundancy adds an extra layer which, by the published numbers, offers almost no real advantage for the home or small-office user. In fact, in a SOHO environment such redundancy is often a detriment in terms of speed, cost, and downtime. For example, in a typical RAID 5, if one drive fails you just pull the bad drive and insert an identical unit in its place - right? Well, in the typical SOHO environment no one has the identical drive on hand. So there's a day to find and buy it. Then there's a 2- or 3-hour rebuild after you install the new unit. That rebuild time is almost always the same as the time it takes to restore a backup to a RAID 0. So where's the advantage? OK, you get to keep the 2 or 3 hours of work you did between the last backup and the drive failure - but even that's only if the drive just up and failed all at once, which is incredibly rare on a desktop machine.
 
I think RAID 0 is best. For both price and performance, RAID 0 kicks ass on the other options. It's also the same relative level of security.


But the point is: RAID 0 is lethal.
One drive goes, there goes the working day. Period.
If you are lucky and have fresh backups, it is still a day wasted.
If you are unlucky, with 7 hours of work behind you and people starting to rev up the printing machines, you are shafted. Period.

Thus a system that can take 1 or even 2 (RAID 10, optimally) disk failures is and will be the saving grace for many of us who work in environments where you can't lose the stuff you have been working on.

And before the "importance" of backing up is mentioned: yes, you have to back up your RAID systems.
Heck, it is good to back up the stuff you have been working on over the lunch break, if that is possible!

But as said, risk tolerances vary.
Mine is low, so I go the RAID 10 way...
:)
 
I would only do RAID 0 with Apple's software. It works well and the speed is excellent. Proper backups/Time Machine solve almost all issues of catastrophic failure. I have had no issues rebuilding the RAID 0 in 10.5 or 10.6.

4 x 1.5TB RAID 0
1 x 1.5TB Time Machine
1 x 1.5TB clone drive

It's the best setup I've ever used on a workstation.
 
I think RAID 0 is best. For both price and performance, RAID 0 kicks ass on the other options. It's also the same relative level of security.

You are WAY off base here. There are large differences in security between the RAID levels.

RAID 0 is used for completely different reasons than RAID 1 or even RAID 10.

RAID 0 is used primarily for speed and sometimes to create larger logical volumes. It is absolutely the worst in terms of data security. Lose any drive and the data is gone. You must restore from a backup after you replace the bad drive in the array.

RAID 1 offers higher data availability. If you lose a drive, the other drive keeps working and all data is still available. Replace the drive and rebuild the array, and you are back where you started without having to resort to restoring from backups.

RAID 10 offers a combination of speed and higher data availability. Up to two drives can fail (assuming the right 2 fail) in a 4-drive array and all data is still available. Replace the drive(s) and rebuild the array, and you are back where you started without having to resort to restoring from backups.

RAID 5 has its own issues that I will not get into here. Suffice it to say that I don't recommend it for the non-professional RAID user.
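The "assuming the right 2 fail" caveat can be made concrete by enumerating every two-drive failure in a 4-drive RAID 10. A quick sketch (the mirror-pair layout here is hypothetical; a real array's pairing depends on how it was built):

```python
from itertools import combinations

# Hypothetical 4-drive RAID 10 layout: drives 0,1 form mirror A; 2,3 form mirror B.
mirrors = [(0, 1), (2, 3)]

def survives(failed: set) -> bool:
    # The array survives as long as no mirror pair has lost BOTH members.
    return all(not set(pair) <= failed for pair in mirrors)

two_drive_failures = list(combinations(range(4), 2))
ok = [f for f in two_drive_failures if survives(set(f))]
print(f"{len(ok)} of {len(two_drive_failures)} two-drive failures are survivable")
# 4 of the 6 possible pairs are survivable; losing (0,1) or (2,3) kills the array.
```

So a second failure is survivable only when it lands in the other mirror pair, which is exactly the "right 2" condition above.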

S-
 
It depends on whether the OP needs redundancy (uptime) or not, in the event of a failure.

Software RAID isn't as secure as a hardware solution, even for the same array type, as it can't keep backup copies of the partition tables in firmware (which really helps with rebuilds, as it can allow you to salvage arrays that would be DOA with software methods).

So if uptime is needed, then a redundant array type would be required (i.e. server usage). If not, RAID 0 makes more sense from a cost perspective (the time can be afforded in the event a rebuild has to be done from backups).

I ended up with RAID5, as I can't afford the array to go down while in the middle of a job being run. They take too long, so I don't want to have to spend the time fixing the array, restoring data, and then re-doing the work. That's me though, and I was willing to spend the cash on the hardware to do it. :p
 
I ended up with RAID5, as I can't afford the array to go down while in the middle of a job being run. They take too long, so I don't want to have to spend the time fixing the array, restoring data, and then re-doing the work. That's me though, and I was willing to spend the cash on the hardware to do it. :p

Oh yeah RAID 5 is definitely the best; a bit pricey, but definitely worth it.
 
Oh yeah RAID 5 is definitely the best; a bit pricey, but definitely worth it.

On paper, RAID 5 looks great. But in practice it has a serious issue, the RAID 5 write hole, which means I, and anyone else who really cares about the integrity of their data, will never use it.

RAID-Z, on the other hand, is great, but you need to be using ZFS.

S-
 
On paper, RAID 5 looks great. But in practice it has a serious issue, the RAID 5 write hole, which means I, and anyone else who really cares about the integrity of their data, will never use it.

RAID-Z, on the other hand, is great, but you need to be using ZFS.

S-
For a software implementation, yes, I absolutely agree. :) But with a hardware-based controller, proper cards include an NVRAM solution for it.

That said, I do like ZFS and its RAID variants rather well. NAS is extremely attractive for providing storage to multiple filesystems, and fits beautifully with a Linux-based DIY unit. :D
 
Whoa... I never realized that software RAID 5 was not available on Mac OS X... I'm learning new stuff every day :)
Well, there go my options; I guess RAID 1+0 is the way to go then... because I am NOT taking the risk of RAID 0 on my "storage/work" partition. I can't afford to lose a day of work.
On the side, for backup, I have a 4TB RAID 5 NAS + a 1TB Time Capsule, so I should be covered for "external" backup.

Any idea what I can expect in terms of performance with this setup?

How different would the performance be with hardware RAID 5?

Thanks,

Alex
 
Whoa... I never realized that software RAID 5 was not available on Mac OS X... I'm learning new stuff every day :)
Well, there go my options; I guess RAID 1+0 is the way to go then... because I am NOT taking the risk of RAID 0 on my "storage/work" partition. I can't afford to lose a day of work.
On the side, for backup, I have a 4TB RAID 5 NAS + a 1TB Time Capsule, so I should be covered for "external" backup.

Any idea what I can expect in terms of performance with this setup?

How different would the performance be with hardware RAID 5?

Thanks,

Alex
A 4-disk RAID 10 has similar performance to 2 disks in a RAID 0, so it's definitely slower than RAID 5.

Assuming those disks can sustain a read throughput of 85MB/s, for example, the RAID 10 should give ~170MB/s. Its overhead on the CPU is lower than RAID 5's, but it lacks the same degree of parallelism, as half the drives are mirrors.
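The estimate above reduces to a one-liner: only the striped half of a RAID 10 contributes to sequential throughput. A minimal sketch, using the 85MB/s per-drive figure assumed in the post:

```python
SINGLE_MBPS = 85.0  # assumed sustained throughput of one drive, per the post

def raid10_throughput(n_drives: int, single: float = SINGLE_MBPS) -> float:
    # Only the striped half contributes to sequential throughput;
    # the other half of the drives hold mirror copies of the same data.
    return (n_drives // 2) * single

print(raid10_throughput(4))  # ~170 MB/s for the 4-drive array discussed
```

This is a rough model for sequential transfers only; random I/O and controller behavior change the picture considerably.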
 
Thanks! I actually understand how to calculate the performance for RAID 0, but not for RAID 5.

Let's assume a single disk's speed is 85 MB/s read and write.

4 disks with 1 for parity checking, right?
- So you read with performance equal to 3 x 85 MB/s
- What about the write speed? 1 x 85 MB/s??

Thanks,

Alex
 
Thanks! I actually understand how to calculate the performance for RAID 0, but not for RAID 5.

Let's assume a single disk's speed is 85 MB/s read and write.

4 disks with 1 for parity checking, right?
- So you read with performance equal to 3 x 85 MB/s
- What about the write speed? 1 x 85 MB/s??

Thanks,

Alex
Oops. I thought you were looking for the performance of RAID 10.

I've seen it done the way you're doing it for RAID 5, but there's another way too, which seems to apply to hardware controllers (not Fake RAID models):

Throughput = n * (single-disk throughput) * 0.85 <0.75 in a worst-case scenario, typically for low-port-count models, as they have slower processors>

So as an example, 4 * 85 * 0.85 = 289MB/s (figure an 8+ port card, likely having an 800MHz+ processor). If the card is a 4-port model, it likely only has a 500MHz processor on it, and would be slower. Parity calculations on slower parts do slow you down. In the end, you get what you pay for. :p

But you can't do this on the MP without a true hardware controller. And they're not exactly most people's idea of inexpensive.

If you short-stroke it, you have to base the throughput on a single drive in one of the partitions to be used. If the partition is set up after the array, you can use the Max (not Burst) throughput of a single disk, if you can test for it.

Keep in mind the results are approximate, but they're close enough to base some decisions on. Particularly if you need to figure out n to achieve a desired throughput from a particular array.
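nanofrog's rule of thumb above can be written as a small function (the 0.85/0.75 efficiency factors and the 85MB/s drive figure are the assumptions from the post, not measured values):

```python
def hw_raid5_estimate(n: int, single_mbps: float, efficiency: float = 0.85) -> float:
    """Rule-of-thumb sequential throughput for hardware RAID 5.

    efficiency ~0.85 for a fast controller (800MHz+ processor),
    ~0.75 worst case, typical of low-port-count cards with slower processors.
    """
    return n * single_mbps * efficiency

print(hw_raid5_estimate(4, 85))        # ~289 MB/s, matching the example above
print(hw_raid5_estimate(4, 85, 0.75))  # ~255 MB/s in the worst case
```

Note this models the controller doing parity in dedicated silicon; a software or Fake RAID implementation steals those cycles from the host CPU instead.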
 
Thanks for the explanation! Actually, looking at some benchmarks, I stumbled upon this: http://www.barefeats.com/hard104.html

This card is a dream for US$167?!?... I think I am buying one... and it would solve all my issues:
- RAID 5's speed and flexibility
- the RAID array is recognized under Windows for the occasional gaming...
- I just hope they ship to HK...

Does anybody have experience with this card? There is no battery inside like with the Apple Pro RAID card; what are the side effects of that?

Alex
 
Thanks for the explanation! Actually, looking at some benchmarks, I stumbled upon this: http://www.barefeats.com/hard104.html

This card is a dream for US$167?!?... I think I am buying one... and it would solve all my issues:
- RAID 5's speed and flexibility
- the RAID array is recognized under Windows for the occasional gaming...
- I just hope they ship to HK...

Does anybody have experience with this card? There is no battery inside like with the Apple Pro RAID card; what are the side effects of that?

Alex
It's a Fake RAID controller (just a SAS chip & drivers, no cache or processor on it), so it uses the system's resources to do the parity calculations.

Also note, it won't boot OS X, if that's what you want to do.

If neither is an issue, it might be just what you need. I'm just not a big fan of Highpoint's Fake RAID controllers, as I've had too many issues with their products in the past, and won't use one again.

The RR43xx, however, is a true hardware controller, and is actually designed and manufactured by Areca, which makes a good card. It also boots OS X if you need it to, as do some of Areca's own cards (the ARC-1680 series, for example) and Atto Technology's.

Given your location, you might be able to find an Areca fairly easily, as they're in Taiwan.
 
A 4-disk RAID 10 has similar performance to 2 disks in a RAID 0, so it's definitely slower than RAID 5.
A RAID 10 array using the same drive sizes and the same array capacity is ALWAYS going to be faster than RAID 5. Yes, it will take more drives in the RAID 10 array to get the same capacity.

S-
 
A RAID 10 array using the same drive sizes and the same array capacity is ALWAYS going to be faster than RAID 5. Yes, it will take more drives in the RAID 10 array to get the same capacity.

S-
Not with the controllers I've ever used, and they're no slouches by any means. Last I checked, the Areca SAS cards were the fastest out there.
 
It's a Fake RAID controller (just a SAS chip & drivers, no cache or processor on it), so it uses the system's resources to do the parity calculations.

Also note, it won't boot OS X, if that's what you want to do.

If neither is an issue, it might be just what you need. I'm just not a big fan of Highpoint's Fake RAID controllers, as I've had too many issues with their products in the past, and won't use one again.

The RR43xx, however, is a true hardware controller, and is actually designed and manufactured by Areca, which makes a good card. It also boots OS X if you need it to, as do some of Areca's own cards (the ARC-1680 series, for example) and Atto Technology's.

Given your location, you might be able to find an Areca fairly easily, as they're in Taiwan.

I see... actually, I don't think the overhead or booting under OS X is a problem for me... However, the issues you mentioned might be? Can you give a bit more detail, please? I'll look up the ARC-1680, though.

Thanks,

Alex
 
I see... actually, I don't think the overhead or booting under OS X is a problem for me... However, the issues you mentioned might be? Can you give a bit more detail, please? I'll look up the ARC-1680, though.

Thanks,

Alex
RAID 10 under OS X isn't bad on overhead, and RAID 5 won't be that bad for a 4-drive array. But as Sidewinder pointed out, Fake RAID controllers do not solve the write-hole issue. True hardware RAID controllers do, however, as they include an NVRAM method (a stored copy of the partition tables in firmware).

The Highpoint cards I tried would be ancient by today's standards, as they were IDE-interface models. But gugucom very recently tried to use one of their SATA models (RR2642, IIRC) and had nothing but trouble with it. His is an '06 though, which uses EFI32. They may fare better on '08 or newer models, as those run EFI64 firmware.

BTW, the ARC-1680 series isn't the only model that will work in the MP; some of the SATA cards do as well (and include boot capabilities). Take a look at the ARC-1210 & ARC-1220 (4 & 8 ports respectively). The controller is slower, but it's less expensive. Always a trade-off. :p

Another pair they make that will work are the ARC-1212 & ARC-1222, which are 800MHz SAS models (4 & 8 ports respectively as well).

ARC-1680 series

You'd have to compare the prices though, to see which is the best fit for your budget.
 
Not with the controllers I've ever used, and they're no slouches by any means. Last I checked, the Areca SAS cards were the fastest out there.
Come on now. You don't really believe that, do you?

Look here:

http://storageadvisors.adaptec.com/2007/04/17/yet-another-raid-10-vs-raid-5-question/

And here:

http://www.yonahruss.com/2008/11/raid-10-vs-raid-5-performance-cost.html

Do a Google search. There are many more examples.

At best, RAID 5 has nominally better read performance. However, its write performance is terrible in comparison. If a drive fails in RAID 5, array performance is incredibly bad during the rebuild, and the rebuild takes significantly longer than it would in a RAID 10 array.

The bottom line is that with any significant writing going on, RAID 5 takes a huge performance hit compared to RAID 10. No one who knows what they are doing and cares about write performance, or performance during an array rebuild, would ever choose RAID 5 over RAID 10.
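The write-performance gap comes from the classic RAID write penalty: a small random write on RAID 5 costs four back-end I/Os (read old data, read old parity, write new data, write new parity), while RAID 10 costs two (write both mirror halves). A rough sketch, with a hypothetical per-drive IOPS figure not taken from the thread:

```python
RAW_IOPS_PER_DRIVE = 150  # hypothetical figure for a 7,200rpm SATA drive

def random_write_iops(n_drives: int, penalty: int) -> float:
    # penalty = back-end I/Os generated per front-end write:
    #   RAID 10 -> 2 (write the block to both halves of a mirror)
    #   RAID 5  -> 4 (read data, read parity, write data, write parity)
    return n_drives * RAW_IOPS_PER_DRIVE / penalty

print(f"RAID 10: {random_write_iops(4, 2):.0f} small-write IOPS")
print(f"RAID 5:  {random_write_iops(4, 4):.0f} small-write IOPS")
```

On the same four drives, RAID 10 sustains roughly twice the small random writes of RAID 5; large sequential writes narrow the gap, since the controller can compute parity over full stripes.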

S-
 
The ARC-1210 is actually pretty decent, although it will not support Boot Camp functionality, to my eternal pity. :eek:

It is a slick piece of kit otherwise, with very flexible firmware options.

I recently saw it for approx. $180 in an open-box deal at Newegg or OWC, IIRC. I would have loved to buy it at that price, because it set me back €238 without the software CD here in Germany.
 
RAID 10 under OS X isn't bad on overhead, and RAID 5 won't be that bad for a 4-drive array. But as Sidewinder pointed out, Fake RAID controllers do not solve the write-hole issue. True hardware RAID controllers do, however, as they include an NVRAM method (a stored copy of the partition tables in firmware).

I see... how big a deal is that write hole? And doesn't this solve it:
(extract from the 2640x4 user manual, RAID 5 configuration)
20090922-xsf6mwbma4q5fxrn5p3mp2xiyg.png


Alex
 