The ARC-1210 is actually pretty decent, although it will not support Boot Camp functionality, to my eternal pity. :eek:

When you say it does not support Boot Camp, do you mean you can't boot into Boot Camp with it? Or that you can't see the volume under Boot Camp?

I don't want to boot from the RAID array, I have my SSD for that... but I would like to be able to see the partition under Windows...
 
Come on now. You don't really believe that, do you?

Look here:

http://storageadvisors.adaptec.com/2007/04/17/yet-another-raid-10-vs-raid-5-question/

And here:

http://www.yonahruss.com/2008/11/raid-10-vs-raid-5-performance-cost.html

Do a Google search. There are many more examples.

At best, RAID 5 has nominally better read performance. However, its write performance is terrible in comparison. If a drive fails in RAID 5, array performance is incredibly bad during the rebuild, and the rebuild takes significantly longer than it would in a RAID 10 array.

The bottom line is that with any significant writing going on, RAID 5 takes a huge performance hit compared to RAID 10. No one who knows what they are doing and cares about write performance, or performance during an array rebuild, would ever choose RAID 5 over RAID 10.

S-


Hmmm, now I am confused... If I read the benchmarks on barefeats comparing RAID 0 and RAID 5 with whatever card, the performance of RAID 0 is obviously better... but with the same number of disks!

And if I understand correctly, RAID 10 with 4 drives = the performance of RAID 0 on 2 drives.

Also, RAID 0 performance scales linearly with the number of disks (figures taken from: http://www.barefeats.com/hard104.html).

So as an example, with the RocketRAID 2640X4 and 4 SATA drives in RAID 0, max read = 391 MB/s.
So logically, max read on 2 drives in RAID 0 should be = 195 MB/s.

And that same benchmark shows the performance of the 4 drives in RAID 5 to be max read = 293 MB/s, so significantly better than RAID 0 with 2 drives...

The same reasoning applies to write speeds: 183 MB/s vs 285 MB/s.
So how is a 4-drive RAID 10 better than a 4-drive RAID 5, performance-wise?

Thanks,

Alex
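Alex's arithmetic above can be sketched as a toy throughput model. The 391 MB/s figure comes from the barefeats numbers quoted in the post; the per-level formulas are idealized ceilings that ignore controller overhead, so treat this as an illustration, not a benchmark:

```python
# Illustrative back-of-the-envelope model of peak sequential read throughput
# per RAID level. The per-disk speed is inferred from the barefeats figure
# quoted above (391 MB/s across 4 drives in RAID 0); all numbers are rough
# assumptions, not measurements.

def raid_seq_read(per_disk_mb_s: float, disks: int, level: str) -> float:
    """Theoretical peak sequential read in MB/s (ignores controller overhead)."""
    if level == "raid0":
        return per_disk_mb_s * disks          # striped across all disks
    if level == "raid5":
        return per_disk_mb_s * (disks - 1)    # one disk's worth of capacity holds parity
    if level == "raid10":
        return per_disk_mb_s * disks          # mirrors can serve reads too
    raise ValueError(level)

per_disk = 391 / 4  # ~98 MB/s per drive, from the 4-drive RAID 0 figure

print(raid_seq_read(per_disk, 2, "raid0"))   # ~195 MB/s, matching the estimate above
print(raid_seq_read(per_disk, 4, "raid5"))   # ~293 MB/s theoretical ceiling
print(raid_seq_read(per_disk, 4, "raid10"))  # reads can hit all four spindles
```

Note the RAID 10 line: because reads can be serviced from either side of each mirror, a 4-drive RAID 10 is not limited to 2-drive RAID 0 read speed, which is the crux of the disagreement in this thread.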
 
I see... how big a deal is that write hole? And isn't that solving it?
(extract from the 2640X4 user manual for RAID 5 configuration; screenshot attachment)


Alex

No, it does not. The parity has to be calculated after the data block has been written. If a failure occurs before the parity is calculated and written to the parity block, the parity won't match the data. If that data ever has to be recovered, it will be recovered using incorrect parity, and garbage will come back. Worse yet, you'll never know until you read the data.

Don't use RAID 5. Use RAID 10.

S-
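The failure sequence described above (data written, crash before the parity update, rebuild from stale parity) can be illustrated with a toy XOR-parity simulation. This is purely a sketch of the failure mode, not real controller behavior:

```python
# Toy illustration of the RAID 5 "write hole": a crash between writing a data
# block and updating its parity leaves the stripe inconsistent, so a later
# rebuild silently reconstructs garbage. Blocks here are just 4-bit ints.
from functools import reduce

def parity(blocks):
    """XOR parity across a stripe's data blocks, as in real RAID 5."""
    return reduce(lambda a, b: a ^ b, blocks)

# Stripe of three data blocks plus parity.
data = [0b1010, 0b0110, 0b0001]
p = parity(data)                       # parity is consistent with the data

data[0] = 0b1111                       # new data block hits the disk...
# ...power fails HERE, before the parity block is rewritten. `p` is now stale.

# Later, the disk holding data[1] dies and is rebuilt from the survivors:
rebuilt = p ^ data[0] ^ data[2]
print(rebuilt == 0b0110)               # False: the rebuild returns garbage
```

The rebuild reports no error, which is the point S- makes: you only find out when you read the corrupted file.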
 
But the point is: RAID 0 is lethal.
One drive goes, there goes the working day. Period.
If you are lucky and you have fresh backups, it is still a day wasted.
If you are unlucky, with 7 hours of work behind you and people starting to rev up the printing machines, you are shafted. Period.

Well, if you had a print job or any job with a tight deadline and you weren't backing it up to multiple separate volumes every other save or two, then you're a complete buffoon and deserve to be fired, only to join the ranks of other people like yourself, usually at McDonald's or the local car wash.

Like I said, RAID 0 offers about the same level of security in practice as any other RAID level. If your backup is a clone, as it probably should be, then there is zero downtime. Just select the backup as the startup disk and off you go.

For a SOHO in today's economy, where ±$500 seems significant, there's just no other consideration. RAID 0 rocks too hard!
 

I am not going to go RAID 0 on 4 drives, yes performance would be amazing, but I can't afford to buy another NAS to mirror the system...
 

The controller firmware either supports OS X booting via EFI or it is geared towards BIOS firmware on a PC. It will not let you boot alternately into a Vista or Snow Leopard RAID 0 array, which is my requirement.

You can use the ports for internal or external RAID storage, and you can boot nicely from OS X arrays. But if you want to boot Windows, you have to hook your Windows HDD or SSD up to one of the Mac Pro's internal ports. For me that is a turn-off, because I bought the ARC-1210 to run some benchmarks booting Windows from a two-SSD RAID 0 array.

For most users that will be no issue at all. So it is a great card.
 
Well, if you had a print job or any job with a tight deadline and you weren't backing it up to multiple separate volumes every other save or two, then you're a complete buffoon and deserve to be fired, only to join the ranks of other people like yourself, usually at McDonald's or the local car wash.

What's so bad about McDonald's? You mean the people who work there are inferior, talentless, dumb and stupid? I don't get it.

As said, don't worry about my backups. They are sorted.

But there are a lot of people who aren't like you (or me), and don't save their working data to external disks all the time. Being lazy or unthoughtful is the normal human modus operandi.


RAID 0 = nice for civilians who have the time to spare and rebuild their systems.
RAID 10 = nice for people who actually work professionally, value their time and want to get the job done.


And before you start ranting about "speed": you usually use external 4/8-disk RAID 0 boxes for media handling in editing, where the speed matters.
Which is properly backed up to a 4/8-disk box as well.
You are not using your internal 4-disk RAID 0 with system/programs/media on it.
Unless you are a complete buffoon who deserves to be fired, only to join the ranks of other... uuh... wait?
 
Maybe the right thing to do is to specify what I mean by "performance" on the disk...
I read those two really interesting comparisons between RAID 10 and RAID 5, and it seems that if IOPS is the main concern, then yes, RAID 10 is better... but I am not going to host a high-traffic database on my RAID...

I want the RAID for:
- 1 Photoshop scratch drive
- 1 store for RAW files dumped from my camera, and for my photos in edit mode: 500 MB TIFFs or .psd files, or even bigger...

So I am thinking that what I need is "simple" and "brutal" fast sequential write speed? Am I right?

Alex
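For a rough sense of what sequential write throughput means for this kind of workload, here's a quick back-of-the-envelope sketch. The file size matches the ~500 MB TIFFs mentioned in the post; the MB/s values are placeholders, not measurements of any particular array:

```python
# Back-of-the-envelope: seconds to save one large Photoshop file at a few
# sequential write throughputs. The throughput figures are placeholders,
# not benchmarks of any specific RAID setup.

def save_time_s(file_mb: float, write_mb_s: float) -> float:
    """Idealized save time, ignoring filesystem and application overhead."""
    return file_mb / write_mb_s

for throughput in (100, 200, 300):  # MB/s
    t = save_time_s(500, throughput)
    print(f"{throughput} MB/s -> {t:.1f} s per 500 MB save")
```

At these file sizes the difference between array levels is a second or two per save; the bigger practical gap shows up in sustained batch exports and in behavior during a rebuild.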
 
You won't get "simple" and "brutal" fast sequential write speed with RAID 5. RAID 10 is a better choice for that. RAID 5 is hardly ever the right choice these days, with disks being so inexpensive.

S-
 

Right, I understand that the solution itself may be better when you are free to add as many disks as you want, etc.

Me, I have only 4 slots for drives... no more, no less.
So on 4 drives, do you maintain that overall performance is better on RAID 10 than on RAID 5?? You need to show me your calculation, because I don't get it...

Alex
 
Alex,

Did you read the links I included?

Read this one too:

http://www.miracleas.com/BAARF/RAID5_versus_RAID10.txt

S-

I understand and agree with the analysis... but once again: this is true and valid when you can use more drives for RAID 10 than for RAID 5... to reach the same performance under normal conditions, I would need 6 drives in RAID 10 to match 4 drives in RAID 5...

It's not that I don't want to put in 6 drives, it's just that I can't in the Mac Pro, considering I am already using optical bay 2 for my SSD boot disk.

Alex
 
aponsin,

The read performance of a 4-spindle RAID 5 array may be slightly better than the read performance of a 4-spindle RAID 10 array. But the difference is not huge. Note that data can be read from all four spindles in RAID 10.

The write performance of a 4-spindle RAID 5 array is noticeably slower than that of a 4-spindle RAID 10 array.

For RAID 5 to match RAID 10 performance, the RAID 5 array needs almost twice as many disks, not the other way around.

In your situation, RAID 10 is going to be faster, safer, and more reliable. But you will need to use 50% bigger disks to achieve the same capacity. RAID 5 is a bad choice, period.

S-
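The capacity and write-cost trade-off described above can be sketched numerically, using the textbook small-write penalties (a RAID 5 random write costs 4 I/Os for the read-modify-write cycle; a RAID 10 write costs 2, one per mirror). Real controllers with write-back caches will shift these numbers, so this is illustrative only:

```python
# Rough comparison of usable capacity and random-write cost for a four-disk
# array, using textbook write-penalty figures (RAID 5: 4 I/Os per write,
# RAID 10: 2 I/Os per write). Illustrative only; caches change the picture.

def usable_capacity(disk_tb: float, disks: int, level: str) -> float:
    if level == "raid5":
        return disk_tb * (disks - 1)   # one disk's worth of parity
    if level == "raid10":
        return disk_tb * disks / 2     # everything mirrored
    raise ValueError(level)

def write_iops(per_disk_iops: float, disks: int, level: str) -> float:
    penalty = {"raid5": 4, "raid10": 2}[level]
    return per_disk_iops * disks / penalty

print(usable_capacity(1.0, 4, "raid5"))    # 3.0 TB usable
print(usable_capacity(1.0, 4, "raid10"))   # 2.0 TB usable
print(write_iops(100, 4, "raid5"))         # 100 random-write IOPS
print(write_iops(100, 4, "raid10"))        # 200 random-write IOPS
```

This also shows where the "50% bigger disks" figure comes from: matching the 3 TB usable of four 1 TB disks in RAID 5 takes four 1.5 TB disks in RAID 10.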
 
At best, RAID 5 has nominally better read performance. However, its write performance is terrible in comparison. If a drive fails in RAID 5, array performance is incredibly bad during the rebuild, and the rebuild takes significantly longer than it would in a RAID 10 array.

The bottom line is that with any significant writing going on, RAID 5 takes a huge performance hit compared to RAID 10. No one who knows what they are doing and cares about write performance, or performance during an array rebuild, would ever choose RAID 5 over RAID 10.

S-
I see where the difference is.

As aponsin didn't specify read vs. write requirements, I made the assumption that the read requirement was much higher than write (no applications were listed, and the requirements weren't stated explicitly). You went the other way. :p

If writes are the primary need, then I agree 10 is the better way to go.

Maybe the right thing to do is to specify what I mean by "performance" on the disk...
I read those two really interesting comparisons between RAID 10 and RAID 5, and it seems that if IOPS is the main concern, then yes, RAID 10 is better... but I am not going to host a high-traffic database on my RAID...

I want the RAID for:
- 1 Photoshop scratch drive
- 1 store for RAW files dumped from my camera, and for my photos in edit mode: 500 MB TIFFs or .psd files, or even bigger...

So I am thinking that what I need is "simple" and "brutal" fast sequential write speed? Am I right?

Alex
Given this little bit of information, you'd still be better off with 10, and you can stuff another pair of drives into the empty optical bay. Adequate memory would significantly reduce the need for a scratch disk/array/partition as well.

Adding memory is also cheaper than the hardware RAID controller you'd need to run a level 5 array, at least if you're looking at the Areca SAS cards, and some, if not most, of the SATA models as well (the additional ports add to the price rather significantly, as it's more than just extra ports soldered onto the board).
 
I didn't go the other way. The only time RAID 5 might perform better than RAID 10, with the same number of spindles, is in a pure read only environment. In any typical real world environment, RAID 10 outperforms RAID 5.

RAID 5 is a bad choice here and in most cases.

S-
 
I didn't go the other way. The only time RAID 5 might perform better than RAID 10, with the same number of spindles, is in a pure read only environment. In any typical real world environment, RAID 10 outperforms RAID 5.

RAID 5 is a bad choice here and in most cases.

S-
I look at high reads as 90%+, not 100%, as the latter is extremely rare (not impossible, though). But most people draw that line in slightly different places.
 
Well, if you had a print job or any job with a tight deadline and you weren't backing it up to multiple separate volumes every other save or two, then you're a complete buffoon and deserve to be fired, only to join the ranks of other people like yourself, usually at McDonald's or the local car wash.

Like I said, RAID 0 offers about the same level of security in practice as any other RAID level. If your backup is a clone, as it probably should be, then there is zero downtime. Just select the backup as the startup disk and off you go.

For a SOHO in today's economy, where ±$500 seems significant, there's just no other consideration. RAID 0 rocks too hard!

Couldn't agree more, but I own a car wash (several, actually).

I have used all forms of RAID over the years. When they fail, they are all a PITA. RAID 0 is the simplest, easiest, and best performing. I don't want to deal with trying to recreate a RAID on a live production system at all. If it fails, I boot/use the latest backup and keep on going. I rebuild it when there is time to concentrate on the task; this is true of any RAID I use. Virtualization has changed the RAID game a lot. My servers all have RAID 5, they all run Xen virtual machines, and if a RAID fails, I simply boot the copy on a different server and get it up in minutes. Then I can deal with whatever the issue is on the failed RAID without pressure. Any RAID issue in a production environment sucks.
 

I currently run a data center with lots and lots of drives in RAID arrays. These arrays need to be running 24x7 with five-nines uptime. Drives fail on occasion, and I have no real issues keeping the arrays up and running when they do. I have spare drives on hand, and I never use RAID 0 or RAID 5. There are many instances of RAID 1, RAID 10, and some RAID-Z.

We also have a rather large ZFS disk pool (72 terabytes) used by a six-system Xen "cloud" (192 GB of RAM and 48 cores in total). There is no single point of failure in this setup. Sometimes drives need to be replaced. Again, no downtime.

Saying that RAID 0 offers "the same level of security in practice as any other RAID level" is ludicrous and just plain wrong. Using RAID 5 is almost stupid these days, even with battery backup on the card.

S-
 
I try to keep in mind, though, that SOHO use (which seems to apply to most users asking about RAID levels on MR, IMO) can't afford such methods, and so long as the user has access to the system and drive space is limited, 5 can be an acceptable compromise. RAID 5 certainly isn't the end-all-be-all of RAID levels. :D

I prefer other methods in enterprise use, as they can budget the necessary cash for what's needed.

As for RAID 0, it has its place, I guess. Just so long as the user fully understands the compromise involved, which will translate to time in the event of the eventual failure (when, not if). If the time involved, particularly downtime, isn't acceptable, then the budget needs to be adjusted to select the correct level for the requirements. Hence RAID 5 serves as the minimum for SOHO use (IMO with a hardware controller, whether the system can run 5 in software or not). If additional cash can be budgeted, perhaps a different level would be a better fit, depending on specific needs.

It always comes down to the details of the system and intended usage, to me. :)
 

RAID 5 is more dangerous in a SOHO setting because the person running it is less likely to have a spare drive on hand, and less likely to have a real RAID card with battery backup on it.

RAID 5 really sucks for the casual user. RAID 10 is a MUCH better choice.

S-
 
OK, OK, I am convinced now :)
Actually it's my fault; I never realized that a RAID 1 configuration can read twice as fast, hence giving RAID 0 read performance to a 4-drive RAID 10 array... This clears up a lot of my wondering about RAID stuff...

So I am definitely going to go RAID 10 with 4x 1 TB, which will give me just plenty of space!
I think I am also going to buy a fake-hardware-RAID card like the 2640X4, just to make the array visible under Windows. The card is really cheap (167 USD at OWC), the performance is good (barefeats), and there is not really any downside (as far as I know) to fake hardware RAID 10 vs. software RAID 10.

Anyway, thank you all very much for your help and insight :)

Finally, someone said you could mount 2 drives in the second optical bay?
How would one do that? I thought one could put a maximum of 5 drives + the optical drive (with the standard hardware).
Which is why I am going for a setup of:
- optical drive (standard)
- SSD in optical bay 2 -> bootable install of SL and XP (Boot Camp)
- 4 internal slots

Alex
 
RAID 5 is more dangerous in a SOHO setting because the person running it is less likely to have a spare drive on hand, and less likely to have a real RAID card with battery backup on it.
Some are bad at maintaining backups. A lack of spare drives, and running level 5 in software (on system resources), on Fake RAID, or even on a RAID card without a UPS or BBU, are all possibilities, particularly with SOHO users; it's up to them to learn the differences between array levels and methods, and how important such things are. Ultimately, it's up to the user to be accountable for their systems, and RAID isn't all that quick and easy to do when starting with absolutely no idea about any of it. It takes time and effort. ;)

RAID 5 really sucks for the casual user. RAID 10 is a MUCH better choice.

S-
Again, it depends. If you have the drive space (physical installation) and budget for the capacity, or for the speed if there's a minimum for a given array type, wonderful.

Unfortunately, the MP has limited space in it, and the only solution beyond that is some external means. So for some, RAID 5 may make more sense (higher read usage, and they need more capacity than 10 can provide). Granted, the cost of the controller is there, but it allows them to grow as well. That's not possible with OS X's built-in RAID functions beyond a maximum of 6 SATA ports (5 ports in the '09s, if they're unwilling to relocate the optical drive).

It's a situation where a somewhat rare case in general becomes more attractive in MPs, given the physical constraints. That's all. If 10 can be handled on, say, 4 disks (speed, capacity, or best yet both), then it's the least expensive way to go. Just buy 4 drives of one's preference. But beyond 6, it's an external enclosure and a controller of some sort anyway.
 
Couldn't agree more, but I own a car wash (several, actually).

I have used all forms of RAID over the years. When they fail, they are all a PITA. RAID 0 is the simplest, easiest, and best performing. I don't want to deal with trying to recreate a RAID on a live production system at all. If it fails, I boot/use the latest backup and keep on going. I rebuild it when there is time to concentrate on the task; this is true of any RAID I use. Virtualization has changed the RAID game a lot. My servers all have RAID 5, they all run Xen virtual machines, and if a RAID fails, I simply boot the copy on a different server and get it up in minutes. Then I can deal with whatever the issue is on the failed RAID without pressure. Any RAID issue in a production environment sucks.


We're of an extremely similar mind on the topic. Nice post too.
 