
I've been looking for internal RAID solutions for a 2009 Mac Pro. Essentially what I'd like to do is set up the four internal HDDs as a RAID 50 array and use an SSD as a boot/scratch disk, either in one of the PCIe slots, the optical bay, or both. So I'm not really concerned with finding an OS X bootable controller. I've just been confused by a lot of the information that I've found about RAID cards for the '09+ Pros. It seems like I can't use a mini-SAS controller because the '09+ Pros no longer have the internal SAS connection? If so, what are my options for RAID cards under $400 that would support RAID 50? I've considered getting an Apple RAID card on eBay, but it doesn't support 50.
 
  1. In order to use the internal HDD bays with a 3rd party mini-SAS based RAID card, you'll need an adapter (here). It works well, but it does add another ~$130 + s/h to use a 3rd party card with the internal HDD bays.
  2. I can't think of an OS X compatible card that does 50 on its own in your budget, but you can create a pair of level 5 arrays on the card, then use OS X to stripe them together, creating a 50 that is within your budget (card only; you'll need to add the cost of the adapter). The ARC-1213 would do, and as it happens, it is bootable if you wish (requires you to change the firmware, which is available on their website <latest>, and also comes on the installation CD <may not be the latest version>). See the sketch after this list for the OS X striping step.
  3. Physically install the SSD in the empty optical bay, and use the SATA + power connectors located there to connect the drive to the system (3.0Gb/s, but you'd still see a performance improvement during boot/load times, and it's cost effective).
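
A minimal sketch of that striping step from Terminal, assuming the card's two level 5 arrays show up in OS X as disk2 and disk3 (placeholders; run diskutil list to get the real identifiers):

Code:
  # Identify the two RAID 5 volumes the card exports to OS X
  diskutil list
  # Stripe them into a single set (= RAID 50), formatted as Journaled HFS+
  # "Raid50" is just an example set name
  diskutil appleRAID create stripe Raid50 JHFS+ disk2 disk3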

All in, you're looking at a bit over $500, not including drives.

Also note that with a RAID card such as this, you'll need to use enterprise-grade HDDs, not consumer models. This is not optional, as the consumer variants do not have the correct firmware to be used with such a card, and will not work properly at all (the TLER timings coded into the drive's firmware differ from the consumer versions, which is critical for use with a RAID card, as the card is in total control of the disks rather than the OS, as is the case with the on-board SATA ports). They also cost more as a result.
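
If you want to verify this on a particular drive, smartmontools can query the error recovery timeouts that TLER sets (assumes you've installed it, e.g. via MacPorts, and that the drive is on a standard SATA port, since the tool generally can't see disks behind a RAID card; /dev/disk1 is a placeholder):

Code:
  # Read the current error recovery control (TLER) timeouts, in tenths of a second
  smartctl -l scterc /dev/disk1
  # Enterprise drives typically report ~7s (70,70); consumer drives usually
  # report the feature as disabled/unsupported, or won't accept this setting:
  smartctl -l scterc,70,70 /dev/disk1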

For compatible drive models, check Areca's site for the HDD Compatibility List, as it shows what they've tested that works with their cards. Following this list can, and usually does, save you loads of time; with unlisted drives (consumer models in particular), you usually only find out they aren't viable with the card once things don't go according to plan.
 

That's great info, thank you. After thinking about it a little more, I don't really need RAID 50 if I'm going to be booting off of an SSD and just using the RAID array as storage. I might as well find a cheaper card that just supports 1, 5, and 10.

Speaking of booting off an SSD, what do you think of the Apricorn Velocity Solo X2? I've heard it's not full 6Gb/s speeds, but still much faster than SATA II. Plus, I like the option of putting a second SSD in the optical bay if I need to expand.
 
There is no real RAID card that's going to do RAID 5 for much cheaper than the Areca 1213-4i. Maybe one of the HighPoint cards. You will see a big price jump between these cards and the cheaper ones they offer, but those aren't true hardware RAID cards. Most decent cards offering RAID 5 will also do RAID 50.

And I know this is bad, but I've had WD Caviar Black WD2002FAEX drives in both an ATTO R348 and now an Areca 1213-4i, and they do work.

I bought them back before I knew any better, and have been waiting for them to fail so I can get the RE4 drives... but they're still going OK. I did have one head crash, which was replaced under RMA, but I think it was unrelated.
 
That's great info, thank you.
You're welcome. Glad it helped. :)

After thinking about it a little more, I don't really need RAID 50 if I'm going to be booting off of an SSD and just using the RAID array as storage. I might as well find a cheaper card that just supports 1, 5, and 10.
The ARC-1213-4i will be the best you can do for a card that works in a MP and comes in under $400 (it does levels 0, 1, 10, 3, 5, 6, and JBOD), which covers every standard level possible with 4x drives.
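
To put those in perspective with hypothetical 4x 2TB drives: level 0 gives you 8TB usable with no redundancy, 10 gives 4TB (mirrored pairs, striped), 5 gives (4-1) x 2TB = 6TB with one drive's worth of parity, and 6 gives (4-2) x 2TB = 4TB with two.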

Brands in general are limited to begin with, and the two to look at are Areca and ATTO. Of the two, Areca is less expensive, but is still a proper RAID card (support is overseas, and the interface requires the use of a web browser to access). ATTO's support is located in the US, and their interface is a little nicer to look at and a bit easier to use. That's about it, but the ATTO will cost you more money as a result.

Speaking of booting off an SSD, what do you think of the Apricorn Velocity Solo X2? I've heard it's not full 6Gb/s speeds, but still much faster than SATA II. Plus, I like the option of putting a second SSD in the optical bay if I need to expand.
Your limit will be the SATA port in the MP the SSD is connected to (SATA II, which saturates at ~250-275MB/s real world), and the entire controller has a limit of ~660MB/s for all the drives (i.e. if you striped a couple of SSDs, you'd hit the SATA controller's bandwidth limit before you ever reached the max the set would be capable of).
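
As a rough sanity check once the SSD is in, a sequential dd pass from Terminal will show that ceiling (the volume name is a placeholder, and the read pass can be inflated by caching):

Code:
  # A 3.0Gb/s link tops out at 300MB/s theoretical after 8b/10b encoding,
  # hence the ~250-275MB/s real-world figure
  dd if=/dev/zero of=/Volumes/SSD/ddtest bs=1m count=4096   # ~4GB sequential write
  dd if=/Volumes/SSD/ddtest of=/dev/null bs=1m              # read it back
  rm /Volumes/SSD/ddtest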

There is no real RAID card that's going to do RAID 5 for much cheaper than the Areca 1213-4i. Maybe one of the HighPoint cards. You will see a big price jump between these cards and the cheaper ones they offer, but those aren't true hardware RAID cards. Most decent cards offering RAID 5 will also do RAID 50.
Depends on the drive count in this case; the controller is capable, but there just aren't enough drives. With a SAS expander (on models that work with them), it's possible at that point.

But the ARC-1213 doesn't work with expanders, and that's one of the compromises it makes vs. its bigger brothers.

And I know this is bad, but I've had WD Caviar Black WD2002FAEX drives in both an ATTO R348 and now an Areca 1213-4i, and they do work.
What is the configuration (level)?

I ask, as they can be used reliably for 0, 1, 10, and JBOD. But once you try to use them with parity levels, you're basically playing Russian roulette with your array (I've seen this many times with users that didn't heed the advice to use enterprise drives).

Granted, so long as there's a proper backup, it's just time lost. But when you get into recurring errors, that's a PITA at best, and can actually be what kills the drives (error, rebuild; wash, rinse, repeat, until enough damage occurs that one drive too many fails during the rebuild and *poof*, the array is shot). Bad sectors accumulate during all of this, eventually leaving the drives too unstable to trust, or outright defective (even remapped and used as a single disk).

Not to single out a particular brand, but Seagates have given me the most trouble over the last 4 years, including their enterprise line. I switched to WD for SATA back in 2008, and haven't had the problems I was seeing with Seagate (Hitachi isn't that wonderful in my experience either for enterprise models).

I bought them back before I knew any better, and have been waiting for them to fail so I can get the RE4 drives... but they're still going OK. I did have one head crash, which was replaced under RMA, but I think it was unrelated.
Up to you, but you can always re-purpose those drives for, say, backup or archival storage before they're damaged.

Just a thought. ;)
 
In both controllers the drives have been RAID 5. I do keep a tight backup, which unfortunately means I don't really have much purpose for the drives outside the system, so I've just been waiting for something bad to happen to them. As long as I'm getting away with it for now I see no reason to change :eek:

I was under the impression the drive would just get dropped from the RAID set if the controller thought it was not responding? Is there a possibility the drive will be damaged and unable to be reformatted and used again on its own?
 
In both controllers the drives have been RAID 5. I do keep a tight backup, which unfortunately means I don't really have much purpose for the drives outside the system, so I've just been waiting for something bad to happen to them. As long as I'm getting away with it for now I see no reason to change :eek:
Up to you. But when you run into the type of problem I'm describing, you'll understand the reasoning behind the warning. :eek: :p

Unfortunately, I've seen a lot of users have to learn this the hard way. :(

I was under the impression the drive would just get dropped from the RAID set if the controller thought it was not responding? Is there a possibility the drive will be damaged and unable to be reformatted and used again on its own?
Consumer models are dropped faster than the enterprise units. Where this becomes a problem is that it tends to happen with multiple members of the set, making it unstable, and ultimately unusable (best case).

But they're not able to handle the vibration as well (they're missing sensors that are added to the enterprise units), which is the primary cause of damage. Specifically, this can cause the heads to physically smack the platters, scarring them = data loss. A disk can only remap a certain amount of data (limited not just by the spare sectors on the platters, but by the memory capacity on the HDD controller board that stores the remap data <pointers>), and once this is reached, the disk becomes dead/useless to the end user (bad sectors that cannot be remapped at this point = corrupted data).
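
If you want to watch for this on those Blacks, the SMART counters are the tell (again assumes smartmontools, with the drive visible on a standard SATA port; /dev/disk1 is a placeholder):

Code:
  # Attribute 5 counts sectors already remapped; 197 counts sectors pending
  # remap. Rising raw values on either are the early warning.
  smartctl -A /dev/disk1 | egrep 'Reallocated_Sector_Ct|Current_Pending_Sector'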
 
So with these RAID cards is it recommended to keep the system on 24/7 like it was a server? I've heard this about the Apple RAID card as well, and I'd honestly prefer not to leave my system and HDDs running 24/7...
 
So with these RAID cards is it recommended to keep the system on 24/7 like it was a server? I've heard this about the Apple RAID card as well, and I'd honestly prefer not to leave my system and HDDs running 24/7...
You don't have to leave it on 24/7.

Where this comes from is the idea that spin-up of an array causes undue wear and tear on the drives (heating and cooling of the drives; think of it like cold-starting a car engine, which is where most of an engine's wear occurs over its lifespan).

However, just like a properly designed automobile engine, the electronics and mechanical portions of an HDD should have this issue taken into account during the design phase. Now if you're turning it on for 10 min, off for 10 min, and repeating this constantly all of the time, then you might be right to be concerned (you'd end up exceeding its designed thermal cycle threshold). But this isn't likely going to be the case, so I wouldn't panic. ;)
 
But they're not able to handle the vibration as well (they're missing sensors that are added to the enterprise units), which is the primary cause of damage. Specifically, this can cause the heads to physically smack the platters, scarring them = data loss. A disk can only remap a certain amount of data (limited not just by the spare sectors on the platters, but by the memory capacity on the HDD controller board that stores the remap data <pointers>), and once this is reached, the disk becomes dead/useless to the end user (bad sectors that cannot be remapped at this point = corrupted data).

OK, but having a 4 drive RAID setup, all in a regular PC tower case, isn't going to cause much more vibration than if you have the 4 drives in there in a standard non-RAID config, is it? I can understand a 24 disk array, all spinning at the same time, might put some strain on the drives. I'm just hoping this isn't what damaged that drive that already failed...
 
OK, but having a 4 drive RAID setup, all in a regular PC tower case, isn't going to cause much more vibration than if you have the 4 drives in there in a standard non-RAID config, is it?
You'd be surprised at the difference when they're spinning in unison (head movement is the real culprit) vs. single disk operation. Because the platters are only supported on one end, they'll vibrate during rotation (think of the shaft oscillating at the unattached end, excited by the head movement), and if allowed to continue, it will reach the point where the vertical movement is greater than the fly height of the heads (= platter gets smacked, and damage results).

And 4 units is enough that you can see this (think random access in particular, as the heads go nuts). Given that a 24 member set using 3.5" drives would very likely be physically spread over multiple enclosures, the difference between that many members and just 4 isn't as great as you would think. :eek: ;)
 