Thanks everyone so far. My head is spinning; I have spent two days reading forums online, etc. I am learning a lot (slowly), but still not as much as most people online seem to know. I am starting to think I am in over my head, but I also feel like I am close.
RAID isn't a quick and easy thing to understand. There's more to understand than just the levels, such as HDD types (enterprise vs. consumer models), enclosures, adapters, ... And a mistake in any part of the system can be disastrous.
So I am using a Mac Pro Dual Core Intel Xeon 3GHz on Mac OS X 10.5, and I'm thinking about upgrading to 10.6. I only need Mac OS. I really want HDD space with speed. I like the idea of expanding in the future, but I'm not counting on it now.
It's not as expensive as you might think to use the internal bays with a card (there's a cable that can be used).
RAID card: I have been seeing a lot of people use Areca, so I'm going to narrow it down to that.
Areca ARC-1222 (also the 1222x)
Areca ARC-1231ML-2G
Don't lock yourself into just those two, as there are other suitable models that may better fit your needs and budget.
ARC-1222 = SAS model (it also runs SATA disks)
ARC-1231ML = SATA only
Both are specced to 3.0Gb/s per port, as they were designed before the 6.0Gb/s specification was created.
The Areca 1880 series is a very recent card, and is 6.0Gb/s compliant (just came out a month ago).
I need a bootable card if I am running a hardware RAID. RAID 5 has interested me the most, as it seems the most secure for the amount of space. I have read about the different kinds of RAID many times and I get the general gist of it all. I would use the four bays to hold the drives.
Actually, you only need a bootable card if the OS is on it. All that's needed otherwise are the drivers (i.e. separate boot disk or array attached to a different controller, such as the system's ICH).
As per using the internal disks, you need to decide for certain whether you're getting a new system now or not, as different adapters are needed to make it all work on the different systems. Staying with the existing 2006 model will actually save you money.
The reason for this is that the 2006 model has a cable that attaches to the HDD bays and plugs into a MiniSAS connector on the logic board (SFF-8087 port). Unfortunately, it tends to be too short to reach the card, so you have to use an extension cable to make it reach.
From 2009 on, the data is carried by traces directly on the logic board, so a different adapter is needed (a kit actually, as one installs directly in each HDD bay) to get the HDD bays operational with a 3rd party RAID card (Apple's pile of junk RAID card can use the traces, but isn't worth having).
The second issue is how to use the SSD. From reading, it seems like using the SSD as a boot disk just makes all the apps load faster, which I couldn't care less about if that is the only advantage. I am more concerned with having speed when I use applications like Photoshop, so I'm thinking about maybe using it as a scratch disk instead. I also can't find any info on whether you can hook multiple SSDs to the optical bay. If so, I could do one for startup and one for scratch.
If you'd rather use SSD for scratch, that's fine as long as you realize that they have to be replaced every 1 - 1.5 years (MTBR = Mean Time Between Replacement).
Physical installation is easy, but getting them connected to a controller can be more difficult, depending on which machine is used (2006 - 2008 systems are easier on this account if you want to use the ICH). A separate controller isn't that big a deal, except there's more money involved.
For the 2009 systems, placing a single unit in the empty optical bay is easy, but you only have a SATA connection (ICH) for a single disk. This is where you'd need another card for a second disk at that location. Another option, one many don't like as much, is to remove the optical drive from the top bay, and move it to an external enclosure (USB is best, as it will work with any OS). This gives you another internal port for an SSD without consuming an HDD bay.
As you can see by now, the internal situations are specific to the models being used.
I'd seriously consider RAID 10 if I were you. RAID 5 is a pain because of the constant parity overhead. RAID 10 will give you the best speed (faster read, write AND seek than RAID 5) and redundancy (best case it can tolerate 2 failed disks vs. RAID 5's best case of 1 disk), coupled with the shortest rebuild time in the event of a disk failure (RAID 5 is a dog to rebuild). The only point on which RAID 5 is superior to RAID 10 is total array capacity: if you used 4x 2TB disks, RAID 5 would give you 6TB where RAID 10 would give 4TB.
This isn't completely true, especially with a proper RAID controller (in this case, it's not true at all).
Please don't take this as a harsh reply, as parity based arrays do mean the specifics are more complicated (i.e. software vs. hardware implementations in particular). I'll explain a bit further....
The parity overhead is totally moot with the equipment being considered, as it's done by the RAID card (with a Fake RAID controller = software implementation, you'd be correct in terms of system overhead). Since the Fake controller doesn't have its own processor, it has to use system resources to do the calculations, and parity is more complicated than 10 = more clock cycles will be needed (shows up in CPU % utilization).
A proper RAID card, however, has its own processor, cache, and NVRAM solution to the write hole (another problem with software implementations for parity based arrays = not suited).
As for performance between RAID 5 and 10, that's definitely not true at all, especially as you increase the member count. Using the same 4x disks, you'd get over 300MB/s out of the RAID 5 vs. 200MB/s for the same disks in a 10 configuration. The cost of the speed is that the redundancy is only a single disk vs. 2 for a 10 configuration. But given it's a workstation (user at the system = total control, both in terms of settings accessibility and physically), it's an acceptable compromise compared to a remote system (= network access for settings/management, not physical access; someone has to be sent out if there's a problem).
And in OS X's case, you do place the overhead on the system (not terrible by any means, but it's there).
If you were talking about a RAID 6, you'd be correct (same redundancy, but the parity calculations do slow it down a tad as it's more complex than 1+0; both using the same 4x disks).
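To put some numbers on the exchange above, here's a minimal sketch (my own illustration, not from the thread). The ~100MB/s per-disk figure is an assumed round number for a mechanical drive behind a hardware controller, and the streaming model is deliberately crude (it ignores cache, stripe size, and the software vs. hardware distinction discussed above), but it reproduces the 6TB vs. 4TB and ~300MB/s vs. ~200MB/s figures quoted:

```python
# Rough, idealized capacity/redundancy/throughput math for the levels discussed above.
# disk_tb and per_disk_mbs are example figures, not measured numbers.

def raid_summary(level, disks, disk_tb=2.0, per_disk_mbs=100):
    """Return (usable TB, worst-case failures tolerated, rough streaming MB/s)."""
    if level == 5:                                   # single parity
        return (disks - 1) * disk_tb, 1, (disks - 1) * per_disk_mbs
    if level == 6:                                   # dual parity
        return (disks - 2) * disk_tb, 2, (disks - 2) * per_disk_mbs
    if level == 10:                                  # striped mirrors (even disk count)
        return (disks // 2) * disk_tb, 1, (disks // 2) * per_disk_mbs
    raise ValueError("only 5, 6, and 10 modeled here")

for lvl in (5, 6, 10):
    tb, tol, mbs = raid_summary(lvl, disks=4)
    print(f"RAID {lvl}: 4x 2TB -> {tb:.0f}TB usable, {tol} disk worst-case tolerance, ~{mbs} MB/s")
```

(For RAID 10 with 4 disks the best case is 2 failed disks, one per mirror, but the worst case is still 1 disk, which is what the sketch reports.)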
Also, very important: back up your data! The drives WILL fail on you eventually. Redundancy is great, but redundancy != backup. Backup also saves you from user error, which redundancy does not.
Absolutely.
If you take 4x 2TB internal in RAID 10, then you could stick 4x 1TB (or even 4x 2TB if you've got the cash and want multiple backups) in an external eSATA RAID enclosure, with an eSATA card in one of your PCIe slots, and back up to that array.
You even have to be careful with the external solutions if you want to use parity based arrays, as some are software implementations, and others use a simple, inexpensive hardware controller (RAID on a Chip, aka RoC), such as an Oxford 936 or similar part from other vendors (LSI, JMicron, VIA).
I would never say RAID 10 is always faster. That's a pretty bold, broad statement that I would not say is true.
RAID 5 or 6 with a good card is not a pain. You set it up and you are done.
Rebuild in background mode and you keep working.
Definitely true.
I suspect the confusion is based on experience with software implementations of RAID 5 (disastrous at some point = when a problem occurs, not if), not a proper hardware controller.
RAID 6 = ANY 2 drives; build it with a spare = ANY 3 drives.
It's better to think of a 4 disk minimum to build a RAID 6 (just as it's a 3 disk minimum for RAID 5).
Look into the 1880, depending on budget.
+1 on this, as it opens up future expansion.
I can foresee issues similar to the 1.5 to 3.0Gb/s transition, as we're starting into the 3.0Gb/s to 6.0Gb/s transition now (it can be particularly problematic with RAID cards). This is the biggest reason to try and use 6.0Gb/s HDDs (they are beginning to show up), especially if you're going to be running SSDs on it as well (one of the attractive reasons for getting this card).
Simply put, it offers a longer usable lifespan as it's compliant with the newer specification.
The 1222x and 1231ML-2G are basically the same card.
Actually, there's a difference that may or may not matter for a specific user's needs (see above).
You can get adapters of different types to get your card to give you ports that hook up to external cases like these:
http://www.pc-pitstop.com/sas_cables_adapters/
Do not attempt to use these types of adapters with SATA drives. The reason is that the voltages are too low (600 mV DC), so the array is unstable at best (you may not even get it to initialize).
If you're using SAS disks and/or SAS expanders, you can use them, as SAS uses much higher voltages (20V DC).
The reason I got my card is that the 1880 series was not out.
You, me, and most everyone else.
Now, depending on the model and what you do with it, you may need external adapters; and figure on the battery being an extra $100.
The 1222x at $515 is a great deal; no extra adapters etc. needed.
Again, skip the adapters with SATA disks. Even with SAS, there's a less expensive alternative.
Instead, use an internal to external MiniSAS cable.
This will work with SATA disks (been there, done all of this, so it's definitely not opinion; I never thought the contact resistance would have been significant enough to cause all the problems that occurred, but I was absolutely wrong on that one).
You want to go to http://www.areca.us and make sure the HDDs are on their list of OK drives to use with the controllers.
Get the RE3 if you are starting from scratch.
Absolutely. This simple step will save endless hours of aggravation due to trying to use what are ultimately the wrong drives (run through every possibility to get them running, only to find out that none of it helps at all).
Another thing that needs to be mentioned (just in case it hasn't, or hasn't sunk in) is that when using a RAID card, you must use enterprise drives (that's all that will be listed on Areca's HDD Compatibility List), as the recovery timings used are different from those of consumer models. It has to do with how recovery is handled between the system's controller and the RAID card (consumer = 0,0; enterprise = 7,0; values are in seconds, read & write respectively).
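To illustrate why those timings matter, here's a toy sketch (my own illustration; the 8 second controller timeout is an assumed ballpark, not an Areca spec). If a drive spends longer on internal error recovery than the card is willing to wait, the card drops an otherwise healthy drive from the array:

```python
# Toy model of drive error recovery vs. a RAID card's patience.
# CONTROLLER_TIMEOUT_S is an assumed ballpark figure, not from Areca's documentation.
CONTROLLER_TIMEOUT_S = 8.0

def stays_in_array(max_recovery_s):
    """True if the drive reports back (or gives up) before the card drops it."""
    return max_recovery_s <= CONTROLLER_TIMEOUT_S

print(stays_in_array(7.0))   # enterprise drive: recovery capped at ~7s -> True
print(stays_in_array(60.0))  # consumer drive: may retry internally for a minute or more -> False
```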
Thanks tons for all the reading. I am still trying to understand it all, but I think I am grasping it more and more each time. My budget really doesn't have a cap, but when I see $3,000 I start to flinch. I was sort of expecting to pay $1,000 at first, fully knowing that every time I quote what I want to spend, I have to double it for reality. So $2,000 sounds good to me.
$1000 is probably going to be a bit too tight, especially with a card. Drives (capacity and quantity needed) and enclosures will have the greatest impact on the cost.
It seems that you don't need an SSD for a boot drive, and you can actually put the scratch space on the primary array when using such a solution (i.e. add in a separate scratch array or just expand the existing array for performance down the road as funds become more available).
The point is, once you start to narrow it down, there are ways to plan for future expansion that keep the initial costs down while getting you started, and still allow you to grow in both capacity and performance (i.e. buy a sufficient card for what you really need, but only buy the drives and enclosures that are needed at that instant).
There are options (you might even have to transition from OS X's software RAID 10 implementation using enterprise disks to a RAID card later on; not sure of the capacity requirements just yet).
This is a worst case, but you can recycle the disks to the controller later on (no wasted funds). But you will need a proper backup system to start with, so you won't lose any data you've already created that needs to be kept.
When I was reading, the 1222x seemed to jump out also. I like the idea of expanding, but for now I will probably stick with 4 HDDs and 1 SSD and see how that goes. OR if you all think it would be smarter to go with 2 SSDs (one for OS and one for scratch), I could do that too.
The ARC-1222 (i or x) are good cards, and offer a nice price/performance ratio. It's why they're so popular for MP users (few if any would ever need to use SAS expanders; 1680 and 1880 series can use SAS expanders = up to 128 disks on one card).
As per SSDs, if your budget is that tight, then put the upgrade budget into RAM, the primary array, and backup systems.
I was not planning on getting an external case, but I am not opposed to that. I do like the idea of being able to expand.
You can use the internal card, the HDD bay adapter (assuming a new machine), and drives to get started. Then get the internal to external cable linked above, and a 4 bay SAS enclosure later on for expansion.
How easy is it to expand a RAID 5 system? How does that work? Can you run any number of drives on a RAID 5 if you have the 1222x card? Can all of those 8 ports/drives run together? So I could run 8 drives on the 1222x externally as one RAID 5?
- Expansion = easy. Just physically install the disk, and add it to the array (via the web interface). The system will do the rest.
- Minimum drive count = 3, up to the port limit of the card. As a general rule, I wouldn't ever go over 12 disks in a RAID 5, but that's not an option with the ARC-1222 anyway (limit = 8 in this case, which is safer).
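As a quick worked example of the expansion math (my own figures; one disk's worth of space always goes to parity in RAID 5):

```python
# Usable RAID 5 capacity as members are added, from the 3-disk minimum
# up to the ARC-1222's 8 ports. disk_tb is whatever drive size you standardize on.
disk_tb = 1.0   # e.g. 1TB RE3 units; use 2.0 for 2TB RE4s
for n in range(3, 9):
    print(f"{n} disks: {(n - 1) * disk_tb:.0f}TB usable")
```

So yes, all 8 ports can run as one RAID 5, and with 8x 1TB drives you'd end up with 7TB usable.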
I am thinking about getting a new Mac Pro too. The Quad one ($2,500). Seems like a smart move when thinking about my taxes!
This is important, as the two systems differ internally in terms of the adapters needed (how Apple dealt with the HDD bay connections).
RAID is like a disease:
those that have it know why they have it, how they got it, and know they can't get rid of it!
I would say avoid the disease!
I think it may already be too late.
Seriously though, unless you really have a reason, avoid it for now! Grow into it and wait.
So let's back up:
why do you need 4TB? What files, what kind of stuff? Video, photos, music? And what programs do you use?
I've asked all the pertinent questions, and am awaiting answers. Once given, we can go from there.
The mention of the $1000 target (not sure how firm this is) will very likely be problematic. The internal card + HDD adapter kit for the 09/10 systems = $589 before shipping (you still need enterprise disks, and with the 4TB capacity requirement, it will go over by a notable amount).
The 1TB WD RE3 drives are $130 each at newegg, and the 2TB RE4 drives are $290 each.
If the capacity requirement can be lowered, then the disks would only add $520 (not too horrible, but usable capacity = 3TB, not 4; and the additional member may make for a slightly faster setup). The RAID system totals out to $1109. Not horrible at all, but over the $1k limit mentioned.
But 3x of the 2TB units (results in 4TB usable capacity) = $870. RAID system totals out to $1459. This gets worse, and may be a tad slower. But it's also easier to upgrade with one additional disk (better performance and capacity than the setup listed just above).
The 1.5TB units wouldn't be as cost effective as either solution above IMO.
But none of this includes a backup system yet, which will push the price up further.
For cost reasons, it's best to go with a PM (Port Multiplier) enclosure kit (it includes the eSATA card), and 4x 1TB Greens. Run it as a JBOD to get the maximum capacity without increasing the risk of data loss.
Enclosure (Sans Digital TR4M) = $130
4x 1TB Green drives = $280
Backup System Total = $410
For the 3TB usable capacity configuration, Grand total = $1519. The 4TB set is $1869.
OEM disk = boot drive or could be added to the backup system (for the latter, it could shave off $70).
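Pulling the pricing above together, a quick tally using the figures quoted in this thread (prices as of the time of posting; the $589 card + adapter figure assumes the 09/10 system):

```python
# Tally of the build options costed above.
card_plus_adapter = 589                  # internal RAID card + 09/10 HDD bay adapter kit

array_options = {
    "4x 1TB RE3 (3TB usable)": 4 * 130,  # $520 in drives
    "3x 2TB RE4 (4TB usable)": 3 * 290,  # $870 in drives
}

backup_system = 130 + 4 * 70             # Sans Digital TR4M + 4x 1TB Greens = $410

for name, drive_cost in array_options.items():
    raid_total = card_plus_adapter + drive_cost
    print(f"{name}: RAID system ${raid_total}, grand total with backup ${raid_total + backup_system}")
```

This reproduces the $1109/$1519 and $1459/$1869 totals above, and makes it easy to see where shaving $70 off the backup (by reusing the OEM disk) would land you.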