Well, initially I was planning a stripe of a pair of discs; as the year went on, I would add another pair at a RAID-5 level. After another year I would hopefully total seven drives in RAID-5 with a hot spare (if possible). It's unfortunate that only HBAs are showing up with 6Gb/s support instead of actual RAID cards - with the Seagates I was interested in, the throughputs are beginning to edge up to the standard. So the ARC-1222 might be a saving grace, since I don't think I'll ever
HBAs are easier to design and get out the door. They do become the basis for the RAID cards, but those take more work (not just the design, but the testing involved), resulting in a later release date. If I had to guess, ~May 2010 before they'd show, assuming there aren't any major problems (they'll use the same basic design and parts from the 3.0Gb/s models wherever possible to reduce the workload, but the testing has to be complete; no shortcuts there).
Fortunately, the existing models will be able to run the Seagate 15K.7s just fine, as each of those SATA ports is good for ~270MB/s or so. That's still greater than the drive's throughput, so it won't throttle. It's with future drive generations that throttling will eventually happen, and especially so once SSDs go more mainstream. They're already really close as is, so the idea of getting a 6.0Gb/s card does make sense.
So you'll either have to be patient, or bite the bullet and pick up something now if the need is that pressing.
My throughput requirement is a 165 MB/s write speed (I think) for a live-recorded 1080 stream. With a single one of the 15K.7 drives I should be able to accomplish that task - unfortunately, I assume that speed drops as the data gets written further toward the inner tracks. That, and I might start recording other streams of data, or work with footage at very high resolution - hence why I would start at two drives and keep expanding.
Yes, as you fill the drive, throughputs do slow down. But if you keep usage at around the 50% capacity mark, this can be mitigated. Otherwise, if you need to be able to fill the capacity, you have to plan from the worst case (the innermost tracks). Some benchmarks will give this to you (i.e., capacity in % with a throughput at each point, usually in 10% increments).
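To put rough numbers on that, here's a minimal sketch (Python, with made-up drive figures - plug in your own benchmark data) of planning from the worst case: take the innermost-track throughput, derate it a bit for overhead, and see how many striped drives the 165 MB/s stream actually needs.

import math

# Placeholder numbers for illustration, not measured 15K.7 figures.
required_write = 165.0   # MB/s needed for the live 1080 stream
inner_track    = 120.0   # MB/s, hypothetical drive at 100% capacity (innermost tracks)
overhead       = 0.85    # derate for filesystem/RAID overhead

# Plan from the innermost tracks so the array still keeps up when full.
worst_case_per_drive = inner_track * overhead
drives_needed = math.ceil(required_write / worst_case_per_drive)

print(f"Worst case per drive: {worst_case_per_drive:.0f} MB/s")
print(f"Striped drives needed to sustain {required_write:.0f} MB/s: {drives_needed}")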
Given your speed requirements, you don't need that many drives, whether SAS or SATA. Assuming there's a budget involved, you may be better served with a SATA card (so long as it's less expensive than its SAS counterpart). You just have to pay close attention to cable length when using SATA, as the spec is:
1.0m = passive signals
2.0m = active signals
SATA's advantages are:
1. Cost/GB
2. Parallelism is less expensive, yet it can reach or exceed what can be achieved with SAS (save random access) on the same or even a lower budget, and provide additional capacity. That saves money in terms of having to add or swap drives to increase capacity as often.
Something to think about, as I'm not convinced SAS is the best way for you to go, given your throughput and usage description (1080p = large sequential files).
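If it helps to see the parallelism point as arithmetic, here's a small sketch with entirely hypothetical prices and per-drive throughputs (substitute real quotes and benchmarks); the idea is just that more, cheaper SATA spindles can match or beat a smaller SAS set on sequential work while giving you far more capacity for the same money.

# Hypothetical figures for illustration only -- swap in real prices/benchmarks.
budget = 1200.0  # dollars for drives alone

sata = {"price": 150.0, "seq_mb_s": 120.0, "capacity_gb": 1000}  # assumed 7200rpm SATA
sas  = {"price": 300.0, "seq_mb_s": 190.0, "capacity_gb": 300}   # assumed 15K SAS

for name, d in (("SATA", sata), ("SAS", sas)):
    n = int(budget // d["price"])            # drives the budget buys
    agg_throughput = n * d["seq_mb_s"]       # ideal striped (RAID-0) scaling
    total_capacity = n * d["capacity_gb"]
    print(f"{name}: {n} drives, ~{agg_throughput:.0f} MB/s sequential, {total_capacity} GB raw")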
You could always use a SAS card and a couple of SAS drives for an OS/applications array (random access throughput can make a difference there, though overall it won't matter that often if everything ends up on a single array), and use a SATA array for the working data files.
I would be archiving data to SATA drives as time went along as well to keep the array speedy, but I might be working with data/footage on the SAS discs for some time.
You can always use a simple eSATA card + port multiplier (PM) enclosure for archival/backup purposes. It works, and it's cost effective.
While that certainly is a savings, I was also under the impression that there's a caveat against combining external drives and internal drives in RAID... though I never knew the reason for it.
There's NO performance hit, if that's what you're wondering.
In terms of swapping a bad drive, external hot-swap enclosures are faster (MPs, and most workstations, do NOT have hot-plug capabilities; the card gives you hot-swap, but you need the hot-plug aspect as well).
You also have to be extremely careful with cabling. The use of port adapters won't work well with SATA drives, due to the low signal voltages (~400 - 600 mV) and the contact resistance. SAS uses higher signaling voltages and is much more stable as a result, which is why its max cable length = 8.0m.
While indeed it would look like a good idea to buy instead of DIY, I already have the case I want to use for the DIY build, for sheer aesthetics. Which was why I was wondering. Thanks for the list of parts - I was concerned about what specifically I was going to need. The PSU, however, concerns me slightly - most of the PSUs around the wattage I would be using aren't the best out there, or at least that's my impression (dirty power). I assume there are good 300-400 Watt PSUs out there, but I don't know where to look.
Got it. I figured it was an attempt to save funds where possible.
As I mentioned, the Enhance products (and ProAvio as well, as it's the same enclosure) do come in silver. The smaller size is also nice, and potentially makes them easier to place - particularly if used with SATA drives and 1.0m cables (they must sit next to the MP).
As per the PSU, you'd be surprised. The ones used in enclosures are inexpensive units, and there's usually nothing special about them (some are redundant units, which is a pair; not cheap, and the power quality is no better than a single unit - it could even be a little worse given the size limitations). I'd just look to a decent maker. Perhaps Seasonic would suffice for that small a unit (you can go up in wattage, but I wouldn't bother with more than 400W).
Besides, you're only using the 12V rail(s) to run drives. (BTW, if you use a standard computer PSU, you'd need to jumper the green wire to a black ground wire on the main ATX connector, or it won't power on.)
Well, if one could assume that a 4GB DDR3 DIMM (the standard nowadays) costs around $110, that's $2,200 for the same capacity as the entry-level PCI-E SSD. That really is a gouge "on the streets".
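Spelling that arithmetic out (assuming the entry-level card is an 80GB model, which is what the $2,200 figure implies):

# DRAM-vs-PCIe-SSD cost comparison from the post above.
# Assumes an 80GB entry-level card; $2,200 at $110 per 4GB DIMM implies 20 DIMMs.
dimm_gb, dimm_price = 4, 110.0
ssd_gb = 80

dimms_needed = ssd_gb // dimm_gb          # 20 DIMMs
dram_cost = dimms_needed * dimm_price     # $2,200
print(f"{dimms_needed} x {dimm_gb}GB DIMMs = {dram_cost:.0f} USD "
      f"(~{dram_cost / ssd_gb:.2f} USD/GB in DRAM)")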
Yep. Especially given the cost difference with Flash. Granted, there's the controller chip, and other components as well. There's a good bit added to the production cost for R&D recovery, and finally profit (all direct and indirect costs get figured in, of which R&D is one such expense, but it's a big one in this case).
Well, if it delivers a product that works, I guess... So, I guess the next question is, how do SSD makers switch off the cells that are known to be bad?
They don't actually switch them off, but do the equivalent of a remap (skips it).
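Conceptually it's just a translation table: the controller maps logical blocks to physical ones and routes around anything it has flagged as bad. A toy sketch of the idea (not any vendor's actual firmware logic):

# Toy illustration of bad-block remapping -- not real SSD firmware.
TOTAL_BLOCKS = 16
bad_blocks = {3, 7}   # blocks the controller has flagged as unusable

# Build a logical -> physical map that simply skips the bad blocks.
usable = [b for b in range(TOTAL_BLOCKS) if b not in bad_blocks]
remap = {logical: physical for logical, physical in enumerate(usable)}

print(remap[2])   # -> 2 (unchanged)
print(remap[3])   # -> 4 (logical block 3 lands on the next good physical block)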