The Areca is by far the better card. If the OP were comparing it with an RR43xx, it would be closer, as Areca actually makes those for HighPoint, and they're based on the ARC1680 series. Go with the Areca 1680X and use RAID 10 instead....
S-
It would work for that, if a bit pricey for a stripe set. You could have gone with the RR24xx cards (SAS controller based), and used the system's resources (Fake RAID controller). I ordered the HighPoint as I found it for $100 less. Not running a sophisticated setup; running to an external enclosure in RAID 0 for video streaming and general file storage.
... RR43xx, it would be closer, as Areca actually makes those for HighPoint, and they're based on the ARC1680 series...
I didn't get it online, but over the phone with a few of the engineers. Hey Nano.... I've seen you post this several times but have not been able to find any other source to confirm. Can you please post a link???
But as per the RAID5 comment, meh. It does contain an NVRAM solution to the write hole issue...
I don't see it as "good or bad". It depends on the specifics. If the drive quantity (or budget) is limited, combined with a specific minimum capacity requirement that prevents other array types (parity or non-parity), it has its place. Face it, RAID 5 is considered bad by anyone that runs high availability disk subsystems for a living. I know because I do, and I know lots of people at the largest data center in Silicon Valley that do too. You passing RAID 5 off as something good to people who don't know any better is a bad thing.
I won't run it because of the issues when there is a problem. And you had better plan for there being a problem. That's where RAID 5 fails......
S-
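To put a rough number on the "plan for there being a problem" part of the argument above, here's a back-of-the-envelope sketch (my own Python illustration, not from any vendor documentation). It estimates the odds of hitting an unrecoverable read error (URE) while rebuilding a degraded RAID 5, assuming the commonly quoted consumer-drive spec of 1 URE per 10^14 bits read; the drive sizes, counts, and that URE figure are all assumptions you'd want to replace with your own drives' numbers.

```python
# Back-of-the-envelope odds of an unrecoverable read error (URE) while
# rebuilding a degraded RAID 5. Assumes the often-quoted consumer spec
# of 1 URE per 1e14 bits read; real drives vary, so illustrative only.

URE_PER_BIT = 1.0 / 1e14  # assumed error rate (consumer-class drives)

def rebuild_failure_odds(drive_tb, drives_total):
    """Chance of at least one URE while rebuilding one failed drive.

    A RAID 5 rebuild has to read every surviving drive end to end,
    so (drives_total - 1) full disks' worth of bits get read.
    """
    bits_read = (drives_total - 1) * drive_tb * 1e12 * 8
    return 1.0 - (1.0 - URE_PER_BIT) ** bits_read

if __name__ == "__main__":
    for drives in (4, 6, 8):
        p = rebuild_failure_odds(drive_tb=1.0, drives_total=drives)
        print(f"{drives} x 1TB in RAID 5: ~{p:.0%} chance of a URE during rebuild")
```

Even at modest drive counts the numbers get uncomfortable, which is exactly the "it falls on its face when there's a problem" point.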
Here's a source, and it's a decent price in the US. No idea where you are, but I'd think 0.5m/18" is adequate for the internals. Sorry for the ignorance, but are there any suggestions on Mini SAS to 4x SATA cables? Additionally, I am still seeking an enclosure to house my (2) 5-bay backplanes and the power supply.
If you don't want to/can't follow what's listed above, then the only thing I can think of is to keep an eye on eBay or Craigslist for a "parts" system. I have been scouring around in pursuit of either a Mac Pro or G5 case to act as the enclosure. Since this will be a home setup and more than likely visible, I am looking for something aesthetically pleasing as well as quiet.
Was thinking Mac Pro or G5 to maintain some uniformity with my current machine, but can't seem to find a chassis at a decent price.
I know you don't see RAID 5 as "bad" because you keep recommending it to people. I don't see it as "good or bad". It depends on the specifics. If the drive quantity (or budget) is limited, combined with a specific minimum capacity requirement that prevents other array types (parity or non-parity), it has its place.
Those that:
A. Have additional drive bays
B. Need a degree of redundancy that a type 5 array can't deliver
C. Have some other priority, such as performance in a specific area,
will make the necessary changes to do so, even if it means adding additional funds to make it happen. But in systems that have some sort of limitation, a type 5 array may be the compromise that best suits their needs.
Just because you don't like it, and larger organizations can take another route (make the needed financial commitment to do so), doesn't make it a total waste of time or resources. It's certainly a better alternative to a stripe set (type 0) for maximum capacity on a given set of disks. Physical drive locations and money matter for individuals (the two aren't unrelated). It can be simplified to funding alone, as the physical location issue can be solved with external enclosures, and most individuals and/or SMBs don't have the financial backing to do any more. Not every business has what essentially amounts to unlimited funds for the situation.
As most of the people asking on MR are individual pros (or students), I've not only assumed, but asked about their budgets. Very few, if any, have had the funds to go past RAID5. I've never said it wasn't a compromise, but it has its place. They typically have daily access to the systems, and can see if there's an issue. They can typically afford some downtime as well, as they're working for themselves, and really don't have a choice, simply because they don't have an IT staff. Any issues that come up, they have to address themselves. So there's no reason a failing set should be left unaddressed for any length of time, as they're using it themselves. It's not a server, but a workstation environment. At least that's what seems to be going on, judging by the questions and PM contact I've had here on MR.
In a remote setting, for example, no way would I trust it, let alone recommend it. That would mean a minimum of type 6, and only if a 10 (or other array type) isn't possible, needed, or practical for the budget.
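To make the capacity/redundancy trade-off above concrete, here's a small sketch of my own (hypothetical 1TB drives in a 4-bay box, nothing from any specific product) showing what the same set of disks yields under each of the array types being argued about:

```python
# Usable capacity vs. failure tolerance from the same set of drives,
# for the array types discussed above. Hypothetical figures for
# illustration; RAID 10 here means striped mirrors on an even count.

def usable_capacity(level, n_drives, drive_tb):
    """Return (usable TB, guaranteed drive failures survivable)."""
    if level == "RAID 0":    # stripe: all capacity, no redundancy
        return n_drives * drive_tb, 0
    if level == "RAID 1":    # n-way mirror of every drive
        return drive_tb, n_drives - 1
    if level == "RAID 5":    # single parity
        return (n_drives - 1) * drive_tb, 1
    if level == "RAID 6":    # dual parity
        return (n_drives - 2) * drive_tb, 2
    if level == "RAID 10":   # striped mirrors
        return (n_drives // 2) * drive_tb, 1
    raise ValueError(level)

if __name__ == "__main__":
    drives, size_tb = 4, 1.0  # e.g. a 4-bay enclosure full of 1TB disks
    for level in ("RAID 0", "RAID 1", "RAID 5", "RAID 6", "RAID 10"):
        cap, tol = usable_capacity(level, drives, size_tb)
        print(f"{level:7}: {cap:.0f} TB usable, survives {tol} failure(s)")
```

On four 1TB drives, RAID 5 is the only redundant option that gets you 3TB usable, which is exactly the limited-bay, limited-budget corner case being described.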
Not quite sure of your thinking, unless you're intending to shut the single drives down (those off the logic board's controller). Glad to see you understand the drives off the logic board won't support hot swap, unless you spring for the Server edition ($500USD, ouch, just for that feature). Presently planning to plug the RocketRAID 3522 into my Mac Pro to run the setup. I purchased (2) BPU-350SATA 3x5.25" to 5x3.5" RAID cages, mainly so I could power each of the drives individually.
This would be possible, but as the cables are 1.0m in length (internal->external->internal), they may not be long enough for your intended placement in relation to the MP. As it's a passive installation, the max cable length needs to be 1.0m, and that's going to be tight. The case used for the drives will have to sit right next to the MP for the cables to reach. I figured I could sort of "ghetto" rig some type of tower enclosure by running the (2) Mini SAS to 4x SATA cables through the plates in the back of the tower enclosure directly into the backplanes. Additionally, I also thought that I could possibly run an additional 2 HDDs off the 2 hidden SATA ports of my Mac Pro, with 1 HDD in each of the backplanes.
That backplane connector has 1 SATA port per drive, so it would work. I assume that I can run one drive in each backplane independent of the RocketRAID controller, running directly to the "hidden" SATA ports of my Mac Pro (understanding that they can't be hot swapped).
Check to see if the RR3522 supports staggered spin-up. If so (I presume it does), it will spin up drives in sets of 4. So think 6x drives worst case (includes those off the logic board). At ~40W each, that produces 240W as a minimum. Last question: assuming I am only powering the 2 backplanes, with (10) 7,500 RPM drives, what is my ideal PSU setup?
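If you want to run the PSU numbers yourself, a rough sizing sketch follows. The per-drive wattages and the 25% headroom are my own ballpark assumptions (typical of 3.5" drives), not figures from the BPU-350SATA or RR3522 documentation, so check the actual drive data sheets before buying a supply.

```python
# Ballpark PSU sizing for a multi-drive enclosure. The per-drive figures
# are assumptions typical of 3.5" drives; check the actual data sheets.

SPINUP_W = 40.0   # assumed worst-case draw per drive at spin-up
IDLE_W = 8.0      # assumed draw per drive once spinning
HEADROOM = 1.25   # 25% margin so the supply never runs flat out

def psu_watts(n_drives, staggered_group=4):
    """Estimate the wattage the enclosure PSU should handle.

    With staggered spin-up, only `staggered_group` drives surge at once;
    the rest are assumed to be already spinning at idle draw.
    """
    surge = min(staggered_group, n_drives)
    peak = surge * SPINUP_W + (n_drives - surge) * IDLE_W
    return peak * HEADROOM

if __name__ == "__main__":
    print(f"10 drives, staggered spin-up: ~{psu_watts(10):.0f} W")
    print(f"10 drives, all at once:       ~{psu_watts(10, 10):.0f} W")
```

The gap between the two printed figures is why staggered spin-up matters so much for a 10-drive box on a modest supply.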
I presume you don't care for parity-based RAID in any shape, form, or fashion. That's fine, but keep in mind, my reasoning is based in reality. Every single case has some limits, and the end result is a compromise. It has its place for DAS on a small system and/or limited budgets. I know you don't see RAID 5 as "bad" because you keep recommending it to people.
RAID 6 is just as bad as RAID 5. It's both better and worse. No one with real world experience with redundant systems uses RAID 6 either.
If you need speed, run a RAID 0 array with daily image backups along with Time Machine.
If you need high availability, run RAID 1 along with Time Machine.
If you need high availability and speed, run RAID 10 along with Time Machine.
All of these solutions are better than RAID 5 or RAID 6.
S-
No, I am perfectly content with ZFS with RAID-Z and use it. It is so much better than RAID 5. They aren't even in the same league. I presume you don't care for parity-based RAID in any shape, form, or fashion. That's fine, but keep in mind, my reasoning is based in reality. Every single case has some limits, and the end result is a compromise. It has its place for DAS on a small system and/or limited budgets.
There are situations where a RAID 1 is too small in capacity. RAID 0 isn't an option, as there is a requirement for redundancy, no matter how well kept the backups are. 10 is too small in terms of capacity, as no disk large enough is made for the available drive locations, or it's too expensive if there are numerous drive locations available.
Then there are instances where a nested parity array is needed (50/60). And we haven't even gotten to ZFS/RAID-Z/RAID-Z2 yet. But ultimately, NONE of it is perfect.
Every single case is different, and has to be evaluated individually. That's it.
Would I want a hospital (critical patient files, such as CAT scan results), bank account data,... kept on a type 5 array? NO. No matter the backup system in place. But they have much larger systems, and have budgeted to secure the data with other array types. And then there's the backups... onsite, offsite, and even these areas are hopefully n+1 or better.
I never said they weren't. I happen to really like ZFS/RAID-Z/RAID-Z2, particularly to avoid the write hole issue without some forced hardware solution (NVRAM methodologies in good RAID cards). Batteries will crap out in time after all... No, I am perfectly content with ZFS with RAID-Z and use it. It is so much better than RAID 5. They aren't even in the same league.
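For anyone following along who hasn't run into the "write hole" before, here's a toy illustration (pure Python of my own, not how any real controller or ZFS is actually implemented): data and parity in a RAID 5 stripe are written in separate steps, so a power cut between the two leaves parity stale, and a later rebuild quietly reconstructs garbage. NVRAM on a good card, or ZFS's copy-on-write design, exists precisely to close that window.

```python
# Toy model of the RAID 5 "write hole": data and parity are written in
# two separate steps, so losing power between them leaves stale parity.
# Pure illustration; real controllers and ZFS work very differently.

from functools import reduce
from operator import xor

def parity(blocks):
    """XOR parity across the data blocks of one stripe."""
    return reduce(xor, blocks)

# A 3-disk stripe: two data blocks plus one parity block.
data = [0b10110100, 0b01101001]
stripe_parity = parity(data)

# Update data block 0, then "lose power" before parity is rewritten.
data[0] = 0b11111111
power_lost_before_parity_write = True
if not power_lost_before_parity_write:
    stripe_parity = parity(data)  # this step never happens

# Later, the disk holding block 1 dies; rebuild it from block 0 + parity.
reconstructed = data[0] ^ stripe_parity
print(f"original block 1:      {0b01101001:08b}")
print(f"reconstructed block 1: {reconstructed:08b}  <- silently wrong")
```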
You've analyzed your situation, had an alternative, if not an outright preference, and made the necessary equipment purchases (i.e. financial planning as well as technical). RAID 5 is just fine while everything is working. It falls on its face when there are drive failures. But I hope for the best and plan for the worst. So RAID 5 is not part of my plans.
Arguable in certain situations, though. All of it is. I've seen a fair few systems that needed some redundancy in a non-ZFS-capable OS (at a minimum; better would be preferred), but a 0 is totally out, even with a minimum performance need due to software. 4 drive bays, so 10 was out, as the drive capacity was too small, and they couldn't get the external enclosures and drives needed (especially critical when the control is software based, so the ports available are very limited too). Short on cash. Seen it sooo many times, and getting worse lately. RAID 5 is simply a bad choice today.