Discussion in 'Mac Pro' started by dsa420, Oct 12, 2009.
SSIA, it has come down to these two, the pricing seems to be within $70 of each other.
What kind of RAID(s) do you intend to set up? Easier to advise if we know what your intentions are.
Either RAID 5 or 0, nothing too fancy. Just looking for the fastest and most reliable solution, with the best ongoing driver support.
Do you want to boot any systems from the RAID drives?
No need to boot from RAID
RAID 5 sucks.....
Go with the Areca 1680X and use RAID 10 instead....
The Areca is by far the better card. If the OP were comparing it with an RR43xx, it would be closer, as Areca actually makes those for Highpoint, and they're based on the ARC-1680 series.
But as per the RAID 5 comment, meh. The Areca does contain an NVRAM solution to the write hole issue in parity based arrays, and the use of 10 may not allow for adequate capacity due to the limited drive bays. External enclosures would be required, and at a notable cost increase. Otherwise, 10 might be the better way to go.
Hard to say really, without any details from the OP.
I ordered the HighPoint as I found it for $100 less. Not running a sophisticated set up, just running to an external enclosure in RAID 0 for video streaming and general file storage.
It would work for that, if not a bit pricey for a stripe set. You could have gone with the RR24xx cards (SAS controller based), and used the system's resources (Fake RAID controller).
At any rate, just make sure you've an adequate backup system, as it's critical for any drive, but more so with a stripe set as you don't have any redundancy at all. So any failure will require you to rebuild off of your backup/s. If not, the data's gone, and no way to get it back.
Hey Nano.... I've seen you post this several times but have not been able to find any other source to confirm. Can you please post a link???
I didn't get it online, but over the phone with a few of the engineers. Even the guys over at Atto Technologies were aware of it.
But there is a hint or two if you take a really close look at the actual card (check out the actual P/Ns of the components soldered down) and the spec sheet. As it turns out, the majority of the design is identical. The differences come down to port count, whether or not the cache is expandable (DIMM or SODIMM slot), and I believe the ROM size (per Highpoint, as a means of cutting costs).
It's just Systems Engineering of the ARC-1680 series applied to the specification request from Highpoint.
Face it, RAID 5 is considered bad by anyone that runs high availability disk subsystems for a living. I know because I do and know lots of people at the largest data center in the Silicon Valley that do too. You passing RAID 5 off as something good to people that don't know any better is a bad thing.
I won't run it because of the issues when there is a problem. And you had better plan for there being a problem. That's where RAID 5 fails......
I don't see it as "good or bad". It depends on the specifics. If the drive quantity (or budget) is limited, combined with a specific minimum capacity requirement that prevents other array types (parity or non parity), it has its place.
Those who:
A. Have additional drive bays,
B. Need a degree of redundancy that a type 5 array can't deliver, or
C. Have some other priority, such as performance in a specific area,
will make the necessary changes to do so, even if it means adding additional funds to make it happen. But in systems that have some sort of limitation, a type 5 array may be the compromise that best suits their needs.
Just because you don't like it, and larger organizations can take another route (make the needed financial commitment to do so), doesn't make it a total waste of time or resources. It's certainly a better alternative to a stripe set (type 0) for maximum capacity on a given set of disks. Physical drive locations and money matter for individuals (and the two aren't unrelated; it can be simplified to funding alone, as the physical location issue can be solved with external enclosures). Most individuals and/or SMBs don't have the financial backing to do any more. Not every business has what essentially amounts to unlimited funds for the situation.
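The capacity trade-off being argued here can be sketched with a toy calculation (my own illustration, not from any vendor docs; it assumes equal-sized drives and the textbook overhead for each level):

```python
# Usable capacity per RAID level on the same set of equal-sized drives.
# Toy numbers only; real arrays lose a bit more to metadata/formatting.

def usable_tb(level: str, n: int, drive_tb: float) -> float:
    if level == "0":
        return n * drive_tb          # stripe set: no redundancy at all
    if level == "1":
        return drive_tb              # mirror pair (n == 2)
    if level == "10":
        return (n // 2) * drive_tb   # striped mirrors (n even)
    if level == "5":
        return (n - 1) * drive_tb    # one drive's worth of parity
    if level == "6":
        return (n - 2) * drive_tb    # two drives' worth of parity
    raise ValueError(f"unknown level: {level}")

# 4 bays of 1 TB drives, the kind of limited setup described above:
for lvl in ("0", "10", "5", "6"):
    print(f"RAID {lvl}: {usable_tb(lvl, 4, 1.0)} TB usable")
```

With only 4 bays, a type 5 array keeps 3 TB usable while still surviving a single drive failure; 10 drops to 2 TB, which is exactly the capacity squeeze described above.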
As most of the people asking on MR are either individual pros (or students), I've not only assumed, but asked about their budgets. Very few, if any, have had the funds to go past RAID 5. I've never said it wasn't a compromise, but it has its place. They typically have daily access to the systems, and can see if there's an issue. They can typically afford some down time as well, as they're working for themselves and really don't have a choice, simply because they don't have an IT staff. Any issues that come up, they have to address themselves. So there's no reason the set should suddenly fail and be left unaddressed for any length of time, as they're using it themselves. It's not a server, but a workstation environment. At least that's what seems to be going on, judging by the questions and PM contact I've had here on MR.
In a remote setting for example, no way would I trust it, let alone recommend it. That would mean a minimum of type 6, and only if the possibility of using a 10 (or other array type) isn't possible, needed, or practical for the budget.
I've seen extremely large budgets, but never truly unlimited. Meaning, if I'm asked about say a 100 drive array using FC, and I say the cost is $10Trillion USD, they're:
1. Going to laugh their @sses off
2. Wonder what I'm smoking, swallowing,....
3. Tell me in some polite manner to "Go fly a kite, and we'll call you".
Sorry for the ignorance, are there any suggestions on Mini SAS to 4 SATA cables? Additionally, I am still seeking an enclosure to house my (2) 5 bay backplanes and the power supply.
I have been scouring around in pursuit of either a Mac Pro or G5 case to act as the enclosure. Since this will be a home set up and more than likely visible, I am looking for something aesthetically pleasing as well as quiet.
Was thinking Mac Pro or G5 to maintain some uniformity with my current machine, but can't seem to find a chassis at a decent price.
Here's a source, and it's a decent price in the US. No idea where you are, but I'd think 0.5m/18" is adequate for the internals.
What are the details on the backplane enclosures?
Each fits in 3*5.25" drive bays?
Assuming this is the case, then look for a mid tower that has 6*5.25" bays on the front. Then use an adequate PSU, and bridge the green and black wires (allows it to turn on). You'd also have to find a SATA to MiniSAS board (or however you plan to connect it to the card, such as eSATA, or MultiLane/InfiniBand connectors). You can add such a card (i.e. SATA to SFF-8088, Port Multiplier, SATA to MultiLane,...) to a PCI bracket, or mod the enclosure for a fit. Perhaps make a plate to cover the main board area (where you find audio, Ethernet,... connections).
It's not impossible, but it will take some work. Otherwise, sell off the existing backplane units and get an enclosure. Take a look at Enhance Technologies or even Sans Digital (not all inclusive, but they do make nice units, both functionally and in appearance, that match well with a MP).
If you don't want to/can't follow what's listed above, then the only thing I can think of, is to keep an eye on eBay or Craig's List for a "parts" system.
Hope this helps.
Define a decent price. A Fleabay search yielded a few good options that were sub-$100.
As for keeping things quiet, just pick out quiet case fans. In fact, how necessary are they? I can see the need to move air over HDDs, but I don't think it would have to be a whole lot as long as it's uniform.
Personally, since I was considering a DIY case, my big worry is about the PSU. I was thinking about using something akin to the 5.25" PSUs that some members have been using for their 4870s, but I don't know if that's the best option to use.
Thanks, good stuff. As you can tell I am kind of a novice and starting to realize I might be in over my head.
Presently planning to plug the RocketRAID 3522 into my Mac Pro to run the set up. I purchased (2) BPU-350SATA 3x5.25" to 5x3.5" RAID cages, mainly so I could power each of the drives individually.
I figured I could sort of "ghetto" rig some type of tower enclosure by running the (2) Mini SAS to 4 SATA cables through the plates in the back of the tower enclosure directly into the backplanes. Additionally, I thought I could possibly run 2 more HDDs off the 2 hidden SATA ports of my Mac Pro, with 1 of those HDDs in each of the backplanes.
I assume that I can run one drive in each backplane independent of the RocketRAID controller, running directly to the "hidden" SATA ports of my Mac Pro (understanding that they can't be hot swapped).
Is my logic totally screwed up here, does my proposed set up seem crazy?
Last question, assuming I am only powering the 2 backplanes, with (10) 7,500 RPM drives, what is my ideal PSU set up?
I know you don't see RAID 5 as "bad" because you keep recommending it to people.
RAID 6 is just as bad as RAID 5; it's better in some ways and worse in others. No one with real world experience with redundant systems uses RAID 6 either.
If you need speed, run a RAID 0 array with daily image backups along with Time Machine.
If you need high availability, run RAID 1 along with Time Machine.
If you need high availability and speed, run RAID 10 along with Time Machine.
All of these solutions are better than RAID 5 or RAID 6.
Not quite sure of your thinking, unless you're intending to shut the single drives down (those off the logic board's controller). Glad to see you understand the drives off the logic board won't support hot swap, unless you spring for the Server edition ($500USD, ouch, just for that feature).
This would be possible, but with the cables being 1.0m in length (internal->external->internal), they may not be long enough for your intended placement in relation to the MP. As it's a passive installation, the max cable length is 1.0m, and that's going to be tight. The case used for the drives will have to sit right next to the MP for the cables to reach.
That backplane connector has 1 SATA port per drive, so it would work.
Check to see if the RR3522 supports staggered spin up. If so (I presume it does), it will spin up drives in sets of 4. So think 6 drives worst case (includes those off the logic board). @ ~40W each, that produces 240W as a minimum.
I'd look at a 300W unit though, as there are rails on the PSU you won't use.
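That 240W figure can be reproduced with a quick back-of-the-envelope calculation (the ~40W spin-up draw per drive is an assumed worst case, not a datasheet value):

```python
# Worst-case simultaneous spin-up draw for the 10-drive setup:
# the controller staggers spin-up in groups of 4, but the 2 drives on
# the logic board ports spin up together at power-on, so the worst
# case is one group of 4 plus those 2 drives drawing peak at once.

SPINUP_W = 40    # assumed worst-case spin-up draw per 3.5" drive
GROUP = 4        # drives the controller spins up at once (staggered)
UNSTAGGERED = 2  # drives on the logic board ports (no staggering)

peak_w = (GROUP + UNSTAGGERED) * SPINUP_W
print(f"worst-case spin-up draw: {peak_w} W")  # 240 W
```

Hence the suggestion of a ~300W unit: it covers the spin-up peak with headroom to spare.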
I presume you don't care for parity based RAID in any shape, form, or fashion. That's fine, but keep in mind, my reasoning is based in reality. Every single case has some limits, and the end result is a compromise. It has its place for DAS on a small system and/or limited budgets.
There are situations where a RAID 1 is too small in capacity. RAID 0 isn't an option, as there is a requirement for redundancy, no matter how well kept the backups are. 10 is too small in terms of capacity, as no disk large enough is made for the available drive locations, or it's too expensive if there are numerous drive locations available.
Then there's instances where a nested parity array is needed (50/60). And we haven't even gotten to ZFS/Z-RAID/Z-RAID2 yet. But ultimately, NONE of it's perfect.
Every single case is different, and has to be evaluated individually. That's it.
Would I want a hospital (critical patient files, such as CAT scan results), bank account data,... to keep records on a type 5 array? NO. No matter the backup system in place. But they've much larger systems, and placed the budget to secure the data in other array types. And then there's the backups... onsite, offsite, and even these areas are hopefully n+1 or better.
No, I am perfectly content with ZFS with RAID-Z and use it. It is so much better than RAID 5. They aren't even in the same league.
RAID 5 has the write hole problem. Battery back-up protects against the write hole problem only if the battery actually works. Then there is the performance hit the RAID 5 array experiences during the painfully long array rebuild process if a drive does fail. Then, if a second drive fails during the rebuild process, all data is lost. Even worse, if even one unrecoverable read error occurs during the rebuild process, all data can be lost. With larger drives, this is becoming much more of an issue.
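A rough sketch of why larger drives make this worse (back-of-the-envelope only; the 1-per-10^14-bits URE rate is a typical consumer drive spec-sheet figure, and real failure behavior is more complicated than an independent per-bit model):

```python
# Probability of hitting at least one unrecoverable read error (URE)
# during a RAID 5 rebuild, which must read every bit on all surviving
# drives. A single URE mid-rebuild can take out the whole array.

URE_PER_BIT = 1e-14  # typical consumer spec: 1 error per 1e14 bits read

def rebuild_ure_prob(drives_total: int, drive_tb: float) -> float:
    # Bits that must be read from the n-1 surviving drives.
    bits_read = (drives_total - 1) * drive_tb * 1e12 * 8
    # P(at least one URE) = 1 - P(no URE on any bit read)
    return 1 - (1 - URE_PER_BIT) ** bits_read

for tb in (0.5, 1.0, 2.0):
    p = rebuild_ure_prob(drives_total=5, drive_tb=tb)
    print(f"5x {tb} TB array: ~{p:.0%} chance of a URE during rebuild")
```

Doubling the drive size roughly doubles the bits read during a rebuild, so the risk climbs quickly as capacities grow.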
RAID 5 is just fine while everything is working. It falls on its face when there are drive failures. But I hope for the best and plan for the worst. So RAID 5 is not part of my plans.
ZFS with RAID-Z has none of these issues. I wish ZFS were an option on the Mac today.
RAID 5 is simply a bad choice today.
I never said they weren't. I happen to really like ZFS/Z-RAID/Z-RAID2, particularly to avoid the write hole issue without some forced hardware solution (NVRAM methodologies in good RAID cards). Batteries will crap out in time after all... So don't think I'm under the illusion the NVRAM solution is totally ideal. It's not. Spares and MTBR will really help it out, but it's not perfect.
But ZFS and its variants aren't possible in a DAS system when the primary OS isn't Solaris/OpenSolaris/Linux,... It's just not yet possible in other OSs for the moment, as their file systems suck (and can't seem to be modified quickly, given too much code has to be re-written due to dependencies on the old system- damnit). OS X was supposed to implement it in the SL Server edition, but ended up cutting it. Oh well.
You've analyzed your situation, and had an alternative, if not outright preference, and made the necessary equipment purchases (i.e. financial planning as well as technical).
It's always a case by case basis to me. No matter the situation.
Arguable in certain situations though. All of it is. I've seen a fair few systems that needed some redundancy in a non-ZFS-capable OS (at minimum; better would be preferred), but a 0 is totally out, even with a minimum performance need due to software. 4 drive bays, so 10 was out, as the drive capacity was too small, and they couldn't get the external enclosures and drives needed (especially critical when the control is software based, so the available ports are very limited too). Short on cash. Seen it sooo many times, and it's getting worse lately. That left parity. Not enough performance from 6, so 5 was the solution by default. Even having to use particular drives to meet the performance target. Not ideal, but it has its place.
Where I've seen issues, and it didn't seem to matter what the array type was (other than stupidity with backups - individuals here, not enterprise users with proper IT staff), was the complete lack of any spares. No extra battery, card, and especially drives. Odd, that SOHO/SMB's (those that usually only just miss SOHO in terms of size) that don't know much about their systems/needs, can finally understand the need for SAS (when needed). But still can't wrap their heads around the concept of keeping a spare drive on hand (for a 4 drive set).