
ScratchyMoose
macrumors regular, original poster
Hi Everyone,

I've been having problems with an external RAID box (DataTale). It's got 4x 1.5TB HDs set as a RAID 5 array, and the box looks after the RAID (Oxford chipset). Originally i had the box running through a RocketRAID 2314 eSATA PCIe card, but i thought that the problems were because of a conflict between the card and the RAID box (in the RR log it was "Disk '-' at controller1-channel3 failed" and the disc would unmount).

Sooo, i got a Sonnet eSATA extender cable and routed the internal SATA ports (MP 3,1) to the outside and connected the box directly without going through the RR card, thinking that all would be cool ... oh no!

Running the disc through Drive Genius 'Scan' to map bad blocks, i got the following console errors (all from DriveGenius):
07/12/2009 13:17:51 DGHelper(410,0xa063b720) malloc: *** mmap(size=2097152) failed (error code=12)
07/12/2009 13:17:51 *** error: can't allocate region
07/12/2009 13:17:51 *** set a breakpoint in malloc_error_break to debug
07/12/2009 13:17:51 DGHelper(410,0xb00a1000) malloc: *** mmap(size=2097152) failed (error code=12)
07/12/2009 13:17:51 *** error: can't allocate region
07/12/2009 13:17:51 *** set a breakpoint in malloc_error_break to debug

Prior to this, i'd taken the individual drives out and scanned each of them for bad blocks, and they all came back with 0 bad blocks, so they seem fine.

Can anyone point me in any direction to solve this ... or indeed does anyone know what's going on? Could it just be that there was a conflict between the card and the box, and now that that's sorted, i'm getting a problem with DG and the large drive (4.5TB) though there's nothing to worry about ... no?! Any ideas where i should go next? (I'm currently writing 0's with Disk Utility, which seems to be going ok without any problems.)

Cheers
EDIT** sSATA in the title, what's that?!
 
I'm a bit confused.

When you took the drives off of the RR2314 (which were set up as a RAID5), you connected them to the logic board. :cool: What RAID level did you set up at this point, as OS X can't do RAID 5?

BTW, 2 things you might want to be aware of:
1. Do NOT use RAID 5 via software-based implementations, as there's no solution for the write hole (see the sketch just below this list).
2. The Oxford chipsets are junk. Many controllers have problems with them, and result in dropouts, which is what you were experiencing on the RR2314 if I understood your post correctly.
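
To illustrate point 1, here's a minimal sketch of RAID 5's XOR parity and why an interrupted stripe update (the "write hole") is dangerous. It's plain Python, purely illustrative, and has nothing to do with any particular box or card's firmware:

def parity(blocks):
    # RAID 5 parity is the byte-wise XOR of all data blocks in a stripe
    out = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, b in enumerate(blk):
            out[i] ^= b
    return bytes(out)

stripe = [b"AAAA", b"BBBB", b"CCCC"]        # data blocks on 3 drives
p = parity(stripe)                          # parity block on the 4th drive

# Normal recovery: any one lost block can be rebuilt from the rest + parity
assert parity([stripe[1], stripe[2], p]) == stripe[0]

# Write hole: power dies after the data block is updated but before the
# parity block is rewritten. Parity no longer matches the stripe, so a later
# rebuild silently reconstructs garbage. Hardware cards close this gap with
# battery-backed/NVRAM cache; driver-based ("fake") RAID has no such journal.
stripe[0] = b"ZZZZ"                         # new data hits the disk...
stale_parity = p                            # ...but the old parity is still there
assert parity([stripe[1], stripe[2], stale_parity]) != stripe[0]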
 
The Oxford chipsets are junk. Many controllers have problems with them...
Hi
Do you have any links about problems?
I was considering buying the 4-drive Promise box that Apple sell to go with the Mac mini server (which has the Oxford 936QSE chip).
 
Hi
Do you have any links about problems?
I was considering buying the 4-drive Promise box that Apple sell to go with the Mac mini server (which has the Oxford 936QSE chip).
Here are a few (older parts, circa 2006 w/ 911/912/922 parts). Here's something specific on the Mercury Elite-AL Pro Qx2 from OWC that you might find interesting (scroll down to the user comments). Perhaps it's not as bad, but given the history, I don't trust units with Oxford chips with PM controllers in them. They just don't seem stable to me. :(
 
Hi Nanofrog, thanks for replying :eek:

When you took the drives off of the RR2314 (which were set up as a RAID5), you connected them to the logic board. :cool: What RAID level did you set up at this point, as OS X can't do RAID 5?
The box looks after the RAID, so it's hardware implemented within the box, and OS X just sees it as one disc, without having to handle the RAID itself.

2. The Oxford chipsets are junk. Many controllers have problems with them, and result in dropouts, which is what you were experiencing on the RR2314 if I understood your post correctly.
it seems that that was the problem ... now that it's on the motherboard, Disk Utility has just said (via the console) that
"Secure Erase Free Space completed successfully in 1 day, 7 hours, 23 minutes.
Erase complete."
so i guess that means that the disc is fine and working, and that perhaps the malloc error report from DriveGenius might have been a symptom of DG rather than a bad disc / RAID etc ... would you say?!
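For reference, the mmap "error code=12" in that log is errno 12 (ENOMEM) on OS X: the helper process ran out of allocatable memory/address space, rather than the disk reporting a fault, which fits the guess above that it was a DG symptom. A quick check of that mapping, assuming a stock Python install:

import errno, os
# errno 12 on OS X (and other BSD-derived systems) is ENOMEM
print(errno.ENOMEM, os.strerror(errno.ENOMEM))   # -> 12 Cannot allocate memory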

Knowing that the Oxford chipsets aren't the best, i probably should have got another Sonnet / ProAvio box and left the RR card to do its stuff - i did like the idea of being able to transfer the box to another computer via firewire if the need arose (ie the MP dying).

Thanks again
 
The box looks after the RAID, so it's hardware implemented within the box, and OS X just sees it as one disc, without having to handle the RAID itself.
Actually, it's not. There's no true RAID processor, cache (NVRAM), or battery in it at all. It's still based on drivers and the MP's system resources.

So it's not really viable for RAID5. If there's a power outage for example, any data being written at the time will be corrupted.


it seems that that was the problem ... now that it's on the motherboard, Disk Utility has just said (via the console) that
"Secure Erase Free Space completed successfully in 1 day, 7 hours, 23 minutes.
Erase complete."
so i guess that means that the disc is fine and working, and that perhaps the malloc error report from DriveGenius might have been a symptom of DG rather than a bad disc / RAID etc ... would you say?!

Knowing that the Oxford chipsets aren't the best, i probably should have got another Sonnet / ProAvio box and left the RR card to do its stuff - i did like the idea of being able to transfer the box to another computer via firewire if the need arose (ie the MP dying).

Thanks again
The Oxfords have interoperability issues, and it's best to stay away from them.

As it happens, the RR2314 is a Fake RAID controller (still uses drivers + system resources), so it's also vulnerable to the write hole issue.

If you want a proper card to run RAID5, you'd need to look at other brands. Highpoint's offerings are mostly junk, save the RR43xx models (ODM'ed by Areca, which is a good company).

Cards to look at (note, these companies offer models that can boot OS X):
  • Areca (work well, and offer a good value for what you get)
  • Atto Technologies (good cards, but more expensive than Areca's, and don't offer a couple of features that can be had on some of Areca's models, such as the ability to expand the cache).
  • Highpoint (RR43xx ONLY), but their support side is horrible, so be warned.
 
Actually, it's not. There's no true RAID processor, cache (NVRAM), or battery in it at all. It's still based on drivers and the MP's system resources.

So it's not really viable for RAID5. If there's a power outage for example, any data being written at the time will be corrupted.
no way:eek: are there other problems apart from the corruption of written data (i've got it on a UPS, so i'm thinking i'm kinda covered there ...)?



The Oxfords have interoperability issues, and it's best to stay away from them.
do you mean if it breaks and i put the discs into a different box / computer, i won't be able to get access to the data?

As it happens, the RR2314 is a Fake RAID controller (still uses drivers + system resources), so it's also vulnerable to the write hole issue.
Dude, you're messing up my morning:rolleyes: Would this have been better than the RR2314?
http://www.scan.co.uk/Products/Areca-ARC-1210-4-Port-PCI-E-SATAII-RAID-Controller (ARC-1210)

And then i guess, i should have bought another Sonnet / ProAvio box to work with that, rather than the Oxford box ... well, at least it's not Monday morning.

So, right now i've got a ProAvio 4PM box running through the RR2314 and the Oxford box running straight off the motherboard ... which would you place your most important data on ... or would you sell the RR and the Oxford box, and then get the Areca and another PM box instead?

Sorry for all the questions (that i didn't know i had!)
 
Actually, it's not. There's no true RAID processor, cache (NVRAM), or battery in it at all. It's still based on drivers and the MP's system resources.
Hi
According to the Oxford 936QSE data sheet there is an ARM processor with its own controlling firmware and on-chip cache running the RAID sets - RAID 1 or RAID 5. There are no drivers.

Since Oxford only sell the chip (and presumably a reference board design), it may depend on individual manufacturers to best ensure robustness in action, especially with regard to error-flagging etc.

The Promise DS4600 box that Apple are selling has its own specific Mac setup software - and drivers, but they seem only to be for backup strategies. It will be interesting to hear if that box has any operational flaws...
 
no way:eek: are there other problems apart from the corruption of written data (i've got it on a UPS, so i'm thinking i'm kinda covered there ...)?
A UPS is definitely a good thing to have with RAID.

do you mean if it breaks and i put the discs into a different box / computer, i won't be able to get access to the data?
Most likely. To be able to use the array, it needs to be attached to the same device, or one from the same manufacturer. If the drives are attached to a motherboard controller (i.e. an Intel ICH chip), it won't be able to read them.

Transferring the card allows you to move an array from one system to another.

Dude, you're messing up my morning:rolleyes: Would this have been better than the RR2314?
http://www.scan.co.uk/Products/Areca-ARC-1210-4-Port-PCI-E-SATAII-RAID-Controller (ARC-1210)
If you wanted to run an array internally, you'd have to use the optical bays to locate the drives with that particular model. An ARC-1210ML has the SFF-8087 connector to accept the HDD cable on the logic board.

For external use, and using the box you've got, you only need a simple eSATA card, not a RAID card, as it's a "RAID in a Box". It's meant to handle the RAID itself with the circuits and software in the box.

If you want a proper RAID card, you'd need a different enclosure ("Dumb box"), as it only has a PSU, perhaps a monitoring board, and space for drives + all the internal cabling. Really simple. All the RAID operations are handled by the card.

So, right now i've got a ProAvio 4PM box running through the RR2314 and the Oxford box running straight off the motherboard ... which would you place your most important data on ... or would you sell the RR and the Oxford box, and then get the Areca and another PM box instead?
If it were me, I'd dump both the RR2314 and the Oxford based enclosure.

Hi
According to the Oxford 936QSE data sheet there is an ARM processor with its own controlling firmware and on-chip cache running the RAID sets - RAID 1 or RAID 5. There are no drivers.
It's an RoC (RAID on a Chip), but they're still limited, as it's a low cost solution. Not much different from units with a small motherboard with an Intel Atom. It's ultimately a software design, and neither has a proper NVRAM solution.

If you're unaccustomed to RAID, cache size and batteries are the simplest things to look for to differentiate the models. Cost is another, but again, without much familiarity it's hard to distinguish the differences.

Since Oxford only sell the chip (and presumably a reference board design), it may depend on individual manufacturers to best ensure robustness in action, especially with regard to error-flagging etc.
I'm not seeing any possibilities for much in the way of customization. No pin-outs for additional cache to make an NVRAM solution. It just wasn't designed for it from what I'm getting off PLX's pages. Perhaps the acquisition of Oxford by PLX will improve matters (Oxford's long history of issues likely made it a bargain for PLX to pick up ;)).
 
If it were me, I'd dump both the RR2314 and the Oxford based enclosure.

Didn't quite expect that this morning, but thanks for all the time it will have taken you to reply ... i've been searching the boards, and i've got a lot of thinking to do as i'm not as protected as i thought i was.

you wrote in a different post to someone who has a card similar to mine who's using a RAID 5
Pull back to single disk mode + backup, or a smaller stripe set with a backup. ..SNIP.. I'd rather see you with a stripe set + backup than a type 5 without one. The stripe set in this case is actually safer.

so to be safe, i'd get an ARC-1120ML instead of the RR2314, and have that attached to the ProAvio 4PM box (eSATA) that i already have (which luckily has WD1000FYPS drives in it and is, i believe, a 'dumb box') and have that as a RAID 5 array, then ditch the Oxford box and get something like a Netstor NA750B (are there any you recommend?) and try it with the 4 drives i already had in that box which are (don't shout at me:rolleyes:) Samsung EcoGreen F2 1.5TB Hard Drive SATAII (the Areca compatibility matrix doesn't list them, but here's hoping!), and then get a spare 1TB drive and a spare 1.5TB drive ... sound like a plan? I guess the cost would be about £550 + Tax + new drives but that could be partly offset by selling the RR and the Oxford box ...

Thanks again, it's much better to know i'm actually :eek: when i thought i was :cool:
 
so to be safe, i'd get an ARC-1120ML instead of the RR2314, and have that attached to the ProAvio 4PM box (eSATA) that i already have (which luckily has WD1000FYPS drives in it and is, i believe, a 'dumb box') and have that as a RAID 5 array, then ditch the Oxford box and get something like a Netstor NA750B (are there any you recommend?) and try it with the 4 drives i already had in that box which are (don't shout at me:rolleyes:) Samsung EcoGreen F2 1.5TB Hard Drive SATAII (the Areca compatibility matrix doesn't list them, but here's hoping!), and then get a spare 1TB drive and a spare 1.5TB drive ... sound like a plan? I guess the cost would be about £550 + Tax + new drives but that could be partly offset by selling the RR and the Oxford box ...

Thanks again, it's much better to know i'm actually :eek: when i thought i was :cool:
Houston, we have a problem... :eek:
1. The ARC-1120 is a PCI-X model, and will NOT work in your system (PCIe slots).
2. The ProAvio 4PM is meant to be used with an eSATA card, and any RAID is software based (drivers).

Given the inferences I can make from what you've listed, you'd be better off with:
ARC-1210ML
ProAvio EB4 MS (uses a single cable, but actually uses all 4 ports on the card = faster throughputs; $360USD at BHphoto).
External cable needed (keep the length at no more than 2.0m, as it's the limit for SATA).

These are US based sites, but the gear should be available in the UK (I know Areca can be found there, and you've already got ProAvio, so I presume that should be possible as well).

This combo has the ability to boot OS X if you wish, and isn't horribly expensive either, for what you get. The card is a true hardware RAID card, and has an NVRAM solution to the write hole issue for parity based arrays.

Personally, I'd recommend getting a card with 4 more ports than you need now for future expansion (it's cheaper down the road, as you add drives until all ports are full before having to replace them with larger capacity drives).
 
ARC-1210ML
ProAvio EB4 MS
External cable needed

Hi - apologies for not replying sooner after you'd taken the time to give me your advice: a new baby, the busiest month so far this year and reading lots of the boards have kept me away till now ... but now i think i have a plan!

I also think i've decided on needing 3 copies of my data (as i mentioned i'm a photographer, and if the data (contact details / photos / portfolios / website etc) was to disappear it would be almost fatal!). I did read that RAID 5 isn't the panacea that everyone thinks it is - obviously if a drive fails while you're rebuilding the array after a previous drive failure, then the whole thing's gone.

So i'm thinking: two copies on site, and one copy off site. Of the two copies on site, one would be a RAID 5 array, and the other would either be JBOD or single disc.

To do this with the 1210ML (which UK suppliers say is discontinued?) would cost £250 for the card and £300 for the EB4 MS + discs, ie £550 + discs.

To do that with 8 new bays rather than 4, it would cost £490 for an EB8 MS and £350 for the card + discs, ie £840 + discs.

But if i was to keep the RR2314 and just buy a new 10 bay ProAvio EB10 PM box, that'd cost £615 + discs. With 10 slots i could easily set up both a RAID 5 and a JBOD.

SO, what i'm thinking is buy the 10 bay box, run it through the RR2314 that i've already got with my primary copy on a RAID 5 array, but using ChronoSync to back that array up nightly to a JBOD within the same box ... and keep the EB4 PM that i've got, and do the same there - so 3 of the bays have 1TB drives in a RAID 5 and the last bay is filled with a 2TB drive that nightly backs up the RAID ... i'd love your opinion on whether i'm mad or not ... a week ago i thought my backups were sorted!
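For the nightly copy, ChronoSync is the plan; purely to illustrate the idea, a bare-bones one-way mirror of the array onto the JBOD could look like the sketch below. Both mount points are hypothetical, and it only adds/updates files (it doesn't handle deletions):

import os, shutil

SRC = "/Volumes/Primary"   # hypothetical mount point of the RAID 5 array
DST = "/Volumes/Backup"    # hypothetical mount point of the JBOD

for root, dirs, files in os.walk(SRC):
    rel = os.path.relpath(root, SRC)
    target = DST if rel == "." else os.path.join(DST, rel)
    if not os.path.isdir(target):
        os.makedirs(target)
    for name in files:
        s, d = os.path.join(root, name), os.path.join(target, name)
        # copy anything missing on the backup, or changed in size / newer on the source
        if (not os.path.exists(d)
                or os.path.getsize(s) != os.path.getsize(d)
                or os.path.getmtime(s) > os.path.getmtime(d) + 1):
            shutil.copy2(s, d)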

One last thing i'd love to get some feedback on is whether the EB10 PM at £615 is way better than the NetStor 10 bay PM box which costs only £320
see here: ProAvio at £615 and NetStor at £320

I'd love the cheaper one to be an option, but i'm guessing that you get what you pay for?! I'm pushing my budget to buy the EB10 option, but i realise how important the data is to me. The 1210ML option is just about possible money wise, but with the new baby / recession it's kinda not! I'd love to save £300 with the NetStor, but given how valuable the data is to me, i guess i should just suck it up?

Many thanks again:)
 
I also think i've decided on needing 3 copies of my data (as i mentioned i'm a photographer, and if the data (contact details / photos / portfolios / website etc) was to disappear it would be almost fatal!). I did read that RAID 5 isn't the panacea that everyone thinks it is - obviously if a drive fails while you're rebuilding the array after a previous drive failure, then the whole thing's gone.

So i'm thinking: two copies on site, and one copy off site. Of the two copies on site, one would be a RAID 5 array, and the other would either be JBOD or single disc.

To do this with the 1210ML (which UK suppliers say is discontinued?) would cost £250 for the card and £300 for the EB4 MS + discs, ie £550 + discs.

To do that with 8 new bays rather than 4, it would cost £490 for an EB8 MS and £350 for the card + discs, ie £840 + discs.

But if i was to keep the RR2314 and just buy a new 10 bay ProAvio EB10 PM box, that'd cost £615 + discs. With 10 slots i could easily set up both a RAID 5 and a JBOD.

SO, what i'm thinking is buy the 10 bay box, run it through the RR2314 that i've already got with my primary copy on a RAID 5 array, but using ChronoSync to back that array up nightly to a JBOD within the same box ... and keep the EB4 PM that i've got, and do the same there - so 3 of the bays have 1TB drives in a RAID 5 and the last bay is filled with a 2TB drive that nightly backs up the RAID ... i'd love your opinion on whether i'm mad or not ... a week ago i thought my backups were sorted!

One last thing i'd love to get some feedback on is whether the EB10 PM at £615 is way better than the NetStor 10 bay PM box which costs only £320
see here: ProAvio at £615 and NetStor at £320

I'd love the cheaper one to be an option, but i'm guessing that you get what you pay for?! I'm pushing my budget to buy the EB10 option, but i realise how important the data is to me. The 1210ML option is just about possible money wise, but with the new baby / recession it's kinda not! I'd love to save £300 with the NetStor, but given how valuable the data is to me, i guess i should just suck it up?

Many thanks again:)
I wasn't aware the ARC-1210ML is discontinued, but it's quite possible, and they're likely thinking users will turn to the next best alternative in the line: the ARC-1212 (similar, but with a SAS chip, so it can run both SATA and SAS disks).
Areca's ARC-1212 page
Scan.com page for ARC-1212 <269.25GBP> (it still uses the SFF-8087 connector = will work in '06 - '08 MP's with the internal HDD's)

Please understand you really need to use enterprise drives with RAID cards, especially SAS models, as they're picky about SATA drives (consumer models typically won't work, or aren't stable if they do, given the differences in the error recovery timing set in the drive's firmware).

There is a reason you shouldn't use the RR2314 for parity RAID, as it's not really capable of handling it. There's no solution to the write hole issue. Without one, you run the risk of data corruption (it has no NVRAM solution like the one available in the Areca cards I've listed). A UPS is a necessity as well, and the card's battery is a recommended option.

Without these, you're going to be bitten some day; it's not a maybe.

You could use the RR2314 to attach to the enclosure (PM unit for backups ONLY), but you could also sell it off, and just use an eSATA card that's capable of running a PM enclosure.

Of the two enclosures you listed, I can't see a difference that warrants the price. I'd go with the less expensive unit, and a SIL3132 card (no boot function, but you won't need it = cheap card; example).

Given what you've posted about your needs, you can't be "cheap" with it.

Recap of what you need:
1. Proper RAID card for the primary array
2. UPS for the system and drive system (PM enclosure)
3. PM enclosure
4. Enterprise drives for the primary array, consumer drives will be fine for the PM enclosure (saves money, and the drives in the enclosure won't have the duty cycle that the primary drive units have either)

Options:
5. Battery for the RAID card
6. Some sort of off-site backup account (monthly fee based)
7. SIL3132 card (really inexpensive) and sell off the RR2314 = some funds back from the previous purchase to cover a little of what you actually need, given your post.
 
ARC-1212 SNIP (it still uses the SFF-8087 connector = will work in '06 - '08 MP's with the internal HDD's)

I've never quite felt so stupid ... thanks for bearing with me :eek:

The 1212 site says "internal Mini SAS connector" too. Am i right in thinking that the 1212 is only for internal drives? My MP drive bays are full right now (1x OS, 3x RAID 0 for scratch and user). I'd like to keep the RAID 0 as it gives me speed when working with files, but i'm not fussed with that once it comes to archiving them.

To clarify my situation: when i'm working on files, they're on the RAID 0 internal drives. Once the files are finished (or if that's taking quite some time), they're then copied off to storage (x3 copies - 2 onsite, and 1 offsite) and deleted from the RAID 0.

There is a reason you shouldn't use the RR2314 for parity RAID, as it's not really capable of handling it. There's no solution to the write hole issue. Without one, you run the risk of data corruption (it has no NVRAM solution like the one available in the Areca cards I've listed). A UPS is a necessity as well, and the card's battery is a recommended option.
I've got the UPS (small mercies!) and i'm believing you when you say that the RR2314 is bad news because of the write hole issue.

You could use the RR2314 to attach to the enclosure (PM unit for backups ONLY), but you could also sell it off, and just use an eSATA card that's capable of running a PM enclosure.
So, i now know that the RR2314 is bad for making a RAID5. Is it wrong to think that a JBOD made via the RR2314 is ok? Would it be better to use the SIL3132 card to make the JBOD? Also, would it be better to have a RAID5 from the RR with a UPS (with some redundancy but a write hole) or either of the two JBODs above that don't have any redundancy? If i had both the RAID5 and was copying that onto a JBOD nightly for my second copy of the data, wouldn't that cover me? Sorry for all the questions, it's just that i keep running into a brick wall over which RAID card would sort me out, primarily because it's all Greek to me:eek:

Of the two enclosures you listed, I can't see a difference that warrants the price.
Yay :D

I'd go with the less expensive unit, and a SIL3132 card (no boot function, but you won't need it = cheap card
Great discovery for me!

Given what you've posted about your needs, you can't be "cheap" with it.

Recap of what you need:
1. Proper RAID card for the primary array
2. UPS for the system and drive system (PM enclosure)
3. PM enclosure
4. Enterprise drives for the primary array, consumer drives will be fine for the PM enclosure (saves money, and the drives in the enclosure won't have the duty cycle that the primary drive units have either)

Options:
5. Battery for the RAID card
6. Some sort of off-site backup account (monthly fee based)
7. SIL3132 card (really inexpensive) and sell off the RR2314 = some funds back from the previous purchase to cover a little of what you actually need, given your post.

1. Primary array - RAID0 internal, via Disk Utility
1b. External RAID5 array ... but which card? (cost could come from the cheaper 10 bay and selling the RR)
2. got
3. the cheaper 10 bay
4. is Caviar Black ok for the RAID 0?

5. Yup, i understand
6. I think i've got too much (ie 3TB of images)
7. Nice idea

Thanks again, for this crash course education. I'm a bit woozy from it, but very grateful:)
 
I've never quite felt so stupid ... thanks for bearing with me :eek:
NP. RAID can get complicated, and rather quickly. ;)

The 1212 site says "internal Min SAS connector" too. Am i right in thinking that the 1212 is only for internal drives? My MP drive bays are full right now (1x os, 3x RAID0 for scratch and user). I'd like to keep the RAID 0 as it gives me speed when working with files, but i'm not fussed with that once it comes to archiving them.
It's an internal card, but it can be used with external enclosures with the right cable.

But as I figured the parity array would be internal, you'd have a problem with the enclosure you're looking at.

As a basic setup, here's one way you could get the security with backups you want:
1. Primary data area (using the internal drives + proper RAID card if you want to run a parity based array 5/6)
2. External backup (eSATA card + PM enclosure). This is the least expensive means of mass storage for backup purposes that I can think of (with more capacity than any current single disk offers).
3. Off site means, usually an account, though it could be yet more external drives stored in a bank safety deposit box.

4x internal drives in RAID 5 would satisfy your throughput needs, and offer some redundancy for the primary data (less time than fixing a stripe set when a drive goes on you, as type 5 can run in degraded mode until the fault is solved, and rebuild on its own). No need to reload the OS and restore the data from backups every single time something goes wrong.

You use it for all of what you do daily: OS, applications, data,... on a single partition.

Clarify my situation?: When i'm working on files, they're on the RAID 0 internal drives. Once the files are finished (or if that's taking quite some time), they're then copied off to storage (x3 copies - 2 onsite, and 1 offsite) and deleted from the RAID 0.
I'm not the biggest fan of a stripe set (type 0 arrays), given the effort required to fix them when something goes wrong.

If you want to keep this however, you can skip the RAID card, and just use the eSATA card and external enclosure as the on-site backup means. It's cheaper too.

If you want:
1. RAID 0 for working data
2. Parity based array = 1st backup source
3. eSATA + external enclosure = 2nd backup source

You're going to run into some issues.
1. The enclosure you're looking at CANNOT run with the RAID card you're looking at (or any card using SFF-8087/8088 connectors). It has the wrong connections on the back.

This means 2 different enclosures, and it's going to be more expensive than you're willing to go, and there's not much point to it either. Especially if a stripe set is used for the primary data.

I've got the UPS (small mercies!) and i'm believing you when you say that the RR2314 is bad news because of the write hole issue.
What size?
If it's too small, it won't work if the power ever goes out.

So, i now know that the RR2314 is bad for making a RAID5. Is it wrong to think that a JBOD made via the RR2314 is ok? Would it be better to use the SIL3132 card to make the JBOD? Also, would it be better to have a RAID5 from the RR with a UPS (with some redundancy but a write hole) or either of the two JBODs above that don't have any redundancy? If i had both the RAID5 and was copying that onto a JBOD nightly for my second copy of the data, wouldn't that cover me? Sorry for all the questions, it's just that i keep running into a brick wall over which RAID card would sort me out, primarily because it's all Greek to me:eek:
You'd have to look into the RR2314's ability to work with PM enclosures. Personally, I don't trust Highpoint's gear as far as I can throw it (save the RR43xx series, which is ODM'ed by Areca). Seriously.

Tech support sucks too, not just the hardware being difficult to figure out. It doesn't work that well (highly temperamental, fails often in my experience).

1. Primary array - RAID0 internal, via Disk Utility
1b. External RAID5 array ... but which card? (cost could come from the cheaper 10 bay and selling the RR)
2. got
3. the cheaper 10 bay
4. is Caviar Black ok for the RAID 0?

5. Yup, i understand
6. I think i've got too much (ie 3TB of images)
7. Nice idea

Thanks again, for this crash course education. I'm a bit woozy from it, but very grateful:)
Read the above, and let me know what you want to do. The reason is, I wouldn't use a stripe for the primary array, and parity for the backups. That's actually quite foolish IMO, as the primary array is your first line of defense (and it needs to be able to run if at all possible with redundancy = less time and effort required to recover from a problem).
 
Read the above, and let me know what you want to do. The reason is, I wouldn't use a stripe for the primary array, and parity for the backups. That's actually quite foolish IMO, as the primary array is your first line of defense (and it needs to be able to run if at all possible with redundancy = less time and effort required to recover from a problem).

Thank you again ...

Yeah, i see what you're saying: the first line of defence should have all the redundancy to it, and if that fails, then go onto the next line of defence which has a lesser (or no) redundancy. The funny thing is that the data which has gone into storage is, for me, the most important. I'll have hundreds of jobs' worth of data that's gone into storage, but only 5 or so jobs' data that's currently being worked on and kept directly on the MP, so for me it's the stored data that's the most important data. Also, once the data comes off the camera's CF cards, it goes onto both the MP for working, and the storage array in case the MP's RAID 0 fails. The great thing about RAID 0 over RAID 5 for me is the write speed ... am i right in thinking that RAID 0 is much faster? With deadlines always tight (usually exacerbated by the b%&$dy printer) i'm grateful for all the speed i can get.

I've found this card by 3ware that would run a RAID 5 on this box. If i was to get that, i'd keep my RAID 0 on the MP for speed and have two RAID5's of 4 drives each for storage, keep the RR2314 to run a JBOD on the ProAvio 4PM box that i've already got (or sell it and buy that card you linked to instead?)

I think i'm getting closer to what you would advise:eek:
 
Thank you again ...

Yeah, i see what you're saying: the first line of defence should have all the redundancy to it, and if that fails, then go onto the next line of defence which has a lesser (or no) redundancy. The funny thing is that the data which has gone into storage is, for me, the most important. I'll have hundreds of jobs' worth of data that's gone into storage, but only 5 or so jobs' data that's currently being worked on and kept directly on the MP, so for me it's the stored data that's the most important data. Also, once the data comes off the camera's CF cards, it goes onto both the MP for working, and the storage array in case the MP's RAID 0 fails. The great thing about RAID 0 over RAID 5 for me is the write speed ... am i right in thinking that RAID 0 is much faster? With deadlines always tight (usually exacerbated by the b%&$dy printer) i'm grateful for all the speed i can get.

I've found this card by 3ware that would run a RAID 5 on this box. If i was to get that, i'd keep my RAID 0 on the MP for speed and have two RAID5's of 4 drives each for storage, keep the RR2314 to run a JBOD on the ProAvio 4PM box that i've already got (or sell it and buy that card you linked to instead?)

I think i'm getting closer to what you would advise:eek:
I understand where you're going, and the reasoning. That's what I needed.

There's a couple of things to note:
1. You can run the primary array from the external enclosure, and the added drives can allow you to well exceed a 3x disk stripe set in both reads and writes.

The RAID card can allow you to expand as you go (until all the ports are filled, or in the case of some SAS cards, you can then go with enclosures that contain SAS expander boards/separate units to go past the physical port count <1:1 port - drive ratio>). Areca's that are SAS expander capable can run up to 128 drives, and ATTO's gear can run up to 256 drives with this method.

The additional drive count provides a level of parallelism (higher throughputs) that a MP isn't capable of on its own by a notable margin, with greater flexibility in the array levels available to you as well. Expandability, flexibility, and better throughputs are what really make RAID worth it IMO.

BTW, a level 5 array is close to a stripe set of the same drive count (~85% capable in most cases, and the reduction is a result of the parity calculations; there's a rough worked example at the end of this post). Not a substantial cost for the redundancy it provides while maintaining a high level of capacity.

2. For backups, you can use the same card (I'll link you to a larger port version that can handle both the primary and backup), as well as a better enclosure type (the one you linked uses SFF-8470 MultiLane connectors which are slower than SFF-8088 MiniSAS external connectors; the screws in the ML external connectors are a PITA too). MiniSAS are all locking quick connects (latches, no screws).

2a. You can also keep using a RAID card ONLY for backups if you wish, but it's not as practical IMO, given what the card is capable of providing you (assuming it contains adequate ports). I'm thinking a 16 port card would suffice. Beyond that is a 24 port model (for 1:1 connections with drives, eliminating the need for SAS expanders).

3. The 3Ware card you linked will NOT work. They can't handle the EFI environment or OS X at all. Those are BIOS only, and limited to Windows or various flavors of Linux.

I'd strongly recommend reading up on wiki's RAID page.

Areca ARC-1680ix16 (SAS model that actually has 20 ports on it; 16 internal + 4 external connector). Price example. Internal ports can be routed to external enclosures by a specific cable (example, and don't exceed 2.0m in length for SATA drives or it won't be stable. Best to keep it to 1 - 1.5m <1.5 - 1.75 m cables are a special order, but are available>).

For a SATA only model, the ARC-1261ML is also a possibility (best price). It can now boot EFI in a MP, as well as driver operation (EFI boot support is recent on this card). The same cable requirements (type and length) still apply. It's cheaper, but there's no ability to use SAS expanders with it. You can use special cables to run it with PM enclosures, but it will be slower this way (and will depend on the exact specifics). Any MiniSAS connector based card can do this with the right cable, but it's usually an internal one. It works, but isn't pretty, and the length is limited to 1.0m (special order might be possible here, but I've not tried to get one before). I have done so with the internal-to-external cables linked above, and that's how I know you can get them. ;) Order in the US, made in China, and sent by air freight.

Neither is exactly as inexpensive as what you were looking at, but they do offer far more for your money in terms of flexibility and speed, and can grow with your storage requirements.

Here's an example of an 8 port MiniSAS enclosure (btw, these are also available in silver).

Think carefully, and get back to me. Pay special attention to connectors, OS support, and boot capabilities, as it's easy to get cards and enclosures that don't match (especially the Port Multiplier versions, which don't work well with the level of cards you're looking at; PM chips reduce throughputs, but can work with the right cable), or won't work with EFI or OS X. That's why you need to stick with Areca or ATTO. Of the two companies (the only ones that offer EFI boot support), Areca offers more value for your money. ATTO will cost more for the same port count, and may be short of a very useful feature (the ability to upgrade the cache in some of Areca's models, which the two 16 port cards linked are capable of). It helps speed things up. I can hit up to 1.39GB/s with 8x enterprise SATA drives on the ARC-1231ML they're attached to. The SAS unit I use only has 4x drives on it for now, given their cost per drive. SATA offers better cost/GB than SAS (and SSD is the absolute worst).

You can still use an enclosure like the one linked with a smaller card, and just run a parity based array for backups only. Say an 8 port card, and an 8 port enclosure (the port counts are in groups of 4, as each MiniSAS connector = 4 SATA/SAS drives in the same cable).
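
As a rough worked example of the ~85% figure mentioned under point 1 (back-of-the-envelope only, assuming ~100MB/s sequential per drive; real numbers depend on the card, stripe size, and drives):

# Large sequential transfers, ballpark: RAID 0 uses all n spindles for data,
# while RAID 5 loses roughly one drive's worth of bandwidth to parity.
per_drive_mb_s = 100.0        # assumed round number for one drive
n = 8

raid0 = n * per_drive_mb_s            # ~800 MB/s
raid5 = (n - 1) * per_drive_mb_s      # ~700 MB/s
print(raid5 / raid0)                  # 0.875, i.e. in the region of the ~85% noted above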
 
ok, quick question, i've got 4x Samsung EcoGreen F2 1.5TB drives, are they good enough to use as RAID 6 ... if i was to get another couple, it'd be a great size, but are they good enough? In the other box, i've already got RE2 GP's which i guess are good enough (being RE) but the Samsung's were bought primarily for archive and not bought for speed.

Cheers :)
 
ok, quick question, i've got 4x Samsung EcoGreen F2 1.5TB drives, are they good enough to use as RAID 6 ... if i was to get another couple, it'd be a great size, but are they good enough? In the other box, i've already got RE2 GP's which i guess are good enough (being RE) but the Samsung's were bought primarily for archive and not bought for speed.

Cheers :)
With an Areca RAID card, NO. Samsung drives are problematic on the SATA cards, and won't work at all on the SAS models.

Take a look at this (HDD Compatibility List). If you ignore this, you're likely to run into headaches due to dropouts or flat out incompatibility.

I'd go with the WD RE3 or RE4 models myself (I use them, and haven't had issues with them so far). Seagate's gotten too unreliable lately IMO (the issues with the 1.5TB consumer drives were actually more widespread than that, as the enterprise line <ES.2's> had something called the "Boot of Death"). So I've lost faith in Seagate drives until there's sufficient proof to me that they've got their crap together.
 
ok ... i feel like Baldrick in Blackadder, but ... i have a plan :D Tell me if you like it:

Buy the ARC-1680X connected to the ProAvio EB8 MS by 2x SFF-8088 multilane cables, and fill with the 4 RE2's that i've currently got and buy 3 further RE3's (no conflict with the RE2's?) and RAID 6 them. So a total of 7 discs means a 5TB array, with the possibility of 2 drives failing. Onto this i'd put everything: OS, User, Current files, Backup files (but partition a slice for the PS scratch disc?).
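As a quick sanity check on the capacity arithmetic (simple sums, assuming 1TB per RE2/RE3 drive):

# RAID 6 usable capacity: two drives' worth of space goes to dual parity
drives = 7
size_tb = 1.0                      # WD RE2 / RE3 1TB models
print((drives - 2) * size_tb)      # 5.0 TB, matching the figure above; an 8th drive makes it 6TB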

I'd use the 4x 1.5TB Samsung drives that i've got within the MP as a JBOD and use them as secondary copies of the data and Time Machine. (I couldn't RAID them as they'll conflict with the ARC.)

The RR2314 i'd sell, and buy a SIL card to run the EB4 PM box that i've got, and fill it with assorted drives that i've also currently got to make a JBOD, which i'd then store off site.

I know you'd rather me get the ARC-1680x12 / 16 but i just can't afford it ... and hey, i can always put another RE3 into the EB8 to give me a 6TB array, and that should last me 3 years from now i imagine.

What do you think?
 
ok ... i feel like Baldrick in Blackadder...
To be expected. :eek: RAID gets rather complicated, as the smallest issue can completely wreck your plans (or system if you've bought gear).

Buy the ARC-1680X connected to the ProAvio EB8 MS by 2x SFF-8088 multilane cables, and fill with the 4 RE2's that i've currently got and buy 3 further RE3's (no conflict with the RE2's?) and RAID 6 them. So a total of 7 discs means a 5TB array, with the possibility of 2 drives failing. Onto this i'd put everything: OS, User, Current files, Backup files (but partition a slice for the PS scratch disc?).
The RE2's and RE3's should work just fine with one another, and the drive count is adequate for RAID 6.

The partition/data scheme you're planning is fine.

I'd use the 4x 1.5TB Samsung drives that i've got within the MP as a JBOD and use them as secondary copies of the data and Time Machine. (I couldn't RAID them as they'll conflict with the ARC.)
I doubt you'd even be able to run them in JBOD. I won't use a "failed testing" drive under any circumstances with a RAID controller. I'd actually be shocked if the Samsung's you have on hand would work, even as individual drives in Pass-Through mode.

If you want to use these drives, you'd be better off going with a simple eSATA card and separate PM enclosure.

Use the EB4 PM for your backups + SIL3132 card (eBay UK source). IIRC that card is capable of JBOD.

The RR2314 i'd sell, and buy a SIL card to run the EB4 PM box that i've got, and fill it with assorted drives that i've also currently got to make a JBOD, which i'd then store off site.
See above. If you want more drives done this way, you'd have to get another PM enclosure or a larger one (i.e. 8 or 10 bay unit, and sell off the existing 4 bay model).

The SIL3132 based cards have 2 ports, so can run a max of 10 drives. :)

I know you'd rather me get the ARC-1680x12 / 16 but i just can't afford it ... and hey, i can always put another RE3 into the EB8 to give me a 6TB array, and that should last me 3 years from now i imagine.

What do you think?
That's up to you. I just go with the current port requirement + 4 to allow for some future expansion without the need to swap out all the drives each time to increase capacity. It can even allow you additional throughput (additional parallelism, if added to the existing set), and different array levels, if the initial drive count or capacity would otherwise have been too small. Moving from 5 or 6 drives to 10 comes to mind.
 
yay :D the first time i've got a pass mark!

I doubt you'd even be able to run them in JBOD. I won't use a "failed testing" drive under any circumstances with a RAID controller. I'd actually be shocked if the Samsung's you have on hand would work, even as individual drives in Pass-Through mode.

I was thinking that i'd have them internally on the MP using all 4 bays, and then concatenated into a JBOD via Disk Utility, so they're not connected to the ARC card because all of its ports would be running the EB8 box ... would this still count as "under any circumstances"?


I just go with current port requirement + 4 to allow for some future expansion
Yeah, i hear you, it's just the cost - which is £70 between the ARC-1680X and ARC-1680x8 - which might not be that much in the long run ... i guess there's a good chance that i could migrate the card to future MP's?

Thank you for taking me through this minefield - even my girlfriend thanks you :eek: It's really appreciated that you've been so kind
 
yay :D the first time i've got a pass mark!
:p

It takes time to learn all the details. Getting the right gear where connectors are concerned can be intimidating, but RAID in a MP is harder, as little is MP compatible, and even less can boot in the system.

I was thinking that i'd have them internally on the MP using all 4 bays, and then concatenated into a JBOD via Disk Utility, so they're not connected to the ARC card because all of its ports would be running the EB8 box ... would this still count as "under any circumstances"?
No.

I was under the impression you would attempt to use them on the RAID card. Sorry about that. :eek: As you can see, it doesn't take much to make a mistake. So long as it's sorted in the research/planning phase, you're OK. :p

Off the logic board they'll work. They'd also work in the PM enclosure attached to the eSATA card, so you have a couple of options with them.
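
If it helps, a concatenated (JBOD) set of the internal drives can also be built from the command line rather than Disk Utility. This is only a sketch: the disk identifiers are hypothetical (check diskutil list first), and the appleRAID syntax should be verified against the man page before running anything, as creating the set erases the member drives:

import subprocess

# hypothetical identifiers for the four internal Samsung drives
members = ["disk1", "disk2", "disk3", "disk4"]

# diskutil appleRAID create concat <setName> <format> <disks...>  (syntax as I recall it;
# confirm with `man diskutil` on your system first)
subprocess.check_call(
    ["diskutil", "appleRAID", "create", "concat", "ArchiveJBOD", "JHFS+"] + members
)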

Yeah, i hear you, it's just the cost - which is £70 between the ARC-1680X and ARC-1680x8 - which might not be that much in the long run ... i guess there's a good chance that i could migrate the card to future MP's?
Yes, you can migrate those cards from one system to another. It's one of their major advantages IMO. Far easier than software RAID in most cases.

Thank you for taking me through this minefield - even my girlfriend thanks you :eek: It's really appreciated that you've been so kind
:cool: NP. :)

That's what forums are for. :D
 
I've bought!

Hi - just wanted to update you, and ask another question if you don't mind!

I've bought the ARC-1680X, EB8 MS box, 3x RE3's (to add to my current 4x RE2's) and two 8088-8088 (0.5m) cables (told by the salesman that they're all multilane, even if it doesn't say so). Thank you for steering me through this minefield, without your advice i would have ended up with some donkey solution! Happy Christmas too, i hope you have a really good year :)

I'm pretty sure that i'd now (nuthin' like after the event, eh?:eek:) like to get just one Intel 80GB G2 for the OS (haven't got the cash for 2), and then have my user on the 7x drive RAID 6 array on the ProAvio box, and perhaps get another RE3 to make the array with 8x HDDs (because you can't just add another drive and keep the array intact, i believe) ... would you advise using the SSD? Two things that i'd read: 1) the latency(?) time of an array doesn't equal that of the individual drives that it's made from, and 2) 10.6.2 doesn't play nicely with RAIDs. That makes me wonder 1) is the gap between a one disc SSD and an 8x drive RAID 6 wide enough to make it worthwhile, and 2) would i stick with 10.6.1?

I'd put the SSD in the 2nd optical bay - i've got an old optical drive whose metal case plates i can use, but am i right in thinking that the data's got to go via one of the SATA ports on the motherboard (which is much better than via the IDE/PATA cable that would plug into the optical drive, yeah?) Does it matter which port on the motherboard?

The SIL3132 card you linked to seems so inexpensive:cool: I'd use that to connect with the PM box that i've already got for JBOD storage to go offsite.

And then, that's me about done! Thanks again, and have a good year:D


Recap:
Mac Pro 3,1
10.5.8
main software use is PS and Lightroom
16GB RAM
4x regular internal HDD bays will be used for duplicate storage


 
Hi - just wanted to update you, and ask another question if you don't mind!

I've bought the ARC-1680X, EB8 MS box, 3x RE3's (to add to my current 4x RE2's) and two 8088-8088 (0.5m) cables (told by the salesman that they're all multilane, even if it doesn't say so). Thank you for steering me through this minefield, without your advice i would have ended up with some donkey solution! Happy Christmas too, i hope you have a really good year :)
You've got the right cable, but it's not really MultiLane. The connectors are MiniSAS (MultiLanes are SFF-8470 connectors, and use screws). That said, they make cables with an SFF-8088 on one end, and an SFF-8470 on the other. Either way, it gets the data to and from the box and card. ;) The real trick is to pay close attention to the ports on the card and box.

MS = MiniSAS
ML = Multilane

The internal SFF-8087 ports (not on the card you've got) are typically referred to as ML, not MS, which is what they actually are. That's fine, as they only use SFF-8087 internally. It's the external ports that can be troublesome. Hint: EB8 MS, so it's a MiniSAS connector. Again, you have the correct cable. :D

I'm pretty sure that i'd now (nuthin' like after the event, eh?:eek:) like to get just one Intel 80GB G2 for the OS (haven't got the cash for 2), and then have my user on the 7x drive RAID 6 array on the ProAvio box, and perhaps get another RE3 to make the array with 8x HDDs (because you can't just add another drive and keep the array intact, i believe) ... would you advise using the SSD? Two things that i'd read: 1) the latency(?) time of an array doesn't equal that of the individual drives that it's made from, and 2) 10.6.2 doesn't play nicely with RAIDs. That makes me wonder 1) is the gap between a one disc SSD and an 8x drive RAID 6 wide enough to make it worthwhile, and 2) would i stick with 10.6.1?
If you want to add a drive to an existing array after it's been set up, you can do that with a card. It's called Online Expansion, and is easy to do. :D

1. No, it tends to vary, and it's complicated. The array level, the data distribution, file size, stripe size,... all matter. Testing is really the only way to find out what you've got in your specific situation. That's how you dial in the stripe size to get the best performance you can. But for graphics work, large stripes will be best, as it's sequential data.

Especially using the SSD/s for a boot/apps drive.
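
If it helps when dialing in the stripe size, a crude sequential-read timing sketch is below. It's illustration only: the path is hypothetical, and the test file needs to be much larger than RAM (or the cache flushed) so OS caching doesn't skew the numbers:

import time

PATH = "/Volumes/Array/testfile.bin"   # hypothetical test file sitting on the array
BLOCK = 8 * 1024 * 1024                # 8MB reads, i.e. large sequential I/O like image data

start = time.time()
total = 0
with open(PATH, "rb") as f:
    while True:
        chunk = f.read(BLOCK)
        if not chunk:
            break
        total += len(chunk)
elapsed = time.time() - start
print("%.0f MB read at %.1f MB/s" % (total / 1e6, total / elapsed / 1e6))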

I'd put the SSD in the 2nd optical bay - i've got an old optical drive whose metal case plates i can use, but am i right in thinking that the data's got to go via one of the SATA ports on the motherboard (which is much better than via the IDE/PATA cable that would plug into the optical drive, yeah?) Does it matter which port on the motherboard?
No, the port won't matter.

Please note, I'm not so sure you can get AHCI mode enabled on the ODD_SATA ports in an '08, and it can matter for an SSD. If possible, it might be faster on one of the HDD bays.

Let me know the physical setup internally, and hopefully you can find out about AHCI under OS X on the ODD_SATA ports.

The SIL3132 card you linked to seems so inexpensive:cool: I'd use that to connect with the PM box that i've already got for JBOD storage to go offsite.
That card should be just fine for use with a PM enclosure. Worst case, it's cheap. Just keep in mind, the drivers are as simple as they come, so there's NO RAID capability in the drivers (so JBOD may not be possible either). You could try LaCie's drivers for their SIL3132 card, but I don't know if there are additional features in them or not (it could be the SIL provided drivers as is, or modified to add what's missing in terms of RAID support).

As per the HDD bays:
I'm not sure whether you'll have to use one of the HDD bays for the SSD; it depends on whether the ODD_SATA ports can have AHCI activated in OS X (it won't do it for Windows, nor even boot a Windows disk from those ports). OS X will work from them, including booting, but I'm not sure whether AHCI mode is active (or can be made active), as the ports are in a 4 + 2 configuration in the ICH9 on the board, aka the SouthBridge.
 