I am moving towards this solution:
- UPS
- Areca SAS card
- 12-disk RAID 6 (4 internal + 8 external), with the possibility to expand.
- separate boot disk (I understand that's possible by freeing up 4 internal SATA ports in the '08 when using a SAS card)
- 2 sets of PM eSATA enclosures for backup, one kept off-site.

That will give me 20 TB of expandable RAID 6 storage, with 2 backups.
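The 20 TB figure follows directly from the RAID 6 arithmetic: usable space is the drive count minus two parity drives, times the drive size. A minimal sketch of that math (a hypothetical helper, assuming equal-size drives and ignoring filesystem overhead):

```python
# Sketch of the RAID 6 capacity math above (assumes equal-size drives
# and ignores filesystem/formatting overhead).

def raid6_usable_tb(drive_count: int, drive_size_tb: float) -> float:
    """RAID 6 reserves the equivalent of two drives for parity."""
    if drive_count < 4:
        raise ValueError("RAID 6 needs at least 4 drives")
    return (drive_count - 2) * drive_size_tb

print(raid6_usable_tb(12, 2.0))  # 12x 2TB drives -> 20.0 TB usable
```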
If you want to be able to expand without having to upgrade equipment (just add), then go for more than the 12-port version of the Areca 1680 series (they come in 12/16/24-port models: ARC-1680ix-12/16/24). The nomenclature is easy enough. ;)

Those cards can also use SAS expanders (available separately or built into enclosures), which allow up to 128 disks. It's a little more difficult to set up, but at that point you'd need a rack, an HVAC system to keep it cool, and of course SAS disks (cable issues alone: SATA is limited to 1.0 meter or less, while SAS can go to 8.0 meters; SAS = smaller-capacity drives - 450GB is the largest currently available, and rather expensive).
 

If you were serious about a SAN, that would be NetApp.

But since you probably want something for a home office (?), I would suggest looking into the Drobo Elite. It doesn't do NAS, allegedly, only iSCSI. Hope you have a good switch (and an isolated subnet for your iSCSI traffic if you want to do it right).
 
It's for a single system, so DAS would be the best solution in this case (NAS or SAN is pointless here, as there are no other systems, nor will there be, according to the OP's last post - the machine count was reduced to one, with no need for additional systems).
 

I don't think the OP really knows what he's after here. He wants to set up a media repository with a Kalidescape-type system. This BEGS for network attached storage. Even if he went with a DAS system, he's just going to share the volumes out to media center PCs around the house. That's exactly the job a NAS is built for.
 
DroboPro is a nice, simple solution but simply cannot give you the space you need. With Dual Disk Redundancy in use (protection from 2 HDD failures) and 8x 2TB drives installed, the maximum space available is 10.89 TB. You would need 2 of them to get to 21.78 TB.

nanofrog knows this stuff better than most around here. Following his advice is probably not a bad idea.
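The 10.89 TB figure can be roughly sanity-checked: with dual-disk redundancy, 8x 2 TB drives leave the equivalent of 6 drives of data, and decimal terabytes shrink once the OS reports them in binary units. A quick sketch (the small remaining gap is Drobo's own reserved overhead, which this ignores):

```python
# Rough check of the DroboPro capacity quoted above: 6 data drives' worth
# of decimal terabytes, expressed in binary units as the OS reports them.

TB = 10**12   # decimal terabyte (how drive makers count)
TIB = 2**40   # binary terabyte (how the OS reports)

usable = (8 - 2) * 2 * TB   # dual-disk redundancy on 8x 2TB drives
print(round(usable / TIB, 2))  # -> 10.91, close to the quoted 10.89
```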
 
I don't think the OP really knows what he's after here. He wants to set up a media repository with a Kalidescape-type system. This BEGS for network attached storage. Even if he went with a DAS system, he's just going to share the volumes out to media center PCs around the house. That's exactly the job a NAS is built for.
From what was posted, the Kalidescape was an idea, but he wasn't quite sure what was needed to accomplish the goal.

If there were multiple systems involved, then I'd agree NAS would be the better choice. It seems to be a single person, presumably in a single bedroom apartment (given the mention of space constraints). But even then, I'd have thought that one might want a media server for the living room and bedroom, but that doesn't seem to be the case.

But that's not the case from the last post, as it's a single system (single usage point, and apparently no intent to change from this). So DAS makes the most sense in this case IMO.

I see this instance as a special case for a media server, as the basic idea is a single system to serve multiple rooms/HD sets. No actual need for NAS, as there's nothing else to network to ;) (nor is it a new competitor to NetFlix and the like, trying to serve anyone willing to pay the monthly fees :eek: :p).
 
DroboPro is a nice, simple solution but simply cannot give you the space you need. With Dual Disk Redundancy in use (protection from 2 HDD failures) and 8x 2TB drives installed, the maximum space available is 10.89 TB. You would need 2 of them to get to 21.78 TB.
At $1500 USD each, they're not inexpensive, especially when you consider they're software based. Fine for 0/1/10, but not parity-based arrays. There's no dedicated hardware controller that can solve the write-hole issue. Granted, as I've mentioned before, a good UPS can help with this issue, but it's still not as reliable IMO, especially for that much data.

I'd think frimple would agree here (software-implemented parity arrays are too dangerous), and he does have knowledge of this stuff as well.

Where the issue is IMO, is that the OP has a single system, and no intention of adding more. So NAS isn't necessary. It's rather easy to get into a mindset when dealing with enterprise systems to stick with those solutions (SAN or NAS of some sort). But it can get complicated in instances where you get cross-over (i.e. an expectation of minimal needs when the required needs are far more substantial, such as an independent actually needing a large-capacity DAS rather than just a few drives in a stripe set + backup). In this case, it's sort of like a reverse cross-over (NAS would be the expected solution, but there's one detail that turned the tables, as it were - a single system/served location).
 
If you want to be able to expand without having to upgrade equipment (just add), then go for more than a 12 port version of the Areca 1680 series (those come in 12/16/24 port models; ARC-1680ix-12/16/24). Nomenclature is easy enough. ;)

I see that the Areca ARC-1680ix-16 has 4 internal + 1 external ports; can you use them all together, or do you have to choose (4+0 or 3+1)?
Does the amount of memory you have on the card make a big difference, other than providing more speed under certain conditions?

What I meant with "poor-man's-Kalidescape" was not the ability to distribute media in a big house, but to store full DVDs on disk and provide a nice interface.

I will actually have an apartment with 3 rooms (main + kitchen, bedroom, and one small room used as a craft studio). However, in the last 30 years I have never had a TV set or computer in the bedroom, and combining film watching with woodwork is kind of pointless for me. So it will be a single-room solution.
 
I see that the Areca ARC-1680ix-16 has 4 internal + 1 external ports; can you use them all together, or do you have to choose (4+0 or 3+1)?
You can use whichever ports you wish in whatever configuration you wish (all internal + external at the same time if you want, but the external port is shared IIRC). It can be convenient too.

Does the amount of memory you have on the card make a big difference, other than providing more speed under certain conditions?
Yes, but it does depend on the specifics (array type, usage,...; for example it's much more notable in sequential operations).

But for what you're using it for, it's not necessary to up the DIMM capacity.

I will actually have an apartment with 3 rooms (main + kitchen, bedroom, and one small room used as a craft studio). However, in the last 30 years I have never had a TV set or computer in the bedroom, and combining film watching with woodwork is kind of pointless for me. So it will be a single-room solution.
I didn't figure on the den/craft room, but it seemed clear from your previous post that it was a single system/location.
 
From what was posted, the Kalidescape was an idea, but he wasn't quite sure what was needed to accomplish the goal.

If there were multiple systems involved, then I'd agree NAS would be the better choice. It seems to be a single person, presumably in a single bedroom apartment (given the mention of space constraints). But even then, I'd have thought that one might want a media server for the living room and bedroom, but that doesn't seem to be the case.

But that's not the case from the last post, as it's a single system (single usage point, and apparently no intent to change from this). So DAS makes the most sense in this case IMO.

I see this instance as a special case for a media server, as the basic idea is a single system to serve multiple rooms/HD sets. No actual need for NAS, as there's nothing else to network to ;) (nor is it a new competitor to NetFlix and the like, trying to serve anyone willing to pay the monthly fees :eek: :p).

Indeed indeed, DAS is the requirement here. It's also not like he wouldn't be able to serve the volumes out in the future if he moves into a bigger apartment down the line :D

nanofrog said:
At $1500 USD each, they're not inexpensive, especially when you consider they're software based. Fine for 0/1/10, but not parity-based arrays. There's no dedicated hardware controller that can solve the write-hole issue. Granted, as I've mentioned before, a good UPS can help with this issue, but it's still not as reliable IMO, especially for that much data.

Totally agree on these points. Drobos are wonderful with their built-in JBOD implementation. But transfer times are pretty bad even on the Pro model, and when you're spending that kind of money you have other options available.

Sounds like the OP is leaning towards the 1680IX 16 port, an amazing card IMO :cool:. Are you planning on re-using the sonnet and firmtech enclosures? Also have the drives been discussed yet?
 
Indeed indeed, DAS is the requirement here. It's also not like he wouldn't be able to serve the volumes out in the future if he moves into a bigger apartment down the line :D
Yep, and not too much effort either (in terms of the MP's needs). Running the Ethernet (assuming wireless isn't used) or HDMI cabling would be the hardest part of serving multiple locations (depending on whether there's a system or just an HDTV at each end point). Either way, it's doable. :)

Totally agree on these points. Drobos are wonderful with their built-in JBOD implementation. But transfer times are pretty bad even on the Pro model, and when you're spending that kind of money you have other options available.
Yeah, I'm not that fond of Drobo's given the limitations that exist in that price range. There are usually better alternatives.

Sounds like the OP is leaning towards the 1680IX 16 port, an amazing card IMO :cool:.
They're a very good line of cards. :D

Are you planning on re-using the sonnet and firmtech enclosures?
I'm of the impression these will continue to be used as the backup source.

Also have the drives been discussed yet?
WD drives are intended, and there's only one model currently listed that will work: the RE4-GP (WD2002FYPS) for 2TB models (of any maker in fact, though it lists a Hitachi still under testing).

I expect the RE4 will as well, but they've been in limited supply, and may not yet have sorted the firmware in order to function properly with the Areca's (HDD List is dated Dec. 17, 2009; it's a little behind). So ATM, it's a risk.

But given what the OP wants to do, the RE4-GP would be just fine, as high bandwidth isn't actually needed (7200 rpm isn't necessary for the intended usage). They also happen to be less expensive. :eek: :D
 
Yes, the RE4-GP will be fine for me. I have no need for the faster model.

The old enclosures and non-enterprise disks will be used in the backup solution.

Thanks for all the advice! Really educational!

To rhett7760:
The unRAID system from Lime Technology looks like a nice NAS. However, I would rather go with something a little smaller, since I am a little low on space. Is it a well-proven technology, by the way? Since it is software based, I guess it might be a little less robust than a hardware solution?
 
Anyone here thought about using ZFS instead of a dedicated hardware RAID controller?
If a NAS were needed, then it would be a good way to go. But for a single system (and for ZFS, Linux or OpenSolaris would be required to use it), it's not the best route IMO (either a separate system used as the storage server, or an attempt to do it via a VM).
 
If a NAS were needed, then it would be a good way to go.

Oh sorry, I thought the 10-15TB was to be used on a distinct machine, not directly connected to an OS X machine.
Since OS X won't have stable and fully implemented ZFS support in the foreseeable future, that's not an option, I agree.
 
ARC-1680ix-16: problem

hello,

I have an ARC-1680ix-16-16 controller.
I am having trouble connecting the controller to an external enclosure (Sans Digital TR4X): SFF-8088 external port to SFF-8088 on the TR4X.
My HDDs fail every time I connect them through the Sans Digital TR4X.

As I read through the manual again, it claims that the ARC-1680ix-8/12/16/24 or ARC-1680IXL-12/16 series attach directly to SATA/SAS midplanes with 2/3/4/6 SFF-8087 internal connectors, or increase capacity using one additional SFF-8088 external connector.
I don't understand what this means.
Do I always have to connect the HDDs directly to the RAID controller?
How and why do you use the SFF-8087 to SFF-8088 cable? How does it work?
Please help!! Thank you!

Use a RAID card for the primary storage, and eSATA + PM enclosures for backup.

For the RAID (capable of containing a sufficient level of storage), I'd go with:
ARC-1680ix-16
External 8 bay enclosures (MiniSAS connections, and they come in silver too)
Cables (need 1x per port used on the card, so 4x if you use 2x enclosures)

Use enterprise HDDs, as it's a SAS card. To find those that will work, check the HDD Compatibility List. Personally, I've had good luck with WD's enterprise drives, and they work with this controller.

By going this route, you can implement RAID (more than a stripe set), as it sounds like you need redundancy (you won't lose all the data if a drive dies). There are multiple levels, and as a bare minimum, you'd want to go with RAID 5 (can tolerate the loss of a single disk), though RAID 6 may be more suitable (allows the failure of 2x disks while still retaining data).

For a RAID 5, you need enough drives to meet the capacity requirement + 1 additional disk (used for parity data). For RAID 6, it's 2x additional disks.

So to start out with 12TB (using 2TB drives):
RAID 5 = 7 drives required
RAID 6 = 8 drives required
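The drive counts above reduce to a one-line rule (a hypothetical helper: round the capacity up to whole drives, then add the parity drives):

```python
import math

# Drives needed for a target usable capacity: round up to whole drives,
# then add the parity drives (1 for RAID 5, 2 for RAID 6).

def drives_needed(target_tb: float, drive_tb: float, parity: int) -> int:
    return math.ceil(target_tb / drive_tb) + parity

print(drives_needed(12, 2, 1))  # RAID 5 -> 7
print(drives_needed(12, 2, 2))  # RAID 6 -> 8
```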

BTW, the hardware listed (not including disks), is under your budget.
 
 
I have an ARC-1680ix-16-16 controller.
I am having trouble connecting the controller to an external enclosure (Sans Digital TR4X): SFF-8088 external port to SFF-8088 on the TR4X.
My HDDs fail every time I connect them through the Sans Digital TR4X.
OK, I've a few questions for you:
1. What drives are you using (exact P/N's please, as it tells me everything I need to know)?
2. What are the cable lengths used?
3. What type/s of cables are used?
4. Are there any adapters (i.e. internal to external adapters used to get the signal from an internal port to the external enclosure)?

As I read through the manual again, it claims that the ARC-1680ix-8/12/16/24 or ARC-1680IXL-12/16 series attach directly to SATA/SAS midplanes with 2/3/4/6 SFF-8087 internal connectors, or increase capacity using one additional SFF-8088 external connector.
You've lost me here.

Usually, they're expecting you to use the internal ports with internal drives, which means an SFF-8087 to 4i SAS/SATA cable (aka break-out cable). One end connects to a port on the card, and the other end has 4x lines with SATA/SAS connectors (they look like SATA cable ends, but the actual cable is smaller). No technical difference though, just thinner wire (and the break-outs tend to have locks on them, which is a good thing).

Both internal and external cables handle 4x drives (assuming a 1:1 drive/port relationship). That means 4/8/12/16/... drives = 1/2/3/4/... cables on the card side, and 4/8/12/16/... connectors on the drive side. For externals, the cable is split inside the enclosure. It makes for much cleaner cabling (i.e. the "rat's nest" isn't as bad :p).
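The cable-count rule above is just a ceiling division by 4 - a tiny sketch, assuming the 1:1 drive/port ratio described:

```python
import math

# Each SFF-8087/8088 cable carries 4 drives at a 1:1 drive/port ratio,
# so the card-side cable count is the drive count divided by 4, rounded up.

def cables_needed(drives: int, drives_per_cable: int = 4) -> int:
    return math.ceil(drives / drives_per_cable)

for n in (4, 8, 12, 16):
    print(cables_needed(n))  # -> 1, 2, 3, 4
```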

If you're using the SFF-8088 port on the card (means SFF-8088 to SFF-8088 cable), then don't even worry about trying to figure the language out.

Do I always have to connect the HDDs directly to the RAID controller?
No, not if you use a SAS expander. That's essentially a Port Multiplier equivalent for SAS systems (they can handle SATA drives). Ultimately, it does a couple of things for you:
1. Allows the connection of more drives to the card (the 1680 series can handle up to 128 drives this way).
2. Allows for longer cable lengths (gets past the SATA limits by using SAS signaling).

But if SAS expanders aren't used, then Yes (1:1 = 1 drive per port).

There is a cost to SAS expanders however:
1. SAS expanders increase the cost of the storage system (cheap for what they do however).
2. Throughputs are limited vs. 1:1 typically (it gets detailed as to the specifics, but that's where the internal to external cables can also help you; more ports used = more bandwidth = improved throughputs).

How and why do you use the SFF-8087 to SFF-8088 cable? How does it work?
This cable takes an internal port on the card and routes it externally to the enclosure. No adapters to mess with the signal (particularly relevant to SATA drives). The downside is, you do have to run the cable through an unused PCI bracket (no spares in the MP's case, so it means giving up a slot opening, unless you're willing to mod the case (cut/drill sufficient holes)).
 
Thx Nanofrog,

Sorry for so many questions; I've only been using a RAID array for a couple of months.

1. What drives are you using (exact P/N's please, as it tells me everything I need to know)?

WD Caviar Black WD1001FALS
(I used to use them with the MaxConnect SAS/SATA BackPlane Attachment;
they worked flawlessly connected to the 1680ix with an SFF-8087 to 4x SATA break-out cable)


2. What are the cable lengths used?

About 0.5 meter; shorter than 1 meter for sure.


3. What type/s of cables are used?

An SFF-8088 to SFF-8088 cable, which comes in the package with the Sans Digital TR4X enclosure.

4. Are there any adapters (i.e. internal to external adapters used to get the signal from an internal port to the external enclosure)?

Inside the enclosure, there is a backpanel connected to the HDDs with an SFF-8087 output. The SFF-8087 links to an adapter that converts the SFF-8087 to an SFF-8088. Then I use the SFF-8088 to SFF-8088 cable to connect the enclosure and the controller, but drives fail and need to be rebuilt.

This cable takes an internal port on the card and routes it externally to the enclosure. No adapters to mess with the signal (particularly relevant to SATA drives). The downside is, you do have to run the cable through an unused PCI bracket (no spares in the MP's case, so it means giving up a slot opening, unless you're willing to mod the case (cut/drill sufficient holes)).

What if I use the external SFF-8088 port on the 1680ix instead of the internal port,
and connect the SFF-8087 directly to the backpanel that links to the HDDs?
Is it the same kind of connection that you mentioned, and will it work?


Both internal and external cables handle 4x drives (assuming a 1:1 drive/port relationship). That means 4/8/12/16/... drives = 1/2/3/4/... cables on the card side, and 4/8/12/16/... connectors on the drive side. For externals, the cable is split inside the enclosure. It makes for much cleaner cabling (i.e. the "rat's nest" isn't as bad).

If you're using the SFF-8088 port on the card (means SFF-8088 to SFF-8088 cable), then don't even worry about trying to figure the language out.

Which means a single cable (like SFF-8088 to SFF-8088 or SFF-8087 to SFF-8088, etc.) makes a cleaner connection than a break-out cable. But in my case, why do my HDDs fail when using an SFF-8088 to SFF-8088 cable to connect the controller and enclosure? Do you think the external SFF-8088 port on my controller is defective?

Any good suggestions for a RAID array for editing?
Here is my current RAID setup:
4x Crucial 64GB SSDs in a RAID 0 array (64KB stripe) for OS X, connected to the 1680ix

4x WD Black HDDs split into 3 volumes: a 1600GB RAID 10 array (64KB stripe) for storage, 500GB RAID 0 (128KB stripe) for editing, and 300GB RAID 0 (64KB stripe) for scratch, connected to the 1680ix using the MaxUpgrades backpanel attachment.


Machine spec:
Mac Pro 2009
Snow Leopard (latest updates)
1680ix-16 with 2GB cache (latest firmware)
Current RAID arrays:
4x Crucial 64GB SSDs in a RAID 0 array (64KB stripe) for OS X, connected to the 1680ix
4x WD Black HDDs split into a 1600GB RAID 10 array (64KB stripe) for storage, 500GB RAID 0 (128KB stripe) for editing, and 300GB RAID 0 (64KB stripe) for scratch, all connected to the 1680ix using the MaxUpgrades backpanel attachment.

I want to free the internal bays for booting Windows, so I decided to use an external enclosure.


Thx million. :)


 
1. What drives are you using (exact P/N's please, as it tells me everything I need to know)?

WD Caviar Black WD1001FALS
(I used to use them with the MaxConnect SAS/SATA BackPlane Attachment;
they worked flawlessly connected to the 1680ix with an SFF-8087 to 4x SATA break-out cable)
Did you modify the firmware (WD's TLER utility to adjust the recovery timings)?

I ask, as that drive's not actually on the HDD Compatibility List. What this means is that those drives are unstable with their original firmware (based on my own experience with unadjusted firmware), no matter whether they're internal or external.

It's easily rectified, though a PC is the easiest way to do it (others have had a very difficult time doing this on MPs).

2. What are the cable lengths used?

About 0.5 meter; shorter than 1 meter for sure.
I need to know the length of the external cable used with the Sans Digital enclosure, since the drives are no longer attached internally via a break-out cable.

Inside the enclosure, there is a backpanel connected to the HDDs with an SFF-8087 output. The SFF-8087 links to an adapter that converts the SFF-8087 to an SFF-8088. Then I use the SFF-8088 to SFF-8088 cable to connect the enclosure and the controller, but drives fail and need to be rebuilt.
The adapter in the enclosure is fine. I meant an add-on (i.e. they make internal-to-external bracket adapters that fit a PCI bracket). They're unstable with SATA drives. Read the next part, as it's critically important.

If you're talking about something like this used inside the MP, then that's the problem. They are likely to be found in the enclosure though (some use break-out cables, others to 4x SATA per SFF-8088 port).

To use an internal port on the card with an external enclosure, you need to use one of these. It's the longest version you can use with SATA drives as well, so DO NOT get a longer cable, or you'll still have the same problem.

It's what I was trying to explain earlier with the voltages used by SATA and SAS drives. Such adapters are fine with SAS, as it runs at 20 volts, but not with SATA, which is 600mV max (less than 1 volt). That's also why the cable length is critical.

What if I use the external SFF-8088 port on the 1680ix instead of the internal port,
and connect the SFF-8087 directly to the backpanel that links to the HDDs?
Use the external cable between the SFF-8088 port on the Areca and the enclosure. Get rid of any adapters you may be using. It's simple and direct.

If you have to use an internal port (or will in the future for expansion), then use the internal to external cable I linked (SFF-8087 to SFF-8088).

Here is my current RAID setup:
4x Crucial 64GB SSDs in a RAID 0 array (64KB stripe) for OS X, connected to the 1680ix
Are these working fine?

I'd use a 4x 2.5" backplane enclosure that fits in the empty optical bay, and use a break-out cable. About $90 USD (cheaper than the HDD kit offered by MaxUpgrades for using 3rd-party cards with the HDD bays).

Then run the mechanicals in the external enclosure.

It would work, and be the most cost-effective way to do it. BTW, I'd get enterprise HDDs for use with the RAID card. Modding the TLER values is fine for backup, but I wouldn't trust it for a primary array.

Also make sure you have a good backup solution (doubly critical with RAID 0) and a UPS.

4x WD Black HDDs split into 3 volumes: a 1600GB RAID 10 array (64KB stripe) for storage, 500GB RAID 0 (128KB stripe) for editing, and 300GB RAID 0 (64KB stripe) for scratch, connected to the 1680ix using the MaxUpgrades backpanel attachment.
You've lost me a bit here.

Is this 4x drives that have been split into multiple partitions? Or are there other drives involved here?
 