Is it possible with the Areca card to combine the disks in that external cabinet (8) with the internal disks (4) to get a 12-disk RAID 6 setup?
(I have understood from other threads that you can connect the internal disks to a SAS RAID card.)
Yes, this is possible.
But to use the internal HDD bays, you'd need an adapter sold at one place, MaxUpgrades (here). Tad pricey, but it works.
What is the best way to arrange the cables from the internal SAS connectors to the external cabinet? As long as you have one expansion slot free, I see a possibility, but if that one gets occupied, do you have to drill extra holes?
Run them out of a PCI bracket if at all possible. If, however, you do have all the slots filled, then you would have to modify the case (cut or drill holes to fit the SFF-8087 connectors through).
While the Areca cards are good, there are others that work as well. The Highpoint 4322 has served me well for about 18 months; the downside is that their customer service is lacking. ATTO makes good cards as well, but $$$. I like the idea of CalDigit, but they seem to be twitchy.
Areca is the ODM for Highpoint's RR43xx series. Unfortunately, however, Highpoint doesn't have the best support out there (ask those who have one how difficult it's been to obtain the EFI boot portion of the firmware).

Nor do they include internal cables (SFF-8087 to 4×SATA fan-outs). This can up the cost in the end (they typically go for ~$30 USD each, so for a 12-port card, that's ~$90 USD worth of internal cables that aren't in the box).
I also like the fact that you can upgrade the cache via the DIMM socket (watch the ranking though, as it needs to be 8- or 16-ranked; 4 won't work).
As for SAS drives, get ready for sticker shock. Good SATA drives can be just as reliable, with good throughput, without the need to sell a kidney.
Fortunately, you don't have to use SAS disks.

SATA will work (much cheaper, and the drives come in larger capacities), but you do have to watch the HDD Compatibility List, as SAS cards are picky, and the cable length has to be kept to 1.0 meter (so it can be a PITA for placement of the enclosure).
Typically only enterprise versions will work, though with the WD brand, you can "cheat" and use the TLER Utility on the consumer lines to change the error-recovery timings for RAID. Great for backup arrays and/or tight budgets (but I wouldn't recommend this for a primary array).
...Do you really need that much storage?...
It would make sense if the usage is for video/graphics work (they burn through capacity like mad as I understand it).
Otherwise, it is definitely in the OP's interest to carefully examine the needed capacity requirements.
What is the RAID 5/6 rebuild time going to be if using 2TB "blocks"?
Using the largest disks possible as RAID 5/6 building blocks leads to the longest rebuild times (assuming the disk bandwidth is fixed). That is the time during which the RAID is running at reduced protection. It is roughly going to be about the time to zero and initialize a 2TB disk, and that takes a while.
~1hr per TB with RAID 5 (128k stripe size). RAID 6 is going to be a bit longer (1.25hr/TB or so, IIRC). With recent drives anyway (i.e. WD RE3 & Seagate ES.2's, ~1.5yr-old models now).
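
For a rough sense of what those figures mean in practice, here's a back-of-envelope sketch (Python; the per-TB rates are just the ballpark numbers above, not guarantees):

# Back-of-envelope rebuild-time estimate, using the rough per-TB
# rates quoted above (~1 hr/TB for RAID 5, ~1.25 hr/TB for RAID 6).
# Real times vary with card, stripe size, drive model, and array load.

HOURS_PER_TB = {"RAID 5": 1.0, "RAID 6": 1.25}  # assumed ballpark rates

def rebuild_hours(disk_tb, level):
    """Estimated hours the array runs degraded while rebuilding
    onto one replacement disk of the given capacity."""
    return disk_tb * HOURS_PER_TB[level]

for level in ("RAID 5", "RAID 6"):
    print(f"2TB disk, {level}: ~{rebuild_hours(2.0, level):.1f}h degraded")
# Output:
# 2TB disk, RAID 5: ~2.0h degraded
# 2TB disk, RAID 6: ~2.5h degraded

So with 2TB members, figure on a couple of hours or more at reduced protection per failed disk, longer if the array is busy while it rebuilds.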
The other issue with RAID 5/6 is that you do not want to unexpectedly lose power. They both require writing a logical data block out to multiple disks as a single transaction in order to keep a coherent representation of the data.
Definitely not.
A UPS is a necessity with parity-based arrays rather than an option. The card battery is a good idea too, but it has limitations (i.e. the cache can't contain all of the data), whereas a UPS can do better, assuming it has adequate run time. Ideally, users should run both.
If you don't need a continuous 10-12 TB of disk space (one logical volume), then multiple external enclosures should work. I'm not sure it would be wise to have the internal drives be members of a logical volume along with the external ones.
It works, and you can still recover. Arecas even have a hidden recovery function that allows arrays that would otherwise be toast to be rebuilt. There are limits of course, but it's better than other makes out there, in my experience.
The other issue is that you're sending all of your data through one PCIe slot. You may not care about speed (some kind of online digital archive, perhaps), but you also have a single point of failure.
At this point, you're getting into a second card and an identical array. Much more fault tolerant, but it tends to be on the really expensive side (I typically only see this on servers, not workstations).
So this is getting into SAN territory (i.e. commonly found using FC based gear).
Again, if speed isn't an issue, you can use one of the two Ethernet ports as a dedicated iSCSI connection (a modest 4- or 8-way Gigabit switch to 4 to 8 storage units, and it wouldn't consume a PCIe slot). As long as you only need Gb/s amounts of data, putting all of it on that single pipe will work out. Additionally, backing up 10TB is going to be a pain (and again, easier if it's not one huge logical volume).
Possible, as is ATA over Ethernet (if network access isn't necessary). Either way, you're dealing with a storage server independent of the system. Speed can even be assisted by teaming the Ethernet ports.
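
To put rough numbers on that single-GbE pipe (a sketch; the ~80% effective-throughput figure is my assumption, and real iSCSI/AoE efficiency varies):

# Rough math on moving storage traffic over one Gigabit Ethernet link.
LINE_RATE_MB_S = 1000 / 8              # 1 Gb/s = 125 MB/s theoretical
EFFECTIVE_MB_S = LINE_RATE_MB_S * 0.8  # ~100 MB/s; 80% is an assumption

volume_tb = 10                         # e.g. backing up a 10TB volume
hours = volume_tb * 1_000_000 / EFFECTIVE_MB_S / 3600
print(f"~{hours:.0f} hours to move {volume_tb}TB over a single GbE link")
# Output: ~28 hours

Teaming two ports roughly halves that, which is part of why smaller volumes (or a faster interconnect) matter at this scale.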
Something the OP could think about though.
