@NANOFROG:
Thanks a lot for your reply. I don't think I've ever received such a detailed and helpful reply in a forum before, so KUDOS :D

That's a lot of info to process and I need to think it through.
It took a little over 3 hrs to write, so it's definitely going to take some time to digest. ;)

I was actually surprised to see you replied this quickly (expected ~2 - 3 days actually).

I am not particularly fond of the idea of separating my SSDs out of the RAID array and making them standalone volumes. If there really is no speed gain from putting them in a RAID 0, I at least gain a single volume with more space.
It was based on cost reasons (better use of what you already have, so you didn't have to run out and get another SSD for scratch and possibly a 2nd for Windows).

Using 2x lower cost units for cheaper capacity is a good idea if you've the bays available (sometimes works out, sometimes it doesn't, so you'll have to cost it out and see for the specific parts you're after).

So that being said, I want to keep the SSDs in a RAID array. Whether on the ICH or the RAID card, I don't care, as long as I will not lose any bandwidth on the ICH.
ICH can't really handle more than a 2x disk SSD stripe set, but you will get better performance on the RAID card, and it has the ports (and comes with the cable needed to connect them).

As for your question how my ODBs are currently being used:

The original optical drive is *not* connected to the ICH, it is connected to a separate PCIe SATA controller. The two SSDs on the lower ODB are connected to the ICH. Together with the four other drives in the internal drive bays that makes 6 drives total on the ICH, so the answer to your question is *yes*, all 6 ICH connections are in use (by 2 * SSD and 4 * HDD).
I figured this was the case, from what you posted, but needed to be sure (you were clear enough that the optical disk was external - figured USB though to keep it bootable for multiple OS's).

As for booting into Windows - that would be nice to have, but it's not important, really.
All you need is a single disk on the ICH, which you can do at any time (cheap too, all things considered).

Also, point taken re: enterprise grade disks. Budget was updated accordingly. Don't tell my wife though!
I presume this system is being used to earn a living, so there's your logical explanation. ;) Hopefully this tactic will work, but keep a bag packed and be ready to run for it just in case... :eek: :D :p

The only concern I have with your recommended setup is my Windows boot disk (which also serves as a Time Machine destination). I really don't know where I should put it, now that all internal bays are reserved for the RAID controller and the optical drive bay is consumed by the two SSDs.
Get TM off of that disk, and use the Drobo as your backup solution. If you want to keep TM as well (i.e. backup of the Home folder), then use a separate disk (external, say via the eSATA card).

What size is the Windows disk?

ATM, I'll presume it's 3.5". Unfortunately, I'm not seeing a ready-made mount to stuff 2x 2.5" + 1x 3.5" drives in a single optical bay.

You may need to use the mount you have for the SSD's, and set the 3.5" on top, via something you can do DIY (i.e. use the metal tray off of an old optical drive; just drill some holes).

Other materials can be used as well, but there will be more work getting it sized properly (need to cut as well as drill holes), such as bare PCB material, thin plywood (such as you'd find in a hobby shop for model plane building), or thin plexiglass (hard plastic sheet).

You could also try one of these, but I don't know about the height when stacked on top of the 2.5" mount.

Another note: SSD's don't have any moving parts, so they can be stuffed anywhere with things like zip ties and Velcro.

One last alternative would be to use the 4x 2.5" backplane cage, get a 2.5" disk for Windows (what I planned in the last post), and attach it to the ICH via a right-angle SATA cable (means buying a disk; an SSD or laptop mechanical will fit the bill in this case).

You have options, but it means either buying a disk (nice and clean though), or possibly making a mount. Your choice. ;)

Oh, and one more important question:

You guys mentioned the RAID controller should go into PCIe slot #2 (I guess because it is 8x, so has higher performance?). The thing is, I have one of those graphics cards that occupy the first *two* PCIe slots. Will that be a problem? Because I can't lose that card!
You may have a problem due to the slot configuration in the MP.

Let me explain:
The chipset (X58) has 36 lanes, 32 of which are dedicated to Slots 1 and 2 (16x each), which leaves 4x lanes total. So what Apple chose to do, was use a PCIe Switch in order to share those lanes for Slots 3 and 4.

What this means for slots 3 and 4 is that if they're used simultaneously, the speed per card is reduced due to the switching back and forth (idle time + switch latency). Not a problem if you make sure both cards are not running at the same time (or at least keep such instances to a minimum = reduced negative effect in this situation).

Now all the slots in your system are PCIe Gen 2.0, which means each lane tops out at 500MB/s, so long as the card is also PCIe Gen 2.0 compliant (you need to check this). If not, your speed will be cut in half.

In the case of the RAID card, it is Gen 2.0 compliant, so it would max out at 2GB/s in slot 3 or 4 (usable to start with without throttling on the slot, but may be a problem as you scale up - using SSD's, not mechanical). This is based on 500MB/s for the OS array + 250MB/s for a scratch disk + 350MB/s for the RAID 5 (worst case starting point from what you've posted so far).
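As a sanity check on that arithmetic, here's a rough back-of-the-envelope sketch in Python (the per-lane figure and the per-array loads are the estimates from this post, not measured values):

```python
# Back-of-the-envelope PCIe bandwidth check for the RAID card in
# Slot 3 or 4 of the Mac Pro. All numbers are the rough estimates
# from this post, not measurements.

gen2_per_lane = 500      # MB/s per lane, PCIe 2.0
lanes = 4                # Slots 3 and 4 are 4x electrically
slot_ceiling = gen2_per_lane * lanes     # 2000 MB/s for a Gen 2 card

# Worst-case aggregate load from the planned arrays (MB/s):
load = {
    "SSD boot/app stripe": 500,
    "scratch disk":        250,
    "RAID 5 array":        350,
}
total = sum(load.values())               # 1100 MB/s

# ~900 MB/s of headroom remains, which shrinks as you scale up
# with SSDs (as noted above).
print(f"ceiling {slot_ceiling} MB/s, load {total} MB/s, "
      f"headroom {slot_ceiling - total} MB/s")
```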

What I'd recommend doing, is trying out your second graphics card in Slot 3 or 4 and see how it goes (I presume the eSATA card sits on one of these as well, which is why I mentioned that these two slots are shared).

It would also be useful to know what exactly the 2nd graphics card is, and what you're doing with it (remember, graphics cards cannot even saturate a 16x lane Gen 1 slot).

Details on the eSATA card might help too, though Slot 3 or 4 is the only logical location.

Now... those old drives are in a software RAID 0 array. What if I unplug them and put them in a cheap external eSATA case (I do have an eSATA controller)? Will OS X recognize the software RAID array and let me use it? Any recommendations here?
You move the SSD's to the card.

For the existing mechanical disks, you have a few options:
Depending on the eSATA card (both throughput and if it supports Port Multiplier enclosures or not).
  1. Get a Port Multiplier enclosure (Sans Digital TR4MP; it even comes with a 6.0Gb/s card and is stated to work with OS X <driver support only>), and stuff them in there (useful for another backup location, archival storage <i.e. movie + music library>, and clones).
  2. Use separate (single disk) eSATA enclosures for the above reasons (keeping OS clones is especially useful).
  3. Sell them (least desirable IMO, as you won't get much for them, and the usefulness as clones is immeasurable).

I just checked their website and also with their German distributor, but neither of them mentions that the 1880ixl shares one of the internal ports with the external one. They say I can connect up to 12 drives to the controller (8 internal, 4 external). Although the "8" in the name does actually suggest otherwise....

The price difference to the "true 12 port model" you mention is not that big, but you may know how wives react when you spend money on something that's not for them :D

So can we confirm this information somehow?
It's what they've done with the 1680 series. Yes, you can connect drives to it, but they were shared, not independent (can affect throughputs due to shared bandwidth; and the disk count matters significantly). The external port was really meant for using SAS Expanders (can allow you to run up to 128 disks on one card; they're just not that fast given they're sharing 4x ports for bandwidth compared to a 1:1 ratio).

But you should contact Areca directly (phone or email; they're located in Taiwan, but do speak English). You should be able to communicate with them well enough, I think (their English isn't that great, but for difficult issues you can usually figure out the answer after reading it carefully a couple of times; simple questions are fairly easy to interpret). You'll see what I mean... ;)

But they do know what they're talking about if you have a problem, so don't panic that you're on your own (like you would be with other companies like Highpoint). :)
 
Hey Nanofrog,

I am not at home right now and will only return on Saturday. I won't find the time until then to get back to you in detail. Will do on Saturday. Thanks! :)
 
Ok,

it took me a while to get back to this, but here I am, offering more questions to be answered :D

First off, I am not making a living off my Mac Pro; all of this is a hobbyist (some call it prosumer) setup, more or less. I do make a living in IT (networking/security) and I use my Macs for that as well, but my actual profession does not justify this kind of setup. So... money is an issue here, although not a big one.

As for how I will set up my drives later on, I will figure that out later. I may end up just putting a 3.5" hard disk on top of the two SSDs in the ODB. Still, I am concerned about all the cabling.

Anyways. My planned setup after reading all of this:

Code:
[B]Level 1:[/B]
2 * SSD boot/app disk, RAID0 on the Areca, placed in ODB.

4 * 1 or 2 TB enterprise disks in the drive bays on the Areca with MaxUpgrades kit for intermediate data storage.

1 * 2 TB hard disk for time machine and bootcamp.

1 * external RAID box for backups

Now I am still having questions on the Areca cards:

First of all, I was figuring I could just place my old three 1 TB drives in the external RAID box as single drive/JBOD. That way I could still use Apple's software RAID on them to be able to migrate my data over to the internal disks.

However, the guys who are selling me the Areca stuff said that would not be possible. The Areca card would only be able to either use RAID or single disk/JBOD. They said I can not create RAID arrays on some drives and put other drives into single disk mode or JBOD mode.

Can you guys confirm this?


As for the graphics card issue blocking slot 2: I was wrong. Apple was intelligent enough to leave enough room between PCIe slots 1 and 2 so that a large GPU can fit without blocking slot 2. So I can happily connect the Areca in slot 2. Yay.

Your suggestion to put the Time Machine and Boot Camp disk on my eSATA controller is good and I wanted to do this, but unfortunately that card needs to have drivers installed, so I can neither boot into Boot Camp, nor can I do a Time Machine restore when booting from the Mac OS DVD. Actually, that's a real shame. It seems to be hard to find an eSATA controller that can do this. The only one I could find does not support port multipliers.

Now for the bad part. I was given a quote for the Areca (12 lane version with 1 GB of cache RAM), along with a simple Areca RAID box and a couple of hard disks (4 * 2 TB enterprise grade Hitachi stuff), and it knocked my socks off. It amounts to nearly 2000 EUR (I think that's about 2,600 US$).
 
Your suggestion to put the Time Machine and Boot Camp disk on my eSATA controller is good and I wanted to do this, but unfortunately that card needs to have drivers installed, so I can neither boot into Boot Camp, nor can I do a Time Machine restore when booting from the Mac OS DVD. Actually, that's a real shame. It seems to be hard to find an eSATA controller that can do this. The only one I could find does not support port multipliers.

Unfortunately, this is correct. There are no cheap, bootable eSATA cards with PM support out there.
However, you could still use such a card and pick an external enclosure with both eSATA and USB or FW. I've got such a setup running, and I simply connect the enclosure via FW when I have to restore my system. That doesn't happen really often, so speed isn't a concern.
 
There are probably cards with external SATA/SAS connectors you could boot from (Highpoint comes to mind); however, these generally don't use PM-capable ports either, but rather miniSAS connectors, which allow you to connect multiple drives.
 
Anyways. My planned setup after reading all of this:

Code:
[B]Level 1:[/B]
2 * SSD boot/app disk, RAID0 on the Areca, placed in ODB.

4 * 1 or 2 TB enterprise disks in the drive bays on the Areca with MaxUpgrades kit for intermediate data storage.

1 * 2 TB hard disk for time machine and bootcamp.

1 * external RAID box for backups
This looks fine.

First of all, I was figuring I could just place my old three 1 TB drives in the external RAID box as single drive/JBOD. That way I could still use Apple's software RAID on them to be able to migrate my data over to the internal disks.
Assuming you make no changes to the current configurations of those disks, you should be able to do it (should be able to read the GPT partition scheme stored on the disks themselves and run once moved to an eSATA card).

But if the configuration changes with the current disks (anything under Disk Utility that causes an initialization procedure), it will wipe any existing data.

I'm presuming you want to move the existing disks to an external enclosure (keeping current data intact), create the array, then copy data from the original disks to the array.

If this is incorrect, let me know (makes things simpler when putting all of this together as there's no data loss to worry about - restored from separate backup source/s instead).

However, the guys who are selling me the Areca stuff said that would not be possible. The Areca card would only be able to either use RAID or single disk/JBOD. They said I can not create RAID arrays on some drives and put other drives into single disk mode or JBOD mode.
They presumed you wanted to hook all of this up to the Areca RAID card. And in that instance, it's correct.

You can set their cards for RAID and still have single disk operation (Pass-Through = single disk), but you cannot run RAID and JBOD simultaneously on one card (you can only choose one operation type per card, JBOD or RAID, which is what they were trying to explain). That particular setup using Areca RAID cards would require 2x cards (one of them would be best as an eSATA card, or 1x RAID card + the ICH for the JBOD).

In your case, get an eSATA card (read on - I'll link what you need). :eek: :D

As for the graphics card issue blocking slot 2: I was wrong. Apple was intelligent enough to leave enough room between PCIe slots 1 and 2 so that a large GPU can fit without blocking slot 2. So I can happily connect the Areca in slot 2. Yay.
Nice that this will work out for you.

I thought it was that both Slots 1 and 2 were occupied.

Your suggestion to put the Time Machine and Boot Camp disk on my eSATA controller is good and I wanted to do this, but unfortunately that card needs to have drivers installed, so I can neither boot into Boot Camp, nor can I do a Time Machine restore when booting from the Mac OS DVD. Actually, that's a real shame. It seems to be hard to find an eSATA controller that can do this. The only one I could find does not support port multipliers.
Highpoint eSATA for Mac.
  • Bootable
  • Supports PM chips

BTW, unless OS X will be on a single disk attached to the ICH (would allow you to run Boot Camp), you will need a separate disk for Windows (attach it to the ICH to make sure it will boot).

Now for the bad part. I was given a quote for the Areca (12 lane version with 1 GB of cache RAM), along with a simple Areca RAID box and a couple of hard disks (4 * 2 TB enterprise grade Hitachi stuff), and it knocked my socks off. It amounts to nearly 2000 EUR (I think that's about 2,600 US$).
Unfortunately, this sort of stuff isn't cheap. :( But you can transfer it from system to system, so it lasts a good while (makes it cheaper over time).

Provantage will ship to you (they do handle international orders), and they sell both Areca and Sans Digital (you will need to ask about the eSATA version, as they don't have those models listed; just USB and MiniSAS units).

There's got to be somewhere else you can get it from without getting ripped off.

Unfortunately, this is correct. There are no cheap, bootable eSATA cards with PM support out there.
However, you could still use such a card and pick an external enclosure with both eSATA and USB or FW. I've got such a setup running, and I simply connect the enclosure via FW when I have to restore my system. That doesn't happen really often, so speed isn't a concern.
Highpoint claims the eSATA for Mac does. I've not used it, so I can't confirm that it works as advertised, but it's the only option.

The next step is a bootable RAID card set in JBOD mode (any existing data will be wiped, which ATM, I suspect is an issue). Even if the data isn't a problem, the cost would be (~$300USD or so).

ATTO's 6.0Gb/s non-RAID HBA may also be bootable, but it's $400USD.
 
nanofrog to the rescue?

Hi all -

I've read nanofrog's posts with wonder and appreciation (although a limited degree of comprehension) ;) I wonder if I might impose just a bit more on your kindness.

A tech friend built me a RAID setup for video production - that has unfortunately been plagued with failures in its one year of operation. I've despaired of asking him for further advice.

I wonder if you can put your finger on the fatal flaws?

Here's my current configuration:

- Mac Pro from 2007 (2x 3 GHz Xeon, OS 10.5.8, 16 GB internal RAM, ATI Radeon HD 3870 in Slot 1, Sonnet Tempo SATA E4P in Slot 2, PCI-to-PCI bridge in Slot 3)

- Areca 1221x mini SAS controller in Slot 4

- this eight bay enclosure:
http://www.pc-pitstop.com/sata_enclosures/scsat84xt.asp

- WD20EADS Caviar Green drives (7 channels at RAID 5) (8th bay used as a pass-through)


And the problems:

1) Many drives have reported failure (at least six of them in just a few months.) One brand new drive reported failure within a few hours of insertion. Curiously, these 'failed' drives, when dropped into a separate SATA enclosure, can be reformatted by Disk Utility - which then says they are OK.

2) Kernel panics when system is left unattended, and not working on anything. I've come back to office many mornings to find system crashed overnight.

3) The RAID volume has disappeared from the desktop repeatedly and has had to be rescued numerous times by DiskWarrior directory rebuilds. DiskWarrior has found tons of Overlapping Files in these rebuilds.

I've contacted Areca, and they haven't been able to help. I went through firmware updates with them; replaced my entire enclosure on suspicion of power supply problems; and taken note of their protest against non-enterprise drives. (My tech friend says he has built numerous arrays using the WD Caviar Green drives and swears by them.)

I'd be TRULY GRATEFUL for advice getting this thing back up and running.

Is my friend truly & profoundly mistaken about using the Caviar drives? Should the Areca controller be moved to a different PCI slot - or is it a piece of dangerous junk?

Thanks in advance for any help you are able to give me!
 
Is my friend truly & profoundly mistaken about using the Caviar drives?

That's exactly the problem. Those Green drives don't have TLER enabled (it can be enabled, though; the drives you've got are the last 2TB Green models that support this), which is why they drop out of the RAID array.

For a hardware RAID, you have to choose enterprise drives that are made for it, such as the RAID Edition drives by Western Digital.

For all other questions, let's wait for nanofrog. :D
 
I wonder if you can put your finger on the fatal flaws?

Here's my current configuration:

- Mac Pro from 2007 (2x 3 GHz Xeon, OS 10.5.8, 16 GB internal RAM, ATI Radeon HD 3870 in Slot 1, Sonnet Tempo SATA E4P in Slot 2, PCI-to-PCI bridge in Slot 3)

- Areca 1221x mini SAS controller in Slot 4

- this eight bay enclosure:
http://www.pc-pitstop.com/sata_enclosures/scsat84xt.asp
This is all fine.

Areca makes some of the best and fastest cards available (definitely the best price/performance ratio right now).

The enclosure, though butt ugly, is functional.

- WD20EADS Caviar Green drives (7 channels at RAID 5) (8th bay used as a pass-through)
This is without a doubt your problem.

They're consumer units, which means they do not have the proper recovery timings programmed into the firmware (read up on the TLER wiki for more information). In the past, you could get around this by running the WDTLER utility and changing the default timings. I'm not sure these disks will allow it (I suspect not), as currently offered models no longer let users do this (WD didn't have RAID Edition versions of the Green models before, but now that they've finally released them, they disallowed the utility to keep it from cutting into the profits of the RE versions). But it's worth a try (your disks may be old enough that it will work).

You'll need to run a DOS boot disk (you can either try to download the file, or try something like the Ultimate Boot Disk, as that contains the TLER utility IIRC - worst case, add it to the ISO before you burn it).

Others have had problems with their systems and such disks, so I'd recommend using a PC to give this a go (put the drives on a SATA port, boot, run the utility, and give it a shot).

If it won't work, you will have to get enterprise grade disks (which I'd recommend anyway, as there's more than just the firmware that's different): better specs (meant to take the abuse of RAID), and additional hardware (feedback circuits) to keep the disk from going haywire and destroying the platters (i.e. the heads physically smack the platters when vibration exceeds safe limits, which is more likely to happen with consumer units).

BTW, Western Digital is the only company whose disks this could ever be done with (none of the others have ever released such a utility).

And the problems:

1) Many drives have reported failure (at least six of them in a just few months.) One brand new drive reported failure within a few hours of insertion. Curiously, these 'failed' drives, when dropped into separate SATA enclosure can be reformatted by Disk Utility - which then says they are ok.

2) Kernel panics when system is left unattended, and not working on anything. I've come back to office many mornings to find system crashed overnight.

3) The RAID volume has disappeared from the desktop repeatedly and has had to be rescued numerous times by DiskWarrior directory rebuilds. DiskWarrior has found tons of Overlapping Files in these rebuilds.

I've contacted Areca, and they haven't been able to help. I went through firmware updates with them; replaced my entire enclosure on suspicion of power supply problems; and taken note of their protest against non-enterprise drives. (My tech friend says he has built numerous arrays using the WD Caviar Green drives and swears by them.)
This is the behavior I'm always going on about when using consumer disks under a proper RAID card - it's unstable as hell.

Now that you know what it is first-hand, you won't forget it.... ;) So the lesson should really stick. :eek: :p

Unfortunately, learning this way tends to be expensive too. :(

Is my friend truly & profoundly mistaken about using the Caviar drives? Should the Areca controller be moved to a different PCI slot - or is it a piece of dangerous junk?
Absolutely.

Consumer disks are fine for software based controllers (they're stable at any rate), so long as the level implemented does not use parity (software RAID cannot handle the write hole associated with RAID 5 or 6 - Period).
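For the curious, the write hole is easy to demonstrate with a toy sketch (Python, purely illustrative; real arrays work on whole sectors and stripes, not nibbles). Parity is the XOR of the data blocks, so a crash between the data write and the parity write leaves a stripe that silently rebuilds garbage:

```python
# Toy illustration of the RAID 5 "write hole". Parity is the XOR of
# the data blocks, so any single lost block can be rebuilt from the
# others - unless data and parity have fallen out of sync.

from functools import reduce

def parity(blocks):
    """XOR all blocks together to produce the parity block."""
    return reduce(lambda a, b: a ^ b, blocks)

data = [0b1010, 0b0110, 0b1100]   # data blocks on 3 disks
p = parity(data)                   # parity block on a 4th disk

# Healthy stripe: a lost data block is recoverable from the rest.
assert parity([data[1], data[2], p]) == data[0]

# Write hole: new data hits disk 0, then power fails *before* the
# parity block is rewritten. Parity is now stale.
data[0] = 0b0001
rebuilt = parity([data[1], data[2], p])
assert rebuilt != data[0]          # reconstruction silently returns garbage
```

A hardware card with a battery backup unit can replay the unfinished write after power loss, which is why parity levels belong on proper RAID cards.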

BTW, I hope you're at least running a decent UPS, and ideally, a Battery Backup Unit (for the RAID card) as well.
 
Options going forward

Hello, and thanks so much for your weigh in!

Given all your cautionary notes, I don't think I'll try to change the firmware on the drives.

While I lick my wounds and think about when I will be able to afford enterprise replacement drives, can you answer me this:

- what if I reformatted my eight Caviar Greens into a RAID 1 (or 1+0?) config? Or even paired them off into two-drive mirrored sets? It wouldn't be as space efficient, obviously. But would it be stable under the Areca card?

- if not that: could I at least use them as individual pass thru disks, or JBOD? (i.e. is there a way the RAID controller and my already-purchased hardware can play together nicely)

Otherwise I have no idea what to do with all of those little suckers.

Thanks again for your help. You are right - expensive lessons are not soon forgotten!
 
Hello, and thanks so much for your weigh in!

Given all your cautionary notes, I don't think I'll try to change the firmware on the drives.
As you already have them, you might as well give it a try. Once stable, you can replace them with enterprise disks in say a year, when you've had a chance to recover from what you've already spent (card and enclosure are fine, so that's not a waste of funds at all, and will serve you well for some time ;)).

Just keep in mind, they're not as robust as enterprise units (typical MTBR <Mean Time Between Replacement> of 3 years to reduce the chance of them dying on you while in use in a RAID set - they could still be used for archival/backup use, or clones, so long as they pass a full surface scan without a lot of errors on the platters).

Which is why I mentioned a year between now and replacement, assuming they'll take a change to the TLER timings to begin with. If not, the other uses described are still valid (backup, archival, and clone use).

- what if I reformatted my eight Caviar Greens into a RAID 1 (or 1+0?) config? Or even paired them off into two-drive mirrored sets? It wouldn't be as space efficient, obviously. But would it be stable under the Areca card?
The only way those disks will be stable under the Areca, is to adjust the TLER settings. The RAID level used will not matter. You'll still experience the random drop-outs you've been seeing until the firmware is adjusted or you replace them with enterprise grade units.

- if not that: could I at least use them as individual pass thru disks, or JBOD? (i.e. is there a way the RAID controller and my already-purchased hardware can play together nicely)
You could give this a go, but it's still possible you'll experience issues (TLER settings should be fine for this, but I've not tested consumer disks in this manner; I just won't use consumer grade mechanical disks on a RAID card - ever).

SSD's are a different matter (consumer units have been used successfully in stripe sets <should be fine in 1 and 10 configurations as well - have seen successful tests, but not sure about long term stability; but again, should be fine>, and MLC's limitations are not currently suited to parity based arrays - 5/6/50/60). There are enterprise grade SSD's, but they're also horribly expensive.
 
battle plan

All right, nanofrog - I will try the WDTLER project. (I'll have someone do it for me, since I have no PC and no expertise with one.)

- I've been hunting for a place to download the WDTLER utility?

- Are you aware of any issues with WDTLER and drives over 1 TB in size? (mine are 2 TB WD20EADS units with 2009 and 2010 manufacture dates)

- Will WDTLER endanger data on the drives? or can I leave my data on them?

- Any chance of WDTLER bricking my drives?

And, two other points I wasn't quite clear on:

- is my Areca 1221x card ok in Slot 4, or should it be in another slot? (My system config appears earlier in this thread)

- I'd like to use one bay of my enclosure for pass-thru drives. I have a variety of SATA drives (WD, Hitachi, maybe others) lying about. Some of them I have bought bare, some of them I have pulled out of their G-Tech enclosures. My hope is just to pop them in, work with them, and write/copy to them as needed. I think you said TLER settings were NOT a worry with individual pass-thrus?

thanks in advance!
 
- I've been hunting for a place to download the WDTLER utility?
Here you go: WDTLER.zip (direct download) :D

- Are you aware of any issues with WDTLER and drives over 1 TB in size? (mine are 2 TB WD20EADS units with 2009 and 2010 manufacture dates)
No. :)

- Will WDTLER endanger data on the drives? or can I leave my data on them?
No, it won't harm your data. :)

So yes, you can leave it there, though it's good practice to make a backup before proceeding with the firmware adjustment or any other change (a just-in-case mentality, as there are odds, though low, that the disk will die during the procedure or upon the next attempt to access it).

- Any chance of WDTLER bricking my drives?
Not that I'm aware of (never seen it, or recall hearing a disk died as a result of a TLER adjustment). :)

- is my Areca 1221x card ok in Slot 4, or should it be in another slot? (My system config appears earlier in this thread)

- I'd like to use one bay of my enclosure for pass-thru drives. I have a variety of SATA drives (WD, Hitachi, maybe others) lying about. Some of them I have bought bare, some of them I have pulled out of their G-Tech enclosures. My hope is just to pop them in, work with them, and write/copy to them as needed. I think you said TLER settings were NOT a worry with individual pass-thrus?
Yes, Slot 4 is fine - you can use it for a 7 member RAID 5 of Green drives without throttling. :)

Here's why:
  • Green disk avg. sequential read = ~77MB/s per disk
  • RAID 5 throughput ~= n * single disk throughput * 0.85
Put in the numbers, and you get: 7 * 77 * 0.85 = 458 ~= 460MB/s.

Now recall that the card's PCIe spec is 1.1, so the lanes will only operate at 250MB/s. 4x of them = 1GB/s. This means that even with the other disk as a Pass Through (8x disks total), you'll still be fine for what you're trying to configure. :D
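That arithmetic is simple enough to sketch (Python; the 77MB/s figure and the 0.85 scaling factor are the rough estimates above, not measurements):

```python
# Rough RAID 5 throughput estimate for the 7-disk Green array,
# compared against the card's PCIe 1.1 4x slot ceiling.
# All figures are the rough estimates from this post.

disks = 7            # members in the RAID 5 set
per_disk = 77        # MB/s avg sequential read per Green disk
scaling = 0.85       # efficiency factor used above

raid5_estimate = disks * per_disk * scaling   # ~458 MB/s

gen1_per_lane = 250  # MB/s per lane, PCIe 1.1
lanes = 4            # the card runs at 4x
slot_ceiling = gen1_per_lane * lanes          # 1000 MB/s

# ~458 MB/s of array traffic fits comfortably under 1000 MB/s,
# so the slot won't throttle this configuration.
print(round(raid5_estimate), slot_ceiling)
```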

The only thing you need to keep in mind for this configuration, is do not allow anything in Slot 3 to run at the same time if there's a card installed there.

If you were running more (and faster) mechanical disks or SSD's, this may (mechanical) or will (SSD's) change things in terms of throttling. As will running anything in Slot 3 simultaneously with the RAID card in Slot 4.
 