Read through the reviews for the 3510, and the majority of the issues seem to stem from motherboard incompatibilities in home-built systems and from Windows Vista.

The 3520 (the same card with an additional mini-SAS port) seems to get good-to-great reviews, and I have a feeling the lower price point of the 3510 is attracting some poor reviews due to user error.
I read them, and wanted you to. Not to dissuade you, but to point out a few things.
Primarily, what stood out to me were the complaints of drives dropping out. This is the type of behavior that happens with incompatible drives, hence the need to check the card manufacturer's site for its HDD Compatibility List.
RAID cards don't automatically work with any drive. I would also expect that those who had issues were using consumer drives, not enterprise drives. Combine that with the fact that they can go from just fine to DOA in under 2 months, and you have a really good reason to avoid them. This is why the enterprise drives are worth the additional $$. Fewer problems, and ultimately less expensive when you consider the need for returns, down time, and the costs associated with diagnostics, etc. Especially for a company. I'm assuming you will be the one who does all the support/repair, so there's no cost in this regard. But the $$ for returns adds up fast.
The 3520 will be fine, and it can run enough drives, since you would have a hard time fitting in any more than that. It's a Mac Pro, not a warehouse.
I really do understand the appeal of running a single OS drive, especially in the case of a RAID card failing. But I'm being swayed towards the 3520 and booting from the RAID card, simply because I'd have the option to expand the array to 6 internal drives, making both my OS and data access faster. Five 1.5TB drives in a RAID 5 would FLY, and having an extra drive as a hot spare gives me enough peace of mind.
It won't boot faster.

The card has to initialize first, then load the OS. If you look around, you might notice the boot times of a single drive vs. an array. The single drive is faster, and can typically boot in ~40-45 sec. The array can add 30s+ to that time. I've seen an array take over 2 min to boot.
Never mind the fact that the bare VR and MaxUpgrades sled setup is worth a pretty penny. Enough to cover the two 1.5TB drives that I'd need to run a barebones RAID 5 (I already own one).
If you go with the Velociraptor, go with the WD3000GFLS (cable-connect version), and use the mount from Maxupgrades if you want to install it in one of the sled locations.
It's cheaper (~$250 with the mount) vs. the WD3000HFLS (backplane model), which goes for ~$310.
If you're creative, you can mount it elsewhere by DIY means (something decent, not tape or anything). A Noise Blocker X-Swing might help as well: you mount it under one of the 3.5" drives attached to a sled. You might need to keep slot 4 unused, though. You'd have to check for clearance on this.
PCB and plexiglass are good materials to use for DIY mounts, and can be found online for sure, and maybe locally, depending on where you live. Even Radio Shack carries breadboard (a sort of tan-colored PCB with a lot of holes).
Remember, the Velociraptor is a 2.5" drive, and it does NOT need to remain in the aluminum carrier/mount. WD even sells it without any mount at all, as they don't run hot enough for one to actually be needed. (Small is easier to find space for, anyway.)
Or maybe I should just keep the VR as my system drive, get the 3520, and run a 4x1.5TB RAID 5 with a hot spare...
I would use the Velociraptor. (That's what I have, BTW).

You won't be disappointed.
If you don't leave the system unattended for days, you won't need the hot spare. Though it's nice to have, space in the MP is at a premium as it is, and you've already indicated you need every bay you can get for more than 4 drives.
Moving the optical drive into an external enclosure is also a viable possibility.
If you have an old 5.25" drive (CD, DVD, etc.), you can use the bottom plate from it to mount the drives on, rather than using Maxupgrades' optical bay mounting system. It's very similar, and quite expensive for what it is.
DIY skills will pay off if you're comfortable doing it, and it isn't hard.
2.5" drives can also free up some additional space. You can squeeze 4 into a single 5.25" bay. (Hot swap backplanes for 2.5" drives exist, and fit in a single bay).
How much different is the throughput on a 4- vs. 5-drive RAID 5?
4 drives should produce ~300MB/s realistically.
Add ~75MB/s for each additional drive.
A rough formula would be as follows:
(sustained drive throughput SDT * number of drives n) * 0.85 = ~RAID 5 throughput
I based the numbers on:
SDT = 88MB/s (typical of current enterprise drives)
n = 4
Keep in mind this isn't exact, but it gives a realistic estimate. In some cases, the efficiency factor can be as low as 75% (0.75).
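To make that arithmetic concrete, here's the estimate as a small Python sketch. The 88MB/s drive throughput and the 0.75-0.85 efficiency factor are the rough assumptions from above, not measured values:

```python
def raid5_throughput(sdt_mb_s, drives, efficiency=0.85):
    """Back-of-envelope RAID 5 estimate: (SDT * n) * efficiency."""
    return sdt_mb_s * drives * efficiency

print(raid5_throughput(88, 4))                   # ~299 MB/s, i.e. the ~300MB/s above
print(raid5_throughput(88, 5))                   # ~374 MB/s, roughly +75MB/s per drive
print(raid5_throughput(88, 4, efficiency=0.75))  # ~264 MB/s on the pessimistic end
```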
The jury is still out. Got some thinking to do!
Research, research, and more research.
I will need to boot multiple OS's; however, the majority of the time I will run the others through Fusion in OS X. So the three are OS X (of course), Linux, and Windows (Server 2008 and Vista).
The drive setup is as follows:
4 X25-Ms in RAID 0 (these will be the boot/app drives; they format down to ~75GB each, so I should get a total of 300GB to work with across all the OS's/VMs)
4 Raptors in RAID 5 (maybe 6) for data/workspace
All internal; two X25-Ms will fit into the space of one 3.5" drive, so the plan is to use those 4 in the second optical bay. I've already resigned myself to making a custom housing for them. The 4 Raptors will go in the 3.5" sleds in the main housing.
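As a sanity check on the usable space of that layout, here's a quick back-of-envelope sketch in Python. It assumes 300GB (Veloci)Raptors for the data array, which may not match the drives actually bought, and uses the standard one-drive-for-parity rule for RAID 5 capacity:

```python
# Rough usable-capacity estimate for the proposed arrays.
boot_drives, boot_gb = 4, 75    # X25-Ms format down to ~75GB each (from above)
data_drives, data_gb = 4, 300   # ASSUMED 300GB Raptors; swap in the real size

raid0_usable = boot_drives * boot_gb        # RAID 0: capacities simply add
raid5_usable = (data_drives - 1) * data_gb  # RAID 5: one drive's worth goes to parity

print(raid0_usable)  # 300 GB for the boot/app stripe, matching the figure above
print(raid5_usable)  # 900 GB for data/workspace under these assumptions
```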
So I'm looking for a card that can support the boot array as well as the storage array. Thoughts and suggestions are welcome.
Thanks for helping me hammer this stuff out! I'm slowly accumulating the parts (got the x25's... raptors next) for this dream machine and I'd love to hear any thoughts you have!
Highpoint won't work.
You'd need to use an Atto or Areca. Atto is more expensive, so I lean towards the Areca. Not to mention, the Areca is faster.
Take a look at the ARC-1231ML. 12 ports, not 8 (the bare minimum for your needs), a fast IOP, and the ability to upgrade the cache via a DIMM. Not exactly cheap at ~$650 or so, but it would scream with the drives you're considering. It can also operate at RAID 5 and 6, not to mention some other useful features, like a Partition Table backup. (You will truly love this feature if something ever goes wrong, as it will save your butt.)
I would truly recommend going with the separate OS drive. (See previous posts).

A Velociraptor would work well for this.
In reality, the SSDs aren't needed, and they have a limited number of write cycles: ~100k writes with current Flash technology. I'm not a big fan of RAID 0, and given the write-cycle limitations, I absolutely would not recommend it. In this case, figure 25k writes (the ~100k rating spread across the 4-drive stripe), then FAILURE. Just too much money to throw away. You'd be better off running the drives independently as OS drives (1 per OS) than going the RAID 0 route.
Velociraptors are good drives, but you might want to look at WD's RE3s (RAID Edition 3) if you need greater capacity. They're fast enterprise drives, and they come in large sizes, up to 1TB. WD is also still providing some level of customer support. Other manufacturers have completely eliminated it, leaving the end user swinging in the wind. Hitachi... Oh, did I say that one out loud?

Seagate's almost there too, even though their ES.2 enterprise drives are decent.
I'd recommend RAID 5 exclusively, for its balance of speed and redundancy, even for an OS array (if you do go this way). At the least, it will still work in a degraded (slow) mode if a drive goes out, and you won't lose the data.
If you wonder why, check out the costs for data recovery. The last quote I got on an 8-drive array was well over $20k!

And this quote is recent (~2.5 months ago).
