I realize that if all the drives were going full bore I'd saturate things... I think in practice that's unlikely for me (not saying it couldn't happen, just that if it *did* happen it would only be occasional)
You're the one who has to determine this, but at least you're aware of the potential. Others have only stumbled across this realization after spending a fair bit of cash, and for some it became a real problem.
My thinking on striping the SSDs was that two OWC 50GB drives didn't cost much more than a single 100GB drive. I personally thought 50GB might be a bit tight, but didn't want to pay a ton for two larger drives; I also knew I didn't really need the performance of striping, so I wasn't going to shell out the cash for them.
A striped set can be cheaper per GB than a single large SSD, but you have to check current pricing to be sure, and there's another "cost": it consumes an additional port.
I thought about just keeping the two SSDs separate, with one for OS/Apps and the other for scratch/temp space. But striping them seemed to offer more flexibility with regard to space: I'd end up with 100GB that I could leave as one volume, or carve up logically into one partition for OS/Apps and another for temp/scratch.
I don't recommend partitioning SSDs, given the write amplification issue. You want to have as much unused capacity available for wear leveling as possible.
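If you do go the striped route, the set can be built either in Disk Utility or from Terminal. Here's a minimal sketch of the Terminal route, assuming two bare SSDs at disk1 and disk2 (placeholder identifiers; check yours with `diskutil list` first, and note this erases both drives):

```python
# Minimal sketch: build a two-SSD stripe set under OS X.
# Equivalent Terminal command (10.6-era syntax):
#   diskutil appleRAID create stripe Scratch JHFS+ disk1 disk2
import subprocess

SSD_A = "disk1"  # placeholder -- confirm with `diskutil list`
SSD_B = "disk2"  # placeholder -- this operation erases both drives

subprocess.check_call([
    "diskutil", "appleRAID", "create", "stripe",
    "Scratch",    # volume name (your choice)
    "JHFS+",      # journaled HFS+
    SSD_A, SSD_B,
])
```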
If you want to use a separate SSD for OS/applications and scratch, that's fine. Just be aware that the scratch disk will die faster and need to be replaced much more frequently, as it's being used under a high-write condition (the one area where SSDs are weaker than any mechanical disk currently available).
Since the models you're interested in are MLC based, I'd figure 1 - 1.5 years as a replacement cycle for the scratch disk (this has been covered before; a search will turn up more detail than is in this post).
With the introduction of the 40GB model from OWC ($110USD last I checked), this isn't that big a deal IMO, if you're earning a living with the system. The improved workflow should allow you to earn additional profit above the annual SSD replacement cost for the scratch disk.
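To put the 1 - 1.5 year figure in perspective, here's a back-of-the-envelope lifespan estimate. Every number below is an assumption for illustration only; plug in your drive's actual NAND rating and your measured daily scratch traffic:

```python
# Rough scratch-SSD lifespan estimate -- all inputs are assumptions.
capacity_gb = 40            # the OWC 40GB model
pe_cycles = 5_000           # assumed rating for 34nm MLC NAND
write_amplification = 3.0   # assumed; heavy small-write scratch use hurts here
daily_writes_gb = 150       # assumed scratch traffic per working day

total_host_writes_gb = capacity_gb * pe_cycles / write_amplification
lifespan_days = total_host_writes_gb / daily_writes_gb
print(f"~{lifespan_days / 365:.1f} years")  # ~1.2 years with these inputs
```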
I'd love to know how striping SSDs ends up being slower than two separate drives...
Stripe size and the controller both have an influence here, so the details would be needed.
As for the stripe size, you'd need to experiment with different sizes and test the throughput for each (including real-world usage, not just AJA or any Windows benchmark software).
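A quick-and-dirty way to compare is to time a large sequential write/read on the volume, rebuild the set with a different stripe size, and rerun. A sketch (the path is a placeholder; the OS file cache can inflate the read number, so treat these as rough figures alongside your real application timings):

```python
# Crude sequential throughput test for comparing stripe-size choices.
import os
import time

TEST_FILE = "/Volumes/Scratch/throughput.bin"  # placeholder volume/path
CHUNK = 8 * 1024 * 1024    # 8MB per write call
TOTAL = 2 * 1024**3        # 2GB test file
buf = os.urandom(CHUNK)    # build the payload once, off the clock

start = time.time()
with open(TEST_FILE, "wb") as f:
    for _ in range(TOTAL // CHUNK):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())   # force data to disk before stopping the timer
print(f"write: {TOTAL / (time.time() - start) / 1024**2:.0f} MB/s")

start = time.time()
with open(TEST_FILE, "rb") as f:
    while f.read(CHUNK):
        pass
print(f"read:  {TOTAL / (time.time() - start) / 1024**2:.0f} MB/s")

os.remove(TEST_FILE)       # clean up the 2GB test file
```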
That sounds like a great option if I also wanted to do hardware RAID for the storage, but I'm not sure it buys me much in the price/performance department.
A stripe set is a form of RAID (it and JBOD are the bastard kids of RAID).
But if you notice, the first link is for a non-RAID HBA (it can't do RAID on its own, but it can be used with Disk Utility, which is why it's cheaper: there's less to it), while the second is a RAID card.
As for price/performance, you'd actually do well, as it opens up options and will increase your speed (it lets you get past the ICH limitations). In terms of raw cost, though, it's expensive compared to a standard SATA/eSATA card.
That gap diminishes, BTW, when you need a bootable SATA/eSATA card. The Highpoint model linked is the only bootable eSATA card out there (no SATA versions that I've seen; they're all driver support only under OS X). $230USD is a tad expensive for a 2-port card, no matter how you slice it IMO.
And let me re-state: I'm not a fan of Highpoint, and I'm hesitant to have you even try it. Seriously. Their support sucks, and if you have a problem, they're liable to tell you to go fly a kite (there's a recent thread on one of their RAID cards where that's what seems to have happened).
So if I put the optical drive in an external enclosure and connected it via eSATA, I'm still going to have the boot issue, aren't I? I'm thinking if I went that route I'd have to add a single SSD and use a software 'cloner' package to migrate the OS onto the SSD before pulling the optical drive out, etc.
It will depend on whether or not the card has EFI firmware. For most, this will be a problem. The Highpoint card is the only eSATA model that will boot (the 3.0Gb/s version, BTW; the 6.0Gb/s version is driver support ONLY).
That's another reason why the 5.25" USB enclosure is a good solution to your problem.
It's also the cheapest.
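On the cloning step you mentioned: you don't strictly need a third-party cloner, as OS X ships with asr (Apple Software Restore). A minimal sketch, with placeholder volume names (double-check them first; --erase wipes the target):

```python
# Sketch: clone the current OS volume onto the new SSD with OS X's
# built-in asr. Equivalent Terminal command:
#   sudo asr restore --source "/Volumes/Macintosh HD" --target "/Volumes/SSD" --erase
import subprocess

subprocess.check_call([
    "sudo", "asr", "restore",
    "--source", "/Volumes/Macintosh HD",  # placeholder: current boot volume
    "--target", "/Volumes/SSD",           # placeholder: the new SSD (erased!)
    "--erase",
])
```

Afterwards, set the SSD as the startup disk in System Preferences before pulling the optical drive.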
I plan on running Windows 7 within VMware Fusion, and I don't see myself using Firewire on the Mac Pro for anything other than my CF card reader. If it's going outside and it's a hard drive or SSD, it will be via eSATA.
A stripe set created under OS X doesn't play well with multiple OS installations, so you'd most likely need to keep Windows on a separate disk (not sure how well it does or doesn't work with a VM in this instance, but it definitely won't work with Boot Camp).
I'm thinking it might just be a whole lot easier to get one larger SSD and mount it in the second optical bay. It would keep me from having to move the optical drive to an external enclosure or do some funky cabling and setup. Granted, I'd lose some performance, but I'm not sure how much that would actually matter in practice.
This would be easier, but may not be the most cost effective. It will depend on the exact disks you're considering, plus the cost of the USB enclosure for the optical drive if you build a striped set of smaller SSDs.
I don't see myself ever buying a lot of SSDs to store my data until the price drops a TON...
Most users have the same exact sentiment.
Thanks for the responses, this is really helpful... It's amazing how much more annoyingly complex it is to crank up performance on a Mac compared to a Windows system. It seems Apple isn't at all in the business of making it easy for the small group of people who want to get more performance or storage out of their Mac Pro than Apple ships it with...
Things like the ICH apply to any Intel-based system that doesn't have additional disk controllers to help distribute the load (the ICH is the chipset's I/O Controller Hub, which hosts the onboard SATA ports, and it tops out at roughly 660MB/s in practice).
But MPs have a tendency to require special adapters, and have cabling issues you don't have to deal with on PCs. Try looking at the threads involving RAID cards and the internal HDD bays to get an idea of how ugly it can get....
