That's a great idea in theory, but probably not practical for a number of reasons. Some jobs need to stay on local drives longer than others, file naming conventions need to be double-checked, the old incremental saves and workups that are no longer needed have to be trashed, etc.
I know it's not always possible, so it was just a thought to save some time and aggravation (figured you could set the automation to run late, based on worst-case completion times, so you didn't have to do it by hand).
Yes, 8 drives is correct: 4 SSD's on the Areca card, and the drives in Bays 1-4 also on the Areca card. The only other drives will be the external 2 or 4 bay units currently under discussion.
I'd put 7x on the Areca, and 1x (OS/applications drive) on the ICH.
The ARC-1880 can boot an array or single disk (you'd have to flash it with the EFI firmware file). There's a little more work to do this (the flashing, cloning, and perhaps a tad more physical relocation work), but most importantly, on the remote chance the card dies, you can't boot the system (which tends to make a major difference in the time and effort spent getting the system back up and running again).
I see what you are pointing to; all the stuff mentioned really underscores the need to do some initial testing on one workstation first. If we assume that 64GB of RAM will be sufficient, and I never hit the scratch disks, then the large-volume SSD's are wasteful spending.
I'm thinking that 64GB is overkill, not insufficient. From the information provided so far, I suspect 32GB would be quite enough (8x DIMM slots filled), perhaps even 24GB (triple channel per memory controller), which would let you stay with UDIMM's, which are definitely cheaper (rough sketch below).
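To make the slot math concrete, here's a quick sketch. The layout is my assumption (a dual-CPU Nehalem/Westmere-era board: 2 memory controllers, 4 slots each, triple channel per controller), so adjust it to whatever the actual systems use:

```python
# Rough slot math (layout is an assumption: 2 memory controllers x 4 slots,
# triple channel per controller, as on a dual-CPU Nehalem/Westmere-era board).
configs = [
    ("6 x 4GB UDIMM", 6, 4),   # 3 DIMMs per controller -> triple channel kept
    ("8 x 4GB UDIMM", 8, 4),   # all 8 slots filled -> channels unbalanced
    ("8 x 8GB RDIMM", 8, 8),   # the expensive route
]

for name, count, size_gb in configs:
    per_controller = count // 2
    triple = (per_controller == 3)
    print(f"{name}: {count * size_gb}GB total, "
          f"triple channel: {'yes' if triple else 'no'}")
```

That's where the 24GB figure comes from: it's the largest capacity on cheap 4GB UDIMM's that still keeps 3 DIMMs per controller.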
However, that's not something I'm willing to assume without some testing.
Which is precisely why I mentioned it.

RAM, especially 8GB RDIMM's, as well as SSD's, are still a tad on the expensive side.
Granted, overkill will get you some nice systems, but your employers won't be too happy if they discover that thousands of dollars were spent needlessly. Given the current job market... well, you get the idea.
Just the other day, I had a 12GB image and a 5GB image open at the same time. If I do run out of RAM, the SSD's will be nice to have. The thinking behind getting the large-volume SSD's, rather than the smaller, cheaper ones, is that we might be hammering those things 10 hours a day for 3-4 years. More cells to spread out the wear and tear over time. I know the OWC RE versions come with a five-year warranty, but I'm not sure how that would work. SSD's slowly degrade over time; it's not like an HD that suddenly dies and is gone for good. I'd hate to go back and ask for more funding in a year and a half because the SSD's I talked everybody into aren't working out.
The RE versions have more memory available for over-provisioning (Pro units = 7%, RE = 28%), so given you're looking at 3-4 years, I'd go with the RE versions. What you need to keep in mind is that they're still MLC-based, which is aimed at consumer usage. SLC is aimed at enterprise usage, but it's also more expensive per GB, and those drives aren't as common.
What I can't pin down is which you'd really need, as I've no idea what your scratch usage will actually be (how much capacity per day is needed, not hours). Since the scratch space has the entire array to use, wear leveling will rotate writes through every cell before any is written to again, and so on. So if you use, say, 2x RE 50GB units (100GB total capacity), and only write a fraction of that per day, you'd be fine for 3 years I think. It's when you go well over that per day (say 1TB per day) that you're cutting into the lifespan. Unfortunately, there's no hard real-world data out there, so it's a bit hard to predict ("back of envelope" calculations are all that's possible right now).
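To show what I mean by "back of envelope," here's a rough lifespan estimate. Every figure in it is an assumption (MLC-typical P/E cycles, a guessed write amplification factor), not vendor data, so treat the output as an order-of-magnitude guide only:

```python
# Back-of-envelope SSD scratch lifespan (all figures are assumptions:
# MLC-typical P/E cycles and a guessed write amplification factor).
raw_capacity_gb = 2 * 50       # e.g. 2x RE 50GB units for scratch
pe_cycles = 5000               # assumed program/erase cycles per MLC cell
write_amplification = 1.5      # assumed controller overhead

# Wear leveling spreads writes over every cell before reuse, so total
# endurance is roughly capacity x P/E cycles / write amplification.
total_endurance_gb = raw_capacity_gb * pe_cycles / write_amplification

for writes_per_day_gb in (50, 1000):
    years = total_endurance_gb / writes_per_day_gb / 365
    print(f"{writes_per_day_gb}GB/day -> ~{years:.1f} years")
```

The shape of the result is the point: at a fraction of the capacity per day you're comfortably past 3 years, while at 1TB per day you're under a year.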
This is why the RAM capacity is critical: if you've enough, you won't need to use the scratch space that often, if at all (ideally, you want the efficiency rating under Photoshop at 100% = scratch never used). I still think it's a good idea to have it (for the conditions where the RAM capacity that covers, say, 99.9% of your jobs gets exceeded). RAM is also faster, so if you get a sufficient RAM capacity, I think you'll be fine with the RE versions.
BTW, I'd plan on a 3-year MTBR for your drives (SSD or enterprise mechanical in this case), and it's a good idea to keep a spare on hand. Nothing sucks more (and makes you look like an idiot) than a system that's not running because you're waiting on replacements for parts you spec'd out.
Regarding the scratch speed of the 3 SSD's being way higher than the HDs I will be reading from and writing to, I see your point, and it's a good one. But consider that I might open an image once in the morning, then hit the scratch disk about 100 times (if my history palette is set to save 100 iterations) while I'm running filters, rotating, merging, brushing, etc. I might hit the scratch drive constantly during retouching, but will only open the file once, and save every 20 minutes or so.
See above.
You need to test out the average usage (software, including filters,... on a typical large file) to get your memory usage requirement for a typical job. If the efficiency is under 100%, you can use that to calculate what you'd need, then round up to the configuration that best fits the new systems (see the sketch below). You'd still add a scratch space, but you don't need to go crazy, and there are other benefits in performance and probably cost savings as well, as SSD's wear out; RAM lasts much longer.
I think your existing equipment can give a good idea, and can let you nail down your memory configuration (i.e. do you need 4GB UDIMM or 8GB RDIMM sticks to get the job done without overspending?). Doing this now will save you headaches, time, and embarrassment down the road.
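Here's the kind of calculation I mean. The formula is my own rough heuristic, not anything from Adobe, so use it only to get a ballpark before rounding to a real DIMM configuration:

```python
# Rough heuristic (my assumption, not an Adobe formula): if Photoshop's
# efficiency reading sits under 100%, scale the RAM it was allowed to
# use up by the shortfall, then round up to a DIMM config that fits.
def estimate_ram_gb(allocated_gb: float, efficiency_pct: float) -> float:
    """Estimate the RAM needed to keep efficiency near 100%."""
    return allocated_gb / (efficiency_pct / 100.0)

# e.g. 16GB allocated to Photoshop, efficiency hovering around 70%:
print(f"~{estimate_ram_gb(16, 70):.0f}GB")  # ~23GB -> round up to 24GB (6 x 4GB)
```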
Yes! Effing awesome suggestion!

I will do this. Some might think it's like being a kid in a candy store, being able to get whatever you want, but it's quite a stressful process for me. I've learned so much in a short amount of time by hanging out here. Three months ago, I was like "what's a RAID card?" (Giggle) Mistakes can be expensive, and there's a lot of pressure on me to do this right. Especially since the IT guy and I are not seeing eye to eye on anything. He knows servers, but not much about Macs. I have to sell him on configuration concepts, and a big freaking budget, without making him look like an idiot, or making myself look like an idiot by setting up something that doesn't work, or is a misplaced allocation of funds.
Try and see if you can approach him on how to evaluate memory requirements, RAID levels (if you haven't already),... Basically, make the guy an ally, not an enemy. It's better for the working relationship down the road too (perhaps the concept of both of you learning would be an approach; I'll presume for the moment that he knows more than just networking, and just has a "black hole" in his knowledge when it comes to a Mac; not all do though).
Sounds good. My first choice would have been the RE3s or RE4s as well, but the Raptors would be a nice middle-ground compromise for my best guy, who really wanted 15K drives. How can I sell him on the RE3s instead of the Raptors?
Explain that the VR's are good for random access, but standard 7200RPM SATA drives are fine for sustained throughput, and they have a better cost/performance ratio for this kind of work as a result; it is a business after all... (the VR's may be a tad faster than the RE3s for sustained transfers as well, but the RE3s are cheaper per GB).
Let him dig up the proof to support his argument if there's logic to it (i.e. he can prove that the independent benchmarks for the VR's are sufficient to warrant using those instead of the REx models).
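If it helps to put the cost/performance argument in front of him as numbers, something like this works. All the prices and throughput figures below are made-up placeholders; swap in current street prices and independent benchmark results before showing him anything:

```python
# Cost/performance framing (all prices and throughput figures below are
# made-up placeholders; swap in current street prices and independent
# benchmark numbers before using this for real).
drives = {
    "VelociRaptor 300GB": {"price": 200.0, "capacity_gb": 300, "sustained_mbs": 120},
    "RE3 1TB":            {"price": 160.0, "capacity_gb": 1000, "sustained_mbs": 110},
}

for name, d in drives.items():
    print(f"{name}: ${d['price'] / d['capacity_gb']:.2f}/GB, "
          f"{d['sustained_mbs'] / d['price']:.2f} MB/s per dollar")
```

Even with the VR's small edge in raw sustained speed, the $/GB and throughput-per-dollar columns are what a business case turns on.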