Primary Use: Creative Photo Retouching & Illustration. Sometimes a single image can grow to 8 gigs or more in the working stage before the layers are simplified or flattened upon client approval.
Primary Speed Bump: Incremental saves throughout the day. With large files, a save (on my G5) can take 20-30 minutes. That puts me in a catch-22: I'm stressed that a crash will evaporate all the work I've done in the last hour, but a save might cost me 30 minutes.
Configuration & Upgrades:
24GB RAM (3 x 8GB, 4th slot empty): $1,095.99
Optical Bay (OS + Apps): 60GB OWC Extreme Pro SSD, $179.99
Bay 1 (Current Job + Scratch): first of two 50GB OWC SSDs in RAID 0, $209.99
Bay 2 (Current Job + Scratch): second of two 50GB OWC SSDs in RAID 0, $209.99
Bay 3 (Completed Jobs): 1.5TB 7,200 rpm HDD pulled from old G5
Bay 4 (Completed Jobs): 1.5TB 7,200 rpm HDD pulled from old G5
External 1 (Backup): existing OWC Mercury Elite 1.5TB FireWire
External 2 (TM Backup): existing OWC Mercury Elite 1.5TB FireWire, with a small partition for a bootable clone of OS + Apps
Thanks in Advance! - JB
Unfortunately, I wouldn't go this route.
Consumer-grade SSDs (MLC-based) are not meant for high-write conditions, which is exactly what scratch usage is (I've posted on this before, so if you're interested in the specifics, you may want to do a search on MR).
You'd be better off using mechanical disks in a RAID configuration than SSDs.
And as your post indicates you're earning a living at it, a stripe set for working data is not a good idea either, given the time spent fixing a problem, or worse, the lost data that comes from an insufficient backup system. It's not just having the disks for backup, but the frequency at which those backups are made: the longer the interval, the more work has to be re-done to get the missing data back.
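To put rough numbers on that backup-frequency point, here's a back-of-the-envelope sketch. The ~30 minute save figure is from your post; the workday length and save counts are just assumptions for illustration:

```python
# Rough sketch of the save-frequency trade-off: time spent saving vs.
# worst-case rework after a crash. Figures are illustrative assumptions.
SAVE_COST_HOURS = 0.5   # ~30 min per incremental save (from the original post)
WORKDAY_HOURS = 10      # assumed working day

for saves_per_day in (1, 2, 4, 8):
    interval = WORKDAY_HOURS / saves_per_day      # hours between saves
    overhead = saves_per_day * SAVE_COST_HOURS    # time spent waiting on saves
    worst_loss = interval                         # a crash costs everything since the last save
    print(f"{saves_per_day} saves/day: {overhead:.1f}h spent saving, "
          f"up to {worst_loss:.1f}h of work lost per crash")
```

The point being that either way you pay; the fix is making saves cheaper and backups safer, not choosing between them.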
SLC-based disks would be sufficient, but the capacities are low and they're still quite expensive (i.e. the Intel X25-E models). For those funds, you have better options (redundancy, still better write-cycle handling, and higher sustained throughput for similar money).
Another thing to consider is that the ICH (SATA controller) in the system has a throughput limit of ~660MB/s. You'd throttle with 3x of those SSDs (~250MB/s each), as simultaneous access would exceed that ceiling.
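The arithmetic as a quick sketch (both figures are the approximate ones above, not measurements):

```python
# Sketch of the ICH ceiling math: n SSDs accessed simultaneously vs. the
# shared SATA controller limit. Approximate figures from this post.
ICH_LIMIT_MBS = 660   # ~throughput ceiling of the ICH (SATA controller)
SSD_MBS = 250         # ~sequential throughput of one of those SSDs

for n in (1, 2, 3):
    demand = n * SSD_MBS
    status = "throttles at the ICH" if demand > ICH_LIMIT_MBS else "fits"
    print(f"{n}x SSDs -> {demand} MB/s demanded: {status}")
# 3x SSDs -> 750 MB/s demanded: throttles at the ICH
```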
Option 1:
- Use an SSD as an OS/applications disk in the empty optical bay (boot/applications disk)
- Substitute mechanical disks for the SSDs in the stripe set (better suited for scratch)
- Use the other mechanical disks as the primary data location (working data, as it's safer than a stripe set)
- External backups as configured
The advantage here is lower cost (it's cheaper than your original configuration) and better safety for your working data. But it's nowhere near where you should be, IMO, for earning a living with the system (for a hobbyist it would be acceptable, as the data's not critical).
Option 2:
- Use an SSD as an OS/applications disk in the empty optical bay (boot/applications disk)
- 4x mechanical disks in HDD bays 1 - 4, in a RAID 10 configuration (speed is ~that of a 2-disk mechanical stripe set, and it can survive up to 2x failed disks, so long as they're in different mirrored pairs, before data is gone; there's a quick sketch of this after this option's summary). This is used for scratch and primary data (working files)
- External backups/archival locations (can use single disks or JBOD; an eSATA card and possibly a Port Multiplier enclosure would be less expensive over time, as you just add disks)
This is still inexpensive (especially for what you get), as you now meet the minimum performance requirement and get some redundancy as well. But the performance isn't as good as it could be.
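A rough sketch of why the Option 2 array behaves like a 2-disk stripe set (the per-disk throughput figure is just an assumption for illustration; real numbers depend on the drives):

```python
# Rough throughput model for the 4-disk RAID 10 in Option 2.
DISK_MBS = 120                 # assumed sequential MB/s of one 7,200 rpm disk
PAIRS = 2                      # 4 disks = 2 mirrored pairs, striped together

stripe_mbs = PAIRS * DISK_MBS  # striping scales with the number of pairs, not disks
print(f"~{stripe_mbs} MB/s sequential, i.e. ~a 2-disk stripe set")
print("survives 1 failed disk always; 2 if they're in different mirrored pairs")
```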
Option 3:
- Use an SSD as an OS/applications disk in the empty optical bay (boot/applications disk)
- Use a proper RAID card, and use either RAID 5 or 6 (a proper hardware card can handle the write hole issue associated with parity-based arrays; software implementations, such as the Highpoint 2314, cannot). Specifics can be gone over if you're interested in this configuration, as there's more information and more options to consider (internal, external, mixed = hybrid, disk count, future expansion requirements, OS requirements, boot requirements, ...).
- External backups/archival locations (can use single disks or JBOD; an eSATA card and possibly a Port Multiplier enclosure would be less expensive over time, as you just add disks)
This is the best way to go IMO, and what I was referring to when I mentioned "better options for the funds" instead of using SLC-based SSDs.
You've more options and expansion capabilities with this route as well (i.e. a hybrid array = internal + external disks). This is why the port count matters: the disk count determines which arrays are possible (5/6, or even nested parity 50/60, though I doubt you'd need to go that route). If you've sufficient ports, you can increase capacity and performance just by adding disks (really nice, and the redundancy is a necessity given what you're doing with the system IMO).
The ARC-1222 or ARC-1222x are good cards to take a look at, as is the ARC-1680 family (a 12+ port card may be needed for future expansion, depending on how much capacity you plan to add). There's an internal adapter kit that will allow the HDD bays to work with internal cards, and a special cable that can take an internal port to an external enclosure. If you're more interested in an external-only solution, you need to be looking at a MiniSAS (SFF-8088) compliant enclosure (example: the Stardom SohoTank ST8-U5), with one external cable per MiniSAS port (each port handles 4x disks).
A few notes:
With mechanical disks, you want to stay at 50% or less full for performance reasons: the inner tracks are the slowest on the disk, and once you get into that area, performance can drop below your minimum requirements. This is particularly worth noting on a 2x disk stripe set, or even a 10 array, which only offers half the total capacity as a trade-off for the redundancy. In the case of a 10, you'd probably be best served by using 2TB disks.
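Worked numbers for the 50% rule on the 4-disk 10 array with the 2TB disks suggested above (the 50% figure is a rule of thumb, not a hard limit):

```python
# Capacity budget for a 4x 2TB RAID 10 under the 50%-full rule of thumb.
DISK_TB = 2.0
N = 4

usable = (N // 2) * DISK_TB   # RAID 10: mirroring halves raw capacity
fast_zone = usable * 0.50     # stay <= 50% full to avoid the slow inner tracks
print(f"raw: {N * DISK_TB}TB, usable after mirroring: {usable}TB, "
      f"practical working space: ~{fast_zone}TB")
# raw: 8.0TB, usable after mirroring: 4.0TB, practical working space: ~2.0TB
```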
With a RAID card (i.e. Areca), you need to run enterprise-grade disks if you want it to work (consumer disks are unstable on them, so don't do it; their error-recovery timings are wrong for RAID cards). To head off potential problems, it's advisable to use the HDD Compatibility List to determine which drives to use (not all RAID card makers offer these, but Areca does, and it's one of the reasons I like their products - saves a lot of hassle and aggravation).
You've not mentioned needing a Windows disk, but if you create an array under OS X, you won't be able to boot Windows off of the ICH (the SATA controller on the logic board). This is fixable via a separate SATA controller and a cable assembly (you'll have to make this yourself by splicing together 2x cables that are available). It's not expensive (card + cables), and not hard to do.
The RAID wiki is worth a good look as well, particularly the 10/5/6 levels.
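For quick reference alongside the wiki, here are the standard usable-capacity formulas for the levels mentioned, as a sketch (4x 2TB disks used as the example):

```python
# Usable capacity for n identical disks of size s (TB) at each RAID level
# discussed above. These are the standard formulas, not card-specific.
def usable(level: str, n: int, s: float) -> float:
    if level == "0":  return n * s          # stripe: no redundancy at all
    if level == "10": return (n // 2) * s   # mirrored pairs, striped together
    if level == "5":  return (n - 1) * s    # one disk's worth of parity
    if level == "6":  return (n - 2) * s    # two disks' worth of parity
    raise ValueError(level)

for level in ("0", "10", "5", "6"):
    print(f"RAID {level:>2}, 4x 2TB: {usable(level, 4, 2.0)}TB usable")
# RAID 0: 8.0TB, RAID 10: 4.0TB, RAID 5: 6.0TB, RAID 6: 4.0TB
```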
Also, you'd need to run a good UPS as well (an Online type is really what you should be using, though a Line Interactive can be substituted in a budget pinch, so long as it has a step transformer). BTW, Online types can be had refurbished to save on funds. A UPS isn't just an option with RAID, it's a necessity (you'll get burnt in terms of lost data if you try running without one).
I know this is a lot to read, and I hope it's not too confusing, but it should help.