A Common Image and My Read Write Testing

HonoMaui -

My new 6 core has arrived! And I've already been working on some live jobs over the weekend. Since the major speed bump for me, and many others who work on big files, is read/write times, I created a common test file that would be fairly simple for any PS user here to recreate.

1. Open a new doc, 74.7" wide x 80" high at 300 ppi, CMYK (SWOP Coated v2 in my case). That should give you one layer at exactly 2 GB.

2. Filter -> Noise -> Add Noise -> Gaussian, 100%

3. Duplicate that layer 3 times to give you a total of four.

4. Set the top layer to Multiply, the next one down to Screen, and the next one down to Overlay.

5. Don't flatten. Save as a .PSB (Large Document Format); don't try saving as a PSD, it won't work. The final file should be 8.18 GB.
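If you want to sanity-check the sizes before building the file, the arithmetic is simple. A rough sketch in Python (assuming 8-bit CMYK and ignoring layer/header overhead, which is why the saved PSB lands a bit off the raw pixel total):

width_px  = round(74.7 * 300)    # 22,410 px
height_px = 80 * 300             # 24,000 px
bytes_per_px = 4                 # CMYK = 4 channels x 8 bits each

layer = width_px * height_px * bytes_per_px
print(layer / 2**30)             # ~2.0 GB per layer, matching step 1
print(4 * layer / 2**30)         # ~8.0 GB of pixel data across the four layers,
                                 # in the ballpark of the 8.18 GB saved PSB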

My Test Results
Write: 23 minutes
Read: 11 minutes, 20 seconds

My Mac: This is from my new 6-core, straight out of the box, no Apple upgrades, i.e. 3 GB RAM, Snow Leopard. I put in the 7200 RPM drives from my G5, so I have one boot drive, one scratch, one data, and my Time Machine drive.

My Photoshop Prefs:
50 History states
Cache levels at 4
PS uses 80% RAM
500 GB dedicated scratch
Bigger Tiles plug-in enabled
Maximize PSD and PSB compatibility: Never

If you could spare the time, I'd really be interested in finding out your read and write times from your RAID 6 setup. I really appreciate all the insights and real-world experience you've been sharing here.

Thanks to all, this place effin rocks!

Julian

EDIT: Feb 8, 2011. Just tested my six-core with 32GB of RAM. I'm now able to save the same 8.18GB file in 1:32 (one minute, thirty-two seconds). This is a HUGE revelation: RAM makes a huge difference in write speeds. Not often discussed!
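For anyone comparing their own machine, here is the rough effective-throughput math behind those two times (a back-of-the-envelope Python sketch, treating the file as a flat 8.18 GB):

file_mb = 8.18 * 1000                  # ~8,180 MB

print(file_mb / (23 * 60))             # ~5.9 MB/s   write with 3 GB RAM (23 min)
print(file_mb / 92)                    # ~89 MB/s    write with 32 GB RAM (1:32)
print(file_mb / (11 * 60 + 20))        # ~12 MB/s    read with 3 GB RAM (11:20)
# same disks in both write tests, so the ~15x speedup points at RAM
# (far less scratch traffic during the save), not raw disk speed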
 
Will do. It might be a day or so till I get that extra time :) but I can do it for ya.

I have to catch up with that work thing
 
Anyone have thoughts on the config below? I really just use CS5 for photography projects: process my images, layered, etc., up to 1-2GB, then move all the finished work externally.

2 striped 100GB SSDs, then mirrored internally onto another pair
(mounted in Icy Docks)

in the optical bay using the various 2x drive holders

1 200GB OS/boot drive with all apps
1 2TB RE4, or another SSD, for temporary archive from the working pairs.

This requires all external archive to be mirrored in some way, but what I do is have it all on one large 2TB or 4TB, and also on a "transport" drive that I take to the other studio, where it is manually copied onto the other large eSATA drives or the server.

This way the work always exists in two locations at all times. Before, I was always counting on it staying alive until backed up externally.

That's why I like the idea of internal mirroring. The only reason for the stripe, which will also be new for me, is write speed. Reads have been quick enough, but saving my large finished work files takes forever.

Why would anyone mirror their OS? I guess for the same reasons, but luckily I've never had a boot drive fail in 10 years....
 
A bit confused?

The two RAID 0 SSDs? Is this your working drive, the one that holds your working files?

1 2TB RE4? Just get a Black? Is this then the backup for your RAID 0 SSDs?

With files this big, where is your scratch for PS?
And how much memory?
Writing to SSD will help, but not that much; check out what works at macperformanceguide.com for huge files :)
 
OK, I'm confused too. Can't the scratch drive have anything else on it? I will have 24GB RAM, so I don't anticipate needing scratch space much. I can go up to 32GB RAM if that removes the need for scratch space.

I was planning to have the image files I'm working on each day on the two 100GB SSDs. The scratch space would also be there, but those are also then mirrored; only mirrored, of course, because that's where the work exists until it's copied to the 2TB and externally.
 
My files are currently not any larger than 5700x7500 pixels or so. This will increase when the 1Ds Mk IV comes out, or with the inclusion of a digital back.

I save all my layers as a PSD file, but maybe that should be a TIFF with layers instead?

The PSD with all image and adjustment layers ends up around 1.8GB or less. This takes 2 minutes to open on my old G5, and scratch says "2.65GB/2.38GB".

What the heck does that mean anyway? If I have 24GB RAM in the new Mac Pro, won't that be more than enough to never need scratch space?
 
The main thing that's slow for me is the save time, and I like to save a few times during the workflow: first after bringing in all image layers and aligning them (often manually), second after all adjustment layers/work are done, then often a third time when some last little thing is adjusted.

I save, then flatten and save in 16 bit, then 8 bit and sharpen for client use. I then delete the TIFFs used for the composite. All raw files are on their own eSATA archive used only for raw files.
 
For fun and info, in PS go up to the Window menu and choose Info!
Then, with the little down triangle in the upper right of that window, choose Panel Options,
then check off Scratch Sizes, Efficiency and Timing.

You can see what your computer is doing.

The Timing is just that: how long what you last did took.

Efficiency is basically memory: if you drop below 100% you are sending info out to scratch. It's basically the last thing you did, not the total, so keep an eye on it as you do things.

Your Scratch Sizes:
the left number is the memory being used in total (all open docs, not just the current one, so one at a time is better)
the right is the amount of RAM available

Basically it's going to show you your scratch info, like when it's going and how much, etc..

These tools, as you are working on things, will show you how your setup is doing ;)

In your case your images are taking up more memory than is available.
I bet if you do things and check your Efficiency you will drop below 100% at times :)
You want that number on the left to be lower than the one on the right, basically :)


So do a few things on an image and notice the change after doing some things (adjustment layers, etc..)
This shows you how your file's scratch needs change as you work.

Again, you are working with huge files; under 500 megs or so this stuff is not a big deal. It's fun to see and watch though, as it gives you a better understanding of PS :)
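Here is a pretend little example of reading those two numbers, using the 2.65GB/2.38GB you posted (just Python to show the idea, not anything PS actually runs) :)

scratch_used  = 2.65   # GB, the left number: what all open docs are using
ram_available = 2.38   # GB, the right number: the RAM Photoshop has to work with

if scratch_used > ram_available:
    print("PS is spilling to the scratch disk, Efficiency will dip below 100%")
else:
    print("everything still fits in RAM")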



Does this make sense and help :)

I would say Google
"CS5 panel options efficiency and scratch size"
or something? And don't take my word for it, always double check things :)

Also learn a bit about the Purge command (under the Edit menu). After complex actions I often put a purge in!

Sometimes I put a "before" snapshot at the beginning and then a purge, so if I want to jump back it's easy and quick :)
 
If I have 24GB RAM in the new Mac Pro, won't that be more than enough to never need scratch space?

The second you open PS it allocates a small amount for scratch !! Think of it as a getting-ready-in-case-it-needs-it step !!!





I am building another new Pro for our layout stuff, unless I decide to take it over :)

After waiting a bit to see some real-world times etc., I chose the 3.2 4-core and am putting 24 gigs in it, with the thought that I will check timings and such and decide if more is needed? But I doubt it :)

When I recommend doing the math (how much time vs. how many documents are open, etc.)
I still say the 6-core makes sense for many; one has to decide on something :)

My math is that this machine runs a layout program that puts out layered PS files in a batch when it's all done,
so it's not a big deal to save a few seconds, as other things can be done in the meantime.

On my machine I work image to image, so every step is more critical !!
Just want to say this last part in case others read my other posts and think I am contradicting myself when I recommend someone get the 6-core :)
 
OK, I'm confused too. Can't the scratch drive have anything else on it? I will have 24GB RAM, so I don't anticipate needing scratch space much. I can go up to 32GB RAM if that removes the need for scratch space.

I was planning to have the image files I'm working on each day on the two 100GB SSDs. The scratch space would also be there, but those are also then mirrored; only mirrored, of course, because that's where the work exists until it's copied to the 2TB and externally.

Tackled this last, as the other things should answer this as a whole :) If not, post up and I will try to answer what I know :)
 
I do not see much benefit to SSD drives now. If I have enough RAM, I won't be needing scratch disk space, or at least not a dedicated SSD for it.

Secondly, the current on-board SATA controller is only 3Gb/s, but there are HDDs like the 10K RPM 600GB WD VelociRaptor that are 6Gb/s.

Some people do argue over which type of drive, SSD vs HDD, is more reliable over long-term, heavy use. No moving parts in an SSD seems the obvious winner, but it may need to be reformatted every year to stay in good shape.

I suppose one plus of the SSD, though, is its read/write performance as it becomes full. But then again, does it matter? Who would use a small SSD for an archive that is not needed very often?

Can I pair 2 SSDs into a RAID 0 as described on the first page of this thread and then have a 2nd pair that is redundantly a mirror of the first pair? RAID 1, right? Just like if you have RAID 1 of 2 HDDs in case one fails.

Is RAID 0 combining 2 SSDs into one space the same as "striping" across multiple SSDs? For reference, like in diglloyd's read/write test of 2, 3 and 4 SSD stripe sets.

I can then have 1 2TB RE4 internally, which is also manually copied to an eSATA 2TB to be safe.
 
I do not see much benefit to SSD drives now. If I have enough RAM, I won't be needing scratch disk space, or at least not a dedicated SSD for it.
It's my understanding that PS is still more likely to go to scratch.

As it happens, you can get a mechanical stripe set put together on the cheap, but given the OWC 40GB unit has the same or better sustained throughput and is only $100, it may still be a consideration as a scratch location (keep in mind the 1 - 1.5 year MTBR; I posted on this in the other thread where you're discussing RAID possibilities).

Secondly, the current on-board SATA controller is only 3Gb/s, but there are HDDs like the 10K RPM 600GB WD VelociRaptor that are 6Gb/s.
Keep in mind however, that mechanical disks can't even saturate 3.0Gb/s. Recent SSD units can (i.e. C300 claims 330MB/s sustained transfers, and SATA 3.0Gb/s tops out at 270 - 275MB/s).

The reason the mechanical disks are going to 6.0Gb/s is more to do with parts availability (semiconductors used on the drive PCB) and marketing than anything else.
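If you're wondering where the 270 - 275MB/s figure comes from, the back-of-the-envelope version looks roughly like this (Python; the 8b/10b encoding is how SATA actually works, the ~10% protocol overhead is just a rough assumption):

line_rate_bps = 3.0e9                              # SATA 3.0Gb/s line rate
payload_MBps  = line_rate_bps * (8 / 10) / 8 / 1e6 # 8b/10b encoding leaves 300 MB/s of payload
usable_MBps   = payload_MBps * 0.9                 # minus framing/protocol overhead (rough guess)
print(payload_MBps, usable_MBps)                   # 300.0, ~270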

Some people do argue over which type of drive, SSD vs HDD, is more reliable over long-term, heavy use. No moving parts in an SSD seems the obvious winner, but it may need to be reformatted every year to stay in good shape.
Both are valid, as it depends on the usage pattern. Mechanical components can and do break, but so do semiconductors.

The difference is, SSD's are both more reliable and faster for random and sustained throughputs (single disk) than their mechanical counterparts. But HDD's are better suited to high write conditions, which scratch space is.

There's also the cost per GB to consider. $100 can get you an SSD, but it's only 40GB, when that can get you at least 1TB on mechanical.

It all depends on the intended use.

Can I pair 2 SSDs into a RAID 0 as described on the first page of this thread and then have a 2nd pair that is redundantly a mirror of the first pair? RAID 1, right? Just like if you have RAID 1 of 2 HDDs in case one fails.
Go backwards. Create the mirror first (RAID 1) of each pair. Then stripe (RAID 0) the two mirrored sets you just created (OS X will allow you to do this).

But there is a potential problem, and in your case you'd see it if you use another SSD as a boot/applications disk (described in the other thread: the ICH has a throughput limit of ~660MB/s, and you'd be trying to push ~750MB/s, assuming each SSD can sustain 250MB/s; some can do more).
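Quick sketch of that bottleneck (Python; the 250MB/s per SSD is an assumed sustained figure, some units do more):

ich_limit     = 660               # MB/s, shared by everything on the on-board SATA ports
ssd_sustained = 250               # MB/s per SSD (assumption)

demand = 3 * ssd_sustained        # the SSD pair plus the SSD boot/apps disk, all busy at once
print(demand, "vs", ich_limit)    # ~750 MB/s demanded > ~660 MB/s available = bottleneck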

To solve this, you'd need a proper RAID card (fast enough to accommodate the SSD's) and an adapter kit to use the internal HDD bays. That will run you ~$450 for a SAS/SATA 3.0Gb/s unit, and $624 for a 6.0Gb/s model (both are 8 ports with a sufficient processor and bandwidth to handle current SSD's; but the latter unit can continue to be upgraded with newer, faster units in the future).

As it happens, using SSD's, you could get all of this internally.

But let's keep the RAID in one or the other threads to prevent confusion. ;)
 
I've been reading about so many different cards, internal using MiniSAS, external using eSATA II/III. However, I have not found any that have both eSATA and internal SATA 6Gb/s through the same card, but maybe that's the adapter you mentioned?

The Apple website actually has a RocketRAID eSATA 6Gb/s card now, 4 port, $259, and OWC only has the 4-port Sonnet E4P so far, or a 2-port 6Gb/s by NewerTech.

It sounds like I could set up with the 5 SSDs and, when I find the right card, get a little more out of that setup, but it will work OK without the card for now.
 
I've been reading about so many different cards, internal using MiniSAS, external using eSATA II/III. However, I have not found any that have both eSATA and internal SATA 6Gb/s through the same card, but maybe that's the adapter you mentioned?

The Apple website actually has a RocketRAID eSATA 6Gb/s card now, 4 port, $259, and OWC only has the 4-port Sonnet E4P so far, or a 2-port 6Gb/s by NewerTech.

It sounds like I could set up with the 5 SSDs and, when I find the right card, get a little more out of that setup, but it will work OK without the card for now.
Highpoint's products are mostly junk. The RR43xx series is OK, but their support is terrible, and I've a feeling that would be a problem for you (you still seem unfamiliar with RAID).

Nor do you want to use adapters with SATA signals, as the voltages are too low (SATA data signals are only 600mV, whereas SAS is 20V, which is what the adapters are designed for).

The adapter I was referring to was this (it allows you to use the HDD bays with an internal RAID or non-RAID HBA that uses an internal MiniSAS port, also known as SFF-8087).

Card Examples:

The newertech card was meant for 1x disk per port, and if you get the correct version, can also support Port Multiplier based enclosures (allows for up to 5x disks on a single eSATA port). It could be used for SSD's in a pinch up to 2x (throughput limit of the card is 500MB/s in a Gen 2.0 slot).
 
You've more options and expansion capabilities with this route as well (i.e. use a hybrid = internal + external disks in the array). This is why the port count matters: disk count affects the arrays possible (5/6, or even nested parity 50/60, though I doubt you'd need to go this route). If you've sufficient ports, you can increase capacity and performance just by adding disks (really nice, and the redundancy is a necessity given what you're doing with the system IMO).

The ARC-1222 or ARC-1222x are good cards to take a look at, as is the ARC-1680 family (a 12+ port card may be needed for future expansion, depending on how you grow capacity). There's an internal adapter kit that will allow the HDD bays to work with internal cards, and a special cable that can take an internal port to an external enclosure. If you're more interested in an external-only solution, you need to be looking at a MiniSAS (SFF-8088) compliant enclosure (example = Stardom SohoTank ST8-U5). You'll need one external cable per MiniSAS port (each handles 4x disks).

NanoFrog:

I'm leaning towards your option #3 at this point. Count me in for an Areca card if needed. Question: I'm wondering what you think about the OWC Mercury Elite-AL Pro Qx2. Could this work instead of the Stardom SohoTank ST8-U5 that you linked for me? The aptly named SohoTank seems like a big beast offering me a plethora of options and room for expansion, but (forgive me for getting all artsy on you) it's going to look downright fugly sitting next to my 6-core. How about using the OWC Qx2 four-disk array (in RAID 10) in the configuration shown below? I'm hoping you might do your typically masterful job of pointing out any speed bottlenecks or configuration inefficiencies with this setup.

-----------

On 32GB Ram
Four 8GB sticks,,, $1,500

-----------

Empty Optical Bay : Boot + Apps
OWC 100GB Extreme Pro RE SSD,,, $370
Currently using 25GB, leaves 75GB free.

-----------

Bays 1 - 4 : Scratch RAID 0
4x 640GB : Western Digital Caviar Green,,, $53 Each
SATA-II HDD 64MB Cache w/3yr Warranty

-----------

External 1-4 : Working Files, RAID 10
4x 1TB : OWC Mercury Elite-AL Pro Qx2, 3yr Warranty,,, $600
Currently using 1.1TB, leaving 900GB Free
http://eshop.macsales.com/shop/hard-drives/RAID/Desktop/
http://eshop.macsales.com/item/Other World Computing/MEQX2T4.0S/

-----------

External 5-6 : Back Up All, JBOD Span
Use existing 1.5 Tb OWC External FW Drives for Back up.
I currently have 1.1TB of Data, and plan to put 25GB of OS and Applications on a 100GB SSD
No need to back up RAID 0 Scratch Disks

-----------

On Scratch SSDs: I've given up on the idea of SSDs for scratch. I did some testing, and I created a 100GB scratch file on one of my big images in no time at all, about 20 history states. 400GB of scratch using a pair of premium OWC 200GB SSDs would cost me $1360. A pair of the lower-end 40GB OWC SSDs at $118 each is still enormously tempting, but sadly, not suitable for me. If I opt for 4 mechanical drives instead, it's WAY cheaper at $212 for 2.56TB. I also dig (in a very 70's muscle car kinda way) the idea of having the inside of the Mac being all engine: just boot, apps and scratch, with all my working data and backups strapped to the roof with externals.
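The cost-per-GB math that pushed me there, roughly (a Python sketch using the prices quoted above):

options = {
    "2x 200GB OWC SSD":    (1360, 400),      # (dollars, usable GB)
    "2x 40GB OWC SSD":     (2 * 118, 80),
    "4x 640GB mechanical": (212, 2560),
}
for name, (dollars, gb) in options.items():
    print(name, round(dollars / gb, 2), "$/GB")
# ~3.40, ~2.95 and ~0.08 dollars per GB respectively; mechanical wins by a mile for bulk scratch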

On 4 Scratch Disks, RAID 0
: Since I will have my scratch array separated from my working files array, there is no need to back up the 4x 640GB (2.56TB) scratch array. I don't know how the video editing folks use scratch space, but for my saving habits in PS it's just a temp scratch pad, something Photoshop reaches for when it runs out of available RAM, or to store some of my history states as I'm working on the file. In fact, 2.56TB exceeds my scratch volume requirements, and I'll probably take HonoMaui's advice and partition these down for a shorter stroke, and greater speed as well.

On RAID 5 Vs 10
: On Nanofrog's suggestion, I've looked at RAID levels 5, 6, and 10. Regarding RAID 5, I do like the idea of speed + redundancy with only 3 disks, and that seems like an elegant and economical fit for the 4 bays inside the box. You could boot from bay 1, and then use bays 2-3-4 for RAID 5, then put your scratch and backups on externals. I've tentatively settled on RAID 10 for now. I could be wrong, but from most accounts RAID 10 will give me faster writes than 5. I understand some of the write speed performance issues (RAID 5 vs RAID 10) have to do with sequential vs non-sequential data, the speed of the RAID card controller, available cache, I/O bandwidth, etc., but sifting through all these interconnected factors falls outside my area of expertise.

On 24GB Vs 32GB RAM
: It's official, the 6-core will accept 32 gigs of RAM using four 8GB sticks. It's also now common knowledge that using the 4th slot knocks the RAM down to single-channel mode. I'm going with 32GB because I'm assuming that the additional 25% gain in RAM will outweigh the 3-15% efficiency hit from single-channel mode. Quote: "OWC has confirmed that the 8GB modules do work in the 4/6-core 2010 Mac Pro! According to OWC, using 3 modules shows a ~15% memory bandwidth gain over 4 modules, so the configurations with a * at right are the optimal ones. Whether real-world tasks are affected by this small difference remains to be tested, but in past testing I never measured more than a 3% hit from using 4 modules instead of 3." --Mac Performance Guide
http://macperformanceguide.com/blog/2010/20100819_MacPro32--macpro-memory.html

JBOD / Partition Question
: Could I create a 100GB partition on one of my two existing external Firewire drives (before or after they are joined) for a bootable clone of my OS + Apps drive? Then I could have Time Machine use the remaining 2.9TB to back up 1.1TB of data. That would give me a longer term answer for both incremental TM backups and future expansion.

BTW, G5 vs. 6-Core: I've opened up and played with some of my previous jobs I did on my old G5, doing some typical retoucher-type moves. The stock 6-core with only 3GB RAM is WAY faster than my G5 loaded to the hilt with 8GB RAM. This is going to be really sweet.


Thanks again for your excellent suggestions! -- Julian
 
I have not forgotten you wanting me to do the tests :) just have to finish up a few things, might try to get to it tonight :)

Interesting on your 100 gig scratch :) and cutting the disks in half for short stroke?
But that's a good reason I always tell people to test and try things out with the Timing, Scratch Sizes and Efficiency themselves :) heeheh

I know for me the two 40s will be fine, and what I do is just point the next scratch in line to my RAID, so if I get overflow it moves on. But I don't work on large files like you anymore :) thank goodness, cause it was back in the G5 days :) SLLLOOOWWWWW
eat lunch, check, not done; go to beach, check, not done; eat dinner, check, OK next step !!!!

Sounds like a sweet setup.
My only question might be the RAID boxes? I will try to kick that test out for you so you can see what an Areca 1222x can do in relation to something ;)
I still think a true RAID card is going to spank those boxes big time, but NanoFrog can tackle that explanation :)
 
NanoFrog:

I'm leaning towards your option #3 at this point. Count me in for an Areca card if needed. Question: I'm wondering what you think about the OWC Mercury Elite-AL Pro Qx2. Could this work instead of the Stardom SohoTank ST8-U5 that you linked for me? The aptly named SohoTank seems like a big beast offering me a plethora of options and room for expansion, but (forgive me for getting all artsy on you) it's going to look downright fugly sitting next to my 6-core. How about using the OWC Qx2 four-disk array (in RAID 10) in the configuration shown below? I'm hoping you might do your typically masterful job of pointing out any speed bottlenecks or configuration inefficiencies with this setup.
The cards and enclosure gear listed was meant to be used for your PRIMARY working data.

The OWC Qx2 isn't that fast, and is really only suitable for backup given what you've posted.

As per external enclosures, there's other brands, and even other models from Sans Digital that would be better suited in terms of appearance I think.

There's even newer RAID cards out that could be an option for you, such as Areca's 1880 series (6.0Gb/s compliant).

But without exact details, I'm not sure how many ports you need, which will affect the card and enclosure selection, as well as whether or not you want/should use the internal HDD bays with the card. :confused: If you wish to go this route, let me know, and we'll go over it (every RAID solution is specific to the user's needs; there is no one size fits all). But it will allow you better performance and capacity usage vs. 10, and in the case of the ICH and Disk Utility, will keep the processing off of the system's CPU.

You can get past the SATA port limit in the MP by using non-RAID Host Bus Adapters, but it still requires a software implementation, and that means the system's resources (CPU, RAM) will be used to do the calculations. Not that big a deal for a 4x member set, but as it gets larger, so will the CPU cycles needed to run it. Just something to keep in mind.

You also need to realize that if you do go with a proper RAID card, the disks used will need to be enterprise models, such as RE3 or RE4 series from Western Digital (RE = RAID Edition). The reason is primarily the recovery timings, but they're also more rugged models (additional sensors and better specifications for things like MTBF and Unrecoverable Bit Error rates compared to their consumer counterparts).

Not a bad idea to use them in RAID, even if it's a software implementation such as Disk Utility.

On 32GB Ram
Four 8GB sticks,,, $1,500

Empty Optical Bay : Boot + Apps
OWC 100GB Extreme Pro RE SSD,,, $370
Currently using 25GB, leaves 75GB free.
These will be fine (nice that OWC did confirm the 8GB sticks will work). :)


Bays 1 - 4 : Scratch RAID 0
4x 640GB : Western Digital Caviar Green,,, $53 Each
SATA-II HDD 64MB Cache w/3yr Warranty

External 1-4 : Working Files, RAID 10
4x 1TB : OWC Mercury Elite-AL Pro Qx2, 3yr Warranty,,, $600
Currently using 1.1TB, leaving 900GB Free
http://eshop.macsales.com/shop/hard-drives/RAID/Desktop/
http://eshop.macsales.com/item/Other World Computing/MEQX2T4.0S/
Not sure what you're doing here, as the parity based array is meant to replace the 10 array for the primary working data.

Another important note, is that you want the scratch space to be no faster than the primary array, as it's a waste of funds and added complexity for no benefit (i.e. scratch read needs to be = primary write for ideal performance).

With a parity array, you can even place the scratch space on the array (done successfully before), as the array is fast enough to accommodate both tasks (and rugged enough to take the write cycles).

You can still leave the scratch space separate (nice actually), and even have the option of using SSD/s for it, so long as you're willing to accept a MTBR (Mean Time Between Replacement) schedule of 1 - 1.5 years (toss it, and replace with a new one). The $100 price tag on the OWC 40GB disks has made this possible (cheap and fast). With more expensive models, not so much, so this is a recent development, as other inexpensive SSD's were slow (not much different than their mechanical counterparts in terms of price).

External 5-6 : Back Up All, JBOD Span
Use existing 1.5 Tb OWC External FW Drives for Back up.
I currently have 1.1TB of Data, and plan to put 25GB of OS and Applications on a 100GB SSD
No need to back up RAID 0 Scratch Disks
This is where the OWC Qx2 will be a good thing to have (backups). And you're correct in realizing you don't need to backup scratch space.

On Scratch SSDs: I've given up on the idea of SSDs for scratch. I did some testing, and I created a 100GB scratch file on one of my big images in no time at all, about 20 history states. 400GB of scratch using a pair of premium OWC 200GB SSDs would cost me $1360. A pair of the lower-end 40GB OWC SSDs at $118 each is still enormously tempting, but sadly, not suitable for me. If I opt for 4 mechanical drives instead, it's WAY cheaper at $212 for 2.56TB. I also dig (in a very 70's muscle car kinda way) the idea of having the inside of the Mac being all engine: just boot, apps and scratch, with all my working data and backups strapped to the roof with externals.

On 4 Scratch Disks, RAID 0
: Since I will have my scratch array separated from my working files array, there is no need to back up the 4x 640GB (2.56TB) scratch array. I don't know how the video editing folks use scratch space, but for my saving habits in PS it's just a temp scratch pad, something Photoshop reaches for when it runs out of available RAM, or to store some of my history states as I'm working on the file. In fact, 2.56TB exceeds my scratch volume requirements, and I'll probably take HonoMaui's advice and partition these down for a shorter stroke, and greater speed as well.
2x of the 40GB disks may still be an option (if you're willing to accept the MTBR idea of 1 year, maybe 1.5 years on the outside), and it's less complex than 4x mechanical units for about the same money and performance (probably 100MB/s faster for the SSD's than mechanical, based on 2x SSD's vs. 4x mechanical running at 100MB/s each).

BTW, last I looked, the 40GB units were $100. May have expired though, not sure.

If you do go with mechanical, the short stroke partitioning will improve performance for you (no matter the member count). Nice little trick. ;)


On RAID 5 Vs 10
: On Nanofrog's suggestion, I've looked at RAID levels 5, 6, and 10. Regarding RAID 5, I do like the idea of speed + redundancy with only 3 disks, and that seems like an elegant and economical fit for the 4 bays inside the box. You could boot from bay 1, and then use bays 2-3-4 for RAID 5, then put your scratch and backups on externals. I've tentatively settled on RAID 10 for now. I could be wrong, but from most accounts RAID 10 will give me faster writes than 5. I understand some of the write speed performance issues (RAID 5 vs RAID 10) have to do with sequential vs non-sequential data, the speed of the RAID card controller, available cache, I/O bandwidth, etc., but sifting through all these interconnected factors falls outside my area of expertise.
Using any of the cards previously linked, you'd be fine with RAID 5. And by using a larger member count (remember, this is what the external enclosures linked are for), can tear 10 a new one. :eek: :p Same with RAID 6, though it's slower than 5, and will depend on member count vs. 10. As you scale up in member count, it will outrun 10 as well (especially if the 10 is run on the ICH, as the most you can do is 6x members if you use both optical bays as well).


JBOD / Partition Question
: Could I create a 100GB partition on one of my two existing external Firewire drives (before or after they are joined) for a bootable clone of my OS + Apps drive? Then I could have Time Machine use the remaining 2.9TB to back up 1.1TB of data. That would give me a longer term answer for both incremental TM backups and future expansion.
Yes, you can do this.
 
I have not forgotten you wanting me to do the tests :) just have to finish up a few things, might try to get to it tonight :)
Ya man, no worries :). I'm in no big hurry. Get the work out, go for a swim, and grab a beer on me. By the way, you do PS for a living and you live in Hawaii? I think you just became my personal hero! I create a fair amount of print images for the Corona Light and Corona Extra beer brands. Perhaps I could have you art direct me when it's 20 below here in Chicago and I've lost all memory of, or ability to visualize, what a tropical beach actually looks like.

Julian
 
For the test, are you in 16 bit mode?

My scratch is a single SSD with only 40 gigs left on a 100GB RE version (waiting on my 3 other 40s to come in), and I'm using my extra one for boot now, so I'm down to one for scratch and it shares my LR catalogs :)
History states 101
Cache levels 6, tile size 1024
OpenGL off
RAM set to 70%; I have 14 gigs total on this box


16 bit mode:
28 minutes write, but this was while using my computer a lot :) copying stuff off that drive across the network etc., doing my other work, LR open etc. while this was going on. Figured I might as well try this and see what happens :) this is a worst-case scenario :)


6:12 read!
Same deal for the read: downloading client files, 3 streams' worth, to the HDD :)
Got two more jobs to get out tonight and 3 more tomorrow :) AHAHHHHH I hate pressure :) OH and LR opening and working :) hehehehe
So lots going on while I was doing this test :)


Will get some time to reboot and try it out with nothing else running.

My thought on the read? Not sure how much quicker it will get; some, I am sure.
The write was pretty good though :) for how much was going on.


When the new 3.2 gets here (I have the RAM now, 24 gigs) I might throw the RAID on it for fun and can run it again, but that will be in a few weeks.
 
For the test, are you in 16 bit mode?

Big thanks for taking the time to do some testing for me, much appreciated! :) My test file was created in 8 bit mode, and saves out to exactly 8.18GB. I think 16 bit mode adds something like 66% to the file size. Do you still have the file? If I had your final save size I could still extrapolate it on a MB-per-second basis. Still on the subject of save/open speed, I've found some statements on the Lloyd Chambers site that speak to this issue in no uncertain terms.

Quote 1: Why a fast hard drive or RAID might do nothing for open/save speed. "With single-threaded open/save, Photoshop is “CPU-bound”, meaning that its running time is a function of CPU speed. In other words, if 95% of the time is spent computing, disk I/O that takes zero time can only speed things up by 5%. Example: opening a 754MB test file took 14 seconds on a 4-way striped RAID, and 18 seconds on a single moderately fast hard drive (on my Mac Pro). The modest improvement reflects the CPU-bound single-threaded operation of Photoshop CS4. Still, 18 vs 14 seconds is a 20% reduction, pointing out the value of a fast striped RAID, something far more cost effective than (for example) a 3.2GHz machine over a 2.8GHz machine, which offers only a 14% speedup."
-- Mac Performance Guide

Quote 2: Photoshop is using only one CPU core for the save operation. "The save can run only as fast as that CPU core, so hard drive speed has little influence. The 4-way striped RAID used here needs about 2 seconds to save a 722MB file. If that time were reduced to zero, then the 48 second save would still be 46 seconds—effectively the same. If you must save as PSD or compressed TIF, then your only choice is to get faster machine eg 3.2GHz instead of 2.8GHz. That could cut down a 48 second save to about 42 seconds. Not so great for a $1400 premium. Note that hard drive speed does matter for saving uncompressed TIF files; the time was cut by 56% in this example. Unfortunately, there is no option to tell Photoshop to not compress PSD files, so if PSD format is mandatory, you’re stuck with poor performance."
-- Mac Performance Guide
http://macperformanceguide.com/OptimizingPhotoshop-Configuration.html#PhotoshopOpeningSaving

My takeaway from those statements: I keep looking for ways to drastically slash open/save times in PS. There does seem to be room for improvement of 20% or so, but not the huge improvement I'm looking for. At the end of the day the application itself (if saving files in .psd format is required) continues to be the primary speed bump. It seems likely that I'll get incremental, but diminishing, returns on any dollars invested beyond a 20% reduction in open/save times. And it seems my intuition has already led me down the right path! I.e. the fast 3.33 GHz clock speed on my 6-core, and a four-disk RAID 0 array.
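The ceiling those quotes describe is basically Amdahl's law applied to disk I/O. A quick sketch with the numbers from the first quote (Python; the I/O fractions are illustrative, not measured):

def max_speedup(io_fraction):
    # best case: the I/O portion drops to zero, the CPU-bound portion is untouched
    return 1 / (1 - io_fraction)

print(max_speedup(0.05))   # ~1.05x if only 5% of an open/save is actually disk time
print(max_speedup(0.25))   # ~1.33x if a quarter of it is disk time
# the 18s -> 14s open (single disk vs 4-way stripe) implies roughly 20-25% of that open
# was I/O-bound; the rest is single-threaded CPU work that no RAID can touch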

Thanks Again :) -Julian
 
Ahhh OK,
yeah I saved the files :)

At least that gives you an idea of files that are in 16 bit: it was 12.77 gigs.
So not all a waste :) hehehehe


8 bit was 8.19 gigs on disk. Still have the file, will re-run with the 8 bit for ya today :) from a fresh reboot.
 
Part of why I wanted to do hardware RAID was that in the past, when I had it, I loved how I could do other things and not be choked to a total stop.
I could keep working; yes, a slowdown, but not the total-choke kind of slowdown that happens with regular HDDs :)
 
I've got a 1680x card hooked up to 7x 1TB RE3's, primarily for PS and LR ... I've just timed saving a 1.9GB multi-layer file in PS and it took 2 min 37 sec (while it's AJA ...). The interesting thing is that according to the disk activity in iStat, the disk is not actually being written to for the vast majority of the time that the process is going on, just a spike of activity here and there (at 360MB/s). If this is accurate, it implies that there will be a ceiling that you will hit regardless of how fast your machine / drives are (mine is a MP 3,1, octo 2.8, 16GB RAM).

I was going down a similar route to you, and my eternal thanks to nanofrog for all his amazing help :D:cool: He is a complete star! :D

I went with a RAID 6 array on an Areca 1680x, an SSD for the OS in the optical bay, and 4x 1.5TB drives internally for backup.... continued.

If it isn't possible to save an 8GB file in under 10 mins (or thereabouts; I don't know if this is true or not, though from my timings above I'd say there's a good chance), perhaps you should go at it from the point of making your system as stable as possible ... perhaps a Mac mini to play music / surf etc. while your main MP is on a user account with just the OS and Photoshop, hooked up to a RAID 5 or 6 array?

Cheers

Slater - Sounds like a really sweet setup! :)

I'm not sure I have it pictured accurately though, especially the 7x 1TB drives.
Q: Is that a four-disk RAID 6 array for your working files, with the 3 left over for scratch? Or do you have your scratch and working files on the same volume?

Wow, you saved a 1.9GB file in 2 min 37 sec. Awesome! It's so fast I'm scratching my head a bit though. This would indicate saving an 8GB file in 8-10 minutes, way faster than the 23 it's taking me.
Q: Is the 1.9GB the amount on the disk after the file is saved, or the amount of scratch, or the amount before the save?
Q: Are you saving the file as a layered PSD, a flattened PSD, a layered TIFF, or a flattened TIFF?

Thanks! - Julian
 
OK, I feel like I'm making some progress on this! Getting really close to Nanofrog's option #3, just need a little help with the details.

32GB Ram
Four 8GB sticks,,, $1,500

-------

Empty Optical Bay : OS + Apps
50GB OWC Extreme Pro RE SSD,,, $200
Currently using 20GB, leaving 30GB free

-------

Internal Bays 1-4 : RAID 0 Scratch

4x 500GB Western Digital WD RE3 (16MB Cache) ,,, $95.00 each

- or -

4x 1TB Western Digital RE3 (32 MB Cache) ,,, $150 each

-------

8 Bay External Box, Sans Digital TowerRAID ,,, $390
with an Areca Raid Card, need help choosing appropriate model from amongst the many already linked.

-------

External Bays : Working Files

6 Drives on RAID 10
- or -
5 Drives on RAID 6
-or-
some other way to utilize the eight bays?

-------

External FW : Back Up

My existing 2x 1.5TB OWC FW drives, spanned together for 3TB of Time Machine, plus a small 50GB partition for a bootable clone via SuperDuper.

-------

On WD RE3's: I don't need the volume of the more expensive one, but it has 2x the cache.
Is it worth the extra $50 each?

On RAID 5, 6, 10: My inner Warren Buffett is telling me never to invest in anything you don't understand. RAID 10 makes total sense to me. I can't fully wrap my mind around RAID 5 or 6, so there is a bit of hesitation. I'm willing to experiment a bit when I get the box and the drives.

Thoughts, Counterpoints?
 
32GB Ram
Four 8GB sticks,,, $1,500

Empty Optical Bay : OS + Apps
50GB OWC Extreme Pro RE SSD,,, $200
Currently using 20GB, leaving 30GB free
As before, these are fine. ;)

Internal Bays 1-4 : RAID 0 Scratch

4x 500GB Western Digital WD RE3 (16MB Cache) ,,, $95.00 each

- or -

4x 1TB Western Digital RE3 (32 MB Cache) ,,, $150 each
You don't need a lot of capacity for scratch space, so you can choose to use smaller disks.

Another option, is to use 2x of the OWC 40GB SSD's (assuming you're willing to deal with a MTBR of 1 - 1.5 years). Another member is already using them in a stripe set, and the sustained throughput per disk is ~175MB/s IIRC (makes sense as well, as the 285MB/s throughput stated is likely burst speed, not sustained). So a pair should produce ~ 350MB/s, which is what you'd get out of a 4x disk mechanical set as well (larger drives are a bit faster than smaller ones). The advantage is less initial cost ($200 vs. $380+ for the models you're considering above), and less latency. The compromise is the MTBR. Even with mechanical, the typical MTBR is 3 years anyway, so it works out to about the same funds.

This should also be fine with the system's ICH, as ~660MB/s - 350MB/s = ~310MB/s for the OS/applications disk, which will be sufficient (SATA 3.0Gb/s is only good for ~270 - 275MB/s per disk anyway; the ~660MB/s limitation has to do with the DMI bandwidth allowed to the SATA controller in the ICH so the USB and Ethernet controllers aren't stalled <allows all 3 controllers to be used simultaneously>).

As per internal drive space, that's not a problem as you're taking the Areca's members and backups external.

So an SSD (2x members) or mechanical set (4x members) is possible with what you're doing. It's up to you which way you want to go.

8 Bay External Box, Sans Digital TowerRAID ,,, $390
with an Areca Raid Card, need help choosing appropriate model from amongst the many already linked.
So you like the look of that one I see.... :D

  • As per the card, do you want more ports (relevant to future expansion/performance)?
  • Are you going to go with SSD's in the near future (i.e. replace mechanical with SSD's on that card in say the next 5 years; partly gets to the 3.0Gb/s vs. 6.0Gb/s bit)?
  • What kind of budget?

External Bays : Working Files

6 Drives on RAID 10
- or -
5 Drives on RAID 6
-or-
some other way to utilize the eight bays?
Not sure what you're wanting to do here....

The card opens up a lot of options, and depending on the card, you can go up to 24 disks without the need for SAS expanders (Areca's SAS cards can actually operate up to 128 disks when using these :eek:).

So until the card model and initial member count are nailed down (goes to the port count and enclosure selection), it's too hard to determine what to go with just yet.

But if that's the enclosure you go with (assuming there's not another enclosure used), you can use up to 8x disks in any configuration the card is capable of (0/1/10/5/6/50/60; the models you'll be considering can do all of these). The parity levels are the better fit for what you're trying to do.

External FW : Back Up

My existing 2x 1.5TB OWC FW drives, spanned together for 3TB of Time Machine, plus a small 50GB partition for a bootable clone via SuperDuper.
I'm not sure of what you'll end up with yet, but I'd say this will be too little capacity.

Once we get the card, drives and enclosure/s set, then we can cover this (make sure you've sufficient capacity and performance to handle the backup in a reasonable amount of time).

There are products out there that could be of use to you if what you have won't be sufficient for all of it. Your existing drives are still helpful though. ;) Clones/secondary backup locations at the very least.

On WD RE3's: I don't need the volume of the more expensive one, but it has 2x the cache.
Is it worth the extra $50 each?
Not really, as the card and member count will certainly get you some serious performance. :D

On RAID 5, 6, 10: My inner Warren Buffett is telling me never to invest in anything you don't understand. RAID 10 makes total sense to me. I can't fully wrap my mind around RAID 5 or 6, so there is a bit of hesitation. I'm willing to experiment a bit when I get the box and the drives.
Each of these levels is a compromise of performance and redundancy (which affects usable capacity).

RAID 10:
Failure count = 2
Performance = (n * single disk performance)/2 <n = member count>
Capacity = half of the total capacity of the set (so: (8 disks * 1TB disks)/2 = 4TB usable capacity)​

RAID 5:
Failure count = 1
Capacity = (n -1) * capacity of a single disk [so: 8 disks @ 1TB = (8 -1) * 1TB = 7TB usable capacity]
Performance = ~ (n * performance of a single disk) * .85 <on average> [so: 8 * 100MB/s *.85 = 680MB/s]​

RAID 6:
Failure count = 2
Capacity = (n - 2) * capacity of a single disk [(8 - 2) * 1TB = 6TB usable capacity]
Performance = ~ (n * performance of a single disk) * .75 <on average> [so: 8 * 100MB/s * .75 = 600MB/s]​
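If you want to plug in your own member counts, here's the same math in one place (a rough Python sketch using the approximation factors above; per-disk figures assumed to be 1TB and ~100MB/s sustained):

def raid_estimate(level, n, disk_tb=1.0, disk_mbps=100):
    # returns (usable capacity in TB, approx. throughput in MB/s, failures survivable)
    if level == 10:
        return n * disk_tb / 2, n * disk_mbps / 2, 2
    if level == 5:
        return (n - 1) * disk_tb, n * disk_mbps * 0.85, 1
    if level == 6:
        return (n - 2) * disk_tb, n * disk_mbps * 0.75, 2

for level in (10, 5, 6):
    cap, speed, failures = raid_estimate(level, n=8)
    print("RAID", level, ":", cap, "TB usable,", round(speed), "MB/s, survives", failures, "failure(s)")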

Real performance data varies from card to card, and setup to setup. So the performance figures I've listed are what I've approximated from test setups (picked the 1222 and 1680 series as a reference point in terms of performance with SATA drives <WD RE3 series to be more specific>). Cache values, the card's processor speeds, and exact disk models all make a difference on the specifics (i.e. the available data on the newer 1880 series is faster than the 1222 or 1680 series cards), but hopefully it will illustrate the capacity/performance/redundancy trade-offs between the levels.

For further information on the redundancy and capacity aspects, take a look at the RAID wiki (if you haven't already, and there's others out there too if you search). Performance data is harder to find, and usually is a result of searching out a specific model number for a card. The review will describe the test setup they used, and present the test data. General formulas are impossible to find, unlike levels 0 or 10 (even harder to find level 1 data, as there's differences between hardware and software implementations).

But above all, you must understand the card you'll be using is capable of all 3 of these, and is a proper hardware controller = it can properly handle parity based arrays (5/6 and nested parity 50/60). The cost of the equipment is the same, so you have options as to what you can do, depending on the performance, capacity, and redundancy requirements that best fit your needs.
 