However, speed and safety are not mutually exclusive concepts for a pro retoucher. Allow me the luxury of an extended explanation. I might still be willing to live with the risk of disk failure on RAID 0, in exchange for the abolishment of another type of risk: specifically, the risk that, because it takes so long to save my file, I don't save as often as I should. When working on a 9 GB, hundred-layer image, there is increased risk of an application freeze or system crash. So ideally you'd save every 15 minutes or so, right? Well, not if every time you save, you're left checking your email for 30 minutes. So I end up not saving for hours at a time, because I've got 3 art directors and 3 account managers showing up in the studio at 4:30 pm. That is the flavor of risk I'm primarily concerned with getting rid of. Everything else is frosting on the cake.
This is usually the case, but usage patterns, and budgets in particular, tend to carve out the actual solution implemented.
Redundancy and performance cost money, and the combination of the two can get downright expensive. But if you need it, you need it.
Given the dangers associated with RAID 0 and your file saving habits, I'd recommend you stay away from it and go with a proper RAID card instead (either level 5 or 6). This wouldn't entirely solve the risk from your file saving frequency, but it would reduce the risk from a primary array failure.
Not sure about your comment here:
Three SSDs in a RAID 0 would cause a throttle?
Or 3 individual SSDs on three separate ports, accessed at the same time, would cause a throttle?
Yes. If they're in a stripe set (or otherwise run simultaneously), then you're trying to shove 750MB/s over a SATA controller that can only pass ~660MB/s to the processor.
This is called throttling, as your SSD throughput has to slow down to accommodate the SATA controller (the ICH in this case).
You can get around this by using a PCIe card (such as a RAID card), as you're utilizing a bus that can handle more bandwidth.
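The arithmetic behind that throttling can be sketched in a few lines (the ~250MB/s per-SSD and ~660MB/s ICH figures are taken from the numbers in this thread; treat them as rough, not benchmarked):

```python
# Rough model of SATA controller throttling: the aggregate bandwidth of
# striped SSDs is capped by what the controller can pass to the processor.
PER_SSD_MBPS = 250    # assumed sequential throughput per SSD (thread figure)
ICH_LIMIT_MBPS = 660  # approximate usable bandwidth of the ICH SATA controller

def effective_throughput(num_ssds, per_ssd=PER_SSD_MBPS, limit=ICH_LIMIT_MBPS):
    """Aggregate SSD bandwidth, throttled to the controller's ceiling."""
    raw = num_ssds * per_ssd
    return min(raw, limit)

print(effective_throughput(2))  # 500: two SSDs fit under the ICH limit
print(effective_throughput(3))  # 660: 750 requested, throttled to 660
```

This is why moving the array onto a PCIe RAID card helps: the `limit` term becomes the (much larger) PCIe bus bandwidth instead of the ICH's.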
You've piqued my curiosity on this option. I'd be interested in any solution at almost any price, in order to get the saving time (write times) of my currently working images down to a fraction of what it takes on my current system(s).
Take a good look at the links you've got in terms of the Arecas, enclosures, and so on, as you can get not only redundancy but additional throughput as well, depending on how far you want to take it (I can actually exceed 1GB/s with my own system).
Think about what kind of throughput would put you where you want to be (work backwards from time per process if you have to), your capacity requirements over time, and so on.
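Working backwards from time per process is straightforward arithmetic. A small sketch, using the 9 GB file size mentioned earlier in the thread and a hypothetical 30-second save target (the target is an example, not a figure from the discussion):

```python
# Work backwards from a target save time to the sustained array
# throughput you'd need. File size in GB, target in seconds.
def required_mbps(file_gb, target_seconds):
    """Sustained MB/s needed to move file_gb gigabytes in target_seconds."""
    return (file_gb * 1024) / target_seconds

# Hypothetical example: a 9 GB layered file saved in 30 seconds
print(round(required_mbps(9, 30)))  # 307 MB/s sustained write
```

Note this is a floor: an application's save time also includes compression and other CPU work, so the array needs at least this much headroom.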
And do you want a "Working Data" array + Archival storage area for completed jobs, or a single array that contains all of it?
I ask because it's easy to make multiple arrays on one card using separate disks (you can even partition disks so they appear as separate arrays, though this doesn't bode well for performance, or for redundancy if a disk fails).
At any rate, there are a lot of options to consider here. Mostly performance and redundancy in your case, as your capacity requirements may not be that large (you'll still be using drives in parallel for a performance increase, which also affects capacity, though both differ between levels for a fixed number of disks).
Think about it, and get back to me. Then we'll go from there.
Uhmm... the edge of my knowledge forest arrives right around what you're stating above. Sounds kinda important, though.
A UPS system is a necessity when you're dealing with RAID, especially so if you go with a parity-based array on a proper RAID card, such as the Arecas linked previously.
I write very, very poorly, sadly.
What's funny is that for 3 years I worked for HP as a trainer. In person, in front of people, I have the gift, yet I can't write to save my life! So sorry.
The previews are generated in batch: you import your images and use a pull-down to select what size, and they are built during the import. You can build them one at a time, but I really don't think many do it that way; most do it on import.
You can also select Build Previews as a menu item.
Writing the preview cache is a very low-priority operation, and the write speed is not at all important.
But once the data is in the cache, LR uses that info, along with your original images, to generate the images you work on. So the read speed of that cache is critical to how quickly you can get to work on an image.
On normal HDs, some wait about 1-2 seconds until they can work on an image.
A Hitachi 1TB short-stroked to 100 gigs takes about 0.78 seconds, and the SSD takes about 0.53 seconds. That 0.25 seconds, for those working on thousands of images, adds up quite quickly.
Moving from image to image quicker also makes for a nicer workflow and doesn't break your thoughts, so actual production goes up by more than the time saved!
I have noticed about a 25% time savings: a normal job that was about 5 hours is now about 4 hours.
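The poster's per-image figures can be sanity-checked with some quick arithmetic (the 0.78s and 0.53s timings are from the post above; the 2,000-image job size is a hypothetical example). Note the raw per-image savings alone is only a few minutes; the quoted 25% job savings includes the workflow effect the poster describes, not just disk time:

```python
# Back-of-the-envelope check of the per-image load-time savings.
HDD_SECONDS = 0.78  # short-stroked Hitachi 1TB, per the poster's timings
SSD_SECONDS = 0.53  # SSD, per the poster's timings

def job_savings_minutes(num_images):
    """Minutes of raw load-time saved over a job of num_images images."""
    return num_images * (HDD_SECONDS - SSD_SECONDS) / 60

# Hypothetical 2,000-image job: ~8.3 minutes of pure load time saved
print(round(job_savings_minutes(2000), 1))
```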
So those who work in LR have to decide if that time savings will make up for the cost.
I would say the time and cost are an easy call if you work with LR for a living as a photographer; it's worth it.
If it's a hobby, RAID 0 short-stroked setups are not bad.
This is the timing chart I came up with over a few days of testing.
For LR users, this is roughly how quickly the white sliders come into view in Develop mode.
Nice assessment.
You've hit the nail on the head, as the old saying goes: it all comes down to what your time is worth, and whether the added cost is justifiable in that context.
For a pro, the answer is more likely that an SSD is a valid solution for the funds. For an enthusiast, not as much, unless they've more money than brains.
That's not particularly well supported. What is being discounted here is that the Mac OS X HFS file system can hand you sequential blocks for smaller files, but for very large files it can't: the automatic defrag mechanism doesn't work on large allocations.
I was talking about application specific situations, specifically for Photoshop actually (though I didn't name it). In this case, the files are large enough that the filesystem can't assist in the way you describe.
In cases of small files, then Yes, I agree with you. And in such cases, it's up to the user to determine if the advantage is justifiable for their situation (i.e. pro earning a living vs. enthusiast/hobbyist with limited budgets, so choices have more impact, as adding A, may mean B can't be had).
For me, it comes down to what's justifiable for the funds (price/performance ratio). For some, going all out is worth it, even if it means a short MTBR (i.e. swapping out SSDs in, say, 1 - 1.5 years). But this will be out of budget for many here on MR, from what gets posted, which is why I posted what I did (aimed at the described usage).
I didn't mean to create any confusion.
Julian, did you read the times I had with my files?
A 5.47 gig file took 118 seconds to save to my RAID.
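For context, that quoted save works out to roughly 47 MB/s of effective throughput. Bear in mind Photoshop's save time includes compression and other CPU work, so this understates the array's raw write speed:

```python
# Effective throughput implied by the quoted save: 5.47 GB in 118 s.
def implied_write_mbps(file_gb, seconds):
    """Effective MB/s for a save of file_gb gigabytes taking 'seconds'."""
    return file_gb * 1024 / seconds

print(round(implied_write_mbps(5.47, 118)))  # ~47 MB/s effective
```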
Again, I can say a RAID setup might be something to look into.
The 1222x I have, with the battery module, is worth it in our line of work.
Drop 8 WD 1TB RE3 drives in it and you have a nice setup that is going to be faster for large file writes than an SSD, in my testing.
A touch over $2000 for this setup, but worth it for the capacity and speed you get.
Let nanofrog jump in on this, but the other controllers where you can extend the cache on the RAID controller to 4GB might help; that I do not know.
This pro PS user cannot afford to lose my work again. 4 hours of work goes down the drain: that's 4 hours to redo, 4 hours you lost, and that puts you 4 hours behind schedule!
I would really recommend reading this also:
http://macperformanceguide.com/Reviews-MacProWestmere-Photoshop-diglloydHuge.html
He seems to be interested in going with such a solution.
And with Arecas, I usually do add a larger DIMM for cache (though 2GB sticks are the "sweet spot", as performance gains from additional cache aren't linear).