
Honumaui

macrumors 6502a
Apr 18, 2008
769
54
Thanks for the thoughts. Yeah, I was going to try to guess usage, but man, that's impossible ;)
I think the tough thing is they just have not been out long enough.

So far it is proving to be a bit quicker in the real world as a cache drive for Lightroom.

Using LR I am seeing about 50% gains in getting the sliders ready to work, but I feel this is more a seek issue, since it is reading the cache, not writing.

LR has been tougher to test. I did some testing by taking video and then looking at the timecode to determine when the sliders come into view.
PS is easy with its timing and such.

With the LR catalogs also on SSD there is a lot of small writing to the DB and some to the new preview files, but it is not that big a gain in the real world.

Of course having more RAM and CS5 in 64-bit is a huge bump.

And as I have been trying to say, for writing large files to disk, in my testing on my system the RAID 6 on the Areca has been faster than SSD, even SSD in RAID 0.

For me SSDs are still my boot drive and cache for LR, along with scratch for PS, but I am going to do some more testing on this.

Also, the read cache of Bridge is a huge improvement when you set the Bridge cache to an SSD.
 

nanofrog

macrumors G4
May 6, 2008
11,719
3
Thanks for the thoughts. Yeah, I was going to try to guess usage, but man, that's impossible ;)
I think the tough thing is they just have not been out long enough.
SSDs are getting better (i.e. newer controllers, TRIM and garbage collection support, and the actual flash is improving as well), but it's still not a mature technology just yet.

So far it is proving to be a bit quicker in the real world as a cache drive for Lightroom.

Using LR I am seeing about 50% gains in getting the sliders ready to work, but I feel this is more a seek issue, since it is reading the cache, not writing.
So long as the primary usage is reads, you won't have any significant issues with burning out cells (which comes from very high write frequency).

With the LR catalogs also on SSD there is a lot of small writing to the DB and some to the new preview files, but it is not that big a gain in the real world.

Of course having more RAM and CS5 in 64-bit is a huge bump.

And as I have been trying to say, for writing large files to disk, in my testing on my system the RAID 6 on the Areca has been faster than SSD, even SSD in RAID 0.

For me SSDs are still my boot drive and cache for LR, along with scratch for PS, but I am going to do some more testing on this.
This could be helpful.

As for Photoshop, I'd really recommend getting the scratch space off of the SSD.

Not sure about LR, as I've no idea on the write frequency of that application. :confused:

Also, the read cache of Bridge is a huge improvement when you set the Bridge cache to an SSD.
For reads, absolutely. SSDs are brilliant solutions for random access reads (I presume it's random access, given your comments above).
 

deconstruct60

macrumors G5
Mar 10, 2009
12,309
3,900
A. Could be wrong, but I don't think Photoshop is ever accessing (writing to) the scratch drive and saving the current file at the same time, because saving your file is a one-thread deal.

Not so sure that is correct. If what you are saving needs data from the incremental states stored in scratch, then they compete. Not at 100%, but there may be some low amount of reads mixed in there.




B. .... I don't think fragmentation will be an issue with at least 75 gigs of free space at all times.

Fragmentation isn't as big of an issue with SSDs. Fragmentation leads to skipping to different parts of the drive to get data; that is absolutely the normal mode for a flash based SSD. (The data is stored all over the drive. It logically looks like it is contiguous perhaps, but that is not the physical layout.)



C. I will get 2x the speed of an already fast SSD for both tasks

Generally, RAID was invented to get the properties of more expensive disks using "inexpensive" drives. It isn't speed you are getting with RAID 0 across flash SSDs, but reliability and useful lifetime.

A two-element RAID stripe across flash SSDs increases wear lifetime 2x (i.e., you have cut the writes to each drive by half). An SLC SSD is 6x-10x better than an MLC drive. You just need to find three independent 2x factors to get an 8x increase (2 * 2 * 2). That will put you much closer to SLC wear lifetimes.
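As a rough sketch of that multiplier arithmetic (Python; the 2x factors beyond the stripe are purely hypothetical placeholders, not measurements):

[code]
# Rough wear-lifetime arithmetic from the post above (illustrative factors only).

def combined_wear_multiplier(factors):
    """Multiply independent wear-lifetime factors together."""
    result = 1.0
    for f in factors:
        result *= f
    return result

raid0_two_disks = 2.0   # each element of a 2-disk stripe sees ~half the writes
extra_factor_a = 2.0    # hypothetical second 2x factor (e.g. over-provisioning)
extra_factor_b = 2.0    # hypothetical third 2x factor (e.g. larger capacity)

print(combined_wear_multiplier([raid0_two_disks, extra_factor_a, extra_factor_b]))  # 8.0
[/code]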

The wiggle room is whether the internal disk controller is going to give you that 2 * 2 (or better) improvement.

The OWC RE drives have about 20% over-provisioning. Not all of that is the GC queue; some is checksum and some other duplication and management. It will save you from blowing out the "available clean cell" write buffer. It's not going to contribute 100% toward better lifetime; it just makes living without TRIM more of a non-issue (so whether it is passed through by the RAID card or not doesn't matter, since the OS isn't passing it in the first place).


Putting the working file there also decreases the flash SSD's lifetime. If you're trying to extend it, then don't put those files there. They too have to be written and erased.

The more data you place on a flash SSD, the less space it has to juggle so it isn't reusing the same cells over and over again. In other words, if some file squats on 10% of the drive, that is 10% that can't be used to avoid wear. If the squatting percentage goes to 50%, then that is another 40% of the space that can't be used to spread out the writes.

If you're going to write 5-10x more than "normal", then you need to minimize the files that are squatting on the drive.
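A minimal sketch of that "squatting" idea (Python; the capacities and percentages are illustrative, following the example in this post):

[code]
# Space left for the controller to rotate writes through, given static
# ("squatting") data and any hidden over-provisioning. Illustrative only.

def wear_leveling_pool_gb(capacity_gb, static_gb, over_provision_pct=0.0):
    """Free space (plus hidden spare area) available to spread writes over."""
    spare = capacity_gb * over_provision_pct
    return capacity_gb - static_gb + spare

drive_gb = 100.0
print(wear_leveling_pool_gb(drive_gb, static_gb=10.0))                          # 90.0 GB to spread writes
print(wear_leveling_pool_gb(drive_gb, static_gb=50.0))                          # 50.0 GB to spread writes
print(wear_leveling_pool_gb(drive_gb, static_gb=50.0, over_provision_pct=0.2))  # 70.0 GB with ~20% hidden spare
[/code]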

As for the increased danger of data loss with Raid 0, I'm thinking that the external FW drive with time machine will save my bacon with auto saves every hour on the hour.

Time Machine pointed at the scratch drive is bad. You don't want old scratch stuff; it was temporary when you were dealing with it, and it has no usefulness after you are done. Time Machine is going to try to keep different versions that have no value.

I think this is being motivated by the working file being there. If that moves, then there is nothing to back up here. Minimally you'd need to restrict Time Machine to the folder where you store the working file (which is a different one than scratch). So:
/Volume/WorkingStuff

with two top-level folders:

/Volume/WorkingStuff/PSScratch

/Volume/WorkingStuff/TheFile


Time Machine would only be aimed at the "TheFile" folder. You can see that, as you move files in and out of that folder over the days, you'll get lots of versions of what are essentially different things.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,309
3,900
External 2 : TM Back Up
Existing OWC Mercury Elite 1.5 Tb Firewire
Small Partition for Bootable Clone of OS + Apps

If TM is backing up the OS drive, then to maximize recovery you would want to split these onto different drives.

If your TM disk dies, then you have a clone. If your clone dies, then you have a TM backup. Both on the same drive means that if you lose the drive, you lose both. That is OK if you have only one backup drive, but there appear to be two here. Might as well take advantage of the different devices.

The other factor is that TM keeps lots of duplicates. While clones can be exactly the same size, for TM to be effective you need something like 130-160% of the total volume(s) you are backing up. (Sure, your disks aren't 100% full, but the duplicates are going to eat space, especially if you have larger, bulky files.)
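Roughly sizing that out (a Python sketch; the 130-160% rule of thumb is from this post, and the data size is just an example):

[code]
# Back-of-the-envelope backup disk sizing using the rule of thumb above.

def clone_size_gb(data_gb):
    # A clone only needs one copy of the data.
    return data_gb

def time_machine_size_gb(data_gb, low=1.3, high=1.6):
    # Time Machine keeps multiple versions, so budget ~130-160% of the source.
    return data_gb * low, data_gb * high

data = 1000.0  # GB actually being backed up (illustrative)
print(clone_size_gb(data))         # 1000.0
print(time_machine_size_gb(data))  # (1300.0, 1600.0)
[/code]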
 

deconstruct60

macrumors G5
Mar 10, 2009
12,309
3,900
Slightly different configuration:


Configuration & Upgrades:

24GB RAM (3 x 8GB), 4th slot empty, $1,095.99

Optical Bay : OS + Apps
60GB OWC Extreme Pro SSD, $179.99

Bay 1 : Scratch
First of two 50GB OWC SSDs in RAID 0, $209.99

Bay 2 : Scratch
Second of two 50GB SSDs in RAID 0, $209.99


Replace these every 2.5-3 years regardless of whether the stats say they are good or not.
In a year, there will be eMLC drives which put you into the current SLC class of wear lifetimes without the 3-4x price multiple. You only need to use these two long enough for them to pay for themselves, and then chuck them.

2 x 50GB works here because I expect that the scratch file size doesn't go over a 30-40GB high-water mark (that's about 3.75-5 times the file size). If it is much higher than that, then you need to switch to spinning hard drives.

Likewise, it's not surprising to see problems with an SSD as the scratch disk if the scratch file is >60-70% of the disk's capacity. Ideally you want to keep free, at a minimum, the over-provisioning percentage plus a small additional percentage (e.g., that would be 10-15GB for the OWC RE drives).
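A small sanity check of that sizing rule (Python; the 60-70% threshold and capacities are the rough figures from this post, not hard limits):

[code]
# Does the expected scratch high-water mark leave enough free space on the stripe?

def scratch_fits(stripe_gb, scratch_high_water_gb, max_fill=0.65):
    """True if scratch stays under roughly 60-70% of the stripe's capacity."""
    return scratch_high_water_gb <= stripe_gb * max_fill

stripe_gb = 2 * 50                    # two 50GB SSDs in RAID 0
print(scratch_fits(stripe_gb, 40))    # True  -> a 40GB high-water mark is workable
print(scratch_fits(stripe_gb, 70))    # False -> better to fall back to spinning disks
[/code]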



Bay 3 : RAID-0 Partition 500GB current job, RAID-1 Partition Completed Jobs
1.5TB 7,200 rpm HD pulled from old G5

Bay 4 : RAID-0 Partition 500GB current job, RAID-1 Partition Completed Jobs
1.5TB 7,200 rpm HD pulled from old G5

[Assuming that, since these are 1.5TB drives, they are relatively new (less than 1.5 years old). Also somewhat assuming that these are not in a RAID-1 setup now, but more so represent old jobs and really old jobs. A fraction of the really old is going to get pushed somewhere else.]



External 1: 1TB partition: Back Up Clone , 500M partition Bootable OS + Apps Clone
Existing OWC Mercury Elite 1.5 Tb Firewire

External 2 : TM Back Up of 1TB RAID-1 space
Existing OWC Mercury Elite 1.5 Tb Firewire


External 3: Warehouse drive for "really old" not-so-current projects.
New Firewire drive.

[Files that haven't changed in 2-3 years should come out of your backup rotation. They only suck up time and space. If you haven't looked at files in a couple of years, take them "offline". When you update the Warehouse, clone it and then store that clone somewhere else as a backup.]


This obviates buying some expensive RAID card where you're going to start using parity. RAID parity is a significant price increaser, because at that point you need UPS power backup in addition to data backup.

If you're going to commit to a RAID card, I'd do the following.

Don't use the optical drive space.

DISK 1 60 GB OS + Apps.

DISK 2 50 GB scratch RAID-0

DISK 3 50 GB scratch RAID-0

DISK 4 1TB Back-Up Clone, 500GB OS+App Clone


You hand the TM disks and the "Current Job" stuff off to a separate SATA controller. Primarily you're trying to avoid saturating the internal SATA controller... it only goes so fast. [In part that is what you are seeing when an external RAID-5/6 goes faster than some over-allocated internal drive configuration.] Stuffing the OD slot and all four sleds is only going to expose that more. Likewise, when doing "current job" cloning, use separate controllers for "from" and "to".



External

RAID 5 (or 6) for the 1TB Current Space: 3 x 500-600GB drives (or 4x in the double-parity case).

RAID 0 for the current file: 2 x 500GB.

[If you can mix and slice the three 1.5TB drives with a card, then use them to keep the drive count down. If the RAID card only consumes whole drives, then the drive count goes up. You can go with a 3-way stripe over those 500GB slices, since you need 3 because of the parity issue anyway. This presumes it is not a big issue to put "current file" and "current jobs" together, because you don't do open/saves on both at the same time.]

TM backup drive. [Depending upon the external drive enclosure budget, either in it or on FireWire.]

Warehouse drive(s). [Still FireWire, since you're really not accessing them all that much.]
 

Honumaui

macrumors 6502a
Apr 18, 2008
769
54
As for Photoshop, I'd really recommend getting the scratch space off of the SSD.

Not sure about LR, as I've no idea on the write frequency of that application. :confused:

For reads, absolutely. SSDs are brilliant solutions for random access reads (I presume it's random access, given your comments above).

Yeah, on the PS scratch I am a bit in the not-sure, going-to-test stage ;)
I might go back to some short-stroked setups like I used to use. I have to do some time testing; if I can save one hour, the drive will pay for itself. So that will be my basis: time opening and closing 40-80 large documents of 250 megs to 500 megs, check the efficiency and scratch size in PS, try some real-world testing, and decide. I figure it's worth trying :)


Just some good info for ya nanofrog: the LR cache is a true cache. It writes the files out at about half the size of a raw file, so 5D files in the cache are about 6-7 megs, and that's it; it sits there till it's needed and read, with no more writing back.
The write takes place when you tell it to build previews, usually on import, and it's all based on how big the cache is. So there is some writing as you add more images and, depending on the size, as it flushes, but really very, very low write issues.

Now, the LR catalog has lots of writes, both to a database (so very small hits) and larger hits of 700KB to 4 megs or so, depending on the preview sizes selected.

This is also why I have been keeping my catalogs on my RAID, and it seems to offer the same performance.

The cache in LR is set to the size you want, and once it fills it just purges out older files and stays at that size, unlike PS scratch, which you have no control over.

So it's easy to, say, get a 40 gig drive and tell it 30 gigs for cache.

For those looking at the LR cache: I posted a graph. I did a bunch of timing tests, and RAID 0 versus a single drive was about a .01 second difference. So to me the RAID 0 thing would only be used with smaller SSDs to get the size you want.
 

nanofrog

macrumors G4
May 6, 2008
11,719
3
Not so sure that is correct. If what you are saving needs data from the incremental states stored in scratch, then they compete. Not at 100%, but there may be some low amount of reads mixed in there.
SATA = Half Duplex
SAS and FC = Full Duplex

But keep in mind, the second channel in SAS and FC can't be used simultaneously for data (one for data, one for commands).

Even in the case you describe, the data should be read, processed by the system's CPU, and then the output written to the disk. If it's fast enough, the disk will at least have to wait for the platter to rotate to the correct position before the write can be executed (i.e. reads on the first pass, system processes the data, then writes on the second rotation; assumes the servo remains in the same position).

I'm just not aware of what you're describing actually occurring, though I think I see where you're coming from (based on PMR head tech, I presume). But the data still has to be processed by the system, if I understand your point correctly.

Bay 1 : Scratch
First of two 50GB OWC SSDs in RAID 0, $209.99

Bay 2 : Scratch
Second of two 50GB SSDs in RAID 0, $209.99


Replace these every 2.5-3 years regardless of whether the stats say they are good or not.
In a year, there will be eMLC drives which put you into the current SLC class of wear lifetimes without the 3-4x price multiple. You only need to use these two long enough for them to pay for themselves, and then chuck them.
I understand your point (and a really good idea to recommend a short MTBR, btw), but why not just use mechanical until the eMLC-based SSDs actually ship?

Cheaper to get going, and it allows prices to come down a bit as well. Makes more sense to me, as a mechanical array is sufficient for the intended usage.

At least SSDs don't have to deal with track locations, though; for mechanical platters, best performance means not letting the array get more than 50% full. A definite advantage for SSDs, but pricey ATM.

Just need sufficient time to pass before SSDs really hit the point where they're able to replace mechanical for most things.

Yeah, on the PS scratch I am a bit in the not-sure, going-to-test stage ;)
I might go back to some short-stroked setups like I used to use. I have to do some time testing; if I can save one hour, the drive will pay for itself. So that will be my basis: time opening and closing 40-80 large documents of 250 megs to 500 megs, check the efficiency and scratch size in PS, try some real-world testing, and decide. I figure it's worth trying :)
Short stroking the disks is another nice little trick (use it myself for OS's and their clones).

It'll be interesting to see what you find out, as it could help others eke out as much performance from their systems as possible without spending substantial funds. :)

Just some good info for ya nanofrog: the LR cache is a true cache. It writes the files out at about half the size of a raw file, so 5D files in the cache are about 6-7 megs, and that's it; it sits there till it's needed and read, with no more writing back.
The write takes place when you tell it to build previews, usually on import, and it's all based on how big the cache is. So there is some writing as you add more images and, depending on the size, as it flushes, but really very, very low write issues.

Now, the LR catalog has lots of writes, both to a database (so very small hits) and larger hits of 700KB to 4 megs or so, depending on the preview sizes selected.

This is also why I have been keeping my catalogs on my RAID, and it seems to offer the same performance.

The cache in LR is set to the size you want, and once it fills it just purges out older files and stays at that size, unlike PS scratch, which you have no control over.

So it's easy to, say, get a 40 gig drive and tell it 30 gigs for cache.

For those looking at the LR cache: I posted a graph. I did a bunch of timing tests, and RAID 0 versus a single drive was about a .01 second difference. So to me the RAID 0 thing would only be used with smaller SSDs to get the size you want.
The write frequency seems lower, but I'm not sure how rapidly the previews are generated (i.e. single? large batch?).

Assuming my understanding is correct, it would probably be fine on SSDs, but given the costs involved and the small differences between an SSD and a RAID stripe set, it may not be worth it, unless my understanding of what you've tested, and on what, is way off.

Your post was a tad confusing, so I apologize if this is making matters worse. :eek: :p
 

JulianBoolean

macrumors regular
Original poster
Aug 14, 2010
142
5
Unfortunately, I wouldn't go this route. Consumer grade SSDs (MLC based) are not meant for high write conditions, which scratch usage is (I've posted on this before, so if you're interested in the specifics, you may want to do a search on MR ;)).

Ok, got it. After some reading here and elsewhere, I'm dubious regarding the use of SSDs for writing scratch and working files. Thanks!

You'd be better off using mechanical disks in a RAID configuration compared to SSDs. And as your post indicates you're earning a living at it, a stripe set for working data is not a good idea either, given the time spent fixing a problem, or worse, an insufficient backup system = lost data (not just having the disks for backup, but the frequency that those backups are made; the longer the time, the more work that has to be re-done to get the missing data back).

Agreed. Saving working files to a RAID 0 drive is risky business. I get it.

However, speed and safety are not mutually exclusive concepts for a pro retoucher. Allow me the luxury of an extended explanation. I might still be willing to live with the risk of disk failure with RAID 0, in exchange for the abolishment of another type of risk. Specifically, the risk that, because it takes so long to save my file, I don't save as often as I should. When working on a 9 gig, hundred-layer image, there is increased risk of an application freeze or system crash. So ideally you'd save every 15 minutes or so, right? Well, not if every time you save, you are left checking your email for 30 minutes. So I end up not saving for hours at a time because I've got 3 art directors and 3 account managers showing up in the studio at 4.30 pm. That is the flavor of risk I'm primarily concerned with getting rid of. Everything else is frosting on the cake.

I can live with the difference between this or that filter taking 122 seconds vs 600 seconds on a 12-core 2.93. All those speed charts on Mac Performance Guide make for interesting reading, but I need a system that can solve one freakin huge problem that photoshop has created for me. Saving is a single-task, single-thread operation. Unfortunately, throwing RAM, cores, or clock speed at the problem does not fix it.

SLC-based disks would be sufficient, but the capacities are low, and they're still quite expensive (i.e. Intel X25-E models). At those funds, you have better options (redundancy, still better write cycle conditions, and higher sustained throughputs for similar funds).

Understood, thanks! FWIW, the capacity of an SSD for either scratch or working files is not really an issue. I really only need 100 gigs max for currently working jobs.

Another thing to consider is that the ICH (SATA controller) in the system has a throughput limit of ~660MB/s. You'd throttle with 3x of those SSD's (~250MB/s each, so simultaneous access will throttle).

Not sure about your comment here:
• Three SSDs in a RAID 0 would cause a throttle?
• 3 individual SSDs on three separate ports, but accessed at the same time, causes a throttle?

Option 1:
  • Use an SSD as a OS/applications disk in the empty optical bay (boot/applications disk)
  • Substitute mechanical disks for the SSD's in the stripe set (better suited for scratch)
  • Use the other mechanical disks as the primary data location (working data, as it's safer than a stripe set)
  • External backups as configured
The advantage here is lower cost (it's cheaper than your original configuration), and better safety for your working data. But it's nowhere near where you should be IMO for earning a living with the system (for a hobbyist, it would be acceptable, as the data's not critical).

This would indeed cut my 40-minute Photoshop saves (via a single 7200 rpm HDD) in half. But 20 minutes is still kind of a deal breaker; I need faster writes.

Option 2:
  • Use an SSD as a OS/applications disk in the empty optical bay (boot/applications disk)
  • 4x Mechanical disks in HDD bays 1 - 4, in a RAID 10 configuration (speed is ~ that of a 2 disk mechanical stripe set, but has the redundancy of 2x disks before data is gone). This is used for scratch and primary data (working files)
  • External backups/archival locations (can use single disks or JBOD; eSATA card and possibly a Port Multiplier enclosure would be less expensive over time, as you just add disks)
This is still inexpensive (especially for what you get), as you now have the minimum performance requirement, and some redundancy as well. But the performance isn't as good as it could be.

Awesome, I like this one a bit better as well but I still get the same write times as option A.

Option 3:
  • Use an SSD as a OS/applications disk in the empty optical bay (boot/applications disk)
  • Use a proper RAID card, and use either RAID 5 or 6 (a proper card can handle the write hole issue associated with parity based arrays; software implementations, such as the Highpoint 2314, cannot). Specifics can be gone over if you're interested in this configuration, as there's more information and options to consider (internal, external, mixed = hybrid, disk count, future expansion requirements, OS requirements, boot requirements,...).
  • External backups/archival locations (can use single disks or JBOD; eSATA card and possibly a Port Multiplier enclosure would be less expensive over time, as you just add disks)
This is the best way to go IMO, and what I was referring to when I mentioned "better options for the funds" in terms of using SLC based SSD's.

You've piqued my curiosity on this option. I'd be interested in any solution at almost any price, in order to get the saving time (write times) of my currently working images down to a fraction of what it takes on my current system(s)

You've more options and expansion capabilities with this route as well (i.e. use a hybrid = internal + external disks in the array). This is why the port count matters; the disk count affects the arrays possible (5/6, or even nested parity 50/60, though I doubt you'd need to go this route). If you've sufficient ports, you can increase capacity and performance just by adding disks (really nice, and the redundancy is a necessity given what you're doing with the system IMO).

Got it, thanks!

The ARC-1222 or ARC-1222x are good cards to take a look at, as is the ARC-1680 family (a 12+ port card may be needed for future expansion, depending on your capacity growth). There's an internal adapter kit that will allow the HDD bays to work with internal cards, and a special cable that can take an internal port to an external enclosure. If you're more interested in an external-only solution, you need to be looking at a MiniSAS (SFF-8088) compliant enclosure (example = Stardom SohoTank ST8-U5). External cables run per MiniSAS port (each handles 4x disks).

Awesome links. I'm terrible at finding things I don't know I might need. Big thanks on this.

A few notes:
With mechanical disks, you want to stay at 50% or less full for performance reasons (inner tracks = slowest on the disks, and when you get into this area, your performance can drop below the minimum requirements; particularly to be noted on a 2x disk stripe set, or even a 10 array, which only offers you half the total capacity as a trade-off for the redundancy). In the case of a 10, you'd probably be best served by using 2TB disks.

Got it, understood.

With a RAID card (i.e. Areca), you need to run enterprise grade disks if you want it to work (consumer disks are unstable, so don't do it; their recovery timings are wrong for RAID cards). As a result of potential problems, it's advisable to use the HDD Compatibility List to determine what drives to use (not all RAID card makers offer these, but Areca does, and it is one of the reasons I like their products - saves a lot of hassle and aggravation).

Got it, thanks!

You've not mentioned the need for a Windows disk, but if you create an array under OS X, you won't be able to boot Windows off of the ICH (SATA controller on the logic board). But this is fixable via a separate SATA controller and cable assembly (you'll have to make this by splicing 2x cables that are available together). Not expensive either (card + cables), and not hard to do.

Nope, don't need a Windows disk.

The RAID wiki might be a good thing to give a look at, particularly the 10/5/6 levels.

I will for sure dive into that link, thanks!

Also, you'd need to run a good UPS system as well (Online type is really what you should be using, though a Line interactive can be substituted so long as it has a step transformer in a budget pinch). BTW, Online types can be had refurbished to save on funds as well. A UPS isn't just an option with RAID, it's a necessity (you'll be burnt in terms of lost data if you try running without it).
Uhmm.. the edge of my knowledge forest arrives right around what you are stating above. Sounds kinda important though.
I know this is a lot to read, and hope not too confusing, but it should help. :)

Really great stuff man. Not too confusing at all, very well articulated. I'm hitting the hay for the night. Back attcha soon.

-JB
 

Honumaui

macrumors 6502a
Apr 18, 2008
769
54
The write frequency seems lower, but I'm not sure how rapidly the previews are generated (i.e. single? large batch?).

Assuming my understanding is correct, it would probably be fine on SSDs, but given the costs involved and the small differences between an SSD and a RAID stripe set, it may not be worth it, unless my understanding of what you've tested, and on what, is way off.

Your post was a tad confusing, so I apologize if this is making matters worse. :eek: :p

I write very, very poorly sadly :(
What's funny is that for 3 years I worked for HP as a trainer. In person, in front of people, I have the gift, yet I can't write to save my life! So sorry.

The previews are generated in a batch: you import your images and use a pull-down to select what size, and during the import they are built. You can build them one at a time, but really I don't think too many do it this way; most do it on import.
You can also select build previews as a menu item.
It really is a very low-priority write, and the write speed is not at all important.
But once the data is in the cache, LR uses that info to generate the images you work on, along with your original images!
That data is used to build the images you work on, so the read of that data is critical to the speed of getting to work on the images.

Some on normal HDs have about 1-2 seconds until you can work on an image.
A Hitachi 1TB short-stroked to 100 gigs takes about .78 seconds or so, and the SSD takes about .53 seconds, but that .25 seconds, for those who are working on thousands of images, adds up quite quickly.
And moving from image to image quicker also makes for a nicer workflow and does not break your thoughts, so actual production goes up by more than the time saved!
I have noticed about a 25% time savings:
a normal job is about 5 hours, now about 4 hours.
So those who work in LR have to decide if that time will make up for the cost.
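As a sketch of how those fractions of a second add up (Python; the per-image times are the rough measurements above, and the image count per job is purely an assumed example):

[code]
# How per-image latency differences add up over a large job.

def job_overhead_minutes(images, seconds_per_image):
    return images * seconds_per_image / 60.0

images_per_job = 2000  # assumed, for illustration only
times = {"normal HDD": 1.5, "short-stroked HDD": 0.78, "SSD": 0.53}  # seconds per image

for label, t in times.items():
    print(f"{label:>17}: {job_overhead_minutes(images_per_job, t):.0f} min spent waiting on sliders")
[/code]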

I would say the time-and-cost question is easy: if you work with LR for a living as a photographer, it's worth it.

If it's a hobby, RAID 0 short-stroked setups are not bad.

This is the timing chart I came up with over a few days of testing,
and for LR users this is about how quickly the white sliders come into view in Dev mode!
 

Attachments

  • chart2.gif (21.8 KB)

deconstruct60

macrumors G5
Mar 10, 2009
12,309
3,900
Now consider the following:
  • SSDs are great for random access usage, but expensive for sequential access (ignoring available SATA ports, which can be added for less than the cost of an SSD; a simple SATA card, for example)
  • Scratch relies on sequential throughputs more than random access, as I understand the applications used (application based, not OS).

This means that you can get fast sequential access via mechanical for less money than SSD, and it is better suited for the job. :eek: ....

That's not particularly well supported. What is being discounted here is that the Mac OS X HFS file system can hand you sequential blocks; for very large files it can't. The automatic defrag mechanism doesn't work on large allocations.

Second, most SSD controllers have been tweaked to sniff out sequential reads. At that point, if there is a nice-sized RAM cache on the flash disk, the drive can fill it almost as fast as it is being emptied. You will get fast rates regardless of whether the file system has allocated sequential blocks or not.

You can do it with disks. Folks do it for large data-warehouse DB tablespaces. However, those typically take performance hits as soon as you do a couple of updates to them and then try to read them again. To get fast sequential disk reads you have to line up all the blocks inside of a track and then the next blocks on exactly the next track. Once you do that for several tracks, most file systems lose it. There will be a block somewhere else and you'll take a track-seek latency hit. As soon as that happens even 10% of the time, the SSD is going to gain ground, because it has become a random access problem.

As long as the flash SSD has a large enough "clean cell, ready for writing" write queue and you're not writing too much, there is not a problem.

10,000 write cycles means you can write to the same cell 11 times a day for 2.5 years (at 50 times a day you are at .5 years). If someone is spending hours processing a single photo (long enough for TM backups to trigger), how likely is it for the scratch file to be overwritten 11 times in a day? That is pessimistically assuming the drive controller's wear algorithms can't distribute it around. You can easily cut that in half if the controller has a pool of candidate cells that is just as big as the file.
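The arithmetic behind those lifetime figures (a Python sketch; 10,000 cycles is the nominal MLC endurance number used in this post):

[code]
# Nominal MLC endurance arithmetic from the paragraph above.

def years_of_life(cycles_per_cell, full_rewrites_per_day):
    return cycles_per_cell / (full_rewrites_per_day * 365.0)

mlc_cycles = 10_000
print(round(years_of_life(mlc_cycles, 11), 1))  # ~2.5 years at 11 rewrites/day
print(round(years_of_life(mlc_cycles, 50), 1))  # ~0.5 years at 50 rewrites/day
[/code]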


If you're opening/closing 11 very large (>1GB) files an hour, you have problems. Photoshop is going to make a copy in scratch, then make incremental additions as you make changes. That will have a bad impact on an MLC SSD.
 

Honumaui

macrumors 6502a
Apr 18, 2008
769
54
Julian, did you read the times I had with my files?
A 5.47 gig file takes 118 seconds to save to my RAID.

Again, I can say a RAID setup might be something to look into.

The 1222x I have, with the battery module, is worth it in our line of work.
Drop 8 WD 1TB RE3 drives on it and you have a nice setup that is going to be faster for large file writes than SSD, in my testing.
A touch over $2000 for this setup, but worth it for the capacity and speed you get.

I'll let nanofrog jump in on this, but the other controllers where you can extend the cache on the RAID controller to 4GB might help? That I do not know.


This pro PS user cannot afford to lose my work again. If 4 hours of work goes down the drain, that's 4 hours to redo the 4 hours you lost, and that puts you 4 hours behind schedule!!!


I would really go read this also:
http://macperformanceguide.com/Reviews-MacProWestmere-Photoshop-diglloydHuge.html
 

nanofrog

macrumors G4
May 6, 2008
11,719
3
However, speed and safety are not mutually exclusive concepts for a pro retoucher. Allow me the luxury of an extended explanation. I might still be willing to live with the risk of disk failure with RAID 0, in exchange for the abolishment of another type of risk. Specifically, the risk that, because it takes so long to save my file, I don't save as often as I should. When working on a 9 gig, hundred-layer image, there is increased risk of an application freeze or system crash. So ideally you'd save every 15 minutes or so, right? Well, not if every time you save, you are left checking your email for 30 minutes. So I end up not saving for hours at a time because I've got 3 art directors and 3 account managers showing up in the studio at 4.30 pm. That is the flavor of risk I'm primarily concerned with getting rid of. Everything else is frosting on the cake.
This is usually the case, but usage pattern, and budgets in particular, tend to carve out the actual solution implemented.

Redundancy and performance cost money, and the combination of the two can get downright expensive. But if you need it, you need it.

Given the dangers associated with RAID 0 and your file saving habits, I'd recommend you stay away from it, and go with a proper RAID card instead (either 5 or 6). This wouldn't solve your risk due to file saving frequency entirely, but it would reduce the amount of risk due to a primary array failure.

Not sure about your comment here:
• Three SSDs in a RAID 0 would cause a throttle?
• 3 individual SSDs on three separate ports, but accessed at the same time, causes a throttle?
Yes. If they're in a stripe set (or otherwise run simultaneously), then you're trying to shove 750MB/s over a SATA controller that can only pass ~660MB/s to the processor.

This is called throttling, as your SSD throughput has to slow down to accommodate the SATA controller (the ICH in this case).

You can get around this by using a PCIe card (such as a RAID card), as you're utilizing a bus that can handle more bandwidth.
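A quick sketch of that bandwidth math (Python; the ~660MB/s ICH figure and ~250MB/s per SSD are the approximations used above):

[code]
# Will simultaneous SSD access saturate the ICH (on-board SATA controller)?

ICH_LIMIT_MBPS = 660.0   # rough usable ICH throughput
SSD_MBPS = 250.0         # rough per-drive sequential throughput

for n in (2, 3):
    demand = n * SSD_MBPS
    verdict = "throttles" if demand > ICH_LIMIT_MBPS else "fits"
    print(f"{n} SSDs: {demand:.0f} MB/s demanded -> {verdict} on the ICH")
[/code]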

You've piqued my curiosity on this option. I'd be interested in any solution at almost any price, in order to get the saving time (write times) of my currently working images down to a fraction of what it takes on my current system(s)
Take a good look at the links you've got in terms of the Areca's, enclosures,... as you can not only get redundancy, but additional throughput as well, depending on how far you want to take it (I can actually exceed 1GB/s with my own system).

Think about what kind of throughputs would put you where you'd want to be in (work backwards from time per process if you have to), capacity requirements over time,...

And do you want a "Working Data" array + Archival storage area for completed jobs, or a single array that contains all of it?

I ask, as it's easy to make multiple arrays on one card using separate disks (you can even partition disks so they appear as separate arrays, though this doesn't bode well for performance, or for redundancy if one fails).

At any rate, there's a lot of options to consider here. Mostly performance and redundancy in your case, as your capacity requirements may not be that large (you'll still be using drives in parallel for a performance increase, and it also affects capacity, though both differ between levels on a fixed number of disks).

Think about it, and get back to me. Then we'll go from there. :)

Uhmm.. the edge of my knowledge forest arrives right around what you are stating above. Sounds kinda important though.
A UPS system is a necessity when you're dealing with RAID, especially so if you go with a parity based array on a proper RAID card, such as the Areca's linked previously.

I write very, very poorly sadly :(
What's funny is that for 3 years I worked for HP as a trainer. In person, in front of people, I have the gift, yet I can't write to save my life! So sorry.

The previews are generated in a batch: you import your images and use a pull-down to select what size, and during the import they are built. You can build them one at a time, but really I don't think too many do it this way; most do it on import.
You can also select build previews as a menu item.
It really is a very low-priority write, and the write speed is not at all important.
But once the data is in the cache, LR uses that info to generate the images you work on, along with your original images!
That data is used to build the images you work on, so the read of that data is critical to the speed of getting to work on the images.

Some on normal HDs have about 1-2 seconds until you can work on an image.
A Hitachi 1TB short-stroked to 100 gigs takes about .78 seconds or so, and the SSD takes about .53 seconds, but that .25 seconds, for those who are working on thousands of images, adds up quite quickly.
And moving from image to image quicker also makes for a nicer workflow and does not break your thoughts, so actual production goes up by more than the time saved!
I have noticed about a 25% time savings:
a normal job is about 5 hours, now about 4 hours.
So those who work in LR have to decide if that time will make up for the cost.

I would say the time-and-cost question is easy: if you work with LR for a living as a photographer, it's worth it.

If it's a hobby, RAID 0 short-stroked setups are not bad.

This is the timing chart I came up with over a few days of testing,
and for LR users this is about how quickly the white sliders come into view in Dev mode!
Nice assessment. :)

You've hit the nail on the head as the old saying goes, as it all comes down to what is your time worth, and is the added cost justifiable in that context?

For a pro, the answer is more likely that an SSD is a valid solution for the funds. To an enthusiast, not as much, unless they've more money than brains. :eek: :p

That's not particularly well supported. What is being discounted here is that the Mac OS X HFS file system can hand you sequential blocks; for very large files it can't. The automatic defrag mechanism doesn't work on large allocations.
I was talking about application specific situations, specifically for Photoshop actually (though I didn't name it). In this case, the files are large enough that the filesystem can't assist in the way you describe.

In cases of small files, then Yes, I agree with you. And in such cases, it's up to the user to determine if the advantage is justifiable for their situation (i.e. pro earning a living vs. enthusiast/hobbyist with limited budgets, so choices have more impact, as adding A, may mean B can't be had).

For me, it comes down to what's justifiable for the funds (price/performance ratio). For some, going all out is worth it, even if it means a short MTBR (i.e. swapping out SSD's in say 1 - 1.5 years). But this will be out of budget for many here on MR, from what gets posted, and why I posted what I did (aimed at the described usage).

I didn't mean to create any confusion. :eek:

Julian, did you read the times I had with my files?
A 5.47 gig file takes 118 seconds to save to my RAID.

Again, I can say a RAID setup might be something to look into.

The 1222x I have, with the battery module, is worth it in our line of work.
Drop 8 WD 1TB RE3 drives on it and you have a nice setup that is going to be faster for large file writes than SSD, in my testing.
A touch over $2000 for this setup, but worth it for the capacity and speed you get.

I'll let nanofrog jump in on this, but the other controllers where you can extend the cache on the RAID controller to 4GB might help? That I do not know.


This pro PS user cannot afford to lose my work again. If 4 hours of work goes down the drain, that's 4 hours to redo the 4 hours you lost, and that puts you 4 hours behind schedule!!!


I would really go read this also:
http://macperformanceguide.com/Reviews-MacProWestmere-Photoshop-diglloydHuge.html
He seems to be interested in going with such a solution.

And with the Arecas, I usually do add a larger DIMM for cache (though 2GB sticks are the "sweet spot", as performance gains from additional cache aren't linear). :D
 

deconstruct60

macrumors G5
Mar 10, 2009
12,309
3,900
When working on a 9 gig, hundred-layer image, there is increased risk of an application freeze or system crash. So ideally you'd save every 15 minutes or so, right? Well, not if every time you save, you are left checking your email for 30 minutes.

9,000MB in 30 minutes (1,800 seconds) is 5MB/s. That doesn't seem like a disk problem. You have some pokey disks (or a really bad controller, or really bad fragmentation) if the bulk of that time is just writing. Either far more data than that is being read/written somewhere else, the channel is clogged, or it is a computational problem.

Before running off and spending $1,000s on some complicated SSD and/or RAID-card setup, I'd buy another 1.5TB drive and put it into drive bay 2 (for scratch and the current single file), then add the other two from the G5, and just try things out. Get some baseline performance numbers just using 64-bit CS5 along with the 24GB of memory before dropping megabucks. There is a good chance you will be surprised after you do that.
[Your 32-bit Photoshop was killing your performance on that 8-9GB image file, with constant swapping to/from scratch.]
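That back-of-the-envelope number, worked out (a Python sketch using the figures quoted above):

[code]
# Effective throughput implied by the save time described above.

def effective_mb_per_s(file_mb, seconds):
    return file_mb / seconds

save_mb = 9_000.0      # ~9GB working file
save_time_s = 30 * 60  # ~30 minute save
print(effective_mb_per_s(save_mb, save_time_s))  # 5.0 MB/s -- far below even a single modern HDD
[/code]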
 

deconstruct60

macrumors G5
Mar 10, 2009
12,309
3,900
A UPS system is a necessity when you're dealing with RAID, especially so if you go with a parity based array on a proper RAID card, such as the Areca's linked previously.

For RAID levels with parity it is necessary, because there are multiple writes to different drives that ideally should be atomic but are not. For the 0 and 1 levels, not as much. Each one of the RAID 0 stripe updates is atomic. Likewise with RAID 1: if the writes are done in sequence you haven't lost data, and if done in parallel, both copies simply failed together.
Those two are no better or worse than a single drive if you cut power.



I was talking about application specific situations, specifically for Photoshop actually (though I didn't name it). In this case, the files are large enough that the filesystem can't assist in the way you describe.

Unless the application is reading/writing to raw disk they ALL go through the file system.

If you look at the Download benchmark here, you'll see that an SSD comes out on top:
http://www.anandtech.com/show/3734/seagates-momentus-xt-review-finally-a-good-hybrid-hdd/6
 

Honumaui

macrumors 6502a
Apr 18, 2008
769
54
And with the Arecas, I usually do add a larger DIMM for cache (though 2GB sticks are the "sweet spot", as performance gains from additional cache aren't linear). :D

Good to know, as I was not sure exactly how it would affect that :) Next year the 1800 series will be in my system and this one will go down to the second machine; I figure I'll wait a bit and let the drivers mature, etc. I've been following them ;) and will see what is going on in storage also, to make 'em fly.
 

nanofrog

macrumors G4
May 6, 2008
11,719
3
Good to know, as I was not sure exactly how it would affect that :) Next year the 1800 series will be in my system and this one will go down to the second machine; I figure I'll wait a bit and let the drivers mature, etc. I've been following them ;) and will see what is going on in storage also, to make 'em fly.
I'm waiting for performance data, particularly as they're built off of Marvell RoC controllers (want to see what they're capable of; ATTO's products too, as they're using the same Marvell chips).

For RAID levels with parity it is necessary, because there are multiple writes to different drives that ideally should be atomic but are not. For the 0 and 1 levels, not as much. Each one of the RAID 0 stripe updates is atomic. Likewise with RAID 1: if the writes are done in sequence you haven't lost data, and if done in parallel, both copies simply failed together.
Those two are no better or worse than a single drive if you cut power.
I realize what you're getting at, but I'm thinking of power lost during work (processing and writing data occur during a power outage).

Re-performing the work sucks, and it can't be relied upon that the job will resume automatically (this wasn't all aimed at atomicity, but at real-world conditions).

So I see this as one instance where such a generalization is quite applicable, though the specific reasons differ between parity and non parity based arrays.

Unless the application is reading/writing to raw disk they ALL go through the file system.
Not that they don't all go through the filesystem, but its ability to improve matters has limits. For small files, it can certainly give an improvement; not as much as the size increases, to the point where its effects are harder to measure in real-world terms.
 

ScratchyMoose

macrumors regular
Jan 13, 2008
221
15
London
I need a system that can solve one freakin huge problem that photoshop has created for me. Saving is a single-task, single-thread operation. Unfortunately, throwing RAM, cores, or clock speed at the problem does not fix it.

I've got a 1680x card hooked up to 7 x 1TB RE3's, primarily for PS and LR ... I've just timed saving a 1.9GB multi-layer file in PS and it took 2 min 37 sec. The interesting thing is that, according to the disk activity in iStat, the disk is not actually being written to for the vast majority of the time that the process is going on, just a spike of activity here and there (at ~360MB/s). If this is accurate, it implies that there is a ceiling that you will hit regardless of how fast your machine / drives are (mine is an MP 3,1, octo 2.8, 16GB RAM).

I was going down a similar route to you, and my eternal thanks to nanofrog for all his amazing help :D:cool: He is a complete star! :D

I went with a RAID 6 array on an Areca 1680x, an SSD for the OS in the optical bay, and 4x 1.5TB drives internally for backup. I had understood that if you were to lose one of the drives in a JBOD the rest would go, not realising that you could get the data back via a recovery utility. Anyhow, the internals are there backing up on a schedule via ChronoSync, and the OS backup is via SuperDuper. Because I can lose 2 drives on the RAID 6 and keep working, the backup schedule doesn't need to be aggressive (i.e. hourly).

I can't find the link, but I think it was Jeff Schewe who wrote in a forum that PS will not write to a file and write to scratch at the same time, so you're good with the two on the same drive.

If it isn't possible to save an 8GB file in under 10 mins (or thereabouts; I don't know if this is true or not, though from my timings above I'd say there's a good chance), perhaps you should go at it from the angle of making your system as stable as possible ... perhaps a Mac mini to play music / surf etc. while your main MP is on a user account with just the OS and Photoshop, hooked up to a RAID 5 or 6 array?

Cheers
 

Ryan P

macrumors 6502
Aug 6, 2010
362
235
Did you look at your CPU usage during the save of this 1.9 GB file? I was surprised to find how much quicker my saves went when upgrading my processor last time. It's still the biggest bottleneck for me in Photoshop. I often put off saving because of it, and then have a crash and lose work.
 

ScratchyMoose

macrumors regular
Jan 13, 2008
221
15
London
Did you look at your CPU usage during the save of this 1.9 GB file?
After reading your question - yes!

It's pegged at about 106-110%, whereas a surface blur filter on the same file will take it up towards 800%, so I guess that's not the bottleneck?
(using PS CS5 BTW)
 

Ryan P

macrumors 6502
Aug 6, 2010
362
235
After reading your question - yes!

It's pegged at about 106-110%, whereas a surface blur filter on the same file will take it up towards 800%, so I guess that's not the bottleneck?
(using PS CS5 BTW)

My thinking is it is CPU bottlenecked but Adobe hasn't bothered to multithread it.

I'm still waiting on my Mac Pro. My current system is an MBP with the i7 2.66, which has a turbo of 3.33, and which replaced my Core 2 Duo 2.4. The i7 sped things up roughly 3-fold for saving files.
 

Honumaui

macrumors 6502a
Apr 18, 2008
769
54
I can't find the link, but I think it was Jeff Schewe who wrote in a forum that PS will not write to a file and write to scratch at the same time, so you're good with the two on the same drive.

If it isn't possible to save an 8GB file in under 10 mins (or thereabouts; I don't know if this is true or not, though from my timings above I'd say there's a good chance), perhaps you should go at it from the angle of making your system as stable as possible ... perhaps a Mac mini to play music / surf etc. while your main MP is on a user account with just the OS and Photoshop, hooked up to a RAID 5 or 6 array?

Cheers

Curious, where is your scratch pointed to? :)

On the Schewe thing, are you sure he did not mean they can't write at the same time, so they have to go back and forth? Meaning don't put a file and scratch on the same disks if you can avoid it, or it will get fragged and become slow! I am willing to bet on this over the other :)
I am never the smartest man at the table :) but I usually am pretty good about only commenting when I know :) and will admit when I am wrong :)

Remember, PS hits the scratch a lot and keeps it optimized as you are working, so a dedicated scratch is always, always best!
 

ScratchyMoose

macrumors regular
Jan 13, 2008
221
15
London
Curious, where is your scratch pointed to? :)
The top slice partition of my RAID 6 array.

On the Schewe thing, are you sure he did not mean they can't write at the same time, so they have to go back and forth? Meaning don't put a file and scratch on the same disks if you can avoid it, or it will get fragged and become slow! I am willing to bet on this over the other :)
I am never the smartest man at the table :) but I usually am pretty good about only commenting when I know :) and will admit when I am wrong :)

Remember, PS hits the scratch a lot and keeps it optimized as you are working, so a dedicated scratch is always, always best!
Hi, I've finally found the link that I'd archived, but I'm getting a 404 when clicking on it; I guess that Adobe have changed their forum addresses since August 2007.

Anyhow, I'm sure it's as I remember (that PS will not write a file to disc at the same time as accessing the scratch). Reasons:
The OP's problem is that he can't do anything else while PS is saving his file. Looking at iStat while my large file is saving, there is virtually no 'read' going on (a couple of bytes at the end). The time I gave above was for a newly opened file (i.e. it has no history states). At 400MB/s write speed, the 1.9GB file should only take around 5 seconds to actually write (but not generate).

All of this seems to me to point to PS not accessing scratch while writing the file ... I could see the argument that it will do this as it is "generating a full resolution composite" or during other parts of the process, but it seems to me that the actual file (I was saving) is largely written to disc in a matter of seconds towards the end of the process.
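Working through that comparison (a Python sketch using the figures in this post; the 400MB/s array write speed is the rough number assumed above):

[code]
# Raw write time for the file vs. the observed Photoshop save time.

file_gb = 1.9
array_write_mb_s = 400.0        # rough sequential write of the RAID 6 array (assumed)
observed_save_s = 2 * 60 + 37   # 2 min 37 sec measured in Photoshop

raw_write_s = file_gb * 1024 / array_write_mb_s
print(f"raw write time:   {raw_write_s:.1f} s")                               # ~4.9 s
print(f"rest of the save: {observed_save_s - raw_write_s:.0f} s elsewhere")   # most of the 157 s
[/code]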

Cheers
 

Honumaui

macrumors 6502a
Apr 18, 2008
769
54
I hear ya on the writing at the last moment and using the scratch; in theory it might work, might be the way it is. Don't know, so won't say, 'cause that would be wrong :)

So it might be true, not doubting you :)

In reality, in the past I have found it quite different. I used to make a living doing architectural interior work, so HUGE files,
and in testing, writing to scratch on the same disk as the files turned against me with longer times!

This I do know, and it's what I am going off of, along with my own times for similar file sizes.




For grins, what happens when you point your scratch toward an unused disk? Like a BU or something?


Also, the 1.9, is that on disk? Meaning you hit Get Info or see it in Finder as 1.9?


I have an image that is set to 300 dpi for print, 36 inches wide, about 8 layers.
It's 1.95 GB on disk. I do not check the maximize option on close, since I am the only one opening these :)

Once you hit the button it takes 2-3 seconds for the saving dialog to pop up, but the timing shows 16.4 seconds (16.5 saved again as a new file with a new name, etc., and 16.6 a third time) for me to save that file to my RAID.
Areca 1222x with battery module.
Both our RAID cards are IOP 348, basically the same setup.
I have slower HDs than you, but you have one less HDD; not sure how that balances out, but very close systems.
I have a 3,1 with 14 gigs; my OS is SL booted in 64-bit, using CS5 running in 64-bit.
My scratch is pointed to the SSD; that might go back to a RAID 0 short-stroked setup, still testing.

But if I point my scratch to my RAID the times go up to about 18-19 seconds, not a huge difference.

Even a 5.5 GB image on disk is less than 2 minutes for me.
This of course is a PSB, not a PSD :)
 