Partitioning RAID0 set for PS CS4, need input.

Discussion in 'Mac Pro' started by Loa, May 18, 2009.

  1. Loa macrumors 65816

    Joined:
    May 5, 2003
    Location:
    Québec
    #1
    Hello,

    My new Mac Pro should arrive by the end of the week, and I'm trying to figure out how to properly partition my RAID0 set (consisting of three 500GB drives).

    Only "intensive" app = Photoshop.

    Here's my plan:

    1st partition = scratch disk: 96GB (per diglloyd's recommendation).
    2nd partition = PS files: 400GB (RAW images and PS files themselves)
    3rd partition = the rest: speed-independent, non-PS-related files.

    Is this a good set-up?

    Thanks.
     
  2. nanofrog macrumors G4

    Joined:
    May 6, 2008
    #2
    I don't think you need to partition it, unless you just want to. :)
     
  3. Loa thread starter macrumors 65816

    Joined:
    May 5, 2003
    Location:
    Québec
    #3
    Hello,

    But wouldn't partitioning for scratch be a good thing? First, I'm pretty sure the scratch should be on a partition of its own; second, a dedicated partition would ensure it sits on the fast part of the disks...
     
  4. nanofrog macrumors G4

    Joined:
    May 6, 2008
    #4
    No, not really, given the exchanges I've had with another member (Tesselator), who has done experiments in this regard. His conclusion was that the information is based on older technology: memory was more expensive than it is today, HDDs were slower, and RAID wasn't that viable either, again due to cost. It makes a lot of sense to me, and seems quite reasonable IMO.

    A 3 drive stripe will give you a nice performance boost, and can be improved by adding a 4th drive. You should be able to run the entire system off of it quite nicely, and leave it as a single logical drive (no partitions).

    Just make sure you have a backup system of some sort in place, as if something does go wrong, all data is lost. The backup will allow you to retain your data, and make recovery much easier (particularly a clone). ;)
     
  5. Loa thread starter macrumors 65816

    Joined:
    May 5, 2003
    Location:
    Québec
    #5
    Diglloyd ran some tests using very recent drives, and his results confirm the idea that drives have faster and slower areas.

    http://macperformanceguide.com/Storage-WhyYouNeedMoreThanYouNeed.html

    Also, it's a simple physics problem: given a constant rotational speed ("if" it is constant), the linear velocity under the head will be greater on the outer part of the disk. And that means that data can be read/written at a faster rate ("if" the head can read/write that fast).
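    To put rough numbers on that (a simplified sketch in Python; the rpm and radii are assumptions for illustration, not specs of any real drive):

    ```python
    # At constant rotational speed, the linear velocity under the head is
    # v = omega * r, so outer tracks pass data under the head faster.
    import math

    rpm = 7200                          # assumed rotational speed
    omega = rpm / 60 * 2 * math.pi      # angular velocity, rad/s

    r_inner = 0.020                     # assumed innermost track radius (m)
    r_outer = 0.045                     # assumed outermost track radius (m)

    v_inner = omega * r_inner
    v_outer = omega * r_outer

    print(f"inner track: {v_inner:.1f} m/s, outer: {v_outer:.1f} m/s")
    print(f"outer/inner ratio: {v_outer / v_inner:.2f}")   # ~2.25x here
    ```

    With roughly constant bit density along the tracks, sustained transfer rate scales with that linear velocity, which is where the roughly 2:1 spread comes from.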

    Can you or Tesselator give me different results?

    Loa
     
  6. nanofrog macrumors G4

    Joined:
    May 6, 2008
    #6
    Here's a simple way to look at it. Any mechanical drive has a variance in throughput across different sections of a platter. That is, the outermost tracks are faster than the innermost tracks.

    What I was getting at for throughput was an average that takes both extremes into account. Sometimes you will get faster, sometimes slower. There are multiple dependencies, which can include location on the platter, the data size, as well as the stripe size. Platter location alone doesn't tell the whole story.
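    As a toy model of that averaging (the numbers are made up for illustration; real drives use zoned recording, so the profile isn't perfectly linear):

    ```python
    # Toy model: sequential throughput falls roughly linearly from the
    # outermost to the innermost tracks as the drive fills up.
    outer_mb_s = 120.0    # assumed outer-track throughput, MB/s
    inner_mb_s = 60.0     # assumed inner-track throughput, MB/s

    def throughput_at(fill):
        """Throughput with the heads at a given fraction of capacity
        (0.0 = outermost track, 1.0 = innermost track)."""
        return outer_mb_s + (inner_mb_s - outer_mb_s) * fill

    points = [i / 100 for i in range(101)]
    avg_full  = sum(throughput_at(p) for p in points) / len(points)
    avg_outer = sum(throughput_at(p / 2) for p in points) / len(points)

    print(f"average over the full platter: {avg_full:.0f} MB/s")   # ~90
    print(f"average over the outer half:   {avg_outer:.0f} MB/s")  # ~105
    ```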

    You can partition either a single drive or an array, using them for a performance gain. But what Tesselator and I discussed was that the idea of partitioning drives for the software you're running is based on previous technological capabilities. Older drives were slower, and RAM was too expensive to be able to load everything into memory.

    It's different these days, so that advice is not really valid now. Memory is cheaper, and drives are faster, as are the other parts of a computer system.

    There are situations where using partitions can actually help. Using separate drives is a further improvement. RAID is an even better way to go. Now we've got SSDs in the mix as well. But as you improve the overall throughput, the cost of the solution rises too.

    Separate partitions tend to make things more complicated as well. Particularly for simultaneous access. This is where separate drives come in handy. Any single disk, or RAID array, has a limited throughput. It would be divided amongst the requested files during simultaneous access, not multiplied. That bandwidth is fixed. So splitting up a RAID 0 won't speed up simultaneous access, which would definitely occur (scratch + PS primarily). If you had a separate array for each, that would be another story. ;) But then again, you can add those same drives to make one array, and get an additional improvement. The average throughput would definitely be increased, as would both the minimum and maximum.
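    Here's a crude sketch of that division (all the numbers, including the seek penalty, are assumptions just to show the shape of the problem):

    ```python
    # One array has a fixed sequential bandwidth. Two partitions on the
    # same array share it AND pay long seeks whenever access alternates
    # between them; two separate arrays each keep their full bandwidth.
    array_mb_s = 250.0     # assumed 3-drive RAID 0 sequential throughput
    seek_penalty = 0.30    # assumed fraction of time lost to full seeks
                           # when heads bounce between distant partitions
    streams = 2            # e.g. scratch + PS files accessed at once

    same_array_per_stream = array_mb_s * (1 - seek_penalty) / streams
    separate_array_per_stream = array_mb_s   # no sharing, no bouncing

    print(f"same array, each stream:      ~{same_array_per_stream:.0f} MB/s")
    print(f"separate arrays, each stream: ~{separate_array_per_stream:.0f} MB/s")
    ```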

    Creating an array and basing performance on the average isn't perfect, but it's the easiest thing to do. As it happens, the throughput it can generate would be more than sufficient for your needs, from the information given. Designing for the worst case is better, but requires additional drives and is more expensive. Since you're limited internally for drives, this may not be the best way to go, depending on specifics. (Say the worst case, or minimum needed throughput, is 1GB/s. You can't fit that many drives internally: 10 drives or more. The best chance would be 4 or 5 SSDs, and that's expensive, limited in capacity, and not the best drive technology for a high-write environment.)
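    For the sizing arithmetic in that worst-case example (the per-drive figures are assumptions typical of drives of this era, not measurements):

    ```python
    import math

    target_mb_s = 1000.0      # hypothetical worst-case requirement (1 GB/s)
    drive_inner_mb_s = 60.0   # assumed per-drive inner-track throughput
    drive_outer_mb_s = 120.0  # assumed per-drive outer-track throughput

    # RAID 0 throughput scales roughly linearly with member count, so
    # guaranteeing the target even on the slowest tracks takes:
    print(math.ceil(target_mb_s / drive_inner_mb_s))  # 17 drives, worst case
    print(math.ceil(target_mb_s / drive_outer_mb_s))  # 9 drives, best case
    ```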
     
  7. Loa thread starter macrumors 65816

    Joined:
    May 5, 2003
    Location:
    Québec
    #7
    By that very simple reality, you should still put your most speed-dependent data on the fastest tracks.

    I don't really care about the average. Diglloyd's tests show a 2 to 1 speed difference between the extremes on the disks. Give me one good reason why I should "not" put the PS scratch on the fastest part.

    As long as we're talking about disks (circular, spinning data storage), you will get significant read/write speed differences between the inner and outer tracks. It's physics.

    Of course SSDs are different. Of course a RAID0 of 5 SSDs would be faster. But it's way out of my budget and needs.

    I'm going to get a 3-disk RAID0 set for PS scratch and the rest of my files. Right now, partitioning for scratch seems the most logical thing to do, to get close to a 2:1 speed increase.

    I wouldn't mind being proven wrong about this, but when experiments and physics point to the same thing, it's hard to ignore it...

    Loa
     
  8. nanofrog macrumors G4

    Joined:
    May 6, 2008
    #8
    This makes sense sometimes. It has to do with SIMULTANEOUS ACCESS. It's also influenced by the size of the partitions and the data they hold. This seems to be what you're not understanding.

    Think of it this way. You partition the drive: outer tracks set for scratch, inner for PS. Great. But the PS and scratch partitions would need to be accessed at the same time. If you can keep both on the first 50% of the platters, you'd get the fastest transfer rates from each. It can be done, but may require some trial and error to find the best partition sizes that still both remain in that outer half.

    But by simply using the drives as a single set, and making sure the used capacity never exceeds 50%, you still get the fastest throughput, with a lot less hassle.
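    A quick way to sanity-check that against the sizes from post #1 (a sketch; the 50% cutoff is just the rule of thumb above):

    ```python
    # Does the planned usage stay within the fast outer half of the array?
    array_capacity_gb = 3 * 500            # three 500GB drives in RAID 0
    fast_zone_gb = array_capacity_gb / 2   # rule of thumb: outer ~50%

    planned_gb = {"scratch": 96, "PS files": 400}
    used_gb = sum(planned_gb.values())     # 496 GB

    if used_gb <= fast_zone_gb:
        print(f"{used_gb} GB fits in the {fast_zone_gb:.0f} GB fast zone "
              "- no partitions needed for speed")
    else:
        print("usage spills onto the slower inner tracks")
    ```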

    It's when the balance isn't right with partitions that it becomes a mess, and can actually slow you down. You could end up with really fast scratch, but a PS partition running slower than it should (spilling past the outer 50% of the platter), so it's running on the slower tracks to begin with. Now figure in the fact that BOTH partitions will be accessed at the same time (the heads can't be in two places at once), and you have to deal with full-stroke seeks far more often. Simply put, the performance will drop, and may be too slow for your needs.

    That's why planning on the average, or worst case if possible (better), is a good idea. It forces you to either add drives, increase the capacity of each (to remain within that 50% "sweet spot"), or find faster drives to make sure the array performs as needed.

    Testing is the best way to figure this out. It takes time, but you'll have your own experience to draw from, and would discover your true needs for throughput. Capacity as well, though it could come at a later time (presuming you don't fill the array up to 50% before you're done). ;)
     
  9. Loa thread starter macrumors 65816

    Joined:
    May 5, 2003
    Location:
    Québec
    #9
    Unless I'm misunderstanding what happens, I don't think scratch and PS files will be accessed at the same time. PS files will only be accessed when opening and saving. And between those moments, the scratch will be used.

    I'm not planning, for example, to open many PS files while I do a major filter on an already opened file.

    Loa
     
  10. Tesselator macrumors 601

    Joined:
    Jan 9, 2008
    Location:
    Japan
    #10
    Depending on how much RAM you have, PS scratch may NEVER get accessed. I have 12 GB RAM, am constantly editing multiple 12 and 14 megapixel 16-bit images, and it never ever uses the disk cache. So why partition for something that never happens? <shrug> :confused:

    12 or 16 GB RAM helps almost everything run faster and smoother too - and right now 2GB DIMMs are dirt-cheap. About $40 each. I got 4 just a month ago or so for like, $84 USD.

    This is part of what NanoFroggy is trying to tell you. The partition recommendations come from a time when only a very few, very rich people had 12 or 16 gigs of RAM - back when most systems topped out at 512 MB or less. It might even have held true up through 3 or 4 GB of RAM, but I don't think it's the case any longer with 12 or 16 gigs.


    EDIT: But it's OK, you can partition it if you like. It won't hurt anything. :) Though I think you would be MUCH better off making partitions #2 and #3 folders instead of partitions. The only noticeable speed differences are in the first 5% and the last 15% of your platter area. The rest is within the average.



     
  11. Loa thread starter macrumors 65816

    Joined:
    May 5, 2003
    Location:
    Québec
    #11
    Ah! That's a lot more interesting.

    My *quad* Nehalem is on its way, and I'll upgrade it with 8GB as soon as I can. On the other hand, I won't be able to justify (to myself) paying for 4GB chips until the prices come WAY down.

    Keeping an eye on the info panel in PS, the highest scratch size I've managed to get (and I was pushing it) was 15GB. That was the highest I think I'll ever go.

    Now, with 8GB, suppose the OS takes 1 and PS takes 3. That leaves only 4GB for the scratch. Am I understanding this correctly?
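    To spell out what I mean (with those assumed numbers; I don't actually know how much the OS and PS will really grab):

    ```python
    # Back-of-the-envelope: how much of a peak scratch file could the
    # leftover RAM absorb (as file cache), and how much spills to disk?
    total_ram_gb = 8
    os_gb = 1                # assumed OS footprint
    ps_gb = 3                # assumed PS allocation (32-bit ceiling ~3 GB)

    leftover_gb = total_ram_gb - os_gb - ps_gb   # 4 GB
    peak_scratch_gb = 15                         # highest I've seen in PS

    spill_gb = max(0, peak_scratch_gb - leftover_gb)
    print(f"leftover RAM: {leftover_gb} GB")
    print(f"worst-case spill to the disks: ~{spill_gb} GB")   # ~11 GB
    ```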

    Loa
     
  12. Tesselator macrumors 601

    Joined:
    Jan 9, 2008
    Location:
    Japan
    #12
    I really dunno. :D We need a PS expert here. All I know is that I never notice PS using scratch any more. Either my unpartitioned RAID 0 is so fast, or the RAM I have negates most of its use. I know a few things, though:

    1. I know that the PS application on Mac is still 32-bit and thus has a very limited address space: 3072MB (~3GB).

    2. I know that if I define a massively huge file (15,000 x 15,000 x 32 bits, or so), then operations do hit the drive cache and things go real slow (rough numbers in the sketch after this list). A faster portion of the partition ain't gonna help much at that point - it would be the difference between 20 sec. per operation and 30 sec. per operation, MAYBE. At that point who cares? Slow is slow!

    3. I know that even a 28 megapixel image is only about 6,000 x 4,500 x 16-bit, and that I can enlarge 6 or 7 of those by 200%, do hundreds of operations on any of them, and never see the slowness I see in #2 (above) - ever.

    4. I know that at the same time I can see the "Scratch:" read-out at the bottom of the image say "Scratch 20.xG/2.3G" with those 6 or 7 multi-layer images at 200% of 28 megapixels, yet perform 100s of edits to any of them while monitoring my HDD I/O and never see the HDD be accessed for more than a few milliseconds.

    5. I know that it's not useful to have huge frame sizes for almost any reason at all. Huge prints do NOT need huge pixel counts, generally speaking. A 10-foot by 8-foot image needs not much more than a 10-inch by 8-inch image. The reason for this is viewing distance. One typically holds a 4x5" print at 10 to 15 inches away, views an 8x10" at 20 inches to 2 meters, and an 8x10' image is normally viewed 6 to 8 meters away, or from a car 20 to 50 meters away. :) With calculations of what the human eye can detect at these various distances, it turns out that enlarging an image beyond what is needed for an 8x10" at 150~200 PPI is almost never useful or advantageous.
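    Putting rough numbers on items 2 and 3 (base layer only, and reading "32 bits" as PS's 32-bit-per-channel mode, which is my assumption; layers, history, and composites multiply these figures):

    ```python
    # Uncompressed in-memory size of one image layer:
    # width x height x channels x bytes per channel.
    def layer_mb(width, height, channels=3, bits_per_channel=16):
        return width * height * channels * (bits_per_channel // 8) / 2**20

    huge = layer_mb(15_000, 15_000, bits_per_channel=32)   # item 2
    typical = layer_mb(6_000, 4_500, bits_per_channel=16)  # item 3

    print(f"15k x 15k @ 32-bit: ~{huge:.0f} MB per layer")     # ~2575 MB
    print(f"6k x 4.5k @ 16-bit: ~{typical:.0f} MB per layer")  # ~154 MB
    # The first nearly fills a 32-bit app's ~3 GB address space by itself,
    # which is why it hits scratch; the second leaves plenty of headroom.
    ```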

    But that's all I really know. Exactly what PS does, what the limits are, and where the boundaries are, I dunno. I just know that with the max number of images it allows open at once with my 7300GT GPU (which is 7), at 12 or 14 megapixels in 16 bits with 4 to 8 layers each, it never seems to use the drive cache. <shrug> Oh wait, there's one more thing I know. I know that on a PC with 512 MB of RAM or less, PS 4~8 hits the drive cache all the time for images over about 2K pixels - especially with multiple layers.


     
