RAID two WD Blacks - How much of a speed boost?

Discussion in 'Mac Pro' started by AppleWorking, May 19, 2009.

  1. AppleWorking macrumors regular

    Joined:
    Jan 20, 2009
    #1
    How much of a speed boost would I get if I set up a Leopard software RAID with two 640 GB WD Black drives instead of using just one? I have two of them and was curious about this.

    I was thinking that I might put an SSD in the second optical bay for OS X and apps, and use the two WD Blacks that I have for writes in a RAID. Does setting up a RAID for the writes sound worth it? I only have two spare drives for a RAID, so I don't know if it'll make much of a difference. :confused:

    Anyway, I'm really new to RAID setups, so I have no clue how to set one up or what the optimal settings would be... If anyone wants to walk me through it, I'd appreciate it. :)
     
  2. Tesselator macrumors 601

    Tesselator

    Joined:
    Jan 9, 2008
    Location:
    Japan
    #2
    A 2-drive RAID 0 is about 1.4 to 2.3 times faster than a single drive of the same type, depending on the kind of operation, though for most operations the average is very close to exactly 2x - and about 3x for a 3-drive RAID 0. Look at the results for "RAID 1 (2 Drives)" to see the speed of these particular drives run as single drives - a RAID 1 runs at the same speed as a single drive.

    [benchmark charts]

    RAID 0 profiles a little differently than single drives - seek times are actually a bit slower, even.

    [benchmark chart]


    Also, the drive cache is effectively doubled, so if each drive has 32 MB you'll now have 64 MB of roughly 2.5 GB/s cache memory. :)


    SSD is good for small files - like, MANY small files. An SSD's throughput is about the same as a 3-drive RAID 0, and at some things a 3-drive RAID 0 will even be faster. :) SSD is awesome for booting, thumbnail loading where on-the-fly generation isn't needed, spreadsheet and DB work where many smaller files are accessed, and stuff like that - I imagine your web browser cache would be lots faster too. A 3-drive RAID 0 is better for image editing, large-file loading and saving, video editing, and things like that.


    The procedure is on Apple's support site:

    1. Open Disk Utility (/Applications/Utilities).
    2. When the disks appear in the pane on the left, select the disks you wish to be in the array and drag them to the disk panel.
    3. Choose Stripe or Mirror from the RAID Scheme pop-up menu.
      You want stripe!
    4. Name the RAID set.
    5. Choose a volume format. The size of the array will be automatically determined based on what you selected.
    6. Click Create.

    http://support.apple.com/kb/HT2559
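
    If you're comfortable with Terminal, the same stripe set can also be created with the diskutil command-line tool. A minimal sketch (the set name, format, and disk identifiers below are just placeholders - check yours with "diskutil list" first, and note that creating the set erases the member disks):

        diskutil list
        diskutil appleRAID create stripe FastStripe JHFS+ disk1 disk2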
     
  3. kellen macrumors 68020

    kellen

    Joined:
    Aug 11, 2006
    Location:
    Seattle, WA
    #3
    How about for day-to-day tasks? I'm thinking of putting another 640 GB WD Black in and doing a RAID 0 setup.

    Overkill or not? I have the space for it; it's just a question of whether I'd be wasting money.
     
  4. Tesselator macrumors 601

    Tesselator

    Joined:
    Jan 9, 2008
    Location:
    Japan
    #4
    Define "day to day tasks".
     
  5. wheezy macrumors 65816

    wheezy

    Joined:
    Apr 7, 2005
    Location:
    Alpine, UT
    #5
    I set up two Samsung F1 drives in a software RAID 0 and it's MUCH faster. I'm loving the extra speed on everything. I can't understand why some people say that setting up a RAID 0 for your boot drive, general app drive, etc. is just a waste, and that RAID 0 is only for fast, data-intensive transfers, blah blah blah.

    Why would you limit your computer to running faster only during certain tasks (scratch disk, video data disk, etc.)? Why not have it run twice as fast all the time? I do. I love it. Do it, you won't regret it. You'll love it.

    My Mac Pro is now actually faster at everything compared to most other computers, not just faster at thinking. Faster at booting. Faster at opening apps. Faster at moving files. My two Samsungs average about 105 MB/s each, and my XBench scores are usually around 270 MB/s for certain reads/writes. I most certainly enjoy having that speed all day long.
     
  6. NoNameBrand macrumors 6502

    Joined:
    Nov 17, 2005
    Location:
    Halifax, Canada
    #6
    Because a RAID 0 setup on N disks is N times as likely to fail as a single disk*. I like using a 4-way RAID 0 for scratch, but single, unstriped volumes are fast enough for my apps and data (110 MB/s on the WD Blacks). The computer already boots in under 30 seconds, not that it's ever off.

    *Ok, it's not, but it's close.
     
  7. Mac Husky macrumors regular

    Joined:
    Mar 28, 2009
    Location:
    Bavaria, Germany
    #7
    No, it isn't. Calculating probability isn't that simple :D
     
  8. kellen macrumors 68020

    kellen

    Joined:
    Aug 11, 2006
    Location:
    Seattle, WA
    #8
    Aperture, some games, ripping DVDs, transferring files around. Going to pick up Photoshop in the next month or two; just waiting to see my education discount.

    Not worried about the risk of losing a drive; I've been really good about my backups since a failure a year ago. I have a Time Machine drive, plus two copies of the data, and an MBP I try to keep with the same info.
     
  9. NoNameBrand macrumors 6502

    Joined:
    Nov 17, 2005
    Location:
    Halifax, Canada
    #9
    Indeed, what I posted wasn't accurate, but it's close enough for small numbers of disks and low failure rates.

    If each disk has an identical failure rate of F, and you have N disks, then:

    Probability of total striped array failure = 1 - (1-F)^N.

    My glib P = N × F is close to this with a few disks and real-world failure rates (F ≲ 0.008, from Wiki):

    F = 0.008; N = 4
    N × F = 0.032
    1 − (1 − F)^N = 0.0316...
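
    If anyone wants to check both expressions themselves, here's a throwaway Python snippet (F and N are just the illustrative values above, not measured rates):

        F = 0.008   # assumed annual failure probability of one drive
        N = 4       # number of drives in the stripe set

        exact = 1 - (1 - F) ** N    # P(at least one member drive fails)
        approx = N * F              # the glib first-order approximation

        print(exact)    # ~0.0316
        print(approx)   # 0.032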
     
  10. Tesselator macrumors 601

    Tesselator

    Joined:
    Jan 9, 2008
    Location:
    Japan
    #10
    OK, thanks. Yes, all of those things will be sped up - ripping by a lot! :)

    Yeah, you don't need to worry about that at all. The cockamamie math being interjected into this discussion does NOT apply to disk failure at all! AT ALL! It applies to the one-in-a-trillion chance that a file I/O operation will fail. MTBF/MTBR remains exactly the same.

    Adding more drives to your system, either independently or in a RAID, does not increase or decrease the potential for drive or disk failure at all! Period.

    More of this completely wrong thinking? Why? Why spread false information? This is 100% totally untrue.
     
  11. NoNameBrand macrumors 6502

    Joined:
    Nov 17, 2005
    Location:
    Halifax, Canada
    #11
    Of course running a RAID0 doesn't increase the odds of a single drive failing! (which I never said). It increases the odds of total data loss.

    If one of the component drives in a RAID0 array fails, the data on the whole array is lost. I guess you could probably recover the portion of your data's blocks on the working drive(s). That helps. :rolleyes:

    Which part? The math? The math is right. It's the same math as rolling a couple of dice and asking what the likelihood is of getting at least one six. In this case 'rolling a six' means one of your drives goes tits up (the odds are different, of course).

    Or did you mean the 0.008 (0.8%) failure number? That came from Wiki. No idea if it's accurate - but one ought to be able to derive from the MTBF number the probability that a given drive will die in a given year.
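
    (My own back-of-envelope conversion, assuming a constant failure rate: annual failure probability ≈ 1 − e^(−8760/MTBF), with MTBF in hours. A 1,000,000-hour MTBF works out to roughly 0.87% per year, which is at least in the same ballpark as the wiki figure.)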

    Anyway, my point (again) is that if you have your data on a RAID0 array, you're more likely to lose that data than if it's on a single disk!

    Here's a wiki article about it. It would be nice if that section had sources...
     
  12. nanofrog macrumors G4

    Joined:
    May 6, 2008
    #12
    I recall spotting 0.34% listed as the Annualized Failure Rate on Seagate's 7200.12 series data sheet.
     
  13. Tesselator macrumors 601

    Tesselator

    Joined:
    Jan 9, 2008
    Location:
    Japan
    #13
    Well, then it's semantics and you're leading people to believe falsehoods (and that is what you said, and that IS what Wiki is saying too). It does not increase the odds of total data loss. It's bull-crap based on math. I can do the same thing with anything. The likelihood that the sky is falling for one square foot of sky is X, so for a trillion square feet it's X•T. Bullcrap! The sky is NOT falling! The likelihood of the walls failing and the house collapsing in a 1-room house is X, therefore a house with 100 rooms is 100 times more likely to collapse. Bullcrap! A properly built and cared-for house doesn't collapse.

    That's why there are no sources for that article - it's bullcrap! Even the wording is telling you that it's bullcrap. "It's assumed", in an "ideal situation", "if", "if", "if"... Nope, sorry, it's bullcrap. :) (I love that word in this context! :D )

    A well-kept and properly backed-up RAID 0 doesn't fail. Total data loss? <scoff> The chances of that happening in the first place are crazy-rare. And crazy-rare^X = crazy-rare.


    All of it. It's all wrong! It's like calculating the chances of your bellybutton falling off. In the first place, it's so rare that it's semi-ridiculous to even postulate in a small home system. In the second place, it's very likely that anyone setting up a RAID is going to know enough to make provisions for a backup and maintain it, thereby eliminating all need for such ludicrous consideration. In the third place, all you're doing is scaring people unnecessarily - sure, the possibilities of our bellybuttons falling off are mathematically very real, and we can either spend our lives worrying about it and lying immobile on our backs, or we can just enjoy ourselves.

    The math itself in that Wiki article is wrong because it does not consider a huge multitude of social, environmental, and physical factors, and it's based entirely on false constants and postulates.

    Mean Time Between suggested Replacement (MTBR) for a single drive is roughly 3 to 5 years. Guess what: MTBR for a 2- to 8-drive RAID (any level, including 0) is also 3 to 5 years.


     
  14. NoNameBrand macrumors 6502

    Joined:
    Nov 17, 2005
    Location:
    Halifax, Canada
    #14
    The manufacturers don't consider a "huge multitude of social, environmental, and physical factors" when producing the MTBF and MTBR numbers, either. Here's what the formula says: if your system (whatever that is) will fail when any single constituent part fails, then the odds that it will break are one minus the odds that everything works. This formula is for the specific case where every component has the same failure rate.

    Lots of people never see a hard drive failure, but the odds of it are definitely higher than the odds of your belly button falling off or the sky falling in (nice hyperbole, though!). Computer forums across the internet have posts from people with failing drives - I'm sure their frustration is assuaged by knowing it was unlikely to happen. If those drives were part of a RAID 0 set, that RAID 0 is toast. If you have some other RAID level, then you merely replace the disk and rebuild.


    This forum is full of people who don't back up their data, especially if they're on some sort of RAID - I know you've seen posts by Nanofrog and Ryder (??) on this subject, as we've both posted in the same threads. I happened to read this thread because I have WD Black disks in a four-disk RAID 0. No one had yet posted the seemingly obligatory warning about the increased risk of data loss, so I did.

    I'm not spreading false information on RAID 0 failure rates. They really are increased over volumes made up of a single drive. Google 'RAID0 failure' for lots of articles.
     
  15. nanofrog macrumors G4

    Joined:
    May 6, 2008
    #15
    They can't consider those other factors, however real they may be. It's just too much/too hard to reproduce in a lab (every hardware configuration, every possible usage pattern, quality of power,...). So they stick to what they can control (run it within power and environmental specs), and leave the rest to ideal conditions, or out of the analysis altogether. To me, that's necessary in order to obtain an empirical result.

    To me, the confusion is between an actual drive failure (controller board, servo, spindle motor,...) and data failure (bad sectors). Either can toast data, and either is guaranteed to do so in the case of a stripe set.

    I have given warnings about it. Perhaps a little low-key, as I've not really figured out a wonderful, easy way to explain the differences, but it's there. Now whether or not they listen, I've no idea. But they'll figure it out the hard way if they don't. ;)

    I wouldn't say it's false, but it's not entirely accurate either. :eek: It covers the actual drive failure rate of a set, but not data issues, such as bad sectors. (Hence the confusion, IMO.) :D
     
  16. Tesselator macrumors 601

    Tesselator

    Joined:
    Jan 9, 2008
    Location:
    Japan
    #16
    Yes, I know! It's actually based on a theory that was devised to calculate injury occurrence and insurance rates for merchant mariners in the early 1900s. But that just proves my point.


    That's an incorrect over-simplification. :)

    Depends on the RAID level and the number of drives that fried.


    Some people may try to use it underwater too, which would be just as stupid as not backing up a RAID 0, but yeah, then just say: "make sure to have a backup".


    Well, I don't think you're doing so with intent of malice but that is the result. The information IS wrong.


    Yeah, you can google alien abductions too. That doesn't mean it's true. :D

    They don't factor in any of the important details at all. For just one example, wear and tear is actually reduced per drive in a multi-drive striped RAID - the more drives, the less wear and tear each drive will see. Not factored in. A person or company running a RAID set is an order of magnitude more likely to be aware of the effects that temperature has on drives and will more than likely, in turn, take precautions against over- or under-heating. This alone blows their statistics out of the water and into outer space. And there's more too, as the statistical data used for MTBF does NOT consider WHY the drive failed - just that it did. MTBF is a very VERY generalized rating, and applying any other stipulations to it (like factoring in that the drive will be part of a RAID set) nullifies it completely.


     
  17. nanofrog macrumors G4

    Joined:
    May 6, 2008
    #17
    Definitely true, as a stripe set = 0 redundancy, while redundancy is available in other levels, and to different degrees. RAID 5 is less secure than RAID 6, for example, and it does make a difference. ;)

    LOL! :D

    The "Just make sure to have a backup" is simple, and accurate. Despite the reason/type of failure in a stripe set, or any other RAID level.

    For me, the failure rate of a single disk isn't necessarily the same as that of the same disk in a set. Then take into consideration the level used, that the failure rate of each disk may not be the same (QC issues),... it isn't completely accurate. So the only thing I can think of is to experiment for empirical data over the entire lifespan of the set (first failure in a stripe, for example). Then you'd have accurate data. Unfortunately, it wouldn't be exactly applicable to other configurations, as there's a likelihood they aren't identical (different drives, controller, motherboard,...). That makes it hard to accurately predict the effective lifespan. But the average of 3-5 years does seem to hold, thanks to a nice fat 2-year variance. :p

    How do you come to this conclusion?:confused:

    I'm used to thinking wear is actually higher, given that ALL the drives in the set are operating simultaneously. The platters are always spinning (unless a MAID function on a RAID controller is engaged, or the SMART functions kick in if they're in a software RAID/Fake RAID configuration off a simple controller card), and reads/writes are performed on all drives. This introduces vibration issues, which consumer models can't handle (no feedback/control circuits employed, such as fly-height adjustment).

    Unless you mean in terms of the time needed for reads/writes on each disk, as the load is shared (a reduction of access time per drive).

    Environmental (and other) factors such as these are ignored, as the testing is for an individual drive, not RAID sets. Hence part of the disparity in the data, IMO. ;)
     
  18. Tesselator macrumors 601

    Tesselator

    Joined:
    Jan 9, 2008
    Location:
    Japan
    #18
    Yup.



    Yeah, it's not the same - I agree with that. But is it more or less? There's good math to support both ideas. The only thing that remains the same is that if a single drive fails catastrophically, you're more likely in a RAID 0 set to lose more data, as a RAID 0 set will typically be larger. Yet the inverse is true if we consider a RAID 0 of four 250 GB drives (or three 300 GB drives) compared to a single 1.5 or 2.0 TB drive, so even that isn't a hard and fast fact.



    Yeah, in fact, when you begin to actually do the math, investigate, and work in the actual statistics, you realize early on that it's such an inaccurate model that it really doesn't even apply. We might as well just pick a number from a hat. :)



    Yeah, that would work. But then unless we use the exact same parts in exactly the same conditions it kinda kills the usefulness of doing it at all. Right? I mean the whole thing is about trying to come up with a predictive model so that we can compensate in advance. After the fact it's a little late. :)



    Yup. Hehehe, that's the size of the margin of error we're dealing with. :eek: Not a very good predictive model, is it? :p But yeah, if we get past the first 3 to 6 months, then typically we're safe for 3 years. Over the following 2 years it becomes increasingly risky.



    Distributed load - if we consider that the amount of data is a critical factor in wear. The amount of data also factors into the time model: more data takes longer.

    So if I create a stat that calculates the number of R/W operations over the life of the drive, it will vary greatly as I vary the data size. An extreme example would be a 32 KB file as opposed to a 320 GB file. Mean R/W operations before failure. :)

    If I write as many as I can in 1,000,000 hours of each, the wear will be profiled differently. Additionally, the same operation distributed over 4 drives means that each drive's platter set will be used less, given that the same model drives are used in all instances.



    Yup, exactly my point. :)
     
  19. nanofrog macrumors G4

    Joined:
    May 6, 2008
    #19
    Now you're getting into the realm that drives me nutz: the platter density and platter count that make up a drive. UBE ratings aren't keeping pace these days, so too many drives, too many platters, and high density can all become problems. Ye olde multiple-failures bit, especially those that occur during a rebuild. :eek: :mad: RAID 5, and even 6 now, are affected by this more substantially with each capacity increase. Most of what I see simplifies it down to capacity, such as the recommendation of keeping a RAID 5 at a max capacity of ~10TB or so. It's a little too complicated to relate it to drive count. As you noted, the drives themselves do matter, and typically consist of 1 - 4 platters. Then there's the density - 250 GB and 333 GB per platter are common now. Thinking about it can cause headaches. :p
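
    As a rough illustration of where the ~10TB recommendation comes from (my own back-of-envelope numbers, not from any vendor datasheet), here's the usual estimate for hitting at least one unrecoverable read error (URE) while reading the surviving members during a RAID 5 rebuild:

        ure_per_bit = 1e-14       # assumed consumer-class spec: 1 URE per 1E14 bits read
        rebuild_bytes = 10e12     # assumed ~10 TB read back from the surviving drives

        bits_read = rebuild_bytes * 8
        p_clean = (1 - ure_per_bit) ** bits_read   # chance every bit reads back OK
        print(1 - p_clean)                         # ~0.55 with these numbers

    Change either assumption and that number moves a lot, which is kind of the point. ;)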

    Seems that way to me as well. :D

    Experimentation is handy, but waiting for a failure to occur is too late for using it as a predictive means for replication. By the time it's done, the equipment may not even be made anymore. :eek:

    And for creating a model, it wouldn't be accurate either, as there's too many hardware and software variations. Perhaps as a rule of thumb, but it won't hold long, as the technology is always changing. ;)

    It's all we've got though. :p

    The last 2 years are a gamble. Less with SAS IMO, but it's not eliminated. Hmm... maybe that's the reason for implementing Replacement Policies. :eek: ;)
    I assumed this is what you meant, but wasn't absolutely sure. So I played it safe. :D

    Don't forget the stripe size either. ;)

    Then you've got the CRC implementation to consider, firmware, feedback circuits,.... is there no end? :eek: (I had to include some hardware here somewhere). Couldn't help myself. :p
    Usage patterns. Say it isn't so. :D :p
     
  20. Tesselator macrumors 601

    Tesselator

    Joined:
    Jan 9, 2008
    Location:
    Japan
    #20
    Yup. But it's barely good enough to apply to a single drive. Trying to apply it to a RAID set is like... forget it. It reminds me of magnitudes of severe rounding errors. :)



    NP. It also raises the question of which is more critical: seek actuations, or the superparamagnetic limits and life of the cobalt-based alloy layer on the platters. Perpendicular recording has helped, but soon I guess we'll need something like HAMR or some kind of patterned-media recording. :)
     
  21. nanofrog macrumors G4

    Joined:
    May 6, 2008
    #21
    Error Cascading :eek: :p
    Good question. I'm thinking the magnetic density limit of the materials (superparamagnetic) is, particularly in relation to the UBEs. If those don't increase along with the capacities of HAMR and HAMR + patterning, I really don't want to think about the mess. I'm imagining: build, fail, rebuild, fail. Wash, rinse, repeat. :p

    What I'd really like to see is FeRAM-based gear. It's capable of 1E16 currently... YES! :D Followed by optical. Crap. Now I'm making myself drool. :p
     
  22. NoNameBrand macrumors 6502

    Joined:
    Nov 17, 2005
    Location:
    Halifax, Canada
    #22
    Apologies for digging this up again, I was away.

    This is the one really interesting (to me) point you've made, and it changes everything.

    To me, the complexities of modelling failure rates that you've described don't diminish the basic idea behind the formula: that adding more things to a system, each of which can break the whole, increases the odds of it all breaking; and that, in the average case, similar drives will have a similar probability of failure, from which one can make a broad statement about the reliability of striped arrays.

    But! This is a neat and (I think) original idea, which does indeed monkey up the formula, the degree of monkeying depending on how much mechanical wear and tear matters (a lot, I think) and the nature of the data and its use. I still think that a striped array is less reliable than an individual disk (given that there are more pieces that can break it), but I won't make any more claims about how much.
     
  23. nanofrog macrumors G4

    Joined:
    May 6, 2008
    #23
    This is the case, but it's not super simple. ;)
     
