OWC SSD for RAID . . . is the RE version necessary?

Discussion in 'Mac Pro' started by sboerup, Aug 25, 2010.

  1. sboerup macrumors 6502

    Joined:
    Mar 8, 2009
    #1
    OK, I find the OWC RE SSD version a bit of a gimmick for RAID performance on their SSDs . . . maybe I'm just not gullible, but I also don't know the technical or scientific reasoning behind what they say.

    Rather than ask them for a potential sales pitch or weak answer, considering this forum has a lot of knowledge on the matter, I thought it appropriate to ask here.

    I was thinking of getting 2, or even 3, 40GB SSDs (the $99 model) to set up in a RAID array using an internal card. The 50GB "RAID approved" version is more than 2x the cost of the 40GB version, so to be honest, I'm curious why.

    http://eshop.macsales.com/shop/internal_storage/Mercury_Extreme_SSD_Sandforce/Solid_State_Pro

    Also, this is the card I was planning on getting: http://www.newegg.com/Product/Product.aspx?Item=N82E16816115027&cm_re=2310-_-16-115-027-_-Product
    It's the same card that Transintl.com has for the DX4 drive caddy thingamabob . . . so I'm assuming this is bootable?
     
  2. sboerup thread starter macrumors 6502

    Joined:
    Mar 8, 2009
    #2
    Scratch that card from Newegg; it only supports up to 300MB/s . . . which is about the speed of a single SSD.
     
  3. Ryan P macrumors regular

    Joined:
    Aug 6, 2010
    #3
    I'd be curious, as well, about the cheapest solution to extract the most performance out of several SSDs if you're not concerned with advanced RAID functions like RAID 5 on a hardware RAID card. I'm thinking "enthusiast," not "enterprise," level stuff. That is quite a good deal if those drives can handle RAID in any capacity.
     
  4. barefeats macrumors 65816

    barefeats

    Joined:
    Jul 6, 2000
    #4
    The big idea of the "enterprise" OWC Extreme Pro RE SSD is that it has 20% over-provisioning. That means 20% of a 256GB SSD is unavailable to the user, so it's advertised as 200GB. OCZ does the same thing with the Vertex 2.

    Consumer-grade SSDs can show dramatic variations in response times under sustained write conditions, an issue dubbed the "write cliff" effect. This drop-off occurs once the drive has been filled for the first time and its internal garbage collection and wear-leveling routines kick in; it only affects write performance. Enterprise-grade drives (like the OWC Mercury Extreme Pro RE and OCZ Vertex 2) avoid this problem by over-provisioning and by employing wear-leveling algorithms that only move data around when the drive is not being heavily utilized. (Google "SSD" and "overprovisioning" for more info.)

    OWC offers an Extreme Pro SSD with 7% over-provisioning that costs less and should be sufficient for consumer use.
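
    A quick sketch of the capacity math (my own arithmetic from the figures above; note that the percentage depends on whether you divide by the raw or the advertised capacity, which is why both ~20% and 28% get quoted for the same drives):

        raw_gb, advertised_gb = 256, 200   # RE drive: 256GB of NAND sold as 200GB
        spare = raw_gb - advertised_gb     # 56GB held back for the controller

        print(spare / raw_gb)              # ~0.22 -> the "20% unavailable" figure
        print(spare / advertised_gb)       # 0.28  -> over-provisioning relative to usable space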
     
  5. HunterMaximus macrumors member

    Joined:
    Jun 25, 2008
    Location:
    Toronto, ON, Canada
    #5
    For a full explanation, read this AnandTech article: http://www.anandtech.com/show/3690/...andforce-more-capacity-at-no-performance-loss
    It deals with OCZ-branded drives, but the results still apply, since the OWC drives are SandForce-based too. Specifically, the "RE" drives use 28% over-provisioning, and the non-RE ones use 7%.

    Bottom line is that OWC is fairly full of it on this one. They're charging a premium for drives that offer lower capacity at equivalent speed and durability. The only thing you really get out of it is the added warranty.
     
  6. sboerup thread starter macrumors 6502

    Joined:
    Mar 8, 2009
    #6
    This is what I anticipated.

    Thanks for the explanations . . . the "over-provisioning" makes sense; it's almost like short-stroking a large 1TB drive down to 400GB so that slowdowns don't occur when it's writing on the slower portion of the drive.

    I think I'll get 2x40GB SSDs and play around . . . now to find an appropriate card to handle them.
     
  7. johnnymg macrumors 65816

    johnnymg

    Joined:
    Nov 16, 2008
    #7
    Why don't you just start with a SW RAID?
     
  8. alphaod macrumors Core

    alphaod

    Joined:
    Feb 9, 2008
    Location:
    NYC
    #8
    Because software RAIDs suck?
     
  9. johnnymg macrumors 65816

    johnnymg

    Joined:
    Nov 16, 2008
    #9
    I've seen some excellent BW #'s using an OS X RAID 0 config. So is it really fair to say they "suck"? :p

    cheers
    JohnG
     
  10. sboerup thread starter macrumors 6502

    Joined:
    Mar 8, 2009
    #10
    The reason I don't go software RAID is that I'm currently using all 5 of my HD ports . . . the RAID 0 will replace one drive with 2 drives, so I need another port.
     
  11. nanofrog macrumors G4

    Joined:
    May 6, 2008
    #11
    What exactly are you trying to do?

    I ask, as you need to be aware:
    1. The ICH on the system board (backplane board) is only good for ~660MB/s, so it doesn't take much to throttle it when using SSDs and other disks simultaneously (if the use isn't simultaneous, then a software RAID may make sense, assuming 0/1/10 will be sufficient for your needs). For example, if the SSDs can each produce 250MB/s and you try to run 3 of them (750MB/s total), they will slow down to the ICH's maximum; you won't get 750MB/s, as the DMI bandwidth allocated to the ICH is insufficient for that much (see the sketch after this list).

    2. Parity-based arrays require a hardware RAID controller in the system (there are software implementations, but they're not capable of dealing with the write-hole issue associated with parity arrays), and Disk Utility isn't capable of this at all (RAID 5/6, or the nested-parity levels 50/60).
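
    To put rough numbers on point 1 (a back-of-the-envelope sketch; the ~660MB/s ceiling and 250MB/s per-drive figures are the approximations quoted above, not measurements):

        # Aggregate throughput is capped by the ICH/DMI ceiling, not by the sum of the drives.
        ICH_CEILING_MBS = 660   # approximate usable ICH bandwidth
        PER_SSD_MBS = 250       # approximate per-drive sequential throughput

        for n in (1, 2, 3):
            demand = n * PER_SSD_MBS
            print(f"{n} SSD(s): want {demand} MB/s, get ~{min(demand, ICH_CEILING_MBS)} MB/s")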

    Granted, as you're already using all of the system's SATA ports, you will need a separate controller of some kind. But the details will help put you in the right direction (including whether SSDs are actually the most effective solution in terms of balancing cost/performance for your specific usage).
     
  12. macintel4me macrumors 6502

    Joined:
    Jan 11, 2006
    #12
    I'm 99% sure that you can change the provisioning size through its firmware.
     
  13. Ozric macrumors newbie

    Joined:
    Jul 22, 2004
    Location:
    NJ
    #13
    I can offer real results now. Yesterday I installed the OWC Extreme Pro 240GB SSD as my startup drive. I also have a 4-HD external software RAID 0 in a SeriTek 5PM housing using Firmtek's 2ME4-E SATA card. I copied a 50GB file to both the startup SSD and the RAID, then ran BlackMagic Disk Speed Test. Read/write numbers: SSD 153.8/147.9 MB/s; RAID 98.5/25.2 MB/s.
     
  14. sboerup thread starter macrumors 6502

    Joined:
    Mar 8, 2009
    #14
    Thanks for sharing . . . . but wow mate, those speeds are extremely low for what you should be getting. The OWC SSD runs at 280/270 read and write. Just sayin' . . .
     
  15. Ozric macrumors newbie

    Joined:
    Jul 22, 2004
    Location:
    NJ
    #15
    The numbers are upwards of 265 normally; these were taken while the 50GB file was being copied.
     
  16. alphaod macrumors Core

    alphaod

    Joined:
    Feb 9, 2008
    Location:
    NYC
    #16
    The 40GB version probably has like 42GB in total chips, while the 50GB version has 64GB in chips.

    I'm sure you'd love to, but OWC won't let you ;)
     
  17. deconstruct60 macrumors 604

    Joined:
    Mar 10, 2009
    #17
    That reading of the AnandTech write-up is flawed. Quoting from the write-up:

    What AnandTech measured was whether there was an increase in performance. They didn't measure anything about durability, so claiming that you don't get any increased durability is bogus. They never measured it. It was the "best way to measure" only because it was the only one they tried. (Granted, it would take months and low-level tools to figure out anything about durability . . . so this was the expedient test to run. Which is "best" if you're trying to deploy content to generate ad pageviews.)


    Common sense should tell you that the bigger the pool of empty cells to write to, the more any reasonable wear-leveling algorithm will spread out the erase/write cycles; the smaller the pool of free cells, the higher the rate of erase/writes on each individual cell. Sure, the vendors could have boneheaded algorithms inside their controllers, but that seems pretty hard to get wrong. (Select a free cell from the beginning of the queue; recycle the current cell and put it on the end of the queue. The longer the queue, the longer until an individual cell gets recycled. Pretty durn hard to screw that up.) If a wear-leveling algorithm doesn't get better with a larger number of "free" cells, that's a drive you would probably want to avoid putting into a write-oriented RAID system.
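
    To illustrate, here's a toy model of exactly that queue scheme (block counts and names are mine, not any vendor's actual controller logic):

        from collections import deque
        import random

        def max_wear(spare_blocks, user_blocks=100, writes=100_000):
            """FIFO wear leveling: free blocks come off the front of the queue,
            recycled blocks go on the back. A larger spare pool means more
            writes pass between erases of the same physical block."""
            total = user_blocks + spare_blocks
            erases = [0] * total
            free = deque(range(user_blocks, total))  # spare blocks start out free
            mapping = list(range(user_blocks))       # logical -> physical block map
            for _ in range(writes):
                lba = random.randrange(user_blocks)  # overwrite a random logical block
                new = free.popleft()                 # take the least-recently-freed block
                erases[new] += 1                     # reusing a block costs an erase
                free.append(mapping[lba])            # old physical block joins the back
                mapping[lba] = new
            return max(erases)

        # ~7% vs ~28% spare area: the worst-case per-block erase count drops.
        print(max_wear(spare_blocks=7), max_wear(spare_blocks=28))

    Even this dumb FIFO spreads the same number of writes over more blocks as the spare pool grows, which is the durability argument in a nutshell.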



    As far as performance goes, the AnandTech write-up is again a bit lacking in demonstrating a difference or the absence of one. The difference depends both on the workload and on the size of the drive (and hence the size of the over-provisioning "buffer"). That much is indicated in the test. The "heavy download" variation of the benchmark comes closest to invoking enough "larger" writes to show any substantive difference; however, it's probably not large enough to provoke any real difference between the two percentages. Multi-GB files would have a better chance of actually exhibiting a difference.

    Anand's tests are far more illustrative of the different strategies and implementations than of the specific SandForce implementation. Even so, there is an effect. Whether you want to label it cost-effective or not, there is a difference.

    For example, if the files being written (giving a long stream of sequential writes) are smaller than 50% of either over-provisioning buffer, then you wouldn't expect to see much of a difference, because you're not blowing out the short-term "free cells" buffer that the drives have.

    If the files were large enough to blow out 80% of the smaller setting but only 50% of the larger one, then there might be a difference. But as long as you're not blowing out either, it isn't very surprising to see no effect.

    The other issue with SandForce is that the over-provisioned area is also used for other stuff (ECC, some duplication, redirects with hashing, etc.).


    So it comes back to workload. If you're using RAID 0 so that reads, 99% of the time, are faster . . . sure, over-provisioning doesn't have much of an effect. A game or some common app workload . . . you're not going to see much bang for the buck. But then why RAID those anyway? For extremely read-skewed workloads it is overkill for most people.

    If you're doing RAID 0 because you want to spread out the writes . . . then layered over-provisioning (across the drives via RAID 0, and within each drive internally) probably does have a positive effect on durability. Also, you would likely only see much of a performance difference when writing large (multi-GB) files. Performance doesn't matter much, though, if the drive starts losing data (or shrinking in size). Lifetime trumps performance if you write a lot.
     
  18. deconstruct60 macrumors 604

    Joined:
    Mar 10, 2009
    #18
    P.S. I forgot about the background material on AnandTech's site (referenced indirectly earlier in that article). When the site actually does discuss experiments that bear directly on durability, there is an effect.

    http://www.anandtech.com/show/2829/8

    In each of the graphs on that page there is a difference across all the static (and implicitly the corresponding dynamic) data over the 0.1 to 0.3 range on the x-axis, which represents the over-provisioning rate (i.e., "spare area"). The majority of the curves start to flatten out between 0.2 and 0.3, so you can see room to debate whether 20% or 28% is the better bang for the buck (the effect diminishes there), but there is clearly a difference between that whole range and anything below 0.1.

    This also highlights why there is a qualified "can make a difference in durability" clause. In the second graph, where over-provisioning is set to 0.1 (10%), there is no impact whether you write more or less. There is no impact because 10% is grossly too small to make one, not because a significant amount of spare area wouldn't have an impact. So 7% doesn't help on this specific issue because it is dinky.




    And toward the bottom of that page . . .

    So one of the differentiators Intel uses on its enterprise drives is upping the "spare area" (i.e., the over-provisioned area). Multiple vendors vary the over-provisioning to change the value proposition of the drive. The (perhaps temporary) difference with the SandForce-controller drives is that it is somewhat configurable by the drive vendors (and perhaps users will hack around with the drive firmware).

    Additionally, where there is an internal garbage collector, you can approximate the same effect yourself by simply not filling the drive up (this depends on no software going out of control and consuming everything it can).
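
    Rough arithmetic for that DIY approach (my own numbers, and it assumes the garbage collector can actually use the space you leave empty):

        def effective_op(raw_gb, advertised_gb, kept_free_gb=0):
            """Spare area as a fraction of usable space: factory
            over-provisioning plus any space you deliberately never fill."""
            spare = (raw_gb - advertised_gb) + kept_free_gb
            return spare / (advertised_gb - kept_free_gb)

        print(f"{effective_op(256, 240):.0%}")                   # stock ~7% drive
        print(f"{effective_op(256, 240, kept_free_gb=37):.0%}")  # leave ~37GB empty -> ~26%, near the RE's 28%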
     
