For a full explanation read this AnandTech article:
http://www.anandtech.com/show/3690/...andforce-more-capacity-at-no-performance-loss
......
Bottom line is that OWC is fairly full of it on this one. They're charging a premium for drives that offer lower capacity at equivalent speed and durability. The only thing you really get out of it is the added warranty.
That reading of the AnandTech write-up is flawed. Quoting from the write-up:
More spare area can provide for a longer lasting drive, but the best way to measure its impact is to look at performance (lower write amplification leads to lower wear which ultimately leads to a longer lifespan).
What AnandTech measured was whether there was an increase in performance. They didn't measure anything about durability. So claiming that you don't get any increased durability is bogus; they never measured it. It was the "best way to measure" because it was the only one they tried. (Granted, it would take months and low-level tools to figure out anything about durability... so this was the expedient test to run. Which is "best" if the goal is deploying content that generates ad page views.)
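To make concrete why "lower write amplification leads to ... a longer lifespan", here is a rough back-of-the-envelope sketch. Every number in it (capacity, P/E-cycle rating, daily write volume, the write-amplification values) is an assumption picked for illustration, not anything from the article:

```python
# Rough back-of-the-envelope: how write amplification feeds into lifespan.
# Every number here is an assumed, illustrative value, not anything AnandTech measured.
capacity_gb = 100            # usable capacity of the drive (assumed)
pe_cycles = 5000             # rated program/erase cycles per cell (assumed MLC-class figure)
host_writes_gb_per_day = 20  # assumed daily write workload

def lifespan_years(write_amplification):
    """Total NAND endurance divided by actual NAND writes per day."""
    total_endurance_gb = capacity_gb * pe_cycles
    nand_writes_per_day = host_writes_gb_per_day * write_amplification
    return total_endurance_gb / nand_writes_per_day / 365

for wa in (0.6, 1.0, 1.5):
    print(f"write amplification {wa}: ~{lifespan_years(wa):.0f} years of writes")
```

Same capacity, same workload; only the write amplification changes, and the projected lifetime moves with it. That is the durability dimension the benchmark never touched.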
Common sense should tell you that the bigger the pool of empty cells to write to, the more any reasonable wear-leveling algorithm will spread out the erase/writes. The smaller the pool of free cells, the higher the rate of erase/writes on each individual cell. Sure, a vendor could have a boneheaded algorithm inside its controller, but that seems pretty hard to get wrong. (Select a free cell from the front of the queue; recycle the current cell and put it on the end of the queue. The longer the queue, the longer until an individual cell gets recycled. Pretty durn hard to screw that up.) If the wear-leveling algorithm doesn't get better with a larger number of "free" cells, that's a drive you would probably want to avoid putting into a write-oriented RAID system.
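A toy sketch of that queue idea (my own illustration, not the actual SandForce controller logic): assume a small set of "hot" logical blocks being rewritten constantly while cold data stays put, and watch how the worst-case erase count drops as the free pool grows.

```python
from collections import deque

def max_erases(spare_blocks, hot_blocks=10, writes=10_000):
    """Toy FIFO wear leveler: take the oldest free block off the front of the
    queue, put the replaced block on the back. Cold data never moves in this
    toy, so wear lands only on the hot blocks plus the spare pool."""
    erase_count = {}
    free_queue = deque(f"spare{i}" for i in range(spare_blocks))
    hot = [f"hot{i}" for i in range(hot_blocks)]   # physical blocks currently holding hot data

    for i in range(writes):
        slot = i % hot_blocks                 # rewrite one of the hot logical blocks
        new_block = free_queue.popleft()      # oldest free block from the front of the queue
        erase_count[new_block] = erase_count.get(new_block, 0) + 1
        free_queue.append(hot[slot])          # the replaced block goes to the back
        hot[slot] = new_block
    return max(erase_count.values())

# The longer the free queue, the longer until any individual block is recycled.
for spare in (7, 28):
    print(f"{spare} spare blocks -> worst block erased {max_erases(spare)} times")
```

With 7 spare blocks the worst block absorbs roughly writes / (hot + 7) erases; with 28 it drops to roughly writes / (hot + 28). Same trivial algorithm, just a longer queue.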
As far as performance goes, again the AnandTech write-up falls a bit short of demonstrating a difference or the lack of one. The difference depends both on the workload and on the size of the drive (and hence the size of the over-provisioning "buffer"). That much is indicated in the test. The "heavy download" variation of the benchmark comes closest to invoking enough larger writes to see any substantive difference. However, it is probably not large enough to provoke any real difference between the two percentages. Multi-GB files would have a better chance of actually exhibiting a difference.
Anand's tests are far more illustrative of the different strategies and implementations than of the specific SandForce implementation. Even so, there is an effect. Whether you want to label it cost-effective or not, there is a difference.
For example, if the files being written (so you get a long stream of sequential writes) are smaller than 50% of either over-provisioning buffer, then you wouldn't expect to see much of a difference, because you aren't blowing out the short-term "free cell" buffer that the drives keep.
If the files were large enough to blow out 80% of the smaller setting but only 50% of the larger one, then there might be a difference. But as long as you aren't blowing out either, it isn't very surprising to see no effect.
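To put rough numbers on that (the 128 GB raw with 120 GB vs 100 GB usable split is my assumption about the class of drive in question, and the write sizes are made up): a benchmark full of small files stresses neither configuration, while a multi-GB write can swamp the smaller spare area but not the larger one.

```python
# Rough numbers on "blowing out" the spare area. Capacities are assumed values
# for a 128 GB-class drive sold as 120 GB (~7% spare) vs 100 GB (~28% spare).
raw_nand_gb = 128

for usable_gb in (120, 100):
    spare_gb = raw_nand_gb - usable_gb
    for write_gb in (2, 10):                 # assumed sizes of incoming sequential writes
        fraction = write_gb / spare_gb
        verdict = "exceeds" if fraction > 1 else f"uses about {fraction:.0%} of"
        print(f"{spare_gb} GB spare: a {write_gb} GB write {verdict} the spare area")
```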
The other issue with SandForce is that the over-provisioned area is also used for other stuff (ECC, some duplicate data, redirects with hashing, etc.).
So it comes back to workload. If you're doing RAID-0 so that the reads you do 99% of the time go faster... sure, over-provisioning doesn't have much of an effect. A game or some common app workload... you're not going to see much bang for the buck. But then why RAID those anyway? With extremely read-skewed workloads it is overkill for most people.
If you're doing RAID-0 because you want to spread out the writes... then layered over-provisioning (across the drives with RAID-0, and within each drive with the internal spare area) probably does have a positive effect on durability. Also, you would likely see much of a performance difference only when writing large (multi-GB) files. Performance doesn't matter much, though, if the drive starts losing data (or shrinking in size). Lifetime trumps performance if you write a lot.
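A quick sketch of what that layering looks like in numbers, with the same kind of assumed figures as above (none of these are measured values): the stripe splits the host writes each drive absorbs, and the internal spare area is what keeps the per-drive write amplification down.

```python
# "Layered" over-provisioning in a two-drive RAID-0: striping splits the host
# writes across the drives, and each drive still has its own internal spare area.
# All numbers are assumptions for illustration.
drives = 2
array_writes_gb_per_day = 40     # assumed total write workload hitting the array
capacity_gb = 100                # assumed usable capacity per drive
pe_cycles = 5000                 # assumed rated P/E cycles
write_amplification = 1.2        # assumed; kept low by the internal spare area

per_drive_writes = array_writes_gb_per_day / drives
nand_writes_per_day = per_drive_writes * write_amplification
years = capacity_gb * pe_cycles / nand_writes_per_day / 365

print(f"each drive sees {per_drive_writes:.0f} GB/day of host writes")
print(f"rough per-drive endurance at WA {write_amplification}: ~{years:.0f} years")
```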