Hi, who here is running two or more SSDs in RAID0?
What did you use for a stripe size? Why (theory, test results, source)?
Thx!
Sorry, but I highly doubt anyone is.
Most SSDs have a native erase block size of 32KB; some have 64KB. This is the space that has to be trimmed or "re-zeroed" before a write. If you write 1,024KB to a RAID-0 pair of drives with 64KB erase blocks, a 32KB stripe size means 32 writes requiring 64 erases; with 128KB stripes it is 32 writes and 32 erases. You'll effectively see a 30-50% difference in write performance. This does not affect reads quite as much, but the hit is still usually double-digit. Also, with double the erase cycles, you cut the lifespan of the drive in half.
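To put rough numbers on that interaction, here is a quick Python sketch. It is my own simplified worst-case model (every erase block a stripe chunk overlaps gets erased once, and an unaligned chunk straddles one extra block), so the counts are illustrative rather than the exact figures above:

    # Simplified worst-case model (my own assumptions, not the exact arithmetic above):
    # each stripe-sized chunk forces an erase of every erase block it overlaps,
    # and a chunk not aligned to a block boundary straddles one extra block.
    def writes_and_erases(transfer_kb, stripe_kb, erase_block_kb, aligned):
        chunks = transfer_kb // stripe_kb                   # host-visible stripe writes
        blocks_per_chunk = -(-stripe_kb // erase_block_kb)  # ceiling division
        if not aligned:
            blocks_per_chunk += 1                           # chunk straddles a block boundary
        return chunks, chunks * blocks_per_chunk

    # 1,024KB sequential write onto drives with 64KB erase blocks:
    print(writes_and_erases(1024, 32, 64, aligned=False))   # (32, 64)
    print(writes_and_erases(1024, 128, 64, aligned=True))   # (8, 16)

The absolute counts differ from the post, but the ratio is the point: stripe chunks smaller than (or misaligned with) the erase block roughly double the erases for the same amount of host data.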
With small writes, who cares? It takes you how long to write 4K at 200MB/s? When it really matters is when you load that 500MB file, or a 300MB game executable plus 1100MB of textures.
First thing you need to do is forget everything you've always known about RAIDing platter-type disk drives. A smaller stripe may no longer be best for those small I/Os. The drives you are working with are already internally cut into 4KB stripes, so you are building a RAID on top of, and out of, multiple 4KB-striped RAIDs. Don't be surprised if your DB app that never reads more than 16KB consecutively still does better with 32, 64, or 128KB stripes.
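To see why a small read doesn't automatically reward a small stripe, here's a toy mapping in Python (my own sketch; the 16KB read at a 64KB offset and the stripe sizes are just example values):

    # How many stripe chunks and distinct drives does one small read touch?
    # Toy model of a 2-drive RAID-0; offsets and sizes in KB.
    def chunks_touched(offset_kb, length_kb, stripe_kb, drives=2):
        first = offset_kb // stripe_kb
        last = (offset_kb + length_kb - 1) // stripe_kb
        chunks = last - first + 1
        drives_hit = len({c % drives for c in range(first, last + 1)})
        return chunks, drives_hit

    for stripe_kb in (4, 32, 64, 128):
        print(stripe_kb, chunks_touched(64, 16, stripe_kb))
    # 4KB stripes: the 16KB read splits into 4 chunks across both drives
    # 32KB and up: the read stays within a single chunk on one drive

Splitting one 16KB read across both drives means issuing two commands instead of one, which is one plausible reason larger stripes can still win for small I/O.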
The erase block size has a significant effect on write amplification. On an untrimmed drive (worst case) a 1KB write can cause a complete block to be erased, so the smaller the block size, the less wear on the NAND cells. Techreport discusses it here, but there are plenty of other reviews that also cover the issue.
With regard to the RAID debate, write amplification would appear to be a big factor in why smaller stripe sizes are not effective. The answer lies in the inner workings of the SSD, which no one seems willing to discuss.
One thing I have noticed in all the Iometer benchmarks I have seen is that CPU utilisation goes up significantly with small stripe sizes, yet read/write speeds seem to decrease.
"Write amplification" is really a terrible term. Perhaps "write overloading" would have been better. What it refers to is the amount of NAND flash that needs to be written to accommodate a certain chunk of data.
For example, in a typical SSD, if you want to write 4KB of data, the system first needs to copy a 128KB block of data from the NAND to the controller's memory. Then the system modifies that data with the 4KB of new data. Finally, it needs to write the entire 128KB block back to the NAND. Typical write amplification multipliers are in the neighborhood of 20-40x on average.
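That read-modify-write case works out to a large multiplier on its own (a trivial worked example, using only the numbers already given above):

    # Write amplification for the read-modify-write case described above:
    # 4KB of new host data forces a full 128KB NAND block to be rewritten.
    erase_block_kb = 128
    host_write_kb = 4
    print(erase_block_kb / host_write_kb)   # 32.0 -> 32x NAND writes for 4KB of host data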
Nice bit of research. Since Intel X25-M drives reportedly have 128K erase blocks, it's prudent to use a 128K stripe size when RAIDing these drives.
That's what I've chosen.
I was wondering about this as well. Given that the 160GB Intel SSD is about twice the price of the 80GB, why not just buy 2 x 80GB and RAID-0 them? Same size, twice the speed.
How many would be willing to part with $1256 + S/H to try it, though? I'd love to see an Intel RAID-10 setup with four 80GB drives.