VirtualRain (original poster)
Hi, who here is running two or more SSDs in RAID 0?

What did you use for a stripe size? Why (theory, test results, source)?

Thx! :)
 
Something to consider

I googled SSD performance and found that while they have great initial speeds, their performance degrades over time.

I suggest that anyone looking into SSDs consider this if they're after the absolute best performance from their systems.

http://perspectives.mvdirona.com/2008/04/25/LaptopSSDPerformanceDegradationProblems.aspx

From what I gather, higher cost SSDs should perform better over time vs. entry level lower cost units.
 
I'm running two X25-Ms in RAID 0. I decided on a 64K stripe size because some of the sites I visited while researching seemed to recommend it. I didn't have the patience to find the optimal stripe size for these SSDs myself, but I'd be very interested to see the optimal stripe sizes for different uses of the array (OS/app drive, photo scratch disk, video scratch disk, etc.)

I posted some speed results here: https://forums.macrumors.com/threads/698503/
 
I polled the experts over on another IT-related forum... here are some of their insights...

Most SSDs have a native erase-block size of 32KB; some have 64KB. This is the space that has to be trimmed or "re-zeroed" before a write. Say you write 1,024KB to a RAID-0 pair of drives with 64KB erase blocks: with a 32KB stripe size, every 32KB chunk triggers its own 64KB block erase, so you end up erasing 2,048KB of flash to write 1,024KB of data. With stripes of 64KB or larger, writes fill whole erase blocks, and the same 1,024KB erases only 1,024KB. You'll effectively see a 30-50% difference in writes. This does not affect reads quite as much, but it's still usually double digits. Also, with double the erase cycles, you will cut the lifespan of the drive in half.
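
Here's a minimal Python sketch of that arithmetic. It assumes, as the post above does, that every stripe-sized write triggers a read-modify-write of whole erase blocks and that the controller coalesces nothing; the sizes are the post's examples, not measurements.

Code:
ERASE_BLOCK_KB = 64    # native erase block from the example above
TOTAL_WRITE_KB = 1024  # the 1,024KB example write

def flash_erased_kb(stripe_kb):
    # Each stripe-sized chunk is assumed to cost a read-modify-write of
    # every erase block it touches, with no coalescing by the controller.
    chunks = TOTAL_WRITE_KB // stripe_kb
    blocks_per_chunk = (stripe_kb + ERASE_BLOCK_KB - 1) // ERASE_BLOCK_KB
    return chunks * blocks_per_chunk * ERASE_BLOCK_KB

for stripe in (32, 64, 128):
    print(f"{stripe:>3}KB stripe: {flash_erased_kb(stripe)}KB of flash erased")
# 32KB stripes erase 2048KB for the 1024KB write; 64KB and 128KB erase 1024KB.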

With small writes, who cares? How long does it take to write 4K at 200MB/s? About 20 microseconds. Where it really matters is when you load that 500MB file, or a 300MB game executable plus 1,100MB of textures.

The first thing you need to do is forget everything you've always known about RAIDing platter-type disk drives. A smaller stripe may no longer be best for small I/Os. The drives you're working with are already internally cut into 4KB stripes, so you're building a RAID on top of, and out of, multiple 4KB-striped arrays. Don't be surprised if your DB app that never reads more than 16KB consecutively still does better with 32, 64, or 128KB stripes.
 
I'm reposting this from another thread because it's very relevant here...

The erase block size has a significant effect on write amplification. On an untrimmed drive (worst case), a 1K write can cause a complete block to be erased, so the smaller the block size, the less wear on the NAND cells. The Tech Report discusses it, as do plenty of other reviews.

With regard to the RAID debate, write amplification would appear to be a big factor in why smaller stripe sizes are not effective. The answer lies in the inner workings of SSDs that no one seems willing to discuss.

One thing I have noticed in all the Iometer benchmarks I have seen is that CPU utilisation goes up significantly with small stripe sizes, yet read/write speeds decrease.

I think I get it...

I found this additional explanation...

"Write amplification" is really a terrible term. Perhaps "write overloading" would have been better. What it refers to is the amount of NAND flash that needs to be written to accommodate a certain chunk of data.

For example, in a typical SSD, if you want to write 4KB of data, the system first needs to copy a 128KB block of data from the NAND into the controller's memory. Then it modifies that block with the 4KB of new data. Finally, it writes the entire 128KB block back to the NAND. Typical write amplification multipliers are in the neighborhood of 20-40X.
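
A quick Python sketch of that read-modify-write math; the 128KB block and 4KB payload are the figures from the paragraph above, not measured values, and real controllers will behave differently.

Code:
ERASE_BLOCK_KB = 128  # block copied out, modified, and written back

def write_amplification(payload_kb):
    # Flash actually written divided by the data the host asked to write.
    return ERASE_BLOCK_KB / payload_kb

print(write_amplification(4))    # 32.0: a 4KB write rewrites 128KB of NAND
print(write_amplification(128))  # 1.0: block-sized writes carry no penalty

A lone 4KB write comes out at 32x, which sits right in the quoted 20-40X neighborhood.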

Basically, the smaller the write erase block, the better.

Also, I see now why you want your stripe size to match or exceed your write erase block size: if the SSD has to erase and rewrite 128K regardless of how much data is being committed to disk, you might as well have a matching stripe.

Example (if I understand this correctly)

If the OS is writing 128K to disk and you have a stripe size of 32K and a write erase block size of 128K, that write is split into four 32K chunks across the two drives, and each chunk triggers its own 128K erase-and-rewrite cycle on its drive... so a total of 512K has to be rewritten to commit 128K of data. Very inefficient.
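
That example, generalized as a small Python function so other stripe and erase-block sizes can be plugged in; it assumes each stripe chunk costs one full block cycle on its drive, per the reasoning above.

Code:
def flash_rewritten_kb(write_kb, stripe_kb, erase_block_kb):
    # Chunks the write is split into across the array (ceiling division),
    # each assumed to trigger its own full erase-block read-modify-write.
    chunks = (write_kb + stripe_kb - 1) // stripe_kb
    return chunks * erase_block_kb

print(flash_rewritten_kb(128, 32, 128))   # 512: the example above
print(flash_rewritten_kb(128, 128, 128))  # 128: stripe matches the block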

In conclusion:

Since Intel X25-M drives reportedly have 128K write erase blocks, it's prudent to use a 128K stripe size when RAIDing these drives.

That's what I've chosen.
 
I was wondering about this as well. Given the 160GB Intel SSD is about twice the price of the 80GB, why not just buy 2 x 80GB and RAID-0 them? Same size, twice the speed.
 
Quote: I was wondering about this as well. Given the 160GB Intel SSD is about twice the price of the 80GB, why not just buy 2 x 80GB and RAID-0 them? Same size, twice the speed.

A lot of us are thinking along the same lines. If I'm not saving an appreciable amount by going with the single double-size drive, I may as well see if I have the open bays to leverage multiple drives.

I'd love to see an Intel RAID-10 setup with four 80GB drives.
 