If you're using an SSD as your boot drive, what's your setup? Did you move your user directory? Or maybe you just moved your Mail folders? Or perhaps you made no changes.
HERE is the correct method for moving your home folder. Wild-bill, it seems to me like you did it right. You may want to check this against the method you used.
Summary

System (SSD array) - SSDs mounted in the optical drive bay:
- OS

Data (Raptor array):
- User Folder
- additions in /usr/local
- Client Projects
- Media
- Personal Projects
- iTunes Library
- Timemachine
- Staging
- Downloads

External:
- Timemachine
- Archive Data
I settled on a 16KB stripe after testing against a 128KB stripe. My testing wasn't thorough, nor did it test the layout I eventually settled on. Various benchmarks proved inconclusive; it's best to test against your own data set. That said, the Internet consensus, based on Windows benchmarks, is that 'the' optimal stripe size is 128KB - whatever. Comparatively, the drives are so fast that I'm not sure it's worth the time to optimize further; my apps load instantly as is.
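Since the takeaway is "test against your own data set," here's a minimal sketch of the kind of probe I mean: it times writes at a couple of block sizes and reports MB/s. The file size, block sizes, and use of os.fsync are my own assumptions, not anything Xbench does, and a real test should run against your actual workload on the array itself.

```python
import math
import os
import tempfile
import time

def probe_write_throughput(block_sizes, total_bytes=16 * 1024 * 1024):
    """Write total_bytes in chunks of each block size; return MB/s per size."""
    results = {}
    payload = os.urandom(max(block_sizes))  # reusable random buffer
    for bs in block_sizes:
        fd, path = tempfile.mkstemp()
        try:
            start = time.perf_counter()
            with os.fdopen(fd, "wb") as f:
                written = 0
                while written < total_bytes:
                    f.write(payload[:bs])
                    written += bs
                f.flush()
                os.fsync(f.fileno())  # push the data past the page cache
            elapsed = time.perf_counter() - start
            results[bs] = (total_bytes / (1024 * 1024)) / elapsed
        finally:
            os.remove(path)
    return results

if __name__ == "__main__":
    for bs, mbs in probe_write_throughput([16 * 1024, 128 * 1024]).items():
        print(f"{bs // 1024:>4} KB blocks: {mbs:.1f} MB/s")
```

Numbers from a probe like this only mean something when run on the target volume, and sequential writes to one file are only one facet of a real workload.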
Hi, what was your reasoning on the 16K stripe size? Could it be that your testing was done on fresh new drives, where the write-erase penalty was not yet a factor? What are your Xbench disk results?
Results 426.96
System Info
Xbench Version 1.3
System Version 10.5.8 (9L30)
Physical RAM 16384 MB
Model MacPro3,1
Drive Type System
Disk Test 426.96
Sequential 276.44
Uncached Write 290.23 178.20 MB/sec [4K blocks]
Uncached Write 296.42 167.71 MB/sec [256K blocks]
Uncached Read 155.40 45.48 MB/sec [4K blocks]
Uncached Read 822.71 413.49 MB/sec [256K blocks]
Random 937.33
Uncached Write 1072.27 113.51 MB/sec [4K blocks]
Uncached Write 428.62 137.22 MB/sec [256K blocks]
Uncached Read 2391.72 16.95 MB/sec [4K blocks]
Uncached Read 1713.35 317.92 MB/sec [256K blocks]
Results 334.31
Disk Test 334.31
Sequential 216.26
Uncached Write 188.51 115.74 MB/sec [4K blocks]
Uncached Write 223.39 126.39 MB/sec [256K blocks]
Uncached Read 151.64 44.38 MB/sec [4K blocks]
Uncached Read 471.52 236.98 MB/sec [256K blocks]
Random 736.22
Uncached Write 457.20 48.40 MB/sec [4K blocks]
Uncached Write 470.66 150.67 MB/sec [256K blocks]
Uncached Read 2532.52 17.95 MB/sec [4K blocks]
Uncached Read 1376.61 255.44 MB/sec [256K blocks]
Results 400.90
Disk Test 400.90
Sequential 249.82
Uncached Write 247.50 151.96 MB/sec [4K blocks]
Uncached Write 240.04 135.81 MB/sec [256K blocks]
Uncached Read 153.64 44.96 MB/sec [4K blocks]
Uncached Read 771.36 387.68 MB/sec [256K blocks]
Random 1014.40
Uncached Write 1103.86 116.86 MB/sec [4K blocks]
Uncached Write 494.71 158.37 MB/sec [256K blocks]
Uncached Read 2244.58 15.91 MB/sec [4K blocks]
Uncached Read 1753.20 325.32 MB/sec [256K blocks]
Apparently, the consensus on a 128K stripe is founded on sound reasoning: the erase block size on Intel drives is 128K. That is, for any block that already contains data or once contained data, the drive must first read the full 128K erase block, merge in the new data, and then write the whole block back. You can see why having a stripe size equal to an SSD's erase block size is advantageous. Consider that with a 16K stripe, a single 64K write to the array breaks down into two write operations on each drive, which means there's potential for four write-erase penalties, and in the worst case a total of 512K of data must be read and written. Alternatively, with a 128K stripe, only one drive is involved in the operation (so there is less parallelism), but you incur at most a single write-erase penalty.
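To put numbers on that worst case, here's a back-of-the-envelope calculator. The 128K erase block and two-drive RAID-0 are the assumptions from the paragraph above; the function name and pessimistic every-chunk-hits-a-dirty-block scenario are mine.

```python
import math

ERASE_KB = 128  # assumed erase-block size for the Intel drives discussed above

def worst_case_rmw_kb(write_kb, stripe_kb, erase_kb=ERASE_KB):
    """Worst-case read-modify-write traffic (KB) for one write to a two-drive
    RAID-0, assuming each stripe chunk lands in a different, partially-used
    erase block (the pessimistic scenario described above)."""
    chunks = math.ceil(write_kb / stripe_kb)          # stripe-sized pieces
    blocks_per_chunk = math.ceil(min(stripe_kb, write_kb) / erase_kb)
    penalties = chunks * max(1, blocks_per_chunk)     # one RMW per erase block
    return penalties * erase_kb                       # KB read and rewritten

print(worst_case_rmw_kb(64, 16))   # 4 penalties -> 512 KB shuffled
print(worst_case_rmw_kb(64, 128))  # 1 penalty  -> 128 KB shuffled
```

This is the arithmetic behind the 512K-vs-128K comparison in the post; a real drive's wear leveling and free-block pool make the actual penalty workload-dependent.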
Are you using the two connectors from the optical bay (meaning you don't have an optical unit connected), or are you sharing a single SATA connection between the two SSDs somehow?
The '08 Mac Pro has a total of six SATA connectors. Four serve the drive sleds; I'm using the two spare/unused connectors for the SSDs. The optical drive uses a PATA interface.
Yes, I'm fully aware of the reasoning here; I'm sure we've read all the same articles. I read several forum threads where you (or someone with the same nick) were involved in discussing this exact topic, and over the course of the thread I watched the light click on in the individual.
The problem with this reasoning is that it doesn't account for the opaque path from the high-level API down to the device, and it presupposes that a given benchmark is representative of the deployment environment.
A wise teacher once told me that if you want to become proficient at something then practice it. I think an analogy can be drawn here.