I mentioned it for performance results only, but yes, you'd have to use external enclosure(s) to fit it all and get it operational on a Mac Pro. Unless you do like the experiment and have a rat's nest of cables and drives hanging out of the computer. The problem with that, even if you do have a big budget, is that you can't fit all 24 drives into your case; a Mac Pro can handle 6 at the most, 5 if you still want to keep an optical drive.
Very good point. Since the OS uses data in 512KB blocks, it would need to be adjusted. It can be in Vista and Win7 (at least I can confirm this in 64-bit), but it does need to be tested. Hopefully that's what the SSD makers are doing: working with the OS developers to get SSD optimizations into the OS to help matters.

Yeah, there's definitely a cap on the ICH10's throughput. But I believe we also have a lot to learn about the large write-erase blocks on most drives and how they interact with stripe size. It's odd to me that no professional review sites have tackled this issue. For example, what's the impact of using RAID0 arrays with small stripes (even 128KB) on drives that have write-erase blocks of 512KB?
It's still really early days.
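In the absence of published numbers, here's a back-of-envelope sketch of why small stripes could hurt. It assumes a naive FTL where any chunk smaller than an erase block triggers a full 512KB read-modify-write; real drives buffer, coalesce, and remap, so treat this as a worst-case illustration, not a measurement:

```python
# Worst-case sketch of the stripe-size vs. write-erase-block question.
# Assumes a naive FTL: any chunk smaller than an erase block forces a full
# 512KB read-modify-write. Real drives do better, so this is an upper bound.

ERASE_BLOCK_KB = 512  # the write-erase block size discussed above

def worst_case_amplification(write_kb, stripe_kb):
    """Worst-case internal write traffic vs. host data for one RAID0 write."""
    internal_kb = 0
    remaining = write_kb
    while remaining > 0:
        chunk = min(stripe_kb, remaining)      # RAID0 splits writes at stripe size
        blocks = -(-chunk // ERASE_BLOCK_KB)   # ceil: erase blocks the chunk dirties
        internal_kb += blocks * ERASE_BLOCK_KB
        remaining -= chunk
    return internal_kb / write_kb

# A 256KB host write, small stripes vs. a stripe matching the erase block:
print(worst_case_amplification(256, 128))  # two 128KB chunks -> 4.0x
print(worst_case_amplification(256, 512))  # one 256KB chunk  -> 2.0x
```

If something like this holds even partially, small stripes could double the internal write traffic on mid-sized writes, which would be an easy thing for a review site to test.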
To me, the whole point of it is to feed the cores faster (alleviate the disk-to-CPU bottleneck). It's only useful if the application(s) can benefit from it, so the performance levels needed depend on the specifics of each user & system.

Of course reads should be faster, but SSDs are so damn fast that a RAID0 doubling of read performance will only be really useful in very limited circumstances that are not CPU bound. For example, I don't believe boot times or app load times scale proportionally to the number of SSDs in a RAID0 array. Do they? I'd better start looking for some stats before I look foolish!
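A quick Amdahl's-law sanity check suggests they shouldn't scale proportionally. The 40% CPU-bound share and 10s base load time below are made-up figures, purely for illustration:

```python
# Amdahl's-law sanity check: if only the I/O share of an app load scales
# with drive count, total time can't scale proportionally. The 40% CPU-bound
# share and 10s base time are assumed figures for illustration.

def load_time(base_seconds, cpu_fraction, n_drives):
    """Estimated load time when only the I/O portion speeds up with drives."""
    cpu = base_seconds * cpu_fraction
    io = base_seconds * (1 - cpu_fraction)
    return cpu + io / n_drives

base = 10.0  # hypothetical single-SSD app load time, in seconds
for n in (1, 2, 4):
    t = load_time(base, cpu_fraction=0.4, n_drives=n)
    print(f"{n} drive(s): {t:.1f}s ({base / t:.2f}x speedup)")
# 1 drive: 10.0s (1.00x), 2: 7.0s (1.43x), 4: 5.5s (1.82x)
```

So even doubling the drives only buys ~1.4x on the load if 40% of the work is CPU bound, and it gets worse from there.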
As for specific data, it's hard to find right now (IMO), as it's limited mainly to SSDs on ICH10R SATA. I've been waiting for more RAID card and PCIe flash drive data. My instincts tell me it will scale, so long as the throughput isn't throttled as it is on the ICH10R. It does seem to scale for a pair, but no more (at least with Intel G1s, IIRC); past that, it hits the wall.
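That "wall" is easy to model as a simple ceiling on aggregate reads. Both constants below are ballpark assumptions on my part (~250MB/s per G1 read, ~660MB/s observed ICH10R ceiling), not spec-sheet values:

```python
# The ICH10R "wall", modeled as a ceiling on aggregate RAID0 reads. Both
# constants are ballpark assumptions, not spec-sheet values.

PER_DRIVE_MBS = 250       # assumed sequential read per drive (Intel G1)
CONTROLLER_CAP_MBS = 660  # assumed ICH10R throughput ceiling

def array_read_mbs(n_drives):
    """Ideal RAID0 read throughput, throttled by the controller ceiling."""
    return min(n_drives * PER_DRIVE_MBS, CONTROLLER_CAP_MBS)

for n in range(1, 5):
    print(f"{n} drive(s): {array_read_mbs(n)} MB/s")
# 1: 250, 2: 500, 3: 660, 4: 660 -- scales for a pair, then hits the wall
```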
I even wonder if it's the chipset drivers causing the issue in the first place, as the chipset is capable of more bandwidth from what I can derive from the specs. What I don't know is whether there's something going on internally (latency overhead) that's eating clock cycles and throttling the throughput.
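For what it's worth, a fixed per-request overhead alone would be enough to cap throughput well below the raw link bandwidth. The microsecond figures below are hypothetical placeholders, just to show the shape of the effect:

```python
# How a fixed per-request overhead (driver, chipset, interrupt handling)
# could throttle throughput below raw link bandwidth. All figures here are
# hypothetical placeholders to show the shape of the effect.

def effective_mbs(request_kb, link_mbs, overhead_us):
    """Throughput once a fixed per-request overhead is added to each transfer."""
    transfer_s = (request_kb / 1024) / link_mbs   # time spent moving the data
    overhead_s = overhead_us / 1_000_000          # fixed cost per request
    return (request_kb / 1024) / (transfer_s + overhead_s)

for overhead in (0, 50, 200):  # assumed microseconds of overhead per request
    mbs = effective_mbs(request_kb=128, link_mbs=1000, overhead_us=overhead)
    print(f"{overhead}us overhead: {mbs:.0f} MB/s")
# 0us: 1000 MB/s, 50us: 714 MB/s, 200us: 385 MB/s -- small fixed latencies
# eat a lot of bandwidth at realistic request sizes
```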
I'd certainly like to understand more.