Originally Posted by g4cube
It would seem that a very useful benchmark for SSDs would be IOPS at various block sizes.
Throughput is a factor when copying files or editing video.
IO operations per second would give a good handle on everyday use.
Correct. IOPS is a much better metric. Yes... bandwidth is useful for transferring large files... something that client computers almost never do. By contrast, the vast majority of accesses on client computers are small random reads... with most of the remaining accesses being small random writes. IOPS (and the corresponding low latency numbers) dominate the performance advantages.
So while the bandwidth of an SSD may be a "few times" faster than a HDD (i.e., <10x)... it hardly ever matters. The IOPS of a consumer SSD might be 100x faster than a HDD, and an enterprise SSD 1000x faster. This is what makes your computer fly.
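To make the IOPS-vs-bandwidth distinction concrete, here is a minimal sketch of the two measurements being contrasted: small random 4 KiB reads counted per second versus large sequential reads measured in MB/s. This is a toy illustration, not any poster's actual methodology; real benchmarks use tools like fio with `O_DIRECT` to bypass the OS page cache, whereas this version will largely hit cache, so it shows the measurement method rather than true device numbers. The file name `iops_test.bin` is just a placeholder.

```python
import os
import random
import time

BLOCK = 4096                   # 4 KiB, a typical small random I/O size
FILE_SIZE = 16 * 1024 * 1024   # 16 MiB scratch file (small, for illustration)
PATH = "iops_test.bin"         # hypothetical scratch file name

# Create a scratch file to read from.
with open(PATH, "wb") as f:
    f.write(os.urandom(FILE_SIZE))

def random_read_iops(duration=1.0):
    """Count 4 KiB reads at random offsets completed in `duration` seconds."""
    fd = os.open(PATH, os.O_RDONLY)
    blocks = FILE_SIZE // BLOCK
    ops = 0
    end = time.perf_counter() + duration
    while time.perf_counter() < end:
        # pread reads at an absolute offset without moving the file position.
        os.pread(fd, BLOCK, random.randrange(blocks) * BLOCK)
        ops += 1
    os.close(fd)
    return ops / duration

def sequential_bandwidth(duration=1.0):
    """Measure MB/s of large (1 MiB) sequential reads."""
    fd = os.open(PATH, os.O_RDONLY)
    chunk = 1024 * 1024
    done = 0
    offset = 0
    end = time.perf_counter() + duration
    while time.perf_counter() < end:
        data = os.pread(fd, chunk, offset)
        offset = (offset + chunk) % FILE_SIZE  # wrap around the file
        done += len(data)
    os.close(fd)
    return done / duration / 1e6

if __name__ == "__main__":
    # Note: the scratch file is left behind; delete PATH when done.
    print(f"random 4 KiB reads: {random_read_iops():,.0f} ops/s")
    print(f"sequential reads:   {sequential_bandwidth():,.1f} MB/s")
```

On a cached file both numbers will look inflated, but the shape of the comparison is the point: a HDD's seek latency caps the first number at a few hundred, while an SSD reaches tens of thousands.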
Originally Posted by bplein
IOPS are huge, and bandwidth is important as well. That's why an old, crusty 40GB Intel SSD feels faster than a hard drive: it does IOPS better than the hard drive. It certainly doesn't do bandwidth better!
I work in the solid state industry, with a focus on enterprise (not desktop) class devices. These desktop storage devices are toys to me (but cool enough that I use them for my desktop). Seeing online publications oversimplify things just shows me that they have no idea what real datacenter I/O traffic does to a storage subsystem. Where is the multithreaded testing? Where is the queue depth variance? What about the differences in the SSDs under test? That's the biggest single variable in this test, and it was utterly ignored.
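The queue-depth point deserves a sketch: SSDs deliver far more IOPS when many requests are in flight, which single-threaded tests never expose. Below is a hedged toy version of a queue-depth sweep, approximating queue depth N with N threads that each keep one read outstanding (this works in Python because `os.pread` releases the GIL during the syscall). It is an assumption-laden illustration, not an enterprise test harness; real sweeps use fio's `iodepth` and `numjobs` parameters against a raw device with `O_DIRECT`.

```python
import os
import random
import threading
import time

BLOCK = 4096                   # 4 KiB random reads
FILE_SIZE = 16 * 1024 * 1024   # 16 MiB scratch file (small, for illustration)
PATH = "qd_test.bin"           # hypothetical scratch file name

with open(PATH, "wb") as f:
    f.write(os.urandom(FILE_SIZE))

def _worker(fd, stop, counter, lock):
    """Keep one 4 KiB random read in flight until told to stop."""
    blocks = FILE_SIZE // BLOCK
    local = 0
    while not stop.is_set():
        os.pread(fd, BLOCK, random.randrange(blocks) * BLOCK)
        local += 1
    with lock:
        counter[0] += local

def iops_at_queue_depth(qd, duration=1.0):
    """Approximate queue depth `qd` with `qd` threads issuing reads concurrently."""
    fd = os.open(PATH, os.O_RDONLY)   # pread on a shared fd is thread-safe
    stop = threading.Event()
    counter = [0]
    lock = threading.Lock()
    threads = [threading.Thread(target=_worker, args=(fd, stop, counter, lock))
               for _ in range(qd)]
    for t in threads:
        t.start()
    time.sleep(duration)
    stop.set()
    for t in threads:
        t.join()
    os.close(fd)
    return counter[0] / duration

if __name__ == "__main__":
    # Note: the scratch file is left behind; delete PATH when done.
    for qd in (1, 4, 16):
        print(f"QD{qd:>3}: {iops_at_queue_depth(qd):,.0f} ops/s")
```

Against a real device a HDD barely improves with queue depth (one head, one seek at a time), while an SSD scales until its internal parallelism saturates; that divergence is exactly the variance the post says reviews ignore.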
Our professional paths probably cross.