Well, yes and no. Yes, random I/O operations as conducted in benchmark programs are slower. But this is very misleading, as this "Random I/O" test behavior has almost no real-world analogue. In most cases the majority, or indeed ALL, of the files on an HDD are contiguous. Most I/O operations are single-file reads or writes, and when multi-file I/O does take place it's almost always of the same type and from the same basic location. There are some exceptions, but I would venture to say that for the average SOHO (small office, home office) user, between 95% and 99.8% of all I/O operations look far more like sequential I/O than anything like the "Random I/O" that benchmark programs perform.
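Just to make the contrast concrete, here's a minimal sketch (Python) of what a "random I/O" benchmark is actually doing versus a plain sequential pass over the same file: the random pass forces a head seek before every read, which is exactly the access pattern that almost never shows up in normal single-file use. The file path, block size, and read count are placeholders, and a real benchmark would also bypass the OS page cache (e.g. with O_DIRECT); this only illustrates the pattern.

```python
import os, random, time

# Sketch only: illustrates sequential vs. random access patterns.
PATH = "testfile.bin"   # hypothetical scratch file, ideally a few GB
BLOCK = 4096            # 4 KiB blocks, typical of random-I/O tests
COUNT = 10000           # reads per pass

size = os.path.getsize(PATH)

def sequential_pass(fd):
    os.lseek(fd, 0, os.SEEK_SET)
    for _ in range(COUNT):
        os.read(fd, BLOCK)                  # head stays put between reads

def random_pass(fd):
    for _ in range(COUNT):
        offset = random.randrange(0, size - BLOCK)
        os.lseek(fd, offset, os.SEEK_SET)   # forces a head seek every read
        os.read(fd, BLOCK)

fd = os.open(PATH, os.O_RDONLY)
for name, fn in (("sequential", sequential_pass), ("random", random_pass)):
    start = time.perf_counter()
    fn(fd)
    print(f"{name}: {time.perf_counter() - start:.2f} s for {COUNT} reads")
os.close(fd)
```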
I understand your point, but keep in mind I'm used to Windows. If changes are made, the registry gets wonky and slows you down, even if the drive is contiguous. Or at least that's what I've run into on many occasions, and the worst of it was usually caused by a registry-cleaning utility.
The other factor: though the files may be contiguous, the file sizes vary, particularly for the OS and applications (mostly small files), and that has a great effect on the transfer speed for a given data request. That you can feel, if there are enough of them in the queue.
But we both agree that current benchmarks aren't real world.

Unfortunately, it's all we have, and I keep hoping for something better.
Then also consider that in a 4.5TB to 6TB RAID, likely only about 20% to 25% at most will be used, and all of that sits in a very contiguous area of the HDDs. So random I/O like the benchmark apps do will never actually happen - ever. It's nice to know how your drives perform under those conditions, but it's not related to much of anything in the real world other than maybe a terribly fragmented HDD that's very near 100% full. If you have that then you deserve bad performance anyway! Heh!
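To put a rough number on that: if you assume constant linear bit density (sectors per track scaling with radius) and LBAs filling from the outer edge inward - both simplifications, and the radii below are generic 3.5-inch guesses rather than specs for any particular drive - then the outermost 20% to 25% of capacity only covers something like 15% to 19% of the head's full travel.

```python
# Back-of-envelope: how much of the head's stroke does the first
# 20-25% of capacity actually occupy?  Assumed radii, assumed model.
from math import sqrt

R_OUT = 46.0   # outer usable radius, mm (assumed)
R_IN  = 20.0   # inner usable radius, mm (assumed)

def stroke_fraction(capacity_fraction):
    """Fraction of full-stroke seek distance covered by the outermost
    `capacity_fraction` of LBAs under the linear-density model."""
    r = sqrt(R_OUT**2 - capacity_fraction * (R_OUT**2 - R_IN**2))
    return (R_OUT - r) / (R_OUT - R_IN)

for used in (0.10, 0.20, 0.25):
    print(f"{used:.0%} of capacity -> ~{stroke_fraction(used):.0%} of the stroke")
```

So the seeks that do happen stay short, which is another reason the full-stroke random numbers from a benchmark don't show up in practice.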
I don't assume the data's going to reside on that small a percentage of the capacity unless it's properly planned, so I err on the side that it isn't. You and I know well enough to get more capacity than is actually needed in order to use those outer tracks. Also, OS X and Linux seem to place files better than Windows, which will scatter them.

Take a look at the file distribution of your Windows install.

On a large array, the files that land on the inner tracks can murder your throughput at times, and can make you think something's wrong.
Given your capacity %, it's effectively a short stroke.

(I like to set those at 10% or smaller if possible).
Here the latency times include the time it takes to read or write the data chunks or scratch file as well as the seeks, so as the data chunks get bigger the latency times increase. This is a RAID0 of probably the slowest-seeking drives on the market: their manufacturer-specified average seek time is 16 ms, and I don't know of a slower drive. As you can see, the overall average is very small and quite similar to SSD times. Unless you're attempting to read or write a few thousand files at once you won't notice any difference, and I doubt very seriously you could even measure the difference with a stopwatch.
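Here's the arithmetic I mean, roughly: total access time per chunk is about seek + rotational latency + transfer time. The 16 ms seek is the manufacturer figure above; the spindle speed and the RAID0 sustained transfer rate in this sketch are assumptions for illustration only.

```python
# Access time per chunk = seek + rotational latency + transfer time.
SEEK_MS        = 16.0                  # manufacturer average seek (from above)
RPM            = 5400                  # assumed spindle speed
ROT_LATENCY_MS = 0.5 * 60000 / RPM     # average = half a rotation
STREAM_MBPS    = 200.0                 # assumed RAID0 sustained rate

def access_time_ms(chunk_bytes):
    transfer_ms = chunk_bytes / (STREAM_MBPS * 1e6) * 1000
    return SEEK_MS + ROT_LATENCY_MS + transfer_ms

for label, size in (("4 KiB", 4096), ("1 MiB", 2**20), ("64 MiB", 64 * 2**20)):
    transfer = access_time_ms(size) - SEEK_MS - ROT_LATENCY_MS
    print(f"{label:>7}: {access_time_ms(size):7.1f} ms total "
          f"({transfer:6.1f} ms of that is transfer)")
```

The point being: once the chunks get large, the transfer time swamps the seek, so the "latency" a benchmark reports for big blocks is mostly just throughput in disguise.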
I usually don't see those latencies with Windows, and the RE3s I've got have a latency of 12.7 ms (worst test results, not manufacturer data). But as I mentioned, the data's scattered (not entirely contiguous), and some of it is fixed in its location (back end of the drive = innermost tracks; eww) and won't be moved by a defrag.

Hence the effect I mentioned earlier. The OS does make a difference.

Data files not belonging to the OS are another story.