HH, I hope you don't mind me re-posting an old post of mine where we discussed SSD performance. If you do, then feel free to delete this.
Original thread link:
https://forums.macrumors.com/threads/1197959/
---------------------------------
There is no question that the Apple-supplied SSDs are overpriced, but that is not what I am discussing. If the drive is user-upgradeable, do you really need to pay more for a Vertex 3 when an Intel 320 is fast enough for most users and workflows? Never mind the reliability can of worms.
Anand summarises it best in one of his SSD reviews, so I'll quote it here.
Anandtech.com said:
The majority of our SSD test suite is focused on I/O bound tests. These are benchmarks that intentionally shift the bottleneck to the SSD and away from the CPU/GPU/memory subsystem in order to give us the best idea of which drives are the fastest. Unfortunately, as many of you correctly point out, these numbers don't always give you a good idea of how tangible the performance improvement is in the real world.
Some of them do. Our 128KB sequential read/write tests as well as the ATTO and AS-SSD results give you a good indication of large file copy performance. Our small file random read/write tests tell a portion of the story for things like web browser cache accesses, but those are difficult to directly relate to experiences in the real world.
So why not exclusively use real world performance tests? It turns out that although the move from a hard drive to a decent SSD is tremendous, finding differences between individual SSDs is harder to quantify in a single real world metric. Take application launch time for example. I stopped including that data in our reviews because the graphs ended up looking like this:
All of the SSDs performed the same. It's not just application launch times though. Here is data from our Chrome Build test timing how long it takes to compile the Chromium project:
Even going back two generations of SSDs, at the same capacity nearly all of these drives perform within a couple of percent of one another. Note that the Vertex 3 is a 6Gbps drive and it doesn't even outperform its predecessor.
In doing these real world use tests I get a good feel for when a drive is actually faster or slower than another. My experiences typically track with the benchmark results but it's always important to feel it first hand. What I've noticed is that although single tasks perform very similarly on all SSDs, it's during periods of heavy I/O activity that you can feel the difference between drives. Unfortunately these periods of heavy I/O activity aren't easily measured, at least in a repeatable fashion. Getting file copies, compiles, web browsing, application launches, IM log updates and searches to all start at the same time while properly measuring overall performance is near impossible without some sort of automated tool.
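To see what that last sentence means in practice, here is a toy sketch of mine in Python (emphatically not Anand's actual tool, and the file sizes are made up) of the automated, repeatable kind of measurement he is describing: kick off several disk-heavy tasks at the same instant and time the whole batch.

```python
# Toy illustration (not Anand's actual tool) of why repeatable
# measurement of overlapping I/O needs automation: several disk-heavy
# tasks are started at the same moment and the whole batch is timed.
import os
import time
from concurrent.futures import ThreadPoolExecutor

def write_file(path: str, size_mb: int) -> None:
    """Write `size_mb` megabytes of zeros to `path` and force it to disk."""
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(b"\0" * 1024 * 1024)
        f.flush()
        os.fsync(f.fileno())

start = time.perf_counter()
with ThreadPoolExecutor() as pool:
    # Stand-ins for "file copies, compiles, downloads...": four concurrent writers.
    futures = [pool.submit(write_file, f"bench_{i}.bin", 128) for i in range(4)]
    for fut in futures:
        fut.result()
print(f"Batch wall-clock time: {time.perf_counter() - start:.1f}s")

# Clean up the test files afterwards.
for i in range(4):
    os.remove(f"bench_{i}.bin")
```

A human can't launch all of those at the same instant twice in a row; a script can, which is exactly why Anand's suite replays a recorded trace rather than relying on a person at the keyboard.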
Now this, unfortunately, is where Anand and I differ in thought when it comes to his "storage bench" suites. I understand what he is doing, and I respect the site and Anand's work. But the problem is that certain impressionable people see the benchmarks and do not realise what the numbers are actually telling them.
Anandtech.com said:
The best we can offer is our Storage Bench suite. In those tests we are actually playing back the I/O requests captured of me using a PC over a long period of time. While all other bottlenecks are excluded from the performance measurement, the source of the workload is real world in nature.
What you have to keep in mind is that a performance advantage in our Storage Bench suite isn't going to translate linearly into the same overall performance impact on your system. Remember these are I/O bound tests, so a 20% increase in your Heavy 2011 score is going to mean that the drive you're looking at will be 20% faster in that particular type of heavy I/O bound workload. Most desktop PCs aren't under that sort of load constantly, so that 20% advantage may only be seen 20% of the time. The rest of the time your drive may be no quicker than a model from last year.
The point of our benchmarks isn't to tell you that only the newest SSDs are fast, but rather to show you the best performing drive at a given price point. The best values in SSDs are going to be last year's models without a doubt. I'd say that the 6Gbps drives are interesting mostly for the folks that do a lot of large file copies, but for most general use you're fine with an older drive. Almost any SSD is better than a hard drive (almost) and as long as you choose a good one you won't regret the jump.
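It's worth making that 20% point concrete. Here is a quick back-of-the-envelope sketch in Python (the figures are Anand's hypothetical ones, not measurements), applying Amdahl's-law-style reasoning:

```python
def overall_speedup(io_fraction: float, io_speedup: float) -> float:
    """Amdahl's-law-style overall speedup when only the I/O-bound
    fraction of total time benefits from the faster drive."""
    return 1.0 / ((1.0 - io_fraction) + io_fraction / io_speedup)

# Anand's hypothetical: 20% of your time is I/O bound, and the new
# drive is 20% faster during exactly those periods.
print(overall_speedup(io_fraction=0.20, io_speedup=1.20))
# -> ~1.034, i.e. roughly a 3.4% improvement overall
```

A drive that is 20% faster during the 20% of the time you're I/O bound makes you roughly 3% faster overall. That is the gap between the benchmark number and what you actually feel.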
Here is a description of the test:
Anandtech.com said:
Not too long ago I tweeted that I had created what I referred to as the Mother of All SSD Benchmarks (MOASB). Rather than only writing 4GB of data to the drive, this benchmark writes 106.32GB. It's the load you'd put on a drive after nearly two weeks of constant usage. And it takes a *long* time to run.
1) The MOASB, officially called AnandTech Storage Bench 2011 - Heavy Workload, mainly focuses on the times when your I/O activity is the highest. There is a lot of downloading and application installing that happens during the course of this test. My thinking was that it's during application installs, file copies, downloading and multitasking with all of this that you can really notice performance differences between drives.
2) I tried to cover as many bases as possible with the software I incorporated into this test. There's a lot of photo editing in Photoshop, HTML editing in Dreamweaver, web browsing, game playing/level loading (Starcraft II & WoW are both a part of the test) as well as general use stuff (application installing, virus scanning). I included a large amount of email downloading, document creation and editing as well. To top it all off I even use Visual Studio 2008 to build Chromium during the test.
And finally, here is the disk busy time chart, which tells us everything we need to know. Clearly some drives are faster than others. Looking at the chart without considering what it is actually saying, the Vertex 3 and the other new SandForce-based drives look very impressive indeed; they are clearly the quickest by far! But hold on a moment… take a look at the chart and consider what this benchmark actually is.
It is run using an automated tool that plays back a recording of work done by a real user over a couple of days. It is clearly not something that a normal person would be looking to do in a matter of minutes.
Now, as much as I like to think I am pretty good at what I do, and I earn decent money doing it, no matter how hard I try I cannot edit images, play games, surf the web, compile my code, copy files and install applications all at the same time, or even one after another, within 700 seconds.
There are times when I do a lot at once, and then there are times when I simply read the web page, edit a document, edit my code, or swear at the computer in frustration. I am not constantly doing something that requires the drive to work hard. This chart tells us that the difference between the fastest and the slowest drive tested is 869.7 seconds, which is nearly 14.5 minutes. So, by buying the fastest drive, I have saved 14.5 minutes on a workload that represents a couple of days of use from a human perspective. Think about that for a moment. How much time have I really saved?
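Run the numbers yourself; a quick sketch (the 869.7-second gap is from the chart, and "a couple of days" is my reading of the usage period the trace represents):

```python
gap_seconds = 869.7                  # fastest vs. slowest drive, from the chart
print(gap_seconds / 60)              # -> ~14.5 minutes saved in total

recorded_usage = 2 * 24 * 60 * 60    # "a couple of days" of real use, in seconds
print(gap_seconds / recorded_usage)  # -> ~0.005, i.e. about 0.5% of that time
```

Half a percent of your time, spread invisibly across two days of work.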
The point remains: unless you're doing 50 things at the same time and running 5 automated build servers on 5 different virtual machines, all compiling lots of code and running lots of automated tests, while you download, copy files, edit photos and edit a movie on the side, you, the real user, are the bottleneck, not the SSD.
There are instances and workflows where the fastest SSD is justifiable, but even the "slow", "crappy" drives are fast enough for 98% of you.