6Gbps SSD blades

Jul 22, 2011, 05:16 PM
Anyone have a clue how long it'll take to produce these things? All this discussion about the Toshiba/Samsung difference isn't really that drastic, considering they're both still 3Gbps drives. Who thinks we'll be able to upgrade our drives sometime in the next year with a 6Gbps drive?

Jul 22, 2011, 05:58 PM
Depends on if the new Air even supports SATA III.

Apple Expert
Jul 22, 2011, 06:08 PM
They do.


Jul 22, 2011, 06:29 PM
Depends on if the new Air even supports SATA III.

Yeah, my Air supports 6Gbps, but the SSD clocks well under that. Hence the question :D

Jul 25, 2011, 03:59 PM
Yeah, my Air supports 6Gbps, but the SSD clocks well under that. Hence the question :D

I just got mine today (CTO), and yeah, I see it does now. I'd wager that OWC will be the first to market with one. They already sell 3Gbps MBA drives, and they have access to the 6Gbps SandForce controller for their 2.5" version.

The only real blocker I see is if the 6Gbps SandForce controller doesn't fit on the board for some reason.

Jul 25, 2011, 04:19 PM
Yeah, my Air supports 6Gbps, but the SSD clocks well under that. Hence the question :D

What are you planning to do with your MBA? I ask because you won't see a significant difference between "slow" SSDs and "fast" SSDs in normal use, unless you are transferring or creating large files on a daily basis.


I like this from Anand

The majority of our SSD test suite is focused on I/O bound tests. These are benchmarks that intentionally shift the bottleneck to the SSD and away from the CPU/GPU/memory subsystem in order to give us the best idea of which drives are the fastest. Unfortunately, as many of you correctly point out, these numbers don't always give you a good idea of how tangible the performance improvement is in the real world.

Some of them do. Our 128KB sequential read/write tests as well as the ATTO and AS-SSD results give you a good indication of large file copy performance. Our small file random read/write tests tell a portion of the story for things like web browser cache accesses, but those are difficult to directly relate to experiences in the real world.
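To make the distinction concrete, here is a rough sketch of the two kinds of test the quoted passage describes: a 128KB sequential read pass and a batch of small 4KB random reads. This is an illustration only, not a real benchmark — all names and sizes are made up, and because nothing here bypasses the OS page cache (real tools use O_DIRECT or equivalent), the numbers it prints will be far higher than the drive's actual speed.

```python
import os
import random
import tempfile
import time

CHUNK = 128 * 1024           # 128KB, the sequential test size mentioned above
FILE_SIZE = 8 * 1024 * 1024  # small 8MB test file so the sketch runs quickly

# Create a throwaway test file filled with incompressible bytes.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(os.urandom(FILE_SIZE))

def sequential_read(path):
    """Read the whole file front to back in 128KB chunks; return bytes/sec."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(CHUNK):
            pass
    return FILE_SIZE / (time.perf_counter() - start)

def random_read(path, block=4096, count=512):
    """Read `count` random 4KB blocks, like browser-cache-style accesses."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        for _ in range(count):
            f.seek(random.randrange(0, FILE_SIZE - block))
            f.read(block)
    return (block * count) / (time.perf_counter() - start)

seq_bps = sequential_read(path)
rnd_bps = random_read(path)
print(f"sequential: {seq_bps/1e6:.1f} MB/s, random 4KB: {rnd_bps/1e6:.1f} MB/s")
os.remove(path)
```

Even this toy version shows why the two numbers tell different stories: the sequential figure tracks large file copies, while the random figure is closer to what everyday small-file access feels like.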

So why not exclusively use real world performance tests? It turns out that although the move from a hard drive to a decent SSD is tremendous, finding differences between individual SSDs is harder to quantify in a single real world metric. Take application launch time for example. I stopped including that data in our reviews because the graphs ended up looking like this:

[chart omitted: application launch times across SSDs]

All of the SSDs performed the same. It's not just application launch times though. Here is data from our Chrome Build test timing how long it takes to compile the Chromium project:

[chart omitted: Chromium compile times across SSDs]

Even going back two generations of SSDs, at the same capacity nearly all of these drives perform within a couple of percent of one another. Note that the Vertex 3 is a 6Gbps drive, yet it still doesn't outperform its predecessor.

In doing these real world use tests I get a good feel for when a drive is actually faster or slower than another. My experiences typically track with the benchmark results but it's always important to feel it first hand. What I've noticed is that although single tasks perform very similarly on all SSDs, it's during periods of heavy I/O activity that you can feel the difference between drives. Unfortunately these periods of heavy I/O activity aren't easily measured, at least in a repeatable fashion. Getting file copies, compiles, web browsing, application launches, IM log updates and searches to all start at the same time while properly measuring overall performance is near impossible without some sort of automated tool.