The replies in this thread are very interesting, thank you.
The thing that absolutely amazes me is the amount of money that some of you people have put into multiple SSDs with RAID.
Some people think I spent too much money buying a Mac edition GTX 285.
Some of this SSD money puts my GTX 285 to shame.
After looking at these responses, I am trying to decide if I can work with a 160GB Intel X25-M.

What is the intended purpose of the drive?

Video editing.
 
It will depend on the usage.
As a boot/application drive, an SSD makes a night-and-day difference in performance.

It's not incorrect.
It was and still is. I suggest you look into what "seek time" is. You seem to have it confused with "throughput".

Just love all the SSD fanboys, one solution for everyone's needs, right? :rolleyes:
...
The only way to get real desktop performance is either SSD or RAID0.
Yep.

But if you're doing a lot of writes, an SSD may not be the best bang for the buck, especially if it's large sequential data.
An intelligent person in that situation wouldn't be considering an SSD in the first place.
 
Then a 2TB drive is what you want.

Maybe even two 1TBs in RAID 0 if you're working with RAW HD.

No, but thank you.
I was looking for speed.
If the VelociRaptor isn't fast enough then the SSD is either the answer or I just don't need a new hard drive.
 
Give this guy an SSD for a day and he won't go back either! :p
With the type of writes I'm thinking of, an SSD may only last a day. :eek: :D :p

Think simulations running 1 TB+ in writes per day (InfiniBand-connected clusters). ;)

Of course when I have the funds, I'll get myself 8 Intel 160GB X25s and stripe them. That would be fast enough for just about anything. :p
Traitor! :eek: :D :p

Then a 2TB drive is what you want.

Maybe even two 1TBs in RAID 0 if you're working with RAW HD.
Possible with a pair of 1 TB Colossus SSDs. Not cheap by any means though, as the last street prices would put it over $7k USD. :(

As a boot/application drive, an SSD makes a night-and-day difference in performance.
I've said this many times. But it's a specific usage, not a general statement that applies to everyone, no matter what they're doing.

It was and still is. I suggest you look into what "seek time" is. You seem to have it confused with "throughput".
I fully understand what seeks are, and how they occur.

What you're neglecting to realize is that simultaneous access across all the members in a RAID set does in fact speed up random access time (the aggregate ms value), even though the seek rate (ms) of each individual drive does NOT change. It's not a miracle, but a result of how the data is split amongst the members. The files are no longer on a single drive, so multiple files are fed from all the members; small files don't require storage on every member (fewer stripes are needed, so a file isn't spread across the entire set). This allows multiple files to be loaded at the same time (no lag from head movement between them). i.e. I can come close to cutting a large set's ms time in half, so long as the files aren't too big. This is why it varies, and there's no single algorithm to determine the improvement.

A given drive's single-drive access time ≠ the set's access time, as the latter is an aggregate value determined by the set size, file size, and stripe size.
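
To put rough numbers on that, here's a toy model (Python; the 12 ms figure and the uniform spread of files across members are assumptions for illustration, not measurements) of why the aggregate per-request time drops when small files are spread across a stripe set:

```python
# Toy model (not a benchmark): apparent access time of a stripe set
# when every file is smaller than the stripe, so each read lands on
# exactly one member and the members seek independently.
# The 12 ms figure and uniform spread are assumptions for illustration.
import random

SEEK_MS = 12.0       # assumed single-drive average access time
REQUESTS = 10_000    # a queue of small random reads

def apparent_ms(members: int) -> float:
    # Which member a small file lives on depends on where its stripe
    # sits; model that as a uniform pick across the set.
    load = [0] * members
    for _ in range(REQUESTS):
        load[random.randrange(members)] += 1
    # Members seek concurrently, so the queue drains when the busiest
    # member finishes; divide by the request count for the aggregate.
    return max(load) * SEEK_MS / REQUESTS

for n in (1, 2, 4):
    print(f"{n} member(s): ~{apparent_ms(n):.1f} ms per request")
# Prints roughly 12, 6, and 3 ms: the ideal case. Uneven distribution
# and larger files pull the numbers back toward the single-drive value.
```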

An intelligent person in that situation wouldn't be considering an SSD in the first place.
You'd be surprised.

Not many spend the time in the specification sheets to realize what's going on anyway, and it's not that easy to understand at a glance. Many may come to the conclusion that they're bullet-proof compared to mechanical drives, which isn't the case.
 
What you're neglecting to realize is that simultaneous access across all the members in a RAID set does in fact speed up random access time
Incorrect. That will increase throughput, but not seek times.
You still have the same 7ms limitations of the physical heads moving to find data.
You'll get x number of bits at the same time, but if you access a different file you have to wait the 7ms for all the drives to move their heads to the new location.

The files are no longer on a single drive, so multiple files are fed from all the members; small files don't require storage on every member
That is not how striping works.

You'd be surprised.
You would be too if you figured out how they actually work. :rolleyes:

Even if you miraculously got a RAID's average seek time down to 1ms, that's still 10 times slower than even the cheapest SSD.
 
Incorrect. That will increase throughput, but not seek times. ... That is not how striping works.
Sort of. For sequential access, you're correct. But that's NOT the case with random access (i.e. file size is smaller than the stripe), which is what I was discussing.

Take a look at the following:
RAID 0 performance
While the block size can technically be as small as a byte, it is almost always a multiple of the hard disk sector size of 512 bytes. This lets each drive seek independently when randomly reading or writing data on the disk. How much the drives act independently depends on the access pattern from the file system level. For reads and writes that are larger than the stripe size, such as copying files or video playback, the disks will be seeking to the same position on each disk, so the seek time of the array will be the same as that of a single drive. For reads and writes that are smaller than the stripe size, such as database access, the drives will be able to seek independently. If the sectors accessed are spread evenly between the two drives, the apparent seek time of the array will be half that of a single drive (assuming the disks in the array have identical access time characteristics). The transfer speed of the array will be the transfer speed of all the disks added together, limited only by the speed of the RAID controller. Note that these performance scenarios are in the best case with optimal access patterns.

Source.
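
To make the quoted behavior concrete, here's a minimal sketch of the RAID 0 address mapping (the two-member set and 64K stripe are assumptions for illustration): a read smaller than the stripe lands on one member, while a read larger than the stripe drags every member's heads to the same spot.

```python
# Sketch of how RAID 0 maps a read onto members (member count and
# stripe size are assumptions for illustration).
STRIPE = 64 * 1024   # 64 KiB stripe
MEMBERS = 2

def members_touched(offset: int, length: int) -> set:
    # Stripe k of the logical volume lives on member k % MEMBERS.
    first = offset // STRIPE
    last = (offset + length - 1) // STRIPE
    return {k % MEMBERS for k in range(first, last + 1)}

print(members_touched(0, 8 * 1024))      # {0}: smaller than the stripe;
                                         # one member seeks, the other is free
print(members_touched(0, 256 * 1024))    # {0, 1}: larger than the stripe;
                                         # every member seeks to the same spot
```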

You would be too if you figured out how they actually work. :rolleyes:
I understand it quite well, actually. You've misunderstood the concept of random access performance within RAID, and it differs between levels. Worse yet, real-world results may not be anywhere near ideal.

But in any event, the improvement in aggregate random access time (ms) can be seen with some benchmarking utilities, and the set's figure is lower than that of one of the members tested as a single drive.

Even if you miraculously got a RAID's average seek time down to 1ms, that's still 10 times slower than even the cheapest SSD.
I've never claimed it would outpace an SSD in terms of random access times. But SSDs aren't the end-all-be-all of drive technology as it exists right now either. :rolleyes:

It comes down to specific usage and budgets, so what's the best solution for one user isn't for another. This isn't a hard concept to understand. ;)
 
Don't worry, you'll get it someday.
I even bothered to both quote and link evidence to what I've been able to reproduce in testing, yet you ignore it.

So how come you can't realize that there is a difference between the access times of a single drive and a stripe set made of the same drive model?
 
Don't worry, you'll get it someday.

How old are you? 15? :eek:

Nanofrog has demonstrated his deep understanding of all things storage related in countless threads here. He gets it.

With regards to access times, I believe he's talking about small files and large stripes... while you are talking about large files on small stripes. Both are correct. As for his view on different storage solutions for different workloads... I agree that there's no such thing as one size fits all as he says, but I do believe for almost anyone frequenting these forums, an SSD will pay big dividends in perceived performance, if not real performance.
 
It can decrease access times... but not always. It depends on how much data and where it is located on the platters.
Yep.

File size, stripe size, and the distribution of data all affect it. So though half is possible, it's not common from what I've seen. But even 2ms shaved off (i.e. 12ms single drive, set produces ~10ms) is an improvement, and that's fairly realistic for general use setups. :)

With regards to access times, I believe he's talking about small files and large stripes...
Yep. :)

Quite applicable information for many, as most here are interested in or are using a RAID0 with the stripe size at 64K or larger, and it contains everything but the kitchen sink (OS, apps, data).
 
Thank you for contradicting yourself.
It's not a contradiction. There are dependencies on multiple factors, so the ideal scenario doesn't always exist. It was stated in the previous posts, and is also mentioned within the supporting information I both quoted and linked. :rolleyes:

That's still 100x slower than an SSD.
Random access rates aren't the only aspect of drive performance, and may be a distant consideration next to other areas, such as sequential throughput, or, with a different level, redundancy, which a stripe set can't provide (critical in servers).

It depends on the specific needs. Just because you love SSDs and think they're the best thing since sliced bread, that may not be the case for others. For example, for those that have large files, random access performance won't be the primary consideration, as sequential access is what matters (where most of the data acquisition time is spent; i.e. sequential vs. random as an avg. %). If they need both, they can even use separation. OS drives aren't uncommon in servers (RAID1), while the data could be in multiple arrays in other levels (parity <5/6> to nested parity <50/60>).

It just seems that you have absolutely no concept of data usage design for storage applications, as your statements are SSD this, SSD that.
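
For what it's worth, a back-of-envelope model shows the usage-dependence point (every throughput and IOPS figure below is hypothetical, picked only to illustrate the trade-off, not taken from any spec sheet):

```python
# Back-of-envelope model (all figures are hypothetical, not vendor
# specs): time to service a workload split into large sequential
# transfers and small random operations.
def workload_seconds(seq_mb, rand_ops, seq_mbps, rand_iops):
    return seq_mb / seq_mbps + rand_ops / rand_iops

# A sequential-heavy editing workload: 50 GB streamed, 5,000 small ops.
# Hypothetical 4-member mechanical stripe vs. a single SSD.
hdd_stripe = workload_seconds(50_000, 5_000, seq_mbps=4 * 100, rand_iops=150)
ssd        = workload_seconds(50_000, 5_000, seq_mbps=250, rand_iops=5_000)
print(f"HDD stripe: {hdd_stripe:.0f} s   SSD: {ssd:.0f} s")
# ~158 s vs. ~201 s: the stripe wins while the mix stays sequential;
# shift the mix toward small random reads and the SSD pulls ahead.
```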
 
Speaking as someone who edits video and photos, using a Velociraptor as a boot/apps drive, I would say that the SSD is by far the superior choice for this and I, too, expect the Velociraptors to die off unless major improvements are coming.

There's already been a lot of talk about various speed statistics, but I'll go ahead and say that access time is a major hold-up in my system - you can hear it - and that an SSD would address this well. I obviously wouldn't want to use it as a scratch disk - way better options for that - but you want the far lower seek time and faster random IOPS an SSD allows.

Velociraptors are probably the best physical disk option for a boot drive - seek time far better than all the other HDDs - but they pale before even mid-tier SSDs, let alone the top shelf offerings by Intel and OCZ.
 
It depends on the specific needs. Just because you love SSDs and think they're the best thing since sliced bread, that may not be the case for others. For example, for those that have large files, random access performance won't be the primary consideration, as sequential access is what matters (where most of the data acquisition time is spent; i.e. sequential vs. random as an avg. %).

That takes me back to the thread that Tess made a few months back testing an SSD RAID vs an HDD RAID (RAID5?). IIRC the HDD RAID was substantially faster for volume throughput of large files. This is very important for a lot of people - SSD isn't the be-all and end-all. It's more of a stand-in until optical storage comes about, right? ;)
 
Speaking as someone who edits video and photos, using a Velociraptor as a boot/apps drive, I would say that the SSD is by far the superior choice for this and I, too, expect the Velociraptors to die off unless major improvements are coming.

There's already been a lot of talk about various speed statistics, but I'll go ahead and say that access time is a major hold-up in my system - you can hear it - and that an SSD would address this well. I obviously wouldn't want to use it as a scratch disk - way better options for that - but you want the far lower seek time and faster random IOPS an SSD allows.

Velociraptors are probably the best physical disk option for a boot drive - seek time far better than all the other HDDs - but they pale before even mid-tier SSDs, let alone the top shelf offerings by Intel and OCZ.
A single SSD vs. a RAID set of mechanical drives: the mechanical set can outperform the SSD in terms of sequential throughput if there are adequate members, so parallelism works in your favor. The other issue is capacity for the funds, and budgets typically aren't unlimited.

If they are, and you want to go with SSDs in a RAID configuration, that's quite possible. It's just expensive in comparison to mechanical (capacity), and in the realm of write reliability, you have to make sure there's adequate unused capacity on the array for wear leveling.

Also, please note that with a mechanical array, power settings can slow you down in software-based RAID (and occasionally with hardware cards that have the MAID feature, when it's set active via a time setting): the drives spin up, then perform the function (read or write). You will notice the difference vs. drives that are always spinning.

That takes me back to the thread that Tess made a few months back testing an SSD RAID vs an HDD RAID (RAID5?). IIRC the HDD RAID was substantially faster for volume throughput of large files. This is very important for a lot of people - SSD isn't the be-all and end-all. It's more of a stand-in until optical storage comes about, right? ;)
The parallelism (more members in a set) that can be achieved on the same budget (or less) can produce greater throughputs.
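
As a rough sketch of that budget math (the street prices and per-drive rates below are assumptions, scaling is treated as ideal, and the cost of a controller that can stripe that many drives is ignored):

```python
# Rough budget math (street prices and per-drive rates are
# assumptions; striping is assumed to scale ideally and
# controller cost is ignored).
drives = {
    # name: (price_usd, seq_mbps, capacity_gb)
    "Intel X25-M 160GB": (450, 250, 160),
    "1TB 7200rpm":       (100, 100, 1000),
}
BUDGET = 900
for name, (price, mbps, gb) in drives.items():
    n = BUDGET // price
    print(f"{n} x {name}: ~{n * mbps} MB/s striped, {n * gb} GB total")
# 2 x X25-M: ~500 MB/s over 320 GB; 9 x 1TB: ~900 MB/s over 9 TB.
# Same money, more members: the mechanical set wins on sequential
# throughput and capacity, the SSDs on access time.
```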
 
In case my post wasn't clear, I meant using the SSD as a boot/apps drive. Throwing SSDs at video editing as media drives is just silly unless you're either so wealthy that a Mac Pro is chump change, or someone's paying you to edit a feature film, in which case they've probably already set up your rig. The new big HDDs are just fine for media, where sequential read/write is more important than access time.
 