Wasn't the test done with an identical number of drives?
Not that I recall, but thinking in terms of throughput and budget, definitely not anyway.

Say 2x Intel 80GB G2 SSD = 500MB/s @ $600USD.
Now 6x WD 1TB Caviar Black drives = 600MB/s @ $600USD.
*both are figured as stripe sets*

In this case, the mechanical array has a modest edge in sequential throughput (especially writes, as the SSDs can do 200MB/s or so combined, while the mechanical set manages more than 400MB/s), and there's a drastic difference in capacity as well (160GB vs. 6TB).

So for large capacity and/or sequential throughput (especially writes), mechanical arrays make far more sense. ;)
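Here's the quick math in Python, if it helps; the per-drive figures are just the rough assumptions from the example above, not benchmarks:

```python
# Back-of-envelope comparison of the two stripe sets above (RAID 0,
# assuming near-linear scaling). Per-drive figures are assumptions.

def stripe(n, gb_each, read_each, write_each, total_cost):
    return {
        "capacity_GB": n * gb_each,
        "read_MBps":   n * read_each,
        "write_MBps":  n * write_each,
        "USD_per_GB":  round(total_cost / (n * gb_each), 2),
    }

ssd = stripe(2, 80,   250, 100, 600)  # 2x Intel 80GB G2 (~$300/ea)
hdd = stripe(6, 1000, 100,  70, 600)  # 6x WD 1TB Caviar Black (~$100/ea)

print("SSD stripe:", ssd)  # ~160GB, ~500/200 MB/s, ~$3.75/GB
print("HDD stripe:", hdd)  # ~6TB, ~600/420 MB/s, ~$0.10/GB
```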

In case my post wasn't clear, I meant using the SSD as a boot/apps drive. Throwing SSDs at video editing as media drives is just silly unless you're either so wealthy that a Mac Pro is chump change, or someone's paying you to edit a feature film, in which case they've probably already set up your rig. The new big HDDs are just fine for media, where sequential read/write is more important than access time.
SSD as an OS/app disk would be nice if it's possible (budget). Otherwise, everything can be shoved on the array (particularly useful in tighter budgets). :)

Physical space can be an issue, but I see that as one that can be overcome with adequate funds. ;) So I still see it as budget. :p

Velociraptor all the way. SSD's just aren't there yet.
For high write situations, not really.
 

Mechanical arrays do make a lot more sense in those sorts of situations. That's probably why a lot of businesses are still using mechanical drives; I haven't really seen or heard of that many changing over to SSDs. I wonder what sort of power decrease they would see if they switched, though it's probably not worth it. Can't forget the physical space requirements either, I guess!

160GB vs. 6TB is the killer for me. You'd be a TOTAL idiot to go with the 2x SSDs in this situation; what possible benefit would there be at all!? (rhetorical, lol) :rolleyes:
 
I bought a 300GB VelociRaptor and had it installed for about a month; got a good deal on it at the time. It is fast, but too noisy for me, so I removed it. It's not incredibly loud or anything, but I was using it in a home theater computer. The difference in speed wasn't all that noticeable either, though that may vary from system to system. Mine has been sitting on a table for months now. I figure I'll build a server one day and use it for that, but I have yet to do so.
 
It all comes down to specific needs. As for why businesses stay with mechanical drives: corporate environments need capacity, speed, low power, and small form factors.

So they use mechanical in the vast majority of situations. Green (low-power) drives have their place, as they do save on electricity for the data center, and the smaller-format drives reduce the physical space needed as well. Far cheaper than expanding an existing building or moving to a new one.

What do you consider a high-write situation? In general, what is written must be read at some point.
Of course data that's read has to be written. :p

But to give you an idea:
  • Database use, where existing files are updated multiple times a day (same tracks written over and over throughout the drive's life). Account and banking data are good examples.
  • Scratch space
  • Large files that have to be written often, such as video/graphics projects (this gets into capacity usage and throughput). Also, until the project file is completed, it's updated with each step saved, so there's some re-writing of files.
 
Again, save a few bucks and get way more storage. The WD Black series 2TB standard HDD currently outperforms the VelociRaptor according to the tests linked above (and personal experience; I compared them in real terms using CS4 and Final Cut yesterday: the difference is negligible, but the WD Black did perform slightly better, and for around the same price you get, what, 7x the storage?)...
 
The VR is harder to justify now compared to the newer 7200rpm drives. It's due for a new model soon, and there's been an article indicating WD is working on a 20,000rpm version (not sure of its validity, but it's out there if you search).
 
Wow! I wonder how long until we see 1Gb/s read/write on a single drive. ;)
 
1Gb/s = 125MB/s, and we've already exceeded that on SAS disks, and we're close to it, if not there, on SATA (the newest 7200rpm models with 500GB/platter densities). :p
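The conversion, for anyone following along (decimal units, as drive makers use them):

```python
# 1Gb/s (gigabit per second) to MB/s (megabytes per second): divide by 8.
bits = 1_000_000_000         # 1 gigabit
print(bits / 8 / 1_000_000)  # 125.0 MB/s
```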
 
Sorry to take this on a tangent, but I'd like to dig into this a bit...

Of course data that's read has to be written. :p

But to give you an idea:
  • Database use, where existing files are updated multiple times a day (same tracks written over and over throughout the drive's life). Account and banking data are good examples.
  • Scratch space
  • Large files that have to be written often, such as video/graphics projects (this gets into capacity usage and throughput). Also, until the project file is completed, it's updated with each step saved, so there's some re-writing of files.

You also said this...

So for large capacity and/or sequential throughput (especially writes), mechanical arrays make far more sense.

I agree with the bit about large capacity, but not the bit about sequential performance.

With respect to storage solutions for databases... I'm not sure that's an applicable example here, where we are largely focused on single-user workstation workflows (no?). Also, by my understanding, DB servers are largely random-access in nature, with just as many reads as writes, if not more, which would make them an odd choice as an example of a workload that relies heavily on sequential writes. :confused:

For scratch and large media files, I'm not knowledgeable enough about the OS X file system, but if it employs any kind of write-back cache, then the user is not going to be much impacted by large writes. And on a single-user workstation where large media files are being worked on, there are still more random reads going on than anything else, so even under these kinds of workloads, the orders-of-magnitude reduction in access times for SSDs vs. HDDs should provide a noticeable performance improvement.

So I still fail to understand a single-user workstation workload that would not benefit from SSDs, assuming one could afford enough capacity to accommodate the required storage.

So I agree that HDD arrays are desirable where capacity is concerned, but I don't think there's a performance argument to be made in favor of mechanical arrays in a single-user workstation environment.
 
That's more or less what I was saying: the SSD should always be faster for OS functions, but Nanofrog is right to say that HDDs are unmatched for long, large sequential access, as you might have in video capture and editing. Everything I've read argues that the fastest rig will use SSDs for all OS/app functions and high-capacity HDDs for media storage.

Photoshop scratch and other such things will also benefit from the SSD.

I should also point out that OS X is optimized to make the best use of mechanical drives, not SSDs... yet. I expect that to change in time.
 
I took the more recent post asking about high-write situations as a more general question, so I included database usage.

Databases are a particular use that does NOT rely on sequential access, but random access (reads and writes, say roughly 50% each, as the existing data may be needed to determine new values, such as an account balance). Mechanical is less expensive, and the reliability is high (the same tracks get written over and over as records are updated with multiple transactions per day, such as banking data).

SSD is attractive in this case, but the cost of not only the actual data capacity required (for the planned lifespan), but also the additional capacity for wear leveling (the only way to prevent premature dead cells with current Flash chips), makes it cost-prohibitive IMO. I expect this to change in time though.

With video/graphics use, large files are involved, so you need both very large capacities and fast sequential throughput, particularly for writes. It's been my understanding that the sequential access requirement is more important than random access in cases where the budget is limited and everything must be placed on one array. This actually makes sense to me, as an array can improve random access over a single drive, and the OS and applications are resident in memory (assuming there's adequate RAM).
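As a sketch of why the array helps on both axes, here's a toy RAID 0 scaling model; the per-drive numbers and efficiency factors are assumptions, not measurements:

```python
# Toy model: how striping scales one 7200rpm drive's performance.
SEQ_MBPS = 100  # assumed sequential MB/s for a single drive
IOPS     = 80   # assumed small random IOPS for a single drive

def raid0(n, seq_eff=0.95, iops_eff=0.85):
    # Sequential scales close to linearly; random gains depend on the
    # workload keeping every spindle busy (queue depth, stripe size).
    return n * SEQ_MBPS * seq_eff, n * IOPS * iops_eff

for n in (1, 4, 6):
    seq, iops = raid0(n)
    print(f"{n} drives: ~{seq:.0f}MB/s sequential, ~{iops:.0f} random IOPS")
```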

Ideally, I actually agree that splitting the OS and apps onto an SSD and the large files onto an array is the best of all (and have mentioned this multiple times), but it may not be possible due to budget constraints. This issue seems to crop up often, especially if the system was just acquired. ;) Worst case, they can get an SSD later and move things around. :)
 

Thanks for clarifying.

While Intel's sequential writes are not outstanding, Indilinx-based drives (e.g. the OCZ Vertex) have sequential writes around 200MB/s per drive, making them extremely strong performers in that regard. Thus I believe the only benefit mechanical arrays offer is cost per GB... there is no performance argument that holds up for mechanical arrays on a single-user workstation.

Ideally, a video editor would use a set of Intel SSDs, which are optimized for small random reads, for OS/apps, and a set of OCZ Vertex drives, which are optimized for sequential access, for media storage. While the cost is perhaps prohibitively high, there is no arguing that this would be the best-performing solution.
 
The Indilinx-based drives are nice, and do outperform Intel's in sequential writes (I'm thinking of the Colossus as well).

But budget is the biggest area where both still apply, and not just in terms of capacity: if you can budget for x SSDs, you can get y mechanical drives, where y is greater. That additional parallelism translates to not only more capacity, but also faster sequential throughput. It will depend on the specifics, and the gap will narrow as SSDs mature (faster sequential writes, say 3x+ that of mechanical). Then it will be a purely cost-based decision whether to stick with SSDs, assuming a user budgets enough additional capacity to give wear leveling adequate unused space for remapping dead cells (and that's only so long as existing Flash chips are used; there's newer tech in the works with a significantly higher number of write cycles, such as FeRAM at 1E16 writes, and that's before wear leveling is applied :eek: :D).
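Here's the wear-leveling arithmetic in sketch form; every constant is a ballpark assumption for the Flash of this era, not a spec:

```python
# How spare area and write volume feed into a lifetime estimate.
PE_CYCLES  = 10_000  # assumed MLC program/erase cycles per cell
USABLE_GB  = 160     # usable capacity
SPARE_FRAC = 0.20    # extra capacity set aside for wear leveling
GB_PER_DAY = 50      # assumed heavy daily write volume
WRITE_AMP  = 1.5     # assumed write amplification

writable_gb = USABLE_GB * (1 + SPARE_FRAC) * PE_CYCLES
years = writable_gb / (GB_PER_DAY * WRITE_AMP) / 365
print(f"~{years:.0f} years under these assumptions")
# For comparison, the quoted 1E16 cycles for FeRAM would be effectively
# unlimited for workstation use.
```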

With an unlimited budget, SSDs could be used, but given the capacity issues alone, it may still be more attractive to use mechanical drives for the mass storage array until the cost/GB comes down. Physical space in the MP is limited, but it's workable via a decent RAID card and external enclosures. That adds quite a bit of cost on top of the drives, but it's possible from a technical POV. :)
 

I think we agree. ;) When cost is an issue (as it almost always is), a balance between solid-state and mechanical storage must be struck.
 
74GB Raptors in RAID 0 vs. Intel 160GB SSD G2

I have two older 74GB Raptors in RAID 0 in one desktop, and one Intel 160GB SSD (generation 2) in another, nearly identical desktop (the only differences are the main drives and external drives). Both are running Ubuntu Linux 9.10, with 8-core Intel Xeon Harpertown processors and 8GB of DDR2 FB-DIMM memory.

No matter what the benchmarks say, for me, in day-to-day use (opening applications, copying files, etc.), the SSD feels many, many times faster, mostly due to the fast access times and random read/write performance, I think. It really makes all the difference for daily desktop/workstation use, IMHO.
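If anyone wants to eyeball the access-time gap themselves, here's a minimal Python probe along the lines of what I mean (the file path is hypothetical; point it at a file larger than your RAM, or the page cache will hide the drive entirely):

```python
# Times small reads at random offsets, where an SSD's access-time
# advantage over a mechanical drive shows up most clearly.
import os, random, time

PATH = "testfile.bin"  # hypothetical: a multi-GB file on the drive under test
SIZE = os.path.getsize(PATH)
READS, BLOCK = 200, 4096

with open(PATH, "rb", buffering=0) as f:
    start = time.time()
    for _ in range(READS):
        f.seek(random.randrange(SIZE - BLOCK))
        f.read(BLOCK)
    elapsed = time.time() - start

print(f"avg random 4KB read: {elapsed / READS * 1000:.2f} ms")
```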
 