Option 1, if both options are the same speed and cost. Worst case, you can put the two SSDs together in a concatenated (non-striped) set so you still get normal single-drive speed and use of the full capacity. Choosing one big SSD just limits your choices later on!
 
That looks like it has more to do with parity RAIDs (4, 5, 6), due to the large amount of file shuffling going on to maintain the parity. RAID 0 doesn't do this, and since Intel updated TRIM support for RAID 0, what's the problem?

There is no file shuffling going on; a RAID operates below the file system and the concept of files. The parity can be calculated before anything is ever written to disk. If you are about to write A, B, C to the RAID, then A, B, C are XORed to get the parity, and all four can then be written simultaneously.
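To make that concrete, here is a minimal Python sketch of a full-stripe write on a four-drive parity set (the chunk names follow the A, B, C example; the sizes and contents are invented purely for illustration):

Code:
# Hypothetical full-stripe write on a 4-drive parity RAID (3 data + 1 parity).
# Chunk names A, B, C follow the example above; sizes/contents are made up.
def xor_bytes(x: bytes, y: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(x, y))

A = b"\x01" * 4096  # chunk for disk 1
B = b"\x02" * 4096  # chunk for disk 2
C = b"\x04" * 4096  # chunk for disk 3

P = xor_bytes(xor_bytes(A, B), C)  # parity computed up front: P = A ^ B ^ C

# All four chunks can now be sent to their drives at the same time;
# nothing has to be read back or shuffled to maintain the parity.
stripe = {"disk1": A, "disk2": B, "disk3": C, "disk4": P}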
 
If I were you, I would choose option 1 or "Option 3: two big 1 TB SSDs in RAID", because I want to be sure of the speed. Also, I think it is safer to spread things across two SSDs than to risk losing all the data at once. :)

Anyway, that is just my choice. You can choose what you want. :D
 
Now that affordable 960GB and 1TB SSDs are here, which is the better choice for upgrading the main system disk of my 2010 Mac Pro 5,1 (3.33 GHz 6-Core Xeon)?
1. Two 480GB SSDs on a Sonnet Tempo Pro (RAID 0), or
2. One big 1TB SSD on an Apricorn Velocity X2 PCIe card?

Wouldn't the second choice be safer for data and applications than splitting it in half? :)

Option 1 would be my choice. The safety thing is just BS and doesn't need to be taken seriously. Two SSDs in RAID 0 will give you twice the speed! And later, if you like, you can put the OS on one and use the other for project/scratch space or whatever.

But I personally wouldn't use anything like a Tempo or a Velocity. I'd just use the standard SATA II ports and software RAID if you have the open connections, especially if you're using this as an OS/apps drive. Something like 95% of OS I/O happens with really small files, and those cannot be accessed at speeds faster than about 100 or 200MB/s on any modern system. Two SSDs in a SATA II software RAID will allow speeds up to about 590MB/s, and that's faster than almost any I/O is actually going to occur at. Thus there's no need for the PCIe card unless you're running out of ports otherwise.

As well as the speed increase, which you dismiss, it is really nice, IMHO, to have five or six SSD/HDD devices inside the Mac Pro.

Lou

For me it depends on the usage. If it's for the OS and apps you will NEVER see the full potential speed used; about one tenth to a quarter of the potential is where all I/O will land in that case. So in that case, as I say, it's just not useful AT ALL. (Maybe it might increase small-file I/O from 50MB/s to 55MB/s or something, but it's still not worth even $10 when you consider the extra power it uses and the heat it introduces.)

OTOH, if you intend to edit multi-layer 1080 video or something else that actually uses more than 600MB/s (benchmark utilities are the only other thing I can think of at the moment), then absolutely get the card, which will boost you from 600MB/s to 1200MB/s.

And of course as you and I both mentioned, if you need more SATA connections then there's not much choice anyway. ;)
 
Now that affordable 960GB and 1TB SSDs are here, which is the better choice for upgrading the main system disk of my 2010 Mac Pro 5,1 (3.33 GHz 6-Core Xeon)?
1. Two 480GB SSDs on a Sonnet Tempo Pro (RAID 0), or
2. One big 1TB SSD on an Apricorn Velocity X2 PCIe card?

Wouldn't the second choice be safer for data and applications than splitting it in half? :)

Maybe try both, putting system and apps on one, data on the other. That's my setup (recently upgraded, so not reflected in my sig file yet) in the same 5,1 MP that you have ...
1. Two Samsung SSD 840 Pro 512GBs on a Sonnet Tempo SSD Pro card in RAID 0 for video project files and data
2. A Crucial M4 256GB on an Apricorn X2 for system and apps.

Haven't run any benchmarks yet. Of course I've got Time Machine backing up everything hourly, and use SuperDuper to do periodic backups as well.
 
That looks like it has more to do with parity RAIDs (4, 5, 6), due to the large amount of file shuffling going on to maintain the parity.

It has little to do with RAID and only indirectly to do with parity. The root-cause issue is duplicating data as you store it on SSDs. "RAID killing SSDs" is mostly FUD, judging by the results I saw on a Google search (folks whipping up support for their own variant workaround).

Parity in RAID 4/5/6 is just a clever way of compressing the duplicate data. It isn't doing anything particularly different, from the SSD's perspective, than RAID 1 is doing. The data for every block is effectively written twice (or three times in the RAID 6 variant). If you write twice as much data to an SSD, it will wear out faster.

Pragmatically this isn't a big problem if you actually use SSDs whose flash controllers and over-provisioning areas are designed for higher-than-average write workloads. Right tool for the right job.


RAID 0 doesn't do this, and since Intel updated TRIM support for RAID 0, what's the problem?

TRIM isn't a panacea, and it really isn't the root cause of the problem. It is a benefit if the flash controller's garbage collector can't cope and isn't particularly smart about defragging the usage patterns that build up.

RAID 0 will help primarily because it spreads the write load out over more SSDs, so fewer writes go to any one of them. It also likely means the user isn't trying to fill either one to the brim with data (another minor corner case where TRIM might help with a modern flash controller).

There is no file shuffling going on; a RAID operates below the file system and the concept of files. The parity can be calculated before anything is ever written to disk.

Parity would have to be calculated and recorded before the atomic write transaction can finish. These higher RAID levels typically have "write holes".


If you are about to write A, B, C to the RAID, then A, B, C are XORed to get the parity, and all four can then be written simultaneously.

Frankly, not all multi-block sequences go out atomically. In a file system with 4K blocks, not every application save is going to be exactly 12K. You can cache some of the previous writes from A and B when you write C, but unless you want to create an even larger write hole, that is three updates of the parity as the data changes. Hardware RAID with battery backup may reorder and combine these, so you can possibly save two of those parity updates, but for the most part you will be doing two writes for at least one of those blocks.
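As a rough illustration of that small-write case (a Python sketch with made-up values, assuming the usual read-modify-write parity update): changing a single 4K chunk still forces a second physical write for the parity, which is where the doubled write volume comes from.

Code:
# Hypothetical read-modify-write for a single 4K update on a parity RAID.
def xor_bytes(x: bytes, y: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(x, y))

old_A = b"\x01" * 4096      # current contents of the chunk being changed
old_P = b"\x07" * 4096      # current parity for that stripe
new_A = b"\x09" * 4096      # the block the application actually wrote

# New parity = old parity ^ old data ^ new data (B and C need not be read).
new_P = xor_bytes(xor_bytes(old_P, old_A), new_A)

# One logical 4K write becomes two physical writes (data + parity),
# so the total bytes written to the set roughly doubles.
writes = {"data_disk": new_A, "parity_disk": new_P}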

Still the same root-cause issue: since you're writing redundant copies of the data, the number of writes goes up. So the question is whether the redundant copies matter (have value). Saying one of the disks may fail... well duh, that is why you write out redundant copies in the first place. Saying don't use SSDs because they die, and so use HDDs, which die too, doesn't really go anywhere.


Frankly, SSDs internally already do variants of RAID 4/5/6, so the real issue may be whether it is worthwhile applying redundancy techniques on top. Monitor the "health" (failure rate) of the SSDs, and when their coping mechanisms indicate old age, duplicate and retire them. There is even less good reason to run SSDs into the ground than there is to run HDDs into the ground.
 
Parity would have to be calculated and recorded before the atomic write transaction can finish. These higher RAID levels typically have "write holes".

No, only the parity drive gets a checksum written to it, and a missing part can be recalculated regardless of which drive fails; that is how XOR works. Yes, RAID 5 has a so-called write hole, but there are implementations that avoid it.
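To illustrate the XOR point, a tiny Python sketch (values invented purely for illustration): any one missing chunk can be rebuilt by XORing the three that survive, whichever drive fails.

Code:
# Hypothetical rebuild of one failed drive in a 4-drive parity set (A, B, C + P).
def xor_bytes(x: bytes, y: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(x, y))

A = b"\x01" * 4096
B = b"\x02" * 4096
C = b"\x04" * 4096
P = xor_bytes(xor_bytes(A, B), C)  # parity: P = A ^ B ^ C

# Say the drive holding chunk B dies: XORing the survivors reproduces it exactly.
rebuilt_B = xor_bytes(xor_bytes(A, C), P)
assert rebuilt_B == B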

----------

Frankly, not all multi-block sequences go out atomically. In a file system with 4K blocks, not every application save is going to be exactly 12K. You can cache some of the previous writes from A and B when you write C, but unless you want to create an even larger write hole, that is three updates of the parity as the data changes.

If that's the case then it should be the same for RAID 0, or all RAID levels for that matter.
 
No, only the parity drive gets a checksum written to it,

I know the parity drive is separate. However, a write is a write: whether it is the "main" data block or the duplicate is completely immaterial to the SSD wear impact.

In a 3-disk RAID 5 set-up A, B, C: if C is getting A's parity data, then a write to A means a corresponding write to C. The amount of data written to the collective set of drives doubles.

Yes, RAID 5 has a so-called write hole, but there are implementations that avoid it.

Logging or shifting the parity location, so that the parity gets down onto disk before the main block does, does absolutely nothing to reduce the number of writes (which is the core cause of the negative impact in question).

If you try to combine, log, and batch the updates to reduce parity writes, then you're likely going to incur more write overhead than just the duplicate (i.e., use more non-volatile storage).


If that's the case then it should be the same for RAID 0, or all RAID levels for that matter.

If you're trying to combine writes to reduce duplicate/parity updates, then no. The duplicates are inherently coupled to one another: they need to be completed at the same time for the RAID system to be in a coherent state. For RAID 0 the blocks being written are not coupled to contain the same information.

Sure, a RAID 0 system could reorder the writes so that there is less of a latency impact (the drives can do the same thing to some extent), but that isn't going to change the number of writes.
 
I know the parity drive is separate. However, a write is a write: whether it is the "main" data block or the duplicate is completely immaterial to the SSD wear impact.

In a 3-disk RAID 5 set-up A, B, C: if C is getting A's parity data, then a write to A means a corresponding write to C. The amount of data written to the collective set of drives doubles.

I have no idea what your point is or what it is you disagree with in particular. The A, B, C example is data that is written to a stripe on a 4-drive RAID 5, the fourth drive being the parity drive. The wear of the drives in a RAID 5 is pretty much even across all drives; the reason to be cautious with SSDs is that the parity drive ages at the same rate as the other drives, and that bit errors on SSDs are much more common than on regular HDDs.

Logging or shifting the parity location, so that the parity gets down onto disk before the main block does, does absolutely nothing to reduce the number of writes (which is the core cause of the negative impact in question).

The point being that there is no reason to "record" the parity as you said, whatever you mean by that. If A, B, C is going to disk then the parity = A ^ B ^ C; when that is done, A, B, C and the parity can be written in parallel to the four drives as if it were a RAID 0.

Example:


Data about to be written to disk is divided into three chunks A, B, C.
From these chunks a parity is calculated as follows: P = A ^ B ^ C.
A, B, C and P can now be written to the disks (disk 4 is the parity drive).

Code:
   A        B        C        P
   |        |        |        |
[disk 1] [disk 2] [disk 3] [disk 4]




If you're trying to combine writes to reduce duplicate/parity updates, then no. The duplicates are inherently coupled to one another: they need to be completed at the same time for the RAID system to be in a coherent state. For RAID 0 the blocks being written are not coupled to contain the same information.

Sure, a RAID 0 system could reorder the writes so that there is less of a latency impact (the drives can do the same thing to some extent), but that isn't going to change the number of writes.

The whole point of what you said in your last post had nothing to do with wear, but with the fact that smaller writes were slower (your claim). You are now talking about something completely different. Anyway, this whole discussion is most appropriately kept at a broad, schematic level.
 
Microsoft Research is über reputable, I don't know how you can't take it seriously.
 
Just to mention... Configuration by consensus is usually the worst idea.
Read up first, then trial and test till it's tuned for your specific need(s).
 
Microsoft Research is über reputable, I don't know how you can't take it seriously.

Who isn't taking it seriously? What are you referring to, btw? Is it this paper?

http://research.microsoft.com/en-us/um/people/maheshba/papers/hotstorage09-raid.pdf

In this paper, we explore the possibility of using device-level redundancy to mask the effects of aging on SSDs. Clustering options such as RAID can potentially be used to tolerate the higher BERs exhibited by worn out SSDs. However, these techniques do not automatically provide adequate protection for aging SSDs; by balancing write load across devices, solutions such as RAID-5 cause all SSDs to wear out at approximately the same rate. Intuitively, such solutions end up trying to protect data on old SSDs by storing it redundantly on other, equally old SSDs. Later in the paper, we quantify the ineffectiveness of such an approach.

Basically it seems to be the higher bit error rate that is the problem, because the same thing (the part in bold) is true of HDDs as well.
 
Wouldn't the second choice be safer for data and applications than splitting it in half? :)
It could, because there are fewer components, such as a RAID controller. With RAID you add complexity. Not only can the SSD fail, the RAID controller can fail as well (and they do; defective RAID arrays are not uncommon, and they can also be caused by messing around with the setup).

If you want RAID with SSDs in a Mac Pro you can't use the Apple RAID card, so you have to get another one. The proper ones, the ones you want, are expensive. Those cards also do not work in the RAID card slot; they need external wiring (i.e. they don't use the internal SATA ports). That means you have to get some kind of solution for both the data and the power part of the SATA connection on the SSDs. An additional cost.
It has another disadvantage: you are now occupying 2 SATA bays and 1 PCIe slot. With the 1TB PCIe SSD you are only occupying 1 PCIe slot, leaving 2 additional SATA bays for you to use.

I'd go with the single PCIe SSD. It is the simplest solution and will cost you the least in the end (both in terms of money and effort).
 
It could, because there are fewer components, such as a RAID controller. With RAID you add complexity. Not only can the SSD fail, the RAID controller can fail as well (and they do; defective RAID arrays are not uncommon, and they can also be caused by messing around with the setup).

If you want RAID with SSDs in a Mac Pro you can't use the Apple RAID card, so you have to get another one. The proper ones, the ones you want, are expensive. Those cards also do not work in the RAID card slot; they need external wiring (i.e. they don't use the internal SATA ports). That means you have to get some kind of solution for both the data and the power part of the SATA connection on the SSDs. An additional cost.
It has another disadvantage: you are now occupying 2 SATA bays and 1 PCIe slot. With the 1TB PCIe SSD you are only occupying 1 PCIe slot, leaving 2 additional SATA bays for you to use.

I'd go with the single PCIe SSD. It is the simplest solution and will cost you the least in the end (both in terms of money and effort).

Since the OP wants a simple 2-drive RAID 0, using the Apple software RAID in Disk Utility works fine, whether the disks are in the backplane drive bays or on a PCIe card. A hardware controller would be expensive overkill for his application of RAID.
 
That only matters for the cost aspect. It still adds complexity, possibly even more than hardware RAID, since changes to something like CoreStorage can influence your RAID array. In the end RAID is quite complex and has its risks no matter whether you use hardware or software RAID. Saying it is overkill for this application is silly, since you don't know the application; the OP hasn't really told us what he wants/needs.
 
I have two SSDs in a RAID 0 on my PC. Blazing fast performance (~1000 MB/s read). It wasn't too hard to set up on a PC, and I imagine it's even easier on a Mac with Disk Utility. I have the drive cloned to an internal SATA drive which I can boot from in the event of catastrophe. Given current failure rates, you're probably safer with an SSD RAID than with an average SATA drive.

For clarity, is that SATA II or SATA III?
 
For clarity, is that SATA II or SATA III?

Two SSDs delivering about 1GB/s as he claims would have to be SATA III. ;)

Two-drive SATA II tops out in the real world at around 580MB/s, with a theoretical limit of 600MB/s.

Two-drive SATA III tops out in the real world at around 1150MB/s, with a theoretical limit of 1200MB/s.

Of course these speeds are mostly only achievable in benchmark utilities and maaaaaybe some intense video editing apps. Mostly, either one of those systems is going to average about 150MB/s, and the top 10th speed percentile will be around 300 to 400MB/s (again, on either system).
 
Given current failure rates, you're probably safer with an SSD RAID than with an average SATA drive.

Funny thing: I just replaced a failed Micron C300 128GB SSD a week ago; it lasted 3 years almost to the day. Replaced it with a 256GB Samsung 840 Pro.
 
Samsung 840 EVO installed on Velocity Solo X2

Thanks for all the great info, but I went the low-cost road. I installed my new Samsung 1TB 840 EVO on a $89 Apricorn Velocity Solo X2 PCIe card. My Mac Pro has been running 24/7 all week without problems. The only downside is that the start-up time is quite long: over 1 minute from button push. All my software starts quickly, even the CAD software, which loads extensive libraries on start-up. I have not noticed earth-shaking speed, but I have not had any issues with waking from deep sleep.

Hope to keep this Pro going for at least 3 more years. Next is to change the HD 5770 GPU to something faster.

rjtiedeman
 
RAID 0 if you have the need for speed

Two SSDs in RAID 0 will be so fast that they might saturate the PCIe bus.

I'm not sure what the PCIe saturation point would be, but two SSDs in RAID0 are much faster in my setup:

1) Samsung 256GB SSD 840 Pro in an Apricorn Velocity X2 card gives:
- 232.4 MB/s write
- 479.7 MB/s read
(startup drive, OS X 10.8.4, about half full)

2) Pair of Samsung 512GB SSD 840 Pros in RAID0 on a Sonnet Tempo SSD Pro card:
- 825.9 MB/s write
- 942.5 MB/s read
(video/graphics project file and data drive, about 10 percent full)

Not directly comparable I guess since one drive has system and apps and is half full, vs a data drive with 90 percent free space. But for my needs, this setup seems like the best of both worlds.

Speeds measured via Blackmagic Disk Speed Test v2.2 on a 2010 hex-core 3.33 GHz, 48GB RAM (sig file is not up-to-date)
 