Option 1, if both options are the same speed and cost. Worst case, you can always put the two SSDs together as a concatenated (non-striped) set and still get normal speed and full capacity. Choosing one big SSD just limits your choices later on!
That looks like it has more to do with parity RAIDs (4, 5, 6), due to the large amount of file shuffling going on to maintain the parity. RAID 0 doesn't do this, and since Intel added TRIM support for RAID 0, what's the problem?
Now that affordable 960GB and 1TB SSDs are here, which is the better choice for upgrading the main system disk in my 2010 Mac Pro 5,1 (3.33 GHz 6-Core Xeon)?
1. two 480GB SSDs on a Sonnet Tempo Pro (RAID 0), or
2. one big 1TB SSD on an Apricorn Velocity X2 PCIe card?
Wouldn't the second choice be safer for data and applications than splitting it in half?
Option 1 would be my choice. The safety concern is overblown and doesn't need to be taken seriously. Two SSDs in RAID 0 will give you nearly twice the speed! And later, if you like, you can put the OS on one and use the other for project/scratch space or whatever.
But I personally wouldn't use anything like a Tempo or a Velocity. I'd just use the built-in SATA II ports and software RAID if you have the connections free, especially if you're using this as an OS/Apps drive. Something like 95% of OS I/O happens with really small files, and those cannot be accessed at speeds faster than about 100-200MB/s on any modern system regardless of the interface. Two SSDs in a SATA II software RAID will allow speeds up to about 590MB/s, and that's faster than almost any real-world I/O is actually going to occur at. Thus there's no need for the PCIe card unless you're running out of ports otherwise.
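A rough back-of-envelope sketch of that small-file point (the latency and bandwidth figures below are illustrative assumptions, not measurements): once per-request latency dominates, striping two drives barely moves the needle for small I/O, while large sequential I/O scales almost linearly.

```python
# Crude throughput model: request_size / (per-request latency + transfer time).
# Numbers are illustrative assumptions, not benchmarks of any particular SSD.

def effective_mb_per_s(request_kb, latency_ms, seq_mb_per_s):
    transfer_ms = (request_kb / 1024.0) / seq_mb_per_s * 1000.0
    return (request_kb / 1024.0) / ((latency_ms + transfer_ms) / 1000.0)

for label, seq in [("single SATA II SSD (~275MB/s)", 275),
                   ("two-SSD RAID 0 (~550MB/s)", 550)]:
    small = effective_mb_per_s(4, 0.1, seq)      # 4K OS/metadata-style I/O
    large = effective_mb_per_s(1024, 0.1, seq)   # 1MB sequential-style I/O
    print(f"{label}: ~{small:.0f}MB/s at 4K, ~{large:.0f}MB/s at 1MB")
```

Striping roughly doubles the 1MB figure but leaves the 4K figure almost unchanged, which is the gist of the "OS I/O won't see the difference" argument.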
As well as the speed increase, which you dismiss, it is really nice, IMHO, to have five or six SSD/HDD devices inside of the Mac Pro.
Lou
Now that affordable 960GB and 1TB SSDs are here, which is the better choice for upgrading the main system disk in my 2010 Mac Pro 5,1 (3.33 GHz 6-Core Xeon)?
1. two 480GB SSDs on a Sonnet Tempo Pro (RAID 0), or
2. one big 1TB SSD on an Apricorn Velocity X2 PCIe card?
Wouldn't the second choice be safer for data and applications than splitting it in half?
That looks like it has more to do with parity RAIDs (4, 5, 6), due to the large amount of file shuffling going on to maintain the parity.
RAID 0 doesn't do this, and since Intel added TRIM support for RAID 0, what's the problem?
There is no file shuffling going on; a RAID operates below the file system and the concept of files. The parity can be calculated before anything is ever written to disk.
If you are about to write A, B, and C to the RAID, then A, B, and C are XORed to get the parity, and all four blocks can then be written simultaneously.
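A minimal sketch of that XOR parity idea (toy block values, not any particular controller's implementation): the parity is just the XOR of the data blocks, and any single lost block can be rebuilt by XORing the survivors.

```python
# Toy RAID-5-style parity: P = A xor B xor C. If any one block is lost,
# it can be rebuilt by XORing the remaining blocks with the parity.

def xor_blocks(*blocks: bytes) -> bytes:
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)

a, b, c = b"AAAA", b"BBBB", b"CCCC"   # data blocks on disks 1-3
p = xor_blocks(a, b, c)               # parity block on disk 4

# Simulate losing disk 2: rebuild B from A, C, and the parity.
assert xor_blocks(a, c, p) == b
```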
Parity would have to be calculated and recorded before the atomic write transaction can finish. These parity RAID levels typically have 'write holes'.
Frankly, not all multi-block sequences go out atomically. In a file system with 4K blocks, application saves are not all going to be exactly 12K. You can cache some of the previous writes to A and B when you write C, but unless you want to create an even larger write hole, that is three updates of the parity as the data changes.
No, only the parity drive gets a checksum written to it.
Yes, RAID 5 has a so-called write hole, but there are implementations that avoid it.
If that's the case then it should be the same for RAID 0, or all RAID levels for that matter.
I know the parity drive is separate. However, a write is a write: whether it's the "main" data block or the duplicate/parity block is completely immaterial to the SSD wear impact.
In a 3-disk RAID 5 set-up (A, B, C), if disk C is getting A's parity data, then a write to A means a corresponding write to C. The amount of data written to the collective set of drives doubles.
Logging or shifting the parity location so that the parity gets down onto disk before the main block does absolutely nothing to reduce the number of writes (which is the core cause of the negative impact in question); a quick tally of the write counts is sketched below the diagram.
   A          B          C          P
   |          |          |          |
[disk 1]   [disk 2]   [disk 3]   [disk 4]
If the idea is to combine writes to reduce the duplicate/parity updates, then no. The duplicates are inherently coupled to one another; they need to be completed at the same time for the RAID set to be in a coherent state. For RAID 0 the blocks being written are not coupled to contain the same information.
Sure, a RAID 0 system could reorder the writes so that there is less of a latency impact (the drives can do the same thing to some extent), but that isn't going to change the number of writes.
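A rough tally of that write-count argument (a toy workload, and a dedicated parity disk for simplicity, i.e. a RAID 4-style layout; RAID 5 rotates the parity across disks, but the total number of physical writes comes out the same):

```python
# Toy tally of physical block writes for small random updates (not a benchmark).
# RAID 0 touches one disk per update; read-modify-write parity RAID also
# rewrites the stripe's parity block, roughly doubling total SSD writes.

import random
from collections import Counter

random.seed(0)
updates = [random.randrange(3) for _ in range(9000)]   # which data disk each update hits

raid0 = Counter(f"disk {d + 1}" for d in updates)

parity_raid = Counter()
for d in updates:
    parity_raid[f"disk {d + 1}"] += 1   # the data block itself
    parity_raid["parity disk"] += 1     # the corresponding parity update

print("RAID 0 :", dict(raid0), "total:", sum(raid0.values()))
print("RAID 5*:", dict(parity_raid), "total:", sum(parity_raid.values()))
```

Reordering or logging can hide latency, but the totals on those last two lines are what the flash cells actually see.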
Microsoft Research is über reputable, I don't know how you can't take it seriously.
In this paper, we explore the possibility of using device-level redundancy to mask the effects of aging on SSDs. Clustering options such as RAID can potentially be used to tolerate the higher BERs exhibited by worn out SSDs. However, these techniques do not automatically provide adequate protection for aging SSDs; by balancing write load across devices, solutions such as RAID-5 cause all SSDs to wear out at approximately the same rate. Intuitively, such solutions end up trying to protect data on old SSDs by storing it redundantly on other, equally old SSDs. Later in the paper, we quantify the ineffectiveness of such an approach.
Wouldn't the second choice be safer for data and applications than splitting it in half?
It could be, because there are fewer components, such as a RAID controller. With RAID you add complexity: not only can the SSD fail, the RAID controller can fail as well (and they do; broken RAID arrays are not uncommon, and they can also be caused by messing around with the configuration).
If you want RAID with SSDs in a Mac Pro, you can't use the Apple RAID card, so you have to get another one. The proper ones, the ones you'd actually want, are expensive. Those cards also don't work in the RAID card slot; they need their own wiring (i.e. they don't use the internal SATA ports). That means you have to come up with some kind of solution for both the data and the power side of the SATA connection to the SSDs, which is an additional cost.
It has another disadvantage: you are now occupying 2 SATA bays and 1 PCIe slot. With the 1TB PCIe SSD you are only occupying 1 PCIe slot, leaving 2 additional SATA bays free for you to use.
I'd go with the single PCIe SSD. It is the simplest solution and will cost you the least in the end (both in terms of money and effort).
I have two SSDs in a RAID 0 on my PC. Blazing fast performance (~1000 MB/s read). It wasn't too hard to set up on a PC, and I imagine it's even easier on a Mac with Disk Utility. I have the array cloned to an internal SATA drive which I can boot from in the event of catastrophe. Given current failure rates, you're probably safer with an SSD RAID than an average SATA drive.
For clarity, is that SATA II or SATA III?
Given current failure rates, you're probably safer with an SSD RAID than an average SATA drive.
Two SSDs in RAID 0 will be so fast that they might saturate the PCIe bus.
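A quick back-of-envelope check of that (nominal, approximate figures: roughly 500MB/s per SATA III SSD and roughly 500MB/s of usable bandwidth per PCIe 2.0 lane after encoding overhead):

```python
# Rough comparison of two striped SATA SSDs against PCIe 2.0 link widths.
# All figures are nominal approximations, not measurements.

ssd_mb_per_s = 500            # rough sequential speed of one SATA III SSD
raid0_total = 2 * ssd_mb_per_s

pcie2_lane_mb_per_s = 500     # approximate usable PCIe 2.0 bandwidth per lane
for lanes in (1, 2, 4):
    link = lanes * pcie2_lane_mb_per_s
    verdict = "link saturated" if raid0_total >= link else "headroom left"
    print(f"x{lanes} (~{link}MB/s) vs RAID 0 (~{raid0_total}MB/s): {verdict}")
```

So whether the bus is the bottleneck depends entirely on how many lanes the card actually gets.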