Greetings fellow Darwinites,
I have just recently procured the necessary equipment to create what I believe to be an affordable and reliable storage solution for OS X, but I have been having trouble with one aspect thus far.
Definitions:
RAID 1: Mirrored set of drives
RAID 0: Striped set of drives
Breakdown:
I have four external FireWire drives.
Since OS X does not offer a software RAID 5 option (only RAID 1, RAID 0, and concatenation), I have nested them as RAID 1+0, i.e., I have created two mirrored sets of two drives each, with both mirrors striped together under a parent RAID 0 set.
This provides fault tolerance of up to two disks, as long as both failures are not in the same mirror, plus increased read/write performance; the trade-off is that you lose the capacity of two disks with this setup.
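To sanity-check that claim, here is a small Python sketch (the drive labels are made up for illustration) that enumerates every two-drive failure in a 2×2 RAID 1+0 layout and reports which ones the array survives:

```python
from itertools import combinations

# Hypothetical layout: two mirrors of two drives each,
# striped together as RAID 1+0.
mirrors = [("A1", "A2"), ("B1", "B2")]
drives = [d for m in mirrors for d in m]

def survives(failed):
    """The array survives as long as no mirror loses BOTH of its members."""
    return all(any(d not in failed for d in m) for m in mirrors)

# Every single-drive failure is survivable.
assert all(survives({d}) for d in drives)

# Two-drive failures: survivable unless both drives share a mirror.
for pair in combinations(drives, 2):
    print(pair, "OK" if survives(set(pair)) else "ARRAY LOST")

# Usable capacity: half the raw drives (one mirror copy per set).
print(f"usable drives: {len(mirrors)} of {len(drives)}")
```

Of the six possible two-drive failures, only the two pairs that sit inside the same mirror kill the array, which matches the description above.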
That said, I have the following problem:
After setting up my RAID 1+0, I went ahead and turned off one of the drives to simulate a drive failure. The fancy Disk Utility GUI once again delighted me by informing me that my RAID set was degraded and that one of my mirrored sets had only one drive.
So, hooray, everything was working great; but little did he know that...
Upon switching the drive back on, the set didn't see the second drive. No big deal, I thought; I restarted my machine and allowed all of the externals to initialize on boot. But to my chagrin, when I pulled up the RAID tab in Disk Utility, it displayed the set, now containing both drives (so at least it recognized the disk), but it still claimed the mirrored set was degraded, and that the drive I had switched off and on had FAILED!
Well, this was clearly an error; could it really have failed? I tested a few things and the drive was just fine, but instead of showing up as a RAID slice of the volume, it came up with a generic disk5s2-style partition...
I reformatted and attempted to rebuild, but even though the set was entirely blank, a mirror rebuild seems to copy bit for bit: it wanted 11 hours to mirror the entire 500 GB rather than intelligently copying only the blocks actually in use within the volume.
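For what it's worth, that 11-hour estimate is about what a full bit-for-bit copy over FireWire 400 would take. A quick back-of-the-envelope check, using the figures above:

```python
# Back-of-the-envelope check on the quoted 11-hour mirror rebuild.
capacity_gb = 500      # size of the mirrored set, from above
rebuild_hours = 11     # Disk Utility's rebuild estimate

# Implied sustained copy rate for a full bit-for-bit rebuild.
rate_mb_s = capacity_gb * 1000 / (rebuild_hours * 3600)
print(f"implied rate: {rate_mb_s:.1f} MB/s")

# FireWire 400 signals at ~50 MB/s in theory, but sustained
# real-world throughput to external drives is typically much
# lower, so a rate in this ballpark is plausible.
```

So the slowness isn't Disk Utility being broken per se; it's the bit-for-bit strategy that hurts.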
Well, I didn't have 11 hours, so I started over by reformatting all the drives and rebuilding the entire RAID set from scratch... this time I tested turning off another drive... with the same infuriating result.
It seems, from these tests, that once Disk Utility has detected a failure, whether real or not, there is nothing you can do to convince it otherwise, and you are then forced to go through the painful rebuilding process.
Theoretically, this could be a real problem for this sort of setup: say you forget to turn on all of the drives before boot, or the power goes out and a drive shuts down a split second before your computer does, or a drive cable is loose, or the drives don't all initialize for some reason, and so on.
So I am here, my fellow Applians, to ask for your assistance in either finding what went wrong, or in circumventing Disk Utility's infallible memory (perhaps by deleting some plist?).
I would greatly appreciate the help,
Cheers!
--Tim