Discussion in 'Mac Pro' started by ahavriluk, Sep 17, 2010.
Would creating a RAID 0 from two drives, where one of them has data, erase all the data?
Yes. Creating a RAID 0 set will destroy all data on both drives.
If you have enough space on an external drive, you can either copy the files over or (for a whole filesystem) use Carbon Copy Cloner to make a restorable disk image of the drive. But yes, as the last poster said, setting up a new RAID of any kind in OS X will blank all disks involved.
OK, I understand RAID 0, but the incremental speed bump is hardly worth losing everything versus RAID 1, which mirrors one drive to the other, especially with platter drives, which we know fail eventually. The jury is still out, IMHO, on SSD failures, so perhaps RAID 0 is worth it there. Then again, how fast do you need something to process? We all have differing opinions, for sure.
I'd rather have safety first and speed second. I say this due to experiencing MANY HDD platter failures over the last 25 years of computer use.
It's worth it to some people. I'm a video editor, and RAID 0 has the best disk read speed of any RAID mode. We need that speed to play back multiple video streams, large video formats, or uncompressed video - all of which require lots of throughput. If you're worried about losing the data on your RAID 0 array, keep a second copy of it somewhere, or upgrade to RAID 5.
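As a rough illustration of why stripe-set throughput matters for this kind of work, the bandwidth needed for uncompressed playback can be estimated from frame size and frame rate. The resolution, bit depth, and frame rate below are illustrative assumptions, not figures from this thread:

```python
# Back-of-the-envelope data rates for uncompressed video playback.
# Shows why RAID 0's sustained read speed matters to editors: a single
# drive of this era could not feed several such streams at once.

def uncompressed_rate_mb_s(width, height, bytes_per_pixel, fps):
    """Sustained throughput needed for one uncompressed stream, in MB/s."""
    return width * height * bytes_per_pixel * fps / 1e6

# Assumption: 10-bit 4:2:2 HD is roughly 20 bits (2.5 bytes) per pixel.
hd_stream = uncompressed_rate_mb_s(1920, 1080, 2.5, 29.97)
print(f"One uncompressed HD stream:  ~{hd_stream:.0f} MB/s")
print(f"Three simultaneous streams: ~{3 * hd_stream:.0f} MB/s")
```

Even a single uncompressed HD stream lands in the 150 MB/s range, which is why multi-stream editing pushes people toward striped arrays.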
On a workstation that has proper backups and quality drives, RAID 0 is great. Servers are a really good reason for a redundant disk array, but I don't see it in a workstation. In fact, the only reason I would use RAID in a workstation is RAID 0; otherwise it is a PITA. Just make sure you have incremental and nightly backups to a good system.
Unless it's your living? Then it's a serious question of whether it's worth it.
Say you work on something and three hours into it you lose a HDD. There goes three hours of work, plus three hours to redo it, and now you're behind with other clients. That's about $750 lost! Not worth the risk.
If it's stuff you can live with repeating, then that's a choice everyone can make. I use it for some things myself.
If video people need it, and the budget won't allow a larger RAID 6 setup, then it's a choice.
So I agree, but with the reservation that everyone's needs are different.
Not sure what the PITA is, though? If you use a card like an Areca, setting up a RAID 0 or a RAID 6 is really the same thing.
Either is fine; it depends on what you're doing. I make my living working in CS5, everything from simple Photoshop work to full-length After Effects videos. For me, an SSD RAID 0 makes perfect sense. Of course, I also have a network of storage and backup solutions on hand, and working files are stored on the network, so I don't have to worry about my machine crashing.
But yes, if the speed and storage-space cost of RAID 1 is acceptable to you, it is easily one of the safest solutions.
As for SSD fault rates, I can't speak for the entire market, but I own one Patriot, one Kingston, and five OCZ SSDs, and thus far I haven't even encountered the slowdowns people talk about (except on the Kingston, which is first-gen and has no TRIM support, but it's in a netbook, so who cares? The point of that one was something that wouldn't break when I drop it). If you do your homework and rigorously test your drives when you get them, you'll be fine. In my experience, and from what I've heard so far about what is, for the mass market, a 2-3 year old technology, the failure rate is much lower than that of traditional drives, at least for DOA and 6-month to 1-year faults.
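In the spirit of "rigorously test your drives when you get them", here is a minimal sketch of a sequential-throughput smoke test. The file path and test size are my own assumptions, and note that the read pass will largely hit the OS page cache, so treat the read figure as an upper bound rather than true platter/flash speed:

```python
# Crude sequential write/read throughput check for a newly installed drive.
# Point the path at a file on the drive under test; the file is removed
# when the test finishes.
import os
import tempfile
import time

def sequential_throughput(path, size_mb=64):
    """Write then read size_mb of data at `path`; return (write, read) MB/s."""
    data = os.urandom(1024 * 1024)  # 1 MiB of incompressible data
    t0 = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(data)
        f.flush()
        os.fsync(f.fileno())        # make sure the data actually hit the disk
    write_s = time.perf_counter() - t0

    t0 = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(1024 * 1024):  # stream the file back in 1 MiB chunks
            pass
    read_s = time.perf_counter() - t0

    os.remove(path)
    return size_mb / write_s, size_mb / read_s

w, r = sequential_throughput(os.path.join(tempfile.gettempdir(), "drive_test.bin"))
print(f"write ~{w:.0f} MB/s, read ~{r:.0f} MB/s")
```

Running it a few times right after purchase (and again after heavy use) gives a quick sanity check that a drive is behaving, without needing a dedicated benchmarking tool.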
How much data do you need to store?
How critical is that data?
How much is your time worth (i.e. can you spare the necessary hours needed to perform a recovery)?
These are pertinent questions that will determine whether a user needs to go with a redundant level, or can live with the additional risk of a stripe set (it and JBOD are the bastard children of RAID, as neither is redundant).
What this translates to is: even if you have a proper backup system in place (not just a location with sufficient capacity, but auto-saves of application data and incremental backups on, say, a 1-hour schedule), can you afford the time necessary to repair and recover if there's a problem?
Please note that as the capacity increases, the time required will increase, and as the member count grows to hold that data, the risk increases as well (i.e. a 2-disk stripe set is safer than a 4-disk set, and so on). Meaning the odds go up that you will see a disaster within a given unit of time.
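The "risk grows with member count" point can be made concrete: a stripe set loses everything if any single member fails, so the members' survival probabilities multiply. The 5%-per-year disk failure rate below is an assumed figure for illustration only, not a measured one:

```python
# Why a wider stripe set is riskier: RAID 0 fails if ANY member fails,
# so the array survives only if every disk survives.

def stripe_failure_prob(members, annual_disk_failure=0.05):
    """Probability that an n-disk RAID 0 loses data within a year."""
    return 1 - (1 - annual_disk_failure) ** members

for n in (1, 2, 4, 8):
    print(f"{n}-disk stripe: {stripe_failure_prob(n):.1%} chance of loss per year")
```

With these assumptions a 2-disk stripe is at roughly twice the risk of a single disk, and an 8-disk stripe approaches a one-in-three chance of loss per year, which is the math behind "a 2x disk stripe set is safer than a 4x set".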
So it's all a matter of perspective need.
For those that have large amounts of data that is absolutely critical (they absolutely cannot afford to lose it; data recovery services are extremely expensive), and who can't afford the time needed to perform a recovery (which includes time spent re-doing any missing work, something that happens even with the proper backup system described above), a redundant array becomes a necessity, not an option.
Performance, capacity needs, and other factors will decide whether that means a software-capable level, or whether a hardware solution is needed (a parity-based array, 5/6, or even nested parity, 50/60).
Just a thought.
I and many others run this setup and make a serious living. If you aren't using version control, hourly backups, nightly clones then you should be worried about your data.
At its core, absolutely correct. But there are ways to make things far more secure while using a RAID0 (especially using one for scratch/boot and working with files on a separate volume that is RAID1 or has a regular back-up system).
The optimal situation isn't in everyone's reach/price point. So while it would be better, for example in my set-up, for our server to be using RAID 1+0 instead of RAID 5, the loss of a terabyte of space isn't worth the added security (our most critical files are kept on a triple RAID 1, however). Running my desktop on a RAID 0 has never frightened me in the least, because I am able to get back up and running from a total system loss in about an hour, and all my files, other than the modifications since the most recent save, are secure. For me, the trade-off of time saved by using my configuration is worth the possible time lost. I realize that everyone's set-up is different, but unless you need your set-up constantly running 24/7 and every bit of data protected, working within your means to get the best balance of protection and performance just makes sense.
OOOHHHHH A SERIOUS LIVING compared to a non serious one heheheeh
What you describe for backups is a good way to go (I'm still amazed at the number of users with critical data that don't have anywhere near a decent backup system), but it's not the only consideration.
That's all we're trying to point out. Availability (uptime), redundancy, and performance can all be improved from other RAID levels, and a better fit for the user's specific needs.
A stripe set may work for you, but it may not be the best solution for someone else (every configuration has a different balance of the above considerations), and is further complicated by budget constraints (usually what's really needed can't be had, as there's insufficient funds).
From my observations, this is a serious problem with independent pros, and is getting to be more of a problem with corporate situations as well (holding off on tech purchases/trying to piecemeal it together).
I agree. As you mention budget, if you had a much larger capacity requirement, 10 is even harder to justify (budget constraints).
For example, let's say the usable capacity requirement is 12TB (using 2TB enterprise disks), with redundancy = 2 members (to get a fair comparison).
In a 10 configuration, you'd still need a separate controller of some kind, but you can get away with a non-RAID HBA (say the ATTO H608): 4x disks in the internal HDD bays, 8x on the card, with those disks in an external enclosure.
RAID 10 option:
12x disks (WD RE4) $3480
ATTO H608 $400
Enclosure (TR8X) $400
HDD Adapter Kit $129
Total = $4409

Parity (RAID 6) option:
8x disks (WD RE4) $2320
Enclosure (TR4X) $230
Internal to External Cable $60
HDD Adapter Kit $129
Total = $3339
That's a $1070 difference for the same usable capacity and redundancy. Sustained throughputs would be around the same as well; 10 would have a performance advantage for database usage.
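Reading the 8-disk option as a RAID 6 set (two parity members, matching the "redundancy = 2 members" assumption above), a quick sketch checks the "same usable capacity and redundancy" claim:

```python
# Sanity check on the capacity comparison above: RAID 10 yields half the
# raw capacity (mirrored pairs); RAID 6 yields (n - 2) disks' worth, with
# two members' worth of parity. Disk size and counts are taken from the
# example in the thread (2TB enterprise disks).

disk_tb = 2

raid10_disks = 12
raid10_usable = raid10_disks * disk_tb // 2   # half lost to mirroring

raid6_disks = 8
raid6_usable = (raid6_disks - 2) * disk_tb    # two disks' worth of parity

print(f"RAID 10 (12 disks): {raid10_usable} TB usable")
print(f"RAID 6  (8 disks):  {raid6_usable} TB usable")
```

Both configurations land on 12TB usable while tolerating two member failures (any two disks for RAID 6; one per mirrored pair for RAID 10), which is what makes the price comparison apples-to-apples.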
For video/graphics work, as described most commonly on MR (and in this thread), sustained transfers are the usage pattern.
I can't speak for you, but more than $1k will usually get someone's attention, so the cheaper solution would definitely be more attractive for most.
You've put together a nice setup from what I can tell (1 hr recovery time = nice for a DAS implementation).
In all honesty, lots of people do run RAID 0, and lots of pro forums are filled with people asking what to do because their HDD failed.
Your backup regimen is sound, but I still say: if you are making serious money and you already do all this, why leave out the one thing that can take you down?
This is like being prepared for a cross-country journey with spare parts, water, and food, but no spare tire.