
MIKE1971

macrumors newbie
Original poster
Aug 8, 2008
I have been looking at a lot of different options on higher-capacity storage with redundancy.

Has anyone looked into setting up a hardware RAID system with over 4TB, and what did you do? I really would like to have at least 4TB, but I also want to run Mac OS on an SSD. I am in design and use a lot of large files, so this is really the problem I am trying to solve.

Idea One:
So I have been mulling over the idea of getting a RAID card (RocketRaid 3520) and running RAID 5 with four 2TB drives. I would need to figure out where to put the SSD. The RAID card should also boost performance.

Idea Two:
Running four 2TB drives as RAID 1+0 and figuring out how to also use an SSD to run the OS as a 5th drive (would I still need a RAID card? Can I run it over FireWire or something?). This isn't my ideal choice, but it doesn't involve RAID cards and the extra headache.

Thoughts:
I have read that recovering a RAID 5 is horrible, and I also don't want to have to worry about future conflicts with a RAID card, but it should boost my speed considerably? I don't know if I understand RAID 1+0 and how it performs and works if a drive fails - sure, that is googleable. I am also thinking about using an online backup as well, like Mozy or whatever.

Anyone have any other suggestions on a set up or see any issues? I really would appreciate it as this is just overwhelming. Thanks in advance.
 

Honumaui

macrumors 6502a
Apr 18, 2008
I would say read this thread :)

https://forums.macrumors.com/threads/998644/
then come back and post up some questions since it was about the same thing

I do PS and photography for a living
I have an Areca RAID card with over 4TB; my card is the 1222x
avoid the Highpoint cards, they are junk IMHO, and I also speak from having owned them, so this is not just something I heard !! total junk, nothing but problems !!!


your idea 2
no raid card needed, you run the extra SSD in the optical bay as the connections are there already !
the other 4 HDDs would be in the sleds !
basically raid 10 is: you take two HDDs and create a raid 1, then repeat that !
so now we have two raid 1 setups ! you create a raid 0 and drag those two into the raid 0, creating the raid 10 setup
the idea is that if a disk dies on each side you can keep running. its pretty quick, and with a card raid 10 can offer good things for some people. for most designer type people accessing files individually, raid 6 is the way to go with a card, and without a card raid 10 is a great option, a bit of speed and protection :)
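To make that layout concrete, here is a tiny illustrative sketch (Python; the disk names and pairing are made up for the example, not anything from a real controller) of which two-disk failures a 4-disk RAID 10 survives:

```python
from itertools import combinations

# The nested layout described above: two RAID 1 mirrors (A1+A2, B1+B2)
# striped together into a RAID 0. Disk names here are made up.
mirrors = [{"A1", "A2"}, {"B1", "B2"}]
disks = sorted(d for pair in mirrors for d in pair)

def survives(failed):
    # The array survives as long as no mirror loses both of its members.
    return all(pair - set(failed) for pair in mirrors)

# Any single failure is survivable; two failures are survivable only when
# they land on opposite mirrors (4 of the 6 possible two-disk combinations).
for failed in combinations(disks, 2):
    print(failed, "OK" if survives(failed) else "array lost")
```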


thoughts on recovering ? I run raid 6 ? same difference, yeah recovering is slow, but oh well, if you ever need it
analogy :) I would rather have run-flat tires on a car these days if I can :) than have to get out and change a tire. much better to be able to still drive and control things than just POP, have it go !


but with quality cards you should not have the issues you worry about. if a HDD goes down ? better that than losing data, and raid 5/6 on a good card is easy: stick in the new disk and rebuild it !

a good card with good drives is very fast
 

chatfan

macrumors regular
Nov 2, 2006
Just remember one simple rule: all hard drives will fail in the end, and if you buy 4 drives at the same time, they usually die at around the same time. So keep a backup of the RAID, because there will be a point it fails - unless you replace it in a few years.

Used / using over 68 hard drives, 18 have died on me (mostly Maxtor). The new WD drives are very good, no problems so far, but they can still die. And RAID 10 has a problem: you halve capacity and gain only a little security, because you really need an external backup anyway. The best is to look for cheap unlimited online backup storage.

anyway, bit OT, but the whole RAID idea is not the heaven it seems. (Lost 3 months of photos because my Mac RAID card ROM flipped, re-initialised the RAID and made it unreadable for a new card.)

Right now I've got 3 QNAP 809 boxes on iSCSI; they manage around 100MB/s read / write, all in RAID 6. Two have 8x 1.5TB and one has 8x 2TB WD Black. The third is actually a backup of the backup. And then there is an online backup as a fourth, out-of-the-building one.... RAIDs suck :)
 

nanofrog

macrumors G4
May 6, 2008
Has anyone looked into setting up a hardware RAID system that has over 4TB and what did you do?
  • ARC-1231ML running 8x WD RE3 drives in RAID 5
  • ARC-1680ix12 running 4x 15k rpm Fujitsu SAS drives in RAID 5
  • Other disks used for multiple OS's and backups

It's all internal, but not in a MP (got rid of it, as it wasn't doing what I needed, and wasn't going to be cost effective to get it sorted out; namely due to the various external enclosures needed for so many disks). Nor was I able to use the second RAID card without throttling bandwidth (8x card in a 4x electrical slot).

I really would like to have at least 4TB but I also want to run MAC OS on an SSD drive. I am in design and use a lot of large files, so this is really the problem I am trying to solve.

Idea One:
So I have been mulling over the idea of getting a RAID card (RocketRaid 3520) and running RAID5 with 4 2tb drives. I would need to figure out where to put the SSD drive. The RAID card should also boost performance.
I'm not a big fan of Highpoint, as they don't design or manufacture their own products. What this means for users is that their support sucks, and the quality/usability of their products varies drastically.

So I recommend staying away from them if you're new to RAID and will never need to boot from the card (other users have had problems getting the EFI firmware needed, and some didn't get it at all).

That said, there are other brands out there that make much better cards than either Highpoint or Apple's overpriced pile of junk. Areca and ATTO are two companies to look at. Areca's products are cheaper, but have both excellent performance and features. The compromise may be the support (email instead of telephone, as they're in Taiwan; ATTO is in the US).

But without further information, I'm stabbing in the dark.
  • What system do you have?
  • What kind of OS support do you need?
  • What is the performance requirement?
  • Future capacity expansion (cheaper to get more ports than you need now, and just add drives and enclosures later on)?
  • Do you need to boot OS X?
  • Do you need to run Windows and/or Linux?
  • Budget?

Idea Two:
Running 4 2tb drives as RAID1+0 and figuring out how to also use an SSD to run the OS as a 5th drive (would I need a RAID card still? Can I run as Firewire or something?). This isn't my ideal choice and it doesn't involve RAID cards and extra headache.
What exact system do you have?

Assuming it's an '09/10 model:
  • HDD bays 1 - 4 = 4x 2TB disks in RAID 10
  • Empty Optical Bay = OS/applications SSD

The system's SATA controller will have sufficient bandwidth to handle this as well (ICH has a limit of ~660MB/s; assuming each mechanical disk is 110MB/s, the RAID 10 is good for ~220MB/s, which leaves sufficient bandwidth for an SSD, though the bottleneck is actually a single SATA port, which tops out at 270 - 275MB/s).
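For anyone who wants to see the arithmetic behind that, a rough sketch (Python; the figures are the approximations quoted above, not measurements):

```python
# Back-of-the-envelope version of the bandwidth budget above.
ich_limit = 660    # MB/s, approximate aggregate limit of the ICH
per_hdd   = 110    # MB/s, assumed sustained rate of one mechanical disk
sata_port = 275    # MB/s, rough ceiling of a single SATA port (the SSD's real cap)

raid10_members = 4
raid10_rate = (raid10_members // 2) * per_hdd   # data striped over 2 mirror pairs -> ~220 MB/s
headroom    = ich_limit - raid10_rate - sata_port

print(f"RAID 10 ~{raid10_rate} MB/s, SSD capped at ~{sata_port} MB/s by its port, "
      f"leaving ~{headroom} MB/s of ICH headroom")
```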

Thoughts:
I have read that recovering a RAID5 is horrible and also don't want to have to worry about future conflicts with a RAID card but it should boost my speed considerably? I don't know if I understand RAID1+0 and how it performs and works if a drive fails - sure that is googleable. I am also thinking about using an online backup as well, like Mozy or whatever.
Recovery isn't harder than anything else, and if you pay attention (drive failures), it's actually easier.

What this means is, if a drive dies, it goes into a degraded state (slower). Pull the dead disk, and pop in a new one. Set it as an independent disk (in the RAID card's control panel; specific software for ATTO, via a web browser for Areca), and it then automatically rebuilds the array. :D Same for any RAID card with a redundant array (1/10/5/6/50/60).

If it happens to go badly wrong, it's not much different than recovery from anything else. Fix the hardware, restore the data from backups, and re-perform any missing work (what you did between the last backup and the failure).

The above situation applies to everything from a single disk to the most complicated array set in the history of the world. But what the redundant levels do for you is keep you from having to do this in the first place, unlike single disks or stripe sets, where it's required every time there's a failure.

The only thing I can think of that RAID 5 is a problem with (or any other parity based array for that matter), is when it's a software implementation (computer does the calculations). There's something called the write hole issue associated with parity based arrays, and software just is not capable of dealing with it. You need a proper RAID card for this (contains an NVRAM solution = hardware), but if you have such a card, you'll be fine (it needs to run with backup power, which ideally means a card battery and UPS).
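A toy sketch of what that write hole actually is (Python, purely illustrative; a proper card's NVRAM/battery lets it finish the pending parity update after power returns):

```python
# Toy model of the "write hole": a RAID 5 stripe update is two separate
# writes (data, then parity), and losing power between them leaves the
# stripe inconsistent. Not how a real controller works internally.
stripe = {"disk0": 0b1010, "disk1": 0b0110}
stripe["parity"] = stripe["disk0"] ^ stripe["disk1"]   # parity = XOR of the data blocks

def write_block(disk, value, crash_before_parity=False):
    stripe[disk] = value                                # step 1: new data hits the platter
    if crash_before_parity:
        return                                          # power lost before step 2
    stripe["parity"] = stripe["disk0"] ^ stripe["disk1"]  # step 2: parity updated

write_block("disk0", 0b1111, crash_before_parity=True)
consistent = stripe["parity"] == (stripe["disk0"] ^ stripe["disk1"])
print("stripe consistent after the crash:", consistent)   # False -> the write hole
```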

But a UPS is a good idea to have anyway (it provides protection for your system from undervolt conditions <aka brown-outs>, which are actually quite common). Unlike a blackout (no power), brown-outs can actually damage electronics (some power, but not full).

You might want to take a look at the RAID wiki to get you started (you seem to have some understanding of it, but not sure where it ends). The UPS wiki is another good one to read.

I know it's a lot to digest, but it should get you started. :)
 

MIKE1971

macrumors newbie
Original poster
Aug 8, 2008
Thanks

Thanks everyone so far. My head is spinning as I have spent two days reading forums online etc. I am learning a lot (slowly) but still not as much as most people online seem to know. I am starting to think I am over my head but also feel like I am close.

So I am using a Mac Pro Dual-Core Intel Xeon 3GHz on Mac OS X 10.5. Thinking about upgrading to 10.6. I only need Mac OS. I really want HD space with speed. I like the idea of expanding in the future, but I'm not counting on it now.

Raid Card: I have been seeing a lot of people use Areca, so going to narrow it down to that.
areca ARC-1222 (also the x)
areca ARC-1231ML-2G

These two cards are the ones that I see being used. I really don't know the difference between them, and from what I know, I need a bootable card if I am running a hardware RAID. RAID 5 has interested me the most, as it seems the most secure for the amount of space. I have read about the different kinds of RAID many times and I get the general gist of it all. I would use the four bays to hold the drives.

The second issue is how to use the SSD. From reading, it seems like using the SSD for a boot disk just makes all the apps load faster, which I couldn't care less about if that is the only advantage. I am more concerned with having speed when I use applications like Photoshop. So I'm thinking about maybe using it as a scratch disk instead. I also can't find any info on whether you can hook multiple SSDs to the optical bay. If so, I could do one for startup and one for scratch.

Honumaui, I appreciate that link and your post; I'm reading it slowly and it has helped out a lot. Also thanks Nanofrog, reading your posts on other threads now and trying to put everything together in my head. It is still a bit much. I have to eat dinner and will read more tonight.

Chatfan, yeah. That is why I am going to go with online backup as well. It's pretty cheap too, so it seems worth it.

Thanks again all.
 

noire anqa

macrumors regular
Aug 20, 2010
I'd seriously consider RAID 10 if I were you. RAID 5 is a pain because of the constant parity overhead. RAID 10 will give you the best speed (faster read, write AND seek than RAID 5) and redundancy (best case it can tolerate 2 failed disks vs RAID 5's best case of 1 disk), coupled with the shortest rebuild time in the event of a disk failure (RAID 5 is a dog to rebuild). The only point where RAID 5 is superior to RAID 10 is total array capacity: with 4x 2TB disks, RAID 5 would give you 6TB where RAID 10 would give 4TB.
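For reference, the capacity math being compared (a minimal Python sketch of the standard formulas):

```python
# Usable capacity of 4x 2TB disks under the two levels being compared here.
n, size_tb = 4, 2

raid5_usable  = (n - 1) * size_tb    # one disk's worth of capacity goes to parity -> 6 TB
raid10_usable = (n // 2) * size_tb   # everything is mirrored once                 -> 4 TB

print(f"RAID 5:  {raid5_usable} TB usable, survives any single disk failure")
print(f"RAID 10: {raid10_usable} TB usable, survives 1 failure always, "
      "2 only if they hit different mirror pairs")
```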

I concur with the earlier posters: put an SSD in the optical bay and run your OS from that. Use the 4 built-in sleds with 4x 2TB drives. Also, very important, back up your data! The drives WILL fail on you eventually. Redundancy is great, but redundancy != backup. Backup also saves you from user error, which redundancy does not.

If you take 4x 2TB internal in RAID 10, then you could stick 4x 1TB (or even 4x 2TB if you've got the cash and want multiple backups) in an external eSATA RAID enclosure, with an eSATA card in one of your PCI slots, and back up to that array.
 

Honumaui

macrumors 6502a
Apr 18, 2008
I'd seriously consider raid 10 if i were you .. raid 5 is a pain because of the constant parity overhead. Raid 10 will give you the best speed (faster read, write AND seek than raid5) and redundancy (best case can tolerate 2 failed disks vs raid 5's best case of 1 disk) coupled with the shortest rebuild time in the event of a disk failure (raid5 is a dog to rebuild). the only point in which raid 5 is superior to raid 10 is in total array capacity - if you used 4x 2TB disks raid 5 would give you 6TB where raid 10 would give 4TB.

I concur with the earlier posters, put an SSD in the optical bay, run your OS from that. Use the 4 built in sleds with 4x 2TB drives. Also, very important, back up your data! the drives WILL fail on you eventually. Redundancy is great but redundancy != backup. Backup also saves you from user error which redundancy does not.

If you take 4x2TB internal in raid 10, then you could stick 4x1TB (or even 4x2TB if you've got the cash and want multiple backups) in an external esata raid enclosure, with an esata card in one of your pci slots and backup to that array.

I would never say raid 10 is always faster. that's a pretty bold, wide statement that I would not say is true ?

raid 5 or 6 with a good card is not a pain. you set it up and you are done.
rebuilds run in background mode and you keep working ?

raid 6 ANY 2 drives build it with a spare ANY 3 Drives

4 disk raid 10: you better pray, if you lose 2 HDDs, that they are on opposite sides !!!

come on, both have pros and cons, and to paint one as bad kinda shows a lack of knowledge, or a bad experience with a card like a highpoint that then makes you feel all raid setups are like that ?

I used to run an 8 disk raid 10 setup. it was nice, back before I had good choices on the mac that I liked ?

this is like saying a Porsche is superior in every way to a pickup truck ! except for putting stuff in the back !
OK then pull that boat !! OK get up that muddy snowy hill !
 

Honumaui

macrumors 6502a
Apr 18, 2008
look into the 1880 depending on budget :)

read some benchmarks to give you an idea of what the 1880 series can do :)
http://arecaraid.com/forum/viewforum.php?f=7&sid=f03ba685088d500629c7643ae01a4143

OK
I have a 1222x because it has external connectors
the case has connectors too, so it's two cables. the way the cable works, it's like 4 cables in one, so the speed is not choked in any way, unlike a PM or port multiplier you may be familiar with ?
these miniSAS cables allow each HDD to run at full speed

the case I have from sans digital comes with the cables and is about $400 at newegg. the card for the external is $515 at PC-Pitstop, and the battery module is another $100 and worth getting ! so figure $1000 for the base setup minus the HDDs

1222x and 1231ML-2G are basically the same card ? different connectors, and on the 1231ML you can hook up 4 more drives. the biggest thing is the 1231 lets you add more cache memory, but the processor and the speed are the same
the ML or Multi Lane comes with SAS 4i connectors ? (i for internal, 4 for the number of drives on each connector) so the cables break out to single SATA type connectors ? for an external you dont really want this cable ? unless you are going to run some internal drives, which could be a way to hook up a few inside the mac ?

you can get adapters of different types to get your card to give you ports that hook up to external cases like these
http://www.pc-pitstop.com/sas_cables_adapters/


ok if I had a 1231 I might run my SSD off it ?
then the internal connectors are SFF-8087 I would get one of these http://www.pc-pitstop.com/sas_cables_adapters/AD8788-2.asp
and a case like I have http://www.newegg.com/Product/Product.aspx?Item=N82E16816111092

the reason I got my card is the 1880 series was not out :) and for what it's used for it does the job. I needed an 8 drive raid 6 setup
the speed difference in the new 1880 cards is going to be more controlled by the HDDs, but looking forward the 1880 is the way to go, 6gbps etc.. worth looking into if your budget allows ?
now depending on the model and what you do with it, you may need external adapters, and figure the battery being an extra $100 ?
the 1222x at $515 is a great deal, no extra adapters etc.. needed :)

you want to go to http://www.areca.us and make sure the HDD are on their list of OK drives to use with the controllers
get the RE3 if you are starting from scratch :)

unless you need those 4 internal, or you want to run 12 drives ? get the 1222x and the case, or an 8 bay case of your choice with an 8088 connector ?
I would skip the 1231 and go for the 1880 series !! the price gets closer, so might as well ?
 

MIKE1971

macrumors newbie
Original poster
Aug 8, 2008
Still Reading...

Thanks tons for all the reading. I am still trying to understand it all, but I think I am grasping it more and more each time. So my budget really doesn't have a cap, but when I see $3,000 I start to flinch. I was sort of expecting to pay 1g at first, fully knowing that every time I quote what I want to spend, I have to double it for reality. So 2g sounds good to me.

So I did some research the same time you were posting and did see that the WD RE4 drives were the way to go after comparing the 2tb drives. So that is solved.

When I was reading, the 1222x seemed to jump out also. I like the idea of expanding, but for now I will probably stick with 4 HDDs and 1 SSD and see how that goes. OR if you all think it would be smarter to go with 2 SSDs (one for OS and one for scratch), I could do that too.

I was not planning on getting an external case, but I am not opposed to that. I do like the idea of being able to expand. How easy is it to expand a RAID5 system? How does that work? Can you run any number of drives on a RAID5 if you have the 1222x card? Can all of those 8 ports/drives run together? So I could run 8 drives on the 1222x external as one RAID5?

I think this is where I am now; it's such a slow pace. I am used to understanding things much faster. :)

Thanks again for all your help!!!! I really really appreciate this.
 

MIKE1971

macrumors newbie
Original poster
Aug 8, 2008
Ps

I am thinking about getting a new Mac Pro too. The Quad one ($2,500). Seems like a smart move when thinking about my taxes!
 

Honumaui

macrumors 6502a
Apr 18, 2008
Raid is like a disease :) those that have it know why they have it, how they got it, and know they can't get rid of it !

I would say avoid the disease !


seriously though, unless you really have a reason, avoid it for now ! grow into it and wait

so let's back up :) why do you need 4 TB ? what files, what kind of stuff ? video, photos, music ? and what programs do you use :)


if you just want safety and some speed ? the raid 1+0 option with the black WD 2TB drives, at $150 each or so now on sale at newegg, is going to be the easiest way to go in some senses. $600 and you have 4TB

remember, a 1222x, case and battery is $1000, and that's empty. RE3s are about $130 each ? so another $1000, so you are at over $2000 for single storage, and then you have to back that up ? so what are you going to do for BU ;)
see how it gets out of hand :) $ wise

so let's say you do the 4x 2TB raid 1+0
now let's say you don't go past 3 TB of data, to keep it fast :)

get some 3TB external seagates. Frys had them on sale for $199. they are the new single HDD ones, not the raid-in-a-box setups !
you could get two and have two BU sets and call it good :)

OR get one, and get a 4 or 5 bay sans digital PM case and put in 3x 2TB green drives in JBOD for time machine. with the 5 bay you get a bit more room to grow, but it costs more up front ?

this way you have two sets of BU ?

lots of other ways; get a standalone raid 5 box for time machine etc.
 

nanofrog

macrumors G4
May 6, 2008
Thanks everyone so far. My head is spinning as I have spent two days reading forums online etc. I am learning a lot (slowly) but still not as much as most people online seem to know. I am starting to think I am over my head but also feel like I am close.
RAID isn't a quick and easy thing to understand. There's more to understand than just the levels: HDD types (enterprise vs. consumer models), enclosures, adapters, ... And a mistake in any part of the system can be disastrous.

So I am using a Mac Pro Dual Core Intel Xeon 3GHz on Mac 10.5. Thinking about upgrading to 10.6. I only need Mac OS. I am really wanting HD space with speed. I like the idea of expanding in the future but not counting on it now.
It's not as expensive as you might think using internal cards (there's a cable that can be used).

Raid Card: I have been seeing a lot of people use Areca, so going to narrow it down to that.
areca ARC-1222 (also the x)
areca ARC-1231ML-2G
Don't do this, as there are other suitable models that may better fit your needs and budget.

ARC-1222 = SAS model (it also runs SATA disks)
ARC-1231ML = SATA only

Both are specced to 3.0Gb/s per port, as they were designed before the 6.0Gb/s specification was created.

The Areca 1880 series is a very recent card, and is 6.0Gb/s compliant (just came out a month ago).

I need a bootable card if I am running a hardware raid. Raid 5 has interested me the most as it seems the most secure for the amount of space. I have read about the different kinds of RAIDS many times and I get the general gist of it all. I would use the four bay's to hold the drives.
Actually, you only need to boot the card if the OS is on it. All that's needed otherwise are the drivers (i.e. separate boot disk or array attached to a different controller, such as the system's ICH).

As per using the internal disks, you need to decide for certain if you're getting a new system now or not, as there will be different adapters needed to make it all work between the different systems. So not moving it from the existing 2006 model will actually save you money.

The reason for this, is that the 2006 model has a cable that attaches to the HDD bays and plugs into a MiniSAS connector on the logic board (SFF-8087 port). Unfortunately, it tends to be too short to reach the card, so you have to use an extension cable to make it reach.

From 2009 on, the data is carried by traces directly on the logic board, so it needs a different adapter (a kit actually, as it's directly in each HDD bay) to get the HDD bays operational with a 3rd party RAID card (Apple's pile of junk RAID card can use the traces, but isn't worth having).

The second issue is how to use the SSD. From reading, it seems like using the SSD for a boot disc just makes all the apps load faster which I could care less about if that is the only advantage. I am more concerned with having speed when I use applications like photoshop. So thinking about maybe using it as a scratch disc instead. I also can't find any info if you can hook multiple SSD's to the optical. If so I could do one for start up and one for scratch.
If you'd rather use SSD for scratch, that's fine as long as you realize that they have to be replaced every 1 - 1.5 years (MTBR = Mean Time Between Replacement).

Physical installation is easy, but getting them connected to a controller can be more difficult, depending on which machine is used (2006 - 2008 systems are easier on this account if you want to use the ICH). A separate controller isn't that big a deal, except there's more money involved.

For the 2009 systems, placing a single unit in the empty optical bay is easy, but you only have a SATA connection (ICH) for a single disk. This is where you'd need another card for a second disk at that location. Another option, one many don't like as much, is to remove the optical drive from the top bay, and move it to an external enclosure (USB is best, as it will work with any OS). This gives you another internal port for an SSD without consuming an HDD bay.

As you can see by now, the internal situations are specific to the models being used.

I'd seriously consider raid 10 if i were you .. raid 5 is a pain because of the constant parity overhead. Raid 10 will give you the best speed (faster read, write AND seek than raid5) and redundancy (best case can tolerate 2 failed disks vs raid 5's best case of 1 disk) coupled with the shortest rebuild time in the event of a disk failure (raid5 is a dog to rebuild). the only point in which raid 5 is superior to raid 10 is in total array capacity - if you used 4x 2TB disks raid 5 would give you 6TB where raid 10 would give 4TB.
This isn't completely true, especially with a proper RAID controller (not at all in this case).

Please don't take this as a harsh reply, as parity based arrays do mean the specifics are more complicated (i.e. software vs. hardware implementations in particular). I'll explain a bit further....

The parity overhead is totally moot with the equipment being considered, as it's done by the RAID card (Fake RAID controller = software implementation, you'd be correct in terms of system overhead). Since the Fake controller doesn't have a processor, it has to use system resources to get the calculations done, and it's more complicated than 10 = more clock cycles will be needed (shows up in CPU % utilization).

A proper RAID card however, has its own processor, cache, and NVRAM solution to the write hole (another problem with software implementations for parity based arrays = not suited).

As for performance between RAID 5 and 10, definitely not true at all, especially as you increase the member count. Using the same 4x disks, you'd get over 300MB/s out of the RAID 5 vs. ~200MB/s for the same disks in a 10 configuration. The cost for the speed is that the redundancy is only a single disk vs. 2 for a 10 configuration. But given it's a workstation (user at the system = total control, both in terms of settings accessibility and physically), it's an acceptable compromise compared to a remote system (= network access for settings/management, not physical access; someone has to be sent out if there's a problem).
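A rough sketch of the streaming math behind those numbers (Python; the 110MB/s per-disk figure is the same assumption used earlier in the thread, and real results will vary with the card and workload):

```python
# Rough streaming model behind the ~300+ vs ~200 MB/s figures above,
# assuming a hardware card handles the parity math.
per_disk = 110   # MB/s, assumed sustained rate of one member
n = 4

raid5_stream  = (n - 1) * per_disk    # data striped across n-1 members -> ~330 MB/s
raid10_stream = (n // 2) * per_disk   # striped across the mirror pairs -> ~220 MB/s

print(f"4-disk RAID 5 streaming ~{raid5_stream} MB/s, RAID 10 ~{raid10_stream} MB/s")
```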

And in OS X's case, you do place the overhead on the system (not terrible by any means, but it's there).

If you were talking about a RAID 6, you'd be correct (same redundancy, but the parity calculations do slow it down a tad as it's more complex than 1+0; both using the same 4x disks).

Also, very important, back up your data! the drives WILL fail on you eventually. Redundancy is great but redundancy != backup. Backup also saves you from user error which redundancy does not.
Absolutely.

If you take 4x2TB internal in raid 10, then you could stick 4x1TB (or even 4x2TB if you've got the cash and want multiple backups) in an external esata raid enclosure, with an esata card in one of your pci slots and backup to that array.
You even have to be careful with the external solutions if you want to use parity based arrays, as some are software implementations, and others use a simple, inexpensive hardware controller (RAID on a Chip, aka RoC), such as an Oxford 936 or similar part from other vendors (LSI, JMICRON, VIA).

I would never say raid 10 is always faster ? pretty bold wide statement that I would not say is true ?

raid 5 or 6 with a good card is not a pain ? you set it up you are done ?
rebuild in background mode you keep working ?
Definitely true.

I suspect the confusion is based on experience with software implementations for RAID 5 (disastrous at some point = when a problem occurs, not if), not a proper hardware controller.

raid 6 ANY 2 drives build it with a spare ANY 3 Drives
It's better to think of a 4 disk minimum to build a RAID 6 (just as it's a 3 disk minimum for RAID 5).

look into the 1880 depending on budget :)
+1 on this, as it opens up future expansion.

I can foresee issues similar to the 1.5 to 3.0Gb/s transition, as we're starting into the 3.0Gb/s to 6.0Gb/s transition now (it can be particularly problematic with RAID cards). This is the biggest reason to try and use 6.0Gb/s HDD's (they are beginning to show up), especially if you're going to be running SSD's on it as well (one of the attractive reasons for getting this card).

Simply put, it offers a longer usable lifespan as it's compliant with the newer specification. ;)

1222x and 1231ML-2G are basically the same card ?
Actually, there's a difference that may or may not matter for a specific user's needs (see above). ;)

you can get adapters of dif types to get your card to give you ports that hook up to external cases like these
http://www.pc-pitstop.com/sas_cables_adapters/
Do not attempt to use these types of adapters with SATA drives. The reason is, the voltages are too low (600 mV DC), so the array is unstable at best (you may not even get it to initialize).

If you're using SAS disks and/or SAS expanders, you can use them, as SAS uses much higher voltages (20V DC).

the reason I got my card is the 1880 series was not out :)
You, me, and most everyone else. :D

now depending on the model and what you do with it will you need external adapters and figure the battery being a extra $100 ?
the 1222x at $515 is a great deal no extra adapters etc.. needed :)
Again, skip the adapters with SATA disks. Even with SAS, there's a less expensive alternative.

Instead, use an internal to external MiniSAS cable. ;) This will work with SATA disks (been there, done all of this, so it's definitely not opinion; I never thought the contact resistance would have been significant enough to cause all the problems that occurred, but I was absolutely wrong on that one :().

you want to go to http://www.areca.us and make sure the HDD are on their list of OK drives to use with the controllers
get the RE3 if you are starting from scratch :)
Absolutely. This simple step will save endless hours of aggravation due to trying to use what are ultimately the wrong drives (run through every possibility to get them running, only to find out that none of it helps at all).

Another thing that needs to be mentioned (just in case it hasn't or hasn't sunk in), is that when using a RAID card, you must use enterprise drives (that's all that will be listed on Areca's HDD Compatibility List), as the recovery timings used are different than consumer models. It has to do with how recovery is handled between the system's controller and the RAID card (consumer = 0,0; enterprise = 7,0; values are in seconds, read & write respectively).

Thanks tons for all the reading. I am still trying to understand it all but think I am grasping it more and more each time. So my budget really doesn't have a cap but when I see 3,000 I start to flinch. I was sort of expecting to pay 1g at first, fully knowing that every time I quote what I want to spend, I have to double it for reality. So 2g sounds good to me.
$1000 is probably going to be a bit too tight, especially with a card. Drives (capacity and quantity needed) and enclosures will have the greatest impact on the cost.

It seems that you don't need an SSD for a boot drive, and you can actually put the scratch space on the primary array when using such a solution (i.e. add in a separate scratch array or just expand the existing array for performance down the road as funds become more available).

The point is, once you start to narrow it down, there are ways to plan for future expansion to keep the initial costs down while getting you started, and allows you to grow in both capacity and performance (i.e. buy a sufficient card for what you really need, but only buy drives and enclosures that are needed at that instant).

There are options (you might even have to transition from OS X's RAID 10 implementation using enterprise disks to a RAID card later on; not sure of the capacity requirements just yet). ;) This is a worst case, but you can recycle the disks to the controller later on (no wasted funds). But you will need a proper backup system to start with, so you won't lose any data you've already created that needs to be kept.

When I was reading the 1222x seems to jump out also. I like the idea of expanding, but for now I will probably stick with 4 HDD and 1 SDD and see how that goes. OR if you all think it would be smarter to go with 2 SSD (one for OS and one for Scratch) I could do that too.
The ARC-1222 (i or x) are good cards, and offer a nice price/performance ratio. It's why they're so popular for MP users (few if any would ever need to use SAS expanders; 1680 and 1880 series can use SAS expanders = up to 128 disks on one card). ;)

As per SSD's, if your budget is that tight, then put upgrade budgets into RAM, the primary array, and backup systems.

I was not planning on getting an external case, but I am not opposed to that. I do like the idea of being able to expand.
You can use the internal card, the HDD bay adapter (assuming a new machine), and drives to get started. Then get the internal to external cable linked above, and a 4 bay SAS enclosure later on for expansion.

How easy is it to expand a RAID5 system? How does that work? Can you run any number of drives on a RAID5 if you have the 1222x card? Can all of those 8 ports/drives run together? So I could run 8 drives on the 1222x external as one RAID5?
  • Expansion = easy. Just physically install the disk, and add it to the array (via the web interface). The system will do the rest.
  • Minimum drive count = 3, up to the port limit of the card. As a general rule, I wouldn't ever go over 12 disks in a RAID 5, but that's not an option with the ARC-1222 anyway (limit = 8 in this case, and is safer anyway).

I am thinking about getting an new Mac Pro too. The Quad one ($2,500). Seems like a smart move when thinking about my taxes!
This is important, as the two different systems are different internally in terms of the adapters needed (how Apple dealt with the HDD bay connections).

Raid is like a disease :) those that have it know why they have it how they got it and know they cant get rid of it !

I would say avoid the disease !
I think it may already be too late. :eek: :p

serious though unless you really have a reason avoid it for now ! grow into it and wait

so lets back up :) why do you need 4 TB ? what files what kind of stuff ? video photos music ? and what programs do you use :)
I've asked all the pertinent questions, and am awaiting answers. Once given, we can go from there.

The mention of the $1000 target (not sure how hard this is) will very likely be problematic. The internal card + HDD adapter kit for the 09/10 systems = $589 before shipping (still need enterprise disks, and with the 4TB capacity requirement, it will go over by a notable amount).


The 1TB WD RE3 drives are $130 each at newegg, and the 2TB RE4 drives are $290 each.

If the capacity requirement can be lowered, then the disks would only add $520 (not too horrible, but usable capacity = 3TB, not 4; and the additional member may make for a slightly faster setup). RAID system totals out to $1109. Not horrible at all, but over the $1k limit mentioned.

But 3x of the 2TB units (results in 4TB usable capacity) = $870. RAID system totals out to $1459. The cost gets worse, and it may be a tad slower. But it's also easier to upgrade with one additional disk (better performance and capacity than the setup listed just above).

The 1.5TB units wouldn't be as cost effective as either solution above IMO.
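If it helps to see it laid out, a small sketch of the cost-per-usable-TB comparison (Python; prices are the ones quoted above and will obviously drift):

```python
# Cost per usable TB for the two RAID 5 options priced above
# (card + HDD adapter kit = $589, drive prices as quoted in this post).
card_and_kit = 589

options = {
    "4x 1TB RE3, RAID 5": {"disks": 4, "size_tb": 1, "price_each": 130},
    "3x 2TB RE4, RAID 5": {"disks": 3, "size_tb": 2, "price_each": 290},
}

for name, o in options.items():
    usable = (o["disks"] - 1) * o["size_tb"]            # RAID 5 loses one disk to parity
    total  = card_and_kit + o["disks"] * o["price_each"]
    print(f"{name}: {usable} TB usable, ${total} total, ~${total / usable:.0f} per usable TB")
```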

But none of this includes a backup system yet, which will push the price up further.

For cost reasons, it's best to go with a PM enclosure kit (includes the eSATA card), and 4x 1TB Greens. Run it as a JBOD to get the maximum capacity without increasing the risk of data loss.

Enclosure (Sans Digital TR4M) = $130
4x 1TB Green drives = $280

Backup System Total = $410

For the 3TB usable capacity configuration, Grand total = $1519. The 4TB set is $1869.

OEM disk = boot drive or could be added to the backup system (for the latter, it could shave off $70).
 

Honumaui

macrumors 6502a
Apr 18, 2008
did not know about the adapters :) good thing, since I was told they would work with no issues :) the little things to learn and put in the memory banks :)

also why I like simple things kept simple !!!!


I said:
raid 6 ANY 2 drives build it with a spare ANY 3 Drives
Nano said: It's better to think of a 4 disk minimum to build a RAID 6 (just as it's a 3 disk minimum for RAID 5).


this part :) heheheh I meant how many can die, not that you'd ever set up a raid 6 with that many as a minimum :)

trying to say you can have 2 die in raid 6 ! and if you have a spare, as in the 3rd, then if a HDD dies it will pick it up, put the spare into play, and you can keep going with no down time etc.. and technically 2 more could die and you would still be OK :)
just so I dont sound stupid :) I write poorly sometimes :)
 

nanofrog

macrumors G4
May 6, 2008
did not know about the adapters :) good thing I was told they would work no issues :) the little things to learn and put in memory banks :)

also why I like simple things kept simple !!!!
As I mentioned, this was learned the hard way, as I was used to them with SAS systems, where they work just fine (makes for a cleaner installation when you've a system that's a hybrid = internal + external members you don't want to rely on SAS expanders, such as the 1680 series with internal ports + 1 external MiniSAS port, and you can't get all the disks internally).


I said:
raid 6 ANY 2 drives build it with a spare ANY 3 Drives
Nanon said: It's better to think of a 4 disk minimum to build a RAID 6 (just as it's a 3 disk minimum for RAID 5).


this part :) heheheh I meant how many can die not ever to setup a raid 6 with that many minimum :)

trying to say you can have 2 in raid 6 die ! if you have a spare as in the 3rd and if a HDD dies it will pick it up put the spare into play and you can keep going with no down time etc.. and technically 2 more could die you would still be OK :)
just so I dont sound stupid :) I write poorly sometimes :)
It was a bit confusing, but it's not a big deal. It's sorted out. :D

Hot spares are nice to have, but I'm accustomed to DAS systems not having them in smaller configurations. So the DOA unit has to be pulled, replaced with a new unit, and enter the web interface (ARCHTTP) and set the disk as a single. Then it will go into auto rebuild.

Not that big a deal IMO when you've physical access to the system. A remote OTOH, is a completely different scenario. Hot spares are a necessity, as you don't have access, and there may not be anyone on duty there with access to the rack room (not a full data center), as can happen in satellite offices (i.e. it's a locked closet/room, and either no one has the key, or they're clueless as to what to do, and can't follow directions). Been there, done that (dealt with this on a number of occasions).
 

Honumaui

macrumors 6502a
Apr 18, 2008
yeah I am tough to understand sometimes :)

but I know I just like plain raid 6, no hot spares ? my thoughts on why: I am next to my machine, and I don't know if anyone can sit through the noise and not realize a HDD died :) and I would rather have that bit more speed and storage :)
 

MIKE1971

macrumors newbie
Original poster
Aug 8, 2008
Getting closer

So I really would first like to say thanks for all your help. I am starting to "break on through" with your info. So I will try and give you all some background of why I am doing this and then some ideas from what you have posted. I am trying to answer all the questions you ask, so if I am missing something, let me know.

The reason why I am wanting to do all this is I have an old software RAID on a PC that is almost full at 1.5TB. I have another 1TB total spread across three different disks. What I wanted was a machine that I can set up to use as a workstation/RAID, so I can put all my info (2.5TB now) in one place and have some security. I have none right now. I want to condense as much down into one or two machines as I can and also keep it exclusively Mac OS.

I would like at least 4TB on one machine, as I produce big Photoshop files when I work, and on top of that, I have a new camera with HD video and 15+ meg photos. I can see myself starting to fill my space up fast. I use Photoshop and Illustrator almost exclusively, and every now and then some video and audio apps.

I am fine with spending what is needed to get a proper system that will last me and function well. So my first two requirements are speed and space. I would like to have the opportunity to expand in the future too, as I don't see 4TB being a large amount of HD space. After some thought, I am going to get a new 2010 Mac Pro Quad too, as my current machine is getting older. So I guess this is really where the issue comes in from what you are pointing out. I would rather spend a bit more on the RAID now than have issues down the road because I bought outdated hardware.

I already purchased a 60GB SSD (G.SKILL Pro) and am going to use that as a boot disk on the 2010 Mac. I like the idea of a scratch disk, but replacing drives every two years doesn't seem like something I want to do.

It seems like the ARC-1880 is the future from what you both are saying, with the SAS, 6Gb/s transfer, and speed of the drives. The only thing is it doesn't seem like any of the 2TB/6Gb/s drives are compatible with the ARC-1880. I just wrote to Areca to see if that is true. Even if I go with a new system, would you suggest the 1222 still? The 1222x seems to not get good ratings. The ARC-1222 has the SAS, but then I have to use slower drives; is that not as good? I am fine with spending around the $2,000+ range.

Nanofrog's combo is this:

Arc-1222 $500
4 x 2TB RE4 drives $1160
Enclosure (Sans Digital TR4M) = $130

For backup, I am going with online.

Would it be foolish to just get 4 x 2tb/6gbs drives and run them as RAID1 for now and call it a day?

So what would you suggest doing with what I have said? I am fine with starting here but I also want the room to grow as Nano pointed out. I also am a bit intimidated by actually putting all this together myself being a noob to all this.
 

Honumaui

macrumors 6502a
Apr 18, 2008
my thoughts
the sans digital box he listed is a PM enclosure ? I am pretty sure he meant that for the other use, as a JBOD backup set ? he will come in and correct it I am sure, wrong connectors :)
the TR4X is the one with the 8088 connectors :) but unless you only do 4 outboard HDDs, get the 8 disk case ?


my current setup: I have a post production company that works with just other pro photographers, and I do photography ? I used to be a full time commercial shooter ? but moving, we lost our client base and decided to set up a business we can move anywhere we have a connection ! so I do a lot of PS work
I play with video ? but I am no video expert when it comes to FCP, just a fun thing for family movies :) this might help you know where I am coming from :)

so my setup is an areca 1222x
(the 1880 was not out then)
the reason I like the X is the external connectors are easy to work with, no adapters (found out those won't work with SATA anyway), so glad I went with my gut
this is my 3rd hardware raid setup. I tried Intel in my PC days (still used macs, but had another reason I had to have PCs) and HighPoint on the macs, and they were junk ! I am happy with the Areca setups !
OK, the case is this TR8X
comes with the cables, no hassles ?
I don't need more than 8 :) even though the tech in me wants more, but that's another thing :)

I would say get a setup like this and fill it up with RE3 or RE4 HDDs ?
depending on space and money ? the 2TB of course are going to be way more $, but you get way more storage ! to me the reason to go with this is twofold:
speed and security. so fill up the case with 8 HDDs and that will allow you a few years of use !

you might look at the RE3 1TB drives for $129 each
8 of these is just over $1000
the case is $400
for the card I would get the 1880X
purely because it's going to be no hassle to set up !
$780, battery module $100
so for about $2300 you have a super fast, super reliable solution, and with the 1880 series some future upgrade path; when faster disks come out in a few years, replace the disks !

the other 1880 series cards have more ports and some other things, like optional memory upgrades for cache etc..
but you will need a special cable to get that internal port to the outside !
it does allow you to run some extra disks inside ?
so they are an option ? me, I like clean and simple also ? so the two port X version is nice for many of us !
I say use the inside for your boot and scratch setup, use the areca for storage, and go outboard PM stuff for BU !


so for BU, remember what you put in the raid you have to BU !!! that is where money also comes in
I use
Product.aspx
for my time machine. it's filled with 2TB disks; I used Hitachi since I had 6 around already :)
this would be good for BU as it gives you one more layer of protection from a HDD failure ? they are fast enough for BU, but nowhere close to the areca setup !
figure $350 for the case and 5x 2TB HDDs for $120 each, about $1000 for BU or time machine ?
use your online for the 3rd layer. you want a local set !!!! you can get away without it :) but it's like saying I don't need a spare, I have AAA, then you find out you get a flat in an area with no cell service and nobody is coming by to save you ! :)
the local BU is important, along with the offsite 3rd layer, which your online is doing for you !!!
 

nanofrog

macrumors G4
May 6, 2008
I am fine with spending what is needed to get a proper system that will last me and function well. So my first two requirements are speed and space. I would like to have the opportunity to expand in the future too as I don't see 4tb being a large amount of HD space. After some thought, I am going to get a new 2010 Mac Pro Quad too as my current machine is getting older. So guess this is really where the issue comes in from what you are pointing out. I would rather spend a bit more on the RAID now than having issues down the road because I bought outdated hardware.
We'll get you sorted. ;)

The ARC-1880 series (as with all of Areca's SAS cards) runs 2TB drives (WD's RE4 and RE4-GP).

Given 6.0Gb/s is out, this is really the card to go for. As for speed, the way you can increase this and capacity more cost effectively, is to run a larger member count of smaller capacity drives (i.e. run more than 4x disks). You'll end up with a faster array of sufficient capacity, and even if you have to get a card with a few additional ports, it tends to end up costing less than using fewer, larger capacity drives as it currently stands ($290 for an RE4 vs. $130 for a 1TB RE3).

Ultimately, you run the numbers, but more ports gives you some options and room to grow. I actually prefer to get users into a card with 4 ports more than their initial requirements for this purpose, as it's cheaper to just add drives for expansion, than to have to replace the card. If you get the cycle right, you can flip-flop between adding disks, and replacing them with larger units (still see a capacity increase each time if needed).

Also keep in mind, that you should use an MTBR with mechanical disks as well (typically 3 years).

To me, a nice card to get would be the ARC-1880ix-12 (run 4x internal, and up to 8x external). External enclosure = Sans Digital TR8X.

I already purchased a 60gig SSD (G SKILL pro) and going to use that as a Boot Disc on the 2010 Mac. I like the idea of a scratch disc but replacing drives every two years seems not like something I want to do.
This is fine, but keep in mind that you should establish an MTBR for mechanical disks as well.

I've run the numbers out, and the 40GB disks (2x stripe set) on a 1 year MTBR work out cost wise vs. a 4x mechanical set on a 3 year MTBR (nowhere near as drastic as it was just a couple of months ago, when mechanical had a cost advantage, assuming sufficient physical locations were available).

And it's faster as well, so the SSD set actually has an advantage IMO. Ultimately however, RAM is more important than scratch (ideally, you want enough the application never needs to run scratch in the first place). CS5 at least will allow access to more memory than previous versions without creating RAM disks from physical memory (trick applied to earlier versions as I understand it).

It seems like the ARC-1880 is the future with what you both are saying with the SAS, 6gb transfer, and speed of the drives. The only thing is it doesn't seem like any of the 2tb/6gbs drives are compatible with Arc-1880. I just wrote to Areca to see if that is true. Even if you go with new system would you suggest the 1222 still? The 1222x seems to not get good ratings. The ARC-1222 has the SAS but then I have to use slower drives, is that not as good? I am fine with spending around the $2,000+ range.

Nanofrog's combo is this:

Arc-1222 $500
4 x 2TB RE4 drives $1160
Enclosure (Sans Digital TR4M) = $130
Ideal setup (Primary Data):

MaxUpgrades adapter kit $129 (allows you to run 4x internally if you wish; left this out of the price, as you can stuff all 8x disks in the external). But if you do this, you *could* go with the smaller TR4X (4 bay enclosure unit) for the externals, and get a second one when you need it. Personally, I'd go ahead and get the 8 bay unit, and get the internal adapter later on when you need it (easier to deal with in terms of clutter IMO).

Subtotal = $2419 (covers your primary data array)

Primary Backup:
  • Sans Digital TR8MP $350 (this is a Port Multiplier enclosure, and connects to the system via an eSATA card). Please note it comes with a card, but it won't work in the MP, as there's no drivers (and it's Highpoint, which you don't want anyway).
  • eSATA card = newertech PM version $80
  • 7x 1TB Green drives (JBOD configuration) $490

Subtotal = $920

Grand Total = $3339 (covers primary data and primary backup)

I know this seems expensive, but you're starting from scratch, and you really need the backup system as well. It's a system that will grow with you (highly important, and it's actually cheaper in the long run, as you don't have to get a new card and enclosures every time you need to make an upgrade). Because of this, you can transfer it from system to system (another reason why it's nice to have, and it pays for itself over time = better ROI).

Also, the above pricing doesn't take any shipping charges into consideration, or the all important UPS (you really need this). I'd go with a refurbished SUA1500. A battery for the RAID card is a good idea too, but at a bare minimum, get the UPS over the battery (it allows you to shut down, and it protects your system from undervolt conditions; surge suppressors cannot do this). You also need to have a good surge suppressor (hope you already have one) between the wall and the UPS (you're trying to get over 3k Joules of surge suppression, as the UPS only has 459 Joules worth of suppression in it, which isn't that much).

So don't forget to take the above (UPS, surge suppressor, shipping, and maybe the card's battery) into consideration as well.

I'll stop scaring the crap out of you now.... :eek: :D :p

But I can't stress enough how important these items are.

For backup, I am going with online.
I'd recommend keeping a primary backup system on site for speed. If you have a major problem, getting your data back from an online backup (off site location such as Mozy) takes too long to get up and running.

Not that it's a bad idea, as it can protect you from disasters such as fire, flood,... (i.e. Acts of God type of situations). But it's really a secondary backup, not primary IMO.

Would it be foolish to just get 4 x 2tb/6gbs drives and run them as RAID1 for now and call it a day?
Unfortunately, yes. 6.0Gb/s mechanical drives won't get you any advantages, as I presume you mean to connect them to the system's ICH.

You'd be better off getting 3.0Gb/s drives and running them in a 10 configuration. You will lose half the total capacity (4TB usable if you use 2TB models), and the performance would be around 220MB/s.

my thoughts
the sans box he listed is a PM enclosure ? I am pretty sure he meant that for the other user to use as a JBOD backup set ? he will come in and correct it I am sure wrong connectors :)
TR4X is the 8088 connectors :) but unless you only do 4 outboard HDD get the 8 disc case ?
Exactly. That particular enclosure was meant for a primary backup source, not the primary array.

Array = SAS enclosures
Backup = Port Multiplier enclosures (assuming an iSCSI box is out of the question, such as a ARC-5040-U3, as it's $1100 without disks). Fast, and the RAID is a hardware implementation (not as fast as what the primary array will be capable of though). ;)


you might look at the RE3 1TB drives for $129 each
8 of these just over $1000
the case is $400
the card I would get the 1880X
purely cause its going to be no hassles to setup !
$780 battery module $100
so about $2300 you have a super fast super reliable solution and with the 1880 series some future upgrade path when faster discs come out in a few years replace the discs !
Very similar to what I went with above, but I went for the 12 port version for expansion (easier and cheaper to add smaller disks than have to swap them out each time you need additional capacity).

But if he decides that's not needed, I'll second the ARC-1880X, 8x 1TB RE3's, and 8 bay enclosure we've both linked (makes sense, as they're good products, and have a good price/performance ratio).

but you will need a special cable to get that internal port to the outside !
Yes, but it's not that big a deal IMO. ;)

I say use the inside for your boot and scratch setup use the areca for storage and go outboard PM stuff for BU !
The setup I've listed above relies on this as well.

If/when the internal HDD bays are needed for expansion, the SSD's will need to be moved to the optical bay (4 * 2.5" bay cage), and attached to another card (i.e. cheap 6.0Gb/s HBA available at that time; they should have a few more available, and be less expensive, say the $130 range).

As per the backup system, it's cheaper to use more 1TB disks than the 2TB models (half the cost for the needed capacity, as 7x 1TB Greens = $490). One bay is left open as well (7 disks in an 8 bay box), which could be used to slap in a larger disk when needed (add it to the JBOD to expand the capacity). ;) Usable starting capacity = 7TB. Add in a 2 or 3TB disk, and you get 9 or 10TB just by adding a disk. After that, he'd need another box, or start swapping existing disks for larger models.
 

Honumaui

macrumors 6502a
Apr 18, 2008
one thought: you mention bad reviews on the 1222x, yet look at the 1222
http://www.newegg.com/Product/Product.aspx?Item=N82E16816151039&cm_re=1222-_-16-151-039-_-Product

a couple of idiots got the cards, did not know what's up, and gave it bad reviews ? trust me, I would not get it if it was bad !!!

good idea to go with the 12 port though ?

again, I have a few boxes, and the 8 port did exactly what I wanted. my known path for HDD upgrades is going to be the 2TB RE4 when the price drops, by about next year; then this card gets moved down, the machine that has raid 10 gets replaced with this areca, and I get a new 1880 :) and the cycle continues :)

I skip every other model mac, 1,1 to 3,1 to 5,1, and instead of the 2,1 model I dump that upgrade money into my storage to bring things back up on that box. so our 1,1 boxes are getting updated now, and the 3,1 moves down a position with the new storage I set up

so I do like clean and easy but I have to remember to step out of my box sometimes :)
 

deconstruct60

macrumors G5
Mar 10, 2009
The parity overhead is totally moot with the equipment being considered, as it's done by the RAID card (Fake RAID controller = software implementation, you'd be correct in terms of system overhead).

This is not really true. You are spinning it as though the computation overhead has been offloaded to the RAID card, but there is more to calculating parity than just the calculations. You first have to have the data to calculate from to compose the changes to the parity data; then you can write the data. So:

One RAID 5 write is 1 read data block(s*) + calculate new parity + write new block + write new parity block

One RAID 10 write is write data block + write data block


[* block(s) because it's at least one block, but often two (e.g., the old parity and the old data block). However, you can typically grab those in parallel. ]

Since disk operations are relatively slow and RAID 5 has an extra one, it should be pretty clear that generally it is slower than RAID 10.
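A rough sketch of that write-count argument (hypothetical block values; standard XOR parity update), just to make the "extra one" concrete:

```python
# Small-write penalty sketch: RAID 5 is a read-modify-write cycle, RAID 10 is two mirrored writes.
# RAID 5 parity is the XOR of the data blocks in a stripe, so updating one block means:
# read old data + old parity, recompute parity, then write new data + new parity.

def raid5_small_write(old_data: int, old_parity: int, new_data: int):
    reads = 2                                       # old data block + old parity block
    new_parity = old_parity ^ old_data ^ new_data   # cancel the old data out of parity, fold in the new
    writes = 2                                      # new data block + new parity block
    return new_parity, reads + writes               # 4 disk operations per logical write

def raid10_small_write(new_data: int):
    writes = 2                                      # the same block goes to both mirror halves
    return new_data, writes                         # 2 disk operations per logical write

_, ops5 = raid5_small_write(old_data=0b1010, old_parity=0b0110, new_data=0b1100)
_, ops10 = raid10_small_write(new_data=0b1100)
print(ops5, "vs", ops10)   # 4 vs 2 -- why small random writes favor RAID 10
```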

On reads they are closer to even (you can play some load balancing tricks by alternating reads between the two sides of a RAID 10, since you have two duplicate copies; you can shift workload to the "other side" if one starts to slow down).

Random access workloads with deep command queues are going to perform better on RAID 10 than on RAID 5/6. The benchmarks where you see RAID 5/6 even out with 10 are the ones where the stripe gets so wide that the extra read is more easily hidden and the workload is forced into a highly distributed state. For a 3-4 drive setup, that extra read will be there. That's why you see lots of the counter-arguments start off with a stack of 8+ drives and then trot out the sequential streaming benchmarks. (Sequential track layout reduces the read access latency embedded in the writes for RAID 5/6.)





Since the Fake controller doesn't have a processor, it has to use system resources to get the calculations done, and it's more complicated than 10 = more clock cycles will be needed (shows up in CPU % utilization).

This is too often a moot point, for two reasons. First, if the Mac Pro has unused cycles, you won't miss what was being flushed 'down the toilet' anyway. (The ZFS file system, among others, turns on volume/RAID management all the time, and few boxes are significantly impacted by the CPU utilization jump.) Second, there is no reason why the RAID card can't do the RAID 10 also. Looping back to CPU overhead is a misdirect away from the card by introducing an "apples-to-oranges" comparison.




A proper RAID card however, has it's own processor, cache, and NVRAM solution to the write hole (another problem with software implementations for parity based arrays = not suited).

It is a problem for RAID 5/6 cards too. The battery backup "solves" the write hole problem. Actually, it doesn't solve it (as some data centers close to the World Trade Center learned a couple of weeks after 9/11, when they couldn't get in to the data before the batteries died). The problem is triaged until you can get the drives back up and running so you can finish the transaction. Parity has a problem because the single logical write you do explodes into two writes, which both have to complete as a transaction. RAID 1 (or 10) doesn't really add non-atomic transaction elements to the setup: if the mirrored writes are submitted in parallel, you're fine, and if they're sequential and you know which goes first, you're no worse off than single-drive behavior in a power outage.
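A minimal sketch of that write hole (a simulated stripe and crash point, not any particular card's behavior): the data write and the parity write are separate operations, so losing power between them leaves parity that no longer matches the data.

```python
from functools import reduce

# Simulate a crash between the data write and the parity write on a RAID 5 stripe.
# After the "crash", parity no longer equals the XOR of the data blocks -- the
# inconsistency that a battery-backed cache is meant to paper over until the
# interrupted transaction can be replayed.

stripe = [0b1010, 0b0011, 0b0101]             # data blocks on three member disks
parity = reduce(lambda a, b: a ^ b, stripe)   # parity block on the fourth disk

stripe[0] = 0b1111                            # the new data block hits the disk...
# ...power fails here, before the matching parity update is written.

consistent = (parity == reduce(lambda a, b: a ^ b, stripe))
print(consistent)   # False -- the stripe is now silently inconsistent
```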


As for performance between RAID 5 and 10, definitely not true at all. Especially as you increase the member count.

As pointed out above, this is because the extra overhead is being covered up.
You are also alluding to changing the stripe width between the two.
Even out the stripe width and RAID 10 is faster. It costs more, but you get more.

I suspect the confusion is based on experience with software implementations for RAID 5 (disastrous at some point = when a problem occurs, not if), not a proper hardware controller.


No, this is also true of hardware RAID. There is often hardware "lock-in" with hardware RAID; with software RAID there isn't. It may have changed in some vendors' evolutionary product paths, but it is likely that only that specific card, with generally that specific firmware version, can read those disks. If that card dies and you need to replace it, the whole data set could be put in jeopardy if you can't get access to that specific card/firmware version anymore.

Software RAID typically doesn't have that problem. Even if the RAID encoding schemes change, you will often be able to load up the old drivers to read the data off the old drives and just use the new drivers to interact with the new ones
(or mount the drives on a VM running the old OS + softraid to get the data off).


The "recovery is simple" argument is all premised on that specific card being present.
 

Honumaui

macrumors 6502a
Apr 18, 2008
769
54
I would ask you then to post some RAID 10 times you have, using AJA since it's free and easy for everyone to get :) Post the settings you used,
post the HDDs you used and such also please :)
and also post the card and such that you are using to test the RAID 6.


I will post some times with my Areca 1222x
and the 4 disc RAID 10 on our other machine.
I dismantled my 8 disc RAID 10 setup, but my RAID 6 Areca beats it.
 

nanofrog

macrumors G4
May 6, 2008
11,719
3
This is not really true. You are spinning that the computation overhead has been offloaded to the RAID card.
The post was based on a level 10 on the ICH, and level 5 on the RAID card. Given the card does have the cache to contain the data while it performs the calculation and write cycles, it has an advantage in this case.

The card can of course run either (10 or 5). But parity does have the capacity advantage, which fits the OP's needs for both performance and budget.
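For illustration, the usable capacity per level with equal-sized disks (ignoring formatting/metadata overhead):

```python
# Usable capacity for n equal disks of size_tb each; real arrays lose a bit more to metadata.
def usable_tb(level: str, n: int, size_tb: float) -> float:
    if level == "RAID 5":
        return (n - 1) * size_tb      # one disk's worth of parity
    if level == "RAID 6":
        return (n - 2) * size_tb      # two disks' worth of parity
    if level == "RAID 10":
        return (n // 2) * size_tb     # half the disks are mirror copies
    raise ValueError(level)

for level in ("RAID 5", "RAID 6", "RAID 10"):
    print(level, usable_tb(level, n=8, size_tb=1.0), "TB usable from 8x 1TB")
# RAID 5: 7.0, RAID 6: 6.0, RAID 10: 4.0 -- the capacity edge parity has over mirroring
```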

BTW, the performance requirement is for large sequential files, not a database, where the random access performance of a 10 would be beneficial.

Based on the following...
I am in design and use a lot of large files...

I read this to mean applications such as CS5 (Photoshop,...), so my comments were aimed at the specific usage, not as a general rule. I didn't mention this (or go into the additional detail, as I was concerned about information overload, and ultimately, it doesn't apply to the usage described).

Two reasons. First, if the Mac Pro has unused cycles won't miss what was being flushed 'down the toilet' anyway. ( the ZFS file system, among others, turns on volume/raid management all the time on boxes and few are significantly impacted by the CPU utilization jump. ) Second, there is no reason why the RAID card can't do the RAID 10 also. Looping back to CPU overhead is a misdirect away from the card by introducing "apples-to-oranges" comparison.
We're not talking about ZFS implementations (RAID-Z1 or RAID-Z2), as those don't suffer the write hole issue at all, unlike parity based arrays. Nor is the system a dedicated storage server (with clock cycles meant to run the storage system).

Granted, I understand your point, but I'm also looking at this from a worst case POV (presume the applications will be used in a manner where those cycles would be better spent on the application rather than on storage overhead).

Since the MP can't do RAID 5 on its own, and OS X can't do ZFS implementations, a card is really the only alternative in the OP's situation. I also have the impression that the budget won't handle a level 10 configuration to meet the capacity needs (too many disks, enclosures and ports needed on the card). Nor is 10 truly necessary IMO, given it's large sequential files (and the user is at the system in the event there is a drive failure/other problem).


The battery back up "solves" write hole problem. Actually, it doesn't solve it ( as some data centers close to the World Trade Center learned a couple of weeks after 9/11 when couldn't get in to get to data before batteries died. ) . The problem is triaged until you can get the drives back up and running so you can finish the transaction. Parity has a problem because the single logical write you do explodes into two writes which both have to complete as a transaction. RAID 1 (or 10) doesn't really add non atomic transaction elements to the set up. If the mirrored writes are submitted in parallel then. If know which goes first if sequential then can no worse that single drive performance in power outage.
Ultimately, power is a problem for any of it. So can the application itself be (i.e. it may not have the ability to automatically resume processing once power has been restored; no log/reference point to proceed from).

In the case of the World Trade Center aftermath, that one was unforeseen, and couldn't realistically have been accounted for. That is, even if you can think of a scenario, at some point you have to quit/set limits (i.e. establish a fuel capacity for backup generators, battery capacity,...).

There is often hardware "lock-in" with hardware RAID.
If you mean attempting to run an array created under say Areca on a 3Ware, then Yes.

But I've seen it with software implementations too (i.e. drivers designed for a different controller chip; could be a new system board or card). The same can happen with OS implemented versions as well.
 

MIKE1971

macrumors newbie
Original poster
Aug 8, 2008
10
0
Great

So this is what I am seeing:

ARC-1880ix-12 $860
8x 1TB RE3 drives $1040
Enclosure = Sans Digital TR8X $400
2x Internal to external MiniSAS cables $119
APC SMART-UPS 1500 $250

I will also be getting some RAM and software, so I feel like I am spending my life (and time!) away. ;)

I bought the Mac and am pretty close on the RAID, it seems. I agree on the 1TB drives, as it is less than half the cost and probably easier in the long run.

Questions:
I guess I was planning on putting the SSD in the optical bay. So am I understanding that the RAID card totally replaces all my Mac drives (so I can't use the internal bays or optical)?

I am probably going to start with 5 1TB drives or so and then work up. That will help ease the $$ pain. I'm guessing you don't have to use all the exact same drives, or is that advised? Just thinking down the road.

OK, I really have to get back to work; I took a reading/researching break last night. Thanks, and I'm getting excited about this now.
 

Honumaui

macrumors 6502a
Apr 18, 2008
769
54
So this is what I am seeing:

ARC-1880ix-12 $860
8x 1TB RE3 drives $1040
Enclosure = Sans Digital TR8X $400
2x Internal to external MiniSAS cables $119
APC SMART-UPS 1500 $250

I will also be getting some RAM and software, so I feel like I am spending my life (and time!) away. ;)

I bought the Mac and am pretty close on the RAID, it seems. I agree on the 1TB drives, as it is less than half the cost and probably easier in the long run.

Questions:
I guess I was planning on putting the SSD in the optical bay. So am I understanding that the RAID card totally replaces all my Mac drives (so I can't use the internal bays or optical)?

I am probably going to start with 5 1TB drives or so and then work up. That will help ease the $$ pain. I'm guessing you don't have to use all the exact same drives, or is that advised? Just thinking down the road.

OK, I really have to get back to work; I took a reading/researching break last night. Thanks, and I'm getting excited about this now.

The first bold: you have one SFF-8088 port on that card; since your box will have two of these and comes with cables, you will need only one internal-to-external cable.

I have that case! It's nice for the money!

The next: you can still use your internals, just remember there is a limit to how much bandwidth they have. Use them for something else; a BU and/or clone of your OS is a good one :)

My opinion is to use the same HDD, but the RE3 will be out for a while, so get those to expand.
Some say it won't matter, but then again it's that old saying of everyone has one! Opinion, that is :) I prefer to get the most out of things by keeping the HDDs the same!
 

nanofrog

macrumors G4
May 6, 2008
11,719
3
So this is what I am seeing:

ARC-1880ix-12 $860
8x 1TB RE3 drives $1040
Enclosure = Sans Digital TR8X $400
2x Internal to external MiniSAS cables $119
APC SMART-UPS 1500 $250

I will also be getting some RAM and software, so I feel like I am spending my life (and time!) away. ;)

I bought the Mac and am pretty close on the RAID, it seems. I agree on the 1TB drives, as it is less than half the cost and probably easier in the long run.

Questions:
I guess I was planning on putting the SSD in the optical bay. So am I understanding that the RAID card totally replaces all my Mac drives (so I can't use the internal bays or optical)?
That will depend on whether or not you use the MaxUpgrade adapter linked previously, which allows you to use the HDD bays with the card. By skipping this, you can still use the ICH controller for more than the optical bays. This can allow you more drives total, as you can use another internal-to-external cable to a 4 bay SAS enclosure to get drives on the card (SSD or HDD).

If you skip the adapter, the HDD bays will use the ICH, which has the ~660MB/s limit. This can be a problem if you're using SSDs (2x 3.0Gb/s SSDs will be fine, but 3x could throttle, depending on what the exact drives are capable of independently). But if this isn't an issue/is acceptable when planned around, it is a good idea IMO (allows more drives total to be attached to the system).

Where the ICH comes into consideration is if the disks can do ~225MB/s+ each. Given the ICH limit, you will only be able to run 3x at best, which would have an aggregate throughput of 675MB/s+. Since this is running on a ~660MB/s pipe, you'd throttle. When the overage is small (15MB/s in this example), you probably won't notice, but with faster units I'm not sure, as I've not had the opportunity to test out SSDs (ICH or RAID card).
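A rough sketch of that ICH math (the ~660MB/s ceiling and ~225MB/s per drive are the figures assumed above):

```python
# How far over (or under) the ICH's ~660MB/s shared ceiling does a set of drives land?
ICH_LIMIT_MBPS = 660

def ich_overage(per_drive_mbps: int, count: int) -> int:
    return per_drive_mbps * count - ICH_LIMIT_MBPS   # positive = throttled

print(ich_overage(225, 2))   # -210: 450MB/s fits comfortably under the ceiling
print(ich_overage(225, 3))   # 15: 675MB/s just exceeds ~660MB/s, so it would throttle
```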

If you choose to use the adapter, and place an SSD in an HDD bay, the drive will run off of the card.

Please understand, I prefer to test, as that gives definitive data/information and real-world results (things can happen that prevent what might be expected from actually occurring). Since I've not had the ability to test SSDs myself, I'm unsure of real-world results. It's just theory to me ATM, and I've seen situations that don't follow theory/what you'd expect.

I am probably going to start with 5 1TB drives or so and then work up. That will help ease the $$ pain. I'm guessing you don't have to use all the exact same drives, or is that advised? Just thinking down the road.
You don't have to, but it's a good idea as each disk has the same performance (get a spare as well, so if something happens, you can repair the problem immediately). The reason is, you could experience a second disk failure while waiting on a disk to show up (shipping time).

It's cheap insurance IMO.

I agree with all of it. I also prefer to keep the disks the same whenever possible (they tend to get cheaper too, if you're upgrading over time, so long as they're still in production, or only just ceased). If you're talking a significant period of time, it may be a good idea to replace the existing set and add additional units to increase performance and/or capacity.
 