The I/O processing for a 4 drive array (the max) is minimal for RAID 0. It's a little higher for RAID 10, but probably not enough to require the Apple RAID card.

The card can give a slight performance boost though, via its built-in cache and its own I/O processor (IOP). Again, perhaps not enough to justify it, unless you want to use a RAID type that OS X does not support, such as 5 or 6.

I'd recommend basing your decision on primary system usage. If you're using it for graphics/video editing, and are constantly loading all of the cores, it might be worth it just to eliminate the extra load placed on them (the RAID IOP load). If that's not the case however, I wouldn't bother.

And if you could really use a RAID card, you might want to consider 3rd party alternatives, despite the difficulties in the '09 MP. They can allow you more drives, additional array types, and faster throughput. Other features are also handy, as they make recovery easier.

Sorry if it seems convoluted, but I've no idea of how you'll use your system.

Thanks.

I am using my system mainly for processing RAW photos, and I also produce music (although that's more of a hobby than a money maker). But I am a speed freak and I want the most performance I can get from my system. However, if the RAID card only gives me 10MB/s more over software RAID, I will not bother. It has to be significant, not just measurable.
 
RAID cards can provide far more than 10MB/s. Some can go over 1600MB/s! :eek:

Just not Apple's offering (stupid 4 drive limit//grr...). :p
The best you could do with it under workstation use is to go with Fujitsu MBA3300RC's (300GB SAS drives; they also offer a 147GB version). For database use, the Seagate Cheetah 15K.6 series.

It would require a card capable of running more drives though, and in an '09, that means external. If you're serious about this, check out Areca. They offer the highest performance possible, and are less expensive than their closest competitor, Atto Technologies. (Their SAS controllers run 1200MHz IOPs.) They do extremely well with SATA controllers as well.
 

Well I thought so, but the question was more whether the RAID card will introduce a notable speed difference in my current setup. I am not going to have more than 4 drives anyway, and right now all I do is stripe two disks via RAID 0. I might expand this to a 4 disk stripe set, or to a 4 disk RAID 10. That's it. Now the question is: will I gain more performance in that setup if I use the RAID card? I have doubts that it will, but I need some clarification.

I figure the RAID card will offload the CPU when doing things like parity calculations in a RAID 5 set, but on RAID 0? I could argue that the RAID card has a large cache, but how will this influence the performance in real world, day to day usage of the system compared to software RAID?

A question about the third party solutions you guys mention: what if I decide at a later date that I want to buy a third party RAID card? Can I migrate my existing disks to that without losing data, or would this mean a reinstall of my system? Also, do third party solutions support hassle free booting? I definitely want to boot my OS from RAID.
 
I'd skip Apple's card. I don't think you'd get enough of a performance boost to warrant it at all. RAID 0 is simple, and won't place much of a load on the CPU(s) and memory resources. Its cache will help, but I can't see it being worth the asking price. (Not justifiably at any rate). :p You'd likely be able to get just as much of a boost by carefully selecting your drives.

I only mentioned 3rd party, as that would be the way to go for better performance than the Apple RAID Pro card can provide. Additional drives (more than 4 ;)) would be needed.

Using a 3rd party solution would potentially allow you to use the existing drives, but not migrate them. The array is created by software, and won't be recognized by a card. You'd have to make a backup of the data first, as creating the array under a hardware solution would erase all data. I say potentially, as you have to pay attention to the Hardware/HDD Compatibility List generated by the manufacturer. Not every drive will work, due to firmware issues between the card and drive. It's also HIGHLY recommended to use enterprise drives. They're built for RAID, and have a higher chance of being usable by whatever card you select.

As for the ability to boot, some do, some don't. You have to wade through the specs carefully. Other items may be of interest as well, such as the ability to operate in a multi OS environment if you want to do that. It's detailed, so be prepared to spend as much as a couple of months doing research/planning before you buy anything. Leaping too quickly can cause all kinds of headaches, and is usually followed by RMAs.
 
So, after all of your help I have started making progress and I wanted to share it with you and also get just a bit more advice. I ended up talking to a guy in IT here and they had a MAXconnect tray bay so it turns out I didn't have to fabricate my own thing after all. Good timing on my part!!! Saved effort.

So I can now confirm a few things for people out there.

1.) If you are looking at the MaxConnect bay mods for adding hard drives, I say that they are well worth it. It is a custom fit, CNC'd, 2 part aluminum assembly that is very rugged, and you can mount two 3.5 in hard drives below the optical (see photo), or if you take out the optical you can get 4. I did a little experimenting of my own, and by drilling two more sets of holes on the T-mount for the optical drive it is actually possible to get FOUR drives into the lower optical bay (two 3.5 in drives plus two 2.5 in drives hung upside down on the lower half of the mount), in addition to the optical. So, if you have access to a drill press you are set. I don't have any 2.5 in drives I wanted to mount in there, but I did drill the extra holes and can confirm it is an option, just so you know. See pics; I threw a 2.5 in drive in there so you could see it.

The moral is you can get an optical, six 3.5 inch drives, AND two more 2.5 in drives if you drill it.

2.) I bought the RocketRAID 4320 and there are NO issues with OS X 10.5.6 and the EFI update v1.3 as previously reported. The update was successful, you CAN boot from the card (I have done it now), and the web management software is actually quite easy and there were no issues reinitializing disks in legacy format back to incorporation in the RAID array. So far I am very impressed with the RocketRAID.

Users should however be aware of an undocumented restriction on password length with the web-based configuration tool. It is limited to 8 characters, so I was CLUELESS as to why I couldn't get access to the card management. Highpoint's tech support sorted this out fast after I called them, though it took a while since it is an oddity. Email support was slow, but a phone call got me right in.

So, just a review on the Specs and what decisions I made.

Mac Pro, 2 × 2.8GHz, 6GB memory
1 × 300GB VelociRaptor OS/system disk (I decided this route would be "safer" in the event of an update making the card inaccessible. Plus this thing screams, and now it is just the OS disk).
5 × 1TB Seagate Barracudas (all ES.2 but 2, which will be replaced soon), connected to a Highpoint RocketRAID 4320 card via a MaxConnect SAS/SATA Link (to reach the card from the iPass cable).

Critical files (Research, Photos, OS, etc) are backed up by 2 terabytes in an extra USB enclosure. NAS will come in the future per your suggestions, but this has me covered for now.

So, where I am now is that I am doing 2 tests (following nanofrog's suggestion): I am setting up a RAID 5 + HS and going to test performance, and then I am going to do the same thing with RAID 6. I realize that I may take a small performance hit on the RAID 6, but tolerating a 2 disk failure seems preferable to the hot spare. We talked about this before, but any additional thoughts are welcome on which to do.

But what I really wanted suggestions on: regardless of RAID 5+HS or RAID 6, it will be 3TB in the array. The default Block Size is 64K and the default Sector Size is 512B. Do you have any suggestions about how these values should change? Or is it fine to accept the defaults?

Lastly, what is better - Write Back or Write Through? I have my whole system on an APC 1300 battery backup, so I have about 15 minutes of run time; is Write Through really needed?
Thanks again for all the help. I wanted to include the pics in the event that anyone would benefit. Cheers! S-
 

Attachments: IMG_4242.jpg, IMG_4243.jpg, IMG_4244.jpg, IMG_4245.jpg
So, after all of your help I have started making progress and I wanted to share it with you and also get just a bit more advice. I ended up talking to a guy in IT here and they had a MAXconnect tray bay so it turns out I didn't have to fabricate my own thing after all. Good timing on my part!!! Saved effort.
:cool: Looks like you got really lucky. :D

Users should however be aware of an undocumented restriction on password length with the web-based configuration tool. It is limited to 8 characters, so I was CLUELESS as to why I couldn't get access to the card management. Highpoint's tech support sorted this out fast after I called them, though it took a while since it is an oddity. Email support was slow, but a phone call got me right in.
Not that big a deal. ;) :p

So, where I am now is that I am doing 2 tests (following nanofrog's suggestion): I am setting up a RAID 5 + HS and going to test performance, and then I am going to do the same thing with RAID 6. I realize that I may take a small performance hit on the RAID 6, but tolerating a 2 disk failure seems preferable to the hot spare. We talked about this before, but any additional thoughts are welcome on which to do.
Given your interest in safety, I still say go with 6.

But what I really wanted suggestions on: regardless of RAID 5+HS or RAID 6, it will be 3TB in the array. The default Block Size is 64K and the default Sector Size is 512B. Do you have any suggestions about how these values should change? Or is it fine to accept the defaults?
Generally speaking, the smaller the block, the better the capacity usage is. But smaller blocks also have additional overhead, as there are more of them, so there's a performance hit.

Conversely, large blocks can allow for greater speed, but waste massive space if you primarily use small files (smaller than the block size). Just one bit of data, and the entire block is gone.

The default values are an approximate balance for general purpose use, but I don't know what file sizes you're using. You can always test different block sizes, and see what fits you best. ;) Perhaps not a bad idea anyway, as you can dial in the best fit for your needs if uncertain, which you are. :p
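
If you want a rough feel for the tradeoff before committing to a day-long initialization, here's a minimal Python sketch (my own, not anything from Highpoint's tools) that estimates the slack space and the number of block reads needed to pull your files back in, for a few candidate block sizes. The folder path is hypothetical; point it at a directory that resembles your real workload (RAW photos, audio projects, etc.).

import math, os

def block_stats(file_sizes, block_size):
    # Slack: the unused tail of the last block of each file (the "entire block is gone" effect).
    slack = sum((-size) % block_size for size in file_sizes)
    # Blocks: how many block-sized reads it takes to retrieve every file.
    blocks = sum(math.ceil(size / block_size) for size in file_sizes)
    return slack, blocks

# Hypothetical sample folder; swap in a directory that looks like your day to day data.
sizes = [os.path.getsize(os.path.join(root, name))
         for root, _, names in os.walk("/Volumes/Photos") for name in names]

for bs in (16 * 1024, 64 * 1024, 128 * 1024):
    slack, blocks = block_stats(sizes, bs)
    print("%dK blocks: %.1f MB slack, %d reads" % (bs // 1024, slack / 2**20, blocks))

Less slack and fewer reads pull in opposite directions, which is exactly the balance described above.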

It's even possible to use separate arrays to handle certain file sizes. For example, video/graphics files could be stored on an array with say a 128K or larger block, while really small stuff might be on one set at 16K blocks. :eek:
One trick for capacity management in a mixed system. Not my favorite in a workstation.

I doubt you wish to get this elaborate, as you don't have a lot of disks, and splitting them may not be what's best for you.

Lastly, what is better - Write Back or Write Through? I have my whole system on an APC 1300 battery backup, so I have about 15 minutes of run time; is Write Through really needed?
Thanks again for all the help. I wanted to include the pics in the event that anyone would benefit. Cheers! S-
For safety, Write Through; for performance, Write Back.

Here's a description from Wiki. ;) (Look in the Operations heading).

The downside is that if you use Write Back and the data is dirty, you get a performance hit while the system corrects it. Write Through eliminates this, but at a cost to overall performance. The UPS is a good thing to have, but doesn't solve this issue.

In your case, you can try both (test), but you'd likely be fine with Write Back. If you prefer paranoid, go with Write Through. :p At least testing, you can see the performance differences. ;) :D
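
To make the difference concrete, here's a toy Python model of the two policies (just the concept, not how the RocketRAID firmware actually implements its cache): Write Through only acknowledges a write once it's on disk, while Write Back acknowledges as soon as the data lands in the controller's cache and flushes the dirty blocks later.

import time

DISK_COMMIT = 0.008   # pretend a commit to the platters takes ~8 ms
dirty_cache = []

def write_through(block):
    # Acknowledge only after the data is on disk: slower, but nothing is lost on a power cut.
    time.sleep(DISK_COMMIT)
    return "ack (on disk)"

def write_back(block):
    # Acknowledge as soon as the block sits in cache: fast, but this dirty data
    # exists only in the card's RAM until the background flush below runs.
    dirty_cache.append(block)
    return "ack (dirty, cached)"

def flush():
    # The controller drains dirty blocks to disk in the background.
    time.sleep(DISK_COMMIT * len(dirty_cache))
    dirty_cache.clear()

print(write_back("block A"))     # returns immediately
print(write_through("block B"))  # returns only after the simulated disk commit
flush()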
 
Nanofrog, thanks! I don't think I'm going to worry too much about playing around with different block sizes at this point - esp. given that it is going to take 4-5 hours to fully initialize this array ONCE. :p

But I may see how things vary with the Write Through/Write Back change; if it makes a huge difference then I'll switch. But yeah, this thread has been exceptionally informative for me, and thanks for all of your help!
 
:cool:

BTW, Clean up that Rat's Nest! :eek: :p
 
I've been looking at getting a pair of Vertex SSD's and running them off an Areca 1210 I have repurposed from my PC rig.

One of the potential problems is that the Areca 1210 apparently tops out at about 400MB/s throughput... while the drives in RAID0 can achieve about 500MB/s. While I should be happy with 400MB/s, I really don't want a card that is throttling my performance.

A couple of questions:

1. Can software RAID and the chipset I/O keep up with these kind of SSD's in RAID0 or is it likely bottlenecking these super fast drives as well?
2. Are there any specs on the Apple RAID card's I/O limit in RAID0? (EDIT... I found the specs... 533MB/s!) I wonder if I should just sell the Areca and go with the Apple RAID card to ensure the card is not bottlenecking things.
3. If I go with the Apple RAID card, can drives connected as JBOD be booted via Bootcamp? Or will I need to connect a Windows drive to the unused optical drive SATA connector?

Thanks for any thoughts.
 
The ARC-1210 won't work under OS X. :eek: It only has Windows/Linux support. :( :( So you'd need to use a different card.

However, if it's only the two drives, you'd probably be better served going with software RAID under OS X, assuming you only wish to run it under OS X. If you want it to run with another OS as well, you'd have to go with a hardware solution. Fortunately, as it's only two drives, you could do this easier than if you needed more drives. Install in the empty optical bay, and use a MiniSAS Fan Out Cable to attach them to a card.
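
For reference, a software RAID 0 set of the two SSDs can be created from Disk Utility, or from the command line; here's a minimal Python sketch wrapping the `diskutil appleRAID` commands. The device identifiers (disk2/disk3) and set name are hypothetical, and the exact appleRAID syntax is worth confirming with `diskutil appleRAID` on your own machine before running anything, since creating the set erases the member disks.

import subprocess

# WARNING: creating the set destroys all data on the member disks.
# Device identifiers are hypothetical; confirm them first with `diskutil list`.
members = ["disk2", "disk3"]

# Assumed syntax: diskutil appleRAID create stripe <setName> <format> <member disks...>
subprocess.run(["diskutil", "appleRAID", "create", "stripe", "VertexStripe", "JHFS+", *members],
               check=True)

# Show the resulting RAID set.
subprocess.run(["diskutil", "appleRAID", "list"], check=True)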

The logic board will certainly be able to keep up with the drives. If they were 6.0Gb/s, and could actually exceed 3.0Gb/s, that would be a different story. But none are capable of this, not even SSD's ATM. ;)

If you went with a hardware solution, you'd still be OK, as the IOP isn't as much of a bottleneck as you may think, particularly in RAID 0 or 1. Minimal overhead is involved. To give you an example, the ARC-1231ML can actually exceed 1600MB/s, yet the IOP is only an 800MHz part. Some of the SAS versions (ARC-1680 series) use a 1200MHz IOP, yet can't run SATA drives quite as quickly as the ARC-1231ML. :eek: ;)

As for your 3rd question, yes, you can attach them as JBOD. But if you plan to use a separate drive for Windows, Boot Camp isn't even needed. :eek: :D You just install to the drive you wish. (Use the Option key). Super easy, as BC is only a utility that creates a partition on a shared drive for multiple OS's.
 

Thanks man... as usual, a very thorough answer... you answered questions I didn't even know I had!!! :)

Are you positive the 1210 won't work under OS X? I did a search a while back and was confident that it was supported. I guess I need to dig deeper, or if you have a link, please share. (Edit: I found this, which says it is supported... http://faq.areca.com.tw/index.php?view=items&cid=9:Mac&id=372:Q10120806- )

FYI, I'm now thinking of buying a single 30GB SSD just for Windows and using the dual 120GB drives for OSX... it's how I RAID0 these two drives that's up in the air now.
 
The link from Areca is definitely weird. ;) :p I was going by the product page for the ARC-1210.
The link you posted is certainly interesting, as it would provide more choices for Mac users. :D And some of them are less expensive (fewer ports primarily).

You might want to go with a larger capacity drive for Windows, unless you plan to place some of the data on another drive. Windows, particularly the 64 bit versions, is capacity hungry. Last I checked, even a DVD Burn application, such as Nero, is over a gig. :eek: :p
 

Thanks. I guess I'll try the Areca when I get my MP. I guess I'm not sure it's going to help vs. hinder. On one hand, the added cache will help as will the dedicated processor to reduce CPU load, but as you say, the load for managing RAID0 is minimal. On the other hand, the throughput of the Areca 1210 is apparently only 400MB/s which shockingly is not enough to enjoy the peak capabilities of the Vertex drives in RAID0. I may be better off, and have a simpler install by just running the Vertex in software RAID0 and selling my Areca card to offset the cost of the SSD's.

Thoughts? Opinions?

As for Windows, I plan only a stripped down OS install and one or two games at a time. Windows will strictly be for gaming.
 
As you already have the Areca, give it a try first, and test the performance. ;) Can't hurt, and if it can't handle the SSD's, sell it. I'm sure someone would be happy to buy it from you. :)

At least if it works under OS X, you can let the forum know, as some users could benefit from knowing it actually will work. :D Lower cost alternatives (not junk), are always of interest around here. ;)

Keep us posted on your progress please. :)
 
So the RAID array is in, initialized (after 24 hours!!!), and running smoothly as near as I can tell. I just ran the AJA System Test on it, and I'm getting 3x what I was getting before in this 5 disk RAID 6 config (1TB drives, just to remind you). But here are the results. It seems like it hits a wall around 320 MB/s, and I know that this isn't the controller card, because it can do much better than that. Does this seem about right to those of you that are more familiar with it? I'm happy, but maybe ignorance is bliss :)

It seems like it should be in about the right ballpark, however, because the test done by Barefeats here was done with Seagate 15K 450GB SAS Cheetah drives, and for a RAID 6 config with 5 disks they were getting sustained reads/writes of about 460 MB/s. Seem about right? I assume the difference comes down to the SAS drives, the difference in RPMs, etc. Again, thanks for all the input!

If you have any advice on how to be sure that this thing is stable let me know. :)

BTW: so in RAID 6 with 5 drives I have 3TB effective. Given that the performance is about 300 MB/s, I am assuming that adding a 6th drive would not make the throughput scale linearly (e.g. gain another 100 MB/s for 4TB). Though... it sort of looks like it does on the Barefeats test of the controller I have. I am going to stick with RAID 6 I believe, but it is shocking to see the difference between RAID 5 and 6.
 

Attachments: RAID6_sfs.jpg
You're running the card with "Cache Disabled". :eek: No wonder. ;) :p
 

LOL! I hang my head in embarrassment!!!!! :eek:

Ok so THIS is a little more like it!!!!!! Write speeds seem pretty constant around 250 MB/s and read speeds are up over 1500 MB/s!!!! Hell yes!!!! I am curious what happens to the read speeds between file sizes of 4000-8000 MB that makes the read performance drop - just from a mechanics perspective.

Anyway, I did the AJA test on my single velociraptor as well and I get almost identical Read performance but about 150 MB/s in write speed. Interesting. I'm sure that it would be incredible to see a bunch of velociraptors in RAID given that!!!

Anyway, that seems much, much better!!!! And again... oops!!! Given the similarity in READ speeds between the array and my VelociRaptor disk, I think I may stay with this... I guess the other question, though, is how much would adding a 6th disk and booting off the array hurt that? I carbon-copy-cloned my OS disk to the array and booted off of it, and it took 26 seconds from boot select to login screen. The Raptor took 10 seconds longer.
 

Attachments: Raid_1a.jpg, Raid_2a.jpg, Raid_3a.jpg
The performance is skewed in a sense, as the entire file fits into the cache. Up the file size to exceed the cache, and you'll get the real performance when you're feeding the card a steady load. (Make it earn its power). :p

IIRC, AJA should allow you to set the file size to 4GB. Try testing the array with varied file sizes (scale up), to get a feel for what happens. Now is the time to learn.
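
If you want to supplement AJA with a quick-and-dirty sequential test at sizes well past the cache, something like this rough Python sketch works (my own, not an AJA feature; the mount point is hypothetical, and the numbers won't exactly match AJA's since this measures sequential writes only).

import os, time

TEST_FILE = "/Volumes/RAID6/scratch.bin"   # hypothetical mount point of the array
CHUNK = 4 * 2**20                          # write in 4 MB chunks

for total_mb in (256, 512, 1024, 2048, 4096, 8192):   # scale well past the card's cache
    data = os.urandom(CHUNK)
    start = time.time()
    with open(TEST_FILE, "wb") as f:
        for _ in range(total_mb * 2**20 // CHUNK):
            f.write(data)
        f.flush()
        os.fsync(f.fileno())               # make sure the OS has handed everything to the card
    elapsed = time.time() - start
    print("%5d MB sequential write: %.0f MB/s" % (total_mb, total_mb / elapsed))
    os.remove(TEST_FILE)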

You should also try to simulate various failures to determine how the card will react. Keep notes, as this testing will give you the experience you need to help recover the array if something ever does happen. For example, pull the power cord to simulate a power outage & test the UPS. Next, turn off the power switch (no shutdown), wait say 30s, and turn back on. Remove a drive,... Repeat with all array types the card can support. (Test performance while you're at it). Hopefully you have the idea. ;)

Try to get in the habit of performing such tests each time you ever make an array on a card you're not familiar with. It's due to the fact not all cards will react precisely the same way. Odd ball behavior is possible, and the idea is to know about it before anything critical happens.

It can take some time, but it's in your best interest to know how the card will react, and what you need to do in order to recover. ;) As quickly as possible. :D

The performance differences for large files have to do with both the size of the file and the block size. The smaller the block setting, the more blocks have to be read in order to obtain the entire file. This can be compensated for to some degree by setting the block size to a larger value. But if you have a lot of small files, it will waste a lot of space.

And to change the block, you end up recreating the array. So another round of sit and wait, while the array initializes. Basically a day due to the drives you're using. ;)

If you add the additional disk, you will see a performance boost.
Here's a rule of thumb formula:
Avg. Throughput (read) = n*(individual drive throughput)*0.85
n = number of drives

So if you have an average throughput of 100MB/s for a single drive, it would look like this.

Tp (read) = 6*100*.85 works out to 510MB/s. :)

(Worst case, when the array is full, change the decimal to 0.75). Remember, these are approximations, not exact values.
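
The rule of thumb above drops straight into a couple of lines of Python if you want to play with drive counts (the 0.85 and 0.75 factors are the approximations given above, nothing more exact than that):

def est_read_throughput(drives, single_drive_mb_s, factor=0.85):
    # Rule-of-thumb streaming read estimate; use factor=0.75 when the array is nearly full.
    return drives * single_drive_mb_s * factor

print(est_read_throughput(6, 100))        # ~510 MB/s, matching the worked example above
print(est_read_throughput(6, 100, 0.75))  # ~450 MB/s worst case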
 
Nanofrog, thank you again for the detailed post, as always. As fate would have it, I was reading your message at the exact moment that my RAID card's alarm went off on its own. It turns out that the drive in Bay 1 of the upper 4 standard bays just stopped reporting its SMART information, and so it dropped out of the array. In your experience, how common is this, and what is the best way to mitigate that type of behavior? I assume that it has to do with the communication link between the card and the iPass connector in this case? Such that if there is a lag in reporting, the card kicks the drive out of the array, thinking it has failed?
 
Drop outs tend to be mainly firmware issues. I checked Highpoint's site for the Compatibility List, and it indicated that version AD14 is required for the drives to operate correctly. (Always check the list before using drives, as it can save headaches like this). ;)

So check the drive's firmware revision. The card too, and see if it's the latest version. If it's not the correct one (drive), contact Highpoint, as they may actually have it, or can give you a direct link. :) It could be far easier and faster than having to contact Seagate directly. They've been rather bad as of late. :(

You could try a cable swap, but I don't think that's going to be the issue. It's internal, and isn't too long. If it were external, and total length of all cabling over 2.0m, that would be another story.
 

Hey thanks for the link and I checked the compatibility and my drives do have the correct supported firmware for their respective drive models. I also did a fresh install of the firmware with the tech support from Highpoint the other day so that should be good.

I suppose that the MaxConnect connector could be an issue, but that wouldn't make sense given that the other 3 drive bays connected to the iPass cable were fine.

When I pulled out the drives, I noticed that there was a stupid jumper set to limit operational speed to 1.5Gb/s on two of my ES.2 drives. The others didn't have this on, and I don't know WHY the hell you would want to limit your operational speed, but the drive that went down was one of the two with this set. So I'm wondering if this could have somehow been an issue?

I shutdown, pulled the drives, checked all the connections, plugged them back in, and the RAID card started rebuilding that disk. I guess the thing to do next would be to verify the array when this finishes? Ah.. good times. I just hope that this doesn't become a common problem but hey! Here is that learning experience right? ;)
 
Glad to hear both the card and all drives are the correct Firmware version. :D

As for the jumpers, YES. :) You've very likely solved it, but do keep an eye on it. Such behavior is what you're attempting to discover. :eek: :D

It's easier to deal with this now, before trusting important data to it.

Good luck with the testing. :)
 

Yeah, again this thread has been extremely informative and I hope other n00bs like myself find all this info useful in the future too! After getting everything up and running following last night's issue I have been testing the heck out of this thing.

After removing the jumpers I haven't found any issues yet. I've been pulling out random drives, 2 at a time, rebuilding, pulling out 3, watching the array disappear and seeing what the software does as you suggested. So far it seems very easy to deal with provided you know which drives fail. MY ADVICE to anyone doing this for the first time, do what I did and label the drive bays with the serial numbers of your hard drives (see previous pics). It has saved MUCH confusion already!!!!!

Anyway, The system still responds quite well with only 3 of the 5 drives and rebuild times of the other 2 seem like they are taking 3-4 hours in the event of dual drive failure. Not sure what was going on with those jumpers being on there in the first place but I'm glad you think that may have been THE issue.

Oh and... I see how this is MUCH easier than when critical data is on the drives! And more importantly I definitely see the need for a functioning backup system!!!!! ;) :D

On a totally unrelated note, you mentioned something about the Areca cards having partition table backup. Why is this a bonus? Oh, and after all of this setup that I am doing, what is going to happen to my data in the event that my card fails? Does that mean I will have to reinitialize the drives and lose the data, or if I put in the same controller card and connect things the same way, will it recognize the array? I am assuming that card death = array death, since the setup of the array is stored on the controller?
 