Actually, when reading up it looks like the 4310 and the ARC-1212 are only 800MHz, but the 4321 is 1.2GHz.
It's for financial reasons, and it's only with SSDs that more than 800MHz would be needed on a 4 port card (i.e. running parity based arrays on SSDs would require more processor power, I think). I've not tested it, but my gut tells me that would be the case, and it's a special case anyway. I wouldn't even try it with SSDs anyway.

interesting. i wonder what the different frequencies of the cards indicate. nano? ;) :p
Both are IOP348 series parts; only the frequency differs, and the higher clock is needed as the port count goes up. As it happens, the 1200MHz part is the fastest Intel currently offers.

All of Intel's XScale IOP parts are still 3.0Gb/s based BTW, and they've not released a 6.0Gb/s compliant line yet.
 
ok i get that. but why is it referred to in MHz? why not by how many b/B it can handle every second, or how many operations or what not? that would make it easier to refer back to when you are estimating the speed of all the HDDs combined.
 
Too many variables.

X drive model, n drive count, and Y array type all make a difference in the achievable throughput. And that's just for starters. Then there's the effect of the stripe size, the PCIe slot (8x lanes available, or fewer?),...

It would actually get confusing IMO (multiple spreadsheets that give the impression of a Hydra), as they can't test out every possible scenario. So they just use the clock frequency the processor is rated for.

Ultimately, the only real way I know to get good throughput data is testing under real world conditions, which is specific to the usage anyway if it's designed properly.
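To make that concrete, sustained throughput ends up as the minimum of several independent limits, which is why no single spec number covers it. A minimal sketch (Python; every figure here is an assumption for illustration, not a vendor spec):

```python
# Rough, illustrative model of stripe set (RAID 0) sustained throughput.
# All numbers below are assumptions for demonstration, not measured specs.

def estimate_throughput_mb_s(per_drive_mb_s, drive_count,
                             card_limit_mb_s, pcie_limit_mb_s):
    """Sustained throughput is capped by the slowest link in the chain."""
    raw_array = per_drive_mb_s * drive_count   # what the drives could deliver
    return min(raw_array, card_limit_mb_s, pcie_limit_mb_s)

# 8x ~100MB/s mechanical drives behind a hypothetical ~1500MB/s card
# in a PCIe 1.1 x8 slot (~2000MB/s):
print(estimate_throughput_mb_s(100, 8, 1500, 2000))   # 800 -> drive bound
print(estimate_throughput_mb_s(250, 8, 1500, 2000))   # 1500 -> card bound
```

And that still ignores stripe size, array level, and the workload itself, which is the point above.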
 
ok, im with ya. its just not a normal thing to compare hard drive speeds with frequencies, even though it happens behind the scenes.

say, for argument's sake, that the RAID card was the bottleneck - does the 1200MHz card yield 50% faster throughput than the 800MHz card? one would assume so, given that the MHz relates directly to MB/s. or is there more to it?

:)
 
ok, im with ya. its just not a normal thing to compare hard drive speeds with frequencies, even though it happens behind the scenes.
Well, as the RAID card maker doesn't produce drives, frequency is really all they can provide (hard fact). Throughput rates published are specific to each test configuration (i.e. performance tests in the ads/product pages that show what the card can do). But they can't give specifics other than that, given the variables and dependencies involved.

say, for argument's sake, that the RAID card was the bottleneck - does the 1200MHz card yield 50% faster throughput than the 800MHz card? one would assume so, given that the MHz relates directly to MB/s. or is there more to it?
It's possible, but it will depend on the specifics. That is, if the processor can be run flat out, all cores blazing, and everything else about the array is identical, then yes.

But it's possible that in some cases, "flat out, best throughput" might not need that much additional frequency (less than the full 400MHz difference, which limits the comparison between those two processors).

With SSDs however, high frequencies are needed to keep up, given the drives are much faster than mechanical; ~2.5x on recent drives, and it's going to get faster as the drive controllers' throughputs improve (even on light loads, such as a stripe set).
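For the best case described above, the arithmetic is just the clock ratio (a sketch, not a benchmark; real gains are usually smaller):

```python
# Best case only: when the card's processor is the sole bottleneck and
# everything else is equal, throughput scales with the clock ratio.
best_case_gain = 1200 / 800   # 1.5 -> up to 50% faster
# In practice the gain shrinks whenever the workload doesn't need the
# full extra 400MHz, or another bottleneck (drives, PCIe) takes over.
print(f"best case speedup: {best_case_gain:.2f}x")
```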
 
Well, as the RAID card maker doesn't produce drives, frequency is really all they can provide (hard fact). Throughput rates published are specific to each test configuration (i.e. performance tests in the ads/product pages that show what the card can do). But they can't give specifics other than that, given the variables and dependencies involved.
thats a good point! the card makers cant really benchmark their cards accurately on every system out there! so MHz rating makes sense, thats for clearing that :)

It's possible, but it will depend on the specifics. That is, if the processor can be run flat out, all cores blazing, and everything else about the array is identical, then yes.

But it's possible that in some cases, "flat out, best throughput" might not need that much additional frequency (less than the full 400MHz difference, which limits the comparison between those two processors).
CPU usage of the computer processor wouldnt be anything >1% though. because all the computer sees is one drive right? the card handles all the processing of data etc. (thats its job haha).

With SSDs however, high frequencies are needed to keep up, given the drives are much faster than mechanical; ~2.5x on recent drives, and it's going to get faster as the drive controllers' throughputs improve (even on light loads, such as a stripe set).
yup thats true. i guess the SSDs would stress the card a lot more because of their improved latency and whatnot. the increased bandwidth of the drives gives a proportionately increased level on frequency?
 
CPU usage of the computer processor wouldnt be anything >1% though. because all the computer sees is one drive right? the card handles all the processing of data etc. (thats its job haha).
I meant the card's processor, not the system CPU. The IOP series are ARM processors at their hearts (some with multiple cores).


the increased bandwidth of the drives gives a proportionately increased level on frequency?
If you mean what I think you do, then Yes.

Let's say you want to build a 1.2GB/s stripe set.
Mechanical = 12x members using 100MB/s drives
SSD = 5x members using 250MB/s drives (generates 1.25GB/s, as I can't work with 4.8 disks :eek: :p)

Though there are fewer SSDs, you still need the same frequency processor on the card to produce the desired level of performance.
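The same numbers as a quick sanity check (Python; the per-drive rates are the round figures used above, and stripe sets are assumed to scale linearly):

```python
import math

def members_needed(target_mb_s, per_drive_mb_s):
    # Stripe sets (RAID 0) scale roughly linearly with member count,
    # so round up to the next whole drive.
    return math.ceil(target_mb_s / per_drive_mb_s)

print(members_needed(1200, 100))   # mechanical @ ~100MB/s -> 12 members
print(members_needed(1200, 250))   # SSD @ ~250MB/s -> 5 members (1.25GB/s)
```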
 
I meant the card's processor, not the system CPU. The IOP series are ARM processors at their hearts (some with multiple cores).
my mistake, i thought you might have meant that, but wasnt sure!

If you mean what I think you do, then Yes.

Let's say you want to build a 1.2GB/s stripe set.
Mechanical = 12x members using 100MB/s drives
SSD = 5x members using 250MB/s drives (generates 1.25GB/s, as I can't work with 4.8 disks :eek: :p)

Though there are fewer SSDs, you still need the same frequency processor on the card to produce the desired level of performance.

makes perfect sense. if you were to implement 12x SSD drives for a total of 3GB/s, then it would clearly need a much higher frequency on the card.

what is the max throughput of the highest end RAID card? lets assume that the RAID card would be the bottleneck and that the hard drives have infinite throughput, as well as the PCIe bus.
 
Assuming you've an ARC-1680 series and the cache is at 4GB, it would likely hit the wall at 1.5GB/s or so with a stripe set (sustained throughput). Burst would be higher (i.e. I can get 1.39GB/s out of an 800MHz unit due to cache), but it's not sustainable in real world conditions (shows up in benchmarks).

Now if the processor is updated to produce additional throughput, the PCIe bandwidth (card or board spec used) and lane count would become the bottleneck. Currently, 8x lanes are the max that are available. So @ PCIe 1.1, the max throughput = 2GB/s (currently, all of Areca's cards are PCIe 1.1 spec compliant IIRC, not PCIe 2.0).
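That 2GB/s ceiling falls straight out of the per-lane rate for PCIe 1.x (a quick check; 250MB/s per lane is the usual effective figure after 8b/10b encoding):

```python
# PCIe 1.x: 2.5GT/s per lane with 8b/10b encoding ~ 250MB/s usable per lane,
# per direction. Eight lanes give the ~2GB/s ceiling mentioned above.
PCIE1_MB_S_PER_LANE = 250

def pcie1_bandwidth_mb_s(lanes):
    return PCIE1_MB_S_PER_LANE * lanes

print(pcie1_bandwidth_mb_s(8))   # -> 2000 MB/s
```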
 
The amount of RAID knowledge in this thread puts me to shame, but I would like to make one suggestion: you should consider an SSD with the new SandForce controller (such as http://eshop.macsales.com/shop/internal_storage/Mercury_Extreme_SSD_Sandforce).

Early SSD drives have shown significant performance degradation over time. This can be remedied by a) a full wipe or b) implementing the TRIM command. Unfortunately, there's no word of Apple adding TRIM support to OS X (it's working in W7). The SandForce SSDs have built-in wear-levelling and block management (at the controller level) that obviate the need for either of the two remedies above.

If you go with an older SSD, such as an Intel, you will most likely find yourself doing periodic full wipes - of each disk. Seeing as you're planning on a RAID setup, you will have to: back up your RAID -> wipe each drive -> reformat each drive -> reconfigure your RAID -> reload your data (or reinstall all programs). I imagine that this is something you will want to avoid.
 
that explains it very nicely nano :) thank you! i dont think that i have any further questions so far :p thats a pretty good understanding of it i think!

great suggestion Jacquesass! i didnt know that :D
 
Nano, I just received an e-mail from OWC that the Highpoint Rocketraid 4321 will not work in 2009 Mac Pros! :(

I e-mailed them about whether or not their new OWC Mercury Extreme SSD will work with the 4321 (they're actually going to test it to see!), but then he informed me that the 4321 will NOT work with the 2009 MP's. Now I don't know what to do!
 
They keep messing about with the EFI firmware, to the point that I'm not sure what's going on. Upon initial release of the RR43xx line, all of the cards were stated to be MP compatible.

At this point, it looks like the best solution is an Areca, and be done with it. The ARC-1212 (4 port model) and ARC-1222 (8 port model) use an 800MHz processor, and the ARC-1680 series (8 ports or more) uses 1200MHz processors; both processors are from the same family (IOP348 series).

As it happens, the ARC-1222 is close in price to the RR4321, but it will boot OS X. Its ports are all internal, but there's a cable that solves the issue (internal to external, but you will need to run it out through a PCI bracket when you need to use those ports). So it's not quite as clean in terms of cables.
 
Thank you nano.

I just finally got a reply from the guy. He seemed to think I was planning on using the card by connecting the mini-SAS cable that runs from the HD sleds in the 08 MP's and trying to do that with an 09 (which obviously will not work, which is why he said the card is incompatible). So I explained it, and I think I will most likely get a response back that the configuration I'm going for will work perfectly fine.
 
:cool: NP. :)

I wish you luck, and hope they've still got EFI firmware available for it. :)
 
Help on RR4321 Firmware

I have a 2009 MacPro with Snow Leopard. The issue I have is with the 090122 EFI firmware. It will not boot, and drives seem to drop sporadically from the desktop.

Other forums have said that 20090617 EFI is the right firmware, but that is not the version posted by HighPoint. Go figure. So... does anyone have this version and could share?

Thanks!!
 
i've decided to configure my drives as follows, and i need help making it happen. if you guys could shed some light on this for me i'd really appreciate it.

i currently have 4TB of western digital caviar black 1TB hard drives. i'd like to turn those into a RAID 5 for a video editing scratch disk (software RAID for now, RAID card when i can afford one)

i just purchased a 2TB western digital caviar green drive that i'd like to install internally as well, either in the optical bay or in one of the sata sled slots, and use for an OS boot volume. as a secondary concern, if i can i'd like to make a partition of the disk that is safe for booting windows 7, but i understand that there's something about how this drive writes/reads blocks of data that doesn't work well with windows.

i have a few questions: #1, can this be done in the 09 mac pro? #2, is this a bad idea? #3, can i put one of the RAID disks in the optical bay instead of my boot volume?
 
sounds like a pretty good setup to me! i dont see any potential problems using bootcamp + OSX on the same drive, a lot of people do it! i have never had a problem...

you can most certainly put one of the RAID HDDs in the optibay, provided that your cables reach - which i dont think will be a problem.

software RAID isn't completely optimal for a RAID5 setup - but it will have to do until you have the funds to afford the RAID card, which can be quite expensive :eek:

goodluck!
 
i currently have 4TB of western digital caviar black 1TB hard drives. i'd like to turn those into a RAID 5 for a video editing scratch disk (software RAID for now, RAID card when i can afford one)
Not recommended for three reasons:
1. Software based RAID is NOT up to the task of a level 5 array, as it has absolutely NO provision for the write hole issue associated with parity based arrays (see the sketch below). A UPS helps significantly, but it's still not as good as a proper hardware controller, which has additional recovery techniques in hardware that aren't available via driver based setups.

2. For a RAID 5, you really need to run enterprise grade HDDs, and this is especially the case with hardware RAID controllers (it has to do with the recovery timings; SAS controllers are notoriously picky with SATA drives, and most of the higher end cards are moving to SAS).

3. RAID 5 isn't necessary for scratch (it's wasteful of resources, and could actually do more damage to the drives due to the increased stress of a heavy write environment). RAID 5 is suited to valuable data and availability (uptime of the system).

Also keep in mind that Disk Utility (OS X) only offers you 0/1/10, so RAID 5 has to be done through 3rd party offerings. There is something known as a Fake RAID controller, which is nothing more than a SATA controller chip + drivers (no processor or cache, both of which you will find on true RAID cards); it's the cache + emergency power that comprise the NVRAM solution to the write hole issue with parity arrays.
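To illustrate the write hole itself (just the failure mode, not any particular card's implementation), here's a toy sketch in Python; the block values are made up:

```python
# Toy illustration of the RAID 5 write hole (hypothetical, simplified).
# A stripe update means writing the new data block AND the recomputed
# parity. If power dies between the two writes, parity no longer matches
# the data, and a later rebuild will silently reconstruct garbage.

def xor_parity(blocks):
    parity = 0
    for b in blocks:
        parity ^= b
    return parity

data = [0b1010, 0b0110, 0b1100]    # three data blocks in one stripe
parity = xor_parity(data)          # parity on disk is consistent

data[1] = 0b0001                   # step 1: new data block hits the disk
# --- power loss here: the parity write never happens ---

assert xor_parity(data) != parity  # stripe is now internally inconsistent
# A hardware card's battery/NVRAM-backed cache can replay the pending
# parity write on power-up; software RAID keeps no such record.
```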

i just purchased a 2TB western digital caviar green drive that i'd like to install internally as well, either in the optical bay or in one of the sata sled slots, and use for an OS boot volume. as a secondary concern, if i can i'd like to make a partition of the disk that is safe for booting windows 7, but i understand that there's something about how this drive writes/reads blocks of data that doesn't work well with windows.
If you mean to use one OS disk for both OS X and Windows, it's possible via Boot Camp. But this seems to cease to function if the RAID is created under OS X (0/1/10), and I can't recall if this has changed (seems Disk Utility modifies the system firmware).

It is still possible however, if you use a hardware RAID controller. If you go this route, you'd want to pay attention to the HDD Compatibility List to see what passed, and choose your drives from that (known to work, so it saves you seemingly endless hassle).

#3, can i put one of the RAID disks in the optical bay instead of my boot volume?
It will depend on the specifics you choose (details as to cabling, and perhaps adapters necessary to function with the HDD bays).

What exactly are you looking at/doing?

If you wish to use RAID 5, I'd recommend looking at Areca or ATTO (Highpoint in a pinch, but the value isn't what you might think from the prices). Highpoint has poor support (your own skill level could negate this, save the issues obtaining firmware if you should ever want to boot from it; all 3 brands listed do offer EFI firmware in order to enable boot capability), and a lack of included cables, which negates the price advantage for some.
 
Not recommended for three reasons:
1. Software based RAID is NOT up to the task of a level 5 array, as it has absolutely NO provision for the write hole issue associated with parity based arrays. A UPS helps significantly, but it's still not as good as a proper hardware controller, which has additional recovery techniques in hardware that aren't available via driver based setups.
what sort of probability are we looking at here? how likely is it to fail or to have something incorrectly written? is it only likely to happen in a brown out/power loss - or can it happen even when there are no power issues?
 
The probability is high enough that it's a real issue, as scratch is a high write environment. For reads, it's not an issue at all.

As for the cause, it's power related, but it doesn't have to be an outage. Brownouts and even a glitch in the PSU or VRs can cause it as well (brownouts are by far the biggest cause, and why the UPS is the better choice if you can only afford one form of power protection - that is, card battery vs. UPS).

Ultimately, power isn't perfect (wall source), and you have to plan for that. Otherwise, you've undermined the entire undertaking, and data loss is a guarantee with parity arrays (when, not if).

The card battery can out-last the UPS, but its limitation is that if the write is larger than the cache, the data is still borked (corrupt). A UPS can allow the write to complete, then allow the user to shut down properly, without data loss. Ideally, you use both, but budget limitations do hamper this in many cases. Even the enterprise market will skip card batteries, though they take other precautions with data centers, such as backup generators and large tanks of fuel that can last days (some are natural gas, but most are diesel).
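The battery vs. UPS trade-off above boils down to a simple comparison. A hypothetical sketch (Python; the sizes and outcomes are illustrative, not a model of any specific card):

```python
# Hypothetical sketch of the failure cases described above.
def write_survives_outage(write_mb, cache_mb, has_battery, has_ups):
    if has_ups:
        # UPS lets the write finish and the host shut down cleanly.
        return True
    if has_battery:
        # A card battery only preserves what fits in the card's cache.
        return write_mb <= cache_mb
    # Neither: an interrupted parity write means a corrupt stripe.
    return False

print(write_survives_outage(256, 512, has_battery=True, has_ups=False))   # True
print(write_survives_outage(2048, 512, has_battery=True, has_ups=False))  # False
```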
 
aahh ok i presumed it was something more to do with the way that the software interacted directly with the hard drives - and that by using the RAID card it allowed for a more, uhh, persistent communication.

that makes sense though, i am glad i did not implement a software RAID5 then! because my house suffers daily brown outs from our washing machine (it causes our modem to reset every day), and our UPS doesnt "kick" in in time to keep the modem powered :(

it all makes sense now why the RAID cards are needed!
 
that makes sense though, i am glad i did not implement a software RAID5 then! because my house suffers daily brown outs from our washing machine (it causes our modem to reset every day), and our UPS doesnt "kick" in in time to keep the modem powered :(
You need a better UPS, one that always runs off of the batteries (no switching involved, so there's no interruption at all).

it all makes sense now why the RAID cards are needed!
That's part of it. Cards can also offer better performance, capacity, and reliability/availability (i.e. in the form of proper handling of parity arrays, and levels are available that can't be done via software alone). ;)
 
You need a better UPS, one that always runs off of the batteries (no switching involved, so there's no interruption at all).
yup it switches, and its BLOODY annoying! its hard to find one that will actually work around here though, we almost have to buy them and test them - they dont normally specify on the box.


That's part of it. Cards can also offer better performance, capacity, and reliability/availability (i.e. in the form of proper handling of parity arrays, and levels are available that can't be done via software alone). ;)
right, because the card handles the parity/everything else, the computer wouldnt even realise you have a RAID? it just writes as per normal.
 
yup it switches, and its BLOODY annoying! its hard to find one that will actually work around here though, we almost have to buy them and test them - they dont normally specify on the box.
If it's not on the box, it's on the manufacturer's site somewhere. Also keep in mind that what's not stated is just as important as what is.

An example of the type of UPS I described is the SUA series from APC (Smart-UPS). There are other companies as well, such as Eaton (I know they sell in the UK, not sure about Australia).

This type isn't cheap, but it will pay for itself. SUAs are built like tanks, and don't die often. Additional surge suppression is a good idea (even some of the SUA models are rated under 500 Joules, and you want ~4k Joules or more of suppression).

right, because the card handles the parity/everything else, the computer wouldnt even realise you have a RAID? it just writes as per normal.
The RAID itself is invisible (presents as a single disk to the OS and software).
 