If it's not on the box, it's on the manufacturer's site somewhere. Also keep in mind, what's not stated is just as important as what is.
which is why ive yet to buy one - none of them state they run directly off the battery so i assume the worst :)

An example of the type of UPS I described is the SUA series from APC (Smart UPS). There are other companies as well, such as Eaton (I know they sell in the UK, not sure about Australia).
anything <$100 lol?

This type isn't cheap, but it will pay for itself. SUA's are built like tanks, and don't fail often. Additional surge suppression is a good idea (even some of the SUA models are under 500 Joules, and you want ~4k Joules or more of suppression).
just something that can provide constant power and maybe 30mins-1hr of backup battery. thats all hehe


The RAID itself is invisible (presents as a single disk to the OS and software).
intersting stuff isnt it :D
 
which is why ive yet to buy one - none of them state they run directly off the battery so i assume the worst :)
Take a look here and here for more detailed information, but the basic type you're after is called an Online UPS (additional subtypes exist, and additional information is in the links). ;)

anything <$100 lol?
Unfortunately, no, as they're ~10x that minimum for an SUA1500 model, according to the APC.au site (1500VA - and as it's not listed as online, it's likely the Line Interactive type now; still better than what you've been looking at, I think). :eek: And the true Online unit is worse than that (1kVA or 2kVA would be in the range you'd have to get in that family, as there is no 1500VA unit). You might also require a larger circuit to run a 2kVA unit (assuming you don't have the necessary circuit in the house/apt.), which means the added cost of an electrician (and almost certainly won't be possible in a rental apt./dorm).

But you can usually find refurbished units if you look hard (there are companies that specialize in this), and they'll work just as well as a new one, but at a notable savings. :D
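
If you want to ballpark the VA rating you'd actually need before shopping, the math is simple. Here's a rough sketch (the wattage figures, power factor, and headroom below are placeholder assumptions, not specs from any particular APC or Eaton model - plug in your own gear's numbers):

Code:
# Rough UPS sizing sketch. All numbers are assumptions/placeholders,
# not vendor specs -- replace them with your own measured loads.

loads_watts = {
    "computer": 300,          # assumed typical draw under load, not the PSU's label rating
    "monitor": 60,
    "external_drives": 50,
}

power_factor = 0.7   # common rough assumption for this kind of load on a UPS
headroom = 1.25      # don't size a UPS right at 100% of its rating

total_watts = sum(loads_watts.values())
required_va = total_watts / power_factor * headroom

print(f"Estimated load: {total_watts} W")
print(f"Suggested minimum UPS rating: ~{required_va:.0f} VA")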
 
as usual you're amazing nano. thanks others for your help as well!

until i can afford a hardware RAID controller, what do you recommend for my 4TB RAID scratch disk? i don't need mirroring because i'd prefer to have the space, and it's not a real backup solution anyway. i did buy a 4TB G-Tech hard drive for a backup just for my own peace of mind (i watched a coworker's media drive crash without a backup during his FINAL render tonight) but i always tell my clients to put their data in two OTHER places before giving me a copy so i'm not really that worried about that as much as space and speed. i'm trying to level-up with my home freelance work and would like to attract more RED footage clients. i've been working in 2k with no issues other than limited space and the occasional dropped frame on a two-1tb software RAID 0.

also, what's best to purchase next... the UPS or the RAID card?
 
Take a look here and here for more detailed information, but the basic type you're after is called an Online UPS (additional subtypes exist, and additional information is in the links). ;)


Unfortunately, no, as they're ~10x that minimum for an SUA1500 model, according to the APC.au site (1500VA - and as it's not listed as online, it's likely the Line Interactive type now; still better than what you've been looking at, I think). :eek: And the true Online unit is worse than that (1kVA or 2kVA would be in the range you'd have to get in that family, as there is no 1500VA unit). You might also require a larger circuit to run a 2kVA unit (assuming you don't have the necessary circuit in the house/apt.), which means the added cost of an electrician (and almost certainly won't be possible in a rental apt./dorm).

But you can usually find refurbished units if you look hard (there are companies that specialize in this), and they'll work just as well as a new one, but at a notable savings. :D

we have this device which is pretty terrible at being a UPS lol. its 700VA and perfectly fine for our uses to provide power to the modem. we have a BatteryWorld in town, i might go and see if they have any APC models - anything thats >500VA would be fine, we dont need 1kVA or anything - that's way way too much haha
 
until i can afford a hardware RAID controller, what do you recommend for my 4TB RAID scratch disk? i don't need mirroring because i'd prefer to have the space, and it's not a real backup solution anyway. i did buy a 4TB G-Tech hard drive for a backup just for my own peace of mind (i watched a coworker's media drive crash without a backup during his FINAL render tonight) but i always tell my clients to put their data in two OTHER places before giving me a copy so i'm not really that worried about that as much as space and speed. i'm trying to level-up with my home freelance work and would like to attract more RED footage clients. i've been working in 2k with no issues other than limited space and the occasional dropped frame on a two-1tb software RAID 0.
I'd go ahead and stripe the 4x 1TB's together for scratch space (~400MB/s or so from the set, which is more than sufficient).

also, what's best to purchase next... the UPS or the RAID card?
Well, if you don't have the UPS on hand, I'd start with that.

The reasoning is as follows:
1. It can benefit your system (protect the computer + monitor from both blackout and brownout conditions). Bare minimum = Line Interactive type (it's still a switching unit, but has an autotransformer that helps with low wall voltage scenarios that won't trigger the batteries + inverter).
2. You'll need it for RAID, especially if you run parity based arrays.

we have this device which is pretty terrible at being a UPS lol. its 700VA and perfectly fine for our uses to provide power to the modem. we have a BatteryWorld in town, i might go and see if they have any APC models - anything thats >500VA would be fine, we dont need 1kVA or anything - that's way way too much haha
That's a toy (link). :eek: I'd suggest going back to the links and look for a smaller unit (VA rating).

Eaton makes good gear as well, but you found the wrong model. :eek: :rolleyes: :p

The Eaton 9130 (700VA version) would do. The 500i might as well, but it's a Line Interactive type, not Online as the 9130 is (and certainly a notable price difference as well).

Given budget limitations, you should try a line interactive unit, and see how it will act under the conditions you're experiencing. If it won't work, then you'd be stuck with buying an Online unit (refurbished or otherwise).
 
That's a toy (link). :eek: I'd suggest going back to the links and look for a smaller unit (VA rating).

Eaton makes good gear as well, but you found the wrong model. :eek: :rolleyes: :p

The Eaton 9130 (700VA version) would do. The 500i might as well, but it's a Line Interactive type, not Online as the 9130 is (and certainly a notable price difference as well).

Given budget limitations, you should try a line interactive unit, and see how it will act under the conditions you're experiencing. If it won't work, then you'd be stuck with buying an Online unit (refurbished or otherwise).

hmm. what do you think of this?? looks like it will perform the functions - i had a read of line interactive and i hope that it will be able to maintain the power requirements!

its either that one, or this one lol! (NO THANKS)
 
hmm. what do you think of this?? looks like it will perform the functions - i had a read of line interactive and i hope that it will be able to maintain the power requirements!
I know nothing of the company/brand, but it's the right type for "bare minimum". Whether or not it will suffice, I can't say, as I don't know the specific power conditions at your location (sometimes even these won't do).

...this one lol! (NO THANKS)
NO THANKS?!?!? You CHEAP B@&#%^D. :eek: :D :p

It's the type that would solve the problem though. But like anything else, if it's really worth having, it's not cheap. And I understand the starving college student life means funding it is a challenge at best. ;) But if you can swing it, it's not a bad idea. More importantly however, if the Line Interactive model won't do, you'll have no choice (but you'd have to test out the other model first, and see how it reacts with your equipment).
 
I know nothing of the company/brand, but it's the right type for "bare minimum". Whether or not it will suffice, I can't say, as I don't know the specific power conditions at your location (sometimes even these won't do).
i dont even think our energy company knows the specifics :p :rolleyes: its pretty consistent though, hardly ever had black outs or brown outs. just need it to work so that the darn washing machine doesnt muck it up!


NO THANKS?!?!? You CHEAP B@&#%^D. :eek: :D :p
its not my house haha! if i dont find something cheap then it wont be bought, my rents dont care about the internet constantly being reset.

It's the type that would solve the problem though. But like anything else, if it's really worth having, it's not cheap. And I understand the starving college student life means funding it is a challenge at best. ;) But if you can swing it, it's not a bad idea. More importantly however, if the Line Interactive model won't do, you'll have no choice (but you'd have to test out the other model first, and see how it reacts with your equipment).

i wonder if they will let me test it :p

what is the battery "life" like on these sorts of on-line devices? what technology of battery are they? because if i were to buy the $500 model, i would want it to last me at least 5+ years (to justify my investment). :)
 
i dont even think our energy company knows the specifics :p :rolleyes: its pretty consistent though, hardly ever had black outs or brown outs. just need it to work so that the darn washing machine doesnt muck it up!
They can't, as the load is variable. They can however run statistics from previous data to get some idea as to where to set the controls (furnace based plants can't adapt that fast, as they take days to cool).

i wonder if they will let me test it :p
Maybe. Worth asking if you mean in-store, but you'd want it at your location, and check it for as long as possible (i.e. limit of return period).

what is the battery "life" like on these sorts of on-line devices? what technology of battery are they? because if i were to buy the $500 model, i would want it to last me at least 5+ years (to justify my investment). :)
The batteries are lead-acid types, and typically last 3 - 5 years.

The online units tend to last a while (i.e. I've had many APCs last more than 10 years, so you only need to replace batteries, and they're not nearly as expensive as the unit when new), but I'm not familiar with that brand. It should be of better quality than most consumer electronics, given what it's made to do, just as APC or Eaton is.

BTW, I'd recommend looking around for places in Australia that deal in refurbished units (APC and Eaton do sell there).
 
Software raid on 2009 MP

I'm relatively new to video editing on a mp and this is the closest forum I can find to my question. At any rate, it's clear that all of your knowledge greatly exceeds my own. I am buying two intel 160 GB SSD drives in hopes of setting up a striped software raid on my 2009 MP. The existing 750 GB conventional hard drive will function as extra storage and the entire system will be backed up with a time capsule (2 TB). Are there any problems I should foresee? Again, I'm relatively new to this so speak (write?) very slowly. :)
 
I'm relatively new to video editing on a mp and this is the closest forum I can find to my question. At any rate, it's clear that all of your knowledge greatly exceeds my own. I am buying two intel 160 GB SSD drives in hopes of setting up a striped software raid on my 2009 MP. The existing 750 GB conventional hard drive will function as extra storage and the entire system will be backed up with a time capsule (2 TB). Are there any problems I should foresee? Again, I'm relatively new to this so speak (write?) very slowly. :)
Assuming you only want to run OS X, you'd be fine. 2x Intel 160GB SSD's would generate ~500MB/s for sustained reads, and the ICH in the chipset can handle ~660MB/s. So even if you run a single HDD simultaneously, it won't throttle (660MB/s or so, no matter what the disks can do, as the ICH becomes the weak link).

If you want to run Windows (and/or Linux), you can do it under certain conditions, but not via Boot Camp or as a separate disk on the logic board's SATA controller (ICH built into the chipset). Disk Utility changes the system firmware, and the set will no longer function with Windows or Linux.

You'd need a separate card that can boot Windows (BIOS), which can be a SATA/eSATA card (cheapest method), up to a full-fledged RAID card. There are a few cards of each type that would suffice, depending on specifics, such as drive count, Port Multiplier support (if you need it), and SATA specification (3.0Gb/s is fine for HDD, but with SSD, 6.0Gb/s is recommended).

Hope this helps. :)
 
Assuming you only want to run OS X, you'd be fine. 2x Intel 160GB SSD's would generate ~500MB/s for sustained reads, and the ICH in the chipset can handle ~660MB/s. So even if you run a single HDD simultaneously, it won't throttle (660MB/s or so, no matter what the disks can do, as the ICH becomes the weak link).

If you want to run Windows (and/or Linux), you can do it under certain conditions, but not via Boot Camp or as a separate disk on the logic board's SATA controller (ICH built into the chipset). Disk Utility changes the system firmware, and the set will no longer function with Windows or Linux.

You'd need a separate card that can boot Windows (BIOS), which can be a SATA/eSATA card (cheapest method), up to a full-fledged RAID card. There are a few cards of each type that would suffice, depending on specifics, such as drive count, Port Multiplier support (if you need it), and SATA specification (3.0Gb/s is fine for HDD, but with SSD, 6.0Gb/s is recommended).

Hope this helps. :)

I used to run Windows XP sp2 in bootcamp on the same partition, but will no longer need to because my employer now supports a vpn via mac os, so this is great news. I also read your comments on getting an online UPS- I'm guessing this still applies to me as well. Thanks for your help.
 
I used to run Windows XP sp2 in bootcamp on the same partition, but will no longer need to because my employer now supports a vpn via mac os, so this is great news.
It seems the technical issues regarding Windows and Linux won't be a problem then. ;)

I also read your comments on getting an online UPS- I'm guessing this still applies to me as well. Thanks for your help.
It's a really good idea, but not absolutely required to keep the data already written intact. But if power goes out during a write, that file will be corrupted, and the work may need to be re-performed (if the application doesn't have the ability to resume on its own; OS X won't do it for you).

That said, it also protects the system from power issues, particularly brown-out conditions, which cause more damage to computers than blackouts or lightning strikes.

From a financial POV, it makes sense to use one, as the UPS is cheaper than a new system, and around the cost of a new logic board. Toss in a PSU, and the UPS is definitely cheaper than the repairs that would result from such damage. And this is before considering the financial aspect of your data (recovery fees and lost time if the backup is non-existent or damaged as well). :eek: :D
 
I know this is an old thread, but wanted to see how the OP has come along. I don't think the new Mac Pros are out, so has he used the 2009 model???

I've got an 8 core, 2009 Mac Pro. I've got 4 1TB drives in the drive bays, mostly for media. I'd like to get SSD for everything else. I noticed that for the same amount of space, 4 SSDs aren't much more than 2 SSDs. So, if I can get better performance from the raid with 4 of them, I'd do that. I'm just curious what the reality of that is.

This thread has a lot of useful raid setup information, so I decided to continue it... There's one note here that nanofrog made regarding the IOP. For example, with the RocketRaid 4321, using a mini-SAS to SATA cables - what is the max rate it can sustain? The bus can handle far more and each SATA port can handle 3Gb/s - each drive will be max, say 250MB/s read, so then does the controller become the bottleneck? I'm curious how to rate controllers to see if they can really do 4 x 250MB/s assuming a raid 0. I was also curious if the SAS to SATA fan out cable made a difference in terms of performance. If the board had 4 dedicated SATA ports - is that better, or does that make no difference at all and does it all come down to the chip?

What would be even nicer - although maybe overkill - getting an 8 port internal model, where the 4 SSDs could be in a bootable raid 0 and 4 1TB drives in the drive bays could be configured in a RAID5 - squeeze out some more space. That means a lot of cables in there and not sure if it's possible to get cables into the drive bays at all, but I think some have done it...

Thanks
 
I've got an 8 core, 2009 Mac Pro. I've got 4 1TB drives in the drive bays, mostly for media. I'd like to get SSD for everything else. I noticed that for the same amount of space, 4 SSDs aren't much more than 2 SSDs. So, if I can get better performance from the raid with 4 of them, I'd do that. I'm just curious what the reality of that is.
The sustained throughputs will definitely improve with a striped set (RAID 0). In this case, you get:

n disks * the throughput of a single disk.

Now if your SSDs are capable of 250MB/s per disk (sustained reads), and you've 4 of them, you'll get 1GB/s for sustained reads (n = 4, so 4 * 250MB/s = 1000MB/s = 1GB/s).
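
If it helps to see it as a tiny bit of code, here's a sketch (the 250MB/s per-disk figure is the same assumption as above; real arrays land somewhat under the ideal number):

Code:
# Ideal-case RAID 0 sustained-read estimate: n disks * per-disk throughput.
# Treat it as an upper bound; real-world results come in a bit lower.

def stripe_throughput(n_disks: int, per_disk_mb_s: float) -> float:
    return n_disks * per_disk_mb_s

print(stripe_throughput(4, 250))  # 4 SSDs at ~250MB/s each -> ~1000MB/s (1GB/s)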

There's one note here that nanofrog made regarding the IOP. For example, with the RocketRaid 4321, using a mini-SAS to SATA cables - what is the max rate it can sustain? The bus can handle far more and each SATA port can handle 3Gb/s - each drive will be max, say 250MB/s read, so then does the controller become the bottleneck?
If the SAS or SATA controller chip is 3.0Gb/s, this is a bottleneck, particularly on this card.

Let me explain. First, real world throughputs on SATA (or SAS) 3.0Gb/s are ~270MB/s. As the card only has 4x ports, you just multiply 4 * 270MB/s = 1080MB/s. It's fine for existing SSD's, but will be a problem in the near future (in this case, 8x PCIe lanes aren't a problem, even at Gen 1.0, and the IOP runs at 800MHz, which is more than sufficient as well for RAID 0 for 4x ports).

6.0Gb/s non RAID HBA's and RAID cards are available from ATTO Technology (Areca only offers 3.0Gb/s right now), but the HighPoint card was listed as the cheapest option for a Windows array. If you go the non-RAID HBA route though, you can still implement a RAID 0 via Disk Utility, and the ATTO can actually boot in EFI AFAIK (Tech Sheet not yet available on what I was looking at on ATTO's site; Areca's non RAID HBA can't, but their RAID cards can).

I've mentioned it before, but I'll remind you it's difficult, if not impossible to actually get EFI firmware for the HighPoint card, which I presume would be a problem for you.

What would be even nicer - although maybe overkill - getting an 8 port internal model, where the 4 SSDs could be in a bootable raid 0 and 4 1TB drives in the drive bays could be configured in a RAID5 - squeeze out some more space. That means a lot of cables in there and not sure if it's possible to get cables into the drive bays at all, but I think some have done it...
The cables won't hinder performance so long as you stick within the specified lengths (1.0 m for passive, and 2.0 m for active, which is only via a PM chip or SAS expander equipped enclosure). Internal or 1x port per disk external are passive signals, so you'd need to stick with 1.0 meters (and stay away from adapters, as they're not suited for SATA signals; the voltages are way too low, and the array is unstable at best).

As per getting a card, I usually recommend getting 4 ports beyond what's needed initially for future expansion (cheaper this way, as you won't have to swap out for a different card, just add disks).

You can fit the cables, as the HDD kit (MaxUpgrades) has the cables needed to get data to the card (you'd also need the extension cable to reach the card). Both are linked above.

What you don't want to do, is run SSD's in a parity based array, as they're not suited to high write environments, especially MLC based drives. This also means if you're writing a lot of data repetitively, such as newer file versions replacing older ones. If you're working in a high write environment, stick with mechanical right now.
 
Yeah - this is where things get a little confusing. How exactly do you rate the controller cards? As you said, the max for 4 SATA devices is going to be 4 x 270MB/s = 1080MB/s. Suppose you have a new SSD that can get that much bandwidth - we know the PCIe is not going to be the bottleneck. Individually, the SATA per drive is not limiting, but the controller seems to be a bit of a question mark. You said the IOP runs at 800MHz, which isn't a problem - I think it's just difficult to compare or understand exactly what cards would be bottlenecks, or how to compare them.

For example, looking at this site:

http://macperformanceguide.com/Reviews-SSD-OWC-Mercury_Extreme-RAID.html

There are two different cards with external drives. They ran with 4 OWC SSD drives in a RAID 0. Individually, they were getting over 250MB/s read speed from these drives. In various configurations - using the MacPro it was 611MB/s (expected) but no more than 789MB/s using the Sonnet Tempo card. The FirmTek was even slower - 679MB/s. They got faster results by using two cards and two disks on each card. Those results approached the 1000MB/s, but not quite. Now, expect it's not going to be perfectly linear - but 789MB/s is pretty far from 1000MB/s - I'm curious what 3 drives would do. If it's around 750MB/s, no point in buying 4 drives! And, I can only assume the card is the bottleneck here.

Now, I don't know anything about these cards. Maybe their chips are slower. Maybe they have other issues. Maybe his tests were flawed, I don't know. But - what I'm trying to say is - it's difficult to understand how these cards will perform in these RAID configurations. From the bus, to the SATA interface, to the hard drives, we have good numbers on what throughput they can do and adding some buffer space in, we could buy a few components and generally expect a certain level of performance based on those specs. But the controllers seem to be a bit of a question mark. Besides having expert opinions from someone like yourself who posts quite a bit, or someone's writeup on a test, we can't just look at hardware specs and know if a card can perform as expected. It would be nice if they just posted some standard maximum throughput numbers so you could compare...
 
Yeah - this is where things get a little confusing. How exactly do you rate the controller cards?
Definitely, as some will get a better idea from throughputs (MB/s), such as DAS (Direct Attached Storage = storage is available only to the system it's attached to), and others from IOPS (i.e. SAN systems running large scale databases for example).

In your case, it's a DAS situation from what you've posted so far (most MR members use this type of storage).

As you said, the max for 4 SATA devices is going to be 4 x 270MB/s = 1080MB/s. Suppose you have a new SSD that can get that much bandwidth - we know the PCIe is not going to be the bottleneck.
Actually, the PCIe lanes can be an issue as well under certain circumstances. For example, each drive capable of pushing 3.0Gb/s ports to the limit, enough disks to have one port essentially using a single lane on its own (i.e. 8x disks on an 8x lane card), and either the slot or card using the PCIe 1.0 specification (PCIe 1.0 = 250MB/s per lane).

BTW, 6.0Gb/s RAID cards are designed with Gen 2.0 PCIe spec, but would experience the same issue with fast disks (250+ MB/s) if it's in a PCIe Gen 1.0 slot. Another way to throttle is if the lane count of the slot is less than that of the card (i.e. running an 8x card in a 4x active slot that it fits; 8x or 16x lane physical connector, but not wired for that many lanes electrically).

It all depends on the specifics, so as the old saying goes, "The Devil's in the Details". ;)
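
To make that concrete, here's a rough "weakest link" sketch. The per-lane and per-port figures are the usual PCIe 1.0/2.0 and real-world SATA 3.0Gb/s numbers already mentioned above; the drive counts and speeds are just example inputs, not the specs of any specific card:

Code:
# Take the slowest link in the chain: drives, SATA/SAS ports, or the PCIe link.
# All inputs are example assumptions -- swap in your own card/slot/drive numbers.

def effective_throughput(n_drives, per_drive_mb_s, per_port_mb_s,
                         pcie_lanes, per_lane_mb_s):
    drives = n_drives * per_drive_mb_s   # what the disks can push
    ports  = n_drives * per_port_mb_s    # SATA/SAS port ceiling (~270MB/s real-world at 3.0Gb/s)
    link   = pcie_lanes * per_lane_mb_s  # PCIe ceiling (250MB/s per lane Gen 1.0, 500MB/s Gen 2.0)
    return min(drives, ports, link)

# 4 fast SSDs on a 4-port 3.0Gb/s card:
print(effective_throughput(4, 270, 270, 4, 250))  # Gen 1.0 x4 slot caps it at ~1000MB/s
print(effective_throughput(4, 270, 270, 4, 500))  # same card in a Gen 2.0 x4 slot: ~1080MB/s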

Individually, the SATA per drive is not limiting, but the controller seems to be a bit of a question mark. You said the IOP runs at 800MHz, which isn't a problem - I think it's just difficult to compare or understand exactly what cards would be bottlenecks, or how to compare them.
First off, a proper RAID card has either a SAS or SATA controller, dedicated processor (removes the RAID processing from the CPU), and a cache. It's also designed to handle its own recovery (essentially, it's a dedicated computer in its own right, but aimed at a specific use, which is to handle the disks in specific configurations for increased redundancy and/or throughputs, depending on the level implemented). When dealing with parity based arrays, you have something called the write hole, and proper cards include a hardware solution (NVRAM). They need batteries and/or a UPS to help with it, as it also requires power (ideally, you run both, but some card makers don't actually offer batteries, as the UPS, particularly an Online type, is expected to be used, and even the card battery can't help you if the data is larger than the cache).

What you're looking at, are Fake RAID controllers, which are nothing more than a SATA controller chip. The computer uses drivers to handle the RAID functions, which means system resources are consumed to do this, reducing the available clock cycles for other functions.

It's like comparing a sports car to a bicycle. They're that different, especially as you move to other RAID levels (i.e. some Fake RAID controllers include the ability to run RAID 5, but aren't suited, as they don't possess an NVRAM solution to the write hole). No cache to hold data in the event of a power failure, and they don't have the recovery capabilities that true hardware cards do either.

Simply put, if you run a parity based array, put the money into a true hardware card and UPS at a bare minimum, as it's a matter of when you'll get burnt if you don't, not if.

Bit of a side note really, as you're not indicating you want to do this (RAID 5/6/50/60), but could help you to understand the differences between true RAID cards and software based implementations.

As it happens, a RAID 0 isn't that stressful, so it won't eat up that much of the system's compute cycles. But as you're wanting to run SSD's in a stripe set, you hit the problem of the ICH throttling, as it's only allowed ~660MB/s, and you're planning a system that can push ~1GB/s.
 
Definitely, as some will get a better idea from throughputs (MB/s), such as DAS (Direct Attached Storage = storage is available only to the system it's attached to), and others from IOPS (i.e. SAN systems running large scale databases for example).

In your case, it's a DAS situation from what you've posted so far (most MR members use this type of storage).


Actually, the PCIe lanes can be an issue as well under certain circumstances. For example, each drive capable of pushing 3.0Gb/s ports to the limit, enough disks to have one port essentially using a single lane on it's own (i.e. 8x disks on an 8x lane card), and either the slot or card using the PCIe 1.0 specification (PCIe 1.0 = 250MB/s per lane).

BTW, 6.0Gb/s RAID cards are designed with Gen 2.0 PCIe spec, but would experience the same issue with fast disks (250+ MB/s) if it's in a PCIe Gen 1.0 slot. Another way to throttle, is if the lane count of the slot is less than that of the card (i.e. running an 8x slot in a 4x active slot that it fits; 8x or 16x lane physical connector, but not wired for that many lanes electrically).

It all depends on the specifics, so as the old saying goes, "The Devil's in the Details". ;)

I was referring to the Mac Pro 2009 specifically - so, even with the Gen 2.0 4X slots, those should get 500MB/s X 4, which should be more than fast enough.

First off, a proper RAID card has either a SAS or SATA controller, dedicated processor (removes the RAID processing from the CPU), and a cache. It's also designed to handle its own recovery (essentially, it's a dedicated computer in its own right, but aimed at a specific use, which is to handle the disks in specific configurations for increased redundancy and/or throughputs, depending on the level implemented). When dealing with parity based arrays, you have something called the write hole, and proper cards include a hardware solution (NVRAM). They need batteries and/or a UPS to help with it, as it also requires power (ideally, you run both, but some card makers don't actually offer batteries, as the UPS, particularly an Online type, is expected to be used, and even the card battery can't help you if the data is larger than the cache).

What you're looking at, are Fake RAID controllers, which are nothing more than a SATA controller chip. The computer uses drivers to handle the RAID functions, which means system resources are consumed to do this, reducing the available clock cycles for other functions.

Sorry - I should have specified I was referring to RAID0 for this performance tests. I understand all the raid modes, and I would generally only use RAID10 or RAID1 in most cases. Even for my media data, I know I can squeeze more out with RAID5 on the 1TB disks, but not sure it's worth the hassle or risk. That's another story. What I was primarily more interested in was the speed of a RAID card in a RAID0 configuration and how they related to those tests.

It's like comparing a sports car to a bicycle. They're that different, especially as you move to other RAID levels (i.e. some Fake RAID controllers include the ability to run RAID 5, but aren't suited, as they don't possess an NVRAM solution to the write hole). No cache to hold data in the event of a power failure, and they don't have the recovery capabilities that true hardware cards do either.


Simply put, if you run a parity based array, put the money into a true hardware card and UPS at a bare minimum, as it's a matter of when you'll get burnt if you don't, not if.

Bit of a side note really, as you're not indicating you want to do this (RAID 5/6/50/60), but could help you to understand the differences between true RAID cards and software based implementations.

Right - my bad, I got a little off topic, so, ignore RAID5 or doing anything with the hard drives, specifically just looking at RAID0 performance for 4 SSD drives. It's interesting as these are the first that can really push the SATA limit and it comes down to the controller.

As it happens, a RAID 0 isn't that stressful, so it won't eat up that much of the system's compute cycles. But as you're wanting to run SSD's in a stripe set, you hit the problem of the ICH throttling, as it's only allowed ~660MB/s, and you're planning a system that can push ~1GB/s.

Exactly - so, this comes back to my original thought: ICH throttling. The Mac Pro board is not going to go over the ~660MB/s. Clearly see that. The two tests they performed with those other cards did go over this. And the tests where they used two cards and 4 drives were even better, although it's unclear how they set up the RAID0 configuration. Either way, assume you have very high performing SATA SSD drives that can hit the SATA 3Gb/s limit and you wish to put them in a RAID0 configuration using a card in a PCIE X4 or X16 slot; the ICH throttling is the bottleneck and something I find difficult to know by reading the stats of these cards. Like I said, I knew nothing about those cards, obviously they are not as high performance as some, but there's no real metric that the makers are using to distinguish simple RAID0 speeds. Obviously, a card that can do many different RAID configurations and do them well, makes it very difficult to compare to a card that does only a few. Not really apples to apples...

Having said all that - I still haven't convinced myself that I need 4 SSDs as opposed to 2 or even 1 yet. I do a lot of development and I have a lot of software running where I'll run them all in either virtual or some other mode on my system, so it gets quite taxed. But as I investigated, it led me to this thread, where I can't seem to find anywhere else that even discusses this. For example - what cards would support ~1GB/s in a RAID0 configuration with 4 drives? It would be interesting to see what cards can do this...
 
Having said all that - I still haven't convinced myself that I need 4 SSDs as opposed to 2 or even 1 yet. I do a lot of development and I have a lot of software running where I'll run them all in either virtual or some other mode on my system, so it gets quite taxed. But as I investigated, it led me to this thread, where I can't seem to find anywhere else that even discusses this. For example - what cards would support ~1GB/s in a RAID0 configuration with 4 drives? It would be interesting to see what cards can do this...
You need to figure this out before doing anything, as you could mistakenly spend more money than you need to. An exact description usually helps if you can't quite do this yourself (i.e. software usage, file sizes involved,...).

As per a single metric for RAID, it doesn't really exist. You've got throughputs and IOPS. Distinguishing which is the more applicable between the two requires specific information (usage) and the knowledge of how to interpret the information.

As per what they did with the cards in the most recent link, they made sure each drive had its own PCIe lane. Setup of the array was done under Disk Utility (you just see available disks, and go from there). It's not that hard actually.
 
You need to figure this out before doing anything, as you could mistakenly spend more money than you need to. An exact description usually helps if you can't quite do this yourself (i.e. software usage, file sizes involved,...).

As per a single metric for RAID, it doesn't really exist. You've got throughputs and IOPS. Distinguishing which is the more applicable between the two requires specific information (usage) and the knowledge of how to interpret the information.

As per what they did with the cards in the most recent link, they made sure each drive had its own PCIe lane. Setup of the array was done under Disk Utility (you just see available disks, and go from there). It's not that hard actually.

Understood. I can see where even hard drives have very different numbers. I really want to increase the speed of my system here - just curious what's worth it. Like I said before, if I need X amount of space and I can spend 20% more to get 4 disks with a controller card that's not much more and get 4 times the speed, then it may be worth it. But if the gain is only 10%-20%, then what's the point? The difficulty is knowing what controller card could even do this, and if that price climbs, then I'd probably wait... stick to 2 for now.

Part of this is also theoretical. I'd like to see how many IOPS I could get just to do some tests. We built a custom clustered DB that requires over 170K IOPS - it's in a massive cluster of nodes with an ungodly number of large disks. I deal with the software end, so part of this is to understand how SSDs could drastically improve the IOPS. Of course, there's a ton of other issues after that I won't go into, but if one of these can get 20K-30K IOPS, that's an amazing feat for systems that need real-time highly random IO access to data.

Anyway - thanks for the hardware help...
 
Understood. I can see where even hard drives have very different numbers. I really want to increase the speed of my system here - just curious what's worth it. Like I said before, if I need X amount of space and I can spend 20% more to get 4 disks with a controller card that's not much more and get 4 times the speed, then it may be worth it. But if the gain is only 10%-20%, then what's the point? The difficulty is knowing what controller card could even do this, and if that price climbs, then I'd probably wait... stick to 2 for now.
You'd probably be fine using a single SSD for an OS/applications disk, and mechanical for anything else.

The ICH controller could handle 1x SSD + 4x HDD's without throttling (250MB/s + 400MB/s = 650MB/s, which is at the limit, but not over). Use an external solution for a primary backup.
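
A quick way to sanity-check a drive mix against that ceiling (the ~660MB/s ICH figure and the per-drive numbers are the rough values used in this thread, not measurements of any specific drive):

Code:
# Check a proposed drive mix against the ~660MB/s ICH ceiling discussed above.
ICH_LIMIT_MB_S = 660  # approximate figure used in this thread

drives_mb_s = [250] + [100] * 4   # assumed: 1x SSD (~250MB/s) + 4x HDD (~100MB/s each)
total = sum(drives_mb_s)

status = "fits" if total <= ICH_LIMIT_MB_S else "will throttle at the ICH"
print(f"Aggregate: {total} MB/s -> {status}")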

Part of this is also theoretical. I'd like to see how many IOPS I could get just to do some tests. We built a custom clustered DB that requires over 170K IOPS - it's in a massive cluster of nodes with an ungodly number of large disks. I deal with the software end, so part of this is to understand how SSDs could drastically improve the IOPS. Of course, there's a ton of other issues after that I won't go into, but if one of these can get 20K-30K IOPS, that's an amazing feat for systems that need real-time highly random IO access to data.
Software.... Eww... :eek: Compared to hardware anyway. :p

It sounds like your coworkers on the hardware end could give you some solid help if you asked nicely (perhaps buying them lunch while you ask away would do the trick). ;) :D
 
The hardware portion is outsourced. :( (And, one of the reasons I'm never happy with it... they just don't have the involvement we have in it.) It's a business decision...

Anyway, the 20K IOPS came from the SandForce stats on some of those tests in the link... I believe it was the 4KB random test. Which may not seem like a lot, but I used to configure many Oracle datafiles with 4KB block sizes as access was always random and you didn't want to thrash the buffer pool. Of course, we had dozens of disks serving this as well...
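
To put those IOPS figures into throughput terms, it's just multiplication - a quick sketch treating the numbers above as rough examples:

Code:
# Convert random IOPS at a given block size into MB/s.
def iops_to_mb_s(iops: float, block_kb: float) -> float:
    return iops * block_kb / 1024.0

print(iops_to_mb_s(20_000, 4))    # ~78 MB/s of 4KB random I/O from a single SSD
print(iops_to_mb_s(170_000, 4))   # ~664 MB/s for the clustered DB workload I mentioned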
 
Anyway, the 20K IOPS came from the SandForce stats on some of those tests in the link... I believe it was the 4KB random test. Which may not seem like a lot, but I used to configure many Oracle datafiles with 4KB block sizes as access was always random and you didn't want to thrash the buffer pool. Of course, we had dozens of disks serving this as well...
ahh 4KB blocks. nice, not high but certainly pretty intense for mechanical disks! i imagine they wouldnt last too long haha! thanks for that info :)

utilising SSDs isnt really a likely scenario for you is it? given write amplifications and whatnot..
 
The hardware portion is outsourced. :( (And, one of the reasons I'm never happy with it... they just don't have the involvement we have in it.) It's a business decision...
As in a Host Company?
Or a consultant created the design, and the equipment is owned and operated by your company (physically on-site)?

I presume you mean it's hosted (dedicated equipment that's leased from an off-site company), but I'm just looking for clarification, as I've seen both.

Anyway, the 20K IOPS came from the SandForce stats on some of those tests in the link... I believe it was the 4KB random test. Which may not seem like a lot, but I used to configure many Oracle datafiles with 4KB block sizes as access was always random and you didn't want to thrash the buffer pool. Of course, we had dozens of disks serving this as well...
For a database, 4k makes sense. Some of the newer SSD controllers have gone to 4k operation internally, such as the SandForce.

BTW, are you doing this type of work with your own system you've been inquiring about?

utilising SSDs isnt really a likely scenario for you is it? given write amplifications and whatnot..
I wouldn't think so, unless SLC based models are used. Unfortunately, they're still really pricey and may not be an option for this reason.
 