Apple RAID card & utility conundrum

Discussion in 'Mac Pro' started by Spencer01, Dec 16, 2009.

  1. Spencer01 macrumors newbie

    Joined:
    Dec 15, 2009
    #1
    Hi

    This is my first post, so go easy on me. I've searched through the threads without any joy on my query, so hopefully someone can help.

    I've recently purchased an Apple RAID card for my 8-core Mac Pro. The card is the most recent revision and does away with the iPass cable used on the older version. I installed the card in around 5 minutes without any problems, and once this was completed I checked that the card was recognised by the OS (SL 10.6.2), which again was no problem. I've also purchased 2 Samsung 2.5" SSDs, which were installed and working fine in the system. However, when I go to create a RAID set in RAID Utility, all the options to create a RAID set are GREYED OUT.

    I've tried creating the RAID set after booting from the Mac OS X disc, but I get the same problem (ALL FUNCTIONS OF THE RAID UTILITY GREYED OUT).
    My final attempt was creating the RAID through Terminal, which flagged up that NO drives were found.

    Does this mean the RAID card does not support the SSDs, or is there something I'm just not getting?

    Any help on the matter would be appreciated, as I'm getting very frustrated.

    I purchased 2 x 74GB Raptors today just to see if they work, so I'll let you all know a bit later.

    Many thanks

    MAC PRO 8 CORE*2.8GHZ*24GB*ATI 4870*HARDWARE RAID*SNOW LEOPARD 10.6.2
     
  2. Spencer01 thread starter macrumors newbie

    Joined:
    Dec 15, 2009
    #2
    Still greyed out

    Hi

    As mentioned in my original post, I tried using the two 74GB Raptors with my Apple RAID card, but this was also unsuccessful, as I'm getting the same problem. There's just no way of configuring/creating a RAID set, as all options in the RAID Utility are greyed out.

    Would anyone know whether this is a RAID card problem or an HDD/RAID card incompatibility?

    Help would be appreciated

    Thanks
     
  3. Dr.Pants macrumors 65816

    Joined:
    Jan 8, 2009
    #3
    The RAID card that gets rid of the iPass cable is for the 2009 Mac Pro only. On the 2009 Mac Pro, the SATA lanes are routed through the logic board itself - they pass through the Apple RAID card if one is inserted. The SATA lanes on the 2008 model were not designed for this, so the card has no way to talk to the discs.

    Simply put - return the RAID card and buy something other than Apple for RAID. IMO, it's worth it. Areca and ATTO have good offerings from what I've heard, and the Highpoint 43xx line is good as well (keep in mind that there is a hardware compatibility list for SAS-based RAID cards).
     
  4. nanofrog macrumors G4

    Joined:
    May 6, 2008
    #4
    It looks like you have the wrong card. The model that eliminates the iPass (SFF-8087 connector) is for the '09 MPs ONLY.

    Given you're using an '08 model, you'd need the earlier version that has the iPass connector. The model you have is meant to use traces on the logic board to connect to the drives, and those traces aren't on your board, so there are no drives attached to it. You'd have to attach an iPass cable, which can't be used with that exact card. :(

    As it happens, the Apple RAID cards are essentially junk. :eek: Seriously. They're slow, expensive, have problems with the batteries, and only work in OS X.

    You can get a 3rd party card that offers you more for your money, and if you only want a 4 port card, it's a much better value (faster and notably less expensive; half as much actually, before the battery, which is sold as an option for most cards). Check out Areca (pay attention to the specific models, as not all are capable of booting in an EFI firmware based system). That said, you might want to take a look at the ARC-1212, a 4 port SAS card (the most comparable to the Apple RAID card IMO, but there are other versions which may actually make more sense, especially in terms of port count for future expansion/increased throughput due to the additional parallelism of drives). The site I use has the best price, but note: their stock moves fast (I'm assuming you're in the US, as I don't know if they ship internationally or not).

    The battery is highly recommended, but a UPS is far more beneficial IMO. Ideally, you should run both. Just make sure the UPS has enough run time to complete the last task (runtime = load dependent), then allow it to shut down (assuming the settings are automatic, otherwise you'd need to do it manually).

    Just make sure you check the Hardware Compatibility List for compatible drives, as SAS cards are really picky about SATA drives (enterprise models will be necessary, not optional, as their error recovery timings are different from their consumer counterparts).

    Hope this helps. :)
     
  5. nanofrog macrumors G4

    Joined:
    May 6, 2008
    #5
    Yep.

    I gave a fair bit of info, given the lack of specific needs. ;) Areca offers the best value IMO for boot-compatible cards in MPs (in PC systems as well, given you usually get additional features, throughput, or both). The exact model comparison matters though, particularly on the PC side. ;)
     
  6. Dr.Pants macrumors 65816

    Joined:
    Jan 8, 2009
    #6
    You certainly did give a fair amount of info :eek: Makes my post look like an abridged version!

    However, for specific needs - I doubt that the OP will use them, but Seagate is now offering their SAS drives with a 6.0Gb/s SAS connector. I assume these could be used with a card meant for 3.0Gb/s SAS (similar to how SATA II drives can be used on SATA I interfaces). After doing the b-to-B conversion, 3.0Gb/s turns into 375MB/s (hope I'm right here) - considering that the 600GB 15K.7 drive tops out at 200MB/s, I assume there wouldn't be much difference between SAS 3.0Gb/s and SAS 6.0Gb/s. Especially using an Areca 1680-series card ;)
     
  7. nanofrog macrumors G4

    Joined:
    May 6, 2008
    #7
    SATA 3.0Gb/s = 375MB/s only if you divide the raw line rate by 8; the interface uses 8b/10b encoding (10 bits on the wire per byte of payload), so the theoretical payload ceiling is 300MB/s. In reality, they run ~270MB/s or so. Double those figures for SATA 6.0Gb/s real throughputs.
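
    To make the arithmetic explicit (a quick sketch; the ~10% protocol/framing overhead is an assumption chosen to match the ~270MB/s real-world figure, not a spec value):

    ```python
    # Rough sketch of the SATA/SAS line-rate math above. SATA and SAS use
    # 8b/10b encoding, so 10 bits on the wire carry one 8-bit payload byte.

    def throughput_mbs(line_rate_gbps: float) -> dict:
        raw = line_rate_gbps * 1e9 / 8 / 1e6       # naive bits/8: 375 for 3.0Gb/s
        encoded = line_rate_gbps * 1e9 / 10 / 1e6  # after 8b/10b: 300 for 3.0Gb/s
        realistic = encoded * 0.90                 # assumed ~10% protocol overhead
        return {"raw /8": raw, "after 8b/10b": encoded, "~real world": realistic}

    for rate in (1.5, 3.0, 6.0):
        print(f"{rate}Gb/s -> {throughput_mbs(rate)}")
    # 3.0Gb/s -> raw 375, encoded 300, ~270 real - matching the figures above.
    ```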

    No current mechanical drive can saturate 3.0Gb/s (and likely won't for a while, if ever), so at a 1:1 drive-to-port ratio, it's not going to matter for mechanical drives yet.

    But that's not the case with SSDs (we should see SSDs that can exceed 275MB/s in the next generation, I'd think), or with Port Multiplier chips designed with the 6.0Gb/s spec in mind (~500MB/s max real throughput). That could make PM enclosures quite attractive to video/graphics users for working data, not just backups.

    PCIe based Flash drives are already able to exceed the SATA 6.0Gb/s spec. Fusion-io has a model or two that come to mind (one is a custom model and won't be available for independent purchase; HP-branded, it seems)... ;)
     
  8. Dr.Pants macrumors 65816

    Joined:
    Jan 8, 2009
    #8
    All I needed to hear. Barefeats had a test in which the read/write speeds were around 200MB/s. Since I'm going to be working with large files (writing uncompressed HD and then editing with it), I thought that high sequential read/write would be better than high random read/write.

    Attractive indeed - though let's not run it through ICH10R, shall we :p;) Don't want to have to -throttle- anything :D

    However, UBE and actual drive size are SSDs' limitations for me at the moment.

    Up until the giant RAMDisc discussion involving Photoshop, I was thinking that PCI-E flash storage was meant mainly for servers with high random access on a small data set - but now I can see they would be useful on Photoshop workstations where a RAMDisc might be a PITA: they keep their contents through a shutdown or other act of God that takes power off the machine. That, and their capacity is higher than the amount of RAM some users have in their machines (mainly Mac Pros), so it seems like a win-win... if one has the money to bleed. However, many Windows based workstations can accept ludicrous amounts of RAM.
     
  9. nanofrog macrumors G4

    Joined:
    May 6, 2008
    #9
    Rob-Art tested a single drive. Nice throughput, but current SATA drives can't touch it, as the 15K.7 is ~2x faster than the fastest SATA models out right now. Even more so compared to the enterprise models, which is really the proper comparison IMO, as the UBE and MTBF are as close as possible.

    The faster figure came from a PM enclosure on a single port (a 4x stripe set on a card with cache, IIRC), and it's usually more like 250MB/s in such cases (the limit of the PM chip). But SATA can give a bit more.

    Given the nature of mechanical drives, sequential throughputs are always faster than their random access figures. SSD/Flash drives are the only tech that can beat them consistently at random access, but they aren't the least expensive, or necessarily the fastest, way to go overall. Mechanical SATA in RAID can exceed them in sequential rates in terms of both reads and writes. Random access still isn't up to the level of SSD/Flash, but RAID does speed it up. Then there's far more capacity of course, and reliability that doesn't rely on manipulated statistics (UBE generated off of a 90th percentile analysis). This aspect can be mitigated by leaving enough capacity unused, but at the current cost/GB, that's not necessarily an easy thing to do.
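
    As a back-of-the-envelope sketch of that sequential-rate point (the per-drive and controller numbers below are illustrative assumptions, not benchmarks):

    ```python
    # Ideal RAID 0 sequential scaling, capped by the card/bus ceiling.

    def stripe_sequential_mbs(n_drives: int, per_drive_mbs: float,
                              controller_cap_mbs: float) -> float:
        return min(n_drives * per_drive_mbs, controller_cap_mbs)

    PER_DRIVE = 110.0  # assumed sustained rate of a 2009-era consumer SATA disk
    CARD_CAP = 800.0   # assumed ceiling of the RAID card/slot

    for n in (2, 4, 8):
        print(f"{n} drives: ~{stripe_sequential_mbs(n, PER_DRIVE, CARD_CAP):.0f} MB/s")
    # A 4x stripe already rivals a single SSD's sequential reads; random
    # access is a different story, as noted above.
    ```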

    The ICH10R isn't the best way to go, and in the case of the 15K.7's, won't work anyway (SAS disk). ;) :p

    SAS or SATA, a separate card would be advisable, though not an absolute necessity with SATA models (i.e. the ICH10R could handle a stripe set of 3x 6.0Gb/s disks if they hit 200MB/s in reads, maybe 4x if less, ~150MB/s). Hard to say without actual SATA 6.0Gb/s models available yet AFAIK, let alone benched by 3rd party sources.

    They are for most, and depending on the user, cost may be the most significant obstacle.

    I want to see more data on UBE in RAID scenarios myself, and given the current Flash chips available, I'll wait. I don't care for manipulated statistics, especially at small capacities (too easy to burn up capacity and not have enough for the 10% of cells that will burn out much sooner). If the capacity is large enough, it's fine. But it could be a fine line, until the cost is low enough that the extra capacity isn't so hard to come by in terms of affording it.

    Flash drives are currently aimed at clusters and serious workstations. Unfortunately, the workstation market is even smaller ATM, due to the costs. But it's not impossible to find, for a specific use and assuming there's an adequate budget for it. ;)

    For more mainstream use (most servers and workstations), it's still too expensive (and untested in some IT personnel's minds I'd imagine).

    SATA based SSDs are aimed primarily at the enthusiast market, which ATM is paying most of the R&D costs. They're still working things out after all, and the tech isn't fully mature yet. The pace has been decent, but with the existing economy (and the price gouging seen recently), it may slow.
     
  10. Dr.Pants macrumors 65816

    Joined:
    Jan 8, 2009
    #10
    I know what UBE is, but not how it's calculated. Can you go slightly more in depth there? I'm not trying to say you're wrong or anything, but the underlined phrase piqued my curiosity.

    Dropped a wee bit of a hint when I used the term "1680ix-series" earlier. ;)

    Oops. I managed to erase cost/GB in my reply. It should've read "However, for the cost, UBE and actual drive size are SSDs' limitations for me at the moment." Sure, the G2 Intels are nice, but for a little more I can have 4x the capacity per drive. I'd need to buy a controller, of course, but then (supposedly) one can route data through the PCI-E lanes instead of the southbridge. In some cases, this may be preferred (ICH10R).

    Where there's a will, there's a way. Also, 80GB of actual RAM would be more expensive than one of those PCI-E flash drives, IIRC... It's all a cost/benefit analysis at the end of the day.
     
  11. nanofrog macrumors G4

    Joined:
    May 6, 2008
    #11
    NP. :)

    From the Flash manufacturers, the write specifications are:
    MLC = 1E4 writes
    SLC = 1E6 writes

    That's the worst case, and many of the chips will do better. But it's what the Flash manufacturer can prove across the entire lot.

    Now take wear leveling into consideration (other features too, if they're there), and it improves substantially. But there's a little wrinkle. If you lump the good chips (best performers that exceed the minimum spec) in with the worst-spec chips (cells, actually), the statistical analysis isn't that wonderful, especially against most people's understanding of, and expectations for, semiconductor reliability.

    If the numbers look worse, or only marginally better at best, compared to consumer grade mechanical drives, it's a "harder sell" to potential customers, especially given the current pricing IMO.

    So there are two options.
    A. Bin the parts prior to assembly. It takes time, generates waste, and adds to the cost of the final product. The statistics (UBE in this case) are real, and applicable to every cell in the parts.

    B. Toss out the outliers, which is typically 10% of the cells. Run the statistics on the best 90% (with wear leveling engaged). This increases the numerical values (substantially, compared to the minimum spec of the Flash chips used).

    SSD makers have chosen option B, and can mitigate the "worst case/least reliable cells" by adding additional (inaccessible) capacity (Intel does this; not sure about others).
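
    A toy simulation of the difference between the two options (the endurance distribution below is made up purely for illustration):

    ```python
    # Option A-style spec must cover every cell; option B drops the worst
    # ~10% of cells first, which inflates the quoted number considerably.
    import random

    random.seed(1)
    cells = [random.gauss(30_000, 8_000) for _ in range(100_000)]  # writes per cell
    cells = [max(c, 2_000) for c in cells]  # clamp: even weak outliers last a while

    cells.sort()
    spec_all = cells[0]                     # the worst cell sets the spec
    spec_best_90 = cells[len(cells) // 10]  # floor after tossing the worst 10%

    print(f"spec over all cells: ~{spec_all:,.0f} writes")
    print(f"spec over best 90%:  ~{spec_best_90:,.0f} writes")
    # The second figure looks far better on a datasheet, but the weak 10%
    # still ships in the part - hence the reserved spare capacity.
    ```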

    But which one?

    There are a few choices in that line... :p

    SATA or even SAS RAID is less expensive than SSD in terms of cost/GB, and performance can exceed SSD in terms of sequential throughputs. :)

    As for running the data across the PCIe lanes in the '09 models, that ONLY works with Apple's card. To use the Areca cards with the HDD bays, you MUST use the adapter sold by MaxUpgrades (adds $165USD to the price for the correct kit that allows use of all 4 drives).

    The '06 - '08 models are much easier to do, as you only need to move the iPass cable (internal MiniSAS = SFF-8087) from the logic board to the card. In some cases an extension is needed, which is also sold by MaxUpgrades (to the tune of $90USD). It's still cheaper though, and isn't always needed; it depends on the exact card. The Areca 1680ix models are a tight fit though, and it's likely needed.


    In general, perhaps. It always comes down to running the numbers for the specific gear, and weighing the benefits for me.

    But I was thinking specifically of the Fusion-io models aimed at clusters..., given the initial pricing I spotted on CDW. IIRC, they had ~$16K & $32K USD listed on two models when it still showed prices (now it's CALL). :eek: :eek: WAY over MSRP. :rolleyes: Talk about gouging... :rolleyes: :mad:
     
  12. Dr.Pants macrumors 65816

    Joined:
    Jan 8, 2009
    #12
    :eek: Well, that certainly opens my eyes about Intel's drives, at least. For the premium they pull, I wouldn't expect option B to be a consideration. Then again, I am posting on an Apple forum. :p

    Heh. The model I was looking at was the ARC-1680ix-12. I would be connecting four of those Seagates via the SFF-8088 connector on the card to an external enclosure (that I would probably have to build as well). Maybe I could also connect other hard drives internally to the card (which would be a good idea, IMO), but I seriously doubt my throughputs inside the machine (in the future) would push the ICH10R chipset at any rate.

    Still in a conundrum about how to build the array enclosure, though (it should be a matter of case + hard drive mounts + PSU + an SFF-8088 -> SAS fan-out cable + adapters).

    Erm, what I meant to say was that by using a RAID card in general one isn't shoving throughput through the southbridge because the RAID card (generally) is attached via PCI-Express. I apologize, I should've been clearer there.

    :eek: Maybe by calling you can get a reduced price? Or at least keep the wallet's heart beating for a little while longer...

    Earlier I was referring to the 80GB card, which I thought sold for around $1K, but it turns out I'm off by $2,300 (according to Froogle). Never mind that RAMDisc thing I was talking about; while the ability to keep data in the event of a power loss would be a worthwhile endeavor, I would rather spend the money on FB-DIMMs to warm the home during the winter months. :p:D
     
  13. nanofrog macrumors G4

    Joined:
    May 6, 2008
    #13
    It's not just Intel's drives though. This approach is taken by other SSD makers as well, as it's less expensive than binning the flash chips prior to manufacture.

    Quite a decent card. If you could do without the SAS drives, you might want to take a look at the ARC-1231ML. It's fast (the 800MHz IOP is fast enough), has the same features, and is less expensive.

    There's also the possibility of an 8 port SAS model (ARC-1222, which does NOT have support for SAS expanders).

    Just offering some possible options.

    How many drives are you looking to start with, what array level/s are desired, and what are your throughput requirements?

    As per internal drives in the MP, it's possible to use those + external drives in a single array to save on enclosure and cabling costs. That is, you can run 4x internally and 8x externally. How they're set up won't matter, but an 8 bay enclosure is less expensive than a 12 bay unit (a pedestal version of which is hard to find, if not impossible).

    Possible to do, but it's easier to buy than build for that few drives.

    What you need to DIY an enclosure:

    Computer case (needs a minimum of 6x 5.25" external bays for the backplanes to accommodate 8x drives with the backplanes linked; 3 bays hold 4x drives, so fewer are required for a 4 drive enclosure)
    Backplane (example; if you go with something else, pay close attention to the connectors)
    PSU (figure ~300W)
    Adapter (4 bay capable model, 8 bay capable model)
    Internal cables (1 per SFF-8087 port)
    Internal to External cables

    *Please note that a DIY build is larger than a ready-made system, as the compact cases used are hard to find by themselves (you'd likely need to order out of China, and there's almost certainly a minimum order requirement).

    OR

    Enhance E4-MS
    Enhance E8-MS (these are available in silver btw)
    (or similar)

    Ah. PCIe does allow you to get past the ICH10R throttling issue plaguing the '09 systems.

    ATTO's released 6.0Gb/s SATA port HBAs, but no one's released their 6.0Gb/s RAID cards yet.

    Worth a call, but I wouldn't bet on it. :eek: :p

    The entry level model has an MSRP of ~$900USD. But the street prices are insane right now.
     
  14. Dr.Pants macrumors 65816

    Joined:
    Jan 8, 2009
    #14
    Well, initially I was planning a stripe of a pair of discs; as the year went on, I would add another pair at a RAID-5 level. After another year I would hopefully total seven drives: RAID-5 with a hot spare (if possible). It's unfortunate that only HBAs are showing 6Gb/s instead of actual RAID cards - with the Seagates I was interested in, the throughputs are beginning to edge up to the standard. So the ARC-1222 might be a saving grace, since I don't think I'll ever need expander support.
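
    For the capacity side of that upgrade path, the usual rules are RAID 0 = n x c and RAID 5 = (n - 1) x c, with a hot spare adding nothing to capacity. A quick sketch (reading "seven drives" as a 6-drive RAID 5 plus one spare, and assuming the 600GB 15K.7s):

    ```python
    DRIVE_GB = 600  # assumed per-drive capacity

    def usable_gb(n_array_drives: int, level: str) -> int:
        if level == "RAID0":
            return n_array_drives * DRIVE_GB
        if level == "RAID5":
            return (n_array_drives - 1) * DRIVE_GB  # one drive's worth of parity
        raise ValueError(level)

    stages = [
        ("start: 2-drive stripe (RAID 0)", 2, "RAID0", 0),
        ("later: 4-drive RAID 5",          4, "RAID5", 0),
        ("final: 6-drive RAID 5 + spare",  6, "RAID5", 1),
    ]
    for label, n, level, spares in stages:
        print(f"{label}: {usable_gb(n, level)} GB usable, {n + spares} drives total")
    ```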

    My throughput requirement is a 165MB/s write speed (I think) for a live-recorded 1080 stream. With a single one of the 15K.7 drives I would be able to accomplish that task - unfortunately, I assume that speed slows down as the data gets written further inward. That, and I might start recording other streams of data, and might work with footage at very high resolutions - hence why I would start at two drives and keep expanding.
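
    One reading that lands almost exactly on 165MB/s - and it is only an assumption about the stream - is 10-bit 4:2:2 1080p at 30fps with v210-style packing (16 bytes per group of 6 pixels):

    ```python
    WIDTH, HEIGHT, FPS = 1920, 1080, 30
    BYTES_PER_6_PIXELS = 16  # v210: four 32-bit words hold 12 x 10-bit samples

    bytes_per_frame = (WIDTH // 6) * BYTES_PER_6_PIXELS * HEIGHT
    mb_per_second = bytes_per_frame * FPS / 1e6

    print(f"~{mb_per_second:.0f} MB/s")  # ~166 MB/s - right at the stated requirement
    ```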

    I would be archiving data to SATA drives as time went along, as well, to keep the array speedy, but I might be working with data/footage on the SAS discs for some time.

    While it certainly is a savings, I was also under the impression that there's a caveat against combining external and internal drives in RAID... for some reason I was never aware of.

    While it would indeed look like a good idea to buy instead of DIY, I already have the case I want to use for the DIY, for sheer aesthetics. Which was why I was wondering. Thanks for the list of parts; I was concerned about what specifically I was going to need. The PSU, however, concerns me slightly - most of the PSUs around the wattage I would be using are not the best out there, or at least that was my impression (dirty power). I assume there are good 300-400 Watt PSUs out there, but I don't know where to look.

    Well, if one assumes that 4GB of DDR3 (the standard nowadays) costs around $110 a DIMM, that's $2,200 for the same capacity as the entry level PCI-E SSD. That really is a gouge "on the streets".

    Well, if it delivers a product that works, I guess... So the next question is: how do SSD makers switch off the cells that are known to be bad?
     
  15. nanofrog macrumors G4

    Joined:
    May 6, 2008
    #15
    HBAs are easier to design and get out the door. They do become the basis for the RAID cards, but those take more work (not just design, but the testing involved), resulting in a later release date. If I had to guess, it'd be ~May 2010 before they'd show, assuming there aren't any major problems (they'll use the same basic design and parts as the 3.0Gb/s models wherever possible to reduce the workload), but the testing has to be complete. No shortcuts there.

    Fortunately, the existing models will run the Seagate 15K.7's just fine, as each of those ports is good for ~270MB/s or so. That's still greater than the drive's throughput, so it won't throttle. It's with future versions that throttling will eventually happen, especially once SSDs go more mainstream. They're already really close as is, so the idea of getting a 6.0Gb/s card does make sense.

    So you'll either have to be patient, or bite the bullet and pick up something now if the need is that pressing.

    Yes, as you fill the drive, throughputs do slow down. But if you keep usage at around the 50% mark, this can be mitigated. Otherwise, if you need to be able to fill the capacity, you have to plan from the worst case (the innermost tracks). Some benchmarks will give this to you (i.e. capacity in % with a throughput for that point, usually in 10% gradients).
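
    A crude model of that slowdown (a linear ramp between outer and inner rates; real drives step down in zones, and the endpoint figures here are assumptions, not datasheet values):

    ```python
    OUTER_MBS, INNER_MBS = 200.0, 120.0  # assumed outer/inner sustained rates

    def rate_at_fill(fill_fraction: float) -> float:
        # 0.0 = empty (outer tracks), 1.0 = full (innermost tracks)
        return OUTER_MBS + (INNER_MBS - OUTER_MBS) * fill_fraction

    for pct in range(0, 101, 25):
        print(f"{pct:3d}% full: ~{rate_at_fill(pct / 100):.0f} MB/s")
    # Staying under ~50% keeps the drive near 160MB/s; planning to fill it
    # means budgeting from the ~120MB/s worst case instead.
    ```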

    Given your speed requirements, you don't need that many drives whether it be SAS or SATA. Assuming there's a budget involved, you may be better served with a SATA card (so long as it's less expensive than it's SAS counterpart). You just have to pay close attention to the cable length when using SATA, as the spec is:
    1.0m = passive signals
    2.0m = active signals

    SATA's advantages are:
    1. Cost/GB
    2. Parallelism is less expensive, yet it can reach or exceed what can be achieved with SAS (save random access) on the same or even a lower budget, and provide additional capacity. That saves money in terms of having to add or swap drives to increase capacity as often.

    Something to think about, as I'm not convinced SAS is the best way for you to go, given your throughput and usage description (1080p = large sequential files).

    You could always use a SAS card and a couple of SAS drives for an OS/applications array (random access throughput can make a difference there, though overall it isn't needed that often if everything is on a single array), and use a SATA array for the working data files.

    You can always use a simple eSATA card + PM enclosure system for archival/backup purposes. It works, and is cost effective.

    There's NO performance hit, if that's what you're wondering. :D

    In terms of swapping a bad drive, external hot-swap enclosures are faster (MPs, and most workstations, do NOT have hot-plug capability; the card does give you hot-swap, but you need the hot-plug aspect as well).

    You also have to be extremely careful with cabling. Port adapters won't work well with SATA drives, due to the low signaling voltages (~400 - 600mV) and the contact resistance they add. SAS uses much stronger differential signaling, and is much more stable as a result, which is why its max cable length = 8.0m.

    Got it. I figured it was an attempt to save funds where possible.

    As I mentioned, the Enhance products (and the ProAvio ones as well, as it's the same enclosure) do come in silver. The smaller size is also nice, and potentially makes them easier to place. Particularly if used with SATA drives and 1.0m cables (it must sit next to the MP).

    As per the PSU, you'd be surprised. The ones used in enclosures are inexpensive units, and there's usually nothing special about them (some are redundant units, meaning a pair; not cheap, and the power quality is no better than a single unit's, and could be a little worse given the size limitations). I'd just look to a decent maker. Perhaps Seasonic would suffice for that small a unit (you can go up in wattage, but I wouldn't bother with more than 400W).

    Besides, you're only using the 12V rail/s to run drives. (BTW, if you use a computer PSU, you'd need to make sure you jump the green wire (PS_ON) to a black ground wire on the main board connector, or it won't power on.)
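
    For sizing that 12V budget, a rough sketch (the per-drive draw figures are typical assumptions, not measured values):

    ```python
    DRIVES = 8
    SPINUP_AMPS_12V = 2.5   # assumed worst-case spin-up draw per 3.5" drive
    RUNNING_AMPS_12V = 0.8  # assumed steady-state draw per drive
    HEADROOM = 1.25         # 25% margin

    spinup_watts = DRIVES * SPINUP_AMPS_12V * 12 * HEADROOM
    running_watts = DRIVES * RUNNING_AMPS_12V * 12 * HEADROOM

    print(f"spin-up: ~{spinup_watts:.0f} W on the 12V rail")  # ~300W for 8 drives
    print(f"running: ~{running_watts:.0f} W on the 12V rail")
    # Hence the ~300W figure in the parts list; staggered spin-up (if the
    # card/backplane supports it) lowers the worst case considerably.
    ```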

    Yep. Especially given the cost difference with Flash. Granted, there's the controller chip and other components as well. There's a good bit added to the production cost for R&D recovery, and finally profit (all direct and indirect costs get figured in; R&D is one such expense, and it's a big one in this case).

    They don't actually switch them off, but do the equivalent of a remap (the controller just skips them).
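
    A minimal sketch of that remapping idea (purely illustrative; real flash translation layers are far more involved):

    ```python
    class RemapTable:
        def __init__(self, spare_blocks: list[int]):
            self.spares = spare_blocks       # reserved, user-inaccessible capacity
            self.remap: dict[int, int] = {}  # worn block -> spare block

        def mark_bad(self, block: int) -> None:
            if block not in self.remap:
                self.remap[block] = self.spares.pop()  # retire it to a spare

        def resolve(self, block: int) -> int:
            # Every access goes through the table; bad blocks get skipped over.
            return self.remap.get(block, block)

    t = RemapTable(spare_blocks=[1000, 1001, 1002])
    t.mark_bad(42)
    print(t.resolve(42), t.resolve(7))  # -> 1002 7
    ```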
     
  16. Spencer01 thread starter macrumors newbie

    Joined:
    Dec 15, 2009
    #16
    Conundrum sorted

    This is very helpful information, guys. You've all saved me valuable time and money. I was beginning to think the RAID card was faulty.

    I shall take your advice about buying a 3rd party card and sell the Apple RAID card. If anyone's interested, it will be selling on eBay very shortly (I guess everyone hates Apple RAID cards).

    Many thanks for your help :)
     
  17. nanofrog macrumors G4

    Joined:
    May 6, 2008
    #17
    :cool: NP. Glad you got sorted. :)
     
