Inexpensive PCIe SATA Cards? for 2010 Mac Pro

Discussion in 'Mac Pro' started by fensterbme, Sep 25, 2010.

  1. fensterbme macrumors member

    Jul 27, 2010
    Columbus, OH

    I just ordered a new 2010 Mac Pro 3.33GHz hex-core machine. I'd like to be able to put two SSDs in the second optical bay (using the OWC Multi-Mount), but I'm short a SATA port (unless I remove the optical drive from the bay above). Are there any reasonably priced PCIe internal SATA cards? I've seen the higher-end Sonnet Tempo E4i 4-port internal SATA card, but it's a bit of overkill and doesn't support functioning as a boot drive.

    I also don't really care about hardware RAID, as I'm planning on using SoftRAID to create the RAID groups, both for the pair of SSDs that will function as my app/OS and scratch volume and for the four Western Digital Caviar Black 2TB drives sitting in the normal hard drive bays.

    My thought was I'd just use a single port off an add-in card to run the DVD burner in the optical bay... and let the two SSDs use the SATA interfaces on the motherboard. But I can't find any simple, cheap, small SATA add-in cards; most of the cards are eSATA and send the ports outside.

    Any ideas?

    My other way of fixing this little problem would be to move the optical drive outside via an external 5.25" drive bay (something like the OWC Mercury Pro case), which I'd really rather not do, as I don't want any more cables/wires coming out of my computer and taking up space than I have to...

    I'm a pretty experienced IT guy by day, but I'm pretty darn new to Apple and building them up, so there is a bit of a learning curve going on here.
  2. Honumaui macrumors 6502a

    Apr 18, 2008
    not sure about internal ones?

    SSD in optical bay

    the OWC one I am sure is a bit nicer :) but the above is an option

    the other thing I was going to do was
    and then jump the power off somehow ? which would not be a big deal and then just route a USB cable back into the computer ?

    not saying to do this :) just an idea

    one thought, depending on how you have those 4 WD 2TB drives set up ? raid 0 ? you're going to tap out your ICH bandwidth I bet ?
    the idea of scratch on your boot ? at least for PS is a bad idea IMHO ? cause you can't control how large it gets; it could bloat up and fill up the disk ?
    so if your raid 0 cache is hitting 400 and your raid 0 is hitting 400 ? and your limit is 660 or whatever it is ?

    so if you are writing a file you are going to be limited a bit with the cache and write speed ? not sure which one will be hit the hardest or if in the real world it will matter much ? but just good to know

    just some info :) and not saying dont do what you want :) just sharing my thoughts :)
    boot on raid 0 ? OK some say it's faster but when you really bench your applications and such in the real world it's not really any faster ? even anandtech did an article some time ago on regular HDDs disproving this raid 0 boot thing ?

    you might gain a slightly quicker launch ? but once your programs are loaded they run quick, and the difference of raid 0 to a single is most likely centiseconds ?

    the downside at least for LR ? is in raid 0 the boot with cache was slower than a single dedicated SSD for cache ? and with two SSDs for photos in LR and PS, my testing showed a single boot and a single dedicated scratch/cache was quicker overall

    again test yourself :) always test out your machines and make good notes sometimes what you think is quicker or perceive to be quicker is not always the case :)

    hope this helps ;)
  3. nanofrog macrumors G4

    May 6, 2008
    Most are eSATA, not SATA, and bootability complicates it substantially.

    Which means that there's exactly one bootable eSATA version (Highpoint, which I wouldn't touch) before you have to move to a non-RAID Host Bus Adapter.

    For an SSD, you'd also want to get a card that's 6.0Gb/s (it will work with future SSDs that can exceed what the 3.0Gb/s ports can handle).

    There's one candidate I can think of (call or email to confirm whether or not it will boot, but ATTO's products typically do have that capability): the ATTO H608 (bootable, handles 4x drives, but it's $400USD).

    Past that, you're looking at a RAID card, and the cheapest 6.0Gb/s model is $543USD (Areca ARC-1880i). 8x ports, and it will boot OS X once flashed with the EFI firmware. Personally, I'd go for this if it were me, due to the additional ports and configuration options (RAID).

    You don't even need to use SoftRAID. Just set it up under Disk Utility, which is capable of implementing 0/1/10 arrays (also software based).
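    If you'd rather script it, Disk Utility's command-line front end can build the same stripe set. A minimal sketch, shown with `echo` as a dry run since the real command erases both member disks; the set name and `disk2`/`disk3` are placeholders (check `diskutil list` for your actual identifiers):

    ```shell
    # Dry run: remove the leading `echo` to actually build the RAID 0 set.
    # WARNING: the real command erases both member disks.
    echo diskutil appleRAID create stripe SSDStripe JHFS+ disk2 disk3
    ```

    `mirror` or `concat` in place of `stripe` gives you the other software RAID levels Disk Utility supports.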

    If you do go for a RAID card, then use the card to implement the array.

    This is not only possible, it's going to be the cheapest solution. Even cheaper than the Highpoint I mentioned but didn't link previously (the eSATA for Mac Edition is $230USD last I saw). And if you used it, you'd have to run cables through an open PCI bracket (not pretty, but it would work).

    If you plan to run more than one OS, use a USB connection to be sure it will work, given Windows has dropped FW support. But it's not really a problem, as optical media isn't that fast anyway (USB 2.0 is sufficient for the throughput).

    Another note to keep in mind: the built-in SATA ports on the MP have a throughput limit of ~660MB/s, which means you can throttle the controller (ICH) with as few as 3x SSDs (depends on the exact models used). It also doesn't scale well as you add SSDs.
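    To put rough numbers on that ceiling, a back-of-envelope sketch (the ~275MB/s per-SSD figure is an assumed SATA II sequential speed, not a spec for any particular drive):

    ```shell
    # How many SSDs saturate the MP's ~660MB/s ICH ceiling?
    ICH_LIMIT=660   # MB/s, approximate shared limit of the built-in ports
    PER_SSD=275     # MB/s, assumed per-drive sequential rate (check your model)
    for n in 1 2 3 4; do
      total=$((n * PER_SSD))
      if [ "$total" -gt "$ICH_LIMIT" ]; then
        echo "$n SSD(s): ${total}MB/s requested -> throttled to ~${ICH_LIMIT}MB/s"
      else
        echo "$n SSD(s): ${total}MB/s -> within the ICH limit"
      fi
    done
    ```

    With those assumed figures, two SSDs fit under the ceiling and a third pushes past it, which is where the "as few as 3x" comes from.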
  4. fensterbme thread starter macrumors member

    Jul 27, 2010
    Columbus, OH
    Good things to think about...

    I realize that if all the drives were going full bore, I'm going to saturate things... I think in practice that's unlikely for me (not saying it couldn't happen, just that if it *did* happen, it would only be occasional).

    My thought on striping the SSDs was that two OWC 50GB drives didn't cost much more than a single 100GB drive. I personally thought that 50GB might be a bit tight, but I didn't want to pay a ton of money for two larger drives; and I knew I didn't really need the performance striping offers, so I wasn't going to shell out the cash for them.

    I thought about just keeping the two SSDs separate, with one for OS/apps and the other for scratch/temp space. But I thought striping them would offer me more flexibility with regard to space: I'd end up with 100GB that I could leave all together or, if I wanted, carve up logically into one partition for OS/apps and another for temp/scratch.

    I'd love to know how striping SSDs ends up being slower than two separate drives...

    Additionally, in my situation I think I'd prefer to have a single larger RAID stripe to carve up as I want, as opposed to being limited to two 50GB containers.

    Regardless I plan on doing a lot of playing around with the configuration and doing some testing. I have a few weeks until I need to actually get to business with the machine, and I want to ensure that it both performs well and even more importantly is rock solid.

    That sounds like a great option if I also wanted to do hardware RAID for the storage but I'm not sure it buys me much in the price/performance department.

    So if I put the optical drive in an external enclosure and connected it via eSATA, I'm still going to have the boot issue, aren't I? Thinking if I were to go that route, I'd have to add in a single SSD and use a software 'cloner' package to migrate the OS onto the SSD before pulling the optical drive out, etc.

    I plan on running Windows 7 within VMware Fusion, and I don't see myself using FireWire on the Mac Pro for anything other than my CF card reader. If it's going outside and it's a hard drive or SSD, it will be via eSATA.

    I'm thinking it might just be a whole lot easier to get one larger SSD and mount it in the second optical bay. It would keep me from having to move the optical drive to an external enclosure or do some funky cabling and setup. Granted, I'd lose some performance, but I'm not sure how much that would actually matter in practice.

    I don't see myself ever buying a lot of SSDs to store my data on until the price drops a TON...

    Thanks for the responses, this is really helpful... It's amazing how much more annoyingly complex it is to crank up performance on a Mac compared to a Windows system. It seems Apple isn't at all in the business of making it easy for the small group of people who want to get more performance or storage out of their Mac Pro than Apple ships them with...
  5. Honumaui macrumors 6502a

    Apr 18, 2008
    on this part
    I'd love to know how striping SSD's ends up being slower than two separate drives...

    it's again just for LR when I tested ?
    so a program like FCP ? don't know, it might help :) I have a limited scope of what I need to move fast :)
    for games I play with PS3 and Wii with the kids ;)
    for other work my workstations are fast enough, but because of my company I work in PS and LR, and the rest of the stuff like quicken and FTP etc.. won't matter :)

    the raid 0 with the OS on it as boot and used as cache was slower than a dedicated boot and a dedicated cache ?
    no idea why, and I could not figure it out ?

    and it is quicker than no SSD :) and the time was not earth-shatteringly slower ? forgot now but enough to think why do it this way ?
    and the times I mean we are talking tenths of a second or less ? for most that won't matter ? :)

    and again my main concern is LR and PS ? everything else takes a back seat ;)
  6. nanofrog macrumors G4

    May 6, 2008
    You're the one that has to determine this, but you're aware of the potential at least. Others have stumbled across this realization after they spent a fair bit of cash, and it may have been a problem.

    A striped set can be cheaper in terms of capacity when compared to a single large SSD. But you have to check the pricing to be sure if this is the case, as there's another "cost", which is it consumes additional ports.

    I don't recommend partitioning SSDs, given the write amplification issue. You want to have as much unused capacity available for wear leveling as possible.

    If you want to use a separate SSD for OS/applications and scratch, that's fine. Just be aware that the scratch disk will die faster, and need to be replaced much more frequently, as it's being used under a high write condition (the one area where SSDs are weaker than any mechanical disk currently available).

    Since the models you're interested in are MLC based, I'd figure 1 - 1.5 years as a replacement cycle for the scratch disk (covered before, if you search; will offer you more detail than is in this post).

    With the introduction of the 40GB model from OWC ($110USD last I checked), this isn't that big a deal IMO, if you're earning a living with the system. The increased workflow should allow you to earn additional profits above the annual SSD replacement cost for the scratch disk.

    Stripe size and the controller both have an influence here, so the details would be needed.

    As per the stripe size, you'd need to experiment with different sizes, and test the throughput for each (includes real world usage, not just AJA or any Windows benchmark software).
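    One crude way to compare stripe sizes is a big sequential write after each rebuild; a minimal sketch (the path and size are placeholders — point `TESTFILE` at the volume under test, and remember `dd` is no substitute for timing your real workflow):

    ```shell
    # Write a 64MB file of zeros; dd reports the transfer rate when it finishes.
    TESTFILE=./stripe_test.bin
    dd if=/dev/zero of="$TESTFILE" bs=1048576 count=64
    rm -f "$TESTFILE"   # clean up the test file
    ```

    Repeat after rebuilding the set with each candidate stripe size, and keep the numbers alongside notes from actual LR/PS sessions.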

    A stripe set is a form of RAID (it and JBOD are the bastard kids of RAID :eek:).

    But if you notice, the first link is for a non-RAID HBA (it can't do RAID on its own, but it can be used with Disk Utility = why it's cheaper, as there's less to it), while the second is a RAID card.

    As per price/performance, you'd actually do well, as it opens up options, and will increase your speed (allows you to go past the ICH limitations). In terms of raw cost, it's expensive compared to a standard SATA/eSATA card.

    This diminishes, BTW, when you need a bootable SATA/eSATA card. The Highpoint model linked is the only bootable eSATA card out there (no SATA versions I've seen; they're all driver support only for OS X). $230USD is a tad expensive for a 2-port card, no matter how you slice it IMO. :eek: :p

    And let me re-state, I'm not a fan of Highpoint, and am hesitant for you to even try it. Seriously. Their support sucks, and if you've a problem, they're liable to tell you to go fly a kite (recent thread on one of their RAID cards where that's what seems to have happened). :(

    It will depend on whether or not the card has EFI firmware. For most, this will be a problem. The Highpoint card is the only eSATA model that will boot (3.0Gb/s BTW; the 6.0Gb/s version = driver support ONLY).

    That's another reason why the 5.25" USB enclosure is a good solution to your problem. It's also the cheapest.

    A stripe set created under OS X doesn't play well with multiple OS installations, so you'd most likely need to keep Windows on a separate disk (not sure how well it works/doesn't work with a VM in this instance, but it definitely won't work with Boot Camp).

    This would be easier, but may not be the most cost effective. It will depend on the exact disks you're considering + the cost of the USB enclosure for the optical drive if you build a striped set of smaller SSDs.

    Most users have the same exact sentiment. :D :p

    Things like the ICH are applied to any Intel based system that doesn't have any additional disk controllers to help distribute the load.

    But MP's have a tendency to require special adapters and have cabling issues you don't have to deal with for PC's. Try looking at the threads involving RAID cards and internal HDD bays to get an idea how ugly it can get.... :eek:
  7. fensterbme thread starter macrumors member

    Jul 27, 2010
    Columbus, OH
    You're correct, there is more to price than dollars...

    When planning out the boot/app and scratch on the same SSD stripe set, I was thinking only in terms of flexibility and netting a larger usable capacity. But you bring up an interesting point... I should indeed expect the swap/temp to really wear down the SSD. While I bought the OWC SSDs, which are designed for RAID and set aside a larger amount for over-provisioning, using them together in a stripe set is going to punish both SSDs.

    Perhaps using two SSDs, but putting one outside on an eSATA connection and using that for the scratch/temp space. Then when that drive fails it doesn't blow up my OS, and since it's just temp/scratch, I don't care that I can't boot off of that disk. It looks like I can get a simple 2.5" external enclosure for less than $25. Then when it *does* fail, I can just get the drive replaced under warranty.

    Yeah I'm not a highpoint fan at all... They don't just have crappy support for Mac, they have crappy support all around.

    My thought on using the built-in SATA and not an internal RAID card was that, with how I will be using and editing things, I don't think the IOPS would buy me anything on a practical level. I'd rather save my pennies and spend them on other aspects of my overall system (things like a 30" NEC).

    I plan on adding a four-drive external enclosure via eSATA in the spring, as by that time I'll need the additional storage. And while slow, I'm planning on setting up a FreeNAS server and trying to do iSCSI to the Mac Pro, moving all my iTunes and other non-photo data onto slower storage where speed doesn't really matter.

    When SSD prices drop more I could see myself getting a few of them and using that for a smaller array that is just for active working image data, but I'd need at least 1TB of capacity to give me enough space. I'd then use all the traditional hard drives for archive and longer term storage as it would be much less frequently accessed.

    Yeah, VMware won't care, it's just files and has nothing to do with boot... I just wish Fusion had some more features from the ESX product I'm more used to using.

    Yeah, my beef is that I think Apple did a good bit of "value engineering" on the Mac Pro... When one compares the mobo in a Mac Pro to any other mainstream Xeon workstation or server board, it becomes very apparent how much was cut out (and it's not like the Intel chipset sitting in the Mac Pro doesn't support all that extra stuff; Apple just cuts it off).

    I was pretty tempted to build a hackintosh, but I can't run the risk of being up a creek a year or two down the road. This machine needs to last me a while, and I can't deal with taking it offline for days at a time to 'tinker'.

    My day job for the last couple of decades has been designing IT server/storage systems for large companies, but now things are changing a bit, and when I'm not doing IT I don't want to do technology stuff any more than necessary. So I'm vastly simplifying my home technology situation to just a Mac Pro, a MacBook Pro, and a single storage server running FreeNAS. Additionally, since the photography thing is a pretty active business, my wife and I need to be able to use it; any time spent dorking with the technology is time I'm not editing, etc.

    Again thanks a lot for the information and things to think about...
  8. nanofrog macrumors G4

    May 6, 2008
    In this instance (disk/set = shared by the OS and scratch), it will wear the disk faster due to the high write conditions of the scratch data. Not a good situation to find yourself in, unless you're willing to implement a short MTBR cycle.

    If you use them for scratch + OS/applications, yes. If you keep the scratch separate, you'd only have to use a short-cycle MTBR with the scratch location, as reads don't wear the cells like writes do (the OS/applications location will long outlast the scratch disk/s, assuming both are SSDs).

    This makes the most sense (keep the scratch on its own disk, whether internal or external).

    And as you don't need to boot from it, the NewerTech cards from OWC would work (6.0Gb/s, non-bootable for OS X).

    I actually meant in general, not just with Apple products. I refuse to ever use Highpoint again, unless there's a proven change in how they deal with customers for support.

    The move to a single external disk should solve your problem.

    Then go with the external SSD and the RAID version of the NewerTech card (supports Port Multiplier enclosures, and is $80 from OWC). You don't need the card for RAID (that's software; use Disk Utility if you need to do that), but you will need the PM support for the 4-bay eSATA enclosure.

    Connect one port to the backup and the other to the external SSD (try not to run them simultaneously, or you'll be slowed down a bit, as the card is capable of 500MB/s max, being a 1x lane Gen 2.0 card).
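    The 500MB/s figure follows from the lane math (a sketch: PCIe 2.0 signals at 5GT/s per lane with 8b/10b encoding, so 20% of the raw bits are overhead):

    ```shell
    # PCIe 2.0 x1: 5 GT/s raw = 5000 Mb/s; 8b/10b leaves 4000 Mb/s = 500 MB/s.
    raw_mbit=5000
    usable_mbit=$((raw_mbit * 8 / 10))   # strip 8b/10b encoding overhead
    echo "$((usable_mbit / 8)) MB/s per lane"
    ```

    So the SSD and the PM enclosure share that one-lane budget, which is why running both at once can slow things down.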
