SATA 6.0 card in Mac Pro question?

Discussion in 'Mac Pro' started by vicentk, Nov 22, 2010.

  1. vicentk macrumors regular

    vicentk

    Joined:
    Feb 24, 2008
    Location:
    Hong Kong
    #1
    Dear All,
    I saw that some SATA 6.0Gb/s cards have been released on the market now.

    http://www.attostore.com/sas-sata-hbas/6gb-express-sas-h644.htm

    May I know, if I use that in my Mac Pro 3,1, can I break the 600MB/s limit? Also, this card doesn't have a RAID controller, which means I'd have to fall back to OS X software RAID. Will the software limit the bandwidth?
     
  2. Hellhammer Moderator

    Hellhammer

    Staff Member

    Joined:
    Dec 10, 2008
    Location:
    Finland
    #2
    The theoretical maximum of SATA 6Gb/s is 750MB/s, but due to 8b/10b encoding, it can only deliver up to 600MB/s. Protocol overhead and latency then bring it down further. SATA 3Gb/s can do up to ~285MB/s in practice, so I would guess that SATA 6Gb/s can do around 580MB/s. So no, you won't be able to break 600MB/s with a single SATA 6Gb/s port.
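
    A quick back-of-the-envelope check of that arithmetic, as a minimal Python sketch (the post-overhead estimates in the comments are just the rough figures above):

    ```python
    # Quick sanity check of the SATA line-rate figures above.

    def sata_payload_mb_s(line_rate_gb_s):
        """Raw SATA line rate -> usable payload rate after 8b/10b encoding.

        8b/10b puts 10 bits on the wire for every 8 bits of data,
        so only 8/10 of the raw rate carries payload.
        """
        raw_mb_s = line_rate_gb_s * 1000 / 8  # Gb/s -> MB/s (raw)
        return raw_mb_s * 8 / 10              # strip the 8b/10b overhead

    print(sata_payload_mb_s(3))  # 300.0 -> ~285MB/s after protocol overhead
    print(sata_payload_mb_s(6))  # 600.0 -> ~580MB/s after protocol overhead
    ```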

    If you put several drives into RAID 0 using separate SATA ports, then you should be able to break 600MB/s.
     
  3. vicentk thread starter macrumors regular

    vicentk

    Joined:
    Feb 24, 2008
    Location:
    Hong Kong
    #3
    May I double-check that the Mac Pro's internal bandwidth is only 600MB/s?

    My current setup already uses 4x HDDs in RAID 0 (around 480MB/s read), so if I upgrade to 2 SSDs + 2 HDDs, it will hit a bottleneck on the internal bandwidth. Am I right?
     
  4. Hellhammer Moderator

    Hellhammer

    Staff Member

    Joined:
    Dec 10, 2008
    Location:
    Finland
    #4
    The Mac Pro's internal SATA is 3Gb/s (300MB/s per port). Two good SSDs in RAID 0 can deliver ~550MB/s; make it three and you would get around 800MB/s, even using the internal SATA. Remember that not all PCIe cards are bootable.
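
    Rough scaling estimate, as a Python sketch (the ~275MB/s per-SSD figure is an assumption picked to match the numbers above):

    ```python
    # Naive RAID 0 read estimate on the internal 3Gb/s ports.
    # PER_SSD is an assumed real-world figure; each port tops out
    # around 285MB/s, so one SSD per port isn't port-limited here.

    PORT_LIMIT = 285  # MB/s usable per SATA 3Gb/s port (rough)
    PER_SSD = 275     # MB/s per good SSD (assumed)

    def raid0_read_estimate(n_drives):
        # One drive per port: throughput scales roughly linearly
        # until some shared limit (controller/DMI) kicks in.
        return n_drives * min(PER_SSD, PORT_LIMIT)

    for n in (2, 3):
        print(n, "SSDs ->", raid0_read_estimate(n), "MB/s")
    # 2 SSDs -> 550 MB/s; 3 SSDs -> 825 MB/s (before any controller cap)
    ```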
     
  5. Transporteur macrumors 68030

    Joined:
    Nov 30, 2008
    Location:
    UK
    #5
    The software RAID functions of OS X won't limit the bandwidth. If you use the 4 internal ports of that card with 4 SATA II SSDs, you should easily get up to 1GB/s read and write transfer speeds. I would wait for the release of 6Gb/s SSDs, however (Q1 2011), which are good for 500MB/s each!

    The internal bandwidth is somewhere around 660MB/s (actually a little less for the 2008 model), so yes, you will be limited by the controller.

    The maximum combined internal transfer speeds of the Mac Pro are about 660MB/s, which can be saturated with two SSD's and a single mechanical drive.
     
  6. Hellhammer Moderator

    Hellhammer

    Staff Member

    Joined:
    Dec 10, 2008
    Location:
    Finland
    #6
    Is it? Got a link to share? I just thought that if there are e.g. 4 SATA ports, they would all provide ~285MB/s but maybe the controller has a limit as well.
     
  7. Transporteur macrumors 68030

    Joined:
    Nov 30, 2008
    Location:
    UK
    #7
    It is. I've got no link, but it is well known that the ICH has a bandwidth limit of ~660MB/s (in the later 2008 models; the pre-2009 models actually have a little less).
     
  8. Honumaui macrumors 6502a

    Joined:
    Apr 18, 2008
    #8
    Double confirm for you :) I did some tests with SSDs, and the third one throttles sooner than the 660 figure suggests; you start to notice it around 550-580MB/s, it seems.

    One thing in the real world: if your two setups are not working at the same time, in theory they could get along fine. But in a situation where, say, one is data and one is scratch, most likely you will slow down a bit.

    If, say, that RAID 0 is a backup, then it would not matter, of course ;)
     
  9. nanofrog macrumors G4

    Joined:
    May 6, 2008
    #9
    It's due to the fact that the ICH uses a DMI link to connect to the chipset. If they didn't limit it to ~660MB/s, it could saturate that link and prevent Ethernet and USB data from ever reaching the system (those controllers are part of the ICH as well; it's not just SATA).
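
    The arithmetic behind that, as a small Python sketch (the per-port and DMI figures are the rough numbers from this thread):

    ```python
    # Why the ICH needs a SATA cap: potential SATA demand vs. the DMI link.

    DMI_ONE_WAY = 1000  # MB/s per direction on DMI (2GB/s aggregate)
    PER_PORT = 285      # MB/s usable per SATA 3Gb/s port (rough)
    SATA_PORTS = 6

    demand = SATA_PORTS * PER_PORT
    print(demand)                # 1710 MB/s of potential SATA traffic
    print(demand > DMI_ONE_WAY)  # True: uncapped SATA could bury the link,
                                 # starving the Ethernet/USB controllers
    # Hence a cap (~660MB/s) that leaves headroom for everything else.
    ```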
     
  10. vicentk, Nov 23, 2010
    Last edited: Nov 23, 2010

    vicentk thread starter macrumors regular

    vicentk

    Joined:
    Feb 24, 2008
    Location:
    Hong Kong
    #10
    Thank you all.
    But after reading the following link, I got a big shock:
    http://www.barefeats.com/nehal09.html

    So I will change my mind. My MP 3,1 has 6 SATA and 2 IDE ports; in my case, the 2 IDE ports connect the DVD drive and an HDD (for Windows), and the 2.5" HDD sits under the optical drive. I think I still have space to put 2 more 2.5" SSDs inside.
    The following is my new setup (I don't know if it will work):
    3 SSDs in bays 1, 5, 6; 3 HDDs in bays 2, 3, 4.
    The 3 SSDs will be in RAID 0 for OS X; the 3 HDDs in RAID 5 for data (2TB each), which must have a RAID card.
    So here the problem comes: as I remember, in the Mac Pro 3,1 there is a SAS cable connecting bays 1-4. If I add a RAID card, that SAS cable will connect to the RAID card, so I don't know if I can still use bay 1 for an SSD in a RAID with bays 5 and 6?
    For the RAID card, I think I will look for the original Apple one, because I don't need large bandwidth and I can find a second-hand one to cut my cost down.

    Thanks
     
  11. vicentk thread starter macrumors regular

    vicentk

    Joined:
    Feb 24, 2008
    Location:
    Hong Kong
    #11
    I found some cards in a second-hand shop, but the card numbers make me worry:
    1 : A1247 ( around USD200 )
    2 : MA849Z/B

    Are they the same?
     
  12. nanofrog macrumors G4

    Joined:
    May 6, 2008
    #12
    Stay away from the Apple RAID Pro card, as it's too expensive and slow for what it is.

    As for how that particular card works, it's only a 4-port card, and any drive you want to use with it must be in the HDD bays (which is what it connects to). The optical bays cannot be attached, as there aren't enough ports, nor any way to get the signal to the card anyway.

    So you'd be better off looking at Areca for a card, specifically the ARC-1880i, as it's 6.0Gb/s compliant, which you'll want for use with SSDs.

    Another note: using SSDs in RAID 5 is a really bad idea, given the write-cycle limitations of MLC-based flash memory (it wears out the disks quickly, as parity-based arrays write both the data and the parity data, which is what's used to rebuild in the event of a disk failure). Definitely not a good thing to do.

    Either use mechanical disks for RAID 5 (BTW, you must use enterprise-grade disks due to the different recovery timings in the disk firmware - consumer units are unstable, and will cost you time and possibly data), or use a different level: RAID 10 if you're after redundancy, or RAID 0 if you're just after speed (no parity data, so there's no additional wear on the drives).

    Also, RAID or not (level does not matter), make sure you have a proper backup solution, as there's no substitute. Period.
     
  13. VirtualRain macrumors 603

    VirtualRain

    Joined:
    Aug 1, 2008
    Location:
    Vancouver, BC
    #13
    Actually, I don't think anyone knows Intel's reasoning for this limitation with any certainty, but in my opinion, the ICH was simply never designed with SSDs in mind. 660MB/s is plenty for 6x mechanical drives, which were all that existed when the chip was designed (launched in 2008, but based on the ICH9, which debuted in 2007). BTW, isn't the DMI link 4x PCIe 2.0 lanes?... which is 2GB/s?
     
  14. nanofrog macrumors G4

    Joined:
    May 6, 2008
    #14
    It goes back to 2006, with the introduction of the ICH6 (the first to use a 4x lane PCIe v1.1 link to communicate with the Northbridge). At that time, SSDs weren't on Intel's mind for this product (remember, the ICH is meant to be a low-cost solution for SATA, Ethernet, USB, and audio interfaces), so they continued their planning strategy based on mechanical drives.

    As to DMI, read this: source.

    Hope this helps. ;)

    It even appears that SATA 3.0 (6.0Gb/s) ports will be limited (to 2) in the next ICH revision, due to a compromise between cost and bandwidth to the chipset, judging from tidbits of information that have floated around (some links posted by Hellhammer on MR, IIRC). The rest will be SATA 2.0 (3.0Gb/s).

    There's not much in the way of public information on this, but it wouldn't surprise me, due to cost constraints (the target is likely around where it is now: $14USD for the non-RAID <ICH10>, and $19USD for the RAID-compliant <ICH10R> version, according to Intel's published Q=1000 pricing).
     
  15. VirtualRain macrumors 603

    VirtualRain

    Joined:
    Aug 1, 2008
    Location:
    Vancouver, BC
    #15
    Just to close the loop on DMI... it does appear to be 4 lanes of PCIe 2.0...

    Page 47 of this datasheet indicates 4 tx/rx pairs.
     
  16. nanofrog macrumors G4

    Joined:
    May 6, 2008
    #16
    Seems to be a bit of confusion here... (not completely surprised, given the differences in how Intel's documentation has been presented).

    The PCIe v1.1 spec was what was used at the time of the ICH6. Yes, this has changed over time, which is why the DMI interface is currently capable of 2.0GB/s.

    Simple way to figure this out:
    2GB/s (aggregate bandwidth) = 1GB/s up + 1GB/s down

    It also consists of a 4x lane link. Do the maths and you get 500MB/s per lane, which is PCIe Gen 2.0 lane bandwidth (I figured you realized that the PCIe generation changed; it was 2.0GB/s for the ICH9 series as well). I didn't mean to get you confused here.... :eek:
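
    Restating those maths as a tiny Python sketch (nothing new here, just the figures above):

    ```python
    # DMI bandwidth figures from above, restated.

    AGGREGATE = 2000            # MB/s total across the DMI link
    UP = DOWN = AGGREGATE // 2  # fixed 50/50 split: 1000 MB/s each way
    LANES = 4

    print(UP, DOWN)            # 1000 1000
    print(AGGREGATE // LANES)  # 500 MB/s per lane (up + down combined)
    ```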

    The point is that PCIe lanes are isochronous (strictly in hardware; think of the SATA controller hogging all the bandwidth until it's done). So they had to use fixed bandwidth (which is what fixed arbitration means in computer terms) to allow both concurrency and the QoS implementation (Quality of Service = protocol). As a result, no controller could saturate the entire bandwidth (up or down; each direction is independent, so there's no ability to dynamically shift to 2GB/s one way and 0 the other, or anything in between; DMI up/down bandwidth is fixed at a 50/50 split).

    This basic methodology hasn't changed since 2006; just some of the controller and PCIe lane specs have. The protocol hasn't changed, nor has the implementation - systems engineering applied to subsequent generations (common, as it's faster and cheaper than "re-inventing the wheel" each generation). And the cheaper part is what board makers and system vendors want (it lowers cost per system = the ability to keep prices low <consumers like> and profit margins acceptable for all the companies involved <stockholders like>).

    So maybe this will clarify things this time around... :D :p
     
  17. VirtualRain macrumors 603

    VirtualRain

    Joined:
    Aug 1, 2008
    Location:
    Vancouver, BC
    #17
    @nano... Your last two posts appear to be rambling :confused: :p :)

    Are you still of the opinion that they capped the SATA traffic on the ICH at 660MB/s to avoid saturating the DMI bus?

    As I've said, I don't think that was their motivation. I think at the time the ICH was developed, there was simply no drive traffic that could exceed 660MB/s across 6 ports, so it seemed like a reasonable design target. Are you disagreeing with me? :confused:
     
  18. nanofrog macrumors G4

    Joined:
    May 6, 2008
    #18
    Sorry about that. :eek:

    Yes.

    It comes down to the lack of dynamic bandwidth allocation in DMI. If this existed, there would be no need for any bandwidth limitations in the first place.

    If they didn't set such a limit, the SATA traffic could keep the DMI saturated so that other data couldn't be passed to the system (Ethernet, audio, and USB), and stall other processes until it's completed (same for the other controllers on the ICH too, but SATA by far consumes more bandwidth).

    They had to leave enough bandwidth that you can use the other controllers as fast as possible under the most common conditions. But full speed wasn't possible for the SATA controller under any condition; there's just not enough bandwidth for that (~275MB/s * 6 = 1.65GB/s). That's ~65% more than DMI can handle over either direction (up or down), as each is only good for 1GB/s.

    So they came up with the ~660MB/s cap for SATA for the best balanced operation (taking advantage of the fact that mechanical disks couldn't come anywhere near saturating a 3.0Gb/s SATA port). They figured ~110MB/s per disk, which was considered sufficient at the time.

    SSDs have revealed this problem.

    Fortunately, that 660MB/s or so is allocated dynamically between the SATA ports themselves, so long as the total throughput doesn't exceed that limit. If this weren't the case, then any single drive would top out at ~110MB/s.
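
    A toy model of that dynamic allocation, as a Python sketch (the drive speeds are illustrative assumptions, not measurements):

    ```python
    # Toy model: the ~660MB/s ICH SATA budget shared dynamically across ports.

    ICH_CAP = 660  # MB/s combined SATA throughput on the ICH (rough)

    def throttled(drive_speeds):
        """Scale every drive down proportionally once the cap is exceeded."""
        demand = sum(drive_speeds)
        if demand <= ICH_CAP:
            return list(drive_speeds)  # under the cap: no throttling
        scale = ICH_CAP / demand
        return [round(s * scale) for s in drive_speeds]

    print(throttled([110] * 6))        # six mechanical disks: [110, ...] - fine
    print(throttled([275, 275]))       # two SSDs: [275, 275] - still fine
    print(throttled([275, 275, 275]))  # three SSDs: [220, 220, 220] - throttled
    ```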

    It seems to me that Intel has depended on this to tide them over until they come out with an ICH capable of faster throughput to the chipset - essentially gambling that most users don't have enough SSDs to experience significant throttling, and that those who do would likely be willing to spend funds on a solution (i.e. a PCIe card of some sort).
     
  19. VirtualRain macrumors 603

    VirtualRain

    Joined:
    Aug 1, 2008
    Location:
    Vancouver, BC
  20. vicentk thread starter macrumors regular

    vicentk

    Joined:
    Feb 24, 2008
    Location:
    Hong Kong
    #20
    Dear nanofrog,
    Sorry for my mistake; the 3x SSDs should be in RAID 0.
    On the other hand, that RAID card is too expensive. Do you have any cheaper choices?
    May I know, with this setup, will all the SSDs/HDDs be on the RAID card? If yes, the original bay SAS cable will connect to the RAID card, and I will need one more cable, a SAS to 4x SATA one?
     
  21. nanofrog macrumors G4

    Joined:
    May 6, 2008
    #21
    I realized this. From post #10, I gathered you were after:
    • 3x SSDs in RAID 0
    • 3x HDDs in RAID 5

    What threw me was the way you numbered out the bays and how you meant to connect them.

    Really, the ARC-1880i is the best and cheapest solution for what you're trying to do ($542.29USD, and they do ship internationally if you either can't find it locally or the pricing is insane).

    Now let me explain why this card is the best choice.
    1. It's 6.0Gb/s, which you really need for SSDs (cheaper RAID cards do exist, but they're all 3.0Gb/s and already maxed out by current SSDs - and SSDs are only going to get faster, so those cards won't be good for future models).
    2. It can handle 0/1/10/5/6/50/60 levels, which means you can attach both sets to the card (RAID 0 will run faster on the card), and still have options in the future.
    3. You can boot OS X from the card after you flash the firmware with the EFI portion (which is what allows you to use the RAID 0 as a boot location when attached to the card).
    4. Clean installation (all internal).
    5. Won't waste additional slots.

    Use a 4x 2.5" Backplane Cage for the SSD's (mounts in the empty optical bay). Use this to get power to it.

    You will also need an adapter kit (here) to use the HDD bays with the card (and this would be needed with any internal card).

    You must also use enterprise drives with a RAID card, ideally those on the HDD Compatibility List if one is published by the card maker (Areca does, and sticking to the disks listed will save you all kinds of headaches and aggravation). The reason you need them has to do with how recovery is performed (different timings in the firmware). Consumer mechanical disks won't be stable, if you can even get the array created.

    Get a good UPS (Line Interactive with Pure Sine Wave output inverter as a bare minimum).

    Please understand, a proper RAID configuration is not cheap. But this setup is about the same cost as the Apple RAID Pro on its own (not including disks).

    If you follow what's above, then yes: both the SSD and HDD arrays will be attached to the card.

    The internal cables you need are included with the equipment I linked (the SSDs will be attached via a MiniSAS to 4i*SATA fanout cable that comes with the card; there are 2x in the box). The HDD adapter kit will come with another one to get the HDD data signals to the card as well.

    In the end, you will have a spare cable and won't need to buy anything separately (unlike other brands, which never include any cables). This is part of the reason Areca is chosen (the best price/performance ratio currently available, from what I've seen of actual retail prices).

    There are other solutions that are a bit cheaper, but there will be compromises involved.

    Let me know what you want to do. ;)
     
