nMP optimal use of TB2

Discussion in 'Mac Pro' started by mintakax, Jan 1, 2014.

  1. mintakax macrumors regular

    Joined:
    Dec 19, 2013
    #1
    I'm wondering if there is an optimal hookup configuration for the three TB2 buses in the nMP? For now, let's say that I have two 2560x1440 monitors (NEC PA271W-BK), a TB RAID (Pegasus R6), and a TB dock (CalDigit).
    Do the monitors need to be on separate buses? If so, then the answer is clear.
     
  2. Rock Hound macrumors member

    Joined:
    Dec 26, 2013
    #2
    Interesting question, and the answer probably depends on your particular use case. Here's my logic:

    I hooked my two 2560 x 1600 30" displays to the same bus (lower two ports). This is working well for me, since I don't game, and I doubt I will challenge the video cards unless audio, photo, and illustration software gets a major OpenCL overhaul. I reserved one bus for my main storage so it doesn't have to compete for bandwidth. I am using the third bus for some legacy Firewire audio gear.
     
  3. wonderspark macrumors 68030

    wonderspark

    Joined:
    Feb 4, 2010
    Location:
    Oregon
    #3
    Yes, there is an optimal use of the three TB2 controllers. Note the last two TB ports are also shared with the HDMI port.

     
  4. Cubemmal macrumors 6502a

    Joined:
    Jun 13, 2013
    #4
    Apple states that monitors should ideally be on separate TB buses (1, 2, or 3). Each TB2 bus is 20 Gb/s, but I forget the exact bandwidth the 27" Cinema Display needs. On TB 1.0 (10 Gb/s) it took roughly half the channel, so call it about 5 Gb/s, leaving the rest for data I/O. Those are (PCIe 2.0) directly connected to the CPU, not bad. USB 3.0 has only a single PCIe lane behind it, so you're better off using TB disks.
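    To put rough numbers on the bandwidth math above, here is a quick sketch. It assumes 24-bit color at 60 Hz and ignores blanking intervals and link-encoding overhead, so the real display figure is somewhat higher:

```python
# Rough bandwidth budget for one 2560x1440 display on a 20 Gb/s TB2 bus.
# Assumes 24-bit color at 60 Hz; ignores blanking intervals and link
# encoding overhead, so the real figure is somewhat higher.

TB2_BUS_GBPS = 20.0  # Thunderbolt 2 bus bandwidth

def display_gbps(width, height, refresh_hz=60, bits_per_pixel=24):
    """Raw pixel bandwidth of a display stream in Gb/s."""
    return width * height * refresh_hz * bits_per_pixel / 1e9

needed = display_gbps(2560, 1440)   # ~5.3 Gb/s
headroom = TB2_BUS_GBPS - needed    # ~14.7 Gb/s left for data I/O
print(f"display: {needed:.1f} Gb/s, headroom: {headroom:.1f} Gb/s")
```

    So even with one of these displays on a bus, most of the TB2 bandwidth remains for storage and other I/O.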
     
  5. VirtualRain macrumors 603

    VirtualRain

    Joined:
    Aug 1, 2008
    Location:
    Vancouver, BC
    #5
    This is good to know for optimizing your PCIe data bandwidth over TB... Make sure peripherals are not bottlenecking each other by putting high speed devices on different buses.
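    One way to think about "don't let peripherals bottleneck each other" is simple bin packing: put the hungriest devices on different buses. A toy planner sketch (the device names come from this thread; the Gb/s figures are rough illustrative estimates, not measurements):

```python
# Toy greedy planner: assign devices to the three TB2 buses so that
# high-bandwidth devices end up on different buses. Bandwidth numbers
# are rough illustrative estimates, not measured figures.

BUS_COUNT = 3

def plan(devices):
    """Greedy bin packing: biggest consumer first, onto the least-loaded bus."""
    buses = [{"load": 0.0, "devices": []} for _ in range(BUS_COUNT)]
    for name, gbps in sorted(devices, key=lambda d: -d[1]):
        bus = min(buses, key=lambda b: b["load"])
        bus["load"] += gbps
        bus["devices"].append(name)
    return buses

devices = [
    ("Pegasus R6 RAID", 8.0),   # rough estimate
    ("NEC PA271W #1",   5.3),
    ("NEC PA271W #2",   5.3),
    ("CalDigit dock",   2.0),   # rough estimate
]
for i, bus in enumerate(plan(devices), 1):
    print(f"bus {i}: {bus['devices']} ({bus['load']:.1f} Gb/s)")
```

    With these estimates, the RAID lands on a bus by itself and the two displays end up on separate buses, which matches the advice in this thread.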

    However, I'd also like to know how displays are tied to GPUs. I believe Anand said something like one GPU is used for display and one for compute, but how does one connect a pair of displays? Are they both somehow magically connected to one GPU regardless of TB port, or do you need to know which ports are associated with which GPUs?

    Is this mentioned in the documentation anywhere?
     
  6. chfilm macrumors 65816

    chfilm

    Joined:
    Nov 15, 2012
    Location:
    Germany
    #6
    Plus, what about daisy-chaining Thunderbolt displays? Bad idea?
     
  7. deconstruct60 macrumors 604

    Joined:
    Mar 10, 2009
    #7
    What Apple fails to do is illustrate how the GPU cards are hooked up. Given the TB and HDMI groupings, it seems as though they have hooked one GPU to TB buses 1 and 2 and the other to TB bus 0 (so it can drive HDMI... what HDMI has to do with Thunderbolt I have no clue; TB doesn't need to be in the loop at all). I think their "TB bus" terminology might be a bundle grouping of the PCIe and DisplayPort inputs to the TB controllers. [And the DisplayPort part of the bus 0 bundle is peeled off and passed through an HDMI converter before it ever reaches the TB controller.]



    DisplayPort, DVI, and HDMI monitors should take zero Thunderbolt bandwidth on a TB port or controller. If you want to shift to 4K, or just use mainstream display products, avoiding TB "displays" saves TB network bandwidth. If you need a display docking station to expand the number of ports, fine, but if not, you can keep that TB bandwidth for other kinds of TB devices.



    So if you want to conserve TB bandwidth so it is available for non-display TB devices, it is much better to attach DisplayPort/HDMI/etc. displays directly to the Mac Pro than to push them downstream through a TB chain. With six ports there is no good reason to do that.

    Which TB ports to use probably depends more on the computational demand the workload is throwing at the second GPU. If it is largely unloaded with display work (e.g., displays on ports 5, 6, and/or HDMI), then I would collect non-display TB devices on the "video" TB ports driven by that GPU (I think that may be ports 1-4, or at least 1-2).

    Some older apps may work better if both screens of the app are handled by the same GPU (older versions of FCPX had this limitation; I'm not sure whether the new one still does. That wouldn't be surprising, as Apple may still want to force a segregation of GPU and OpenCL workloads.)

    If you are going to be using both GPUs, then it makes more sense to spread the GPU load out more evenly (e.g., go with a vertical pattern when attaching displays; for instance, the 'column' of ports that includes the HDMI port, starting from the bottom of the column).


    They are not directly connected to the CPU. If the PLX switch can uplift the PCIe 2.0 traffic in parallel onto PCIe 3.0, then that is not bad. If not, it is what you have to live with.

    Depends in part on what you already have. If you have, and have been using, USB 3.0 drives, it doesn't make sense to dump them.

    There is a performance/cost trade-off for the disks. The overwhelming majority of USB 3.0 sockets deployed pragmatically have exactly the same single PCIe lane (or equivalent) of bandwidth behind them, as long as you are dealing with a single USB controller.
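    The "single PCIe lane" equivalence checks out roughly like this (a sketch: both links run at 5 Gb/s with 8b/10b encoding, and real-world throughput is lower still due to protocol overhead):

```python
# Why a USB 3.0 port and a single PCIe 2.0 lane are roughly equivalent:
# both are 5 Gb/s links using 8b/10b encoding (8 data bits per 10 line bits).

ENCODING = 8 / 10  # 8b/10b line-code efficiency

def effective_mbytes_per_s(line_rate_gbps):
    """Usable payload bandwidth in MB/s after 8b/10b encoding."""
    return line_rate_gbps * ENCODING * 1000 / 8

usb3 = effective_mbytes_per_s(5.0)      # USB 3.0 SuperSpeed
pcie2_x1 = effective_mbytes_per_s(5.0)  # PCIe 2.0, one lane (5 GT/s)
print(f"USB 3.0: {usb3:.0f} MB/s, PCIe 2.0 x1: {pcie2_x1:.0f} MB/s")
```

    Both come out to about 500 MB/s of raw payload, which is why a single USB 3.0 controller can't outrun the lane behind it.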

    ----------

    Monitors, or their display docking stations? Docking stations with a DisplayPort sink attached to them are also high TB bandwidth consumers. Those should be spread out; otherwise, you are dropping multiple hogs onto one bus.

    Apple's example of six TB display docking stations is a highly goofy configuration. It will distribute the display load over both GPUs and then suck up the vast majority of TB bandwidth with video traffic. You'd have a humongous number of Ethernet, FireWire, and USB ports but quite limited external bandwidth to storage (unless you somehow 'glue' all of those docking-station ports back together).
     
