New MP Lack of PCIe lanes an issue?

Discussion in 'Mac Pro' started by Bhang, Dec 24, 2013.

  1. Bhang macrumors member

    Joined:
    Oct 10, 2011
    #1
    This has me worried. A single E5 chip only provides 40 PCI Express lanes. With two video cards each using 16 lanes, that leaves a measly 8 lanes for all six Thunderbolt 2 ports. How on earth is this going to work?
    :confused:
     
  2. flat five, Dec 24, 2013
    Last edited: Dec 24, 2013

    flat five macrumors 601

    flat five

    Joined:
    Feb 6, 2007
    Location:
    newyorkcity
    #2
    it's magic!
     
  3. Cubemmal macrumors 6502a

    Joined:
    Jun 13, 2013
    #4
    How does this compare to a regular computer or workstation?
     
  4. paulrbeers macrumors 68040

    Joined:
    Dec 17, 2009
    #5
    Also don't forget that x4 PCIe 2.0 (what Thunderbolt uses) has the same bandwidth as x2 PCIe 3.0, which is the PCIe generation the E5 provides.
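
    Rough numbers on that (a Python sketch; the ~500 MB/s per v2 lane and ~985 MB/s per v3 lane figures are approximate usable rates after encoding overhead, not exact specs):

    Code:
    # approximate usable bandwidth per lane, MB/s
    PCIE2_PER_LANE = 500    # PCIe 2.0: 5 GT/s, 8b/10b encoding
    PCIE3_PER_LANE = 985    # PCIe 3.0: 8 GT/s, 128b/130b encoding

    tb_uplink = 4 * PCIE2_PER_LANE   # x4 v2 feeding a Thunderbolt controller
    v3_equiv = 2 * PCIE3_PER_LANE    # x2 v3 out of the E5

    print(tb_uplink, v3_equiv)       # ~2000 vs ~1970 MB/s -- effectively the same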

    ----------

    Well considering these are Intel CPUs with Intel chipsets, I'm going to say they are like all other workstations. Just externally connected vs internally.
     
  5. ZnU macrumors regular

    Joined:
    May 24, 2006
    #6
    With the additional 8x 2.0 lanes from the chipset mentioned in the post Nugget points to, they might not be bothering to use a PCIe switch to get more bandwidth by converting 3.0 lanes into a larger number of 2.0 lanes. Even without this they'd be able to route 4x 2.0 lanes to each Thunderbolt controller. This would mean you'd want to put any devices that plausibly needed more than 1000 MB/s on separate buses from each other, as you'd have a limit of 2000 MB/s per bus rather than per port... but in practice that wouldn't be much of a real-world bottleneck.
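
    To put rough numbers on that (a sketch, assuming x4 v2 per Thunderbolt controller with two ports each and ~500 MB/s usable per v2 lane; not a claim about Apple's actual routing):

    Code:
    PCIE2_PER_LANE = 500           # MB/s, approximate usable rate
    lanes_per_controller = 4       # assumed: x4 v2 per TB controller, no switch
    ports_per_controller = 2

    per_bus = lanes_per_controller * PCIE2_PER_LANE             # ~2000 MB/s shared by the bus
    per_port_when_both_busy = per_bus // ports_per_controller   # ~1000 MB/s each

    print(per_bus, per_port_when_both_busy)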
     
  6. deconstruct60 macrumors 604

    Joined:
    Mar 10, 2009
    #7
    If any such switch exists.

    The "additional" 8 isn't pragmatically there.

    WiFi/Bluetooth x1 (discrete controller; not in chipset)
    USB 3.0 x1 (discrete controller; not in chipset)
    Ethernet x1 (from chipset)
    Ethernet x1 (discrete controller)

    [NOTE: even if both Ethernet ports collapse onto one controller, an audio chip possibly consumes another lane if chipset audio isn't used.]

    Probably only 4 left from the chipset. So

    x4 TB controller 0
    x4 TB controller 1
    x4 TB controller 2
    x16 GPU
    x16 GPU

    That's 44, and it still doesn't count the PCIe SSD (another x4).

    The new Mac Pro is oversubscribed on PCIe lanes. (So is the 2009-2012 one: the two x4 slots share bandwidth.) With the GPUs clocked down so low, Apple may be borrowing x4 from one of those x16 links. That is only a 1/4 overlap (4 lanes shared, 12 not) versus a 100% overlap if a TB controller and the SSD had to share.
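
    Tallying that budget (a sketch; the "4 left from the chipset" number is the estimate above, not a confirmed figure):

    Code:
    cpu_v3_lanes = 40            # one Xeon E5
    chipset_v2_lanes_left = 4    # rough estimate after WiFi/BT, USB 3.0, Ethernet, audio

    demand = {
        "GPU A": 16, "GPU B": 16,
        "TB controller 0": 4, "TB controller 1": 4, "TB controller 2": 4,
        "PCIe SSD": 4,
    }
    wanted = sum(demand.values())                      # 48
    available = cpu_v3_lanes + chipset_v2_lanes_left   # ~44
    print(wanted, available)                           # oversubscribed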

    Longer term, Apple needs a variant chipset from Intel that can trade in SATA lanes (Apple is using zero of them) for PCIe lanes (and blow past the chipset's upper bound of x8).

    Either that or specialized TB controllers that can 'step down' PCIe v3 traffic into v2. The SSD should eventually go x2 v3 with less drama.


    Unless you concurrently need more than that, different storage groupings used at different times will get along OK.


    There always was a difference between the Thunderbolt data bandwidth (from port to port) and the Thunderbolt controller to host bandwidth.
     
  7. flat five macrumors 601

    flat five

    Joined:
    Feb 6, 2007
    Location:
    newyorkcity
    #8
    maybe those are x8?
    isn't pcie 3 twice as fast as 2?
     
  8. AidenShaw macrumors P6

    AidenShaw

    Joined:
    Feb 8, 2003
    Location:
    The Peninsula
    #9
    They do. http://www.plxtech.com/products/expresslane/switches


    Not sure what you mean here.... PCIe lanes need to be a power of two - I don't think that PCIe x12 is supported. Apple could run one GPU at x16 and the other at x8 - freeing 8 PCIe 3.0 lanes.


    That variant exists today - in the form of a dual-socket system with 88 lanes. ;)
     
  9. deconstruct60 macrumors 604

    Joined:
    Mar 10, 2009
    #10
    They'd be squashing throughput in some of the higher data transfer OpenCL and graphics loads.
     
  10. flat five, Dec 25, 2013
    Last edited: Dec 25, 2013

    flat five macrumors 601

    flat five

    Joined:
    Feb 6, 2007
    Location:
    newyorkcity
    #11
  11. deconstruct60, Dec 25, 2013
    Last edited: Dec 25, 2013

    deconstruct60 macrumors 604

    Joined:
    Mar 10, 2009
    #12
    It's not a question of whether switches exist... of course they do. What you need to find is a "switch" that lifts x4 v2 traffic into x2 v3 traffic. Unless things have radically changed, all PLX has are switches that don't kneecap the link when passing v3-to-v3 traffic straight through; I didn't find any that do the "up/down" lift.



    Talking relative bandwidth seen.

    host x16 v3 (one x16 link from the CPU's controller)
         |
         v
       switch <----------------> x16 v3 GPU
         |
         +------> x4 v2 SSD (the x4 lanes run off the GPU card)


    When in non-kneecapped x16 -> x16 mode the GPU would get x16. The issue is that it would only have those x16 lanes "part time". That "part time" effective rate is in the x12 range, not necessarily down in the x8 range.
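
    In rough numbers (just restating the arithmetic above; the x4 carve-out is speculation, not a known fact about the actual board):

    Code:
    gpu_lanes = 16
    shared_with_ssd = 4    # the speculated x4 carved off one GPU's x16

    worst_case = gpu_lanes - shared_with_ssd   # x12-equivalent while the SSD is moving data
    print(worst_case)      # 12 -- and only "part time"; otherwise the GPU sees the full x16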


    No it doesn't. Not in a Mac Pro or an Apple product.


    There is no need to go to yet another whole CPU package when the bandwidth is already there in the chipset. It is just allocated to something Apple isn't using. There is zero design need for the space increase of twice as many DIMM slots, yet another CPU socket, and the associated infrastructure.

    This new chipset wouldn't just be useful for Apple and the Mac Pro. A sizable number of workstations deployed/configured over the last year or so come with SSDs; it isn't as if SSDs are a rarity inside workstations, and high-performance SSDs come in PCIe form factors these days and going forward. Additionally, 10GbE (which Intel wants to sell more of) is also a relatively sizable consumer of PCIe v2 lanes. Again, it would be easier to build workstations with on-motherboard 10GbE configs if there were more v2 lanes around.

    So yes, the workstation chipsets are dragging behind the current trends in design.
     
  12. AidenShaw macrumors P6

    AidenShaw

    Joined:
    Feb 8, 2003
    Location:
    The Peninsula
    #13
  13. deconstruct60 macrumors 604

    Joined:
    Mar 10, 2009
    #14
    Oversubscribed storage links are nothing new. The X99 chipset is in that state whether it has 6 USB 3.0 ports, ten 6Gb/s SATA links, and x8 of PCIe v2 on it, or 4 USB 3.0 ports, 0-2 6Gb/s SATA links, and x12 of PCIe v2 on it.

    At some point Intel either needs to convert DMI to a QPI-like link (ten 6+Gb/s SATA links and USB 3.1 anything aren't going to sit well on a static DMI), or they will have to move to a new socket that can support 44 lanes on the CPU.
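
    The DMI point in rough numbers (a sketch; assumes DMI 2.0 is roughly an x4 PCIe 2.0-class link, ~2000 MB/s, and optimistic peak figures for the devices behind it):

    Code:
    DMI2_UPLINK = 4 * 500                # ~2000 MB/s to the CPU

    downstream_peak = {
        "10x SATA 6Gb/s": 10 * 600,      # ~600 MB/s usable each
        "6x USB 3.0":      6 * 500,      # ~500 MB/s usable each
        "x8 PCIe v2":      8 * 500,
    }
    print(sum(downstream_peak.values()), "MB/s of devices behind a", DMI2_UPLINK, "MB/s uplink")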
     
  14. AidenShaw macrumors P6

    AidenShaw

    Joined:
    Feb 8, 2003
    Location:
    The Peninsula
    #15
    Exactly. The new system is oversubscribed; we just have to assume the Apple engineers made the right compromises in dealing with it. Apple chose to use a single-socket design and throw away 40 lanes.

    Intel has a solution with more lanes - Apple didn't use it.
     
