
Bhang

macrumors member
Original poster
Oct 10, 2011
72
1
This has me worried. A single E5 chip provides only 40 PCI Express lanes. With two video cards each using 16 lanes, that leaves a measly 8 lanes for all six Thunderbolt 2 ports. How on earth is this going to work?
:confused:
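
For anyone who wants the raw arithmetic behind that worry, here is a back-of-envelope sketch (lane counts as stated above; the x4-per-controller figure assumes six ports means three two-port Thunderbolt controllers, each wanting an x4 uplink):

```python
# Back-of-envelope PCIe lane budget for the worry above.
# Assumptions: 40 lanes from the CPU, x16 per GPU, and an x4 uplink
# per Thunderbolt 2 controller (two ports per controller).

cpu_lanes = 40
gpu_lanes = 2 * 16                      # two GPUs at x16 each
tb_controllers = 6 // 2                 # six ports -> three controllers
tb_lanes_wanted = tb_controllers * 4    # x4 uplink per controller

left_for_tb = cpu_lanes - gpu_lanes
print(f"Lanes left after the GPUs: {left_for_tb}")                   # 8
print(f"Lanes three TB controllers would like: {tb_lanes_wanted}")   # 12
print(f"Shortfall: {tb_lanes_wanted - left_for_tb}")                 # 4
```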
 

flat five

macrumors 603
Feb 6, 2007
5,580
2,657
newyorkcity
This has me worried. A single E5 chip provides only 40 PCI Express lanes. With two video cards each using 16 lanes, that leaves a measly 8 lanes for all six Thunderbolt 2 ports. How on earth is this going to work?
:confused:

it's magic!
 
Last edited:

paulrbeers

macrumors 68040
Dec 17, 2009
3,963
123
This has me worried. A single E5 chip provides only 40 PCI Express lanes. With two video cards each using 16 lanes, that leaves a measly 8 lanes for all six Thunderbolt 2 ports. How on earth is this going to work?
:confused:

Also, don't forget that PCIe 2.0 x4 (what Thunderbolt uses) has roughly the same bandwidth as PCIe 3.0 x2, and PCIe 3.0 is what the E5 provides.
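
A rough check on that equivalence, using theoretical per-lane rates after encoding overhead (a sketch; real-world throughput comes in lower):

```python
# Per-lane usable bandwidth, per direction, after line encoding.
# PCIe 2.0: 5 GT/s with 8b/10b encoding    -> ~500 MB/s per lane
# PCIe 3.0: 8 GT/s with 128b/130b encoding -> ~985 MB/s per lane

v2_lane_mb_s = 5e9 * (8 / 10) / 8 / 1e6      # ~500 MB/s
v3_lane_mb_s = 8e9 * (128 / 130) / 8 / 1e6   # ~985 MB/s

print(f"PCIe 2.0 x4: {4 * v2_lane_mb_s:.0f} MB/s")   # ~2000 MB/s
print(f"PCIe 3.0 x2: {2 * v3_lane_mb_s:.0f} MB/s")   # ~1970 MB/s
```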

----------

How does this compare to a regular computer or workstation?

Well, considering these are Intel CPUs with Intel chipsets, I'm going to say they're like all other workstations, just externally connected instead of internally.
 

ZnU

macrumors regular
May 24, 2006
171
0
Also, don't forget that PCIe 2.0 x4 (what Thunderbolt uses) has roughly the same bandwidth as PCIe 3.0 x2, and PCIe 3.0 is what the E5 provides.

With the additional 8x 2.0 lanes from the chipset mentioned in the post Nugget points to, they might not be bothering to use a PCIe switch to get more bandwidth by converting 3.0 lanes into a larger number of 2.0 lanes. Even without this they'd be able to route 4x 2.0 lanes to each Thunderbolt controller. This would mean you'd want to put any devices that plausibly needed more than 1000 MB/s on separate buses from each other, as you'd have a limit of 2000 MB/s per bus rather than per port... but in practice that wouldn't be much of a real-world bottleneck.
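
To make the per-bus limit concrete, here is a small sketch (the 2000 MB/s figure assumes an x4 PCIe 2.0 uplink per Thunderbolt controller; the device names and throughput numbers are hypothetical):

```python
# Check whether devices hung off one Thunderbolt bus oversubscribe its
# ~2000 MB/s PCIe 2.0 x4 uplink. Device figures are made up for illustration.
BUS_LIMIT_MB_S = 2000

buses = {
    "bus0": [("fast RAID", 1200), ("display data", 300)],            # fits
    "bus1": [("fast RAID", 1200), ("PCIe SSD enclosure", 1100)],     # doesn't
}

for name, devices in buses.items():
    total = sum(rate for _, rate in devices)
    status = "OK" if total <= BUS_LIMIT_MB_S else "oversubscribed"
    print(f"{name}: {total} MB/s of {BUS_LIMIT_MB_S} -> {status}")
```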
 

deconstruct60

macrumors G5
Mar 10, 2009
12,219
3,821
With the additional 8x 2.0 lanes from the chipset mentioned in the post Nugget points to, they might not be bothering to use a PCIe switch to get more bandwidth by converting 3.0 lanes into a larger number of 2.0 lanes. Even without this they'd be able to route 4x 2.0 lanes to each Thunderbolt controller.

If any such switch exists.

The "additional" 8 isn't pragmatically there.

WiFi/Bluetooth: x1 (discrete controller; not in chipset)
USB 3.0: x1 (discrete controller; not in chipset)
Ethernet: x1 (from chipset)
Ethernet: x1 (discrete controller)

[NOTE: even if both Ethernet ports collapse onto one controller, an audio chip possibly consumes another lane if chipset audio isn't used.]

Probably only 4 lanes left from the chipset. So:

x4 TB controller 0
x4 TB controller 1
x4 TB controller 2
x16 GPU
x16 GPU

That's 44, and we still haven't counted the PCIe SSD (another x4).

The new Mac Pro is oversubscribed on PCIe lanes. (So is the 2009-2012 one; the two x4 slots share bandwidth.) With the GPUs clocked down so low, they may be borrowing x4 from one of those x16 links. That is only a 1/4 overlap (4 lanes shared, 12 not) versus a 100% overlap if a Thunderbolt controller and the SSD were made to share.
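
Putting that tally in one place, a sketch of the supply/demand arithmetic (the assumption that only ~4 chipset lanes remain free comes from the list above):

```python
# Supply vs. demand for PCIe lanes, per the tally above.
cpu_lanes     = 40   # Xeon E5 root complex
chipset_spare = 4    # roughly what's left after WiFi/BT, USB 3.0, Ethernet

demand = {
    "TB controller 0": 4,
    "TB controller 1": 4,
    "TB controller 2": 4,
    "GPU A": 16,
    "GPU B": 16,
    "PCIe SSD": 4,
}

total_demand = sum(demand.values())        # 48
total_supply = cpu_lanes + chipset_spare   # 44
print(f"demand {total_demand} vs supply {total_supply}: "
      f"oversubscribed by {total_demand - total_supply} lanes")
```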

Longer term, Apple needs a variant chipset from Intel that can trade in SATA lanes (of which Apple is using zero) for PCIe lanes (and blow past the upper bound of x8).

Either that, or specialized TB controllers that can "step down" PCIe v3 traffic to v2. The SSD should eventually move to x2 v3 with less drama.


This would mean you'd want to put any devices that plausibly needed more than 1000 MB/s on separate buses from each other,

Concurrently need more than... Different storage groupings used at different times will get along fine.


as you'd have a limit of 2000 MB/s per bus rather than per port... but in practice that wouldn't be much of a real-world bottleneck.

There has always been a difference between Thunderbolt's data bandwidth (port to port) and the Thunderbolt controller-to-host bandwidth.
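
For concrete numbers on that difference (a sketch; both figures are theoretical maxima):

```python
# Thunderbolt 2 port-to-port rate vs. the controller's PCIe back-haul.
tb2_wire_gbps = 20                 # two bonded 10 Gbit/s channels
pcie2_x4_gbps = 4 * 5 * (8 / 10)   # x4 uplink, 5 GT/s/lane, 8b/10b -> 16 Gbit/s

print(f"Port-to-port:       {tb2_wire_gbps} Gbit/s")
print(f"Controller to host: {pcie2_x4_gbps:.0f} Gbit/s")
```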
 

AidenShaw

macrumors P6
Feb 8, 2003
18,667
4,676
The Peninsula
If any such switch exists.

They do. http://www.plxtech.com/products/expresslane/switches


With the GPUs clocked down so low, they may be borrowing x4 from one of those x16 links. That is only a 1/4 overlap (4 lanes shared, 12 not) versus a 100% overlap if a Thunderbolt controller and the SSD were made to share.

Not sure what you mean here... PCIe link widths need to be a power of two; I don't think PCIe x12 is supported. Apple could run one GPU at x16 and the other at x8, freeing 8 PCIe 3.0 lanes.


Longer term, Apple needs a variant chipset from Intel that can trade in SATA lanes (of which Apple is using zero) for PCIe lanes (and blow past the upper bound of x8).

That variant exists today - in the form of a dual-socket system with 88 lanes. ;)
 


deconstruct60

macrumors G5
Mar 10, 2009
12,219
3,821

Not that switches exist... of course they do. What you need to find is a switch that lifts x4 v2 traffic into x2 v3 traffic. Unless things have radically changed, all PLX has are switches that don't get kneecapped when passing v3-to-v3 data through. I didn't find any that did an "up/down" lift.



Not sure what you mean here... PCIe link widths need to be a power of two; I don't think PCIe x12 is supported. Apple could run one GPU at x16 and the other at x8, freeing 8 PCIe 3.0 lanes.

I'm talking about the relative bandwidth seen.

host x16 v3 <--------> switch <--------> x16 v3 GPU
                          |
                          +------------> x4 v2 SSD

[ the host side is an x16 link from the CPU's PCIe controller ]
[ the x4 lanes run off the GPU card ]


When in non-kneecapped x16 -> x16 mode the GPU would get x16. The issue is that it would only have those x16 lanes "part time". That "part time" works out to something in the x12 range, not necessarily down in the x8 range.
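
A rough illustration of that "part time" share, using theoretical peak rates (a sketch; it only shows that the GPU's slice stays well above the x8 level even with the SSD running flat out):

```python
# How much of the x16 v3 upstream link is left for the GPU while the
# x4 v2 SSD behind the same switch is busy. Theoretical peak numbers.
v3_lane_gb_s  = 8 * (128 / 130) / 8   # ~0.985 GB/s per PCIe 3.0 lane
upstream_gb_s = 16 * v3_lane_gb_s     # ~15.8 GB/s for x16 v3
ssd_peak_gb_s = 4 * 0.5               # x4 v2 at ~500 MB/s per lane = 2 GB/s

gpu_share = upstream_gb_s - ssd_peak_gb_s
print(f"GPU share with the SSD saturated: {gpu_share:.1f} GB/s "
      f"(~x{gpu_share / v3_lane_gb_s:.0f} v3 equivalent, well above x8)")
```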


That variant exists today - in the form of a dual-socket system with 88 lanes. ;)

No, it doesn't. Not in a Mac Pro or any Apple product.


There is no need to go to yet another whole CPU package when the bandwidth is already there in the chipset; it is just allocated to something Apple isn't using. There is zero design need for the space increase of twice as many DIMM slots, another CPU socket, and the associated infrastructure.

This new chipset wouldn't just be useful for Apple and the Mac Pro. A sizable number of workstations deployed or configured over the last year or so come with an SSD; it isn't as if SSDs are a rarity inside workstations, and high-performance SSDs come in PCIe form factors these days and going forward. Additionally, 10GbE (which Intel wants to sell more of) is also a relatively sizable consumer of PCIe v2 lanes. Again, it would be easier to build workstations with on-motherboard 10GbE configs if there were more v2 lanes around.

So yes, the workstation chipsets are dragging behind the current trends in design.
 
Last edited:

deconstruct60

macrumors G5
Mar 10, 2009
12,219
3,821
The chipset is on a DMI with PCIe 2.0 x4 equivalent bandwidth (20 Gbps).
http://www.intel.com/content/www/us/en/chipsets/c600-series-chipset-datasheet.html

It's quite a bit oversubscribed already.

Oversubscribed storage links are nothing new. The X99 chipset is in that state whether it has 6 USB 3.0 ports, 10 6Gb/s SATA links, and x8 PCIe v2 lanes hanging off it, or 4 USB 3.0 ports, 0-2 6Gb/s SATA links, and x12 PCIe v2 lanes.

At some point Intel either needs to convert DMI into a QPI-like link (10 6+Gb/s SATA links plus anything USB 3.1 aren't going to sit well on a static DMI) or move to a new socket that can support 44 lanes on the CPU.
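
To put numbers on how lopsided that is, a sketch using raw signaling rates for the port mix mentioned above (encoding overhead trims all of these, but the ratio barely changes):

```python
# Worst-case downstream demand behind an X99-class chipset vs. its DMI uplink.
# Raw link rates in Gbit/s.
dmi_gbps = 20          # DMI 2.0 ~ PCIe 2.0 x4

downstream_gbps = (
    6 * 5              # 6 USB 3.0 ports at 5 Gbit/s
    + 10 * 6           # 10 SATA links at 6 Gbit/s
    + 8 * 5            # x8 PCIe 2.0 at 5 Gbit/s per lane
)

print(f"downstream {downstream_gbps} Gbit/s over a {dmi_gbps} Gbit/s uplink "
      f"-> {downstream_gbps / dmi_gbps:.1f}x oversubscribed")   # ~6.5x
```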
 

AidenShaw

macrumors P6
Feb 8, 2003
18,667
4,676
The Peninsula
Oversubscribed storage links are nothing new. The X99 chipset is in that state whether it has 6 USB 3.0 ports, 10 6Gb/s SATA links, and x8 PCIe v2 lanes hanging off it, or 4 USB 3.0 ports, 0-2 6Gb/s SATA links, and x12 PCIe v2 lanes.

At some point Intel either needs to convert DMI into a QPI-like link (10 6+Gb/s SATA links plus anything USB 3.1 aren't going to sit well on a static DMI) or move to a new socket that can support 44 lanes on the CPU.

Exactly. The new system is oversubscribed; we just have to assume that the Apple engineers made the right compromises in dealing with it. Apple chose a single-socket design and threw away 40 lanes.

Intel has a solution with more lanes - Apple didn't use it.
 