This has me worried. A single E5 chip only provides 40 PCI Express lanes. With two video cards each using 16 lanes, that leaves a measly 8 lanes for all six Thunderbolt 2 ports. How on earth is this going to work?
How does this compare to a regular computer or workstation?
Also don't forget that an x4 PCIe 2.0 link (what Thunderbolt uses) has the same bandwidth as x2 PCIe 3.0, which is the PCIe generation the E5 provides.
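Quick sanity check on that equivalence, using the per-lane figures from the PCIe specs (2.0 uses 8b/10b encoding, 3.0 uses 128b/130b):

```python
# Per-lane usable bandwidth, each direction:
# PCIe 2.0: 5 GT/s with 8b/10b encoding   -> 500 MB/s per lane
# PCIe 3.0: 8 GT/s with 128b/130b encoding -> ~985 MB/s per lane
pcie2_lane = 5e9 * 8 / 10 / 8 / 1e6   # MB/s
pcie3_lane = 8e9 * 128 / 130 / 8 / 1e6

print(4 * pcie2_lane)  # x4 PCIe 2.0: 2000.0 MB/s
print(2 * pcie3_lane)  # x2 PCIe 3.0: ~1969 MB/s
```

So x2 of 3.0 lands within about 1.5% of x4 of 2.0.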
With the additional x8 of PCIe 2.0 lanes from the chipset mentioned in the post Nugget points to, they might not bother using a PCIe switch to convert 3.0 lanes into a larger number of 2.0 lanes for extra bandwidth. Even without one, they'd be able to route an x4 PCIe 2.0 link to each Thunderbolt controller.
This would mean you'd want to put any devices that plausibly needed more than 1000 MB/s on separate buses from each other,
as you'd have a limit of 2000 MB/s per bus rather than per port... but in practice that wouldn't be much of a real-world bottleneck.
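The lane budget above can be sketched as follows (the three-controller split and the chipset lane count are assumptions from the posts above, not a confirmed configuration):

```python
# Hypothetical lane budget for a 40-lane E5 (assumed layout, not confirmed)
cpu_lanes = 40
gpu_lanes = 2 * 16                    # two GPUs at x16 each
left_for_tb = cpu_lanes - gpu_lanes   # 8 PCIe 3.0 lanes left on the CPU

chipset_lanes = 8                     # extra PCIe 2.0 lanes from the PCH
tb_controllers = 3                    # 6 ports = 3 controllers, 2 ports each

# Each Thunderbolt controller wants an x4 link; 8 CPU lanes plus
# 8 chipset lanes cover three x4 links with lanes to spare.
lanes_needed = tb_controllers * 4
print(lanes_needed <= left_for_tb + chipset_lanes)  # True
```

That's where the 2000 MB/s-per-bus figure comes from: each controller's x4 PCIe 2.0 link, shared by its two ports.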
If any such switch exists.
With the GPUs clocked down so low, they may be borrowing x4 from one of those x16 links. That's only a 1/4 hit (sharing 4 lanes and leaving 12 untouched), versus 100% overlap if a Thunderbolt controller and the SSD had to share.
Longer term, Apple needs a variant chipset from Intel that can trade in SATA lanes (of which Apple is using zero) for PCIe lanes, and blow past the upper bound of x8.
They'd be squashing throughput in some of the higher-bandwidth OpenCL and graphics loads.
Not sure what you mean here... PCIe link widths need to be a power of two; I don't think PCIe x12 is supported. Apple could run one GPU at x16 and the other at x8, freeing 8 PCIe 3.0 lanes.
That variant exists today, in the form of a dual-socket system with 88 lanes.
There's no need to go to a whole extra CPU package when the bandwidth is already there in the chipset.
The chipset is on a DMI with PCIe 2.0 x4 equivalent bandwidth (20 Gbps).
It's quite a bit oversubscribed already.
Oversubscribed storage links are nothing new. The X99 chipset is in that state whether it has 6 USB 3.0 ports, 10 6 Gb/s SATA links, and x8 of PCIe 2.0 on it, or 4 USB 3.0 ports, 0-2 6 Gb/s SATA links, and x12 of PCIe 2.0 on it.
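A rough tally shows just how oversubscribed that DMI link is, using the first configuration above (peak link rates; real workloads never sum like this, which is exactly why it mostly works in practice):

```python
# Peak downstream bandwidth behind an X99-class PCH vs. the DMI uplink
dmi_gbps = 20                    # DMI 2.0 ~= PCIe 2.0 x4 (20 Gbps)

usb3 = 6 * 5                     # 6 USB 3.0 ports at 5 Gbps each
sata = 10 * 6                    # 10 SATA ports at 6 Gbps each
pcie = 8 * 4                     # x8 PCIe 2.0 at ~4 Gbps effective per lane

downstream = usb3 + sata + pcie  # 122 Gbps of peak downstream links
print(downstream / dmi_gbps)     # 6.1x oversubscribed
```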
At some point Intel either needs to convert DMI into a QPI-like link (10 6+ Gb/s SATA links and any flavor of USB 3.1 aren't going to sit well on a static DMI), or move to a new socket that can support 44 lanes on the CPU.