If you define "hardly any" as at least a 10% performance loss, true.
Can you really scale drive performance like that though?
Marketing numbers seldom reflect actual real-world performance - so the 20 Gbps raw number is what gets marketed, and nobody mentions that the payload cannot exceed 16 Gbps.
So is 500 MB/s per lane of PCIe 2.0 marketed or real-world performance?
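To put numbers behind that: PCIe 2.0 signals at 5 GT/s per lane with 8b/10b encoding, so only 8 of every 10 bits on the wire are payload. A quick sketch of the arithmetic (per direction, ignoring packet overhead beyond the line encoding):

```python
# PCIe 2.0 per-lane arithmetic: raw line rate vs. usable payload.
# Figures are per direction and ignore packet/protocol overhead
# beyond the 8b/10b line encoding.

RAW_GT_PER_S = 5.0            # PCIe 2.0 signaling rate per lane (GT/s)
ENCODING_EFFICIENCY = 8 / 10  # 8b/10b: 8 payload bits per 10 line bits

payload_gbps_per_lane = RAW_GT_PER_S * ENCODING_EFFICIENCY  # 4.0 Gb/s
payload_mbps_per_lane = payload_gbps_per_lane * 1000 / 8    # 500 MB/s

lanes = 4
raw_gbps_x4 = RAW_GT_PER_S * lanes               # 20 Gb/s "marketing" figure
payload_gbps_x4 = payload_gbps_per_lane * lanes  # 16 Gb/s payload ceiling

print(f"Per lane: {payload_gbps_per_lane:.1f} Gb/s = {payload_mbps_per_lane:.0f} MB/s")
print(f"x4 link:  {raw_gbps_x4:.0f} Gb/s raw, {payload_gbps_x4:.0f} Gb/s payload")
```

So the 500 MB/s per-lane figure is the payload rate after encoding overhead; the 20 Gbps figure for an x4 link is the raw rate before it.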
T-Bolt 1 was PCIe 1.0 - because it wasn't any faster than PCIe 1.0 (in a 4-lane-to-4-lane bridge).
We have very little info on T-Bolt 2, but its speed does match a PCIe 2.0 4-lane-to-4-lane bridge.
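Following the inference above - that each Thunderbolt generation's link rate lines up with an x4 PCIe bridge of the corresponding generation, which is the posters' reading of observed speeds rather than anything Intel documents - the raw and payload numbers stack up like this:

```python
# Hypothetical mapping of Thunderbolt link rates onto x4 PCIe bridges,
# following the inference in the posts above (not an Intel-documented spec).

PCIE = {
    # generation: (GT/s per lane, encoding efficiency)
    "PCIe 1.0": (2.5, 8 / 10),
    "PCIe 2.0": (5.0, 8 / 10),
}
THUNDERBOLT_LINKS = {
    "T-Bolt 1 (one of two channels)": 10.0,  # Gb/s
    "T-Bolt 2 (channels combined)": 20.0,    # Gb/s
}

LANES = 4
for gen, (gt_s, eff) in PCIE.items():
    raw = gt_s * LANES
    print(f"{gen} x4: {raw:.0f} Gb/s raw, {raw * eff:.0f} Gb/s payload")
for name, rate in THUNDERBOLT_LINKS.items():
    print(f"{name}: {rate:.0f} Gb/s link rate")
```

In both generations the Thunderbolt link rate matches the raw rate of the corresponding x4 PCIe link, while the deliverable payload is the lower, post-encoding figure.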
I thought TB 2.0 was merely the combination of the two 10 Gb/s channels of TB 1.0 into a single, dynamic channel, instead of reserving one channel for DisplayPort and one for data. My question was simply: if there is nothing more than 4 lanes of PCIe 2.0 going into the TB controller (I could be out of my league here in terms of technicality), how can they claim anything more than 16 Gb/s even at a theoretical level?
In theory, T-Bolt 2.0 could provide a PCIe 3.0 x1 or x2 interface. Or, it could provide a PCIe 3.0 x4 with pauses between packets - but there would be little actual advantage to doing that other than possibly being able to run at 20 Gbps instead of 16 Gbps depending on the encoding.
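To make the encoding point concrete - and to be clear, these backhaul options are hypothetical, since the actual host-side link isn't publicly documented - PCIe 3.0 uses 128b/130b encoding instead of 8b/10b, which changes how much payload fits under Thunderbolt 2's 20 Gbps channel:

```python
# Hypothetical host-side PCIe links feeding a 20 Gb/s Thunderbolt 2 channel.
# These options are assumptions for illustration, not Intel documentation;
# the point is just the encoding arithmetic.

TB2_LINK_GBPS = 20.0

options = {
    # name: (GT/s per lane, lanes, encoding efficiency)
    "PCIe 2.0 x4": (5.0, 4, 8 / 10),      # 8b/10b encoding
    "PCIe 3.0 x2": (8.0, 2, 128 / 130),   # 128b/130b encoding
    "PCIe 3.0 x4": (8.0, 4, 128 / 130),
}

for name, (gt_s, lanes, eff) in options.items():
    payload = gt_s * lanes * eff
    if payload > TB2_LINK_GBPS:
        note = "exceeds the TB2 channel, so the bridge would have to pause between packets"
    else:
        note = "fits within the 20 Gb/s TB2 channel"
    print(f"{name}: {payload:.1f} Gb/s payload -- {note}")
```

A PCIe 3.0 x4 upstream link could deliver roughly 31.5 Gbps of payload, far more than the 20 Gbps channel can carry, which is where the pauses would come from; PCIe 2.0 x4 and PCIe 3.0 x2 both stay under the channel rate.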
In truth, though, T-Bolt is an opaque, proprietary interface that is very difficult to get any facts on unless you are a licensee.
Why would there be pauses between packets?
So, is it known how many lanes each controller/port consumes? I assume on the new Mac Pro all 40 lanes are in use. Presumably at least 16 of them are reserved for the GPUs (I assume each one is x8), which leaves 24 lanes. If each port is an x4 bridge (or whatever it's called), then that would account for the rest of the lanes. Or, as I seem to recall, are there only 3 controllers for all 6 ports? I'm just a little foggy on the facts here and how it works.
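Tallying the lane budget under the assumptions in that post - 40 lanes total, two GPUs assumed at x8 each, and x4 per Thunderbolt controller, none of which is a confirmed spec:

```python
# Lane-budget tally for the new Mac Pro under the assumptions in the post above.
# All of these numbers are assumptions from the post, not confirmed specs.

TOTAL_LANES = 40
GPU_LANES = 2 * 8            # two GPUs, assumed x8 each
LANES_PER_CONTROLLER = 4     # each Thunderbolt controller assumed to sit on x4

def lanes_left(controllers: int) -> int:
    """Lanes remaining if `controllers` controllers each take an x4 link."""
    return TOTAL_LANES - GPU_LANES - controllers * LANES_PER_CONTROLLER

# One controller per port (6 ports, 6 controllers):
print("6 controllers:", lanes_left(6), "lanes left")  # 40 - 16 - 24 = 0
# One controller per pair of ports (3 controllers for 6 ports):
print("3 controllers:", lanes_left(3), "lanes left")  # 40 - 16 - 12 = 12
```

Under those assumptions, one controller per port uses the 40 lanes exactly, while three controllers shared across the six ports would leave a dozen lanes for everything else in the machine.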