"Each direction in each channel can be data and / or display."
Meaning one pipe in one direction can mix data & DP?
Exactly. The packets, whether they were originally DisplayPort or PCIe, are all encapsulated in Thunderbolt packets by the protocol adapters before they are transported. The Thunderbolt switch, PHY, and cable are just focused on getting all of the packets to the correct addresses in a timely fashion.
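To make that concrete, here's a toy Python sketch of the encapsulation step as I picture it. The real Thunderbolt packet format isn't public, so the field names and addresses are purely illustrative:

```python
# Toy model of the transport layer: protocol adapters wrap native DP or PCIe
# packets in a Thunderbolt header, and from then on only the header matters.
# (Field names and addresses are made up; the real format isn't public.)

from dataclasses import dataclass

@dataclass
class TBPacket:
    src: int           # source protocol adapter address
    dst: int           # destination protocol adapter address
    payload_type: str  # "PCIe" or "DisplayPort" -- only the endpoints care
    payload: bytes

def encapsulate(payload: bytes, payload_type: str, src: int, dst: int) -> TBPacket:
    """A protocol adapter wraps its native packet for transport."""
    return TBPacket(src, dst, payload_type, payload)

# Packets from both adapters end up interleaved in the same transport stream:
stream = [
    encapsulate(b"<pcie tlp>", "PCIe", src=0, dst=2),
    encapsulate(b"<dp micro-packet>", "DisplayPort", src=1, dst=3),
]
```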
Two ATDs chained make 11.6Gbps, which won't fit in one direction of one channel.
If TB couldn't mix DP & data in one path, both channels would be used for DP and then you couldn't send data to a DAS.
Since you obviously can, DP & data are mixed together in one path, meaning that the TB controller can allocate more than 10 Gbps of DP or data in one cable?
So the hard limit is 20 Gbps if there's enough bandwidth in PCIe?
As far as I can tell (and who really knows who isn't under NDA), the hard limits are a factor of what protocol adapters a Thunderbolt controller contains. It looks like Light Ridge has 2 DP 1.1a to Thunderbolt Source adapters, 1 DP 1.1a to Thunderbolt Sink adapter, and 1 bi-directional PCIe 2.0 to Thunderbolt adapter. This would allow it to move 10Gbps PCIe + 17.282Gbps DP in the outbound direction, and 10Gbps PCIe + 8.641Gbps DP in the inbound direction.
Of course 10 + 17.282 > 20, so unless you have a 27-inch iMac or some other device with 2 Thunderbolt ports, you're limited to the 20Gbps that the cable can carry. The only real-world scenario I can come up with that would bump into the 20Gbps ceiling is to daisy-chain two Apple Thunderbolt Displays and a Pegasus R6 full of SF-2281 based SSDs, and then perform a long sequential write using highly compressible data. This would result in 11.6Gbps of DisplayPort packets and 10Gbps of PCIe packets all heading in the same direction simultaneously, overwhelming the cable's total single-direction bandwidth of 20Gbps. This is an absolute corner case, and yet the performance impact would still be fairly minor.
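For what it's worth, the arithmetic behind those numbers works out like this. The DP 1.1a link figures are standard; the per-display rate assumes 2560x1440 @ 60Hz with reduced blanking, which is an approximation on my part:

```python
# Back-of-the-envelope math for the corner case above. DP 1.1a runs 4 lanes
# at 2.7 Gbps each with 8b/10b coding; the per-display figure assumes
# 2560x1440 @ 60Hz with reduced blanking (~241.5 MHz pixel clock, 24 bpp).

dp_link = 4 * 2.7e9 * 0.8          # one DP 1.1a link after 8b/10b: 8.64 Gbps
two_sources = 2 * dp_link          # two DP source adapters: ~17.28 Gbps

atd_stream = 241.5e6 * 24          # one ATD: ~5.8 Gbps of pixels
pcie = 10e9                        # the PCIe adapter, per direction
corner_case = 2 * atd_stream + pcie

print(f"One DP link:     {dp_link / 1e9:.2f} Gbps")
print(f"Two DP sources:  {two_sources / 1e9:.2f} Gbps")
print(f"Two ATDs + PCIe: {corner_case / 1e9:.1f} Gbps vs. 20 Gbps of cable")
```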
Btw, the slide says we will get optical cables this year...
I said it was an Intel slide, I never said anything about it not being a pack of lies.
Wouldn't a T-Bolt switch simply be a PCIe switch (4 lanes PCIe in to n*4 lanes PCIe out), with a T-Bolt controller on the input (T-Bolt to PCIe) and a T-Bolt controller on each output (PCIe to T-Bolt)?
You wouldn't need a T-Bolt switch, simply use a PCIe switch.
Why would you convert back and forth unnecessarily? To go back to my 10GbE example, an Ethernet network just moves frames between addresses, it doesn't care about the structure of the higher layers. An Ethernet switch is equally happy forwarding TCP/IP, UDP, AppleTalk, or any number of other types of packets. Thunderbolt does the same thing, delivers packets to addresses, and doesn't care about whether they are PCIe or DisplayPort until they get to their destination.
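In code terms, the switching decision I'm describing looks something like this. It's purely illustrative, since the actual routing tables and port model aren't public:

```python
# Address-based forwarding, in the spirit of the Ethernet analogy: the switch
# keys on the destination address and never inspects the payload type.
# (Illustrative only -- the real routing scheme isn't public.)

from typing import NamedTuple

class Packet(NamedTuple):
    dst: int           # destination adapter address
    payload_type: str  # ignored by the switch
    payload: bytes

ROUTING_TABLE = {2: 0, 3: 1}   # destination address -> output port

def forward(pkt: Packet) -> int:
    """Pick the output port using only the destination address."""
    return ROUTING_TABLE[pkt.dst]

for pkt in (Packet(2, "PCIe", b"tlp"), Packet(3, "DisplayPort", b"pixels")):
    print(f"{pkt.payload_type} packet for address {pkt.dst} -> port {forward(pkt)}")
```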
It's quite simple - the input to the T-Bolt switch is 10 Gbps full duplex. It doesn't need to have more capacity than that.
Each port is 2x10Gbps, full-duplex. The beauty of protocols like FireWire and Thunderbolt is that each node can communicate directly with another. If you want to clone the data on one Thunderbolt drive to another, you can do so with little to no involvement of the host CPU, PCH, or system memory. After you initiate the copy, PCIe packets just flow from one drive to the other. This is not a host arbitrated bus like USB.
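Schematically, the difference between a host-arbitrated bus and a peer-to-peer copy comes down to whether host memory sits in the data path. This is a conceptual sketch, not real driver code:

```python
# Conceptual contrast between a host-arbitrated bus (USB-style) and a
# peer-to-peer copy. The lists stand in for drives; the point is only
# whether host memory sits in the data path.

def host_arbitrated_copy(src_drive: list[bytes], dst_drive: list[bytes]) -> None:
    """USB-style: every block bounces through a host buffer first."""
    host_buffer = []
    for block in src_drive:
        host_buffer.append(block)      # device -> host memory
    for block in host_buffer:
        dst_drive.append(block)        # host memory -> device

def peer_to_peer_copy(src_drive: list[bytes], dst_drive: list[bytes]) -> None:
    """Thunderbolt/FireWire-style: once initiated, data flows device to device."""
    for block in src_drive:
        dst_drive.append(block)        # device -> device, host stays out of it

source = [b"block0", b"block1", b"block2"]
clone: list[bytes] = []
peer_to_peer_copy(source, clone)
assert clone == source
```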
I'd prefer the "route DP to one output", so that the T-Bolt switch could be connected directly to the computer. Otherwise, you'd have to put the switch after the monitor.
Native DisplayPort displays can only be the last device in a chain, because they terminate the chain. With a Thunderbolt switch, it wouldn't really matter where you connected them, since they wouldn't be blocking the only way of extending the chain.
I was trying to build it from off-the-shelf components. And since the input is limited to 10 Gbps, your fancy special silicon would perform the same as mine.
And why on earth would you think that you'd need a cross-bar switch? Does T-Bolt support peer-to-peer PCIe transfers, or is it all master-slave (CPU-device)? (While peer-to-peer is part of the PCIe standard, it is seldom used.)
If it's master-slave, the 10 Gbps limit is fine.
I see where you're coming from with the off-the-shelf thing. I think we'd end up with a very expensive and not hugely functional switch using what's available (or not-so-available) right now for controllers.
A single Thunderbolt channel is 10Gbps, but every port is dual-channel, and hence 20Gbps. Just because current controllers are limited to 10Gbps of PCIe I/O doesn't mean that the architecture can be reduced to 10Gbps of total bandwidth. The cross-bar switch in a Light Ridge chip looks to be capable of switching no fewer than eight 10Gbps, full-duplex channels.
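Just to put numbers on that, taking the 8-channel crossbar at face value (which is speculation on my part, not a published spec):

```python
# Channel math for the claim above. The 8-channel crossbar figure is
# speculation, not a published spec.

channel = 10e9            # one Thunderbolt channel, per direction
port = 2 * channel        # every port is dual-channel: 20 Gbps
crossbar = 8 * channel    # speculated Light Ridge crossbar, per direction

print(f"Per port: {port / 1e9:.0f} Gbps each way")
print(f"Crossbar: {crossbar / 1e9:.0f} Gbps each way, full duplex")
```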
As I mentioned above, Thunderbolt is in theory peer-to-peer, both for PCIe and DisplayPort packets.
You may be right, or you may be wrong. Neither of us can say which.
It often turns out that I'm wrong about things I've said on this forum, but I still enjoy the mental exercise that comes along with rampant speculation.