This is probably the biggest flaw so far for TB. There's something so wrong with the tech when you can only daisy-chain one device after another: no splitting, no hubs.
It is not a flaw if you are looking for low-latency connectivity. The routing is much simpler, so the latency is much lower. If you make the routing complicated, then either the latency goes up or you need a substantially more expensive switch. There are 10-40Gb Ethernet and InfiniBand switches, but you are not going to pay $20 for one any time soon.
Nevertheless, I don't see any technical limitation to putting two TB chips in one box. You just need a PCIe bus between those chips.
That would make that system a "host", which has different certification constraints (i.e., it must be a provider of DisplayPort output). Same reason there are no "PCIe data only" hosts. It isn't so much a technical constraint as a specification-compliance one.
Frankly, it is probably a bad idea from a technical standpoint too, if you look at the overall network. Thunderbolt is primarily oriented toward transporting slower protocols over its network. That means Thunderbolt needs to be substantially faster than those other protocols for this to work well. What you essentially want is a fat tree network ( http://en.wikipedia.org/wiki/Fat_tree ).
Linking two Thunderbolt networks (each of those two controllers is on a different TB network) with just an x4 link across the top of those two "trees" isn't going to be a fat tree. You'd need something like a single host that could devote a separate x4 (x8 total) to each: essentially a hub inside a personal computer that is a TB host.
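The bandwidth arithmetic behind that point can be sketched roughly as follows. The numbers are illustrative assumptions, not spec values: PCIe 2.0 is taken as ~500 MB/s of usable bandwidth per lane, and each Thunderbolt controller is assumed to be fed by an x4 link.

```python
# Rough sketch of the "not a fat tree" argument above.
# Assumption: ~500 MB/s usable per PCIe 2.0 lane, x4 per TB controller.

PCIE2_PER_LANE_MBS = 500

def link_bw(lanes):
    """Approximate bandwidth of a PCIe 2.0 link, in MB/s."""
    return lanes * PCIE2_PER_LANE_MBS

# Two TB networks ("leaves"), each fed by its own x4 link.
leaf_bw = 2 * link_bw(4)

# Option A: bridge the two controllers over a single shared x4 link.
shared_uplink = link_bw(4)
print(leaf_bw / shared_uplink)      # oversubscription ratio: 2.0

# Option B: a host that devotes a separate x4 to each (x8 total).
dedicated_uplink = 2 * link_bw(4)
print(leaf_bw / dedicated_uplink)   # ratio: 1.0, no bottleneck at the top
```

With a single shared link, cross-network traffic contends for half the bandwidth the leaves can generate, which is exactly the fat-tree property being violated.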
The problem is that most personal computers don't have a budget of 8 lanes to "blow" on Thunderbolt. Mainstream Intel designs only have a total of 8 lanes on the I/O Hub controller. The CPU isn't that much better, with just 16. That is about 1/3 of the total PCIe budget for the entire system being "blown" on TB. That isn't particularly balanced, nor, I suspect, in great demand, given the necessary sacrifice of removing other PCIe-based controllers from the personal computer.
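The "about 1/3" figure falls out of the lane counts cited above:

```python
# Lane budget from the paragraph above (mainstream Intel figures as cited).
ioh_lanes = 8            # PCIe lanes on the I/O Hub controller
cpu_lanes = 16           # PCIe lanes off the CPU
total_lanes = ioh_lanes + cpu_lanes   # 24-lane system budget

tb_lanes = 8             # lanes devoted to a dual-controller TB host

print(round(tb_lanes / total_lanes, 3))   # 0.333, i.e. roughly 1/3
```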
The real problem here is of course the price, and that's why I see TB as more dead than Blu-ray (which actually had the same problem in the beginning: the tech was too expensive...).
For the folks drinking the "One port to rule them all" kool-aid, it is dead. Thunderbolt is doing OK. Utilization is still growing. It never was going to be a USB "killer". Price isn't scaring off vendors as much as Intel being the single-source supplier. The growth is quite unlike Blu-ray, since Thunderbolt isn't going through a "format war" (Blu-ray vs. HD DVD), as much as some folks want to turn Thunderbolt vs. USB 3.0 (and the recently proposed USB 3.0 speed bump) into one.
Actually, that's the fault of the 2 biggest players in the industry: Intel & Apple. Both neglected USB 3.0 too long,
Not really. First, Thunderbolt is not a replacement for, or equivalent to, USB 3.0. Second, it actually helped the USB 3.0 market to have multiple implementers. If Intel had come in early with a discrete USB 3.0 implementation, they would probably have squashed the multiple-implementer market. Weaving USB 3.0 into an integrated core I/O chipset too soon would have been a mistake. Jacked-up core I/O chipsets with bugs can throw a hiccup into the CPU tick/tock cycle. (In fact, it has, both this year and last: last year on some issues with SATA and this year with USB. The more stuff integrated, the more likely bugs pop out.) Only mature protocols should be woven in. If Intel was only ever going to do integrated USB 3.0, they were largely on time.
Apple was a bit late, but that in no way inhibited USB 3.0 all that much. Apple has less than 8% of the PC market. Their little 8% was no more going to drive rapid industry-wide adoption of USB 3.0 than it has driven industry-wide adoption of Thunderbolt. You can't argue both that TB is a non-factor and that Apple could have driven industry adoption at the time.
Apple has leaned on the crutch of "just going to pay attention to the Intel USB 3.0 controller" too long. The TB display would require a non-Intel controller, so they were going to have to support at least one discrete controller eventually. Given the problems NEC/Renesas had with UASP/UAS, it probably was a good call to wait for someone other than the first mover in the USB 3.0 implementer market. (I suspect they'll pick Fresco Logic, but it likely isn't going to be the NEC one.) UAS/UASP didn't settle down until after USB 3.0 initially launched.
If Apple hadn't been rolling out TB, the delay would have been off target. Given that they were more motivated by TB, the order makes sense. Apple's inability to push out an iOS 7 upgrade without impacting the OS X rollout doesn't speak well of their ability to walk and chew gum at the same time. I doubt they would have done well trying to do TB and USB 3.0 inside the same model year.
when they were the only players big enough to make the compatibility problems go away. Now that Intel's chipsets are the industry standard for USB 3.0 compliance, I'd guess that compatibility issues will be a thing of the past.
They? Apple isn't an implementer, and they haven't driven TB to industry adoption. So there is no "they". What you don't want with an industry standard like USB is Intel and its de facto implementation quirks driving the standard. What is needed is a standard that everyone is trying to comply with and that gets incrementally clarified in the first couple of years in a way that is fair to all implementers, not just one. That is what makes a long-term successful industry standard. You end up with multiple quality implementers.
When Intel was trying to ram through their vision of fiber-based USB 3.0 as the future, there was backlash. Same in the 1.1 (2.0) transition, when Intel de facto drove the implementation.