First, thanks for the very nice answers. I really learn from these, and I like playing with this kind of thought.

Frankly, it is probably a bad idea from a technical standpoint as well, if you look at the overall network. Thunderbolt is primarily oriented toward transporting slower protocols over its network. That means Thunderbolt needs to be substantially faster than the others for this to work well. What you essentially want is a fat tree network ( http://en.wikipedia.org/wiki/Fat_tree ).
Linking two Thunderbolt networks ( each of those two controllers is on a different TB network ) with just an x4 link across the top of those two "trees" isn't going to be a fat tree. You'd need something like a single host that could devote a separate x4 ( x8 total ) to each. Essentially a hub inside a personal computer that is a TB host.
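As a rough back-of-envelope check on that ( my numbers, assuming first-generation parts: PCIe 2.0 at 5 GT/s per lane with 8b/10b encoding, and two 10 Gb/s Thunderbolt channels per controller ):

```python
# Back-of-envelope numbers for the fat-tree argument above.
# Assumptions: PCIe 2.0 at 5 GT/s per lane with 8b/10b encoding (~4 Gb/s usable
# per lane); each Thunderbolt controller carries two 10 Gb/s channels downstream.

PCIE2_GBPS_PER_LANE = 5.0 * (8 / 10)   # ~4 Gb/s usable per PCIe 2.0 lane
TB_CHANNELS_PER_CONTROLLER = 2
TB_GBPS_PER_CHANNEL = 10.0

downstream = TB_CHANNELS_PER_CONTROLLER * TB_GBPS_PER_CHANNEL  # 20 Gb/s per controller
uplink_x4 = 4 * PCIE2_GBPS_PER_LANE                            # ~16 Gb/s

# One x4 link shared by two controllers: each controller's share of the trunk
# is half of what its own downstream side can carry.
shared_per_controller = uplink_x4 / 2

print(f"downstream per controller: {downstream:.0f} Gb/s")
print(f"dedicated x4 uplink:       {uplink_x4:.0f} Gb/s")
print(f"shared x4, per controller: {shared_per_controller:.0f} Gb/s")
```

So a dedicated x4 already only roughly matches one controller's downstream side; splitting a single x4 between two controllers halves that again, which is why it stops looking like a fat tree.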
The problem is that most personal computers don't have a budget of eight lanes to 'blow' on Thunderbolt. Mainstream Intel designs only have a total of eight lanes on the I/O Hub controller. The CPU isn't much better with just 16. That is roughly 1/3 of the total PCI-e budget for the entire system being 'blown' on TB. That isn't particularly balanced, nor, I suspect, in great demand given the necessary sacrifice of removing other PCI-e based controllers from the personal computer.
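Spelled out, that lane math ( taking the 8-lane I/O hub and 16-lane CPU figures above at face value ):

```python
# Lane-budget arithmetic from the post, taken at face value:
# an I/O hub (PCH) with 8 PCIe lanes plus 16 CPU lanes = 24 lanes total.
pch_lanes, cpu_lanes = 8, 16
dual_tb_lanes = 2 * 4          # a dedicated x4 per Thunderbolt controller
total = pch_lanes + cpu_lanes

print(f"{dual_tb_lanes}/{total} lanes = {dual_tb_lanes / total:.0%} of the PCIe budget")
# -> 8/24 lanes = 33% of the budget, i.e. roughly the 1/3 mentioned above
```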
I think TB could loosen up from the ideal fat-tree topology. This would make it a lot more useful, and very few users would ever notice any hiccups.
I wasn't suggesting that mainstream computers would run dual TB from their limited PCIe resources.
A better idea would be a separate hub that can be used only when needed. This hub would sit topologically at the root level of the tree, i.e. the first thing after the computer. The downstream side would be 200% overprovisioned relative to the pipe from the hub to the computer, but I guess only on very rare occasions would that become a hindrance. This hub could have more than 2 ports and the necessary number of controllers inside it. I'd guess that in almost all use cases the problems with daisy-chaining are about physical connections, not about bandwidth or lag. In very few cases are all TB devices working at full speed all the time, so there is usually spare bandwidth available.
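To put a rough number on that ( one way to read the 200% figure, using a hypothetical three-controller hub; the controller count and link figures are illustrative assumptions, not specs of any real product ):

```python
# One way to read the "200% overprovisioned" figure above: a hypothetical hub
# with three Thunderbolt controllers behind a single uplink to the computer.
TB_GBPS_PER_CONTROLLER = 20.0   # two 10 Gb/s channels each (assumed)
UPLINK_GBPS = 20.0              # one controller's worth of uplink (assumed)
controllers = 3

downstream_total = controllers * TB_GBPS_PER_CONTROLLER
excess = (downstream_total - UPLINK_GBPS) / UPLINK_GBPS

print(f"downstream {downstream_total:.0f} Gb/s vs uplink {UPLINK_GBPS:.0f} Gb/s "
      f"-> {excess:.0%} more downstream capacity than the pipe to the computer")
# This only becomes a bottleneck if several devices saturate their links at the
# same time, which, as argued above, is rare in practice.
```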
The other option, if we want to stick strictly to a clean fat tree, would be to make a hardware switch. You could attach many devices to the switch and, with a nice GUI, choose which one is "connected". No fiddling with cables.
Not really. First, Thunderbolt is not a replacement for or equivalent to USB 3.0. Second, it actually helped the USB 3.0 market to have multiple implementers. If Intel had come in early with a discrete USB 3.0 implementation, they would probably have squashed the multiple-implementer market. Weaving USB 3.0 into an integrated core I/O chipset too soon would be a mistake. Jacked-up core I/O chipsets with bugs can throw a hiccup into the CPU tick/tock cycle. ( In fact, it has happened both this year and last, with some SATA issues and, this year, USB issues. The more stuff that is integrated, the more likely bugs are to pop up. ) Only mature protocols should be woven in. If Intel was only ever going to do integrated USB 3.0, they were largely on time.
Apple was a bit late, but that in no way inhibited USB 3.0 all that much. Apple has less than 8% of the PC market. Their little 8% was no more going to drive rapid industry-wide adoption of USB 3.0 than it has driven industry-wide adoption of Thunderbolt. You can't argue that TB is a non-factor while also claiming that Apple could have driven industry adoption at the time.
Hmm, they? Apple isn't an implementer, and they haven't driven TB to industry adoption, so there is no "they". What you don't want with an industry standard like USB is Intel, with their de facto implementation quirks, driving the standard. What is needed is a standard that everyone is trying to comply with, one that gets incrementally clarified during the first couple of years in a way that is fair to all implementers, not just one. That is what makes a successful long-term industry standard. You'll end up with multiple quality implementers.
So USB 3.0's awfully long time to mature is not Intel's and Apple's fault because:
1) TB is not USB 3.0
2) discrete chips from other vendors were a good thing even when they didn't work
3) Apple couldn't have helped because they are so small
Sorry, but these just don't cut it for me, because:
1) 3rd-party discrete chips would have been needed anyway, even if Intel had rushed real working specs through. The difference is that then those chips would work great; now they don't.
2) De facto quirks are needed when specs are not mature. Otherwise you end up with what has happened: half a dozen different de facto quirks from small companies that don't work together.
A better option would have been a de facto quirk from a giant like Intel that all the small players would have to follow, just like with USB 2.0 a decade earlier.
Apple could also have used their muscle here. They are still the biggest, and about the only, player who makes both the hardware and the software for their machines. If working USB 3.0 had been rolled out in 2009 and Apple had adopted it in 2010, there would not have been these false hopes about TB either. Apple invited the comparison themselves by offering TB as the high-speed interconnect while not offering USB 3.0. They were the last manufacturer to offer USB 3.0. The way Apple handled both TB and USB 3.0 shows just how little they care about Macs being "state of the art" anymore. Their product philosophy has long been fewer models with better profits, so it's only natural that they concentrate on mainstream products, and something like TB means very little to that.
The sad thing here is that when Apple loses interest in something (Xserve, Xserve RAID, Mac Pro, Shake, FCP, Color, even Aperture, maybe even OS X), they don't sell it off to those who care and could continue development. They just axe it.
Still the question remains: why don't USB 3.0 devices work as well with Macs as with other computers? What has Apple done wrong, and why don't they fix it? Is this a way of telling customers "we told you TB is better, but you didn't believe us..."? Or do they just not care?