It is still very early in the technology cycle.
I voted you up because in a sense I agree, but you also need to understand that it is very early in the technology adoption cycle. There is also a perception in the marketplace that Thunderbolt is a replacement for USB; it isn't. It will be a long time before hardware costs come down to USB ranges.
I think Thunderbolt is a great spec. But until there are more peripheral devices to use with it, it really doesn't matter how fast it is...
Yep, specs are worthless without hardware implementing them. Just realize that such hardware does not appear overnight. People need to think back to the first days of USB: Apple led that revolution, but it took years for USB to fully displace the other hardware of the time.
The existing PCIe 2.0 spec is lightning fast; if only we could get it showing up on docking stations, external hard drives, external graphics/physics cards, etc.
A few comments:
I believe Apple's primary motivation for implementing TB was to support docking monitors for its laptops. As you know, that is where some of its first hardware appeared. So from Apple's standpoint, they have already implemented their part.
Due to the inherent costs of TB, I don't see it being cost-effective for anything less than a disk array. Right now the TB cable itself is half the cost of a drive, and then you need to add a TB-specific adapter for the drive. So for the next couple of years one would expect most TB-based "drives" to be disk arrays. An interesting exception here will be SSDs, where there are now PCI Express controllers available to build the SSD around.
As to external graphics or physics cards, sorry, but I don't see the wisdom in such devices. TB simply isn't fast enough to justify the costs, especially if you lose much of the card's performance because of it (some rough numbers below). Cards for computation might be viable, but even then I don't see the value versus simply buying a Mac with a discrete GPU. To be viable, external boxes for computation would have to be pretty high-performance relative to what could be accomplished with a built-in GPU. This relates to another thread I've taken part in about compute nodes; the problem is how one sustains a business on such hardware, especially when the HPC sector is entrenched in the 1U compute server.
The economics of external compute devices are very difficult to get a handle on. For one thing, you would need a big box just to house the power supply and fans. We are talking 300-watt and larger power supplies, plus the GPU and support electronics. While it is easy to build such a box, it isn't so easy to stay in business selling them. It will be very interesting to see what the market does here.
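To put some rough numbers behind the bandwidth point above (this is my own back-of-the-envelope illustration, not something from the original post): first-generation Thunderbolt carries 10 Gbit/s per channel, while PCIe 2.0 moves roughly 500 MB/s per lane in each direction.

    Thunderbolt channel:  10 Gbit/s
    PCIe 2.0 x16 slot:    500 MB/s x 16 lanes = 8 GB/s, or about 64 Gbit/s

So an external card hanging off a single TB channel sees something like a sixth of the host bandwidth an internal x16 card gets, which is why so much of the card's performance would be left on the table.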