Is there a good reason for all those ICs to be built into the cable?* Surely it would have been more sensible to put all the processing hardware into the computer/peripheral at the socket, leaving the cable as just that - a cable.
(*unless, the conspiracy theorist might add, you're trying to make your laptop ultra-thin and create a nice money-making scheme into the bargain...)
Apologies if I've missed something obvious, but I still don't see the (technical) reason why all that tuning/multiplexing hardware should be built into the cable, rather than into the ports at either end...?
I pondered the active vs. passive decision for some time, and wasn't quite sure what to make of Apple's choice. 10 Gbps x2 is tricky territory for a consumer cable, but passive twinax cables that support 10 and even 14 Gbps x4 do exist, although none of them cost less than $49 either. And strangely, active Thunderbolt cables are limited to around 3 m in length, which is the same as passive 14 Gbps QSFP+ cables.
At first I reckoned that including a low-speed signaling pair and bus power (either 3.3 V or 18 V, at up to 10 W) might be complicating the issue. But now I think it all comes down to the connector: pushing a peak aggregate throughput of 40 Gbps through a Mini DisplayPort connector is a big ask.
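For what it's worth, the 40 Gbps figure falls out of some quick arithmetic. Here's a minimal sketch in Python, assuming the 10.3125 Gbps line rate carries 64b/66b encoding; that's my own inference from the number itself (10.3125 = 10 × 66/64, same as 10GbE), not something Intel has spelled out:

```python
# Back-of-the-envelope Thunderbolt throughput and bus-power arithmetic.
# The 64b/66b encoding is an assumption inferred from the line rate.

LINE_RATE_GBPS = 10.3125       # per channel, per direction
ENCODING_EFFICIENCY = 64 / 66  # assumed 64b/66b, as in 10GbE
CHANNELS = 2
DIRECTIONS = 2                 # full duplex

data_rate = LINE_RATE_GBPS * ENCODING_EFFICIENCY  # 10.0 Gbps per channel
aggregate = data_rate * CHANNELS * DIRECTIONS     # 40.0 Gbps peak aggregate
print(f"Aggregate throughput: {aggregate:.1f} Gbps")

# Bus power: 10 W delivered at either voltage rail.
for volts in (3.3, 18.0):
    print(f"{volts:>4} V rail -> {10.0 / volts:.2f} A")  # 3.03 A vs 0.56 A
```

Note that at 18 V the 10 W ceiling works out to only ~0.56 A, which is easy on thin cable conductors; the same power at 3.3 V means roughly 3 A.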
If we look at the history here, up until six months or less before the first Thunderbolt-equipped Macs rolled off the line, Intel demonstrated Light Peak exclusively over optical media. Intel's Light Peak controller looked almost exactly like the first-gen Thunderbolt controllers did, but it was connected by traces only an inch or two long to an optical transceiver module. The controllers appeared to be 4-channel (and were marked "LR A2", for code name "Light Ridge" perhaps?), while the optical modules were only 2-channel. This meant that a full 4-channel setup required two optical modules, each of which occupied more board space than the already sizable controller chip.
So right off the bat we have some issues. Light Peak is clearly a killer I/O interface for mobile devices, yet we have a solution that requires several rather large components that consume valuable board real estate as well as a significant amount of power. The optical components also added another $5-$10 to the BOM cost of what was already an outlandishly expensive technology for the PC market. Most OEMs were not terribly interested. The real deal-breaker, though, was that despite appearing very involved in testing, Intel didn't actually make any part of the optical hardware for Light Peak. That was all sourced from a consortium including SAE Magnetics, Avago, Oclaro, Enablence, IPtronics, Ensphere, Foxconn, FOCI and Corning.
Intel talked a big game about the optical transition being the chance for unification on the I/O front. They also used modified USB 3.0 connectors for most of their Light Peak demonstrations. I think they may have been hoping to go down the path of standardization, and possibly even have Light Peak rolled into the USB specification at some point.

However, they had two eager customers who wanted the technology sooner rather than later and weren't concerned in the slightest about standards: Apple and Sony. Apple was willing to commit in a major way, but they also wanted an exclusive. There was no way to make that kind of deal with nearly a dozen players involved, so instead Apple and Intel cut everybody else out. Apple would bring their own trademark, Thunderbolt, and their own PHY based on the Mini DisplayPort connector, which they had developed and thus held the license to. (Did anyone ever believe that Intel came up with the "Thunderbolt" moniker? If they had named it, they would have opted for something far more romantic, like "Intel CV82524EF/L Converged High-Speed Packet Based Controller (Formerly Light Ridge)".)
Not ones to miss an opportunity, Intel still sold their controllers to Sony, who used them in their Vaio Z notebooks and media dock via a "proprietary" optical implementation that was essentially identical to the Light Peak demonstration hardware. Through some careful couching of words by Sony and Intel, they could claim that this wasn't really Thunderbolt or USB 3.0, and therefore didn't infringe on any licensing agreements that might already be in place.
Now the issue that Apple faced was that they had an exclusive deal on a super-fast controller that was only designed to push a signal down about two inches of copper. Repurposing the pins on the mDP connector and locating it close enough to the controller wasn't too hard, but the only way to propagate that signal down any reasonable length of cable required additional circuitry. With only six months to solve this problem, Apple adapted some more or less off-the-shelf components from the telecom industry and created the active Thunderbolt cable. This was probably the only solution that would also allow for backwards and forwards compatibility with future controllers or different media.
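To see why an inch or two of board trace is workable while a couple of meters of thin passive cable is not, a rough loss-budget sketch helps. Every dB figure below is my own ballpark assumption for illustration, not a measured or published Thunderbolt number:

```python
# Rough insertion-loss budget for an unequalized 10.3125 Gbps link
# (signal content up around the ~5 GHz Nyquist frequency).
# All dB values are illustrative assumptions, not specs.

RX_BUDGET_DB = 15.0         # assumed total loss a bare receiver can tolerate
PCB_LOSS_DB_PER_INCH = 1.0  # assumed FR-4 trace loss at these frequencies
CABLE_LOSS_DB_PER_M = 8.0   # assumed thin consumer-gauge twinax loss

# Two inches of board trace, as on the Light Peak demo boards:
pcb_loss_db = 2 * PCB_LOSS_DB_PER_INCH  # ~2 dB, comfortably within budget

# Whatever budget remains caps the reach of a purely passive cable:
passive_reach_m = (RX_BUDGET_DB - pcb_loss_db) / CABLE_LOSS_DB_PER_M
print(f"Passive reach with these numbers: ~{passive_reach_m:.1f} m")
```

With assumptions in that ballpark, an unaided passive run tops out well short of the 2-3 m a consumer expects, which is exactly the gap the re-driver/equalizer ICs in the cable ends are there to close.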
At some point down the road, Intel may well make the Thunderbolt PHY much more integrated and robust. But for now, using a tiny, consumer-oriented, friction-fit, 20-pin connector for two channels of bidirectional 10.3125 Gbps signaling is going to require active cabling. The good news is that multiple vendors are developing silicon specifically targeting this problem, so prices should indeed come down, which was the topic of the original article. In addition to Intersil, I also noticed that TI seems to have a whole range of Thunderbolt products ready for market: http://www.ti.com/ww/en/analog/tps2...?DCMP=hpa_int_thunderbolt&HQS=thunderbolt-bt1