Is there a good reason for all those ICs to be built into the cable?* Surely it would have been more sensible to put all the processing hardware into the computer/peripheral at the socket, leaving the cable as just that - a cable.

(*unless, the conspiracy theorist might add, you're trying to make your laptop ultra-thin and create a nice money-making scheme into the bargain...)

Apologies if I've missed something obvious, but I still don't see the (technical) reason why all that tuning/multiplexing hardware should be built into the cable, rather than into the ports at either end...?

I pondered the active vs. passive question for some time, and wasn't quite sure what to make of Apple's decision. 10 Gbps x2 is tricky territory for a consumer cable, but passive twinax cables that support 10 and even 14 Gbps x4 do exist, although none of them costs less than $49 either. And strangely, active Thunderbolt cables are limited to around 3m in length, which is the same maximum as passive 14 Gbps QSFP+ cables.

At first I reckoned that including a low speed signaling pair and bus power that can provide either 3.3V or 18V and up to 10W to devices might be complicating the issue. But now I think it all has to do with the connector; pushing a peak aggregate throughput of 40 Gbps through a Mini DisplayPort connector is a big ask.
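
For anyone who wants to check the math on that 40 Gbps figure, here's a quick sketch. The 64b/66b encoding is my assumption, based on Thunderbolt sharing its 10.3125 GBaud line rate with 10GbE:

```python
# Back-of-the-envelope check of the 40 Gbps aggregate figure.
line_rate = 10.3125        # GBaud per channel
efficiency = 64 / 66       # 64b/66b line coding, assumed (same as 10GbE)
channels = 2               # two full-duplex Thunderbolt channels
directions = 2             # each channel has a TX pair and an RX pair

per_channel = line_rate * efficiency              # = 10.0 Gbps of payload
aggregate = per_channel * channels * directions   # = 40.0 Gbps peak aggregate
print(f"{per_channel:.1f} Gbps per channel, {aggregate:.0f} Gbps aggregate")
```

All of that has to squeeze through a connector that was never designed for it.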

If we look at the history here, up until 6 months or less before the first Thunderbolt-equipped Macs rolled off the line, Intel demonstrated Light Peak exclusively using optical media. Intel's Light Peak controller looked almost exactly like the 1st-gen Thunderbolt controllers did, but it was connected by traces only an inch or two long to an optical transceiver module. The controllers appeared to be 4-channel (and were marked "LR A2", for code name "Light Ridge" perhaps?), while the optical modules were only 2-channel. This meant that a full 4-channel setup required two optical modules, each of which occupied more board space than the already sizable controller chip.

So right off the bat we have some issues. Light Peak is clearly a killer I/O interface for mobile devices, yet we have a solution that requires several rather large components that consume valuable board real estate as well as a significant amount of power. The optical components also added another $5-$10 to the BOM cost of what was already an outlandishly expensive technology for the PC market. Most OEMs were not terribly interested. The real deal-breaker, though, was that despite appearing very involved in testing, Intel didn't actually make any part of the optical hardware for Light Peak. That was all sourced from a consortium including SAE Magnetics, Avago, Oclaro, Enablence, IPtronics, Ensphere, Foxconn, FOCI and Corning.

Intel talked a big game about the optical transition being the chance for unification on the I/O front. They also used modified USB 3.0 connectors for most of their Light Peak demonstrations. I think they may have been hoping to go down the path of standardization, and possibly even have Light Peak rolled into the USB specification at some point. However, they had two eager customers that wanted the technology sooner rather than later and weren't concerned in the slightest about standards: Apple and Sony. Apple was willing to commit in a major way, but they also wanted an exclusive. There was no way to make that kind of deal with nearly a dozen players involved, so instead Apple and Intel cut everybody else out. Apple would bring their own trademark, Thunderbolt, and their own PHY based on the Mini DisplayPort connector which they had developed and thus held the license to. (Did anyone ever believe that Intel came up with the "Thunderbolt" moniker? If they had named it, they would have opted for something far more romantic, like "Intel CV82524EF/L Converged High-Speed Packet Based Controller (Formerly Light Ridge)".)

Not ones to miss an opportunity, Intel still sold their controllers to Sony, who used them in their Vaio Z notebooks and media dock via a "proprietary" optical implementation that was essentially identical to the Light Peak demonstration hardware. Through some careful couching of words by Sony and Intel, they could claim that this wasn't really Thunderbolt or USB 3.0, and therefore didn't infringe on any licensing agreements that might already be in place.

Now the issue that Apple faced was that they had an exclusive deal on a super fast controller that was only designed to push a signal down about 2 inches of copper. Repurposing the pins on the mDP connector and locating it close enough to the controller wasn't too hard, but the only way to propagate that signal down any reasonable length of cable required additional circuitry. With only six months to solve this problem, Apple adapted some more or less off-the-shelf components from the telecom industry and created the active Thunderbolt cable. This was probably the only solution that would also allow for backwards and forwards compatibility with future controllers or different media.

At some point down the road, Intel may well make the Thunderbolt PHY much more integrated and robust. But for now, using a tiny, consumer oriented, friction fit, 20-pin connector for 2 channels of bidirectional, 10.3125 Gbps signaling is going to require active cabling. The good news is that multiple vendors are developing silicon specifically targeting this problem, and thus prices should indeed come down—which was the topic of the original article. In addition to Intersil, I also noticed that TI seems to have a whole range of Thunderbolt products ready for market: http://www.ti.com/ww/en/analog/tps2...?DCMP=hpa_int_thunderbolt&HQS=thunderbolt-bt1
 
Yeah, it's a Monoprice Mini DisplayPort to DVI cable. I have no idea why it's so freakin' huge either. And yes, it's a retina MBP.

Yikes. None of mine have been that big. It took me a minute to figure out what they might be, as that's where the ethernet port is on my MBP. :)
 
Thank you for the polite reply.

Any "optical" T-Bolt cable today is a hybrid Cu-optical cable - it has copper connectors, with the copper protocols (and speeds) bridged across an optical segment.

True optical does not exist.

Apologies for the tone of my post. I might have been a bit fired-up when I wrote that.

As I said in my previous post, there is always going to be an electrical/optical boundary, and whether it is on one side of the physical connector or the other is immaterial. Everything to do with bit rates and protocols is handled by the upper layers and is the domain of the Thunderbolt controller. All the optical transceiver does is convert the voltage levels carried by the differential signaling pairs coming out of the controller into light impulses.

[Image: Light Peak optical transceiver module]


Whether this bit resides inside the PC or the cable connector does not make one cable necessarily better or "more optical" than another. You can argue that locating the VCSELs and photodiodes in the connector makes the cable no longer purely optical, but for most consumer applications, a purely optical cable has more drawbacks than benefits. It would also involve creating 4 pairs of tiny lenses that can withstand hundreds if not thousands of mating cycles, plus exposure to abrasion and any number of environmental contaminants. Meanwhile, electrical contacts are proven technology that is cheap to manufacture, and you need copper in the connector anyway if you want to provide bus power.

If you're looking for Truth, I suggest talking to the folks over at the LHC. ;)
 
Apologies for the tone of my post. I might have been a bit fired-up when I wrote that.

As I said in my previous post, there is always going to be an electrical/optical boundary, and whether it is on one side of the physical connector or the other is immaterial.

My post was responding to a post saying that optical could do 100 Gbps - and for the current T-Bolt v1.0 technology it's not immaterial to point out that "true optical" does not exist, and that 100 Gbps optical is on a future roadmap, as I said.


If you're looking for Truth, I suggest talking to the folks over at the LHC. ;)

I do - I worked on the LHC for five of the years that I lived in Switzerland - and I still keep in touch with a handful of people. And they're really excited that the Higgs has virtually been found.
 
The pictures are very deceiving. Here's a pic of mine for scale, next to a USB connector, a MagSafe 2 connector, a headphone plug, and what for some reason is an unnecessarily large MDP->DVI cable. It's maybe about a centimeter longer than the USB connector, but not long enough to be a problem, I don't think.

[Image: http://i.imgur.com/7ZbMD.jpg]

Oh, that's not as bad as I'd thought. It still looks to be coming out at a bit of a non-perpendicular angle, though; is that caused by that behemoth next to it?
 
For now, Thunderbolt has its pro uses, like ultra-fast external RAID and video capture, but for general consumers it is a more expensive alternative to USB 3. Thunderbolt needs lower prices and, more importantly, peripherals only possible on it, like external graphics cards that would boost performance on something like a MacBook Air, as well as long cables.
 
Oh, that's not as bad as I'd thought. It still looks to be coming out at a bit of a non-perpendicular angle, though; is that caused by that behemoth next to it?

It can't be, as they are all bent upwards. Probably just cable strain.

For now, Thunderbolt has its pro uses, like ultra-fast external RAID and video capture, but for general consumers it is a more expensive alternative to USB 3. Thunderbolt needs lower prices and, more importantly, peripherals only possible on it, like external graphics cards that would boost performance on something like a MacBook Air, as well as long cables.

I agree with the first part, and in reality, most consumers, even power users like me, are fine with USB 2. TB is great for pros now. I don't think it will ever be relevant for consumers. Consumers don't even want USB 3, but it will just become the standard, and no one will care one way or the other. If people happen to plug a USB 3 device into a USB 3 port, it will have the extra bandwidth available; if not, they probably won't care that it's running at USB 2 speed. Eventually everything will be USB 3 by default, and that will be that.
 
My post was responding to a post saying that optical could do 100 Gbps - and for the current T-Bolt v1.0 technology it's not immaterial to point out that "true optical" does not exist, and that 100 Gbps optical is on a future roadmap, as I said.

So are you saying that Sumitomo's optical Thunderbolt cables are "false optical", or that they don't actually exist?

I do wholeheartedly agree that 100 Gbps is not in the cards for the current generation of Thunderbolt.

I think there's a general failure by many people to understand that although fiber can provide tremendous bandwidth, it is still limited by the silicon that is driving the signal going to it. The way that 40 and 100 Gbps links, both copper and optical, are achieved today is through the aggregation of multiple 10 Gbps channels. You can buy 10 Gbps x12 cables that bundle 24 optical fibers or 48 copper conductors into a 120 Gbps link, but the connectors are fairly large and the cables are pretty unwieldy. Increasing the number of channels in a Thunderbolt cable is one way to add bandwidth, but the result doesn't align well with the design goals of the interface. In order to make Thunderbolt faster, Intel needs to raise the single channel data rate of the controller.
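
To make the lane math concrete, here's my own tally of those counts:

```python
# Lane-count arithmetic for aggregated links (illustrative, not from a spec).
def link(lanes, gbps_per_lane=10.0):
    fibers = lanes * 2          # one fiber per lane per direction
    conductors = lanes * 2 * 2  # one differential pair per lane per direction
    return lanes * gbps_per_lane, fibers, conductors

for lanes in (4, 12):
    gbps, fibers, conductors = link(lanes)
    print(f"{lanes:2d} lanes: {gbps:5.0f} Gbps, {fibers} fibers or {conductors} copper conductors")
# 12 lanes: 120 Gbps over 24 fibers or 48 conductors, matching the cables above.
```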

So let's look at some common high speed serial interfaces and their per lane signaling rates:

USB 3.0 - 5.0 GBaud
DisplayPort 1.2 - 5.4 GBaud
SATA 6Gb/s - 6.0 GBaud
PCIe 3.0 - 8.0 GBaud
10GbE / Thunderbolt - 10.3125 GBaud
12Gb/s SAS - 12.0 GBaud
FDR InfiniBand / 16GFC Fibre Channel - 14.0625 GBaud
That's it. There is nothing faster on the market, period.

So if adding more lanes to the cable isn't desirable, and the highest-performance silicon in production is only 36% faster than the original Thunderbolt implementation, well, that's not really enough to provide what could be considered a generational advance. That will only happen when single-lane throughput of 25 Gbps can be achieved. The race to this milestone was previously being contested by Mellanox and QLogic, until January of this year when, lo and behold, Intel acquired QLogic's InfiniBand business. Thus the next speed bump to Thunderbolt will presumably follow the EDR InfiniBand rollout, most likely sometime in 2014. Maybe then fake optical cables will become standard.
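
For reference, here's the arithmetic behind those figures (the EDR line rate is my assumption, based on a 25 Gbps payload with 64b/66b encoding):

```python
tb1 = 10.3125    # GBaud, original Thunderbolt / 10GbE line rate
fdr = 14.0625    # GBaud, FDR InfiniBand / 16GFC, fastest shipping silicon
edr = 25.78125   # GBaud, the assumed EDR InfiniBand target rate

print(f"FDR vs. TB: {(fdr / tb1 - 1) * 100:.0f}% faster")  # ~36%, incremental
print(f"EDR vs. TB: {(edr / tb1 - 1) * 100:.0f}% faster")  # 150%, generational
```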
 
The pictures are very deceiving. Here's a pic of mine for scale, next to a USB connector, a MagSafe 2 connector, a headphone plug, and what for some reason is an unnecessarily large MDP->DVI cable. It's maybe about a centimeter longer than the USB connector, but not long enough to be a problem, I don't think.

http://i.imgur.com/7ZbMD.jpg

So much for going wireless :lol:
 
My one (and at the moment only) Thunderbolt cable was a freebie with my iMac... I had no use for it until I bought my Pegasus R4, which is using it at the moment... I read up a bit about the Apple cables not being as good as they should be, folks wrapping them up in foil due to interference etc. Either I got a good one, or a lot of it is just BS... I've put my phone and iPad next to it when it's in action, and neither device disconnects or exhibits odd behaviour... THEN I priced them up... Here in the UK, they are £30 from Apple, a little cheaper on Amazon... Some of the new ones, which promise zero interference etc., are a lot more expensive. As Thunderbolt devices become more mainstream, we should see the price of cables drop... Glad I didn't have to buy one though... it's the kind of thing that really annoys me... You don't get one supplied with the Promise or the Thunderbolt ACDs either.
 
Is there a good reason for all those ICs to be built into the cable?* Surely it would have been more sensible to put all the processing hardware into the computer/peripheral at the socket, leaving the cable as just that - a cable.

(*unless, the conspiracy theorist might add, you're trying to make your laptop ultra-thin and create a nice money-making scheme into the bargain...)

Apologies if I've missed something obvious, but I still don't see the (technical) reason why all that tuning/multiplexing hardware should be built into the cable, rather than into the ports at either end...?

When a piece of cable is cut from a spool, it will have slightly different characteristics from every other cable of the same length, even one cut from the same spool. Little things that can't be easily controlled in manufacturing matter, including things people normally don't think about, like the formulation and thickness of the sheath around the actual wire, since that becomes the dielectric for the capacitor formed by the length of a wire and the adjacent wires. Or the particular thickness of the wire. Or the soldering job on the end of the wire. And so on...
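
To put a rough number on it, here's the textbook approximation for the characteristic impedance of a parallel pair; the example dimensions below are made up, just to show the sensitivity:

```python
import math

# Z0 ~ (120 / sqrt(er)) * ln(2s / d) for a parallel-wire pair, where er is the
# insulation's dielectric constant, s the wire spacing, d the wire diameter.
def z0_parallel_pair(er, spacing_mm, diameter_mm):
    return 120 / math.sqrt(er) * math.log(2 * spacing_mm / diameter_mm)

print(z0_parallel_pair(2.1, 0.67, 0.40))  # nominal pair: ~100 ohms
print(z0_parallel_pair(2.2, 0.65, 0.41))  # slightly different cut: ~93 ohms
```

A few percent of variation in the dielectric or the geometry and the impedance (and therefore the reflections) has already moved noticeably.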

In order to push multi-gigabit/sec of data through a cable, the transceivers need to be tuned to the cable's characteristics in order to increase the likelihood of the data arriving uncorrupted on the other side.

It's kind of like trying to mass-produce tuning forks by cutting them all to the same shape. You still have to fine-tune each one individually because they'll still be slightly different.

There are two ways to go about providing calibration data:
1) include calibration equipment on the device using the cable. (your idea)
2) run the characterization tests after the cable has been assembled with the transceivers, and write the calibration data to the transceivers on the cable itself. (what Apple did)

Why is #2 better than #1? Because the hardware necessary to do the calibration costs more than your laptop. I don't really know the price of ADCs in the 20 gigasample/second range, but digikey.com says that some in the 3 gigasample/second range (not good enough) already cost $700 apiece. So I'm guessing ones that can actually do the job are well over $1k.

So, it's technically possible to put calibration equipment onto the device and then use cheap cables between devices. But I don't think you really want that to happen. :)
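
If it helps, here's a toy sketch of option #2 in code. It's purely illustrative (real equalization involves per-frequency filter taps, not a single gain), but it shows why the expensive step only ever happens once per cable, at the factory:

```python
import random

class Cable:
    def __init__(self):
        # Manufacturing variation: dielectric thickness, wire gauge, solder
        # joints, etc. all nudge this particular cable's attenuation a bit.
        self.attenuation = random.uniform(0.70, 0.90)
        self.eq_gain = None  # calibration data, written at the factory

    def factory_calibrate(self):
        # The only step that needs the expensive multi-gigasample bench gear,
        # and it runs once per cable at the factory, not once per laptop.
        self.eq_gain = 1.0 / self.attenuation

    def transmit(self, level):
        # The host just uses the stored correction; no calibration hardware
        # is needed on the computer side.
        return level * self.attenuation * self.eq_gain

cable = Cable()
cable.factory_calibrate()
print(cable.transmit(1.0))  # ~1.0 regardless of this cable's particular losses
```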
 
So are you saying that Sumitomo's optical Thunderbolt cables are "false optical", or that they don't actually exist?

No, he's saying they're not actually Thunderbolt cables. They're a pair of bridges that happen to re-encapsulate Thunderbolt as it currently works over copper to a fiber medium.

Thunderbolt's specification does not support fiber currently as a transport medium. It will in the future, but we're not there yet.
 
Picture this: in the year 2015, the ONLY ports on most computers are USB 3.0 and Thunderbolt. You no longer have a need for VGA, DVI, HDMI, FireWire, or even Ethernet. All of those could be run through Thunderbolt and suddenly it's much easier to connect devices to computers. Yes, it would certainly take a long time to adopt the technology like that, but it sounds like a convenient world once fully adopted, doesn't it?
Wishing for the disappearance of legacy ports is pretty old-fashioned. It always takes a few decades longer than estimated.
DVI came in 13 years ago and everybody was shouting about how VGA would die, at least once all screens were flat (= digital).
Fast forward 13 years and guess what the most common connector on laptops is: VGA. Even though there are no analog displays anymore. Nobody with any knowledge of ICT would have predicted this a decade ago.
The actual "way things are" is changing. The new Intel chip has embedded support for Thunderbolt, so Windows computers will be able to have these ports without spending so much on Thunderbolt. The new chips will help on the price of the connectors, and market volume will lower the prices as well. Apple chose the right horse.
And Ivy Bridge also has support onboard for USB 3.0. Which can work on Thunderbolt along with the screen and solid state storage. And firwire 800 drives.
Once more and more Ivy Bridge is out there, the device manufacturers will start pumping out affordable things along with more expensive gee-whiz suff.
Imagine a computer in pieces, connected by some future Thunderbolt. The end of the tower. Completely configurable.
Also, complete baloney.
Ivy Bridge does not have TB integrated; maybe Haswell will next year.
Nor does IB have FW integrated; no chipset ever has.
Somehow Apple has always found room for discrete controllers for FW and TB, yet they have always run out of space when they should have put a USB 3 controller (which has always also been the smallest and cheapest) on the motherboard.

Things are really changing, but Apple chose the wrong horse.
Things have changed in that so much money has been invested, in both the corporate world and the private world, in existing devices and infrastructure that today's connections won't just die away. Just like electricity sockets or light-bulb sockets won't die away.
You know there are still PS/2 sockets on motherboards?
The industry has stopped trying to replace RJ45 and is instead trying to figure out how to use it better.
Every year there is more stuff sold, which adds to the legacy weight and therefore slows down "the new generation". This is why 3rd-party manufacturers are not intrigued when just-one-more-port is invented. They won't use it (on a large scale) if there's doubt that most of their customers will need it for at least a decade. There won't be any revolutions in this area anymore, just slow evolution, where new things should be designed with a sharp and broad vision of both the past and the future.

As for TB, Apple hasn't played their cards well for the future. All TB stuff will remain expensive. The technology is just so much more complicated that it can't get affordable without being sold to almost every computer user.
USB 3 will win this without any doubt. Very few will pay 10x the price for 2x the speed, so TB will not become mainstream.
TB is also getting old before it is even adopted.
It will not replace the workstation for those who need bandwidth, and for those who don't, USB 3 is enough.
People are already asking for big retina displays. These will saturate the current TB in no time. After that, you'll have a faster connection to your storage via USB 3 than via TB.
If they want to keep DP integrated into TB, they need a new version of TB, which will be even more expensive than the current one, and all current TB stuff will become obsolete. This is just a dead end.
Maybe they'll just have to keep TB on the back burner like FW and see if it succeeds after several years. Maybe they have learnt something from Intel's mistake with Rambus' RDRAM.
Consumers don't know what HD-SDI is, but it is a vital connection for big industry, and that's how it should be. There's no reason to market it to consumers as a new way to hook up displays and make things more expensive on the side.
I already wrote about the tower and saturation issues here:
https://forums.macrumors.com/posts/15143816/
I don't know how widely this will ever be used. FireWire was originally intended for high-speed data transmission, and USB as the standard one-connector-for-most-things. We all know how that went. USB got faster and cheaper, and even though FireWire 800 was faster, USB 2 was the go-to connector.

Unless Thunderbolt turns into a one-cable-to-rule-them-all (and aside from power it looks like it will get there), it's not going to take off the way USB did. It needs to be cheaper and easier to use. Thunderbolt only has one of those points covered.
Light Peak was originally designed to be an extension of other sockets.
What we have now is the result of trying to change it into something else.
Maybe the problem really was that Intel's researchers first developed the interconnect and only after that started thinking about what it could be used for.
Apple's biggest marketing blunder with Thunderbolt: making everyone think it was aimed at replacing USB. Whether intentional or not on their part, that is now what most in the Apple community think.

Thunderbolt is not a USB replacement. Never will be. It'll be a niche host based connectivity option for higher end machines and peripherals, mostly used in the prosumer world.

Enterprise is going to network-based connectivity (FC or IP SANs for storage, network-based peripherals over GbE or 10GbE), and consumers are sticking with lower, mass-market-priced peripherals (USB 3).
Exactly!
And the problem will be that it is sold to 100% of Mac users when only 1% really benefits from it. It should be handled more like ExpressCard: only those who need it need to buy it.
My bet is that TB will cause more problems for the average Mac user than regular DP would, once big retina displays arrive on the market.
(Another generation of displays from Apple that will have certain limitations with very recent Macs.)
Intel talked a big game about the optical transition being the chance for unification on the I/O front. They also used modified USB 3.0 connectors for most of their Light Peak demonstrations. I think they may have been hoping to go down the path of standardization, and possibly even have Light Peak rolled into the USB specification at some point. However, they had two eager customers that wanted the technology sooner rather than later and weren't concerned in the slightest about standards: Apple and Sony. Apple was willing to commit in a major way, but they also wanted an exclusive. There was no way to make that kind of deal with nearly a dozen players involved, so instead Apple and Intel cut everybody else out. Apple would bring their own trademark, Thunderbolt, and their own PHY based on the Mini DisplayPort connector which they had developed and thus held the license to. (Did anyone ever believe that Intel came up with the "Thunderbolt" moniker? If they had named it, they would have opted for something far more romantic, like "Intel CV82524EF/L Converged High-Speed Packet Based Controller (Formerly Light Ridge)".)

Not ones to miss an opportunity, Intel still sold their controllers to Sony, who used them in their Vaio Z notebooks and media dock via a "proprietary" optical implementation that was essentially identical to the Light Peak demonstration hardware. Through some careful couching of words by Sony and Intel, they could claim that this wasn't really Thunderbolt or USB 3.0, and therefore didn't infringe on any licensing agreements that might already be in place.

Now the issue that Apple faced was that they had an exclusive deal on a super fast controller that was only designed to push a signal down about 2 inches of copper. Repurposing the pins on the mDP connector and locating it close enough to the controller wasn't too hard, but the only way to propagate that signal down any reasonable length of cable required additional circuitry. With only six months to solve this problem, Apple adapted some more or less off-the-shelf components from the telecom industry and created the active Thunderbolt cable. This was probably the only solution that would also allow for backwards and forwards compatibility with future controllers or different media.

At some point down the road, Intel may well make the Thunderbolt PHY much more integrated and robust. But for now, using a tiny, consumer oriented, friction fit, 20-pin connector for 2 channels of bidirectional, 10.3125 Gbps signaling is going to require active cabling. The good news is that multiple vendors are developing silicon specifically targeting this problem, and thus prices should indeed come down—which was the topic of the original article. In addition to Intersil, I also noticed that TI seems to have a whole range of Thunderbolt products ready for market: http://www.ti.com/ww/en/analog/tps2...?DCMP=hpa_int_thunderbolt&HQS=thunderbolt-bt1
Apple chose the technically wrong way by coupling DP to TB, which resulted in a hindered DP, and now they will face the bandwidth problem.
Sony chose the technically right way by keeping DP free of additional standard versions, but failed to negotiate with the USB consortium.
All four (Intel, Apple, Sony, the USB consortium) failed at creating one solution that everyone would use and that, through volume, would become reasonably priced.
I just can't see how, in the current market situation, more chips would make TB devices much cheaper. Volumes are simply too small; it wouldn't matter if the chips were free.
I guess money wasn't on the minds of those who designed TB. To fix the bad usability of the chain topology, you need hubs. E.g. a 4-port hub would need 4 controllers? In a situation where many TB devices have no second port for daisy-chaining, because those ports are just too expensive?
So if adding more lanes to the cable isn't desirable, and the highest performance silicon in production is only 36% faster than the original Thunderbolt implementation, well that's not really enough to provide what could be considered a generational advance. That will only happen when single lane throughput of 25 Gbps can be achieved. The race to this milestone was previously being contested by Mellanox and Qlogic, until January of this year when, lo and behold, Intel acquired Qlogic's InfiniBand business. Thus the next speed bump to Thunderbolt will presumably follow the EDR InfiniBand rollout, most likely sometime in 2014. Maybe then fake optical cables will become standard.
Does it really matter money-wise where the media conversion happens in a chain topology? Every connection needs 2 ports, 1 cable and 2 media conversions anyway.

After all, back to the topic: if the price of a cable is just a few percent of the cost of using the TB ecosystem, what does it matter if the cable is cheaper? Lowering the whole bill by 1%? Or even 2%?
Gosh, I'll need to run out and buy a new Mac with a horrible glossy screen to get a piece of these amazing cost savings!
 
I seriously hope this happens. High pricing is the biggest detriment to Thunderbolt right now, as most people are aware. And having such great technology (imperfect as it may be) at our fingertips and yet just out of reach due to pricing is silly... I think things will improve once Intel gets Thunderbolt onto Windows devices, which they will....
 
You really needed USB 2.0 once flash drives over 256 MB rolled out.
Would be nice to have flash drives with a Thunderbolt connector (ideally together with a USB connector on the other end for compatibility)...
 
Neat.

Until these cables start costing 5 dollars, I can't see this interface becoming the standard.

Standard for higher-end devices, sure. But that's it. Now if they can ever get the cables down to the sub-10-dollar range, then hell yeah.
 
So are you saying that Sumitomo's optical Thunderbolt cables are "false optical", or that they don't actually exist?

My previous post was:

Any "optical" T-Bolt cable today is a hybrid Cu-optical cable - it has copper connectors, with the copper protocols (and speeds) bridged across an optical segment.

True optical does not exist.

I'm saying that they're hybrid cables, not true optical. They have no advantages over copper cables except longer length and perhaps electrical isolation.


You can buy 10 Gbps x12 cables that bundle 24 optical fibers or 48 copper conductors into a 120 Gbps link, but the connectors are fairly large and the cables are pretty unwieldy.

They could use wavelength-division multiplexing without adding more fibres.
 
No, he's saying they're not actually Thunderbolt cables. They're a pair of bridges that happen to re-encapsulate Thunderbolt as it currently works over copper to a fiber medium.

Thunderbolt's specification does not support fiber currently as a transport medium. It will in the future, but we're not there yet.

What? Thunderbolt, née Light Peak, was designed to use optical media from its inception. Omitting the optical transceivers was a decision made based on cost and practicality in order to bring the technology to market more rapidly.

The reason I included the image of a Light Peak transceiver in my previous post was to illustrate that there is no significant logic in those devices. They are very simple; essentially all they do is convert electrons into photons. There is no "encapsulation" being performed whatsoever, and the signal is the same as the one being generated by the Thunderbolt controller regardless of the media being used. We're talking about a threshold here, not a bridge.

Have you never seen a technology that allowed for multiple interchangeable PHYs before? 10/40/100GbE, InfiniBand and Fibre Channel all use pluggable transceiver modules based on multi-source agreements so that the equipment can utilize whatever media is best suited to the deployment scenario. Sometimes the modules are separate from the cable itself, but for short runs, the transceivers and cable are often bonded to create a pluggable cable assembly. Would you argue that these technologies do not currently support fiber as a transport medium because not one of them outputs an optical signal directly from the switch or controller? Or would you say that they do because the transceiver module fits into a recess in the device and is thus not just part of the cable?

We are there. We just seem to be having trouble understanding that fiber optics doesn't magically make an I/O interface go faster; all it does is use light instead of electricity to carry the signal.

[Image: Sumitomo optical Thunderbolt cable]
 
What? Thunderbolt, née Light Peak, was designed to use optical media from its inception. Omitting the optical transceivers was a decision made based on cost and practicality in order to bring the technology to market more rapidly.

Exactly what Aiden and I keep saying. What is it you don't understand about our posts and clarifications, exactly? The current spec does not have optical as a transport medium. Any cable made out of fiber optics would need to take a copper connection and do a conversion to a light pulse, not necessarily following any Thunderbolt specification, since the other end will translate it back to a Thunderbolt copper signal.

----------

Have you never seen a technology that allowed for multiple interchangeable PHYs before?

Yes, Ethernet. Usually, following the ISO layer model, the physical layer is separated from the logical link layer in a way that permits a protocol encapsulation from any type of logical link to physical media.
 
I'm saying that they're hybrid cables, not true optical. They have no advantages over copper cables except longer length and perhaps electrical isolation.

Well, that's because the only advantages optical cables currently offer over copper are lower path loss, better isolation from interference, thinner and lighter cables, more resistance to certain types of corrosion, etc.

What advantage would be gained by placing the optical transceiver on the motherboard side of the connector, as was common with the Light Peak demonstration hardware and as was implemented by Sony in the Vaio Z? Note that the lens design that was available only allowed for single-channel cables, so it was half the bandwidth of the current implementation and did not provide multiple pathways to the devices.

They could use wavelength-division multiplexing without adding more fibres.

Very true. Now how does that benefit the end user? How does it make the Thunderbolt controller faster? The per-channel speed stays the same, but now we can fit more channels down a single pipe. What does that do to the complexity of the crossbar switch in the controller? Say we go for a 5x increase in bandwidth; now your switch has gone from 8 ports to 24. How do we feed that from the back end? Add more protocol adapters, bump the DP adapters to DisplayPort 1.2 and increase the PCIe connection to PCIe 3.0 x16. Oops, now we need a 40-port Thunderbolt switch and a 32-lane PCIe 3.0 switch. We've got a massive, expensive, power-hungry, 800-pin behemoth of a controller on our hands. Sounds perfect for a mobile device. Now everyone can pay an extra $480 for anything that includes a Thunderbolt port, and we still need to retain the copper in the connector to provide bus power and not break compatibility with DisplayPort/Thunderbolt 1.0.
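
For a rough sense of how fast that switch grows, here's the naive full-crossbar arithmetic (my numbers, purely illustrative):

```python
# An n-port crossbar needs on the order of n * (n - 1) crosspoints, so port
# count is the quadratic term driving die area, pin count, and power.
for ports in (8, 24, 40):
    print(f"{ports:2d} ports -> {ports * (ports - 1):4d} crosspoints")
# 8 -> 56, 24 -> 552, 40 -> 1560
```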

The silicon is the limiting factor, not the medium. There is no "true" optical cable that can make the silicon driving it any faster, and simply increasing the parallelism of the system at this point doesn't make any sense.
 
The whole concept of optical for short-range cables is stupid. Fiber is a great technology for delivering high bandwidth over distance, but for anything under 100 meters, copper is cheaper, easier, and has fewer things to break. They're already doing 10 Gbps over Ethernet, so I don't see the need for fiber at the even shorter desktop level. Copper can easily do the job for a tiny fraction of the price.

Optical digital audio seems to be the one exception, because somehow it became a standard, and it's dirt cheap now. Other than that, fiber should end at the outside wall of the home and be copper thereafter, like FiOS. Now if I could just get fiber to the outside wall of my house... :D

I'm sticking with USB 2. When it just happens that I have USB 3 devices, and USB 3 cables come with them, I'll switch. Until then, USB 2 is just fine.
 
Exactly what Aiden and I keep saying. What is it you don't understand about our posts and clarifications, exactly? The current spec does not have optical as a transport medium. Any cable made out of fiber optics would need to take a copper connection and do a conversion to a light pulse, not necessarily following any Thunderbolt specification, since the other end will translate it back to a Thunderbolt copper signal.

What I don't understand, exactly, is why you think the Thunderbolt spec does not define the use of optical media, when I clearly posted an image of a licensed-by-Intel, made-to-specification, emblazoned-with-the-Thunderbolt-logo, honest-to-goodness optical Thunderbolt cable. The specification includes the use of active copper or optical cables, with the optical cables containing the transceivers within the connectors. Optical transport cables are also part of the DisplayPort 1.1 specification, with which Thunderbolt ports are backwards compatible.

Yes, Ethernet. Usually, following the ISO layer model, the physical layer is separated from the logical link layer in a way that permits a protocol encapsulation from any type of logical link to physical media.

So why do you understand the concept in the context of Ethernet, but not Thunderbolt?
 