Very true. Now how does that benefit the end user? How does it make the Thunderbolt controller faster? The per channel speed stays the same, but now we can fit more channels down a single pipe.

Ever heard of "teaming", as in "Ethernet teaming"?

PCIe packets can be transmitted in parallel on multiple channels, multiplying bandwidth for the end user - so the "per channel" speed for the extended PCIe bus can be the sum of the actual fibre channels used.

For example, instead of only PCIe x4, WDM with 4 channels at the current data rate would allow PCIe x16 devices to be supported!
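Just to put numbers on the teaming idea, here's a back-of-envelope sketch in Python. The wavelength count is hypothetical, purely to illustrate how WDM multiplies the per-fibre rate:

```python
# Illustrative sketch of channel "teaming" via WDM: the aggregate
# bandwidth scales with the number of wavelengths multiplexed per fibre.
per_channel_gbps = 10   # current Thunderbolt channel rate
wdm_lambdas = 4         # hypothetical wavelengths per fibre
aggregate_gbps = per_channel_gbps * wdm_lambdas
print(aggregate_gbps)   # 40 Gbps per fibre instead of 10
```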
__________

But, enough of this tangent - T-Bolt 1.0 will never go faster than it is now, regardless of whether copper or hybrid Cu-optical cables are used.

We'll have to wait for T-Bolt V2.0 (or "PCI Express External") for faster external connections.
 
yeah how about shorter while you are at it! Who the hell always needs a 6 ft cable? I'd pay $10 for a 1 ft one!
 
When optical TB becomes more prevalent, they'll probably move the hardware back into the port controller. Then there will be cheaper cables, but it will establish a fragmented legacy cable market. Perhaps we might also see a transducer dongle adapter.

All this putting chips in cables does sound perverse, but we live in an age where things have become so small and so cheap there's really no reason not to put intelligence inside the most mundane things.

Yeah I guess Thunderbolt will be perfect once it goes optical… Until then it's just a limited solution, and until then I would say it's not that much better than USB 3.0, for most uses, when considering the price.

And I was surprised to find chips in ink cartridges that keep track of ink levels without having a clue about actual ink levels… Another way to make people pay more!
 
Yeah I guess Thunderbolt will be perfect once it goes optical… Until then it's just a limited solution, and until then I would say it's not that much better than USB 3.0, for most uses, when considering the price.

And I was surprised to find chips in ink cartridges that keep track of ink levels without having a clue about actual ink levels… Another way to make people pay more!

Optical is not any more perfect than copper. There is no benefit at short distances.
 
Optical is not any more perfect than copper. There is no benefit at short distances.

Sure there is, no cross-talk or interference by electrical and magnetic fields, resulting in higher throughput due to less error correction being required. On the other hand, careful how you bend that wire.
 
Ever heard of "teaming", as in "Ethernet teaming"?

PCIe packets can be transmitted in parallel on multiple channels, multiplying bandwidth for the end user - so the "per channel" speed for the extended PCIe bus can be the sum of the actual fibre channels used.

For example, instead of only PCIe x4, WDM with 4 channels at the current data rate would allow PCIe x16 devices to be supported!
__________

But, enough of this tangent - T-Bolt 1.0 will never go faster than it is now, regardless of whether copper or hybrid Cu-optical cables are used.

We'll have to wait for T-Bolt V2.0 (or "PCI Express External") for faster external connections.

I am familiar with teaming, but as I said, introducing higher orders of parallelism to Thunderbolt doesn't bring anything desirable to the table. The first generation of Thunderbolt controllers included 2-channel and 4-channel designs ranging in price from roughly $20-$30. The 2nd generation brought us a single-channel controller and the first Thunderbolt accessory to retail for under $30. Everyone is clamoring for cheaper Thunderbolt gear. Adding more lanes to a serial interface tends to scale up the costs associated with it in a fairly linear fashion. I don't see a lot of forum posts where people are saying, "Heck, I'd happily pay twice as much for Thunderbolt if only it had more bandwidth, but 10 Gbps x2 just won't cut it for my workflow."

And while we do often see serial interfaces such as PCIe where several lanes are aggregated into a single link, generational advances are almost universally achieved by increasing the symbol rate, not by further increasing lane count. In this way the physical interface requires little to no modification, which in turn allows for backwards and forwards compatibility, and there is usually no significant increase in cost during the transition. Each new generation generally strives to double the symbol rate of the one that preceded it. i.e. PCIe 1.0 @ 2.5 GBaud -> PCIe 2.0 @ 5.0 Gbaud -> PCIe 3.0 at 8.0 Gbaud (but with significantly more efficient encoding so the bit rate was effectively doubled), SATA 1.5 -> 3.0 -> 6.0 Gbps, DisplayPort 1.1 @ 2.7 GBaud -> DisplayPort 1.2 @ 5.4 GBaud, etc.
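The doubling pattern is easy to tabulate (quick sketch: the symbol rates are the published ones, the efficiency factor is just the line encoding):

```python
# Per-lane effective bit rate = symbol rate x encoding efficiency.
# PCIe 3.0 raises the symbol rate less but swaps 8b/10b for 128b/130b.
generations = {
    "PCIe 1.0": (2.5, 8 / 10),     # GBaud, 8b/10b encoding
    "PCIe 2.0": (5.0, 8 / 10),
    "PCIe 3.0": (8.0, 128 / 130),  # denser 128b/130b encoding
}
per_lane_gbps = {g: baud * eff for g, (baud, eff) in generations.items()}
for g, rate in per_lane_gbps.items():
    print(g, round(rate, 2), "Gbps per lane")
```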

SATA 12Gb/s could be available right now if SATA-IO decided to double the number of signaling pairs in the cable, but it would be very tricky to make equipment from different generations work together if they did. Thunderbolt is a dual-channel architecture at this point, and those channels are already operating just about as fast as they can.

And yes, I'll try to stop with the tangents now.

Sure there is, no cross-talk or interference by electrical and magnetic fields, resulting in higher throughput due to less error correction being required. On the other hand, careful how you bend that wire.

Fiber is bound by Shannon's channel capacity curve just as much as any other medium. And I'd be careful how I bent any cable carrying 10+Gbps channels, especially in light of how much they tend to cost.
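For the curious, Shannon's formula is simple enough to evaluate yourself. The bandwidth and SNR values below are purely illustrative, not measurements of any real cable:

```python
import math

# Shannon capacity C = B * log2(1 + SNR) -- applies to fibre and copper alike.
def capacity_gbps(bandwidth_ghz, snr_db):
    snr_linear = 10 ** (snr_db / 10)   # convert dB to a linear ratio
    return bandwidth_ghz * math.log2(1 + snr_linear)

# e.g. a hypothetical 5 GHz channel at 20 dB SNR:
print(round(capacity_gbps(5.0, 20), 1), "Gbps")
```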

Yeah I guess Thunderbolt will be perfect once it goes optical… Until then it's just a limited solution, and until then I would say it's not that much better than USB 3.0, for most uses, when considering the price.

And I was surprised to find chips in ink cartridges that keep track of ink levels without having a clue about actual ink levels… Another way to make people pay more!

A 4-channel Thunderbolt controller is capable of pumping 40 Gbps vs. a USB 3.0 controller which can only manage 4 Gbps. That's a full order of magnitude more bandwidth. Even if you're looking at a single channel, Thunderbolt is still 2.5 times faster than USB 3.0. These are not insignificant differences.
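The arithmetic behind those ratios, for anyone who wants to check it (the 4 Gbps figure for USB 3.0 is the rough effective throughput after overhead, as stated above):

```python
tb_channel_gbps = 10     # one Thunderbolt channel
tb_channels = 4          # 4-channel controller
usb3_gbps = 4            # rough effective USB 3.0 throughput

print(tb_channel_gbps * tb_channels / usb3_gbps)  # full controller: 10x
print(tb_channel_gbps / usb3_gbps)                # single channel: 2.5x
```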

Thunderbolt devices and cables are also, on average, an order of magnitude more expensive than their USB 3.0 counterparts. And since most people don't generally require more than the 4 Gbps that USB 3.0 offers, it is understandable why many folks were annoyed that Apple chose to offer Macs with Thunderbolt ports for 16 months prior to their inclusion of USB 3.0.

I would not say that Thunderbolt is a limited solution, but rather the opposite. It merely remains underexploited at this juncture. That being said, there's no reason to pay 10 times more for a Thunderbolt device if you've got another option available that can get the job done just as well.

And you do realize that the purchase prices of printers these days are entirely subsidized by the cost of the consumables? That's why a free printer is generally the most expensive printer you can own.
 
T-Bolt - half-baked, not ready for prime time

Sure there is, no cross-talk or interference by electrical and magnetic fields, resulting in higher throughput due to less error correction being required. On the other hand, careful how you bend that wire.

Do you worry about cross-talk or interference on your USB cables?

Do you worry about cross-talk or interference on your GbE cables?

Do you worry about cross-talk or interference on your 1394 cables?

Do you worry about cross-talk or interference on your eSATA cables?

Since I use all of these, and the answer is "NO" for all, why shouldn't I conclude that this is one more piece of evidence that T-Bolt is a half-baked concept, rolled out before reasonable real-world testing?

If cross-talk and interference are issues for T-Bolt 1.0 - then T-Bolt 1.0 has serious flaws. We should look forward to PCI Express External to kill T-Bolt outright.
 
Do you worry about cross-talk or interference on your USB cables?

Do you worry about cross-talk or interference on your GbE cables?

Do you worry about cross-talk or interference on your 1394 cables?

Do you worry about cross-talk or interference on your eSATA cables?

Since I use all of these, and the answer is "NO" for all, why shouldn't I conclude that this is one more piece of evidence that T-Bolt is a half-baked concept, rolled out before reasonable real-world testing?

You need a little logic review. Your conclusion is a total non sequitur.

Did the people who designed your USB cables worry about cross-talk or interference?

Did the people who designed your GbE cables worry about cross-talk or interference?

Did the people who designed your 1394 cables worry about cross-talk or interference?

Did the people who designed your eSATA cables worry about cross-talk or interference?

Since the answer to all of these questions is hopefully "YES", and they all seem to work just fine as a result, you should conclude that you would also have a similar experience if you used a Thunderbolt cable, since the engineers who designed it clearly worried about cross-talk and interference as well, and took steps to compensate for them specifically so that you wouldn't have any issues.

And once again, Thunderbolt is operating at 10.3125 GBaud. That's 29% faster than PCIe 3.0. Would you be concerned about crosstalk and interference if someone asked you to create a 2m external PCIe 3.0 x2 cable?
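The 29% figure is just the ratio of the two symbol rates:

```python
tb_gbaud = 10.3125    # Thunderbolt symbol rate
pcie3_gbaud = 8.0     # PCIe 3.0 symbol rate
pct_faster = (tb_gbaud / pcie3_gbaud - 1) * 100
print(round(pct_faster))  # 29
```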

If cross-talk and interference are issues for T-Bolt 1.0 - then T-Bolt 1.0 has serious flaws. We should look forward to PCI Express External to kill T-Bolt outright.

[image: External PCI Express (PCIe) cable connector]


Yeah, baby! One of those is gonna look dead sexy hanging off of your Dell Precision Mobile Workstation!

I wish I could find an image that gave a better sense of scale so you could witness just how huge those suckers are. But you're right, they're gonna kill Thunderbolt outright in the Ultrabook segment. I can't wait for these products that Molex brought to market back in December of 2008 to suddenly go mainstream and prove to everyone just how much of a total marketing fail Thunderbolt is.

I'm not sure why I continue to be a victim of your trolling.
 
I am familiar with teaming, but as I said, introducing higher orders of parallelism to Thunderbolt doesn't bring anything desirable to the table. The first generation of Thunderbolt controllers included 2-channel and 4-channel designs ranging in price from roughly $20-$30. The 2nd generation brought us a single-channel controller and the first Thunderbolt accessory to retail for under $30. Everyone is clamoring for cheaper Thunderbolt gear. Adding more lanes to a serial interface tends to scale up the costs associated with it in a fairly linear fashion. I don't see a lot of forum posts where people are saying, "Heck, I'd happily pay twice as much for Thunderbolt if only it had more bandwidth, but 10 Gbps x2 just won't cut it for my workflow."
If they want to keep dp coupled with tb, there is a problem and something has to be done. Current tb does not have enough bandwidth for future retina displays. Combined with Apple's obsession with an insanely limited number of simultaneous models, I can't believe that they would sell both a non-retina model (for people who want to use multiple external displays with one computer) and a retina model (for people who want to use only one external display with one computer) at the same time.

The per channel speed stays the same, but now we can fit more channels down a single pipe. What does that do to the complexity of the cross-bar switch in the controller? Say we go for a 5x increase in bandwidth, now your switch has gone from 8 ports to 24. How do we feed that from the back end? Add more protocol adapters, bump the DP adapters to DisplayPort 1.2 and increase the PCIe connection to PCIe 3.0 x16. Oops, now we need a 40-port Thunderbolt switch and a 32-lane PCIe 3.0 switch. We've got a massive, expensive, power hungry, 800-pin behemoth of a controller on our hands. Sounds perfect for a mobile device. Now everyone can pay an extra $480 for anything that includes a Thunderbolt port, and we still need to retain the copper in the connector to provide bus power and not break compatibility with DisplayPort/Thunderbolt 1.0.

The silicon is the limiting factor, not the medium. There is no "true" optical cable that can make the silicon driving it any faster, and simply increasing the parallelism of the system at this point doesn't make any sense.
I'd guess that doubling the channel speed is just too difficult, e.g. too expensive. Then the only option is more channels.

Going 5x is of course very complex, but going 2x might be pretty reasonable.
Then there is a need for a tb2 connector. Cables can still be active-copper (like now) or passive-optical. I don't think there would be a big difference in overall price. In optical, the sockets would be expensive, but the cables cheaper. In optical, use wavelength-division if it's cheaper, or double the fibers; their thickness is not notable.

Legacy interoperability will need dongles. Nothing new in here, we're already using dp-dongles, fw-dongles and ethernet-dongles.

Good thing with passive-optical cables could be, that they could be also used with future revisions. If you double the wire count for tb2, they could add wavelength-division in tb3. With current cable prices, it might be pretty nice that one cable could last for 2 generations and you wouldn't need new cable for every gadget.

Then, the final option: go back to the drawing board, think again about what light peak was designed for, and make the logical decision: separating dp and tb again.

And voilà, there's no need for upgrading anything!
Current dp1.2 is good enough for multiple retina displays and current tb's 40Gbit/s bandwidth is enough for everything else.

Only problem here is that this would look like Apple was wrong and Sony was right, and Apple could never accept this. So we're down at Apple's PR department, where they need to come up with a believable story for how to market this mistake as an insanely amazing innovation.
 
If they want to keep dp coupled with tb, there is a problem and something has to be done. Current tb does not have enough bandwidth for future retina displays. Combined with Apple's obsession with an insanely limited number of simultaneous models, I can't believe that they would sell both a non-retina model (for people who want to use multiple external displays with one computer) and a retina model (for people who want to use only one external display with one computer) at the same time.
...

Then, the final option: go back to the drawing board, think again about what light peak was designed for, and make the logical decision: separating dp and tb again.
...

Although you do say "future retina displays", it is worth noting that the MBPR's 2880x1800 screen is pretty much the highest resolution you can still drive via DP 1.1a. That means that a single Thunderbolt port can still drive two external displays at that resolution. Since the internal panels on the iMacs would be driven by DP 1.2 directly from the GPU, they can go "retina" if Apple so chooses. And I don't think too many people would be disappointed to see a not-necessarily-retina, 30-inch, 2880x1800, Apple Thunderbolt Display with USB 3.0 ports...
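A rough payload calculation, ignoring blanking intervals, shows why 2880x1800 at 60 Hz just squeaks under DP 1.1a's link budget:

```python
# Optimistic lower bound on required bandwidth (blanking ignored).
width, height, bpp, hz = 2880, 1800, 24, 60
payload_gbps = width * height * bpp * hz / 1e9

# DP 1.1a link: 4 lanes @ 2.7 GBaud with 8b/10b encoding.
dp11a_gbps = 4 * 2.7 * 8 / 10

print(round(payload_gbps, 2), "Gbps needed vs", round(dp11a_gbps, 2), "Gbps available")
```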

Many of the early Light Peak demonstrations involved transporting uncompressed HD display data at the same time as other data. I don't believe DP was a last minute addition in any way. If anything, I think Intel considered including an even wider array of protocol adapters within the controllers.
 
Optical is not any more perfect than copper. There is no benefit at short distances.

Yeah but I'm speaking about the technical aspect: it's just a transparent plastic wire, and nothing more, while the electric version today is not simply a cable but two microcomputers and eight or so wires. It's far more complex, thus prone to failure and higher prices.

Optical should be far simpler, and shouldn't require a pair of microcomputers in each freaking cable. Therefore I suspect that optical should be many times cheaper, even though it isn't going to be for pointless marketing reasons.
 
When a piece of cable is cut from a spool...
[snip]
...So, it's technically possible to put calibration equipment onto the device and then use cheap cables between devices. But I don't think you really want that to happen. :)

Thanks hchung for a perfectly pitched, well-mannered and well explained answer. :)
 
PC manufacturers have been very slow to adopt it; however, perhaps it will gain traction. Although it's superior, I can see it taking FireWire's place as 2nd to USB 3.

A very very distant second place.

It's dead for the mainstream consumer market - USB 3.0 is the future.

----------

Picture this: in the year 2015, the ONLY ports on most computers are USB 3.0 and Thunderbolt. You no longer have a need for VGA, DVI, HDMI, FireWire, or even Ethernet. All of those could be run through Thunderbolt and suddenly it's much easier to connect devices to computers.

Won't happen because people don't want to buy several dongles for ethernet, vga, dvi, etc. People are happy with physical ports.

Problem with Thunderbolt is that it does something that not many people want for too expensive of a price. For most people, USB 2.0/3.0 are good enough - and much cheaper.

----------

I own over a thousand dollars worth of TB peripherals.

So one RAID array? heh.
 
Sure there is, no cross-talk or interference by electrical and magnetic fields, resulting in higher throughput due to less error correction being required. On the other hand, careful how you bend that wire.

In theory. In practice, it doesn't really affect anything.

Yeah but I'm speaking about the technical aspect: it's just a transparent plastic wire, and nothing more, while the electric version today is not simply a cable but two microcomputers and eight or so wires. It's far more complex, thus prone to failure and higher prices.

Optical should be far simpler, and shouldn't require a pair of microcomputers in each freaking cable. Therefore I suspect that optical should be many times cheaper, even though it isn't going to be for pointless marketing reasons.

You still have to have optical transceivers on either end. Even if they're in the device.
 
In theory. In practice, it doesn't really affect anything.

You've never dealt with a shoddy Cat5e installation before it seems. Of course, I've also dealt with fiber optics squeezed between floor tiles and bent at 90 degrees (good thing the SAN is multipathed).
 
Although you do say "future retina displays", it is worth noting that the MBPR's 2880x1800 screen is pretty much the highest resolution you can still drive via DP 1.1a. That means that a single Thunderbolt port can still drive two external displays at that resolution. Since the internal panels on the iMacs would be driven by DP 1.2 directly from the GPU, they can go "retina" if Apple so chooses. And I don't think too many people would be disappointed to see a not-necessarily-retina, 30-inch, 2880x1800, Apple Thunderbolt Display with USB 3.0 ports...
Everybody's expecting that the new display is retina. They already did it with the iphone, ipad and mbp. Non-retina would be a disappointment like the last mp "update" was. (Funny that on MSI's new mb, there's a vga port sitting right next to the tb port quite happily, and at the same time Apple can't put tb in the MP even as it kills as many legacy ports as possible on all its products... ;) )
I'd be interested in any high quality display from Apple with 10-bit colors and matte surface, retina or not, but MATTE.
Many of the early Light Peak demonstrations involved transporting uncompressed HD display data at the same time as other data. I don't believe DP was a last minute addition in any way. If anything, I think Intel considered including an even wider array of protocol adapters within the controllers.
If I remember correctly, Intel wanted to include light peak in the usb socket, but the usb consortium didn't approve their intentions.

And Apple should have known that it will be pushing megapixel boundaries with their displays, and at the same time they chose a connection that really is a meta-connection that will always lag one generation behind.

Uncompressed HD is so last decade. Today you have to be able to quadruple that. Again, making current macs "future-non-proof" can also be just marketing decision from Apple.
 
If I remember correctly, intel wanted to include light peak to usb socket, but usb consortium didn't approve their intentions.

I've enjoyed reading the give and take, and just barely touching some of the true issues in providing economical, high-speed cables.

First, let's take a look at Intel's original idea of an optical connection piggy-backing onto a standard USB connector. The most difficult implementation detail of this solution is the transition from fiber to metal. It is really a difficult thing to do - you need to balance alignment, power levels, and transmission loss across the boundary. It's not so easy to get a reliable connection. It's not just putting the end of the optical fiber "close" to a laser transceiver.

I think Intel simply ran out of time trying to get a reliable solution to market. It's one thing to make a few prototypes to demonstrate feasibility; it's another to make a reliable solution that can be manufactured consistently at low cost.

As for the copper cables that exist now: as we all know, it has taken quite a long time to get even to this point, with cable costs slowly declining. As has been discussed extensively, these active cables are labor intensive due to the need to calibrate each end of the cable to deliver consistent performance.

At 10Gbps speeds, each transition is a potential bottleneck. It is important to maintain a constant impedance across every transition: cable to transceiver chip; transceiver chip to connector plug. Then plug to receptacle within the computer or peripheral. Within the computer or peripheral, the PCB traces must conform to transmission line characteristics, with special PCB material, and controlled-radius routing of traces to the controller chips. The overall goal is to provide a controlled impedance path between the controller in the computer and the controller in a device. Every transition has the potential of preventing reliable signal transmission.

I mention all this because it is important to contrast this with USB 3.0. Here we have a 5Gbps signal path where the expectation is that we have low-cost connectors and no active cable. Build-quality of the cable will have a significant contribution to reliable connections. A quick perusal of support forums for any computer manufacturer with USB 3.0 support will reveal a lot of discussion about unreliable USB 3.0 connections and performance.

Important factors again are a reliable connection between controller in the computer and controller in the device. Every transition "should" try to preserve the transmission line characteristics of a controlled impedance connection from end-to-end. It's not just a wire. In the real world, every transition has the potential of preventing reliable performance.

Whether USB or Thunderbolt, reliable connections are possible. It is just a matter of quality cable construction and device or computer design. Alas, this may require an investment in new, expensive equipment to verify the initial design, and to audit the manufacturing process to assure consistency.

With volume and experience, the methods will improve, and hopefully also drive the costs down.
 
First mistake - running GbE on Cat5e instead of Cat6.

If you're not worried about cross-talk or interference on your GbE cables, why on earth would you waste your money on Cat 6? In what way does your GbE network benefit from Cat 6 over Cat 5e?

If you're gonna bother switching to 22 AWG you might as well pony up for Cat 6a so at least you have the headroom for 10GbE out to 100m. Cat 6 always seemed to me to be nothing more than a great way to get everyone to buy all new cable, connectors and jacks while we're sitting around waiting for 10GBASE-T to come down in price/power consumption—and then they get to do it to us all over again with Cat 6a. Seriously, if you aren't bound to do so by contractual obligation, or using your UTP for something other than GbE that is bandwidth intensive like video, why would you actually pay more for Cat 6?

Cat 6 is just Monster Cat 5e.
 
You've never dealt with a shoddy Cat5e installation before it seems. Of course, I've also dealt with fiber optics squeezed between floor tiles and bent at 90 degrees (good thing the SAN is multipathed).

You can run anything shoddily. Wired, wireless, copper, or fiber. So that really has nothing to do with the discussion.
 
As we all know, it has taken quite a long time to get even to this point, with cable costs slowly declining.

At 10Gbps speeds, each transition is a potential bottleneck.

With volume and experience, the methods will improve, and hopefully also drive the costs down.
How much has the cost of the one and only tb cable on the market declined in the past 2 years?

You do know that they have sold passive HDMI cables for 9 years, which are cheaper and longer than tb cables and have a certified throughput of 5 Gbit/s?
Why is this not possible with usb3?

You also know that they are now selling 15-meter-long high-speed passive HDMI cables that are certified for 10 Gbit/s and cost less than the famous one and only tb cable?
 
toke,

all good points.

TB has 2 bi-directional 10Gbps pipes. HDMI has 1 unidirectional pipe.

It's not impossible to make a good, reliable cable. Just hasn't happened in volume yet for TB, or for USB 3. As I mentioned it is the entire chain, not just the cable.

TB implementation for shorter cables does permit lower cost. Just look at the TB-Ethernet dongle that Apple is selling for $29.

Also, to OEMs with alternate suppliers like Sumitomo, costs are reduced allowing some new products to include the cable, see the new drive from Buffalo and the earlier product from El Gato. Seagate also has a product configuration for the Mac that bundles a TB cable.

2 years? Not quite. Apple introduced the first TB computers in Feb 2011, and LaCie and Promise did not start shipping products until a few months later.

While there are plenty of passive HDMI cables (good and bad), there are also longer active HDMI cables. The vendors must have some reason to provide them.

I'm not in the camp saying that TB will replace USB 3.0; nor vice-versa. Both technologies serve a particular market with capabilities users want.

Both USB 3.0 and TB are still immature, but still improving, too. USB-IF is still revising the USB 3.0 spec. There still are certain classes of USB devices that can't get USB-IF certification because the specs are still evolving. TB being a non-public standard, it is more difficult to determine what is going on behind the scenes.

We'll see what happens in the coming months as PC manufacturers actually start shipping alternatives to the Mac computers.

I do agree that TB serves a niche; not the mass consumer market. The latter will still be dominated by USB peripherals. We'll just need to get to some stage of maturity.
 