I think the success or failure of Thunderbolt will depend on how open Intel and Apple are going to be with their respective patents. If the main game is widespread adoption, then they're going to need to license it to people like AMD, NVIDIA, etc. to get adoption in the peripheral sector. If they're going to try to hold onto it as an Intel/Apple USP, then I suspect it will lose out to USB 3 in the long run, in spite of TB's clear technical superiority.

Wouldn't the "ND" part of "RAND" mean that they would have to license it?

In any event, NVIDIA and AMD don't make very many (any?) graphics cards; they sell GPUs to others.

There are a bunch (62) of cards with mDP already listed at Newegg, like this one with 2 DVI ports, an HDMI port, and 2 mDP ports.

 
I wouldn't say that Firewire failed as such. It didn't penetrate the consumer market so much, but it did pretty well in professional audio and video environments. Firewire was all but synonymous with DV and there's a wealth of firewire audio cards/mixers out in the wild.

You're right though, cost will count for a lot. If the presence of TB adds much to the price of a peripheral, then it's not going to take over in applications where plain USB will do.

Edit: That said, Thunderbolt has convinced me to start saving for my computer upgrade. Thunderbolt has just put the "pro" back in MacBook Pro. Hell, if they're not careful it's going to put the pro into the Mac Mini for a lot of AV professionals as well!

It would be cool to get some peripherals. I am interested in seeing how PCIe-TB devices react to hot-plugging. I got a 13" MBP for the wife; I'm not scheduled to get one for myself until 2013 at the earliest (I have a 2010 15"). :(
 
Wouldn't the "ND" part of "RAND" mean that they would have to license it?

In any event, NVIDIA and AMD don't make very many (any?) graphics cards; they sell GPUs to others.

There are a bunch (62) of cards with mDP already listed at Newegg, like this one with 2 DVI ports, an HDMI port, and 2 mDP ports.


This badass card has 4 of them ;)

[Image: AMD Radeon HD 6990]
 
It would be inconsistent for Apple to complain about "bag-of-hurt" licensing fees, and then to encumber mDP with an Apple tax.

Who ever said Apple was consistent? Steve Jobs has cried about "standards" with regard to Safari versus Internet Explorer before, yet Apple is the company that brought us such wonders as AppleTalk and NuBus, along with going for "unpopular" (relatively speaking, compared to the consumer industry as a whole) standards such as SCSI (I always liked SCSI, but the drives were expensive), Firewire (not an issue, except when they refused to offer USB2 at first) and now Mini DisplayPort. Regular DisplayPort wasn't good enough, it seems, and frankly I don't know how they can patent the mini version when it's completely derivative, being based on regular DisplayPort; maybe one of our armchair legal experts would care to comment on that one.

The only thing consistent about Apple over the years is that they do whatever and say whatever they think is in their own best interests. They support standards when it suits them and ignore them when it doesn't. I don't see that being much different than Microsoft except that Microsoft mostly does it in software, not really being a hardware company (barring the occasional keyboard/mouse and "Zune").
 
Wouldn't the "ND" part of "RAND" mean that they would have to license it?

In any event, NVIDIA and AMD don't make very many (any?) graphics cards; they sell GPUs to others.

There are a bunch (62) of cards with mDP already listed at Newegg, like this one with 2 DVI ports, an HDMI port, and 2 mDP ports.


Yes, Apple is committed to being reasonable and non-discriminatory with respect to licensing mDP. But mDP isn't Thunderbolt, only part of it. The rest is at Intel's discretion.

The reason I mentioned NVIDIA and ATI is that adding Thunderbolt requires access to graphics and to the PCIe bus, so putting it onto a desktop graphics card makes sense.

If, on the other hand, Intel intends to keep it strictly on Intel motherboards rather than licensing it, then the technology is going to be of limited use.
 
Could be, but I think that Intel will probably license it out to other manufacturers, since they want it to be accepted.

Agreed; it's the same drama that usually happens: AMD and Intel have temper tantrums, then it all finally gets resolved a few months later with a patent-sharing agreement that gives AMD access to the Thunderbolt specifications. It happens almost like clockwork, the same way a spat between Intel and nVidia gets resolved a few months later with an agreement - I swear these organisations just put on the spectacle for the lulz more than anything else.
 
interesting question

The reason I mentioned NVIDIA and ATI is that adding Thunderbolt requires access to graphics and to the PCIe bus, so putting it onto a desktop graphics card makes sense.

That's an interesting question - how will Thunderbolt work with separate graphics cards?

If the Thunderbolt controller is on the motherboard, you'd need some kind of extra cable to get the DisplayPort signals to the Thunderbolt controller (dotted line from the dGFX to the controller in the diagram). This is not an issue with a laptop, since both integrated and discrete graphics run the video through the motherboard.

If you put the Thunderbolt controller on the graphics card, you have the DisplayPort signals, but you either steal PCIe bandwidth from the graphics card (effectively cutting the graphics card to PCIe x12), or have a cable to an x4 PCIe stub card. Or build a double-width graphics card with an x16 for the graphics and an x4 to feed Thunderbolt - and hope that motherboards standardize on an x4 slot beside the x16 slot.
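Just to put numbers on the "steal an x4" option - a quick back-of-the-envelope sketch. The PCIe 2.0 figures (5 GT/s per lane, 8b/10b encoding) are my own assumption, not anything from the Thunderbolt spec:

```python
# Rough lane-budget arithmetic (assumed figures: PCIe 2.0 at 5 GT/s per
# lane with 8b/10b encoding, i.e. 4 Gbps effective per lane per direction).
GBPS_PER_LANE = 5.0 * 0.8   # effective Gbps per PCIe 2.0 lane, per direction

full_x16 = 16 * GBPS_PER_LANE          # graphics card with all 16 lanes
x12_after_steal = 12 * GBPS_PER_LANE   # after diverting an x4 to Thunderbolt

print(f"x16 graphics link: {full_x16:.0f} Gbps per direction")
print(f"x12 after diverting x4 to TB: {x12_after_steal:.0f} Gbps "
      f"({x12_after_steal / full_x16:.0%} of the original)")
```

So the "x12" card keeps 75% of its graphics bandwidth - not catastrophic, but not free either.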

[Image: Thunderbolt block diagram]
 
If you put the Thunderbolt controller on the graphics card, you have the DisplayPort signals, but you either steal PCIe bandwidth from the graphics card (effectively cutting the graphics card to PCIe x12), or have a cable to an x4 PCIe stub card. Or build a double-width graphics card with an x16 for the graphics and an x4 to feed Thunderbolt - and hope that motherboards standardize on an x4 slot beside the x16 slot.

You've been dishing out some of the best scenarios that I've ever seen in my years on MR.

These are far more issues to be concerned about than we've ever had to deal with before.
 
That's an interesting question - how will Thunderbolt work with separate graphics cards?

If the Thunderbolt controller is on the motherboard, you'd need some kind of extra cable to get the DisplayPort signals to the Thunderbolt controller (dotted line from the dGFX to the controller in the diagram). This is not an issue with a laptop, since both integrated and discrete graphics run the video through the motherboard.

If you put the Thunderbolt controller on the graphics card, you have the DisplayPort signals, but you either steal PCIe bandwidth from the graphics card (effectively cutting the graphics card to PCIe x12), or have a cable to an x4 PCIe stub card. Or build a double-width graphics card with an x16 for the graphics and an x4 to feed Thunderbolt - and hope that motherboards standardize on an x4 slot beside the x16 slot.

[Image: Thunderbolt block diagram]
I could see a dongle to bring the DP back to the MB. We have internal dongles with multi-board SLI, and I also seem to remember an external cable dongle for SLI for a while. It would probably be the easiest (read: cheapest) method.
 
I could see a dongle to bring the DP back to the MB. We have internal dongles with multi-board SLI, and I also seem to remember an external cable dongle for SLI for a while. It would probably be the easiest (read: cheapest) method.

Or, we have motherboard mDP ports that are Thunderbolt PCIe-only, and graphics cards with mDP DisplayPort-only ports without Thunderbolt PCIe.

Unless your monitor is a hub/docking station (think iMac with disks, ports, etc. but no CPU board), is there any advantage to routing PCIe signals to it? And why share bandwidth between the PCIe channel and the DP channel if you don't need to?

I also wonder whether the diagram showing two DisplayPort inputs to the Thunderbolt controller and two mDP outputs is really meant to be either integrated graphics or a graphics card, but not both.

Note that the bandwidth of an x4 PCIe link is 10 Gbps bi-directional - so a dual-port Thunderbolt controller would be over-committed. (Over-commitment isn't necessarily bad, as long as you realize that the ports share bandwidth.)
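Here's the over-commitment arithmetic spelled out. The 10 Gbps-per-channel figure comes from the posts above; the PCIe encoding math (5 GT/s per lane, 8b/10b, x4 link) is my own assumption about how the controller is fed - it gives a slightly higher supply figure than the 10 Gbps quoted, but the controller comes out over-committed either way:

```python
# Back-of-the-envelope over-commitment check (assumed PCIe 2.0 figures).
GT_PER_LANE = 5.0   # PCIe 2.0 gigatransfers/s per lane
ENCODING = 0.8      # 8b/10b encoding efficiency
LANES = 4           # assumed x4 link feeding the Thunderbolt controller

pcie_supply = GT_PER_LANE * ENCODING * LANES   # Gbps per direction
tb_demand = 2 * 10.0                           # two 10 Gbps Thunderbolt channels

print(f"x4 PCIe 2.0 supplies {pcie_supply:.0f} Gbps per direction")
print(f"a dual-port controller can demand {tb_demand:.0f} Gbps")
print(f"over-commitment: {tb_demand / pcie_supply:.2f}x")
```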
 
Or, we have motherboard mDP ports that are Thunderbolt PCIe-only, and graphics cards with mDP DisplayPort-only ports without Thunderbolt PCIe.

Unless your monitor is a hub/docking station (think iMac with disks, ports, etc. but no CPU board), is there any advantage to routing PCIe signals to it? And why share bandwidth between the PCIe channel and the DP channel if you don't need to?

I also wonder whether the diagram showing two DisplayPort inputs to the Thunderbolt controller and two mDP outputs is really meant to be either integrated graphics or a graphics card, but not both.

Note that the bandwidth of an x4 PCIe link is 10 Gbps bi-directional - so a dual-port Thunderbolt controller would be over-committed. (Over-commitment isn't necessarily bad, as long as you realize that the ports share bandwidth.)
You are probably right; there is a lot to work out in the desktop space. For notebooks it seems to be cut and dried...
 
I could see a dongle to bring the DP back to the MB. We have internal dongles with multi-board SLI, and I also seem to remember an external cable dongle for SLI for a while. It would probably be the easiest (read: cheapest) method.
My latest interest has been Lucid's Virtu GPU virtualization software. (Believe me, I was surprised to learn it was software.) You would be able to use the connectors on the motherboard and still have the ability to use a discrete solution. There is a performance hit, though.

AMD's Llano had a video demo recently as well.

You've been dishing out some of the best scenarios that I've ever seen in my years on MR.

These are far more issues to be concerned about than we've ever had to deal with before.
I am glad you find Aiden's posts informative as well.
 
I am glad you find Aiden's posts informative as well.

Extremely, especially since I have never been an early adopter. I usually buy one model down, either refurbished from Apple or off Craigslist. I've spent the last 2-3 years going back and forth about buying a new machine.

TB helped me wait for another Mac Pro refresh, and Aiden and yourself are giving me things to think about when it comes to the stability of a system that will essentially be running PCIe cards over a flimsy miniDP cable.

It's one of my biggest pet peeves with FW800: none of the cables I've used have ever sat right in the countless machines I've used.
 
Or, we have motherboard mDP ports that are Thunderbolt PCIe-only, and graphics cards with mDP DisplayPort-only ports without Thunderbolt PCIe.

Unless your monitor is a hub/docking station (think iMac with disks, ports, etc. but no CPU board), is there any advantage to routing PCIe signals to it? And why share bandwidth between the PCIe channel and the DP channel if you don't need to?
Aren't the DP path and the "general data" path discrete in TB?

Any bets on whether the next (or any) Mac Pro, before the line is discontinued, will have an mDP-input-for-TB-output connector on its motherboard? If they can't make the effort to get the RAM channels right, why would they bother to think about PCIe bottlenecks?
 
Aren't the DP path and the "general data" path discrete in TB?

There's not much hard information available, but Intel's diagram clearly shows one full-duplex pipe with data inter-mixed.

[Image: Thunderbolt technology diagram]

It may be implemented as a two-lane 5 Gbps pipe (PCIe 2.0 runs at 5 Gbps).


Any bets on whether the next (or any) Mac Pro, before the line is discontinued, will have an mDP-input-for-TB-output connector on its motherboard? If they can't make the effort to get the RAM channels right, why would they bother to think about PCIe bottlenecks?

Not sure - but this review raises some serious questions about the highly proprietary nature of Thunderbolt. It's not a standard, and all chips have to come from Intel, and Intel certifies the cables.

It was also noticed that none of the other computer vendors joined Intel at the announcement, which is unusual.
 
You've been dishing out some of the best scenarios that I've ever seen in my years on MR.

These are far more issues to be concerned about than we've ever had to deal with before.

Yes, thanks Aiden! You did a much better job articulating (and answering) some of the questions I raised earlier in the thread. The inclusion of video really raises a lot of questions, many of which probably haven't been answered, since Intel and Apple haven't really shared the specs with other manufacturers.
 
There's not much hard information available, but Intel's diagram clearly shows one full-duplex pipe with data inter-mixed.

[Image: Thunderbolt technology diagram]

It may be implemented as a two-lane 5 Gbps pipe (PCIe 2.0 runs at 5 Gbps).




Not sure - but this review raises some serious questions about the highly proprietary nature of Thunderbolt. It's not a standard, and all chips have to come from Intel, and Intel certifies the cables.

It was also noticed that none of the other computer vendors joined Intel at the announcement, which is unusual.

Maybe Intel's technology brief (PDF) can help us understand a few things. While not as detailed as the developer's toolkit, it still gives you a few things like:

- Thunderbolt technology is based on a switched fabric architecture with full-duplex links. Unlike bus-based I/O architectures, each Thunderbolt port on a computer is capable of providing the full bandwidth of the link in both directions with no sharing of bandwidth between ports or between upstream and downstream directions.

- A Thunderbolt connector is capable of providing two full-duplex channels. Each channel provides bi-directional 10 Gbps of bandwidth.

- The Thunderbolt protocol physical layer is responsible for link maintenance including hot-plug detection, and data encoding to provide highly efficient data transfer. The physical layer has been designed to introduce very minimal overhead and provides full 10Gbps of usable bandwidth to the upper layers.


While not specifically mentioned, I believe that the TB controller connects to 4x PCIe 2.0 lanes, as that's what current (6-series) chipsets have:
PCI Express*
—Up to eight PCI Express root ports
—NEW: Supports PCI Express Rev 2.0 running at up to 5.0 GT/s
—Ports 1-4 and 5-8 can independently be configured to support eight x1s, two x4s, two x2s and four x1s, or one x4 and four x1 port widths
—Module based Hot-Plug supported (that is, ExpressCard*)


About the "highly proprietary nature of Thunderbolt": it's probably better that way right now, in terms of compatibility, reliability, simplicity, etc. If Intel opened the technology, you can be sure that we would get controllers with slightly different specs, ports with different layouts, and cables ranging from Fisher-Price to Monster Cable quality... with no precise idea of which one works best with your specific device(s).

And "none of the other computer vendors joined Intel" - well, it's unusual for Apple to share the limelight, and probably not many computer vendors were ready or willing to work (or spend $) on TB as much as Apple. It could also be part of the "deal" with Apple... In any case, time will tell. I am just happy that Avid (Pro Tools), Universal Audio (UAD DSP), and Apogee (Symphony) are working on TB devices.
 
Maybe Intel's technology brief (PDF) can help us understand a few things.

Thanks for the link. I looked through Intel's announcement pages, but missed that.

First of all, it's surprising that Thunderbolt is described as a 10 Gbps full-duplex link, since it seems to be two separate 10 Gbps full duplex links. Usually companies would claim the aggregate bandwidth of all the channels - not the per channel specs. ;) (In fact, the cnet reviewer called it 40 Gbps ...)

The doc says that
  • "allows multiplexing of bursty PCI Express transactions with isochronous DisplayPort communication on the same link"
so clearly it is not dedicating one channel to DisplayPort and one channel to PCIe.

It also says that
  • "Because Thunderbolt technology delivers two full-bandwidth channels, the user can realize high bandwidth on not only the first device attached, but on downstream devices as well"
so that one doesn't need to worry at all about sharing bandwidth until three devices are attached.
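That "sharing starts at three devices" reading can be sketched as a toy model. How the two channels are actually allocated across a daisy chain isn't documented, so the even-split rule below is purely my assumption for illustration:

```python
# Toy model of worst-case per-device bandwidth in a Thunderbolt daisy chain,
# assuming two 10 Gbps channels get split as evenly as possible.
def per_device_gbps(n_devices, channels=2, gbps_per_channel=10.0):
    """Worst-case bandwidth per device if the chain splits channels evenly."""
    if n_devices <= 0:
        return 0.0
    return gbps_per_channel * channels / max(n_devices, channels)

for n in range(1, 5):
    print(f"{n} device(s): {per_device_gbps(n):.2f} Gbps each (worst case)")
```

One or two devices each get a full 10 Gbps channel; only from the third device onward does anyone have to share.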


About the "highly proprietary nature of Thunderbolt", it's probably better right now.

Or it implies that it's been brought to market rather quickly, and might not be completely baked yet. The confusion about optical vs copper, and the earlier hybrid USB connectors make one wonder.

It's unfortunate if Intel is catching Apple's obsession for secrecy, rather than Apple learning about openness and cross-vendor collaboration.
 
Or it implies that it's been brought to market rather quickly, and might not be completely baked yet. The confusion about optical vs copper, and the earlier hybrid USB connectors make one wonder.

It's unfortunate if Intel is catching Apple's obsession for secrecy, rather than Apple learning about openness and cross-vendor collaboration.

Sounds like Intel was trying to integrate TB into USB, but the USB Implementers Forum were having none of it (as well they might - they see USB3 as the way forward).

The downside with working openly is slowness. How long would it have taken to bring TB to market if it had to go through an open review process?

As a partner Apple is ideal. They have the money, market volume and engineering skill to collaborate and help Intel put together a final product.
 
The downside with working openly is slowness. How long would it have taken to bring TB to market if it had to go through an open review process?

The upside is better peer review and more testing.

If Thunderbolt 1.0 is a turkey, some of the blame will be on the secrecy.
 
The upside is better peer review and more testing.

If Thunderbolt 1.0 is a turkey, some of the blame will be on the secrecy.

I might agree more with your cynicism about it were it not running as a PCIe conduit. As it is, it's all familiar stuff for engineers to integrate with - and driver free. Were it a new standard complete with a complicated driver stack, that would be significantly more difficult to adopt and a lot more risky.

I'm not seeing much downside in this.
 
Thunderbolt is not driverless

I might agree more with your cynicism about it were it not running as a PCIe conduit. As it is, it's all familiar stuff for engineers to integrate with....

However, most Apple hardware and software engineers have never dealt with hot-plug PCIe before. And that's what Thunderbolt is - it extends the PCIe bus to PCIe devices in external boxes. If you plug in a Thunderbolt RAID array, on the array end of the cable is a PCIe RAID controller.

What would you expect to happen if you opened up a running Mac Pro, and inserted a PCIe eSATA card connected to running drives? Would you expect the drives to simply appear and be usable a second or two later?

What if you pull the eSATA card back out - especially if reads and writes are active on the drive?

The tools are familiar, but there's a lot of uncharted territory here.


- and driver free.

There are lots of drivers involved, lots of them. There are no *new* drivers involved - but every external device needs an OS driver. Until now, only the Mac Pro had PCIe expansion, with relatively few supported PCIe devices. Now every Thunderbolt-equipped Mac has PCIe "slots", and has to deal with 3rd party drivers.

When you plug that Thunderbolt RAID array into the port, a new PCIe device - probably a SiliconImage controller - appears. The SiI driver has to be dynamically loaded. The drives have to be configured. Lots of work that's traditionally done early in the boot process.

And if you're familiar with device driver software, timing and latency issues are at the root of many bugs. Did you know that switching from a 1m Thunderbolt cable to a 3m cable adds 6 nsec of latency to each packet? A PCIe device at the end of the chain may have 50 nsec or more latency added. If you have several busy Thunderbolt devices - much larger latencies will occur because of contention for bandwidth.
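For anyone curious where a per-cable latency figure like that comes from, it's basically propagation delay through the copper. A sketch - the velocity factor here is my assumption (a typical value for copper cable), so this gives a ballpark number rather than the exact 6 nsec quoted above:

```python
# Propagation delay added by extra cable length (assumed velocity factor).
C_VACUUM = 3.0e8        # speed of light in vacuum, m/s
VELOCITY_FACTOR = 0.7   # signal speed as a fraction of c - an assumption

def added_latency_ns(extra_metres):
    """One-way propagation delay added by extra cable length, in ns."""
    return extra_metres / (C_VACUUM * VELOCITY_FACTOR) * 1e9

# Swapping a 1 m cable for a 3 m cable adds 2 m of copper:
print(f"{added_latency_ns(2.0):.1f} ns added per packet")
```

The exact figure depends on the cable's actual velocity factor, but either way it's nanoseconds per metre, each way, on every packet.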

One shouldn't expect that drivers written for a device on an internal PCIe bus will be flawless in this new environment.
_____________

And think of the potential "iMacBookStation" - a killer product that Thunderbolt makes possible. Apple is often criticized for not having docking stations, so take basically an iMac and remove the CPU/GPU/memory.

Add Thunderbolt, connect DisplayPort to the monitor, and connect the PCIe to a small mainboard with PCIe devices for GbE, SATA, eSATA, USB 3.0, 1394a/b, and a sound card. Keep the optical drive (makes it easier to drop the optical from the MacBook). Add two or three 3.5" SATA disks (there's probably room, since the PCIe mainboard is much smaller and cooler than the iMac motherboard).

Plug it into the MacBook, and a half dozen or so PCIe devices and sub-devices appear - suddenly MacBooks have an incredible docking station. Unplug it, and you're portable again.

But a lot of software has to make that happen.


Were it a new standard complete with a complicated driver stack, that would be significantly more difficult to adopt and a lot more risky.

Often people give the advice to avoid .0 software and initial hardware versions, and to wait for .1 and Rev A.

Since there are no Thunderbolt devices on the market, there's no chance that Thunderbolt has had significant interoperability and realistic real world testing in multi-vendor production configurations.

While certainly the transparency of Thunderbolt is good, some caution is in order.
 
However, most Apple hardware and software engineers have never dealt with hot-plug PCIe before.
_____________
And if you're familiar with device driver software, timing and latency issues are at the root of many bugs. Did you know that switching from a 1m Thunderbolt cable to a 3m cable adds 6 nsec of latency to each packet? A PCIe device at the end of the chain may have 50 nsec or more latency added. If you have several busy Thunderbolt devices - much larger latencies will occur because of contention for bandwidth.
_____________
And think of the potential "iMacBookStation" - a killer product that Thunderbolt makes possible.

Plug it into the MacBook, and a half dozen or so PCIe devices and sub-devices appear - suddenly MacBooks have an incredible docking station. Unplug it, and you're portable again.
_____________
While certainly the transparency of Thunderbolt is good, some caution is in order.

The technical brief also states that:
The Thunderbolt protocol physical layer is responsible for link maintenance including hot-plug detection, and data encoding to provide highly efficient data transfer.

A novel time synchronization protocol that allows all the Thunderbolt products connected in a domain to synchronize their time within 8ns of each other.

And by leveraging the inherently tight timing synchronization (within 8ns across 7 hops downstream from a host) and low latencies of Thunderbolt technology, broadcast-quality media can be produced using Thunderbolt products.

That doesn't mean developers have nothing to do, but I think that Intel has done its job. As with every technology, the implementation also needs to be done properly. I also think that most of the work Apple has done on PCIe Macs over the years can "easily" be ported to other models, as well as the work on ExpressCard devices.

Your "iMacBookStation" is certainly one of the devices Apple should be working on - a perfect companion to the MBAs (and other small computers).

Of course some caution is in order, but I think we should let manufacturers do their job and then criticize or praise their implementation. Maybe I'm too optimistic...
 