Just wondering: when you saturate the pipe, does TB prioritize DP over data so the screens won't start to flicker?
Also still wondering why TB is advertised as a 10Gbps interface when it's actually 20Gbps. I'd guess it would be easier to sell peripherals at 10x the price of USB 3 products if you could say it's 3x faster, not just 1.5x faster.
I'm guessing Thunderbolt prioritizes DP over PCIe. Apparently the Pegasus RAIDs don't always play well with isochronous PCIe traffic from other devices, though, and can cause audio issues when daisy-chained with an Apple Thunderbolt Display.
Thunderbolt is advertised as 2 x 10Gbps.
I'm guessing that Apple and Intel decided that marketing TB as a straight-up 20Gbps interface would be a PR disaster, because the controllers can only convert a total of 10Gbps of PCIe packets, and can only handle DP 1.1a streams which weigh in at 8.64Gbps. I think they were bracing people all along for the fact that although you get 20Gbps of aggregate bandwidth, no single application can utilize more than 10Gbps.
Before you even take into account the greater efficiency of the underlying protocols, Thunderbolt offers 5x the bandwidth of USB 3.0 over a single cable. If you're only talking about PCIe and ignore the DP capabilities, Thunderbolt is still 2.5x faster than USB 3.0.
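For what it's worth, here's the back-of-envelope arithmetic behind those ratios. Treating USB 3.0's usable rate as ~4Gbps after 8b/10b line coding is my own assumption, not an official figure:

```cpp
// Rough numbers behind the "5x" and "2.5x" claims above (my assumptions, not spec quotes).
#include <cstdio>

int main() {
    const double tb_channel  = 10.0;               // one Thunderbolt channel, Gbps
    const double tb_total    = 2 * tb_channel;     // two full-duplex channels per port = 20Gbps
    const double usb3_raw    = 5.0;                // USB 3.0 signaling rate, Gbps
    const double usb3_usable = usb3_raw * 8 / 10;  // ~4Gbps after 8b/10b line coding
    const double dp11a       = 4 * 2.7 * 8 / 10;   // DP 1.1a main link payload = 8.64Gbps

    std::printf("TB total vs USB 3.0:   %.1fx\n", tb_total / usb3_usable);   // ~5x
    std::printf("TB channel vs USB 3.0: %.1fx\n", tb_channel / usb3_usable); // ~2.5x
    std::printf("DP 1.1a stream:        %.2f Gbps\n", dp11a);                // 8.64
    return 0;
}
```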
In the future, I can definitely see TB becoming a bottleneck as long as they try to carry DP within it.
Next-gen displays with 3D, 4K, 10-16 bit color, etc. will need several times the bandwidth (rough numbers sketched below), and sooner or later people will start to ask why only two displays can be attached to TB when a regular cheap PCIe GPU card can have four DP ports.
Also, at least in video editing it's pretty normal to have source footage on one box, renders on another, and output going to a third box. Doing all of this in 3D 4K will need lots of bandwidth.
Looks like the Mac Pro will still be needed in the future.
Lots of people are arguing that you can do everything you need with an iMac or MBP, but things are going to change when 4K and 3D go mainstream. (JVC introduced the first consumer 4K video camera.)
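To put rough numbers on the display bandwidth point above, here's a quick sketch; 60Hz refresh, uncompressed RGB, and ignoring blanking intervals are my assumptions:

```cpp
// Pixel-rate math for the 4K/3D bandwidth claim (60Hz, uncompressed, no blanking overhead).
#include <cstdio>

static double gbps(double h, double v, double hz, double bits_per_pixel) {
    return h * v * hz * bits_per_pixel / 1e9;
}

int main() {
    std::printf("2560x1440 @60Hz, 24-bit: %.1f Gbps\n", gbps(2560, 1440, 60, 24)); // ~5.3
    std::printf("3840x2160 @60Hz, 24-bit: %.1f Gbps\n", gbps(3840, 2160, 60, 24)); // ~11.9
    std::printf("3840x2160 @60Hz, 30-bit: %.1f Gbps\n", gbps(3840, 2160, 60, 30)); // ~14.9
    // Frame-sequential stereo 3D roughly doubles these figures again,
    // well past a single 10Gbps Thunderbolt channel.
    return 0;
}
```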
And why aren't these peripherals going on sale?
What's already on the shelf?
Those storage products that were introduced a year ago?
Why are the newer ones lagging?
Is there a table anywhere of TB products on sale and coming soon?
I think we'll see Thunderbolt double its per-channel bandwidth to 20Gbps in a couple of years. This would allow it to carry DP 1.2 streams, which weigh in at roughly 17.3Gbps, and handle 4K video without issue. The PCIe back end could still be serviced by 4 lanes once the transition to PCIe 3.0 is complete.
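A quick sanity check of that math (my arithmetic, not anything from an Intel roadmap):

```cpp
// Would a 20Gbps channel and an x4 PCIe 3.0 back end be enough? Rough figures only.
#include <cstdio>

int main() {
    const double dp12  = 4 * 5.4 * 8.0 / 10.0;     // DP 1.2 HBR2 payload ~= 17.28 Gbps
    const double pcie2 = 4 * 5.0 * 8.0 / 10.0;     // x4 PCIe 2.0 (today) = 16 Gbps
    const double pcie3 = 4 * 8.0 * 128.0 / 130.0;  // x4 PCIe 3.0 ~= 31.5 Gbps

    std::printf("DP 1.2 stream: %.2f Gbps (fits in a 20Gbps channel)\n", dp12);
    std::printf("x4 PCIe 2.0:   %.2f Gbps\n", pcie2);
    std::printf("x4 PCIe 3.0:   %.2f Gbps (enough to feed 20Gbps of PCIe traffic)\n", pcie3);
    return 0;
}
```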
A ton of new Thunderbolt gadgets were on display at CES last week. AnandTech reported on quite a few of them:
http://www.anandtech.com/show/5313/lacie-at-ces-2big-esata-thunderbolt-hubs
http://www.anandtech.com/show/5321/oczs-portable-thunderbolt-ssd-lightfoot
http://www.anandtech.com/show/5330/...sd-bus-powered-120240gb-available-in-february
http://www.anandtech.com/show/5351/...herboards-at-ces-now-with-thunderbolt-support
http://www.anandtech.com/show/5352/msis-gus-ii-external-gpu-via-thunderbolt
http://www.anandtech.com/show/5345/seagates-goflex-thunderbolt-adapters
http://www.anandtech.com/show/5400/belkin-brings-home-automation-tv-and-thunderbolt-solutions-to-ces
http://www.anandtech.com/show/5403/...he-new-more-affordable-thunderbolt-controller
http://www.anandtech.com/show/5405/the-first-thunderbolt-speed-bump-likely-in-2014
http://www.anandtech.com/show/5421/sumitomo-electrics-thunderbolt-cable
http://www.anandtech.com/show/5422/aocs-thunderbolt-display
The asr man page doesn't support this idea....
That sounds like all the data is going through the host system's memory, not peer-to-peer.
I tend to use ASR in block-copy mode, which works a bit more like dd, but you're still probably correct in that it is using a small chunk of system memory. I always associated the relative performance with a peer-to-peer transfer, but it's probably related to DMA instead. Nonetheless, FireWire was designed from the outset to be a peer-to-peer protocol, enabling, for instance, a camera to transfer data to a FW attached HDD without a PC needing to be involved at all.
PCIe was designed as a point-to-point protocol, but the specification also required out-of-the-box compatibility with all drivers and software written for PCI, a shared parallel bus. Even in the PCI days, two devices could communicate directly via bus mastering and the I/O address space, but this was discouraged, so almost everything went through system memory. So even though PCIe's topology introduced the potential for more point-to-point communication, unless someone was willing to write new software or drivers to specifically take advantage of it, it wasn't happening.

However, one such motivated party was NVIDIA with their GPUDirect technology. Since the original post in this thread was about a Thunderbolt-connected GPU, this is particularly relevant. Being able to daisy-chain a bunch of GPUs that can communicate directly with each other and with storage devices to a lightweight notebook would be pretty awesome for certain compute or rendering applications. I would also not be surprised if OCZ's Virtualized Controller Architecture 3.0 software for the Kilimanjaro platform supports point-to-point PCIe transfers.
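To illustrate what GPU-to-GPU peer transfers look like in practice, here's a minimal sketch using the CUDA runtime API. The device indices and buffer size are placeholders, and whether any given Thunderbolt GPU setup actually permits peer access is an assumption on my part:

```cpp
// Minimal sketch of peer-to-peer (GPUDirect-style) copies between two GPUs.
// Device indices and buffer size are placeholders; error handling is trimmed.
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    const int dev0 = 0, dev1 = 1;
    const size_t bytes = 64 << 20;   // 64 MiB test buffer

    int canAccess = 0;
    cudaDeviceCanAccessPeer(&canAccess, dev0, dev1);
    if (!canAccess) { std::printf("P2P not supported between these GPUs\n"); return 1; }

    void *src = nullptr, *dst = nullptr;
    cudaSetDevice(dev0);
    cudaMalloc(&src, bytes);
    cudaSetDevice(dev1);
    cudaMalloc(&dst, bytes);

    // Enable peer access in both directions (flags argument must be 0).
    cudaSetDevice(dev0);
    cudaDeviceEnablePeerAccess(dev1, 0);
    cudaSetDevice(dev1);
    cudaDeviceEnablePeerAccess(dev0, 0);

    // With peer access enabled, the copy can move directly between the two cards
    // over the PCIe fabric instead of bouncing through system RAM.
    cudaMemcpyPeer(dst, dev1, src, dev0, bytes);
    cudaDeviceSynchronize();

    cudaFree(dst);
    cudaSetDevice(dev0);
    cudaFree(src);
    return 0;
}
```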
Furthermore, DisplayPort is an inherently point-to-point protocol (although DP 1.2 does allow for some more unusual topologies). And once again, this thread was about an external GPU solution. If you have a DP source in a Thunderbolt device, it only makes sense to connect it to one of the inputs on the TB controller so that you can drive a Thunderbolt display, or even the built-in display of a host PC such as an iMac, where the proper signal path exists.