The beauty of protocols like FireWire and Thunderbolt is that each node can communicate directly with another. If you want to clone the data on one Thunderbolt drive to another, you can do so with little to no involvement of the host CPU, PCH, or system memory.

Please document this supposition.

In particular, explain how these T-Bolt devices can deal with mounted filesystems and volatile metadata.

You can't "clone" like this over PCIe, so why would you think that it would be possible over T-Bolt?

And, note that there is no such thing as a "T-Bolt drive". There are SATA drives on SATA controllers on T-Bolt boxes. If you can't do something on a SATA drive internal to the system, you can't do it on a SATA drive external to the system.

You're beginning to lose credibility here....
 
Well, to quote the Intel Thunderbolt Technology Brief located at:
http://www.intel.com/content/dam/doc/technology-brief/thunderbolt-technology-brief.pdf

• A symmetric architecture that supports flexible topologies (star, tree, daisy chaining, etc.) and enables peer-to-peer communication (via software) between devices.

Thunderbolt devices don't deal with filesystems or anything in the upper layers; that's not their job. When I drag and drop a folder icon (which is the OS's representation of some part of the filesystem on my Mac's HD) to a volume on another disk, I'm initiating a whole bunch of SATA, PCIe, and possibly USB or FireWire protocol transfers between various nodes. If I use asr or Disk Utility to clone one FireWire-connected drive to another, once the process is started, the data transfer happens almost entirely in a peer-to-peer fashion between the two drives. The SATA protocol is used between the drive mechanism and the FireWire host adapters, and the FireWire protocol is used between the two drive enclosures.

I was under the impression that because PCIe is a point-to-point protocol, traffic wasn't expressly required to flow through the root complex even if it is responsible for generating the transaction requests.

SATA happens to be the storage connection bus du jour, but there are plenty of others out there: SCSI, IDE/ATA, SAS, etc. It really doesn't matter; they all do essentially the same thing: move data between a non-volatile storage medium and some other bus for transport. In the context of flash-based storage, SATA, which started out as the easiest way to attach a device to existing systems, is beginning to just get in the way. OCZ's Kilimanjaro platform is being developed as a native PCIe-to-NAND-flash controller with Thunderbolt devices in mind. (Such as the Lightfoot drive mentioned here: https://forums.macrumors.com/threads/1305636/ ) No, there will probably never be "native" Thunderbolt storage controllers, but that's not what Thunderbolt was designed for either.

I'm not sure I ever had credibility here, btw.
 
If I use asr or Disk Utility to clone one FireWire-connected drive to another, once the process is started, the data transfer happens almost entirely in a peer-to-peer fashion between the two drives.

The asr man page doesn't support this idea....

The following options control how asr uses memory. These options can have a significant impact on performance. asr is optimized for copying between devices (different disk drives, from a network volume to a local disk, etc). As such, asr defaults to using eight one megabyte buffers. These buffers are wired down (occupying physical memory).

That sounds like all the data is going through the host system's memory, not peer-to-peer.
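
To spell it out, a staged block copy boils down to something like this: every byte lands in a buffer in system RAM before being written back out. (Just a sketch, with made-up device paths.)

```cpp
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>
#include <vector>

int main() {
    const size_t kBufSize = 1 << 20;           // 1 MiB, matching asr's buffer size (it uses eight of them)
    std::vector<char> buf(kBufSize);           // staging buffer in host RAM

    int src = open("/dev/rdisk2", O_RDONLY);   // hypothetical source device
    int dst = open("/dev/rdisk3", O_WRONLY);   // hypothetical target device
    if (src < 0 || dst < 0) { perror("open"); return 1; }

    ssize_t n;
    while ((n = read(src, buf.data(), kBufSize)) > 0) {
        // Every block lands in system memory first, then goes back out the other port.
        if (write(dst, buf.data(), n) != n) { perror("write"); return 1; }
    }
    close(src);
    close(dst);
    return 0;
}
```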
 
Of course 10 + 17.282 > 20, so unless you have a 27-inch iMac or some other device with 2 Thunderbolt ports, you're limited to the 20Gbps that the cable can carry. The only real-world scenario I can come up with that would bump into the 20Gbps ceiling is to daisy-chain two Apple Thunderbolt Displays and a Pegasus R6 full of SF-2281 based SSDs, and then perform a long sequential write using highly compressible data. This would result in 11.6Gbps of DisplayPort packets and 10Gbps of PCIe packets all heading in the same direction simultaneously, overwhelming the cable's total single-direction bandwidth of 20Gbps. This is an absolute corner case, and yet the performance impact would still be fairly minor.
Just wondering: when you choke the pipe, does TB prioritize DP over data so the screens won't start to flicker?
Also still wondering why TB is advertised as 10Gbps when it's actually 20Gbps. I'd guess it would be easier to sell peripherals at 10x the price of USB 3 products if you could say they're 3x faster, not only 1.5x faster.

In the future, I can definitely see TB being choked as long as they try to carry DP in it.
Next-gen displays with 3D, 4K, 10-16 bit color, etc. will need several times the bandwidth, and sooner or later people will start to ask why only 2 displays can be attached to TB when a regular cheap PCIe GPU card can have 4 DP ports.
Also, at least in video editing it is pretty normal to have your source in one box, renders in another, and output to a third box. Doing all this in 3D 4K will need lots of bandwidth.
Looks like the MP will still be needed in the future.
Lots of people argue that you can do all you need with an iMac or MBP, but things are going to change when 4K and 3D go mainstream. (JVC introduced the first consumer 4K video camera.)

And why aren't these peripherals making it to market?
What's already on the shelf?
Those storage products that were introduced a year ago?
Why are the newer ones lagging?
Is there a table anywhere of TB products on sale and coming soon?
 
I'm guessing Thunderbolt prioritizes DP over PCIe. Apparently the Pegasus RAIDs don't always play well with isochronous PCIe traffic from other devices though, and can cause audio issues when daisy-chained with an ATD.

Thunderbolt is advertised as 2 x 10Gbps:

[attached image: TB Ad Snippet.png]

I'm guessing that Apple and Intel decided that marketing TB as a straight-up 20Gbps interface would be a PR disaster, because the controllers can only convert a total of 10Gbps of PCIe packets, and can only handle DP 1.1a streams which weigh in at 8.64Gbps. I think they were bracing people all along for the fact that although you get 20Gbps of aggregate bandwidth, no single application can utilize more than 10Gbps.
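
To put numbers on the corner case quoted above: the 11.6Gbps figure is just the pixel math for two ATD panels. (A rough check, assuming CVT reduced-blanking timings for 2560x1440 at 60Hz.)

```cpp
#include <cstdio>

int main() {
    // CVT-RB total raster for 2560x1440@60 is roughly 2720x1481 including blanking.
    const double h_total = 2720;   // active 2560 + horizontal blanking
    const double v_total = 1481;   // active 1440 + vertical blanking
    const double refresh = 60;     // Hz
    const double bpp     = 24;     // bits per pixel

    double per_display = h_total * v_total * refresh * bpp / 1e9;       // ~5.8 Gbps
    printf("One ATD:  %.1f Gbps\n", per_display);
    printf("Two ATDs: %.1f Gbps\n", 2 * per_display);                   // ~11.6 Gbps
    printf("Plus 10 Gbps of PCIe: %.1f Gbps > 20\n", 2 * per_display + 10);
    return 0;
}
```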

Before you even take into account the greater efficiency of the underlying protocols, Thunderbolt offers 5x the bandwidth of USB 3.0 over a single cable. If you're only talking about PCIe and ignore the DP capabilities, Thunderbolt is still 2.5x faster than USB 3.0.
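
(One way to arrive at those multiples, if you count USB 3.0's rate after its 8b/10b encoding overhead, since Thunderbolt's 10Gbps channels are quoted post-encoding:)

```cpp
#include <cstdio>

int main() {
    double usb3     = 5.0 * 8 / 10;  // 5 GT/s less 8b/10b overhead -> 4 Gbps usable
    double tb_total = 20.0;          // two 10Gbps Thunderbolt channels
    double tb_pcie  = 10.0;          // the PCIe share alone

    printf("Thunderbolt vs USB 3.0:  %.1fx\n", tb_total / usb3);  // 5.0x
    printf("PCIe-only vs USB 3.0:    %.1fx\n", tb_pcie / usb3);   // 2.5x
    return 0;
}
```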

I think we'll see Thunderbolt double its per-channel bandwidth to 20Gbps in a couple years. This would allow it to carry DP 1.2 streams at 18Gbps and handle 4k video without issue. The PCIe back end could still be serviced by 4 lanes once the transition to PCIe 3.0 is complete.
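
(Rough lane math behind that guess; the jump from 8b/10b to 128b/130b encoding does a lot of the work:)

```cpp
#include <cstdio>

int main() {
    double gen2_lane = 5.0 * 8 / 10;     // PCIe 2.0: 5 GT/s with 8b/10b   -> 4.0 Gbps
    double gen3_lane = 8.0 * 128 / 130;  // PCIe 3.0: 8 GT/s with 128b/130b -> ~7.9 Gbps

    printf("4 lanes of PCIe 2.0: %.1f Gbps\n", 4 * gen2_lane);  // 16.0
    printf("4 lanes of PCIe 3.0: %.1f Gbps\n", 4 * gen3_lane);  // ~31.5
    return 0;
}
```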

A ton of new Thunderbolt gadgets were on display at CES last week. Anandtech reported on quite a few of them:

http://www.anandtech.com/show/5313/lacie-at-ces-2big-esata-thunderbolt-hubs
http://www.anandtech.com/show/5321/oczs-portable-thunderbolt-ssd-lightfoot
http://www.anandtech.com/show/5330/...sd-bus-powered-120240gb-available-in-february
http://www.anandtech.com/show/5351/...herboards-at-ces-now-with-thunderbolt-support
http://www.anandtech.com/show/5352/msis-gus-ii-external-gpu-via-thunderbolt
http://www.anandtech.com/show/5345/seagates-goflex-thunderbolt-adapters
http://www.anandtech.com/show/5400/belkin-brings-home-automation-tv-and-thunderbolt-solutions-to-ces
http://www.anandtech.com/show/5403/...he-new-more-affordable-thunderbolt-controller
http://www.anandtech.com/show/5405/the-first-thunderbolt-speed-bump-likely-in-2014
http://www.anandtech.com/show/5421/sumitomo-electrics-thunderbolt-cable
http://www.anandtech.com/show/5422/aocs-thunderbolt-display

The asr man page doesn't support this idea....

That sounds like all the data is going through the host system's memory, not peer-to-peer.

I tend to use ASR in block-copy mode, which works a bit more like dd, but you're still probably correct in that it is using a small chunk of system memory. I always associated the relative performance with a peer-to-peer transfer, but it's probably related to DMA instead. Nonetheless, FireWire was designed from the outset to be a peer-to-peer protocol, enabling, for instance, a camera to transfer data to a FW attached HDD without a PC needing to be involved at all.

PCIe was designed as a point-to-point protocol, but the specification also required out-of-the-box compatibility with all drivers and software written for PCI, a shared parallel bus. Even in the PCI days, two devices could communicate directly via bus-mastering and utilizing the I/O address space, but this was discouraged, and so almost everything was done through system memory. So even though PCIe introduced the potential for increased point-to-point communications due to its topology, unless someone was willing to take the time to write new software or drivers to specifically take advantage of this, it wasn't happening. However, one such motivated party was NVIDIA with their GPUDirect technology. Since the original post in this thread was about a Thunderbolt connected GPU, this is particularly relevant. Being able to daisy-chain a bunch of GPUs that can communicate directly with each other and with storage devices to a light weight notebook would be pretty awesome for certain compute or rendering applications. I would also not be surprised if OCZ's Virtualized Controller Architecture 3.0 software for the Kilimanjaro platform supports point-to-point PCIe transfers.
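
For the curious, here's a minimal sketch of what a direct GPU-to-GPU transfer looks like through the CUDA runtime API, the mechanism underlying GPUDirect P2P. The device numbering is just for illustration.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    int can_access = 0;
    cudaDeviceCanAccessPeer(&can_access, 0, 1);    // can GPU 0 reach GPU 1 directly?
    if (!can_access) { printf("No P2P path between devices 0 and 1\n"); return 1; }

    const size_t bytes = 64 << 20;                 // 64 MiB test buffer
    float *buf0, *buf1;

    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);              // let GPU 0 address GPU 1's memory
    cudaMalloc(&buf0, bytes);

    cudaSetDevice(1);
    cudaMalloc(&buf1, bytes);

    // With peer access enabled, this copy can go GPU 1 -> GPU 0 directly
    // over PCIe, without being staged through system RAM.
    cudaMemcpyPeer(buf0, 0, buf1, 1, bytes);
    cudaDeviceSynchronize();

    cudaFree(buf1);
    cudaSetDevice(0);
    cudaFree(buf0);
    return 0;
}
```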

Furthermore, DisplayPort is an inherently point-to-point protocol (although DP 1.2 does allow for some more unusual topologies). And once again, this thread was about an external GPU solution. If you have a DP source in a Thunderbolt device, it would only make sense to connect it to one of the inputs on the TB controller so that you could drive a Thunderbolt display or even the built-in display of a host PC, such as an iMac, where the proper signal path exists.
 
Although some have noted bandwidth limitations, there is the possibility that such an external GPU could be useful for operations that are optimised for OpenCL, CUDA, or some other technology where additional computational power can be delivered via external hardware. If that proves to be a viable solution, such a device could extend the useful life of MacBooks, iMacs, etc.

This is actually what I'm really interested in. If the programmer is half-awake, GPU computing should minimize communication between the CPU and the GPU wherever possible, so the bandwidth bottleneck may be less of an issue. Again, if the whole power solution could be fixed, being able to add GPU computing capabilities (real ones) to an Air would be neat, and you could make a decent compute node out of a Mini.
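
To illustrate: a half-awake GPU programmer uploads once, launches as many kernels as needed against data that stays resident on the card, and reads back once at the end. A toy sketch (the kernel is a stand-in for real work):

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Stand-in for a real compute kernel; the point is that it only touches GPU memory.
__global__ void step(float* x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] = x[i] * 0.999f + 0.001f;
}

int main() {
    const int n = 1 << 20;
    float* d_x;
    cudaMalloc(&d_x, n * sizeof(float));
    cudaMemset(d_x, 0, n * sizeof(float));        // initialize on-device

    for (int iter = 0; iter < 10000; ++iter)      // thousands of launches...
        step<<<(n + 255) / 256, 256>>>(d_x, n);   // ...with zero bulk traffic over PCIe/TB

    float result;
    cudaMemcpy(&result, d_x, sizeof(float), cudaMemcpyDeviceToHost);  // read back once
    printf("x[0] = %f\n", result);
    cudaFree(d_x);
    return 0;
}
```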

Though the whole "You can cluster a bunch of Minis and it will be just like a Mac Pro!" idea makes me shake my head. I don't think anyone who actually *deals* with clusters wants their home computer to do that.
 