Eating bits and bites while driving in the fast lane ...
I’d rather have a bite of sweet potato pie than just a little bit of it, because I have a big mouth, and the better the pie, the wider that mouth can open. Show me a computer interface solution that can mimic that feat.
Ahh. So the 20 Gbps bandwidth limit is due to the PCIe x4 host adapter, not the PCIe slots in the expansion box (which has two x8 slots and one x4)? If I understand this correctly, if the host adapter card were x8, the bandwidth would be 40 Gbps instead of 20 Gbps, making full use of the x8 slot bandwidth in the expansion box?
A given: 1 Gbit or Gb is equal to 125 megabytes -
http://en.wikipedia.org/wiki/Gigabit.
Yes. With a 40 Gbps total bandwidth limit, if an x8 host adapter card is seated in an x8 or x16 PCIe slot of at least the same version, and there's no other slot usage to account for in the external chassis, then that appears to be enough bandwidth to fully serve one occupied x8 slot. But for future growth, please consider the following:
1) Whether the bee that stung me is big (capitalized) or small (lower case) really matters to me a lot -
http://www.translatorscafe.com/cafe/EN/units-converter/data-storage/15-16/ [8 gigabits (Gb) = 1 gigabyte (GB); so it takes 8 little bees to equal 1 big bee].
Verified that size does really matter with other reliable sources:
http://en.wikipedia.org/wiki/Gigabyte vs.
http://en.wikipedia.org/wiki/Gigabit .
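To make those bee-sized conversions concrete, here's a quick sketch of the bit/byte arithmetic used throughout this post (decimal/SI units, 8 bits per byte, as in the linked Wikipedia articles):

```python
# Bit/byte ("little bee"/"big bee") conversions, using decimal (SI) units:
# 1 gigabit (Gb) = 1000 megabits; 8 bits = 1 byte.

def gigabits_to_gigabytes(gb: float) -> float:
    """Convert gigabits to gigabytes (8 little bees per big bee)."""
    return gb / 8

def gigabits_to_megabytes(gb: float) -> float:
    """Convert gigabits to megabytes (1 Gb = 1000 Mb, then divide by 8)."""
    return gb * 1000 / 8

print(gigabits_to_gigabytes(8))   # 8 Gb = 1.0 GB
print(gigabits_to_megabytes(1))   # 1 Gb = 125.0 MB, the "given" stated above
```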
2) Simple is as simple does - keep the PCI Express basics simple: know your chipset, the number of actual lanes per slot, and the slot version (and the same for any other slots, if any) -
http://www.enthusiastpc.net/articles/00003/1.aspx - and know what you're measuring, e.g., the total traffic from all cars on the freeway (i.e., all lanes from all slots) vs. the traffic in the lanes exiting to the beach (all lanes related to a single slot). In other words, the "40 Gbps" [for the NA250A] and "5 Gbps via laptop’s ExpressCard/34 interface as well as 20 Gbps when linking to desktop PC or Mac computer" [for the NA211A] figures in the product summaries refer to the total traffic per second that can be accommodated while moving data across the chassis-to-PC interface.
3) Note what I take away from the article on PCI Express Basics that I cite in the last paragraph above:
“What separates 1.x, 2.x and 3.x mostly is the transfer speed per lane:
• A PCI Express 1.x lane can transfer up to 250MB/s
• A PCI Express 2.x lane can transfer up to 500MB/s
• A PCI Express 3.x lane can transfer up to 1GB/s
These are Megabytes and Gigabytes, not bits, so quite fast, even on just a single lane. Obviously a 16 lane connection is still 16 times as fast as a single lane so:
• PCI Express 1.x does 16 x 250MB/s = 4GB/s on a x16 connection
• PCI Express 2.x does 16 x 500MB/s = 8GB/s on a x16 connection
• PCI Express 3.x does 16 x 1GB/s = 16GB/s on a x16 connection.” (Emphasis added and rounding not disparaged.)
BTW - 40 Gb (gigabits) = 5 gigabytes, and five gigabytes per second is greater than the four gigabytes per second an x8 V2 slot can use (8 GB/s for x16 V2, divided by 2 to account for x8 instead of x16, = 4 GB/s).
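The per-lane figures from the quoted article make that arithmetic easy to sketch. Here's a small check of the claim that the 40 Gbps host link outruns an x8 V2 slot (the per-lane MB/s values come straight from the quote above; this is a sketch, not a spec-exact calculation):

```python
# Per-lane transfer rates (MB/s) from the PCI Express Basics article quoted above.
PER_LANE_MBPS = {1: 250, 2: 500, 3: 1000}  # PCIe version -> MB/s per lane

def slot_bandwidth_gbytes(version: int, lanes: int) -> float:
    """Approximate slot bandwidth in gigabytes per second."""
    return PER_LANE_MBPS[version] * lanes / 1000

x8_v2_slot = slot_bandwidth_gbytes(2, 8)  # 8 GB/s for x16 V2, halved for x8 = 4.0
host_link = 40 / 8                        # 40 Gbps host link = 5.0 GB/s

print(x8_v2_slot, host_link)  # 4.0 5.0 -> the host link is not the bottleneck
```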
4) Keep in mind, “What’s all in your system’s wallet?” or “How many slots (and what are their characteristics aggregated) in the system or external chassis that you're considering?”
5) The NA255A-XGPU [
http://www.netstor.com.tw/_03/03_02.php?MTEx ] that I earlier recommended you consider for future growth has data transfer rates of up to 128 Gbps between host and GPU enclosure (but that's for a PCIe V3 setup) and has 4x PCIe 3.0 x8 slots (in x16 connectors). A PCIe V3 x8 slot is about equal to a PCIe V2 x16. The NA250A has 4x PCIe 2.0 x8 slots (in x16 connectors). Thus the NA255A-XGPU has the potential to handle more than 3x the data of the NA250A. The 128 gigabit (i.e., 16 gigabyte) per second data rate between host and GPU enclosure for the NA255A-XGPU (PCIe 3.0 x8) compares very favorably to the 5 gigabyte per second rate for the NA250A (PCIe 2.0 x8), but the NA250A does cost about $470 less ($2200 - $1730 = $470). However, that $470 seems to me a small difference in price to pay when the choice is between feeding (and being fed from) four double-wide GPUs at a 16 gigabyte per second aggregate data rate vs. doing so at a 5 gigabyte per second aggregate rate, particularly if your GPUs are PCIe V3 rated, as is a GTX Titan. I do, however, recognize that what can happen at the outer limits doesn't always happen at all times.
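Spelling out the enclosure comparison in the same bit/byte arithmetic (the Gbps figures are the vendor numbers cited above; the "more than 3x" claim falls out of the ratio):

```python
# Host-link data rates cited above, converted from gigabits to gigabytes per second.
na255a_gbytes = 128 / 8  # NA255A-XGPU, PCIe 3.0 x8 host link -> 16.0 GB/s
na250a_gbytes = 40 / 8   # NA250A, PCIe 2.0 x8 host link      ->  5.0 GB/s

print(na255a_gbytes / na250a_gbytes)  # 3.2 -> "more than 3x" the data rate

# Price difference cited in the thread:
print(2200 - 1730)  # 470
```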
Addendum: Take note of everything in the mix - unlike the case of internal slots only, you're talking about an aggregated system involving several other variables - at least (a) an interface card that plugs into a PCIe slot on your PC, (b) a chassis cable that plugs into (c) an interface on the external chassis, which communicates with (d) PCIe cards in (e) the PCIe slots within the chassis. So keep this in mind - THE SLOWEST VARIABLE DETERMINES THE FASTEST SPEED. Familiarize yourself with each of them to better ensure that what you get fully satisfies your needs. However (and as fhenry points out in post #11, above), if the external GPU(s) are to be used only or mostly for computation (e.g., 3D or animation rendering), a low top aggregate interface transfer rate relative to the total potential of all of your GPUs won't matter as much as it would if you were relying on the GPUs for display purposes.
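That "slowest variable" rule is just a min() over the chain of links. A minimal sketch, with hypothetical GB/s numbers for each link (a) through (e) described above:

```python
# "THE SLOWEST VARIABLE DETERMINES THE FASTEST SPEED": the end-to-end rate of a
# chained setup is capped by its slowest link. Hypothetical GB/s figures below.
chain = {
    "host PCIe slot (x8 V2)":    4.0,  # (a) slot the interface card sits in
    "host interface card":       4.0,  # (a) the card itself
    "chassis cable":             5.0,  # (b) cable to the enclosure
    "chassis interface":         5.0,  # (c) enclosure-side interface
    "chassis slot (x8 V2)":      4.0,  # (e) slot feeding the GPU
}

effective_rate = min(chain.values())
print(effective_rate)  # 4.0 -> no link in the chain can move data faster than this
```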