10.9 IP over Thunderbolt bridging - Fast!

Discussion in 'OS X Mavericks (10.9)' started by fortysomegeek, Oct 22, 2013.

  1. fortysomegeek, Oct 22, 2013
    Last edited: Oct 22, 2013

    fortysomegeek macrumors regular

    Joined:
    Oct 9, 2012
    #1
    I just installed and played with Mavericks and in particular Thunderbolt bridging.

    I am getting 760 MB/sec over the network. That is 10GbE-class networking using $30 Thunderbolt cables. I wrote a whole piece on this but can't link it here.
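A quick sanity check on those numbers (just back-of-the-envelope shell arithmetic, nothing from the thread itself): converting the measured MBytes/sec to line rate shows why this lands in 10GbE territory.

```shell
# Convert the measured transfer rate to line rate:
# 760 MBytes/sec * 8 bits/byte = 6080 Mbit/s, about 6 Gbit/s,
# several times gigabit Ethernet and well into 10GbE territory.
mbytes_per_sec=760
echo "$(( mbytes_per_sec * 8 )) Mbit/s"
```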

    Here are some pics for your pleasure.
     

    Attached Files:

  2. lancemj, Oct 22, 2013
    Last edited: Oct 22, 2013

    lancemj macrumors newbie

    Joined:
    Oct 22, 2013
    Location:
    Baltimore
    #2
    Wow, thank you for posting this! And thank you Google for turning this up in a search result - just what I was looking for!

    I'm really excited to see what this interface can do. Could you adjust the TCP window size either way to see if there's more performance that can be squeezed out of iperf?
    Also, I don't suppose jumbo packets are supported over Thunderbolt :p Could you try enabling them to see if they offer any performance gains?

    Thanks for posting your findings!

    Edit: If you happen to have another Thunderbolt cable lying around, could you try link aggregation as well? I don't think I can post a link here, but if you Google "802.3ad (LACP) Bonding for fun - SWY's technical notes" it should return some fun results.

    Cheers
     
  3. fortysomegeek thread starter macrumors regular

    Joined:
    Oct 9, 2012
    #3
    Can't do LACP because my 15" MacBook has two Thunderbolt ports but the 13" MacBook only has one.

    I will try a three-way with my two MacBooks and 27" iMac, all using Thunderbolt.

    Game changer. 10GbE networking for just the cost of the cables; the OS was free.
     
  4. ikarus79m macrumors member

    Joined:
    Sep 30, 2006
    #4
    Fascinating! Would it be possible to connect multiple iMacs to one OS X server via IP-over-Thunderbolt connections? I assume you would need some sort of "Thunderbolt switch" for that, though?

    In our small parts-production shop we have 5 iMacs connected via a 10GbE switch. I wonder how much more efficient a direct Thunderbolt connection would be.
     
  5. lancemj macrumors newbie

    Joined:
    Oct 22, 2013
    Location:
    Baltimore
    #5
    You saw the benchmarks that 'fortysomegeek' was getting: around 6 gigabits per second. If you have 10GbE now, I see no reason to change unless you can manage a faster connection and don't require a switch or long cable lengths.

    Regarding a Thunderbolt switch: I don't know of any devices currently on the market that operate in a fashion similar to your Ethernet switch. We could be talking about PLX chipsets, which are available on most high-end Intel motherboards and allow for the multiplexing of PCI Express lanes.
    Keep in mind that Thunderbolt is effectively squirting out raw PCI Express too. Aaand I'm speculating, but it may soon be possible to take a PCI Express x16 channel and break it down into four x4 Thunderbolt connections, but I wouldn't hold my breath. Asus temporarily had an offering of sorts in this arena; however, I don't believe it ever saw the light of day.

    I'm going to Supercomputing 2013 next month, so it should be interesting to see what offerings and solutions show up to make this desire from the consumer segment a reality.

    Cheers
     
  6. Thomas Kaiser, Oct 24, 2013
    Last edited: Oct 24, 2013

    Thomas Kaiser macrumors newbie

    Joined:
    Oct 24, 2013
    #6
    Hi,

    Just updated a Mini (2011) and an Air (2011) to 10.9 to test. I tried with and without modifications to the TCP stack and it makes no real difference: performance between the two sits between 6.5 Gbps (~810 MBytes/sec) and 7.0 Gbps (~875 MBytes/sec).

    We used our standard settings for 10 GbE:

    Code:
    bash-3.2# cat /etc/sysctl.conf
    kern.ipc.maxsockbuf=16777216
    net.inet.udp.recvspace=8388608
    net.inet.tcp.recvspace=8380416
    net.inet.tcp.sendspace=8380416
    net.inet.tcp.delayed_ack=2
    (to increase kern.ipc.maxsockbuf it might be necessary to increase kern.ipc.nmbclusters first by issuing »sudo nvram boot-args="ncl=131072"« followed by a reboot).
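(As an aside, and purely as a sketch: the same values can be trialled at runtime with `sysctl -w` before committing them to /etc/sysctl.conf. This assumes root, and the changes do not survive a reboot.)

```shell
# Apply the 10GbE-style tuning from /etc/sysctl.conf at runtime (root
# needed); settings revert on reboot, which makes this handy for A/B tests.
sudo sysctl -w kern.ipc.maxsockbuf=16777216
sudo sysctl -w net.inet.udp.recvspace=8388608
sudo sysctl -w net.inet.tcp.recvspace=8380416
sudo sysctl -w net.inet.tcp.sendspace=8380416
sudo sysctl -w net.inet.tcp.delayed_ack=2
sysctl -n net.inet.tcp.recvspace   # verify the value took effect
```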

    But it makes no real difference.

    You mean jumbo frames, I assume? You cannot increase the MTU for Thunderbolt interfaces beyond 1500, and that's also what iperf reports:

    Code:
    bash-3.2# iperf -c 192.168.1.2 -m
    ------------------------------------------------------------
    Client connecting to 192.168.1.2, TCP port 5001
    TCP window size: 7.99 MByte (default)
    ------------------------------------------------------------
    [  4] local 192.168.1.3 port 49157 connected with 192.168.1.2 port 5001
    [ ID] Interval       Transfer     Bandwidth
    [  4]  0.0-10.0 sec  8.15 GBytes  7.00 Gbits/sec
    [  4] [B]MSS size 1448 bytes (MTU 1500 bytes, ethernet)[/B]
    The interesting thing is that OS X automatically creates a bridge device the first time you insert a thunderbolt cable. My TB device is en3 and ifconfig's output looks like this when I insert a cable:

    Code:
    en3: flags=8963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
    	options=60<TSO4,TSO6>
    	ether b2:00:1e:72:43:a1 
    	media: autoselect <full-duplex>
    	status: inactive
    bridge0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
    	options=63<RXCSUM,TXCSUM,TSO4,TSO6>
    	ether 42:6c:8f:c0:5b:00 
    	Configuration:
    		id 0:0:0:0:0:0 priority 0 hellotime 0 fwddelay 0
    		maxage 0 holdcnt 0 proto stp maxaddr 100 timeout 1200
    		root id 0:0:0:0:0:0 priority 0 ifcost 0 port 0
    		ipfilter disabled flags 0x2
    	member: en3 flags=3<LEARNING,DISCOVER>
    	        ifmaxaddr 0 port 8 priority 0 path cost 0
    	nd6 options=1<PERFORMNUD>
    	media: <unknown type>
    	status: inactive
    When I connect the second machine with the TB cable, IP addresses are assigned automagically and the NIC's status changes:

    Code:
    en3: flags=8963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
    	options=60<TSO4,TSO6>
    	ether b2:00:1e:72:43:a1 
    	media: autoselect <full-duplex>
    	[B]status: active[/B]
    bridge0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
    	options=63<RXCSUM,TXCSUM,TSO4,TSO6>
    	ether 42:6c:8f:c0:5b:00 
    	[B]inet6 fe80::406c:8fff:fec0:5b00%bridge0 prefixlen 64 scopeid 0x8 
    	inet 192.168.1.2 netmask 0xffffff00 broadcast 192.168.1.255[/B]
    	Configuration:
    		id 0:0:0:0:0:0 priority 0 hellotime 0 fwddelay 0
    		maxage 0 holdcnt 0 proto stp maxaddr 100 timeout 1200
    		root id 0:0:0:0:0:0 priority 0 ifcost 0 port 0
    		ipfilter disabled flags 0x2
    	member: en3 flags=3<LEARNING,DISCOVER>
    	        ifmaxaddr 0 port 7 priority 0 path cost 0
    	nd6 options=1<PERFORMNUD>
    	[B]media: autoselect
    	status: active[/B]
    Unfortunately I don't have a machine with 2 TB ports. It would be interesting to see whether the second TB interface gets added to bridge0 or whether a new bridge device is created. If they were combined in one bridge device, this would mean one can build some sort of peer-to-peer or mesh network with Macs running OS X 10.9, since any Mac with at least 2 TB ports could act as a bridge (and the new Mac Pro would be a rather expensive multi-port bridge, AKA switch, with 6 ports).

    802.3ad is useless for one-to-one connections in terms of performance gain. It only works with one-to-many, many-to-many or many-to-one. But based on the technology used it is not able to increase the throughput of a single connection between two hosts by adding more links via LAG (compare with http://www.ieee802.org/3/hssg/public/apr07/frazier_01_0407.pdf if in doubt).
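To illustrate the point (a toy sketch with made-up address bytes, not real LACP code): 802.3ad hashes each flow's addresses to pick one member link, so every packet of a single connection rides the same link.

```shell
# Hypothetical flow-to-link hash: XOR of address bytes modulo link count.
# The same source/destination pair always maps to the same link, which is
# why a LAG cannot accelerate one host-to-host connection.
src_byte=0xa1; dst_byte=0xb2; num_links=2
link=$(( (src_byte ^ dst_byte) % num_links ))
echo "flow pinned to link $link"
```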

    BTW: I had a look at both layer 3 and layer 2 using tcpdump ("tcpdump -i bridge0 -s 0 -U ..."). Apple implements "IP over Thunderbolt" to be compatible with plain old Ethernet, using exactly the same frame format. So one could also use a Mac as a bridge between a Thunderbolt and an Ethernet port, since frame compatibility is guaranteed at layer 2 (as long as one sticks with the 'default' MTU of 1500, still the norm these days).

    Currently I believe Apple chose this outdated MTU setting for IP-over-TB because of its customer base, which mostly doesn't consist of network experts: they assume they can mix TB with Ethernet and it should simply work. And currently I believe that's true.

    Best regards,

    Thomas
     
  7. Fontane macrumors regular

    Joined:
    Feb 3, 2011
    #7
    I want to share in on the excitement but have no idea what the heck you guys are talking about.

    Can someone give a brief and simple description of the benefits the OP discusses...and if it's something casual users should do? :)
     
  8. aliensporebomb macrumors 68000

    aliensporebomb

    Joined:
    Jun 19, 2005
    Location:
    Minneapolis, MN, USA, Urth
    #8
    Let me say.....

    This is freaking cool.

    It's cooler than the time I was doing IP over Firewire800 cool.

    Bravo! I want this so bad now! Gotta get a different Mac to get TB though.
     
  9. lancemj macrumors newbie

    Joined:
    Oct 22, 2013
    Location:
    Baltimore
    #9
    Excellent post, thank you!

    Would you please try running the following command to enable jumbo frames?

    Code:
    sudo networksetup -setMTU en3 9000
    I _think_ it should do what you want. I'm really curious to see how hard this spins up the processor and whether that is in any way a bottleneck here.

    Also, I find a few things concerning. First, Thunderbolt is based upon PCI Express Gen2 and at four lanes should be able to deliver 16 Gbit/s. The Thunderbolt Dev Guide on Apple's website explicitly states "A Thunderbolt port or cable provides two 10 Gbps bidirectional links, but these two links cannot be bonded into a single channel. "
    That by itself is interesting, but I'm still trying to figure out how four PCI-E lanes deliver 16 Gbit/s while two bidirectional TB links are able to effectively provide 20 Gbit/s.
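The PCIe half of that arithmetic can be sketched as follows (the 8b/10b encoding factor is the usual explanation for Gen2's usable rate; this is just a back-of-the-envelope calculation, not from the thread):

```shell
# PCIe Gen2: 5 GT/s per lane, 8b/10b encoding leaves 80% as payload,
# so four lanes carry 5 * 4 * 0.8 = 16 Gbit/s of usable bandwidth.
gt_per_lane=5; lanes=4
echo "$(( gt_per_lane * lanes * 8 / 10 )) Gbit/s usable over PCIe Gen2 x4"
```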

    On a side note, the new MacBook Pro has two Thunderbolt 2 ports and _should_ theoretically have a maximum aggregated bandwidth of 40 Gbit/s. That will make for some interesting tests.

    More to come, gotta go...
     
  10. Weaselboy Moderator

    Weaselboy

    Staff Member

    Joined:
    Jan 23, 2005
    Location:
    California
    #10
    Currently, if you want to move large files between machines on a network, your best option is 1 Gb Ethernet. This new Ethernet-over-Thunderbolt connection gives you ten times that speed. It will also be really nice for migrating settings to a new machine with Migration Assistant.
     
  11. Thomas Kaiser macrumors newbie

    Joined:
    Oct 24, 2013
    #11
    Nope. The first thing is that the physical device is attached to a bridge. And then OS X refuses to increase the MTU on the right device:

    Code:
    bash-3.2# networksetup -setMTU en3 9000
    Could not find hardware port or device named en3.
    ** Error: The parameters were not valid.
    
    bash-3.2# networksetup -setMTU bridge0 9000
    Error - 9000 is not in the valid MTU range of 1500-1500
    ** Error: The parameters were not valid.
    No, this is a very common misunderstanding. Thunderbolt is not based on PCIe. It's a brand-new technology based on a switched-fabric architecture providing interchange of packets with very low latency, high throughput and flexible topology. It is able to carry other 'protocols' like DisplayPort or PCIe (if one wants to call PCIe that), but there also exist 'native' TB implementations, and IP over Thunderbolt is such a thing (compare with https://developer.apple.com/library/mac/documentation/HardwareDrivers/Conceptual/ThunderboltDevGuide/Introduction/Introduction.html or https://thunderbolttechnology.net/tech/how-it-works if in doubt).

    It might easily be possible to build Thunderbolt 'switches' that route packets between different hosts (in fact every TB device with 2 ports and daisy-chaining capabilities already acts like this). As far as I currently understand, Apple chose to rely on a point-to-point topology for IP-over-TB in Mavericks, letting 'switching' happen a few layers above (not at the TB layer but inside the OS, as part of the bridging code of the network stack). But until I get a Mac with 10.9 and more than one TB port under my fingers I cannot dig in further.

    But it seems like the TB approach is pretty efficient regarding load generation.

    This is my Mini (Macmini5,3 from 2011) transferring data with standard MTU over TB to a MacBook Air:

    Code:
    bash-3.2# time iperf -c 192.168.1.3
    ------------------------------------------------------------
    Client connecting to 192.168.1.3, TCP port 5001
    TCP window size: 7.99 MByte (default)
    ------------------------------------------------------------
    [  4] local 192.168.1.2 port 49160 connected with 192.168.1.3 port 5001
    [ ID] Interval       Transfer     Bandwidth
    [  4]  0.0-10.0 sec  7.25 GBytes  6.22 Gbits/sec
    
    real	0m10.039s
    user	0m0.063s
    sys	0m1.569s
    This is a more recent mini (Macmini6,2 from 2012) utilizing 10 GbE (using an external TB enclosure and an ATTO-PCIe card connected directly to the 10 GbE port of an ESXi hypervisor with a Solaris VM) with same TCP stack tuning but jumbo frames instead:

    Code:
    metis:~ la$ time iperf -c 192.168.20.1
    ------------------------------------------------------------
    Client connecting to 192.168.20.1, TCP port 5001
    TCP window size: 8.00 MByte (default)
    ------------------------------------------------------------
    [  4] local 192.168.20.150 port 63647 connected with 192.168.20.1 port 5001
    [ ID] Interval       Transfer     Bandwidth
    [  4]  0.0-10.0 sec  6.64 GBytes  5.70 Gbits/sec
    
    real	0m10.034s
    user	0m0.046s
    sys	0m1.979s
    Less work to do per packet (MTU 9000), yet lower throughput and higher load. "IP over Thunderbolt" clearly outperforms it.
     
  12. fortysomegeek thread starter macrumors regular

    Joined:
    Oct 9, 2012
    #12
    Easy. In the past, to get 10GbE speed you needed to spend $500 per card and $2K for a switch. This ultra-fast networking is used by creatives: people who push HD video around from machine to machine, or move any LARGE files. With standard gigabit you have a max top speed of 120 MB/sec, which barely tops out the speed of a regular hard drive. Gigabit networking is not ideal for video production houses.
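For reference, the 120 MB/sec ceiling falls straight out of the units (a rough sketch; real-world numbers are a bit lower because of protocol overhead):

```shell
# Gigabit Ethernet moves 1000 Mbit/s; divide by 8 bits per byte to get
# the theoretical maximum in MBytes/sec. TCP/IP overhead shaves off the
# last few MB/sec in practice, hence the familiar ~120 MB/sec figure.
mbit_per_sec=1000
echo "$(( mbit_per_sec / 8 )) MBytes/sec theoretical ceiling"
```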

    Now, with just OS X 10.9 and a $30-50 Thunderbolt cable, you can get the same file-copying speed as a 10GbE setup. This is nothing short of incredible and shows the real benefits of Thunderbolt first hand. And this is only with first-generation Thunderbolt.

    In short, I can access the SSD or RAID of another computer in excess of 500 MB/second.
     
  13. eagandale4114 macrumors 65816

    eagandale4114

    Joined:
    May 20, 2011
    #13
    I'm really curious as to how TB 2 would affect these rates.
     
  14. Thomas Kaiser macrumors newbie

    Joined:
    Oct 24, 2013
    #14
    Almost twice as fast. But you have to keep in mind that this only works if your Mac spends loads of CPU resources on this sort of network traffic.

    IP over Thunderbolt lacks everything that has been established in _real_ networking technologies in the same speed range (10 GbE or InfiniBand, for example) to free up the CPU cores and let the NIC do the work: checksumming inside the NIC, TSO/LSO (http://en.wikipedia.org/wiki/TCP_segmentation_offload) or even RDMA (http://en.wikipedia.org/wiki/Remote_direct_memory_access).

    With IP over Thunderbolt all the protocol related stuff has to be done by the CPU itself and the operating system. This might affect the Mac's performance when using this sort of networking.

    But hey, it's cheap and current Macs have plenty of CPU resources to spend on things like this :)
     
  15. Embio macrumors member

    Joined:
    Mar 1, 2010
    #15
    I wonder if you could bond thunderbolt links....
     
  16. Thomas Kaiser macrumors newbie

    Joined:
    Oct 24, 2013
    #16
    To do what? Please remember: The topology you will be using is one-to-one. In this scenario bonding/trunking/LAG only gives you redundancy but not more throughput due to the underlying mechanisms. Please compare with page 7 here: http://www.ieee802.org/3/hssg/public/apr07/frazier_01_0407.pdf

    There are some scenarios where certain protocols can make use of parallel links between two devices, e.g. SMB 3.0 with its 'SMB Multichannel' feature introduced in Windows Server 2012. But using 'Apple protocols' like AFP or SMB2 you will only be able to saturate the links of a bond/trunk/link-aggregation group in one-to-many, many-to-many or many-to-one situations. If only two peers are connected with a LAG you will not be able to gain more throughput for a single connection.

    BTW: This 'point-to-point topology' thing leaves room for speculation. Apple's Thunderbolt-based bridge devices announce that they're capable of TCP segmentation and checksum offloading to the 'NIC' ('options=63<RXCSUM,TXCSUM,TSO4,TSO6>' in the ifconfig output). If the Thunderbolt pseudo-NIC behaved like a real NIC it would have to segment the much larger TCP packets the OS hands it into smaller chunks (based on the MTU size). But since with IP over Thunderbolt it is currently guaranteed that the other peer is also a Mavericks machine with the same TCP stack, it might be that they simply don't touch the packets at all and send them straight through to the target (using the OS' VMTU size). Unfortunately I'm missing one of my test machines, otherwise I would have looked at this immediately, since this could be the reason why IP over Thunderbolt traffic creates less load while delivering higher throughput even with an MTU setting of 1500.
     
  17. TsunamiTheClown macrumors 6502a

    TsunamiTheClown

    Joined:
    Apr 28, 2011
    Location:
    Fiery+Cross+Reef
    #17
    So let me see if I have this straight. From what you are reading and seeing, is the following correct?
    IP over TB:
    • Auto negotiation
    • Only Mavericks Peer to Peer connections
    • Uses a modified OSI stack, i.e. incomplete TCP ??
    • Does not require (or support?) switching (Layer 2)

    I have a Mac with 2 TB ports. Has anyone tried daisy chaining two other macs to the two ports and tried to route traffic between the two end hosts?
    From Mac A to Mac C below:

    {Mac A}--------{Mac B}--------{Mac C}

    I am assuming this would be no problem, but the topology is extremely limiting.
     
  18. Embio macrumors member

    Joined:
    Mar 1, 2010
    #18
    you have completely lost me! I stand very much educated
     
  19. ChrisA macrumors G4

    Joined:
    Jan 5, 2006
    Location:
    Redondo Beach, California
    #19
    What about Apple's "Compressor", which can share workload over a network with other Macs? This seems like a good way to add extra compute power to a video transcoding job. I have a quad-core iMac and it might take 3 hours to process a long video. If I connect my MacBook I might get the job down to only 2 hours.

    Compressor is already set up to share processing over a network, and so is Logic. I've never tried this because my network is too slow, but if a $30 cable gives me 10GbE speeds I'll do it.

    As for a Thunderbolt switch: even if such a beast were sold I'd expect a high four-digit price tag. Better to buy a Mac Pro with its six Thunderbolt ports.
     
  20. fortysomegeek thread starter macrumors regular

    Joined:
    Oct 9, 2012
    #20

    I'm going to try that and report back.
    My iMac has two Thunderbolt ports, my Retina MacBook has two, and I have a 13" with a single Thunderbolt port.
     
  21. Thomas Kaiser macrumors newbie

    Joined:
    Oct 24, 2013
    #21
    Please provide us with the complete output of '/sbin/ifconfig' from the Mac in the middle running 10.9. I'm very curious whether both physical TB devices get added to the same bridge device, because then the Mac in the middle would simply act as a software bridge, delivering packets through itself completely transparently to the OS.

    I'm just driving home to try a different setup: Connecting two 10.9 machines to one Thunderbolt display (to see what's happening at the TB layer: whether this setup will also lead to a 'host to host' TB connection between the two Macs)
     
  22. TsunamiTheClown macrumors 6502a

    TsunamiTheClown

    Joined:
    Apr 28, 2011
    Location:
    Fiery+Cross+Reef
    #22
    Awesome, very interested in what you find out!

    I thought about this topology as well. If your setup works, or something similar (I have a Thunderbolt external drive bay with two TB ports), then we may have a more complete implementation of IP over TB than initially appeared. This would be great.

    However, if option 1 (using a Mac as a forwarder) works and the latter does not, the Macs could be doing layer 2 switching, i.e. snooping the MAC addresses and maintaining a table of connected devices. Some managed switches do this. I may try some stuff at home as well...
     
  23. yakovlev, Oct 29, 2013
    Last edited: Oct 29, 2013

    yakovlev macrumors newbie

    Joined:
    Jun 21, 2013
    #23
    Not if you’re using the new Mac Pro with 6 TB2 ports ;) Seriously, this is probably how it will get used near-term, with Mac Pros acting as switches. Long-term, specialized “Thunderbolt IP” switches may appear. I don’t think these are going to be expensive. In fact, they should be cheaper than 10GbE switches because of economies of scale. Intel’s dual-port TB2 controller is just $13. Oh, and these “Thunderbolt IP” switches should totally have 2 10GbE ports. Should sell like hotcakes :)
     
  24. Thomas Kaiser, Oct 30, 2013
    Last edited: Oct 30, 2013

    Thomas Kaiser macrumors newbie

    Joined:
    Oct 24, 2013
    #24
    No :)

    The underlying base technology, Thunderbolt, is able to use different topologies (daisy-chaining, star, tree). It's a host-based technology (like USB) but it's also possible to build data paths between different hosts (not only "one host, many peripheral devices").

    IP over Thunderbolt is such a 'host to host' setup. If you connect two Macs directly then this is a direct TB bus able to utilize the entire bandwidth of one TB channel. If there are other TB devices in endpoint mode in between (daisy chaining), it is still one TB bus, but the maximum speed might depend on how the controller assigns data paths to the available channels: http://www.tomshardware.com/reviews/thunderbolt-performance-z77a-gd80,3205-4.html

    Two remarks:

    - if I understand correctly, with TB2 we don't have 2 independent channels but one delivering twice the bandwidth (hence compatibility with TB1 cables)

    - unlike SCSI or FireWire, the topology on a TB bus is different because it depends on where the upstream host controller is plugged in (the TB ports on devices don't act in pass-through mode; there's an endpoint controller inside each device which connects to the host controller upstream and passes TB packets down the cable on the other port if they need to reach devices further down the chain). So if you have a couple of endpoint devices on the bus (storage devices, a Thunderbolt display) and connect another Mac to the end of this chain, it won't lead to chaos.

    IP over Thunderbolt always utilizes point-to-point connections at the physical layer: if you have a Mac with 2 TB ports and connect another Mac to each of them, then these are two independent TB buses.

    To exchange data over these buses (or machines, since only one other Mac can be active on any bus) Mavericks establishes a bridge device. If only one TB port/bus is assigned to this bridge device then network traffic will only flow between those two Macs, unless you enable packet forwarding (routing) at the TCP/IP layer (sysctl net.inet.ip.forwarding=1), which leads to the OS routing packets between different IP-enabled interfaces if routing is set up properly.

    This applies also to the scenario where you have two TB ports but decide to let them be assigned to two different bridge devices instead of one. In this case there won't be any packet exchange on this mac unless you enable packet forwarding in the OS.
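A minimal sketch of that forwarding setup for a three-Mac chain, assuming the two bridges on the middle Mac carry the hypothetical subnets 192.168.1.0/24 and 192.168.2.0/24 (all addresses here are examples, not taken from the thread):

```shell
# On the Mac in the middle: let the kernel route between its
# IP-enabled interfaces (requires root).
sudo sysctl -w net.inet.ip.forwarding=1

# On one of the end Macs: send traffic for the far subnet via the middle
# Mac's address on the shared bus (192.168.1.1 standing in for it).
sudo route add -net 192.168.2.0/24 192.168.1.1
```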

    Now my assumptions:

    If you have more than one TB port and assign two of them to the same bridge device then something different happens: invisible to the network layer of the OS, a lower layer (driver) acts as a multiport bridge between the two separate TB buses (maybe on layer 2, if it's a real bridge forwarding all sorts of packets, maybe layer 3 based and restricted to IP traffic). Only packets that target the local host will be visible at the bridge device and appear in the network stack of OS X; all other traffic will be bridged between the two otherwise independent TB buses.

    Once the new Mac Pro is available we can have a deeper look at whether this bridging works like a hub (forwarding all packets to all the different TB buses) or like a switch (checking the packet's target MAC address and forwarding it only on the TB bus where that device resides). I would expect the whole thing to work like a switch, because flooding all buses with packets at these speeds would be sort of stupid.
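The hub-versus-switch distinction can be modelled with a toy MAC-learning table in plain shell (all MAC addresses and port numbers below are made up): a switch learns source-MAC-to-port mappings and forwards known destinations on a single bus, while anything unknown gets flooded everywhere, hub-style.

```shell
# Toy learning bridge: 'learn' records MAC=port pairs, 'lookup' prints
# the port for a known MAC or "flood" for an unknown one.
table=""
learn()  { table="$table $1=$2"; }
lookup() {
    for entry in $table; do
        [ "${entry%%=*}" = "$1" ] && { echo "${entry#*=}"; return; }
    done
    echo "flood"
}
learn "aa:bb" 1
learn "cc:dd" 2
lookup "cc:dd"   # known destination: forwarded on one bus only
lookup "ee:ff"   # unknown destination: flooded to every bus
```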

    The bridge interface advertises to the OS that it is capable of large segment offloading and checksum offloading. In traditional Ethernet environments this is done by more expensive NICs to free the CPU from calculating checksums for rather small packets (based on the MTU settings we inherited from the seventies of the last century). When the NIC provides these capabilities the OS can send packets of up to 64 KB to the NIC, and a special engine on the NIC itself segments them down to the MTU value (historically 1500 bytes). The TB bridge device advertises this capability ("options=63<RXCSUM,TXCSUM,TSO4,TSO6>") but won't do any offloading (which would have to be done entirely in software, since there is no special NIC hardware to assist); it simply sends these large packets over the TB buses connected to the bridge device (segmented into smaller TB packets, of course). So even though the device announces an MTU of only 1500 bytes, in fact packets of nearly 64 KB will be transmitted over the wire (VMTU: the virtual MTU from the point of view of the TCP/IP stack of the OS).

    Offloading will only happen if you also assign a traditional Ethernet device to one of the bridge devices, since in this case the lower layers (below/hidden from the TCP/IP stack of OS X) have to re-segment the packets arriving at the bridge device with large MTU values so that they fit the MTU of the LAN/WLAN device that is also part of the bridge.

    Apple chose to use the classical Ethernet frame format for the IP-over-Thunderbolt stuff. But I'm not really sure if it's really "Ethernet over TB" as some people claim (since then the bridge device would have to handle all sorts of different protocols and might even have to change the size of Ethernet frames, which isn't possible at layer 2 to my knowledge). I would believe it's really "IP over Thunderbolt", since packet fragmentation can be done without any problems at layer 3.

    In this case you have 2 independent TB buses: A-B and B-C (daisy chaining is not involved, because that would mean you have devices in endpoint mode in between). If you assign both ports to one bridge device on Mac B then this Mac will act as some sort of multi-port bridge between both buses (maybe on layer 2, forwarding all sorts of Ethernet-like frames, maybe just on layer 3, only handling IP protocol stuff). If this assumption is true then you wouldn't have a chance to sniff, on Host B, any packets sent from A to C or vice versa.

    You spoke about "routing". This would be the case if you assign both TB ports on Mac B to different bridge devices and enable packet forwarding in between. Then every packet between A and C has to pass through the upper layers of OS X's network stack (routing between interfaces).

    BTW: A totally different scenario would also be possible. Today we have only TB endpoint devices with 2 ports, so the only TB topology in the wild is 'daisy chaining' (and the new Mac Pro won't change this, because it is a device with 6 TB controllers in host mode, each defining the endpoint of one independent TB chain/topology). TB could also use different topologies like tree or star. In that case we would see routing/switching of raw packets at the TB layer (neither 'IP packets' nor 'Ethernet frames'; the whole thing is something different happening below).

    I have no idea how "IP over Thunderbolt" would deal with such a situation where on one TB bus connected to the TB port of a Mac is more than one other Mac present.
     
  25. TsunamiTheClown macrumors 6502a

    TsunamiTheClown

    Joined:
    Apr 28, 2011
    Location:
    Fiery+Cross+Reef
    #25
    So did you do this yet or not?
     
