
disconap
I have a 2018 Mac mini with a 2.5GbE USB-C Ethernet adapter connected to a 2.5GbE port on an unmanaged 10GbE switch, which connects to my server (the server has a 10GbE card installed and is plugged into one of the switch's 10GbE ports). I wanted to see if I could increase transfer speeds by merging the onboard Ethernet (connected to a standard 1GbE port on the same switch) with the adapter. I used this link to get started.

THEORETICALLY it worked, and Activity Monitor showed a definite increase in transfer speeds; however, timing the transfer told a different story. First round of results is listed below.

File test: 12.67GB .mkv video file

2.5GbE only:
varying between 285-335MB/s in Activity Monitor
transfer time: 0:48.60

2.5GbE bridged with gigabit NIC:
varying between 325-450MB/s in Activity Monitor
transfer time: 1:51.83
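
A quick sanity check on those wall-clock times (a minimal sketch; the only real inputs are the 12.67GB file size and the two timings above, treated as decimal units throughout):

```python
GB = 1000**3  # decimal gigabytes, as Finder reports them
MB = 1000**2

size_bytes = 12.67 * GB

runs = {
    "2.5GbE only":       48.60,         # 0:48.60
    "bridged with 1GbE": 60 + 51.83,    # 1:51.83
}

for name, seconds in runs.items():
    print(f"{name}: {size_bytes / seconds / MB:.0f} MB/s")

# 2.5GbE only:       ~261 MB/s -- close to 2.5GbE's ~312 MB/s line rate
# bridged with 1GbE: ~113 MB/s -- right around 1GbE's ~125 MB/s ceiling
```

The bridged run working out to ~113MB/s, essentially a single gigabit link's ceiling, suggests the whole transfer ended up riding the 1GbE member rather than both links.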

Clearly something is very, very off, and I'm wondering whether it's even worth continuing with tests. I know that on the server/Unraid side, combining connections like this requires a managed switch, but most of what I've read about macOS said it was all done in software. Has anyone had any luck bridging multiple outputs? It's really not a crucial thing, but since the M1 I ordered requires a combo monitor hub, and that hub has, yup, yet another Ethernet port, I thought maybe I could bridge all three, but it's not seeming too likely...
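
For what it's worth, macOS does build the aggregate entirely in software, via ifconfig's bond device. A minimal sketch of that setup, wrapped in Python here just to annotate the steps; the member interface names en0 and en5 are assumptions (check networksetup -listallhardwareports for the real ones):

```python
import subprocess

# Assumed interface names: en0 = built-in gigabit port, en5 = the USB-C
# 2.5GbE adapter. Verify with `networksetup -listallhardwareports`.
MEMBERS = ["en0", "en5"]

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run("sudo", "ifconfig", "bond0", "create")            # create the virtual bond
for dev in MEMBERS:
    run("sudo", "ifconfig", "bond0", "bonddev", dev)  # add each member link
run("sudo", "ifconfig", "bond0", "up")

run("sudo", "ifconfig", "bond0")                      # inspect bond status
```

Note that macOS bonds negotiate IEEE 802.3ad LACP, so the switch ports need to be configured as an LACP group; on an unmanaged switch there's nothing on the other end to negotiate with, which may be part of why the timed results above got worse.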

EDIT: So apparently I asked a pre-attempt version of this question a couple of months ago but didn't see that someone had answered it. According to them, it will increase the overall throughput across simultaneous requests but won't speed up the transfer of a single file. Will this still be the case if I replace my switch with a managed switch? Would static link aggregation actually achieve a faster transfer rate?
 

Just wondering... would buying a Thunderbolt -> 10GbE adapter (QNAP makes one) be cheaper than replacing your switch with a managed one?

It'd be twice as fast as what you're optimally hoping to achieve.
 
This is called "link aggregation" or "bonding", not "bridging" (which means something completely different) or "merging". Knowing that might help with Google searches, etc.
As you already heard elsewhere, it is most often done on a server's connection to the network so that total throughput to all clients is increased.
You *can* increase throughput to a single client with bonded links if everything is set up correctly. I haven't played with this stuff in a while, but my recollection is that you need LACP on a managed switch, as well as drivers on the server and client that support LACP and "round-robin" distribution. You'll also probably want to use jumbo frames, and make sure those jumbo frames aren't being fragmented by anything between the server and the client.
Unfortunately, all these changes that make file serving much faster can interfere with regular traffic, requiring a bunch of other tuning and increasing the time and trouble of maintaining your network. I would suggest that it isn't worth the trouble just to add 1Gbps, and that nicho is correct in suggesting a 10GbE adapter.
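
To make the "won't speed up a single file" point concrete: standard LACP distributes traffic per flow, typically by hashing the connection's addresses and ports, so any one TCP connection sticks to one member link. A toy illustration (the hash inputs and link names are made up, not any particular switch's algorithm):

```python
# Toy model of per-flow link selection in a 2-link LACP bond.
links = ["1GbE", "2.5GbE"]

def pick_link(src_ip, dst_ip, src_port, dst_port):
    # Real switches hash some mix of MACs/IPs/ports; the exact mix varies
    # by vendor. Python's hash() is salted per process but stable within
    # a run, which is all this illustration needs.
    flow = (src_ip, dst_ip, src_port, dst_port)
    return links[hash(flow) % len(links)]

# A single SMB file copy is one TCP connection -> one flow -> one link,
# no matter how large the file is:
print(pick_link("10.0.0.2", "10.0.0.10", 52311, 445))

# Many parallel connections spread across the links, which is why
# aggregate throughput to multiple clients improves:
for port in range(52300, 52310):
    print(port, "->", pick_link("10.0.0.2", "10.0.0.10", port, 445))
```

Round-robin distribution, where supported, does split one flow across links, but it can reorder packets, which is part of the extra tuning mentioned above.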
 
Just wondering... would buying a Thunderbolt -> 10GbE adapter (QNAP makes one) be cheaper than replacing your switch with a managed one?

It'd be twice as fast as what you're optimally hoping to achieve.

It would be a lot faster, but unfortunately my current switch only has two 10GbE ports, both SFP+ (not a real problem, but it requires an adapter, so another part to buy), and I need the second one for an external hookup for the office I'm building this summer, so ultimately I would need a new switch anyway if I went 10GbE. Plus 10GbE adapters are themselves pretty expensive, and most of the ones I've read about have serious heating issues...

The managed version of my switch is only an extra $30 or so, plus shipping to return my current one (COVID-extended returns mean that even though I bought it a while ago, I can still return it until 1/31). Ultimately none of this is necessary, which was the real reason for the post; if I CAN get a huge speed bump for minimal cost, I'm game, but I also don't want to waste time and money if it's not going to work. I will eventually be upgrading all this gear as well, so spending 3-4x what I'd otherwise spend in a year isn't all that appealing either. ;)

EDIT: I should also note that this is mostly for file transfers and, occasionally, swapping over to a render VM. Most of my active work is done on a local USB-C NVMe drive; I'm not trying to edit 8K video directly off the server or anything.
 
You could also get that capability in a different form: the OWC Thunderbolt 3 Pro Dock, if you want 10GbE on a dock!

I had looked at that, but since all my other ports are already sorted, it's too pricey at the moment. If I didn't already have the other adapters sorted I would totally go that route; too high a price tag for me right now, but an awesome dock based on the specs...
 
Also, since my hunch has been confirmed, I'm gonna go ahead and mark this answered or whatever. Thanks for the input, all!
 