View Full Version : load balance file server transfers over 2 NICs ?

Mar 14, 2011, 11:19 AM
Hi guys,
I manage a file server (OS X Server) at an office. It has multiple drives attached to it, basically one per department.
Most of the time it works rather well, but for about 4 days each month a lot of video and massive PSD/TIFF files are transferred.
Since the drives are rather well distributed, I was looking to do some upgrades to the network.
I attached the A/V workstations to a small gigabit switch connected directly to the server; the rest of the office goes through that switch as well, but via cheaper 10/100 switches.

However, I was looking to put in additional NICs to balance the load. Theoretically, the built-in gigabit NIC plus a couple of additional 10/100 cards could give me a nice 20% of extra bandwidth.
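The arithmetic behind that 20% estimate, assuming two hypothetical extra 10/100 cards alongside the built-in gigabit port:

```shell
# Aggregate bandwidth estimate: built-in gigabit plus two 10/100 NICs.
builtin_mbps=1000
extra_mbps=$(( 2 * 100 ))          # two Fast Ethernet cards
total_mbps=$(( builtin_mbps + extra_mbps ))
echo "total: ${total_mbps} Mbit/s"                       # total: 1200 Mbit/s
echo "extra: $(( extra_mbps * 100 / builtin_mbps ))%"    # extra: 20%
```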

Any ideas as to how I would go about doing this in Mac OS X?

Thanks !

Anonymous Freak
Mar 14, 2011, 04:30 PM
To do it properly, you need a switch that supports link aggregation, port bonding, adapter teaming, etc.

And, if you have one Gigabit, and one or more 10/100, that will suck. What happens when the one guy who starts transferring a huge file gets one of the 10/100 NICs?

But, if you really want to, here you go: http://docs.info.apple.com/article.html?path=ServerAdmin/10.6/en/asa7873dc0.html

Mar 14, 2011, 08:52 PM
Very insightful post, was exactly what I was looking for. Thanks !

Mar 15, 2011, 08:55 AM
You could set it up with 2 network cards and 2 local IP addresses, and DNS round robin.
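As a sketch (hostname and addresses are made up), round-robin DNS just means publishing one A record per NIC under the same name, e.g. in a BIND zone file:

```
fileserver  IN  A  192.168.1.10   ; NIC 1
fileserver  IN  A  192.168.1.11   ; NIC 2
```

Clients resolving `fileserver` then get the two addresses in rotating order, which spreads connections across the NICs but does nothing to speed up any single transfer.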

Mar 15, 2011, 06:11 PM
Why would you even consider 10/100 NICs? They cost pennies nowadays. I used to have a pile of them that I found in the trash outside one company, but I got rid of them some time ago. Even gigabit NICs only cost a few dollars these days from Amazon etc.

Good luck with the load balancing / round robin with 2 NICs - it isn't easy to set up if you haven't done it before. And I'm sorry, but the fact that you're talking about scraping together extra bandwidth by teaming a gigabit NIC with a 10/100 NIC suggests you're going to struggle with this whole issue.

The first thing you need to do is put end-to-end gigabit between the server and as many of the media computers as you can. It's relatively cheap and easy. You've said that most of the office is on 10/100 switches and NICs. That's your bottleneck right there. Real-world transfer speed over 100 Mbit is about 5-7 MB per second. Transferring a video is going to take forever.
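To put illustrative numbers on that (the 2 GB file size and the rates are assumptions, not measurements from this network):

```shell
# Transfer time for a 2048 MB file at typical real-world rates.
size_mb=2048
echo "10/100:  $(( size_mb / 6 )) s"    # at ~6 MB/s -> 341 s, nearly 6 minutes
echo "gigabit: $(( size_mb / 70 )) s"   # at ~70 MB/s -> 29 s
```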

The next thing is to look at the real-world data rate your server can handle. Gigabit Ethernet is capable of about 100 MB/second on a good day, and 60-70 MB/second is to be expected in the real world. Are your server drives able to sustain writing 100 MB/second all day long? (sustained, not burst rates)
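One rough way to answer that is with dd (a sketch: the path is a placeholder, and for a meaningful sustained figure you'd write a file much larger than RAM to the actual server volume, to defeat caching):

```shell
# Write 64 MB of zeros and let dd report the rate on completion;
# scale count up (e.g. count=8192 for 8 GB) for a real sustained test.
dd if=/dev/zero of=/tmp/ddtest bs=1048576 count=64
rm /tmp/ddtest
```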

If not, then there's no point adding a second NIC. You need to improve your storage first, and that's full of pitfalls. RAID isn't the be all and end all. I've had several RAID arrays go bad on me, and others suffer from mysterious slowdowns.

tl;dr = Throw out these 10/100 bits and convert your network to full gigabit first.

Mar 19, 2011, 08:49 AM
DNS round robin is not what you want.

To improve server access bandwidth, get a second gigabit NIC for the server that supports LACP and a gigabit switch that supports LACP; you can then "trunk" the two connections together (with a small overhead) to increase your bandwidth.
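For what it's worth, Mac OS X exposes this through ifconfig's bond support as well as through Server Admin. A sketch, assuming the two gigabit ports are en0 and en1 (your interface names may differ, and the switch ports must be configured for LACP first):

```shell
# Create a bonded interface and add both gigabit ports to it.
sudo ifconfig bond0 create
sudo ifconfig bond0 bonddev add en0
sudo ifconfig bond0 bonddev add en1
ifconfig bond0        # shows bond status and member links
```

Note that a single client connection still only uses one physical link; the aggregate helps when several workstations hit the server at once.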


If, however, you are just using individual drives for storage, then you're trying to fix the wrong place; a decent external RAID array will improve performance.

To measure the bandwidth available between the server and the workstations, use iperf. To monitor the bandwidth actually being used on the server, look at the bandwidth meter. If the bandwidth used on the server during your peak times is not as good as what iperf reports, then adding more NICs won't fix it, but adding an external array might well.
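A minimal iperf run looks like this (illustrative only - it needs a server/workstation pair on your actual network, and `fileserver.local` is a placeholder for your server's address):

```shell
# On the server:
iperf -s

# On a workstation, run a 30-second throughput test against it:
iperf -c fileserver.local -t 30
```

The client prints the achieved throughput, which you can then compare against what the server's bandwidth meter shows during your busy days.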