This is a technical question out of pure curiosity. I could probably dig through the documentation, but that would probably take longer than my curiosity would hold out, so I figured "why not ask?"

I occasionally switch my home laptop between wireless (g) and wired (gigabit) networking when I want the extra speed for accessing large files. I'd noticed before that this was pretty transparent, but for the heck of it I ran a test today: I connected to my home server (10.4's built-in AFP sharing, gigabit to a new Airport hub) over wireless and started playing a medium-bandwidth (1.5 Mbit/s) video file. With it playing, I plugged in the wired Ethernet, waited a bit for it to get an IP address, then turned off the Airport card. I was honestly surprised that the video didn't even stop playing (and not due to caching), so the transition was obviously transparent enough that the app never stalled long enough to disrupt playback. Switching back worked as well--in fact, I could tell it was changing, because over wireless there were slight glitches in playback, which I assume were due to the way the player (mis-)handled preloading on a relatively slow connection.

Which got me wondering: how exactly does OS X prioritize network traffic when there are two available paths? Does everything go to the faster one, are some connections prioritized over others based on hardware, or does it spread traffic around? Also, given two alternate paths to the internet, what kind of load-sharing (if any) will the OS do by default? I'd always assumed you'd need a special router to combine two separate internet connections (say, cable and DSL), but this got me wondering whether OS X, given two network paths, would do some of that load-sharing by itself. I of course don't have two internet connections in the same place to try this with.

Anybody with more serious network experience able to provide some more details?
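For concreteness, here's my rough mental model of what I think is going on, as a toy sketch (definitely not OS X's actual code, and the table entries and priorities are made up): each route in the kernel's routing table has a destination prefix, and the most specific matching prefix wins; ties seem to be broken by something like the port-configuration order in the Network preference pane. Under that model, all traffic to a given destination follows one path at a time rather than being split across both interfaces.

```python
import ipaddress

# Hypothetical routing table: (destination network, interface, priority).
# Lower priority number = preferred, like the port configuration order.
ROUTES = [
    (ipaddress.ip_network("10.0.1.0/24"), "en0 (Ethernet)", 1),  # wired LAN
    (ipaddress.ip_network("10.0.1.0/24"), "en1 (AirPort)",  2),  # wireless LAN
    (ipaddress.ip_network("0.0.0.0/0"),   "en0 (Ethernet)", 1),  # default via wired
    (ipaddress.ip_network("0.0.0.0/0"),   "en1 (AirPort)",  2),  # default via wireless
]

def pick_interface(dest: str) -> str:
    """Longest-prefix match; ties broken by lowest priority number."""
    addr = ipaddress.ip_address(dest)
    candidates = [r for r in ROUTES if addr in r[0]]
    # Most specific prefix first, then most-preferred (lowest) priority.
    net, iface, prio = max(candidates,
                           key=lambda r: (r[0].prefixlen, -r[2]))
    return iface

print(pick_interface("10.0.1.5"))     # LAN destination -> wired wins the tie
print(pick_interface("17.254.0.91"))  # internet destination -> wired default route
```

If this model is right, it would explain what I saw: when the wired route appeared, new traffic preferred it, and when AirPort went away its routes simply dropped out of the table--no per-packet load-sharing involved. But I'd love someone who actually knows the routing code to confirm or correct this.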