oh of course there is overhead. but how much, that is the question... are we talking half the size of the data packets? twice the size? (this is excluding packet loss of course).
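As a rough back-of-envelope (assuming full-size frames, plain TCP/IPv4 over Ethernet with no options, and no retransmissions), header and framing overhead works out to only a few percent of the payload, not half or double:

```python
# Rough back-of-envelope: protocol overhead for bulk TCP over 100 Mbps Ethernet.
# Assumes full-size 1500-byte frames, no TCP/IP options, no retransmissions.

MTU = 1500                    # Ethernet payload size (bytes)
IP_HEADER = 20                # IPv4 header, no options
TCP_HEADER = 20               # TCP header, no options
ETH_HEADER_FCS = 14 + 4       # Ethernet header + frame check sequence
PREAMBLE_IFG = 8 + 12         # preamble/SFD + minimum inter-frame gap

payload = MTU - IP_HEADER - TCP_HEADER          # 1460 bytes of actual data
on_wire = MTU + ETH_HEADER_FCS + PREAMBLE_IFG   # 1538 bytes occupy the wire

print(f"efficiency: {payload / on_wire:.1%}")   # ~94.9%
print(f"best case on 100 Mbps: {100e6 * payload / on_wire / 8 / 1e6:.1f} MB/s")  # ~11.9 MB/s
```

So in the best case only around 5% of the line rate goes to headers and framing; the bigger losses come from the other factors people are pointing out here.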
If you ignore overhead and have hard drives on both ends capable of sustaining those speeds, you still won't reach 100 Mbps (or 1 Gbps if that's what you're using). You may come close, but there is always going to be something preventing it, whether it's outside noise, a slight defect in the cable, etc. Perhaps if you stuck the entire network inside a Faraday cage so there's no outside interference, and built a high-quality, extremely expensive cable, you could top out at 100 Mbps, but in the real world there is always a limiting factor.
USB 2.0 won't ever reach 480 Mb/s because of the crappy HDDs that are put into those enclosures; same with FireWire 400 & 800, and a similar story with SATA II.
Hard drive speeds are half the problem. Again, take overhead out of the equation and assume you have hard drives capable of reaching those speeds. You still won't. USB 2.0 bursts, meaning it transfers at a slower speed and occasionally bursts up toward 480 Mbps, but the average speed over a transfer is still much lower. FireWire is sustaining, meaning it starts transferring at a higher speed and stays around that speed for the length of the transfer. Even then it will never quite reach 400 (or 800) Mbps, but that is why people claim FireWire 400 is better than USB 2.0 even though USB is faster on paper: average speeds over the course of a transfer tend to be higher (rough numbers below).
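To put some rough numbers on that (this is only the protocol-level ceiling for USB bulk transfers, before controller and drive overhead, so treat the real-world figures as ballpark):

```python
# USB 2.0 high-speed splits the bus into 125 us microframes and allows at
# most 13 full 512-byte bulk packets per microframe, which caps bulk
# throughput well below the headline 480 Mbps.

MICROFRAMES_PER_SEC = 8000    # 1 second / 125 microseconds
MAX_BULK_PACKETS = 13         # max full bulk packets per microframe
BULK_PACKET_SIZE = 512        # bytes per bulk packet

usb_ceiling = MICROFRAMES_PER_SEC * MAX_BULK_PACKETS * BULK_PACKET_SIZE
print(f"USB 2.0 bulk ceiling: {usb_ceiling / 1e6:.1f} MB/s "
      f"({usb_ceiling * 8 / 1e6:.0f} Mbps)")      # ~53.2 MB/s, ~426 Mbps

# FireWire 400 raw signalling rate, before its own (smaller) overhead:
print(f"FireWire 400 raw: {400e6 / 8 / 1e6:.1f} MB/s")   # 50 MB/s
```

In practice a USB 2.0 external drive typically averages somewhere around 30-35 MB/s, while a decent FireWire 400 drive sits a bit higher and just holds it, which is the burst-vs-sustained difference expressed in numbers.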
Misleading? Yeah, but we've become used to it, just like the whole 1,000 vs. 1,024 bytes in a kilobyte thing with hard drive sizes.
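Same idea as the drive-size thing, using a made-up 500 GB drive just to illustrate the gap:

```python
# The "missing" space on a new drive is just decimal vs binary units.
# (Hypothetical example: a drive marketed as 500 GB.)

advertised = 500 * 1000**3          # manufacturers count in powers of 1000
reported = advertised / 1024**3     # most OSes report in powers of 1024

print(f"marketed: 500 GB, OS shows: {reported:.1f} GB")   # ~465.7 GB
```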
anyway, i want to see proof for this... tests would be nice
I don't have any tests, just a Cisco network certification, which means I've studied this and know what I'm talking about. Or at least I have a piece of paper from Cisco that says I do.
