had a nice big long reply but my router conked out and made me lose my page when it went to the error page! ugh
question: when observing the 'transmission rate' of data (e.g. copying files between two computers on the same network), does it include header data etc., or is that not included in the rate? if it's included, then there is basically little overhead. if the latter, then yeah, there would be more overhead.
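For a rough sense of scale on that question: with standard Ethernet and TCP/IP header sizes, the per-frame overhead on a full-size frame is only a few percent. A minimal sketch (the frame-field sizes are standard Ethernet/IPv4/TCP constants; the 100 Mbps link rate is just an assumption for illustration):

```python
# Rough goodput estimate for a full-size TCP transfer over Ethernet.

LINK_RATE_MBPS = 100          # assumed Fast Ethernet link

MTU = 1500                    # Ethernet payload bytes per frame
ETH_HEADER = 14               # dest MAC + src MAC + EtherType
FCS = 4                       # frame check sequence
PREAMBLE = 8                  # preamble + start-of-frame delimiter
IFG = 12                      # inter-frame gap, in byte times
IP_HEADER = 20                # IPv4 header, no options
TCP_HEADER = 20               # TCP header, no options

wire_bytes = MTU + ETH_HEADER + FCS + PREAMBLE + IFG   # 1538 bytes on the wire
payload = MTU - IP_HEADER - TCP_HEADER                 # 1460 bytes of file data

efficiency = payload / wire_bytes
print(f"protocol efficiency: {efficiency:.1%}")                     # 94.9%
print(f"best-case goodput: {LINK_RATE_MBPS * efficiency:.1f} Mbps") # 94.9 Mbps
```

So if the displayed rate counts only file data (goodput), roughly 95% of a 100 Mbps link is the theoretical ceiling before any real-world losses are considered.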
> I ran a test on a corporate 10/100 Cat6 network and here is what I got:
>
> Downloading a file from the server to my system over Cat6: 95,875 kbps, which is 11.70349 MBps. That translates to 93.62792 Mbps, 6.37208 Mbps shy of capping 100 Mbps.

good results, shows that it can happen
> If you ignore overhead and have hard drives on both ends capable of sustaining those speeds, you still won't reach 100 Mbps (or 1 Gbps, if that's what you're using). You may come close, but there is always going to be something preventing it, whether it's outside noise, a slight defect in the cable, etc. Perhaps if you stuck the entire network inside a Faraday cage so there's no outside interference, and built a high-quality, extremely expensive cable, you could peak at 100 Mbps, but in the real world there is always a limiting factor.

i am not sure about the failure rate of data sent because of 'interference' in the average building, but i doubt it would be limiting things by as much as you are saying. maybe 20%?
> Hard drive speeds are half the problem. Again, take overhead out of the equation and assume you have hard drives capable of reaching those speeds. You still won't. USB 2.0 bursts, meaning it transfers at a slower speed, occasionally bursting up toward 480 Mbps, but the average speed over a transfer would still be much lower. FireWire is sustaining, meaning it starts transferring at a higher speed and stays around that speed for the length of the transfer. Even then, it will never quite get up to 400 (or 800) Mbps, but that is why people claim FireWire 400 is better than USB 2.0 even though USB is faster on paper: average speeds during a transfer tend to be higher.

lets not get into those other interfaces, id rather we keep this purely ethernet based.
> Misleading? Yeah, but we've become used to it, just like the whole 1,000 vs 1,024 bytes in a kilobyte for hard drive sizes.

no, not misleading at all. you're not exactly talking to a git here
> I don't have any tests, just a Cisco network certification, which means I've studied this and know what I'm talking about - or at least I have a piece of paper from Cisco that says I do.

i have a few M$ networking certificates and the like, but eh - what do they even mean?
> This thread cracks me up.
>
> In networking, the best you can have in actual speeds is 80% of the theoretical. The reason is that before and after each byte sent there is a start bit and a stop bit. With 8 bits in a byte, that equates to 10 bits sent for every byte, or 80% efficiency.

i don't think we are referring to the physical layer here, but rather the link layer.
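The 80% figure in the quoted post describes asynchronous serial framing (one start bit and one stop bit around each 8-bit byte, as on a UART); Ethernet does not frame individual bytes this way, but the arithmetic behind the claim itself is:

```python
# Per-byte framing overhead as described in the quoted post: one start
# bit and one stop bit around every 8 data bits. This is asynchronous
# serial (UART-style) framing - Ethernet does not frame bytes this way.

DATA_BITS = 8
START_BITS = 1
STOP_BITS = 1

bits_on_wire = START_BITS + DATA_BITS + STOP_BITS   # 10 bits per byte
efficiency = DATA_BITS / bits_on_wire
print(f"{efficiency:.0%} efficiency")               # 80% efficiency
```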