The problem is that you can't physically store your data in base-10-sized chunks. The minimum read/write unit of a hard disk, the sector, is 512 bytes; the situation is made worse by the fact that, for efficiency reasons, most file systems under the sun use at least 2 KB (4 sectors) as the minimum allocation unit for a single chunk of data. That means saving a file of 1 byte or 2000 bytes still costs you a full 2 KB (4 sectors, 2048 bytes) of disk space.
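The rounding described above can be sketched in a few lines (Python; the 2048-byte unit is the figure from this post, not a universal default - many file systems use 4096):

```python
# Sketch of allocation-unit rounding: a file's on-disk footprint is its
# length rounded up to the next multiple of the allocation unit.
ALLOCATION_UNIT = 2048  # bytes (4 sectors of 512 bytes), per the post above

def allocated_size(file_size: int) -> int:
    """Round a file size up to a whole number of allocation units."""
    if file_size == 0:
        return 0
    units = -(-file_size // ALLOCATION_UNIT)  # ceiling division
    return units * ALLOCATION_UNIT

print(allocated_size(1))     # 2048 - a 1-byte file occupies a full unit
print(allocated_size(2000))  # 2048 - still one unit
print(allocated_size(2049))  # 4096 - one byte over spills into a second unit
```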
You see, a base-2 representation of disk space and file size is a lot closer to the reality. Your 1 TB disk is actually 931 GiB.
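For the curious, the 931 figure is just 10^12 bytes re-expressed in binary units - a quick sketch of the arithmetic:

```python
# One "marketing" terabyte (10^12 bytes) expressed in binary units.
tb_in_bytes = 10**12

gib = tb_in_bytes / 2**30  # gibibytes
tib = tb_in_bytes / 2**40  # tebibytes

print(f"{gib:.0f} GiB")   # 931 GiB
print(f"{tib:.3f} TiB")   # 0.909 TiB
```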
I think that your argument is silly.
You realize that you're discussing the difference between 0.9999999980 TB and 0.9999999980 TB, right? (In other words, the numbers are the same even when you use 10 decimal places of precision.)
Similarly, one 2 KB allocation unit is about 0.04% of the size of a 5 MB music file. Again, you'll need lots of decimal places to tell the difference.
And what about file-related meta-data? Currently that overhead is hidden by most systems. If you look at the total size of the files, you'll see that they're smaller than the total space used on the disk. This means that the current listings are already quite inaccurate.
And, to make your case even weaker, isn't the size reported today in binary the actual length of the data in the file, not the allocated size of the file? If any file is shown as anything but even multiples of the 4KiB default allocation block, then it is the actual (decimal) length that's being reported (albeit divided by some power of two to be mis-represented as Kilo or Mega or Giga).
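If you want to check that claim yourself, here's a sketch (Python, assuming a POSIX system - `st_blocks` is not available on Windows): `st_size` is the logical byte length that listings report, while `st_blocks` counts the 512-byte units actually reserved on disk.

```python
# Compare a file's reported (logical) length with its allocated size.
import os
import tempfile

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * 2000)  # a 2000-byte file, like the example above
    path = f.name

st = os.stat(path)
print("logical length:", st.st_size)     # 2000 bytes
print("allocated:", st.st_blocks * 512)  # commonly 4096 or 8192, fs-dependent
os.unlink(path)
```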
Every time you see a 1KiB, 2KiB or 3KiB file - OSX is hiding the "wasted" space in the allocation block. What does it matter if it is hidden in power of 2 or power of 10?
By the way, Windows doesn't use an allocation unit for small files - if the file is small it can be stored entirely with the meta-data, so that your 2KiB rounding error is gone. (For NTFS, the default allocation unit is also 4 KiB.)
I think that it is wrong to misuse a term like Kilo or Giga that is recognized by many international standards bodies, when simply adding a lower-case "i" to the abbreviation brings it into compliance. I think that Apple is doing the right thing to shift to decimal to match the storage and networking and processor people. Let's push to see DIMMs sold in GiB, and caches in MiB, and have truth in labeling.
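To illustrate the two prefix families side by side, a hedged sketch (Python; the helper names `si` and `iec` are mine, not any standard API):

```python
# Format a byte count with SI (power-of-10) and IEC (power-of-2) prefixes.
def si(n: float) -> str:
    """SI prefixes: kB, MB, GB, TB (1000-based)."""
    for unit in ("B", "kB", "MB", "GB", "TB"):
        if n < 1000 or unit == "TB":
            return f"{n:.1f} {unit}"
        n /= 1000

def iec(n: float) -> str:
    """IEC binary prefixes: KiB, MiB, GiB, TiB (1024-based)."""
    for unit in ("B", "KiB", "MiB", "GiB", "TiB"):
        if n < 1024 or unit == "TiB":
            return f"{n:.1f} {unit}"
        n /= 1024

size = 10**12              # one "marketing" terabyte
print(si(size))            # 1.0 TB
print(iec(size))           # 931.3 GiB
```

Same number of bytes, two honest labels - which is all the lower-case "i" asks for.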
By the way, did you know that the typical Ethernet packet is 1500 bytes, and 802.11 WiFi uses 2272 byte packets? What does that do to the "computers only know binary" arguments? And what about punched cards - shouldn't they have been 64 or 128 columns instead of 80?
http://en.wikipedia.org/wiki/MTU_(networking)
Fair enough - but you do see my point - correct? I did forget about THAT version of NT and was thinking of the later versions. Anyway - NT, Windows 95 et al. all handle things like networking traffic, stacks, underlay, and even the basic art of how the OS communicates with the hardware using the same basic hardware interface.
No, I do not agree with your point.
Try to install a Windows 95 device driver on any version of NT - you'll find that "the OS communicates with the hardware using the same basic hardware interface" is only true in the sense that "OSX and Vista communicate with the hardware using the same basic hardware interface". Of course they use the same hardware interface - it's hardware!
The networking stacks are quite different as well. Some protocols are shared (CIFS, for example), but the code is very different.
By the way, what do you mean by the term "underlay"? I'm not familiar with that word used as a noun in operating system context, and web searches for '"operating system" underlay' didn't show any technical definition of the term.
The entire lineage of Windows owes itself to its predecessors - EVOLUTIONARY, not REVOLUTIONARY. I would argue that the move from OS 9 to OS X was REVOLUTIONARY for Apple but EVOLUTIONARY for the user base, as it owes MUCH of itself to BSD - Darwin and the Mach kernel are ALL VARIANTS of BSD, period... albeit with evolutionary enhancements.
Would you not consider that the break from 16-bit DOS and Windows 3.1 to 32-bit Windows NT was a REVOLUTIONARY step for Microsoft? The entire system was replaced with a completely new design that from the very beginning ran on multiple CPU architectures (x86/Alpha/MIPS/N10 initially, then x64/Itanium/SPARC/PowerPC later).
It had a compatibility layer (NTVDM) that was to the user similar to Classic, so that existing programs would run fine. It had a personality layer with a new set of APIs (Win32) that was an extension of the old APIs (again, echoes of Carbon).
The NT operating system doesn't owe anything to any previous Microsoft OS - it started from a clean slate. (Contrast to OSX, which was a port of NextStep with a Mac-like GUI added on.)
Also note that in some ways Windows 95 is a descendant of NT - Windows 95 implemented NT's Win32 API set on a hybrid 16/32-bit system.