When I received my CNE and MCSE we were told differently, as do many OS books, since they are ALL based on the same underlay. I am not arguing that they are identical, but they all share the same lineage. Are they the same? NO! Are they still based on overlay and arcane commands that date back to Windows 95 and Windows 2.X? YES!

D

Wait, so Win 9x and Win NT share the same kernel (underlay)? I figured they share the same GUI, but that was it. The underlying technology was different.
 
Nope. Windows NT has nothing to do with Windows 9x. It's a completely different system, like OS X is different from OS 9. NT just happened to share the same userland as 9x.

When I received my CNE and MCSE we were told differently, as do many OS books, since they are ALL based on the same underlay. I am not arguing that they are identical, but they all share the same lineage. Are they the same? NO! Are they still based on overlay and arcane commands that date back to Windows 95 and Windows 2.X? YES!

D

Wait, so Win 9x and Win NT share the same kernel (underlay)? I figured they share the same GUI, but that was it. The underlying technology was different.

The NT kernel and native userland have nothing to do with DOS/Win9x.

The Win32 APIs (from 9x) are a layer on top of native NT, and implement what was initially compatible with the 9x APIs, so that the Windows GUI and 9x applications could be re-hosted on NT without much effort. (By now, the NT Win32 API set has evolved - taking a recent NT program back to 9x would be very difficult.)

In the early days, NT supported a POSIX layer and an OS/2 layer on top of native NT. Those have fallen away due to lack of use. http://en.wikipedia.org/wiki/Microsoft_POSIX_subsystem

There's no DOS in NT. There is a command shell - a userland program that interprets a superset of the DOS commands that were supported by DOS/9x. Several other shells exist for NT - PowerShell, VBS, bash, tsh, .... Like any shell on a UNIX or other system, these are just userland command interpreters written to the APIs underneath. While you can type a command that looks like a DOS command into the shell, there's no DOS behind the shell - just an NT userland program that parses the commands and calls the Win32 APIs to get the desired effect.
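To make the point concrete, here is a minimal sketch (in Python, purely for illustration) of what a shell really is: an ordinary userland program that parses command text and calls the platform APIs underneath. The DOS-style command names (`dir`, `cd`, `type`) are just strings the interpreter recognizes; there is no DOS anywhere behind them.

```python
import os

def run_command(line: str) -> str:
    """Toy command interpreter: DOS-looking commands, but no DOS behind
    them -- just ordinary calls into the OS APIs (here, Python's os module)."""
    parts = line.split()
    if not parts:
        return ""
    cmd, args = parts[0].lower(), parts[1:]
    if cmd == "dir":                       # list directory contents
        path = args[0] if args else "."
        return "\n".join(sorted(os.listdir(path)))
    if cmd == "cd":                        # change or report working directory
        if args:
            os.chdir(args[0])
        return os.getcwd()
    if cmd == "type":                      # print a file's contents
        with open(args[0]) as f:
            return f.read()
    return f"'{cmd}' is not recognized"

print(run_command("cd"))  # behaves like DOS 'cd' with no DOS underneath
```

The same structure describes cmd.exe, PowerShell, or bash: parse, dispatch, call the underlying API.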

The "native APIs" for NT are undocumented, although much is known about them.

http://www.amazon.com/Windows-2000-Native-API-Reference/dp/1578701996
 
So was that "you gain 6GB after installing Snow Leopard" a lie? And how exactly will this work? If I download an application from the internet which is 297MB, is it going to show up on my system as 320MB? (Using the same numbers so I don't have to do any conversions myself :p)

I think the 6 Gig coming back is at least partially due to the removal of fat binaries in favor of Intel only binaries.
 
The userland was almost the same. Explorer, IE, etc.

The native userland in NT has nothing to do with the 16-bit and 16/32-bit worlds of DOS and Win9x.

The Win32 subsystem is a native layer that implements the Win32 API set. Any 32-bit Win32 app sits above this layer.

You are right that the Win32 applications in early versions of NT were similar or identical to the Win9x applications. Obvious, since complete binary compatibility was a goal of the Win32 subsystem.

I said "native" though. "Win32" is a "persona" layered on top of NT - it's not real NT.

If you have some familiarity with NT's architecture you are probably aware that the API that Win32 applications use isn't the "real" NT API.

NT's operating environments, which include POSIX, OS/2 and Win32, talk to their client applications via their own APIs, but talk to NT using the NT "native" API. The native API is mostly undocumented, with only about 25 of its 250 functions described in the Windows NT Device Driver Kit.

http://technet.microsoft.com/en-us/sysinternals/bb897447.aspx
 
Good move by Apple

it's kind of stupid to switch from base 2 to base 10. it actually IS base 2 and apple won't change that.

however, the fact that most users still think 1024 = 1000 may have driven apple to that step...

False. It *really is* base 10.

A megabyte = 1000 kilobytes (1,000,000 bytes). A mebibyte = 1024 kibibytes (1,048,576 bytes). The hard disk manufacturers have actually been calculating it correctly, and OSes have been calculating disk space incorrectly.
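The arithmetic behind this, as a quick sketch (decimal SI units versus the binary IEC units):

```python
KB, MB, GB, TB = 10**3, 10**6, 10**9, 10**12   # SI (decimal) units
KiB, MiB, GiB = 2**10, 2**20, 2**30            # IEC (binary) units

assert MB == 1000 * KB      # a megabyte is 1000 kilobytes
assert MiB == 1024 * KiB    # a mebibyte is 1024 kibibytes

# Why a "1 TB" drive shows up as ~931 "GB" in an OS that divides by 2**30:
print(round(1 * TB / GiB, 2))   # -> 931.32
```

Both sides of the argument are doing consistent arithmetic; they just disagree on the divisor.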
 
Wait, so Win 9x and Win NT share the same kernel (underlay)? I figured they share the same GUI, but that was it. The underlying technology was different.

I should clarify - I did not say they were the same but they ARE evolutionary changes to the same basic underlay - albeit with changes. The OS is not a radical departure from ANY of the Windows versions - they all still use registries, they all access files in a similar way and journal files in similar ways - the kernels ARE different but they are still based on the same principles ALL versions of Windows are - GUI and UNDERLAY. I repeat - they are NOT the same but they are NOT radical departures from the original Windows 95 iteration. Is that bad? I do not know - but OS X IS a radical departure from OS 9 and all other variants before.

NT is a bit of a different story but it still owes much of its lineage to 95 et al.

D
 
Just wanted to correct this…



Nope. Windows NT has nothing to do with Windows 9x. It's a completely different system, like OS X is different from OS 9. NT just happened to share the same userland as 9x.

That is what I said! I never said they were the same but they DO share many features and many similarities in operation. They share more than just the GUI and user underlay. I NEVER posted that the kernels were the same but they share much of the same methodology. NT was a departure from 95 et al. but it was not revolutionary, as they still handle many operations the SAME WAY albeit with different code and operations - the process is close enough to owe its heritage to the 95 days. OS X does NOT owe anything to OS 9. It was a clean slate with no preconceptions or rules to abide by.

D
 
NT is a bit of a different story but it still owes much of its lineage to 95 et al.

D

NT was a departure from 95 et al. but it was not revolutionary, as they still handle many operations the SAME WAY albeit with different code and operations - the process is close enough to owe its heritage to the 95 days.

Interesting - but how do you explain the fact that NT was released in 1992 - 3 years before the Windows 95 that you claim NT descends from? :eek:

Correction: NT was released in July 1993, two years before the August 1995 release of Windows 95. I had Alpha and Beta versions in 1992, and confused the date.
 
It's really funny that all you guys are freaking out about this... it's really not that big a deal... and it may be the first step to getting other OSes to do the same... personally I look forward to my TB drives showing 1TB rather than 931GB.

The problem is that you can't physically store your data in 10-based sizes. The minimum read/write unit of a hard disk, the sector, is 512 bytes; the situation is worsened in that, for efficiency reasons, most file systems under the sun use at least 2KB (4 sectors) as the minimum allocation unit for storing a single data chunk, meaning that you lose a full 2KB (4 sectors, 2048 bytes) of disk space after saving a file of 1 byte or 2000 bytes.

You see, a 2-based representation of disk space and file size is a lot closer to the reality. Your 1TB disk is actually 931GB.
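The allocation-unit overhead described above is easy to sketch. Assuming a 4 KiB allocation block (a common file system default; the post above uses 2 KB, but the idea is the same), the space a file actually occupies is its length rounded up to the next whole block:

```python
BLOCK = 4096  # assumed allocation unit (4 KiB); adjust per file system

def allocated_size(length: int) -> int:
    """Disk space actually consumed: file length rounded up to a whole block."""
    if length == 0:
        return 0
    return ((length + BLOCK - 1) // BLOCK) * BLOCK

print(allocated_size(1))      # -> 4096: a 1-byte file still takes one block
print(allocated_size(2000))   # -> 4096
print(allocated_size(4097))   # -> 8192: one byte over spills into a second block
```

Note that this rounding happens identically whether the total is then displayed in decimal or binary units.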
 
That's a man-made artificial distinction. As a matter of fact, sectors on a NAND Flash chip (for example) physically contain a (power-of-two + some overhead) number of bytes. For example, instead of 512 bytes (2^9), a typical NAND Flash sector (the smallest physically addressable and erasable space within the hardware structure) might contain 528 bytes. Of this space, filesystem formatting and error correction will take up some or all of these extra 16 bytes, leaving about 512 bytes available for data.

So you see, the power-of-two groupings are already quite arbitrarily chosen. In general, when you're talking about mass file storage, whether you choose to represent those bytes as being grouped together in base-2 sized clumps or base-10 sized clumps is not based on any physical reality. As long as the chosen definition is publicized clearly to the user, and deployed consistently within any given operating environment, I frankly don't care what definition they use.

You are totally wrong.

You have no sense of 'binary' in just the same way that most computers have no sense of denary.

In a word, if disk space can't be read/written in units of 500 bytes, 1000 bytes or 1,000,000 bytes, what is the reasoning for presenting disk space in powers of 10?
 
Are you serious?? The list is huge
Windows shows the space available on a drive in a clear graphical display, with easy-to-read figures for drive size and free space.

Windows shows how many files ...

Those are mostly true, but I would say that, with all of these advantages, Windows Explorer is no better than Finder at serious housekeeping; both suck.

Yes, for this 'daunting' task of housekeeping my own data, I need to fire up VMware Fusion to boot my beloved Kubuntu Linux to run Konqueror, Krusader or 'Midnight Commander'.
 
On a technical note, "cut" is a move operation, not a delete/create operation (as it may seem). When you cut/paste a file, all the system does is change the reference pointer. The actual file on the drive is unmoved. Another reason why there is no way it could be lost.
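A sketch of the distinction: within one volume a move is just a metadata update (a rename), while a move across volumes degrades to copy-then-delete. In Python terms (assuming a local filesystem; `os.rename` raises `OSError` across devices, at which point `shutil.move` falls back to copying):

```python
import os
import shutil
import tempfile

def move(src: str, dst: str) -> None:
    """Move a file: cheap rename on the same volume, copy+delete otherwise."""
    try:
        os.rename(src, dst)     # same volume: just relink the directory entry
    except OSError:
        shutil.move(src, dst)   # cross-volume: copy the data, then delete

# Demo: a same-volume move leaves the data blocks untouched and relinks them.
d = tempfile.mkdtemp()
src, dst = os.path.join(d, "a.txt"), os.path.join(d, "b.txt")
with open(src, "w") as f:
    f.write("hello")
move(src, dst)
print(os.path.exists(src), os.path.exists(dst))  # -> False True
```

Either way the operation is not delete-then-create from the user's point of view: the data exists at all times during the move.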

Does it work the same way when you 'move' a file between folders on the same networked drive (from 'z:\folderA' to 'z:\folderB')?
 
Actually, it is not. Computer memory is the only area where the letters K, M and G have been incorrectly used to mean 1,024, 1,024 x 1,024 and 1,024 x 1,024 x 1,024.
Where does the system need to know about base 2? I think the system doesn't actually care.
You don't know what you are saying.

Everything in most computer systems is binary-based, down to the CPU instruction set, code/data registers, caches and buses.

The disk platter is also binary-based, from sector size to cylinder number. Why? Because the controller, buffer cache and bus are all binary-based.

Mind you, most embedded controllers, such as those used in a disk, know nothing about decimal. For example, you need to be very good at programming in the assembly language of the target CPU to perform multiplication or division in decimal.

At the lowest level, you will have a byte offset relative to the start of the hard drive; that needs to be translated into a platter/track/sector number, which is quite complicated because the number of sectors per track is different from track to track. There is no need for binary arithmetic at all.
All those calculations are simple binary operations like shifting or rotating bits left/right, plus a few inefficient adds/subtracts if absolutely needed.
The rule of thumb in OS-level programming is to stick to binary calculation.
 
I've written up a little rant about the base-10 change (spoiler: I think it's a good thing).

Your reasoning: "Okay, so the "real" storage space taken by a 123456789-byte file is actually 123457536 bytes, but that's still a lot closer to 123.4MB than it is to 117.7MB!"

Why does the file manager have to present a file of this size in bytes? Why not just say it's 117.7 MB in the first place?

And what about 1000 or 10,000 JPEG files of 200,000 bytes each? You can't simply multiply 1000 by 200K to get the total required disk space.
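To make the two examples above concrete (assuming the 4 KiB allocation unit discussed earlier in the thread):

```python
BLOCK = 4096  # assumed allocation unit (4 KiB)

def allocated(n: int) -> int:
    """Round a file length up to a whole allocation block."""
    return ((n + BLOCK - 1) // BLOCK) * BLOCK

size = 123456789
print(round(size / 10**6, 1))     # -> 123.5 (decimal MB)
print(round(size / 2**20, 1))     # -> 117.7 (binary MiB)
print(allocated(size))            # -> 123457536 bytes actually occupied

# 1000 JPEGs of 200,000 bytes each: naive multiply vs. allocated total
print(1000 * 200_000)             # -> 200000000
print(1000 * allocated(200_000))  # -> 200704000 (each file wastes 704 bytes)
```

Note the rounding error is under half a percent either way, which is the counter-argument made elsewhere in the thread.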
 
Interesting - but how do you explain the fact that NT was released in 1992 - 3 years before the Windows 95 that you claim NT descends from? :eek:

Correction: NT was released in July 1993, two years before the August 1995 release of Windows 95. I had Alpha and Beta versions in 1992, and confused the date.

Fair enough - but you do see my point - correct? I did forget about THAT version of NT and was thinking of the later versions. Anyway - NT, Windows 95 et al. handle things like networking traffic, stacks, underlay and even the basic art of how the OS communicates with the hardware using the same basic hardware interface.

It has been a LONG TIME since I have done anything serious or any programming on any Windows platform - but from what I remember NT through XP all had the same basic GUI (sans the ORIGINAL NT) and hardware interface, and it was similar in the APIs, registry and others. I did not ever say they were the same - all you (and everyone else) can do is pick apart my every sentence yet not comment on the basic premise of my argument. The entire lineage of Windows owes itself to its predecessor - EVOLUTIONARY and not REVOLUTIONARY. I would argue that OS X from OS 9 was REVOLUTIONARY for Apple but EVOLUTIONARY for the user base, as it owes MUCH of itself to BSD, and Darwin and the Mach kernel are ALL VARIANTS of BSD - period...although evolutionary enhancements.

That was the only point I was trying to make - I am FAR from an expert on Windows but I was heavily certified and worked for many large Fortune 500 companies and even the DoD, so I feel confident in the premise of my argument - even though I may not remember every minute detail on Windows.

D
 
The problem is that you can't physically store your data in 10-based sizes. The minimum read/write unit of a hard disk, the sector, is 512 bytes; the situation is worsened in that, for efficiency reasons, most file systems under the sun use at least 2KB (4 sectors) as the minimum allocation unit for storing a single data chunk, meaning that you lose a full 2KB (4 sectors, 2048 bytes) of disk space after saving a file of 1 byte or 2000 bytes.

You see, a 2-based representation of disk space and file size is a lot closer to the reality. Your 1TB disk is actually 931GB.

I think that your argument is silly.

You realize that you're discussing the difference between 0.9999999980 TB and 0.9999999980 TB, right? (In other words, the numbers are the same even when you use 10 decimal places of precision.)

Similarly, one allocation unit is about 0.01% of the size of a 5 MB music file. Again, you'll need lots of decimal places to tell the difference.

And what about file-related meta-data? Currently that overhead is hidden by most systems. If you look at the total size of the files, you'll see that they're smaller than the total space used on the disk. This means that the current listings are already quite inaccurate.

And, to make your case even weaker, isn't the size reported today in binary the actual length of the data in the file, not the allocated size of the file? If any file is shown as anything but even multiples of the 4KiB default allocation block, then it is the actual (decimal) length that's being reported (albeit divided by some power of two to be mis-represented as Kilo or Mega or Giga). Every time you see a 1KiB, 2KiB or 3KiB file - OSX is hiding the "wasted" space in the allocation block. What does it matter if it is hidden in power of 2 or power of 10?

By the way, Windows doesn't use an allocation unit for small files - if the file is small it can be stored entirely with the meta-data, so that your 2KiB rounding error is gone. (For NTFS, the default allocation unit is also 4 KiB.)

I think that it is wrong to misuse a term like Kilo or Giga that is recognized by many international standards bodies, when simply adding a lower-case "i" to the abbreviation brings it into compliance. I think that Apple is doing the right thing to shift to decimal to match the storage and networking and processor people. Let's push to see DIMMs sold in GiB, and caches in MiB, and have truth in labeling.

By the way, did you know that the typical Ethernet packet is 1500 bytes, and 802.11 WiFi uses 2272 byte packets? What does that do to the "computers only know binary" arguments? And what about punched cards - shouldn't they have been 64 or 128 columns instead of 80? http://en.wikipedia.org/wiki/MTU_(networking)


Fair enough - but you do see my point - correct? I did forget about THAT version of NT and was thinking of the later versions. Anyway - NT, Windows 95 et al. handle things like networking traffic, stacks, underlay and even the basic art of how the OS communicates with the hardware using the same basic hardware interface.

No, I do not agree with your point.

Try to install a Windows 95 device driver on any version of NT - you'll find that "the OS communicates with the hardware using the same basic hardware interface" is only true in the sense that "OSX and Vista communicate with the hardware using the same basic hardware interface". Of course they use the same hardware interface - it's hardware!

The networking stacks are quite different as well. Some protocols are shared (CIFS, for example), but the code is very different.

By the way, what do you mean by the term "underlay"? I'm not familiar with that word used as a noun in operating system context, and web searches for '"operating system" underlay' didn't show any technical definition of the term.


The entire lineage of Windows owes itself to its predecessor - EVOLUTIONARY and not REVOLUTIONARY. I would argue that OS X from OS 9 was REVOLUTIONARY for Apple but EVOLUTIONARY for the user base, as it owes MUCH of itself to BSD, and Darwin and the Mach kernel are ALL VARIANTS of BSD - period...although evolutionary enhancements.

Would you not consider that the break from 16-bit DOS and Windows 3.1 to 32-bit Windows NT was a REVOLUTIONARY step for Microsoft? The entire system was replaced with a completely new design that from the very beginning ran on multiple CPU architectures (x86/Alpha/MIPS/N10 initially, then x64/Itanium/SPARC/PowerPC later).

It had a compatibility layer (NTVDM) that was to the user similar to Classic, so that existing programs would run fine. It had a personality layer with a new set of APIs (Win32) that was an extension of the old APIs (again, echoes of Carbon).

The NT operating system doesn't owe anything to any previous Microsoft OS - it started from a clean slate. (Contrast to OSX, which was a port of NextStep with a Mac-like GUI added on.)

Also note that in some ways Windows 95 is a descendant of NT - Windows 95 implemented NT's Win32 API set on a hybrid 16/32-bit system.
 
$49 for Win7 upgrade, MS says Snow Leopard a "service pack"

Oh boy, check this Cnet story. I wonder how Apple will respond...

http://news.cnet.com/8301-13860_3-10272259-56.html?tag=mncol

"That truly is a price that we have never even come close to in terms of an operating system release," Corporate Vice President Brad Brooks said. "We've still got a business to run."

Of course, even at the preorder price, Microsoft still finds itself undercut by Apple, which has said it will only charge $29 for Leopard users moving to Snow Leopard (those on older versions of the Mac OS will have to buy a full-boxed copy combining Leopard and Snow Leopard).

Brooks, however, said that comparing the two upgrades is unfair.

"Even their chief software architect called (Snow Leopard) an upgrade of Leopard," Brooks said. "The way I look at it, it's a service pack and we don't charge for service packs."

Microsoft also confirmed, as expected, that a program offering Vista PC buyers a free copy of Windows 7 will kick off on Friday.
 
Snow Leopard looks amazing and it's supposed to take up less hard drive space than OS X Leopard. I can't wait till it gets released.
 