Good for you. Since, as I pointed out, Windows hasn't used DOS layering since 2001, it's hardly relevant though.
If you count from the first public beta, Windows NT was ten years old in 2001, when XP was released. The business version of Windows was pure 32-bit for ten years before the consumer version made the jump. (In fact, 64-bit Windows was already out around the time XP shipped.)
Today, of course, there's no more DOS in Windows than there is System 7 in OS X.
For this very reason: run, and stumble, when running a large variety of processor-intensive apps simultaneously. One misstep by the OS, an app, or the processor, and the whole system comes crashing down.
Actually, a modern OS will not do that - if a "misstep" occurs, the damage is contained to the smallest scope possible.
In particular, an application error should *never* take the system down - *never*. The OS will first ask the application "why did you do this" (see exception handling) and give the app a chance to fix the mistake. If the app doesn't, the app is killed - not the system.
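To make that concrete, here's a minimal POSIX-flavored sketch (my own illustration, not anything from this thread): the kernel turns a bad memory access into a signal delivered to the faulting process, the process gets a chance to handle it, and if it doesn't, only the process dies.

```c
/* Sketch of OS fault containment, assuming a POSIX system: a wild write
 * becomes SIGSEGV delivered to the faulting process. The process may
 * handle it; if not, the default action kills the process - not the OS. */
#include <setjmp.h>
#include <signal.h>
#include <stdio.h>

static sigjmp_buf recover_point;

static void on_fault(int sig)
{
    (void)sig;
    /* This is the "why did you do this" moment: we choose to recover. */
    siglongjmp(recover_point, 1);
}

int main(void)
{
    struct sigaction sa;
    sa.sa_handler = on_fault;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;
    sigaction(SIGSEGV, &sa, NULL);

    if (sigsetjmp(recover_point, 1) == 0) {
        *(volatile int *)0 = 42;   /* deliberate bad write: the MMU traps it */
        puts("never reached");
    } else {
        puts("fault handled; this process is still alive, the OS untouched");
    }
    return 0;
}
```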
Even if the OS code makes a mistake, damage control will be employed. If the mistake is within the user's logged-in session, the OS can kill the session. The system stays running, but you need to log back in.
Only if a serious mistake occurs in the kernel (or kernel data is corrupted) does "the whole system come crashing down".
Similarly for hardware errors. A disk error doesn't crash the system unless the disk I/O was done by the kernel itself. Memory and CPU errors are usually very serious, because almost by definition they mean that you can't trust what the CPU or memory are doing.
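For example, a failed disk read simply comes back to the application as an error code - roughly like this hedged C sketch (the path and the EIO outcome are my assumptions, not anything from the thread):

```c
/* Sketch, assuming POSIX: a disk error on an app's read() surfaces as -1
 * plus errno (often EIO). The app decides what to do next; the kernel and
 * the rest of the system keep running. The path below is hypothetical. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char buf[4096];
    int fd = open("/mnt/flaky-disk/data.bin", O_RDONLY);  /* hypothetical */

    if (fd < 0 || read(fd, buf, sizeof buf) < 0) {
        /* A bad sector reaches us here as an error, not as a crash. */
        fprintf(stderr, "disk read failed: %s\n", strerror(errno));
        if (fd >= 0)
            close(fd);
        return 1;
    }
    close(fd);
    return 0;
}
```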
On a primitive OS such as DOS or classic Mac OS, the lack of memory protection means that an application mistake can easily corrupt kernel memory and crash the system or other applications. That's one of the main reasons we no longer use those systems.
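Contrast that with a protected-memory OS, where every process gets a private address space. A quick sketch (my own, assuming any POSIX system) of why one app scribbling on memory can't touch another:

```c
/* Sketch: on a protected-memory OS each process has a private address
 * space, so a child trashing "its" memory cannot affect the parent -
 * unlike DOS or classic Mac OS, where all code shared one space. */
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

static int value = 1234;

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        /* Child: corrupt the variable in *its own copy* of the pages. */
        memset(&value, 0xFF, sizeof value);
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    /* Copy-on-write means the child only changed its private page. */
    printf("parent still sees %d\n", value);   /* prints 1234 */
    return 0;
}
```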
UNIX has been rock solid for over 37 years.
LOL, this is one of the absurdly hyperbolic claims you make that completely undermine your argument.
If you think that UNIX has been rock solid for 37 years, why don't you find a Solaris system from 15 years ago and suddenly pull out its power cord?
If the Solaris box is able to reboot at all, you'll be faced with hours of filesystem rebuilding (fsck), fixing corrupted files, restoring from backups...
Any appearance that UNIX is rock-solid is due to:
- The simple fact that they've been working on it for 37 years

- Most UNIX systems are servers that sit on the network and run a fixed set of applications; once they're installed and debugged, they'll be reliable.
During Longhorn's development in 2003-04, the entire thing was scrapped and started over, rebuilding upon the Windows Server 2003 codebase.
Here we go again, another absurd hyperbole.
Read about the Longhorn Reset, and you'll realize that "the entire thing" was not scrapped.
The core kernel and services were reset to Windows Server 2003 SP1 (which, of course, had four years of development and enhancement over the XP base underneath the old Vista work), and then the Vista features and enhancements were ported over to the new kernel.
If you've ever worked in multi-stream development, you'll realize that it's work to move code between branches, but it's much easier than scrapping everything and starting over. By the way, did you know that Windows XP 64-bit Edition is also based on the Win2k3sp1 codebase? Merging branches of a development tree is not that uncommon...
If you had said that "much" was scrapped, you'd have a valid and hard-to-dispute argument. Saying "the entire thing" is easily shown to be wrong, and it damages your position.
Anyway, this thread is degenerating into two sides that aren't going to convince each other of anything.