jaromski said:
Well technically they use a hybrid of the mach 3.0 kernel and the freebsd system.
Well, technically speaking, sort of.
The Mach kernel in OS X is barely recognizable from its roots at this point.
It has some BSD stuff in it. The BSD pieces would be its crypto support; filesystem support for CD9660, DEVFS, NFS, VFS, and MEMDEV (this doesn't include UFS/FFS support);
the IPv4 and IPv6 TCP/IP stack, including BPF and IPFW;
and the BSD/POSIX/SysV system calls: sysctl, fork, exec, ktrace, mmap, etc. (quick sysctl example below).
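Just to make the BSD side concrete, here's a rough sketch (my own illustration, nothing from Apple's sources) of hitting one of those BSD interfaces, sysctl, from userspace on OS X:

    /* sketch: querying the BSD layer of the kernel via sysctlbyname(3) */
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/sysctl.h>

    int main(void)
    {
        char ostype[64];
        size_t len = sizeof(ostype);

        /* kern.ostype is a standard BSD sysctl node; on OS X it reports "Darwin" */
        if (sysctlbyname("kern.ostype", ostype, &len, NULL, 0) == 0)
            printf("kern.ostype = %s\n", ostype);
        return 0;
    }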
The parts from Mach 3 handle tasks, threads, memory management, and
messaging (but I digress).
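For contrast, the Mach side is what you touch when you go through the task/VM primitives instead of the BSD calls. A minimal sketch, assuming OS X's <mach/mach.h> (again just my illustration):

    /* sketch: asking the Mach VM layer for a page directly, rather than using malloc() */
    #include <stdio.h>
    #include <mach/mach.h>

    int main(void)
    {
        vm_address_t addr = 0;
        vm_size_t size = vm_page_size;            /* one page */

        /* mach_task_self() names the current task; TRUE = place it anywhere */
        kern_return_t kr = vm_allocate(mach_task_self(), &addr, size, TRUE);
        if (kr != KERN_SUCCESS) {
            printf("vm_allocate failed: %d\n", (int)kr);
            return 1;
        }
        printf("got a page at 0x%lx\n", (unsigned long)addr);
        vm_deallocate(mach_task_self(), addr, size);
        return 0;
    }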
jaromski said:
So let's not get off into the weeds on this one, as far as I'm concerned you didn't respond to my original rebuttal.
-jaromski
Actually I did; just look up a few posts.
But if you want to get off into the weeds and have a discussion about the benefits/negatives of a monolithic kernel vs. a modular kernel, we can go that route if you wish.
Although it will become a complete thread hijack.
IMO the Linux kernel went modular to make it easier to build distros. As you have correctly stated (and I will paraphrase your thoughts and add mine), it is much easier for distro builders like Red Hat, SUSE, YDL, and Debian to provide a kernel that has support for gazillions of different devices prebuilt as modules.
While this hurts overall performance, because the modules do not load into contiguous memory, it makes life easier for the average user who doesn't know how to build his own kernels. If you have the know-how, the best method is to build in only the hardware you actually need to support, i.e. a small, tight monolithic kernel.
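To put that in concrete terms, the difference is basically this in a 2.6-era .config (the driver names here are just examples I picked, not anything specific to the boxes I'm talking about):

    # distro-style kernel: pretty much everything shipped as a module
    CONFIG_E1000=m
    CONFIG_8139TOO=m
    CONFIG_EXT3_FS=m

    # hand-built monolithic kernel: only the hardware you actually have, built in
    CONFIG_E1000=y
    CONFIG_EXT3_FS=y
    # CONFIG_8139TOO is not set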
But none of this discussion actually has anything to do with my original premise, which was that the Linux kernel is superior to Mach.
The Linux kernel has better thread management. Its scheduler is far superior. Its virtual memory manager is great; Mach's is awful. Mach fragments memory worse than anything I've ever seen. We had to rewrite both the scheduler and the memory manager for Mach to even make it halfway usable for us.
And now when we run our code we get repeatable results, instead of fluctuations in performance as high as 45% between runs. And by the way, we also gained 5% overall performance.
Now personally I think the whole exercise was a waste of time, because we got 15% better performance on our system by running a stock Linux 2.6.9 kernel without even building it as a monolithic one.
But alas, our customer wanted us to run OS X, so we wasted 4 months doing the kernel mods. (But what the heck, we got paid for it.)
And let's not forget the Linux kernel can be compiled as 64-bit, and who knows when Mach will be 64-bit. And, as I have correctly stated earlier:
Tiger is not a 64-bit OS. Its kernel resides completely in a 32-bit address space, and it will provide 64-bit memory to POSIX-compliant programs (read: command-line tools, server daemons, etc.) through a library call. Not very efficiently, I might add.
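For what it's worth, the practical upshot is that only a plain command-line binary gets the big address space. Something like this sketch of mine works when built as 64-bit (assuming Apple's gcc on a G5, with something like "gcc -arch ppc64 bigalloc.c -o bigalloc"), while a Cocoa/Carbon app gets no such thing on Tiger:

    /* sketch: a POSIX command-line tool allocating more than a 32-bit process ever could */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        size_t sz = (size_t)5 << 30;              /* 5 GB */
        char *p = malloc(sz);

        if (p == NULL) {
            printf("malloc of %lu bytes failed\n", (unsigned long)sz);
            return 1;
        }
        p[0] = 1;
        p[sz - 1] = 1;                            /* touch both ends */
        printf("got %lu bytes at %p\n", (unsigned long)sz, (void *)p);
        free(p);
        return 0;
    }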
Which brings me to the really funny part.
Our customer has OS X now, but it still doesn't serve his needs, because he needs more than 2 GB per single thread for part of his process. So he has to do part of his job on Linux and then move his problem over to OS X to run it (so he can still use his beloved OS). And what's even funnier is that we will get to rewrite the memory manager and the scheduler again once Tiger releases. (Oh well, job security.)