The Mach Microkernel

Discussion in 'macOS' started by MacVault, Apr 24, 2006.

  1. MacVault macrumors 65816

    Jun 10, 2002
    Planet Earth
    I saw an article and discussion and I'm wondering if Mach really is that bad, or that much worse than the Windows kernel, and if so, how easy it will be for Apple to fix. And will they indeed fix it, or just make us live with its shortcomings? Is Mach really keeping Apple out of the enterprise market, or does Apple just not care about the enterprise anyway? What do you all think?
  2. Kingsly macrumors 68040


    Hmmm, interesting. I would love to see a new, faster kernel. But unless it's like "Omigoditssofreekingfast" then I don't really care.
  3. bousozoku Moderator emeritus

    Jun 25, 2002
    Gone but not forgotten.
    Mach, in itself, is good and compact, as a microkernel should be. It passes messages efficiently and the version included with Mac OS X has extensions/drivers which can be loaded and unloaded on demand.
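
    To illustrate the message-passing idea, here is a toy Python sketch -- not the real Mach API (there the primitives are ports and mach_msg); tasks are modelled as threads and ports as thread-safe queues:

```python
import queue
import threading

# Toy illustration of microkernel-style IPC: tasks share no state and
# talk only by sending messages to ports (modelled here as queues).

def echo_server(port: queue.Queue) -> None:
    """A 'server task' that waits for request messages on its port."""
    while True:
        msg = port.get()
        if msg is None:                      # shutdown message
            return
        reply_port, payload = msg
        reply_port.put(payload.upper())      # perform the service, reply

server_port: queue.Queue = queue.Queue()
threading.Thread(target=echo_server, args=(server_port,), daemon=True).start()

# The 'client task' sends a message carrying a reply port, then blocks
# waiting for the answer -- the same request/reply shape Mach IPC uses.
reply_port: queue.Queue = queue.Queue()
server_port.put((reply_port, "hello"))
result = reply_port.get()
server_port.put(None)                        # tell the server to exit
```

    The point is only the shape of the interaction: services live behind ports, so a misbehaving service can fail without taking the rest of the system with it.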

    It is in no way worse than the Windows kernel. There really isn't much of a reason to discuss that, if any.

    The Linux monolithic kernel is another matter. Both Linux and Mach have good and bad attributes. Linux does have better performance, but then, I'm sure that FreeBSD does, too.

    A lot of enterprises are going to keep Apple out because of anti-Apple issues, not whether the kernel is fast or not. Some have even accepted Linux. The truth is that Mac OS X Server requires less maintenance and is easier to use than most, so it requires fewer people.

    Windows servers are falsely inexpensive. They're cheap to buy and install--even the expensive server models--but they end up costing in the maintenance and reliability departments. I'd say that they're also a pain to use, but UNIX management utilities are generally more of a pain. SMIT on AIX comes to mind as one of the most unfriendly "user-friendly" utilities ever.

    I suppose Tevanian did leave Apple over Mach but it could be that he didn't like being a manager instead of being busy in the code.

    I hope that the kernel is much different for 10.5 but it has to be bulletproof, not just fast.
  4. dr_lha macrumors 68000

    Oct 8, 2003
    bousozoku hits on a good point. Most IT departments I know are either full of MCSE idiots who think no problem can be solved without throwing $$$ at Microsoft, or are a bunch of Linux geeks. The middle ground here would be occupied by Apple, but it's fairly small.

    Apple doesn't really have a reputation as a server OS, despite the fact that it can do the job exceedingly well. The tools built into OS X Server are most admins' wet dream, but unfortunately they just don't know about them.

    Personally I don't think we'll see Mach go until at least 10.6. 10.5 will be the kernel where they mainly clean up the Intel side of things and probably introduce x86-64 support.
  5. MacVault thread starter macrumors 65816

    Jun 10, 2002
    Planet Earth
    But just how "easy" or "difficult" is it to throw in a new kernel? Will it break lots of stuff in the system?
  6. dr_lha macrumors 68000

    Oct 8, 2003
    Difficult, definitely. You certainly can't just throw one in (the Linux kernel, for example), as the drivers and binary formats are incompatible.
  7. laidbackliam macrumors 6502

    Feb 1, 2006
    if they put a new kernel in, would that mean YET ANOTHER transition?
    or would it be noticeable to the common OS X user?
  8. bousozoku Moderator emeritus

    Jun 25, 2002
    Gone but not forgotten.
    There wouldn't be anything for you to do, except load a software update.
  9. gekko513 macrumors 603


    Oct 16, 2003
    Even though OS X and FreeBSD don't use the same kernel, there is a connection between the two, and they have suffered from similar performance problems. The FreeBSD kernel has been going through a thorough redesign lately, in particular with regard to its system resource locks, which have become much finer grained; this can improve performance greatly when several system resources are accessed within the same period of time. It will be interesting to see if Leopard brings similar design changes and the accompanying performance improvements.
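
    The locking change described above can be sketched with plain Python threads (an illustration of the idea, not FreeBSD code): give each resource its own lock instead of one giant lock, so threads touching different resources never wait on each other.

```python
import threading

# Fine-grained locking: one lock per resource instead of a single
# global lock. Threads working on different resources run in parallel;
# only threads contending for the *same* resource serialize.

class Resource:
    def __init__(self) -> None:
        self.lock = threading.Lock()
        self.value = 0

resources = [Resource() for _ in range(4)]

def worker(res: Resource, iterations: int) -> None:
    for _ in range(iterations):
        with res.lock:            # lock only the resource being touched
            res.value += 1

threads = [threading.Thread(target=worker, args=(r, 10000))
           for r in resources]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

    With a single global lock the four workers would run one at a time; with per-resource locks they proceed concurrently, which is exactly the win finer-grained kernel locking buys on multi-CPU machines.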
  10. ChrisA macrumors G4

    Jan 5, 2006
    Redondo Beach, California
    Mac OS has two kernels, in a way. Neither of them is "bad". On a typical BSD system the kernel is designed very much like Linux, Solaris and other Unixes, and the kernel directly accesses the bare hardware. But on Mac OS X the BSD kernel uses services provided by Mach. This de-coupling of the kernel from the bare hardware has allowed the CPU to be changed twice now. Back when Mac OS X was called NeXTSTEP it ran on the M68K, and then Apple ported it over to PPC and then to x86. Mach handles stuff like "fat binaries" (AKA "Universal") too. I think Mach also sets up Apple to take maximum advantage of the new multi-core CPUs. It will not be long before 8-core and 16-core machines are common. Sun Microsystems is shipping low-cost 8-core servers today; Intel will follow, I'm sure. Apple does pay a slight speed penalty for using a large monolithic kernel on top of a microkernel, but I think they buy something with it too.

    Why not use Mac OS X for the "Enterprise"? Easy answer.... Mac OS X only runs on Apple hardware, and Apple does not make enterprise-class hardware. What's the biggest Apple box? A quad PowerMac? The thing only has four cores and one (count them... "one") power supply, and just try and rack-mount the thing.

    Also Mac OS X is lacking features that (say) Solaris has that are needed in an enterprise setup:
    1) Lights-out management - to log into a machine with a dead hard drive and diagnose it, some kind of ROM-based system is needed. No one wants to drive to the office to re-boot a computer; you should be able to do that remotely.
    2) "Work around" failed hardware. A dead core should not bring down the whole computer. Neither should a RAM failure nor a smoked disk drive.
    3) The OS should "scale" to 8, 16 or 64 CPUs. Large DBMS systems can actually use this kind of power, and no, you can't simply use racks of computers. At least not easily.
    4) A service organization that can be on-site within a given number of hours. No one would build a mission-critical system around a computer or OS that could not get an Apple tech on-site in 8 hours to any office location worldwide. Apple simply lacks that kind of service organization. IBM, Sun and others do have this.

    Apple does have equipment that would work fine in an office of maybe up to 100 people.
  11. Lollypop macrumors 6502a


    Sep 13, 2004
    Johannesburg, South Africa

    I agree, Apple really doesn't have a true enterprise solution at this stage. Until recently I had never really worked with systems like ChrisA mentions, but I can tell you, they rock! Apple should try the small enterprises first, entice them, and if that works put in the R&D for a real enterprise solution.

    As for the Mach kernel, if Apple wants to scale it they will have to have kick-ass networking in the kernel itself, and I believe they took that step with Tiger. I've never liked the monolithic design; yes it's fast, but if something goes wrong the entire thing can go down. Servers need constant uptime and tend to have great drivers, so a monolithic kernel can work well there, but for the consumer who frequently changes hardware and makes more changes, driver quality is a bit lower. I'm always reminded of BeOS (a microkernel): when it had experimental support for my network card it would crash the network service, but the entire system would go on unaffected. I don't really think that would have been the case with Linux.
  12. gman71882 macrumors 6502


    Jan 12, 2005
    Houston, Tx
    Along this subject, I found this great article yesterday on a similar subject:

    PBS Article

    To paraphrase a bit:
    It seems that under the agreement Apple signed with Microsoft, which ran from 1997 to 2002, Apple got some legal rights to the Windows API.
    XP was released in October 2001: 10 months before the agreement expired.

    Think of the implications. A souped-up OS X kernel with native Windows API support and the prospect of mixing and matching Windows and Mac applications would be, for many users, the best of both worlds. There would be no copy of Windows XP to buy, no large overhead of emulation or compatibility middleware, no chance for Microsoft to accidentally screw things up, substantially better security, and no need to even take a chance on Windows Vista.

    It's a bit long, but just read the whole thing!!!
  13. matticus008 macrumors 68040


    Jan 16, 2005
    Bay Area, CA
    Don't forget the Xserve line. It is still lacking some features, seeing as Apple can't support a tremendously diverse hardware range, but as far as reliability and rackmount-ability go, there are viable Apple options.

    You can. Remote management options do indeed exist.
    Drive failures and RAM failures are handled gracefully by OS X Server. I can't speak to the loss of a CPU/core, as I'm somewhat rusty on the subject.
    If you mean clustering, then yes, OS X has some work to do. So does Windows.
    Bingo. This is Apple's biggest problem in making headway into the enterprise sector. IT staffs aren't up to speed on Apple hardware or OS X, and Apple doesn't provide the kind of service that companies need. They do provide on-site support, but IBM really does this well (Dell isn't too terrible at it, either). With this roadblock out of the way, Apple could develop an enterprise presence, which would then allow them to invest resources in fixing other deficiencies.
