Technical question: Why do certain patches need a reboot?

Discussion in 'macOS' started by shompa, Jun 20, 2012.

  1. shompa macrumors 6502

    Joined:
    Jul 23, 2002
    #1
    Anybody know why OS X needs to be rebooted for certain patches?
    Why, technically, does Apple choose to do this?

    Is it because of the crappy file system that is never consistent? (Run diskutil every week and see how many faults the file system accumulates. Apple needs to change file systems. I am still angry about them abandoning ZFS, which is by far the best file system there is.)
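
    If you want to automate that weekly check, here is a rough sketch of what I mean (diskutil verifyVolume is the read-only check; "/" is just an example volume):

        # Rough sketch: run a read-only consistency check on one volume.
        # "diskutil verifyVolume" only reports problems, it repairs nothing.
        import subprocess

        result = subprocess.run(
            ["diskutil", "verifyVolume", "/"],  # "/" is just an example
            capture_output=True, text=True,
        )
        print(result.stdout)
        if result.returncode != 0:
            print("Problems reported; consider diskutil repairVolume")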

    Windows needs to be rebooted since it plays with the registry + the OS locks files that are in use.

    Unix machines do not have a registry and the OS can access all files. Solaris and other Unix OSes never need to reboot, even after heavy patching.
     
  2. simsaladimbamba

    Joined:
    Nov 28, 2010
    Location:
    located
    #2
    Maybe it is because the kernel needs to be changed, thus a reboot is "necessary"?
     
  3. iVoid macrumors 65816

    Joined:
    Jan 9, 2007
    #3
    Basically, anything they can't reload into memory needs a restart. Particularly with the 10.x.x updates, large parts of the system are replaced, and the only way to get them into memory is to reboot. Anything that updates drivers that other parts of the OS use will also usually need a restart.

    Overall, I've found Macs need to be rebooted for updates less often than Windows (I support them both).
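
    You can even ask a package up front whether it will demand one; a rough sketch (this assumes installer's -query flag, and the package path is made up):

        # Sketch: ask an installer package for its declared restart policy.
        # The .pkg path is hypothetical; "-query RestartAction" is assumed.
        import subprocess

        pkg = "/tmp/ExampleUpdate.pkg"  # hypothetical package path
        answer = subprocess.run(
            ["installer", "-query", "RestartAction", "-pkg", pkg],
            capture_output=True, text=True,
        ).stdout.strip()
        # Typical values include "None" and "RequireRestart".
        print(pkg, "->", answer or "no restart info")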
     
  4. Mr. Retrofire macrumors 601

    Mr. Retrofire

    Joined:
    Mar 2, 2010
    Location:
    www.emiliana.cl/en
    #4
    Apple's update system is still based on the update mechanism introduced with Mac OS X 10.0. Apple did not change the file system, the kernel, the dynamic loader (dyld) and so on for a new update system. They also use the old zlib and bzip2 compression algorithms, instead of LZMA, for software updates and Mac App Store files.
     
  5. shompa thread starter macrumors 6502

    Joined:
    Jul 23, 2002
    #5
    Thanks all for the answers. :)

    ----------

    I wonder how HP-UX/Solaris and other Unix distributions manage to avoid reboots.
    It feels like, if Apple wanted to, they could do the same, since OS X has a BSD foundation.
    But there is far less rebooting in OS X than in Windows.

    I want zero reboots :) (I am so old that I really don't want to turn off computers. I come from the days when CPU time cost enormous amounts of money.)
     
  6. benwiggy macrumors 68020

    Joined:
    Jun 15, 2012
    #6
    Your reason for not wanting to reboot is that you remember when CPU time was expensive, so you're getting the most out of it now? What about the cost of the electricity?

    If you are running some mission critical application that needs to be on all the time, then: fine. In which case you're not going to install patches without testing that they aren't going to interfere with your work first.

    Me, I turn my computer off every night. There's no reason for it to be on, it saves money, and rebooting each morning does keep things in check, as data in memory is potentially volatile. (The first troubleshooting step is always "have you tried rebooting?")
    With Lion's Resume feature, I'm instantly back with all my apps and documents loaded.

    I don't understand the machismo some people show in announcing "336 days without booting!". It's important that the OS is capable of running for a long time if you need it to. But if you don't need to, then ... why?

    Just my two cents.:D
     
  7. larkost macrumors 6502a

    Joined:
    Oct 13, 2007
    #7
    Apple's Software Update is really a black box, so saying it hasn't changed is rather presumptuous. In fact, we know that at least the method of recording receipts changed quite a bit around 10.4, so I can say for certain that you are wrong about that.

    We also know that HFS+ has gone through a number of changes since 10.0. At a minimum we know about hot-file clustering, file compression, and hard links to directories. So again you are wrong.
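
    The compression part is even visible from userspace: compressed files carry a stat flag. A quick Python sketch (the path is just an example):

        # Sketch: detect HFS+ transparent compression via the stat flags.
        # On 10.6+ compressed files carry UF_COMPRESSED in st_flags.
        import os, stat

        path = "/usr/bin/true"  # example path; many system binaries ship compressed
        st = os.stat(path, follow_symlinks=False)
        if st.st_flags & stat.UF_COMPRESSED:
            print(path, "is stored HFS+ compressed")
        else:
            print(path, "is stored uncompressed")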

    The kernel gets updated quite frequently. So unless you mean "changed away from Mach", you are again wrong. OSes often die with the same general kernel they started with. And you want basic tools like dyld to be very, very stable; they only change when they absolutely need to, or when there is a huge win in doing so (not often). But here again we know that dyld has changed, since it supports HFS+ compressed binaries.

    And LZMA is newer, but would not necessarily be a better choice for updates, since updates are going to be mostly binaries, and LZMA is horrible (worse than no compression) on incompressible data like that. And I am not sure that you are correct about bzip2 being used in the installers. Most of the installers (outside of SIU) are cpio/pax, and the newer ones are wrapped in a xar.
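
    That part is easy to check yourself; a quick sketch listing a flat package with the bundled xar tool (the package path is made up):

        # Sketch: list the contents of a flat .pkg, which is a xar archive.
        # The package path is hypothetical; the xar tool ships with OS X.
        import subprocess

        pkg = "/tmp/ExampleUpdate.pkg"  # hypothetical flat package
        listing = subprocess.run(
            ["xar", "-tf", pkg], capture_output=True, text=True,
        ).stdout
        print(listing)  # e.g. Distribution, PackageInfo, Payload, Scripts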

    But none of this really has anything to do with why updates sometimes need reboots. The answer there is that it is really hard to patch running software dynamically. Doing so means that you have to spend a lot of time micro-testing all the possible situations, and it also puts limits on what sorts of changes you can make. For a consumer OS none of that makes any sense. So Apple does the right thing: if the kernel or kexts need to change, they call for a reboot.
     
  8. Mr. Retrofire macrumors 601

    Mr. Retrofire

    Joined:
    Mar 2, 2010
    Location:
    www.emiliana.cl/en
    #8
    Apple's software update mechanism is very well documented. XAR is open source (libxar), gzip (zlib streams in gzip format) is open source, bzip2 is open source, and you can view the sucatalog files directly, for example:
    http://swscan.apple.com/content/catalogs/others/index-lion-snowleopard-leopard.merged-1.sucatalog
    ...and so on.
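
    The catalog itself is just an XML property list, so a few lines are enough to look inside it (a sketch; it fetches the catalog URL above, and the key names are the ones I see in that file):

        # Sketch: fetch a sucatalog and parse it as the plist it is.
        # Top-level keys like "Products" are taken from inspecting the file.
        import plistlib
        from urllib.request import urlopen

        url = ("http://swscan.apple.com/content/catalogs/others/"
               "index-lion-snowleopard-leopard.merged-1.sucatalog")
        catalog = plistlib.loads(urlopen(url).read())
        print("catalog version:", catalog.get("CatalogVersion"))
        print("products listed:", len(catalog.get("Products", {})))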

    No, because this has nothing to do with reboots, which are necessary after some updates. Calm down.

    HFS compression is a file system feature, supported by the kernel and the kernel extension "AppleFSCompressionTypeZlib.kext". dyld has nothing to do with HFS compression.

    1. LZMA has built-in compression filters (optimizers) for executable code (x86 and x86_64 machine code), so LZMA can compress program code better than other algorithms.
    2. Binaries (executable code) are not compressed. What you probably mean is the file system compression in HFS+ or NTFS, which has nothing to do with the binaries (executable code) themselves. GCC, LLVM/Clang and MS Visual Studio produce uncompressed executable files.
    3. Your "mostly binaries" assumption is wrong. Just download one of the larger updates from http://apple.com/support/downloads, unpack it with Pacifist and see for yourself.

    Apple's XAR version (libxar) uses bzip2 and zlib compression (see opensource.apple.com). XAR also supports LZMA, but Apple does not use it. I tested LZMA on decompressed .pkg files and was able to reduce the size by over 30 percent compared with Apple's .pkg file (which uses the bzip2 and zlib algorithms). That means roughly 30 percent faster downloads, and LZMA decompression is also much faster than bzip2 decompression. If you do not believe it, use Google to find some benchmarks. Read this:
    http://multimedia.cx/eggs/bzip2-vs-lzma/
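
    You can reproduce the numbers with the stock compressors; a quick sketch (the input file is just an example, substitute any decompressed update payload):

        # Sketch: compare zlib, bzip2 and LZMA ratios on the same payload.
        # The input path is hypothetical; use an unpacked update payload.
        import bz2, lzma, zlib

        with open("/tmp/Payload.cpio", "rb") as f:  # hypothetical test file
            data = f.read()

        for name, packed in [("zlib", zlib.compress(data, 9)),
                             ("bzip2", bz2.compress(data, 9)),
                             ("lzma", lzma.compress(data, preset=9))]:
            print(f"{name:>5}: {len(packed) / len(data):.1%} of original size")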

    Which is the reason why you replied.

    Why?

    Are you forced to defend Apple?

    Most of the "necessary" reboots have nothing to do with the kernel (/mach_kernel). Mac OS X supports already dynamic loading and unloading of kernel extensions and libraries. Even Jaguar (Mac OS X 10.2) supported this. Some applications use this already (PGP, Virtual Box, VMware Fusion, Roxio Toast, Apples disk image mounter and so on). AFAIK, the new version of Safari (v6.0 beta) does not require a restart after the installation.

    Btw, nice troll posting.
     
  9. larkost macrumors 6502a

    Joined:
    Oct 13, 2007
    #9
    You get an awful lot wrong, and avoid quoting your own post because you contradict yourself in order to show me as "wrong". I demonstrated that all of the things you said did not change have in fact changed. There is similarly a lot wrong with your new post, but it is all summed up starkly in the section I have quoted. You don't even know what an internet troll is. A troll would not have written a reasoned response. That and the other ad hominem attack (that I must be paid to defend Apple) just show how weak your position is.

    And you are wrong about dyld not needing to be changed for HFS+ compression of binaries. It did need to be changed, because before the change it assumed that you could always directly mmap between the disk and memory representations, so virtual memory mapping was always consistent. With HFS+ compression this assumption was broken.
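
    That is the same disk-equals-memory assumption you can exercise with a plain mmap; a minimal sketch (the path is just an example):

        # Sketch: map a file straight into memory, the same assumption
        # dyld made before HFS+ compression (on-disk bytes == mapped bytes).
        import mmap

        with open("/usr/bin/true", "rb") as f:  # example binary
            m = mmap.mmap(f.fileno(), 0, prot=mmap.PROT_READ)
            print("first bytes of the mapping:", m[:4].hex())
            m.close()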

    And no, I was not talking about filesystem compression with binaries. But large binaries tend to compress poorly, as do many of the other things inside a .app bundle (plist files being the big exception).

    And finally you avoided talking about my real point: it is really hard to patch software dynamically while it is running. Unless you make really strict rules about what is allowed to change (basically ABI contracts inside your kernel), it is really hard to make sure you have consistent state to hand over. Being able to kextload and kextunload does nothing about the ongoing dependencies of other processes that have open connections to the various services those kexts provide. This same problem is why every major consumer OS (all versions of Windows, Linux, Mac OS) has always required reboots for many (but not all) OS changes.
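
    You can watch those dependencies directly: kextstat prints a reference count for every loaded kext, and anything still referenced cannot simply be unloaded. A rough sketch:

        # Sketch: list loaded kexts that other kexts still reference.
        # kextstat's second column is the reference count.
        import subprocess

        lines = subprocess.run(["kextstat"], capture_output=True,
                               text=True).stdout.splitlines()
        for line in lines[1:]:  # skip the header row
            fields = line.split()
            if len(fields) >= 6 and int(fields[1]) > 0:
                print(fields[5], "is referenced by", fields[1], "other kext(s)")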

    And I am not sure why the next version of Safari not needing a reboot proves anything in this context. WebKit (which underlies Safari) is a framework, not a kext, and they have probably bumped its major version marker, so they know that nothing should be relying on the old version at this point in the OS, and they can leave the older framework in place without even having to unload it.
     