MarkCollette said:
But in any reasonably large (or cross-platform) application there tend to be separate builds for different architectures or uses, like iPhoto bundled with the OS versus iPhoto bundled with iLife, etc. So there have to be config files somewhere saying which bits get assembled for which target. So the multiple-file-binary-diff process has to tie into that somehow.
I guess I'm not understanding your point, since I don't see why it would have to. The binary diff replaces a straight file copy; all of the bookkeeping you're talking about, it seems to me, happens at a higher level regardless of how the new files get to where they need to be.
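To make that concrete, here's a toy sketch of what I mean (hypothetical names, Python purely for illustration, nothing to do with Apple's actual build tooling): the per-target manifest is the higher-level bookkeeping, and the only thing that changes is the lowest-level step that gets the new bytes onto disk.

```python
# Toy sketch (hypothetical names, not Apple's real tooling): the per-target
# manifest is the higher-level bookkeeping; whether each file arrives as a
# full copy or as a binary patch is a detail of the delivery step.

import shutil

def apply_binary_patch(dest_path: str, patch_path: str) -> None:
    # Placeholder for whatever patch tool you'd actually use (bsdiff-style, etc.).
    raise NotImplementedError

def deliver_file(entry: dict) -> None:
    if entry["kind"] == "full":
        shutil.copyfile(entry["source"], entry["dest"])    # today: straight file copy
    elif entry["kind"] == "patch":
        apply_binary_patch(entry["dest"], entry["patch"])  # tomorrow: same slot, smaller download
    else:
        raise ValueError(f"unknown delivery kind: {entry['kind']}")

def install(manifest: list) -> None:
    # The manifest (which files, which targets, which destinations) is decided
    # upstream; it neither knows nor cares how deliver_file gets the bytes there.
    for entry in manifest:
        deliver_file(entry)
```

Same manifest, same config files, same bookkeeping either way; only the delivery step differs.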
MarkCollette said:
Plus, as we said before, it's also diffing per original release. So if there are 3 targets and 4 previous releases, then that's 12 diffs.
Actually, as I stated before, it only makes sense to go back one release, so by your math that's only 3 diffs. Further, are there really entirely separate installs of iPhoto depending on whether it came with the OS or with iLife? I thought Apple was better than that (although I can certainly think of some companies that aren't *cough*). If it came with your computer, it should be exactly as if you had done a vanilla install of OS X and then installed iLife, which would result in an identical app.
MarkCollette said:
My point before was that with code it's straightforward, but all the ancillary config files that differentiate targets have to be tracked and handled, and the diff has to be complex enough to know about them too.
Again, I guess I'm just not understanding your point, because in my mind the diff *wouldn't* have to know about them, and thus the complexity doesn't exist.
MarkCollette said:
But they don't use installers for most things on the Mac. You drag and drop the bundle to where you want it, and just run it.
Yes, but we were talking about the specific case of Software Update, which by nature is meant to deliver incremental upgrades to existing software and contains its own internal install routines, which is where the patcher would go.
MarkCollette said:
If there's some special conversions that have to happen, then they're executed on the first run of the application.
They'd happen either way. No change here...
MarkCollette said:
Plus, for the user, it's easier to download version x+1 to their desktop and run it side by side with the previous version x for a while, or at least be able to fall back to version x, whereas if you patch version x to x+1 in place, there is no easy downgrade path.
Do users have that option with iPhoto 5.0.4? If so, then it's certainly not through Software Update, and thus whatever separate download path they have now could still be made available. Binary patching doesn't preclude that.
MarkCollette said:
Also, what happens if there's a power failure partway through patching?
What happens when there's a power failure when running Software Update? Actually, I can answer that one--hosed install. (P.S. Never have guests over while you're running an OS X update, since they might have a tendency to close your laptop while the update is running, thus ruining the entire OS X install. arrrrrgh) My point is that the issue is separate from binary patching and thus has no bearing one way or the other on whether binary patching should be used. (Actually, binary patching can be *safer*, since you're typically applying the patch to a copy of the file before overwriting the original, so there's a smaller window for corruption to occur.)
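To illustrate that last parenthetical, here's a minimal sketch of the patch-a-copy-then-swap idea (my own hypothetical code, not Software Update's actual internals):

```python
# Hypothetical sketch of patch-then-swap: the original file is never touched
# until a fully patched, checksum-verified copy exists, so a power failure
# mid-patch leaves the old version intact on disk.

import hashlib
import os

def apply_patch_safely(target_path: str, patched_bytes: bytes, expected_sha1: str) -> None:
    if hashlib.sha1(patched_bytes).hexdigest() != expected_sha1:
        raise ValueError("patched result failed verification; original left untouched")
    tmp_path = target_path + ".patch-tmp"
    with open(tmp_path, "wb") as f:
        f.write(patched_bytes)
        f.flush()
        os.fsync(f.fileno())           # make sure the new bytes hit the disk first
    os.replace(tmp_path, target_path)  # atomic swap: you get old or new, never half of each
```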
MarkCollette said:
With Objective-C's message passing by name, if you change the name sufficiently, then its place in the table may well change, and so every referrer still changes. It would be the same issue if one called a different method, like going from a shallow copy to a deep copy, etc.
Ah, I didn't realize that. I was still in a C++ mindset. Still, bits moving about in a file, as long as they move in large chunks, are easily handled by a binary diff, as are lots of tiny scattered changes. The only cases where binary diffs stop being efficient are when over 75% of the file has changed, or pathological ones (every other byte in the file changed).
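To show why moved chunks aren't a problem, here's a toy block-matching diff, my own simplified sketch in the spirit of rsync/bsdiff rather than any shipping tool. Chunks of the old file are indexed by hash, so content that merely shifted to a new offset is emitted as a cheap COPY instead of being retransmitted.

```python
# Toy block-matching binary diff (illustrative sketch only): index fixed-size
# blocks of the old file by hash, then walk the new file emitting COPY ops where
# a block still exists (even at a different offset) and INSERT ops for bytes
# that genuinely changed.

import hashlib

BLOCK = 4096  # block size is an arbitrary choice for this sketch

def make_diff(old: bytes, new: bytes):
    # Index every aligned block of the old file by its hash.
    index = {}
    for off in range(0, len(old), BLOCK):
        index.setdefault(hashlib.sha1(old[off:off + BLOCK]).digest(), off)

    ops, i, literal = [], 0, bytearray()
    while i < len(new):
        chunk = new[i:i + BLOCK]
        src = index.get(hashlib.sha1(chunk).digest())
        if src is not None and old[src:src + BLOCK] == chunk:
            if literal:
                ops.append(("INSERT", bytes(literal)))
                literal = bytearray()
            ops.append(("COPY", src, len(chunk)))  # chunk moved, but still reusable
            i += len(chunk)
        else:
            literal.append(new[i])                 # byte-by-byte fallback
            i += 1
    if literal:
        ops.append(("INSERT", bytes(literal)))
    return ops

def apply_diff(old: bytes, ops) -> bytes:
    out = bytearray()
    for op in ops:
        if op[0] == "COPY":
            _, src, length = op
            out += old[src:src + length]
        else:
            out += op[1]
    return bytes(out)
```

Real tools use a rolling hash so they don't rehash at every byte offset, but the principle is the same: a big chunk that moved costs a few bytes of "copy from offset X", not a retransmission of the chunk.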
MarkCollette said:
I agree, once the infrastructure is all in place, then it should just work. But QA still has to test every single scenario, whether or not it should work.
If we went by what QA *has* to do, they'd never finish; there simply isn't enough time in the world to test every code path and every combination ever. QA's job is to test the most plausible and likely scenarios to make sure they work, in an attempt to catch as many problems as possible. Run a binary patcher through the QA wringer (say, 100 million random files) and make sure it's near-bullet-proof, and the chance of the binary patches being a problem, compared to the *other* things to test, becomes so small that QA's time is better spent testing those other things. Will the chance of binary patching having a flaw ever be zero? Of course not, but that's true of *any* software. It's a risk you have to take--the trick is to get that risk as close to zero as possible, so QA can spend their time tracking down more worthwhile bugs.
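For what that kind of QA run could look like in miniature, here's a sketch of a round-trip fuzz harness, scaled way down from 100 million files and reusing the hypothetical make_diff/apply_diff from my earlier sketch:

```python
# Round-trip fuzz harness: generate random before/after pairs and verify that
# apply_diff(old, make_diff(old, new)) reproduces new exactly every time.
# make_diff/apply_diff are the hypothetical functions from the sketch above.

import random

def mutate(data: bytes, rng: random.Random) -> bytes:
    # Apply a handful of random edits: overwrite, insert, or delete a span.
    buf = bytearray(data)
    for _ in range(rng.randint(1, 8)):
        if not buf:
            break
        pos = rng.randrange(len(buf))
        n = rng.randint(1, 512)
        roll = rng.random()
        if roll < 0.4:
            buf[pos:pos + n] = rng.randbytes(n)   # overwrite a span
        elif roll < 0.7:
            buf[pos:pos] = rng.randbytes(n)       # insert new bytes
        else:
            del buf[pos:pos + n]                  # delete a span
    return bytes(buf)

def fuzz(rounds: int = 200, seed: int = 0) -> None:
    rng = random.Random(seed)
    for case in range(rounds):
        old = rng.randbytes(rng.randint(0, 64 * 1024))
        new = mutate(old, rng)
        assert apply_diff(old, make_diff(old, new)) == new, f"round-trip failed on case {case}"
    print(f"{rounds} round-trips OK")

if __name__ == "__main__":
    fuzz()
```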
If I seem overzealous on this topic, I apologize. It's because I'm a firm believer in the theory that if you're going to do something at all, do it *right*, and this seems to be an issue that, at least IMHO, a large portion of the industry still ignores for some reason.
--mcn