Re the size of the update - you'll find that the announced list of fixes doesn't cover every single thing that's changed. They've probably fixed a heap of tiny things that only happen when you have a certain size of photo and do a certain sequence of things to it, but they don't list those in the release notes. It's not like Apple has a public bug tracker, Firefox-style.

It's still a buggy program though for people who use it intensively - for example if you edit (e.g. rotate) 50 photos, and it crashes on the last one, then when you relaunch all 50 photos are back to normal - very frustrating! Hopefully there's a fix for that in there somewhere.
 
aegisdesign said:
Odd. It's 41MB for me updating from 5.0.3 in Software Update. 41MB to fix an image rotation bug is taking the biscuit slightly. They really should modularise it more.

I was about to say the same thing - the 5.0.3 update was about the same size. Maybe they're just replacing iPhoto with a newer copy instead of patching it. You know, it's the same with iTunes: they just replace the working version with the new version. But iPhoto is sold as part of a paid bundle while iTunes is totally free...

Some of the pics taken with my PowerShot S50 were weirdly and randomly rotated, and some weren't. Maybe they fixed it all with this update. Let's wait and see...
 
Installed: no probs, it's running fine and *snappy* as usual with more than 1500 photos on 10.3.9. I like iPhoto. Now preparing myself for Tiger...
 
macorama said:
It's still a buggy program though for people who use it intensively - for example if you edit (e.g. rotate) 50 photos, and it crashes on the last one, then when you relaunch all 50 photos are back to normal - very frustrating! Hopefully there's a fix for that in there somewhere.

To be honest, I thought that had changed in 5.0.3. I'm fortunate in that, aside from the colourshift bug, my iPhoto has been remarkably stable (around 8000 images), but it has crashed a couple of times. In iPhoto 5 through 5.0.2, if it crashed while editing, I'd find weirdly shaped thumbnails, and many of the changes seemingly hadn't been saved. 5.0.3 crashed once last week, but the picture I'd been working on beforehand had been saved and all the thumbnails were correct.
 
I had random crashes all the time myself... what saved the day was rebuilding the iPhoto library (hold down command-option-shift while launching iPhoto). In the dialog that opens, check the first three entries to rebuild the library and the thumbnails. This usually resolves ALL my crashes...

alex_ant said:
I don't think so. In order for it to do that, something would have to be working properly in the first place. Your list was informative, but you left off what I think are the more important bugs:

5.0 crashed all the time, randomly corrupted your library, was slow as hell, and used up 800MB of RAM with a 20,000-photo library
5.0.1 crashed all the time, randomly corrupted your library, was slow as hell, and used up 800MB of RAM with a 20,000-photo library
5.0.2 crashed all the time, randomly corrupted your library, was slow as hell, and used up 800MB of RAM with a 20,000-photo library
5.0.3 crashed all the time, randomly corrupted your library, was slow as hell, and used up 800MB of RAM with a 20,000-photo library
5.0.4 ? (haven't tried it yet)
 
I'm 'a coder' and I think it is a perfectly valid question to ask. There have not been that many iPhoto 5 releases. Apple could send only the binary delta (or 'diff') between the release you have and the latest version. This would help reduce needless load on their own site and on the internet.
I find that 'coders' are more likely to make excuses for the way things are. Those with a more 'abstract' view of computers are often the ones who question strange or absurd behaviour.

Elektronkind said:
From a coder's standpoint, that's somewhat simplistic reasoning, so let me shed some light on why one little ol' change means a 41MB download in this case.

iPhoto, as an application package, has a total installed size of around 150MB. This includes many things, such as support files (international language customizations for iPhoto menus and dialogues, and similar support resource files) as well as the iPhoto program itself and any support programs.

The biggest single file in the iPhoto application package is the executable binary file itself... this is the program that is iPhoto. This is what you see running in a process listing. Its size is around 40MB, and this is also where the seemingly tiny little fix is located.

Now, with a binary file like this, you can't (in any reasonable fashion) replace little bits and pieces of it. It's more reliable to replace the whole thing. This is why you have a 41MB file waiting for you.

Please remember that the changes are rarely in direct proportion to the size of the update... not at a scale as small as this one.

/dale
 
What's going on?

alexeismertin said:
If you download 5.0.4 from Apple.com it is actually 5.0.3 when you mount the DMG.

If I go to Software Update it tells me there are no updates I need. I'm still on 5.0.2.

I downloaded 5.0.4 from the Apple site, but it won't install, telling me there is no eligible software in /Applications.

Same for GarageBand 2.0.2 (not that I use it, but I like to keep things up to date).

Anyone else getting this?
 
Mustafa said:
I downloaded 5.0.4 from the Apple site, but it won't install, telling me there is no eligible software in /Applications.

Same for GarageBand 2.0.2 (not that I use it, but I like to keep things up to date).

Have you rearranged your Applications folder into subfolders? If so, Software Update won't see your iApps. Put them back into the main Apps folder and then see what SU finds.

If you really feel the need to have your applications in subfolders, consider creating a new folder with subfolders but putting aliases of your applications in them rather than the actual apps themselves. Then put that folder into your sidebar for easy access.
 
pubwvj said:
No. It is a yawn a minute. iFind iPhoto not iNteresting. I have tried it a few times but always found it to be a dog. GraphicConverter and Photoshop are my best friends. :)

I don't know GraphicConverter, but I think iPhoto and Photoshop work extremely well together. I bring everything into iPhoto to start and do light editing there, like slight color and brightness tweaking. If it needs major levels, color, roto or other work, I open it in Photoshop (which iPhoto makes very easy to do). After I save and close the image in Photoshop, it refreshes in iPhoto.
 
Wish list for iPhoto 6

I thought iPhoto 5 got an awful lot right: really easy-to-use quick editing features, great custom books (just ordered my first!), and it runs more smoothly on my PowerBook than iPhoto 4 did.

I would like to see in iPhoto 6:
* More integrated handling of MPEG-4 clips. I love that my little Canon PowerShot can capture MPEG-4 video, but at the moment iPhoto can't do much more with the clips than hold a place for them.

Again, I thought iPhoto 5 was a home run, and iPhoto 6 won't need to fix much.

Anything else?
 
Gorbag said:
Good point, and, as I said, Photoshop runs perfectly well on both machines. So why can't iPhoto? It's not as if it does a fraction of what Photoshop does!
I assume the difference is Photoshop's excellent internal design based on years of development experience and a lot of fine tuning.

About the "diff" issue: Since apps are written in high-level languages with high-level tools such as Xcode, I would expect that changing a single line of code (adding "if RotatingPhoto then DontCrashSoOften" to iPhoto) would change almost all of the resulting binary, because code would shift position.

On the other hand, an application consists of many files, and we'd assume an update doesn't reissue files that have not changed.
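Something like this is what I mean by not reissuing unchanged files - a rough Python sketch with made-up bundle names, not how Apple's installer actually works:

Code:
import hashlib
from pathlib import Path

def digest(path: Path) -> str:
    """Checksum of one file, so two copies can be compared cheaply."""
    return hashlib.md5(path.read_bytes()).hexdigest()

def changed_files(old_bundle: Path, new_bundle: Path):
    """Yield the files in new_bundle that are new or differ from old_bundle."""
    for new_file in new_bundle.rglob("*"):
        if not new_file.is_file():
            continue
        rel = new_file.relative_to(new_bundle)
        old_file = old_bundle / rel
        if not old_file.exists() or digest(old_file) != digest(new_file):
            yield rel

# Only the files that actually differ would need to go into the update package.
for rel in changed_files(Path("iPhoto_503.app"), Path("iPhoto_504.app")):
    print(rel)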
 
40MB to fix camera rotation?

I think the entire iView media pro application had a smaller download size!

Why has OS X turned Apple into a bloatware company?
 
Doctor Q said:
About the "diff" issue: Since apps are written in high-level languages with high-level tools such as Xcode, I would expect that changing a single line of code (adding "if RotatingPhoto then DontCrashSoOften" to iPhoto) would change almost all of the resulting binary, because code would shift position.

On the other hand, an application consists of many files, and we'd assume an update doesn't reissue files that have not changed.

In this case, the 40MB refers to the main executable (a single file). Regardless, even with programs written in high-level languages, relatively little changes. Although the code might shift position, binary patchers can handle that case very efficiently. The only issues left are the actual changed instructions and any jump offsets that have changed due to the shift in code position, but those are minuscule changes compared to the entire executable.

Now, if they fixed a *lot* of bugs that required lots of code changes, then the binary patching might not be worth it, but I'm operating on the assumption of a small set of changes compared to 40MB of code.
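Just to illustrate the point - a toy example in Python (a real binary differ like bsdiff does this far better, and these byte strings are made up, not actual iPhoto code):

Code:
from difflib import SequenceMatcher

# Two pretend builds of the same executable: a couple of bytes inserted near
# the front, which shifts everything after them.
filler = bytes(range(256)) * 8                    # ~2KB standing in for unchanged code
old_build = b"\x55\x89\xe5" + filler + b"\xc3"
new_build = b"\x55\x89\xe5\x90\x90" + filler + b"\xc3"

delta = []
matcher = SequenceMatcher(None, old_build, new_build, autojunk=False)
for op, i1, i2, j1, j2 in matcher.get_opcodes():
    if op == "equal":
        delta.append(("copy", i1, i2))            # reuse bytes the user already has
    else:
        delta.append(("data", new_build[j1:j2]))  # ship only the new bytes

payload = sum(len(d[1]) for d in delta if d[0] == "data")
print(len(new_build), "bytes in the new build,", payload, "bytes of new data to ship")

A real patch format also has to encode the copy/insert instructions compactly and cope with the jump offsets that moved, but the shipped data stays tiny compared to the whole executable.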

And yes, us coders tend to make more excuses about the way things are than not. It's something that we really should change. IMHO, every time a non-coder user uses an app of mine and goes "but why does it have to be this way?", there better be a *damned* good reason, or I've failed in my job.

But then, maybe that's just me :)

--mcn
 
Doctor Q said:
About the "diff" issue: Since apps are written in high-level languages with high-level tools such as Xcode, I would expect that changing a single line of code (adding "if RotatingPhoto then DontCrashSoOften" to iPhoto) would change almost all of the resulting binary, because code would shift position.

On the other hand, an application consists of many files, and we'd assume an update doesn't reissue files that have not changed.

I think it's debatable whether Objective-C is high-level :)
Also, typical binary diff tools will understand the scenario you describe.
 
mactim said:
I think it's debatable whether Objective-C is high-level :)

Hey, I've written some *very* high-level Objective-C code in my time! Not much, granted, but some! In fact, at last check, it was almost to level 50! Just a few more baddies...

--mcn
 
A few things:

1. Please use MD5 instead of MD3
2. Do you really want to spend the QA time testing every upgrade scenario when you can just drop in one single working file? I'm sure the extra QA cost would outweigh the bandwidth savings.
3. If the old version causes data corruption, do you really want to delay a release to set up the extra testing specific to your patching strategy, when a simpler release sooner will save people's data?


Mathew Burrack said:
Not to continue the tangent, but...

There are ways around both issues, though. The installer, from a high-level perspective, doesn't know or care about what the files are that it's installing, just that they're strings of bits to store on the hard drive. From that standpoint, binary-patching x86 code vs PPC vs 68K code wouldn't make a bit (haha) of difference. Also, IIRC, the fat binaries are really packages containing multiple binary files, one set for each platform, so there's no danger of patching a single file meant for multiple architectures anyway. (Don't hold me to that, I'm working off of memory here :)

As far as reliability goes, MD3 hashes are pretty darned reliable in this case. I implemented a system at Cyan that would run MD3 hashes before and after applying a patch, and if the file to patch didn't match the initial MD3 hash, it would go and download the full version of the file for you. Ditto if, after the patch, the file's MD3 hash didn't match up. Out of literally terabytes worth of patching that system did over its lifetime, not once did we find a case where the system failed. Plus, you get the added bonus of fixing any files that might have become corrupted previously.

Granted, it's a more complicated system, but not by much. I could easily see it being integrated into Software Update as a pre-check to determine whether you can download the binary-patch version of the update or the full-blown version. (They already went a step in this direction anyway, so it could be that it's already in the works and just hasn't gotten to prime time yet.) The only remaining chance of something going wrong would be a corrupt file that somehow managed to match the original MD3 hash *and* matched the new MD3 hash after patching. The chances of that happening are, well, practically nil, and if you happen to be the one unlucky person it happens to, well, that's what reinstalling is for :)
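In rough pseudo-Python, the flow was something like this (the function names are placeholders, not the actual Cyan code, which had a lot more error handling):

Code:
import hashlib

def file_hash(path):
    """Checksum of the file's current contents."""
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest()

def update_file(path, old_hash, new_hash, apply_binary_patch, download_full_file):
    """Patch one file, falling back to a full download if anything looks off."""
    # Only patch if what's on disk is exactly the version the patch was made against.
    if file_hash(path) == old_hash:
        apply_binary_patch(path)
        # Verify the result; a garbled patch (or a garbled apply) gets thrown away.
        if file_hash(path) == new_hash:
            return "patched"
    # Anything unexpected: fetch the whole known-good file instead.
    download_full_file(path)
    assert file_hash(path) == new_hash
    return "full download"

The patcher never had to be smart about what was inside the files; the checksums did all the deciding.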

urm

*ahem*

*shoves thread back on track* nothing to see, move along...

:)

--mcn
 
MarkCollette said:
A few things:

1. Please use MD5 instead of MD3
2. Do you really want to spend the QA time testing every upgrade scenario when you can just drop in one single working file? I'm sure the extra QA cost would outweigh the bandwidth savings.
3. If the old version causes data corruption, do you really want to delay a release to set up the extra testing specific to your patching strategy, when a simpler release sooner will save people's data?

1. Gah, sorry about that, my brain had a small error trying to remember that detail. It was MD5. I was wondering why MD3 looked so strange when I typed it... Thanks for the correction :~)

2. Depends on the QA cost and the bandwidth cost, naturally. And there is really only one additional scenario to consider: binary patching against the last revision (there's no point in supporting binary patches for multiple revisions back, since the further back you go, the greater the chance that the binary diff ends up close to just replacing the entire file. Also, you achieve the greatest impact that way: those who upgrade often get small patches, which is good since they do it often, vs. those who upgrade rarely get big patches but don't care as much since they upgrade rarely :)

3. To answer your Q with a Q: do you want to create a blanket policy for all updates that might be only applicable some of the time? In your case above, I would simply generate a full patch w/o the binary diffs, since timeliness is important, and generate binary patches when we can spend the extra time for testing/release (which is hopefully the rule rather than the exception. Cyan's patcher worked this way, actually--if we didn't want to wait to generate/test binary patches, we simply didn't generate them, and the patcher utility would simply fall back to the full file option when it couldn't find the binary patch).

As for more complexity/increasing QA time, the key was to make a very robust patching utility up front--one that, if *any* problem was ever detected, would fall back on the full-file version. Thus, if for some reason the binary patches for a new release were garbled or buggy, the patcher utility would catch it when checksumming the final product (checksums were generated on the original masters). In that way, we could actually end up with buggy binary patches and never even know it, since the patcher would catch it and react appropriately to ensure the user had correct data.

The key was to deliver correct data to the user quickly, caring more about correctness than speed, but still attempting the fast method first if at all possible. That way, most users are happy about the speed (except for the rare case that has to fall back to a full-file upgrade) and all users are happy about correctness. Without binary patching, all users are happy about correctness and few or none are happy about speed. Frankly, I don't see much of a contest there :)

ymmv

--mcn
 
Mathew Burrack said:
3. To answer your Q with a Q: do you want to create a blanket policy for all updates that might be only applicable some of the time? In your case above, I would simply generate a full patch w/o the binary diffs, since timeliness is important, and generate binary patches when we can spend the extra time for testing/release (which is hopefully the rule rather than the exception. Cyan's patcher worked this way, actually--if we didn't want to wait to generate/test binary patches, we simply didn't generate them, and the patcher utility would simply fall back to the full file option when it couldn't find the binary patch).

I think we both agree that given time it might be best to put the effort in, and that in this specific case there was not a lot of time.


Mathew Burrack said:
As for more complexity/increasing QA time, the key was to make a very robust patching utility up front--one that, if *any* problem was ever detected, would fall back on the full-file version. Thus, if for some reason the binary patches for a new release were garbled or buggy, the patcher utility would catch it when checksumming the final product (checksums were generated on the original masters). In that way, we could actually end up with buggy binary patches and never even know it, since the patcher would catch it and react appropriately to ensure the user had correct data.

The key was to deliver correct data to the user quickly, caring more about correctness than speed, but still attempting the fast method first if at all possible. That way, most users are happy about the speed (except for the rare case that has to fall back to a full-file upgrade) and all users are happy about correctness. Without binary patching, all users are happy about correctness and few or none are happy about speed. Frankly, I don't see much of a contest there :)

ymmv

--mcn

I don't think you can write off the QA time by having a solid patching program. With different software versions you sometimes have non-code changes, like config files changing, documentation errors being fixed, etc., that might be missed by just looking at code changes. And even if you catch all of those, what about files that are generated by the code only on the user's machine, like preferences, caches and indexes? Those might have changed format, or become redundant.

Plus we don't know what was causing the photo corruptions. It could have been bizarre interactions with many different subsystems. Or maybe when fixing some little issue, some coder thought "I should rename this method that's called everywhere, to have a more descriptive name" :)
 
WARNING: continuing the off-topic-ness. Apologies to any offended parties in advance. :)

MarkCollette said:
I think we both agree that given time it might be best to put the effort in, and that in this specific case there was not a lot of time.
If you're talking about this iPhoto update (see, we're on topic! :), then yes, I agree. I was more addressing the patching issue in general.

MarkCollette said:
I don't think you can write off the QA time by having a solid patching program.
Write it off, no. Be less cautious is more like it, especially if you make sure to put the patcher itself through the QA wringer beforehand (thus making the worst-case QA for any patch, with respect to the patches themselves, a regression test).

MarkCollette said:
With different software versions you sometimes have non-code changes, like config files changing, documentation errors being fixed, etc., that might be missed by just looking at code changes.
All of which are just bits, and handled by the binary patching. I'm not talking about code changes, I'm talking about a system that takes a set of files that are labeled "iPhoto 5.0.3" and another set called "iPhoto 5.0.4" and figures out bit-for-bit what's different. That would catch, well, everything :)

MarkCollette said:
And even if you catch all of those, what about files that are generated by the code only on the user's machine, like preferences, caches and indexes? Those might have changed format, or become redundant.
And they won't be handled by full-file patching, anyway. In those cases, it's either a) the installer's job to upgrade or erase them, or b) the app's job to gracefully handle older formats. Either way, it's a separate issue from binary patching.

MarkCollette said:
Plus we don't know what was causing the photo corruptions. It could have been bizarre interactions with many different subsystems. Or maybe when fixing some little issue, some coder thought "I should rename this method that's called everywhere, to have a more descriptive name" :)
I *hope* they don't have debugging symbols included in those builds! In case they do, though, the name change would only be in one place (the symbol table) and thus only add a few dozen bytes to the binary patch :)

As far as the many-different-subsystems, as I said, if the binary diff becomes too large (approaching the size of the original file), you just fall back to the full-file method. No harm done, since that's not what the binary diff is meant to help with anyway.

Maybe it would help to think of it a different way: the binary patching would basically just be a more efficient file copy than the file copy that the installer/Software Update performs *already*. Everything else remains identical, it's just that you get a file copy without having to actually download the original file. It could literally be dropped into place without affecting the rest of the entire update utility, very easily. If it were any other way, then it *would* be a QA nightmare and not worth the risk. It's the self-contained, modular, narrow-field-of-focus approach to binary patching that makes it work so well in patching/updating/installing land and keeps the QA and risks to a minimum.

Just to keep this back on track wrt. iPhoto, in this case, yes, getting the patch out the door to fix corruption issues was paramount. With a binary patching method in place already, they could've just released the full version to Software Update, then generated the patches afterwards. The initial adopters that *had* to have the update immediately would have to suffer through the full download, but they'd get it ASAP, which is all they care about. Those that waited would still get the patch, but would get the smaller binary-diff patch (once they were done generating), which will make them happy since the time-to-download is a larger (relatively) issue for them.

...

ah, to hell with it. Just gimme access to the Software Update code and I'll show you. Easier to just demonstrate it and be done with it :)

--mcn
 
Mathew Burrack said:
2. Depends on the QA cost and the bandwidth cost, naturally.

Another point: Tiger assumes you have broadband. Look at Dashboard, iTunes, etc. - all of these assume you have an always-on connection. (Last I saw, Canada had ~75% broadband penetration, so it's a fairly safe assumption here.)

Once broadband is assumed, you are discussing adding a significant number of QA effort-hours to reduce a 3 minute download to a 1 minute download, which the end-user won't even notice.

So why spend the money and increase risk on a negligible gain?
 
stcanard said:
Once broadband is assumed, you are discussing adding a significant number of QA effort-hours to reduce a 3 minute download to a 1 minute download, which the end-user won't even notice.

So why spend the money and increase risk on a negligible gain?

In my experience (and *only* in my experience, so don't take it as gospel), "assuming" broadband as part of a design is a bad idea. In the best case world, yeah, you get upwards of 256Mb/sec download or higher. In the worst case, you can get down to as slow as 2Mbit/sec depending on the connection, server load, network traffic, router traffic, wireless interference on an Airport, etc.

Plus, here's the flip side: take your comparison of a 3-minute download versus a 1-minute download (which is VERY pessimistic for binary patch savings, but we'll go with it). To an individual user, yes, such a small difference is meaningless. However, you've just *tripled* the number of users that the update servers can handle. Or, if that increased capacity isn't used, you've cut the bandwidth/power/processing time/hardware wear-and-tear for distributing that update by two-thirds. That, to me, is far from a negligible gain. Is it worth the offset in QA time? Dunno, but it certainly doesn't seem so clear-cut in that case, and it's certainly worth investigating the tradeoffs at the very least. Plus, scale it up to 3 hours vs. 1 hour and I guarantee your end user will suddenly care a great deal, even though the ratio hasn't changed.
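Quick back-of-the-envelope (made-up server figure, and assuming the delta is roughly a third of the 41MB full download, per that 3:1 ratio):

Code:
FULL_MB = 41                      # full iPhoto update
DELTA_MB = 14                     # assumed binary delta, roughly a third of the full size
SERVER_MB_PER_HOUR = 10_000_000   # made-up capacity for the update servers

full_downloads = SERVER_MB_PER_HOUR // FULL_MB
delta_downloads = SERVER_MB_PER_HOUR // DELTA_MB
print(full_downloads, "full downloads/hour vs", delta_downloads, "delta downloads/hour")
# Same servers, roughly three times the users served, or two-thirds less bandwidth
# for the same users.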

(Bandwidth savings in particular would be wonderful. It doesn't matter how many people have broadband, you can *never* have enough bandwidth. Just ask anybody trying to distribute HD video over the 'net :) The less bandwidth used for software updates, the more you have to spare for other products/services you provide to customers, which again is worth lots of $$, even indirectly.)

--mcn
 
Mathew Burrack said:
In my experience (and *only* in my experience, so don't take it as gospel), "assuming" broadband as part of a design is a bad idea.

But they already have! So it's a given, not a theoretical assumption.

Mathew Burrack said:
(Bandwidth savings in particular would be wonderful. It doesn't matter how many people have broadband, you can *never* have enough bandwidth. Just ask anybody trying to distribute HD video over the 'net :) The less bandwidth used for software updates, the more you have to spare for other products/services you provide to customers, which again is worth lots of $$, even indirectly.)

--mcn

Generally I would agree, but I'm betting if you look at the daily bandwidth that goes through Apple, the 5 or 10 million iPhoto updates probably don't even register as a blip.

Again, it all comes down to cost vs. benefit. And it varies per fix, too; I see my GarageBand update was 14MB, and I know that GarageBand is a lot bigger than 14MB.

Personally, if I were the release manager, assuming it has no measurable impact on my bandwidth charges and the majority of my customer base is on broadband, I would avoid binary patching and update entire libraries. It's not worth the QA and support headache.

The same argument goes with Apple's overzealous reboot policy on updates. Yes, we know that we could just HUP the appropriate processes, and they could write the installer to do it, but it's not worth the additional QA and support hours that would go into it.

Remember QA is _always_ pressed for time. They are always making up for the slips in the development schedule. QA time is a very precious commodity that needs to be used wisely.
 
stcanard said:
But they already have! So it's a given, not a theoretical assumption.

Fair 'nuf.

stcanard said:
Personally, if I were the release manager, assuming (my emphasis) it has no measurable impact on my bandwidth charges and the majority of my customer base is on broadband, I would avoid binary patching and update entire libraries. It's not worth the QA and support headache.

Michael Abrash's Rule #1 of Programming: Assume Nothing.

To elaborate: I would rather, if I were a release manager, have the option available to me to use if the bandwidth savings are significant, and simply not use it if they are not. Yes, if the bandwidth savings are minimal, I agree, it's not worth the QA headache, but what if it *is*? I'd rather have the option in that case.

stcanard said:
Remember QA is _always_ pressed for time. They are always making up for the slips in the development schedule. QA time is a very precious commodity that needs to be used wisely.

I agree, which is why I wouldn't go w/ binary patching if I didn't think it was worth it. Again, I'd rather have that option open to me, so that if I determined at some point it *was* worth it, I could take that option.

Here's another case in point, unrelated to software updating, but the same principle applies. Most games nowadays assume a certain (rather high) level of graphics capability on their users' machines (a fact the Mac community is only too aware of). Think in particular of EverQuest 2, which had outstanding (to some people) graphics, requiring a correspondingly outstanding gfx card to handle it all. In comes Blizzard with WoW, which has decidedly simpler graphics in terms of capability, but they use it more wisely and manage to deliver a *better* graphics experience while requiring less gfx processing power to do it. (Just to keep it relevant, too, it can be argued that it takes more effort and work to do more with fewer resources, so Blizzard had to put in more work/time/effort/money to keep the resource requirements down.)

EQ2 has roughly 400,000 subscribers. WoW just topped 3.5 *million* subscribers. Clearly they're doing *something* right, and their conservative approach to hardware resources is a significant part of it. Hell, just look at how many Macs can play WoW versus how many would be (theoretically) capable of playing EQ2. Meanwhile, ask Blizzard whether bandwidth matters :)

Wish for the best-case scenario, plan for the worst. You'll generally end up with much better products in the long run, and (sooner or later) your end users will thank you for it :)

--mcn
 
jettredmont said:
Did you actually try viewing the RAW files from your D30 in iPhoto? I know it doesn't list the D30 as supported, and that the D30 was Canon's first DSLR effort (but not their first digital camera overall, nor their first RAW camera), so it's likely that the RAW formats have changed, but it seems it wouldn't hurt to try. Certainly, the more recent Canon RAW cameras are supported ...

I've tried, and I can't view them; I get an unsupported file type error. The strange thing is that I do get preview thumbnails of the RAW images in the Finder. Photoshop Elements 3 works fine with the images.

Mike
http://www.recordproduction.com
 
Mathew Burrack said:
To elaborate: I would rather, if I were a release manager, have the option available to me to use if the bandwidth savings are significant, and simply not use it if they are not. Yes, if the bandwidth savings are minimal, I agree, it's not worth the QA headache, but what if it *is*? I'd rather have the option in that case.

Actually, we completely agree then :)

It all comes down to the right tool for the right job, and keeping the options open!
 