I hear what y'all are saying. But, respectfully, these comments reveal a lack of understanding of what goes into large-scale backup and recovery systems. The problem is: you test your solution, and it works. You do spot-checks, and they come up okay. But the fact is, unless you have a few supercomputers at your disposal, you can't guarantee the integrity of every backup. And no matter which of the big vendors you're working with (it sounds like Apple is working with Sun for the server farm, so it's most likely using Sun's own backup solutions, or EMC, I would guess), there will sometimes be failures. It happens. It sucks, but there it is. Until people are perfect, software and hardware won't be either.
Now, the stand-in for perfection is redundancy. But to get even 99% reliability through redundancy (even banks don't have 100%), the costs climb steeply, and fast. It can't possibly be done for a $99/year service, which is why Apple never makes any such guarantee. This isn't to excuse the screwup. Whoever's responsible should be held accountable, and users should be fairly compensated with refunds or what have you... but words like "inexcusable" are off the mark.
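To make the redundancy point concrete, here's a toy back-of-envelope model (my own illustration, not any vendor's actual math). It assumes replicas fail independently with some made-up annual failure probability, which is a big assumption in practice; correlated failures are exactly what bites you:

```python
# Toy model: n independent replicas, each failing in a year with
# probability p. You lose the data only if ALL n copies fail.
# The numbers here (p = 0.05) are illustrative assumptions.

def combined_failure_prob(p: float, n: int) -> float:
    """Probability that every one of n independent replicas fails."""
    return p ** n

for n in range(1, 5):
    print(f"{n} copies -> failure probability {combined_failure_prob(0.05, n):.8f}")
```

Note the asymmetry: the failure probability drops fast with each copy, but every extra "nine" of reliability means buying, powering, and operating yet another full replica of the data. That's where the bill piles up, and why a $99/year consumer service sits nowhere near bank-grade redundancy.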
That's my two cents as someone who deals with this issue every day, and who has been through the process of madly trying to recover thousands of mailboxes after realizing the backups weren't working precisely as designed.