Since I started using OS X back in the Panther days, my impression is that whenever a new version of OS X is released there are tons of threads and complaints about applications not working in the new version. Some developers then release free updates, others require their customers to pay for an upgrade, and some won't provide fixes at all. As a developer, I can relate to all three positions. The more interesting question is why stuff breaks, and how to reduce the risk of that happening to applications.

Since I've mostly developed software for embedded systems, servers, websites, and Windows, I've only just begun scratching the surface of developing for OS X, and I hope that more seasoned developers will share their experiences on the topic.

If I develop an application now, what are your recommendations for increasing the likelihood that, say, 10.7 will run it as-is? Or 10.6.x, for that matter. In your experience, what have the major pitfalls been historically? Are problems limited to software that integrates tightly with the system? Is it the use of undocumented or poorly documented APIs? Are completely unexpected, mundane things often at risk?

I realize there aren't any definite answers, and that 10.6 is just out, but any general suggestions or real-life experiences would be greatly appreciated. If the risk of problems can be reduced by following certain advice beyond Apple's guidelines, I'd rather know it sooner rather than later in the development process.