I disagree with that statement because I think you're specifically speaking to big iron, not commodity servers.
Major systems, like mainframes, run for years and years because they're loaded with proprietary systems (like our payroll) and they cost an enormous amount to replace. Mainframes are still a huge ($$) market because they run the infrastructure of many organizations. They often run for ages, but only because organizations pay through the butt for serious support contracts to keep them alive... and that's still way less than replacement.
Our IBM support contract for the mainframe at my last place of work was $70k per year, 10 years ago. When we finally decommissioned it, I got $1500 for the mainframe when we sold it for scrap. Keeping it running was worth a lot... even well after the hardware was obsolete.
Saying that servers in general have a much longer upgrade cycle is very misleading, though.
You're insane if you're running a mission-critical x86 server outside of warranty. The only exception I'd make is our compute clusters, where we can lose a node with no real impact on the overall functionality of the system.
Heck, I only extend past the stock 3-year warranty on my x86 boxes when they're expensive enough that it's difficult to replace them within 3 years (which also coincides with them being powerful enough to still be useful 5 years out).
One of the key selling features of servers, at least at the midrange level ($25,000 and up), is scalability. Even vendors selling low-end servers mention it. Blade servers are all about what you can add to your existing system as you see fit.
I'm just comparing the server market to the consumer market - especially stuff like phones, the iPod touch and the iPod - and how different these business models are and what Apple is good at.