According to some articles, it could extend to desktop chipsets as well, although the differences in cooling and power cycling between desktops and laptops (laptops are turned on and off far more frequently) could make the desktop chips fail much later - i.e. after most warranty periods - making the problem much less of an issue (for NVidia).
But it's all rumour and speculation for now. The only way we'll know for sure is if NVidia fesses up, and I doubt they'll give us the whole truth given the potential scale of such a problem. What can they do? You can't recall millions of laptops and cards to have their GPUs replaced, at least not if you want to stay in business. Crank up the fans to compensate? The reputation hit alone is going to send people and OEMs to ATI in droves. What about people who haven't bought extended warranties? And so on. The sooner they lay out what they're going to do the better, although judging by their reaction so far they seem determined to band-aid the problem, leave those whose GPUs fail outside warranty out in the cold, and ride out the press - though I don't see them getting away with it.
However, if the 'desktops are less likely to fail thanks to better cooling and more stable power cycling' theory holds, the Pro occupies borderline ground. Running at stock in ~20°C ambient, the 8800GT in a Pro doing general-purpose OS X work runs slightly hotter than, say, the hotter card of an SLI pair in a decently cooled gaming machine under low-to-mid-level gaming loads, especially if you have more than one drive bay occupied. If it's part of a dual-card setup, as in one of my home Pros, and ambient reaches ~25°C, it sustains some pretty darned high (and decidedly laptop-like) temperatures.
If any desktop 'PC' is likely to exhibit heat-accelerated GPU failure based on what's been speculated about NVidia's problem, it will almost certainly be the Pro, alongside badly cooled DIY builds and the like.