As for Linux, frequently there are no 'standard' implementations for a particular technology. It's a community, with lots of different ideas on how to implement things. Adobe, pick an H.264 decoder you like and run with it, or write your own friggin' decoder. What the heck, you want someone else to do all your work for you?
They already have to provide their own H.264 decoder for Linux and OS X. That's actually the core of the problem we're talking about here - the fact that the H.264 decoder has to be written as a software process instead of using the hardware-accelerated H.264 decoder that may or may not be physically present, depending on the capabilities of the video chipset.
In Linux (and OS X, and Windows, for that matter), the video chipset itself is a privileged device. Normal user code is not permitted to talk to it directly - the virtual memory subsystem of any modern operating system explicitly prohibits it.
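To make that concrete on the Linux side: even just asking the card which kernel driver it's running has to go through a kernel-owned device node and an ioctl. A minimal sketch, assuming libdrm is installed and the card shows up as /dev/dri/card0 (the path varies by system):

    /* Build with: gcc probe_drm.c -o probe_drm $(pkg-config --cflags --libs libdrm) */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <xf86drm.h>

    int main(void)
    {
        /* The kernel owns this device node; there is no way to poke the
         * GPU's registers from user space directly. */
        int fd = open("/dev/dri/card0", O_RDWR);
        if (fd < 0) {
            perror("open /dev/dri/card0");
            return 1;
        }

        /* drmGetVersion() is an ioctl under the hood - one of the entry
         * points the kernel chooses to expose. */
        drmVersionPtr v = drmGetVersion(fd);
        if (v) {
            printf("kernel driver: %s (%d.%d.%d)\n", v->name,
                   v->version_major, v->version_minor, v->version_patchlevel);
            drmFreeVersion(v);
        }

        close(fd);
        return 0;
    }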
Instead, software has to go through an API exposed by the kernel. Windows' published API happens to include a single generic entry point - DXVA, DirectX Video Acceleration - that is applicable no matter what particular video card you're using, and that explicitly provides access to the dedicated H.264 hardware decoder if your card has one.
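Here's a rough sketch of what asking that question looks like through DXVA2 - assuming the Windows SDK headers, a throwaway Direct3D9 device, and none of the error handling a real player would need:

    /* Build (MSVC): cl probe_dxva.c d3d9.lib dxva2.lib ole32.lib user32.lib */
    #define COBJMACROS
    #include <windows.h>
    #include <initguid.h>
    #include <d3d9.h>
    #include <dxva2api.h>
    #include <stdio.h>

    int main(void)
    {
        IDirect3D9 *d3d = Direct3DCreate9(D3D_SDK_VERSION);
        if (!d3d) return 1;

        /* A throwaway device; the desktop window stands in for a real one. */
        D3DPRESENT_PARAMETERS pp = {0};
        pp.Windowed = TRUE;
        pp.SwapEffect = D3DSWAPEFFECT_DISCARD;
        pp.BackBufferFormat = D3DFMT_UNKNOWN;
        pp.hDeviceWindow = GetDesktopWindow();

        IDirect3DDevice9 *dev = NULL;
        if (FAILED(IDirect3D9_CreateDevice(d3d, D3DADAPTER_DEFAULT,
                                           D3DDEVTYPE_HAL, pp.hDeviceWindow,
                                           D3DCREATE_SOFTWARE_VERTEXPROCESSING,
                                           &pp, &dev)))
            return 1;

        /* One generic, vendor-neutral interface: ask the decoder service
         * which hardware decoder profiles this card exposes. */
        IDirectXVideoDecoderService *svc = NULL;
        if (FAILED(DXVA2CreateVideoService(dev, &IID_IDirectXVideoDecoderService,
                                           (void **)&svc)))
            return 1;

        UINT count = 0;
        GUID *guids = NULL;
        if (SUCCEEDED(IDirectXVideoDecoderService_GetDecoderDeviceGuids(
                          svc, &count, &guids))) {
            for (UINT i = 0; i < count; i++)
                if (IsEqualGUID(&guids[i], &DXVA2_ModeH264_VLD_NoFGT))
                    printf("Hardware H.264 decoding is available.\n");
            CoTaskMemFree(guids);
        }

        IDirectXVideoDecoderService_Release(svc);
        IDirect3DDevice9_Release(dev);
        IDirect3D9_Release(d3d);
        return 0;
    }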
On the other hand, if the published API for your OS doesn't include a standard entry point for a particular feature of the video hardware, then software has no choice but to do without that feature.
In Linux, for example, even if the kernel module for your particular video card does provide an entry point for H.264 acceleration, there are likely to be different entry points, with different calling conventions, for every different card that is introduced. For any software developer to make practical use of such features, all the driver vendors would have to agree on a single H.264 wrapper API that exposes the same entry points and the same calling conventions for every video card on the market.
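That single wrapper API is exactly what efforts like VA-API (and NVIDIA's VDPAU) aim to be, but no such wrapper is implemented by every vendor's driver. Where one does exist, probing for H.264 decode support is the same few calls whichever card is underneath - a sketch assuming libva and an X11 display:

    /* Build with: gcc probe_vaapi.c -o probe_vaapi \
     *   $(pkg-config --cflags --libs libva libva-x11 x11) */
    #include <stdio.h>
    #include <stdlib.h>
    #include <va/va.h>
    #include <va/va_x11.h>
    #include <X11/Xlib.h>

    int main(void)
    {
        Display *x11 = XOpenDisplay(NULL);
        if (!x11) {
            fprintf(stderr, "no X display\n");
            return 1;
        }

        /* Same entry points, same calling convention, regardless of which
         * vendor's backend driver ends up servicing the calls. */
        VADisplay va = vaGetDisplay(x11);
        int major = 0, minor = 0;
        if (vaInitialize(va, &major, &minor) != VA_STATUS_SUCCESS) {
            fprintf(stderr, "no usable VA-API driver for this card\n");
            return 1;
        }

        int n = vaMaxNumProfiles(va);
        VAProfile *profiles = malloc(n * sizeof *profiles);
        if (vaQueryConfigProfiles(va, profiles, &n) == VA_STATUS_SUCCESS) {
            for (int i = 0; i < n; i++)
                if (profiles[i] == VAProfileH264High)
                    printf("driver advertises H.264 High profile decode\n");
        }

        free(profiles);
        vaTerminate(va);
        XCloseDisplay(x11);
        return 0;
    }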
In the absence of such an agreement among vendors, software developers face a choice: the daunting task of maintaining custom versions of their code for the unique H.264 decoding circuitry of each different video card on the market, or the much more palatable option of decoding H.264 in software instead of hardware. The latter is exactly what the Flash player does.
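Flash ships its own internal decoder, so the following is only an illustration of the software route, not what Flash actually does: with FFmpeg's libavcodec, opening a pure-software H.264 decoder that runs entirely on the CPU looks roughly like this:

    /* Build with: gcc sw_decode.c -o sw_decode $(pkg-config --cflags --libs libavcodec) */
    #include <stdio.h>
    #include <libavcodec/avcodec.h>

    int main(void)
    {
        /* A plain software H.264 decoder - no cooperation needed from the
         * video driver, which is exactly why it works everywhere. */
        const AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_H264);
        if (!codec) {
            fprintf(stderr, "no H.264 decoder compiled into libavcodec\n");
            return 1;
        }

        AVCodecContext *ctx = avcodec_alloc_context3(codec);
        if (!ctx || avcodec_open2(ctx, codec, NULL) < 0) {
            fprintf(stderr, "failed to open decoder\n");
            return 1;
        }
        printf("opened software decoder: %s\n", codec->long_name);

        /* Real use: feed compressed AVPackets with avcodec_send_packet()
         * and pull decoded AVFrames out with avcodec_receive_frame(). */
        avcodec_free_context(&ctx);
        return 0;
    }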
In OS X, starting with 10.6 (users on earlier releases would be out of luck), OpenCL would be a good starting point for an alternative fall-back: it would still mean writing a software H.264 decoder, but parts of the work could execute on the GPU rather than the CPU. Likely less optimal than driving the dedicated H.264 decoding hardware directly, but probably still better than doing it all on the CPU.
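A minimal sketch of that starting point, using Apple's OpenCL framework - the kernel below is a trivial stand-in for a data-parallel decoder stage (say, the inverse transform across macroblocks), not a real decoder:

    /* Build on OS X 10.6+: gcc probe_opencl.c -o probe_opencl -framework OpenCL */
    #include <stdio.h>
    #include <OpenCL/opencl.h>

    /* Toy kernel standing in for one data-parallel stage of a decoder. */
    static const char *src =
        "__kernel void stage(__global float *px) {\n"
        "    size_t i = get_global_id(0);\n"
        "    px[i] *= 0.5f;\n"
        "}\n";

    int main(void)
    {
        cl_device_id gpu;
        if (clGetDeviceIDs(NULL, CL_DEVICE_TYPE_GPU, 1, &gpu, NULL)
                != CL_SUCCESS) {
            fprintf(stderr, "no OpenCL-capable GPU; stay on the CPU\n");
            return 1;
        }

        cl_context ctx = clCreateContext(NULL, 1, &gpu, NULL, NULL, NULL);
        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
        if (clBuildProgram(prog, 1, &gpu, NULL, NULL, NULL) == CL_SUCCESS)
            printf("decoder stage compiled for the GPU\n");

        clReleaseProgram(prog);
        clReleaseContext(ctx);
        return 0;
    }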