There are many different X86 compilers out there. The X86 architecture is really just an instruction set, so companies and developers are free to write their own X86 compilers, each deciding how best to generate code for that instruction set.
Both Intel and AMD add extensions to the architecture, so compiler vendors usually have to try to support the extensions from both companies (at least, if they want the software to run as efficiently as possible on either an Intel or an AMD processor).
The problem is that, as far as I know, when AMD adds extensions, it has to offer them to Intel under the terms of their X86 cross-licensing agreement, but I'm not certain Intel is required to do the same. Either way, the compiler then has to be able to handle those extensions.
What Intel is partly accused of doing is taking their X86 compiler, which (if you're doing fair competitive testing, after all) ought to be optimized for both Intel and AMD processors, and tuning it so that performance appeared greater on their own products in a "comparative testing environment".
Someone such as cmaier (who has far more experience in this field) may want to correct me on this, but I believe it's perfectly legal to ship a compiler that's optimized to take advantage of any new extensions added to the instruction set. So, for example, let's say Intel had just added SSE4 support, and AMD hadn't yet implemented it. Intel is perfectly justified in using an SSE4-optimized compiler and showing the resulting performance gains.
My guess as to what Intel actually did (and I haven't read the full complaint, so this is only from what I've seen on the surface) is that they modified the compiler so it couldn't take full advantage of certain features of AMD's X86 line. Thus, Intel was saying "Hey, all things being equal as much as possible, our system performs better!", when in reality they had crippled AMD X86 support.
If that is the case, it reminds me somewhat of what nVidia pulled during the GeForce FX fiasco, where its performance in game-based benchmarks was being trounced by the Radeon 9*** series, yet in 3DMark the FX had a surprisingly high score. It was later revealed that nVidia had optimized their drivers to produce an artificially high 3DMark score, so as to lessen the negative press they were getting amongst review sites and enthusiasts.