It looks like the tests where Parallels beat native were graphics- and display-intensive ones (like scrolling text).
But the graphics card is simulated in Parallels, while in BC, they are just like the real thing (because they actually are).
I'd suspect that the native drivers are poorer than the OS X drivers (does Apple supply the graphics drivers, or are they from the graphics manufacturer?), and that Parallels has a good implementation for their graphics.
Except that the Parallels drivers are made by Parallels, and they really aren't that great. The BC drivers are Apple's, which are actually pretty good (as per
PC World's comments that they're the fastest PCs), but you can use the manufacturers' drivers (ATI or Nvidia) as well. They should be, and are, faster.
That's one of the many reasons those of us who actually use them know this can't be correct.
Hilarious to see the VMware fanboys jumping up and down about how the methods and numbers must be wrong.
Yes, of course. We're all just VMware fanboys. Every single one of us. Except for that
one poster who had a problem with it and now uses Parallels.

Or those of us who've used them are crying foul because the results don't match our real-world experience. If they did, we wouldn't be complaining.
While it would have been nice to test the newest VMware, the authors seem to have a very good reason for not doing so: they took their time and methodically ran a huge set of tests over and over to get the most accurate results.
But it still doesn't make sense. I've been using Fusion since before it came out, and aside from a few minor issues, it's been better than Parallels, which I've also been using since the early beta days (as I said, we use it at work, and only still do so because it came out of beta sooner and corporate is slow to upgrade). Neither of them is better than BC, though, even with the older drivers. Something
must be wrong with these tests, and we're right to question them. Looking at the vast majority who do, I can't say I blame them, since my experience matches theirs.
Hopefully they re-run their tests with the newer versions, now that the initial results are out.
Looking around at other sites that also question their credibility, I doubt it would help, but they could at least run another preliminary test with the current software before trying to defend this one.
On the question of how virtualization was able to beat out the real thing, I'm surprised that very few people seemed to understand that this is, indeed, possible. AidenShaw got it:
Except he was wrong. The Parallels drivers might beat Apple's, but not the native ones. And emulation is still going to be slower than the real thing, which it is. Which, again, is one of the reasons we don't trust the tests.
In addition to that, there's also the possibility that either Parallels or OS X (or both) is able to speed up things like disk accesses by doing a better job of caching parts of the virtual hard disk file than XP did of caching the actual hard disk. If OS X preloaded a piece of data into memory and XP did not, the version running under Parallels would still benefit from the OS X caching while the Bootcamp version would not.
I suppose it could, but it doesn't.
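The caching idea in that quote is at least testable in principle. Here's a minimal Python sketch, purely illustrative and not from the thread, showing how an OS page cache can make a second read of the same file (standing in for part of a virtual-disk image) come from RAM instead of the disk; the file path and sizes are made up for the demo.

```python
import os
import time
import tempfile

# Scratch file standing in for a slice of a virtual disk image.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(os.urandom(8 * 1024 * 1024))  # 8 MiB of throwaway data
tmp.close()

def timed_read(path):
    """Read the whole file and return (elapsed seconds, data)."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        data = f.read()
    return time.perf_counter() - start, data

first_time, data1 = timed_read(tmp.name)   # may have to touch the disk
second_time, data2 = timed_read(tmp.name)  # likely served from the page cache

# Caveat: right after writing, the file is probably already cached, so both
# reads may be fast here; with a genuinely cold cache the gap is dramatic.
print(f"first: {first_time:.4f}s  second: {second_time:.4f}s")
os.unlink(tmp.name)
```

A guest OS reading its virtual-disk file would get that second-read benefit from the host "for free", which is the mechanism the quote is proposing, whether or not it shows up in practice.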
Also keep in mind that as the authors point out, this is a limited set of tests designed to simulate certain tasks. Like all benchmark tests, they perform better under certain conditions and worse under others; and they may or may not represent real world usage.
They don't, which is why we're questioning them.
