OK, folks, just a simple reality check. First of all, *any* chip combining CPU and graphics will have severe memory bottlenecks, so it's hard to see it ever competing with a discrete graphics card. But let's look at what you get:
Intel HD 3000: 12 execution units, operating at:
ULV: 350 MHz
LV: 500 MHz
Mainstream: 650 MHz
High-end desktop: 1100 MHz
Most websites I've seen that have looked at application performance say the high-end desktop variant of the HD 3000 is roughly on par with the GeForce 310M, with the ULV version at about a third of its speed (a quick clock-ratio sanity check on that figure is sketched below).
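As a rough sanity check on that one-third figure, the clock ratio alone gets you most of the way there, *assuming* performance scales roughly linearly with clock speed across the 12-EU variants (which ignores memory bandwidth, turbo behavior, and drivers):

```python
# Back-of-the-envelope check: if performance scales roughly linearly with
# clock speed (same 12 EUs, same architecture), the clock ratio alone
# predicts where each HD 3000 variant should land relative to the
# high-end desktop part. Clock figures are the ones quoted above.

hd3000_clocks_mhz = {
    "ULV": 350,
    "LV": 500,
    "Mainstream": 650,
    "High-end desktop": 1100,
}

reference = hd3000_clocks_mhz["High-end desktop"]

for variant, clock in hd3000_clocks_mhz.items():
    # Linear-with-clock is an assumption, not a measurement.
    print(f"{variant:18s} {clock:5d} MHz  ~{clock / reference:.2f}x of desktop part")

# ULV works out to ~0.32x, which lines up with the "about a third" figure.
```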
Now here's where it gets tricky: according to an Apple Insider thread, the "320M" integrated chipset was supposedly a version of the GT216 core created just for Apple, one that also lacked its own memory. Quoting: the 320M has 48 cores and is not to be confused with the "GeForce GT 320M". Apple rated it as 80% faster than the 9400M, but at about half the speed of the (genuine) GeForce GT 330M.
While it's hard to say just how fast the 320M really is, I've not seen a thread anywhere that says it's *slower* than the HD 3000. Reports generally range from "on par" to around 3x faster...
But honestly, if you care about GPU performance, go with a discrete part. GPUs are by definition bandwidth-intensive, and they don't do well hobbled in a CPU socket...