Frankly, if TDP (and the associated power consumption) weren't an issue, I'm sure Intel could slap one heck of a GPU on their processors (quad-core GT3 maybe?), but then how would anyone be able to cool a GPU like that, and how many people would truly take advantage?
Cool it? Easy, just chop the clock rates. There is a ton of highly localized power management inside these CPU packages now. In fact, increasing the scope and granularity of power management was one of the major design objectives for Haswell. The CPU cores, the GPU + L3 cache, and other blocks are all locally manageable.
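You can actually watch those domains independently. Here's a minimal sketch that samples the per-domain energy counters Intel exposes through RAPL, assuming a Linux box with the intel_rapl powercap driver loaded; domain names like "package-0", "core", and "uncore" vary by CPU generation, and the counters are usually root-readable only.

```python
#!/usr/bin/env python3
"""Minimal sketch: sample per-domain power via Intel RAPL (Linux powercap).

Assumes the intel_rapl powercap driver is loaded; which domains exist
and what they're named varies by CPU and kernel. Run as root.
"""
import glob
import time

def read_energy(path):
    # energy_uj is a cumulative microjoule counter (it wraps eventually;
    # ignored here for brevity)
    with open(path + "/energy_uj") as f:
        return int(f.read())

def main(interval=1.0):
    samples = {}
    for d in sorted(glob.glob("/sys/class/powercap/intel-rapl*")):
        try:
            with open(d + "/name") as f:
                name = f.read().strip()
            samples[d] = (name, read_energy(d))
        except OSError:
            continue  # some nodes (e.g. the control type dir) lack counters
    time.sleep(interval)
    for d, (name, e0) in samples.items():
        e1 = read_energy(d)
        # delta microjoules / seconds = average watts over the interval
        print(f"{name:12s} {(e1 - e0) / 1e6 / interval:6.2f} W")

if __name__ == "__main__":
    main()
```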
So for the GT3 (HD5200) options, Intel just chops the clock rates.
http://www.cpu-world.com/news_2013/..._i7-4850HQ_and_i7-4950HQ_CPUs_in_Q3_2013.html
The HD4600 models run at a base clock 0.4 GHz higher on the x86 cores, and their GPU base clock is 0.2 GHz higher. Dynamic range is lower on the HD5200 models, which likely can't run all four x86 cores full blast and crank the GPU to its highest levels at the same time. Asymmetric loads will run better (a modest GPU load with high x86 core utilization, or a mostly single-threaded x86 load with high GPU utilization, *cough* a fair number of games).
It is the same thing Intel does with their ULV products these days: the same silicon, just clock-capped to keep temperatures down. Cut the GPU clock rate first, and if more room is needed, cut the x86 core clock rate and/or the number of active x86 cores.
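To make the budget-splitting concrete, here's a toy model of carving a fixed package TDP between the x86 cores and the GPU. Every number in it (the 47 W budget, the coefficients, the roughly cubic power-versus-frequency scaling) is made up for illustration, not a real Haswell power curve, but it shows why asymmetric loads fit where symmetric full-blast ones don't.

```python
"""Toy model: splitting a fixed package TDP between x86 cores and GPU.

All constants are invented for illustration; real Haswell power curves
are not public. Power scales roughly with f * V^2, and voltage rises
with frequency, so the cost of clock speed grows super-linearly.
"""

TDP_W = 47.0   # hypothetical package budget (e.g. a 47 W mobile part)
BASE_W = 5.0   # hypothetical uncore / memory I/O floor

def core_power(ghz, n_cores):
    # assumed ~cubic scaling: P ~ f * V^2, with V roughly linear in f
    return n_cores * 3.0 * (ghz / 2.4) ** 3

def gpu_power(ghz):
    return 18.0 * (ghz / 1.2) ** 3

def fits(cpu_ghz, n_cores, gpu_ghz):
    total = BASE_W + core_power(cpu_ghz, n_cores) + gpu_power(gpu_ghz)
    return total, total <= TDP_W

for cpu_ghz, n, gpu_ghz in [(3.4, 4, 1.2),   # all-out on both
                            (3.4, 1, 1.2),   # one hot core + full GPU
                            (2.8, 4, 1.0)]:  # all cores + modest GPU
    watts, ok = fits(cpu_ghz, n, gpu_ghz)
    print(f"{n} cores @ {cpu_ghz} GHz, GPU @ {gpu_ghz} GHz -> "
          f"{watts:5.1f} W ({'fits' if ok else 'over budget'})")
```

Run it and the all-out case blows the budget while both asymmetric cases fit, which is exactly the trade-off the turbo logic is making on the fly.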
For the GT3, the lower GPU clock rate is somewhat offset by faster, lower-latency memory throughput. Combine that with tuned drivers, a bit of a big "if" for Intel, and the result can be higher overall throughput.
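As a rough back-of-envelope on the memory side: the dual-channel DDR3-1600 figure below is pure arithmetic, while the ~50 GB/s for the HD5200's on-package eDRAM is the approximate figure reported at launch, not something I've measured.

```python
"""Back-of-envelope bandwidth comparison. The DDR3 number is arithmetic;
the eDRAM number is an approximate published figure for Haswell GT3e."""

ddr3_1600_dual = 2 * 8 * 1600e6 / 1e9   # channels * bytes/transfer * MT/s
edram_reported = 50.0                   # GB/s, approximate reported figure

print(f"DDR3-1600 dual channel: {ddr3_1600_dual:.1f} GB/s")
print(f"eDRAM (reported):       {edram_reported:.1f} GB/s "
      f"(~{edram_reported / ddr3_1600_dual:.1f}x)")
```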