Flawed on multiple levels.
1. At least one of the GPUs shares RAM with the CPU (it is an integrated GPU). Two processor packages are probably not going to share RAM; the two GPUs in the current Mac Pro do not.
RAM for the ARM CPU plus RAM for the x86 CPU means higher RAM bill of materials (BOM) costs in the system. Apple wants its 30% margin. Driving up BOM costs with little to no added value isn't likely to fly with them.
2. Intel's Turbo Boost (dynamic over/under-clocking) and the "race to sleep" optimizations in OS X basically get you low power consumption on low-horsepower workloads and high consumption on heavier workloads, all with just the x86 solution. If you throw a very low workload at a modern Intel CPU, the clock rate (and power consumption) drops.
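The "race to sleep" idea can be sketched with a toy energy calculation. All of the numbers below (wattages, work rates) are made up for illustration, not measured from any real CPU; the point is only that finishing fast and then idling can beat running slowly the whole time.

```python
# Toy "race to sleep" energy model (illustrative numbers, not measured):
# finishing work quickly at a high clock and then dropping to a deep idle
# state can use less total energy than crawling along at a low clock.

def energy_joules(active_power_w, active_s, idle_power_w, idle_s):
    """Total energy over a fixed window = active energy + idle energy."""
    return active_power_w * active_s + idle_power_w * idle_s

WINDOW_S = 10.0     # fixed 10-second window of wall-clock time
WORK_UNITS = 20.0   # abstract amount of work to finish in the window

# "Race": Turbo up, burn 25 W, finish at 10 units/s, then idle at 0.5 W.
race_active = WORK_UNITS / 10.0   # 2 s of active time
race = energy_joules(25.0, race_active, 0.5, WINDOW_S - race_active)

# "Crawl": low clock, 6 W at 2 units/s -> busy for the whole window.
crawl_active = WORK_UNITS / 2.0   # 10 s of active time
crawl = energy_joules(6.0, crawl_active, 0.5, WINDOW_S - crawl_active)

print(f"race-to-sleep: {race:.1f} J, crawl: {crawl:.1f} J")
```

With these toy numbers the race strategy wins (54 J vs 60 J) even though its peak power is four times higher, which is why dynamic clocking plus deep idle states covers both ends of the workload spectrum on one chip.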
3. The other major concept missed is that these are System on a Chip (SoC) solutions. The notion of the CPU doing only the general computations is largely gone in most Macs and all of the iOS systems now. The SoC is CPU + GPU + I/O hub chipset (maybe + RAM + etc.). You only need one I/O hub chipset per PC system; the external ports (USB, display, etc.) are hooked to just one. Once you start including two SoCs, you have multiple USB, display, flash, etc. controllers, and you are driving up BOM costs for no good reason.
There are ARM (or ARM-like) controllers in a Mac, but they are basically I/O controllers with fixed tasks. An Ethernet controller with TCP/IP offload typically has a very small embedded CPU inside. Wi-Fi/Bluetooth... ditto, to handle/offload the more advanced functions. SSDs have a flash controller that typically has an embedded CPU inside.
The application-running processor, though, is typically all by itself or homogeneous.
GPGPU programming is heading to a point where the CPU and GPU share an address space and, to some extent, RAM. But that is a split between extremely parallel processing and more general processing. There are power savings in sending each computation to the processor that can do it most efficiently, but flip-flopping a mix of the same apps back and forth? Not so much.
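The "send it to the processor that does it most efficiently" idea boils down to a cost model. Here is a minimal sketch of that dispatch decision; the cost table values are hypothetical, chosen only to show that wide data-parallel work favors the GPU while branchy serial work favors the CPU.

```python
# Toy dispatch model for heterogeneous compute (all numbers hypothetical):
# send each kernel to whichever processor finishes it for the least
# energy -- GPUs win on wide data-parallel work, CPUs on serial work.

# (processor -> kind-of-work -> joules-per-item) cost table; made-up values.
COST = {
    "cpu": {"serial": 1.0, "parallel": 5.0},
    "gpu": {"serial": 8.0, "parallel": 0.5},
}

def dispatch(kind):
    """Pick the processor with the lowest energy cost for this work kind."""
    return min(COST, key=lambda proc: COST[proc][kind])

print(dispatch("parallel"))  # gpu
print(dispatch("serial"))    # cpu
```

Note the decision is per kind of workload, made once; constantly migrating the same mixed workload back and forth would add transfer overhead that this simple model deliberately ignores, which is the "flip-flopping" objection above.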
If there is some super-duper upside to coupling a low-power and a high-power core together, an Intel solution that merged some Atom cores with "big" Core i5 core(s) would probably be more effective for OS X than some cross-vendor hodgepodge. The concept already exists in the ARM world as big.LITTLE:
http://www.arm.com/products/processors/technologies/biglittleprocessing.php
Right now Intel is concentrating on getting "LITTLE" to work well all by itself. Once that is more ironed out, it wouldn't be a big leap for them to do the same thing if Apple (and a few other system vendors) wanted to place orders for 10+ million x86 big.LITTLE-like implementations. Intel is opening up to doing more custom work combining their processors if customers are willing to pay, on the more big-iron side of their business:
http://www.theregister.co.uk/2015/01/13/amazons_new_ec2_compute_instances_run_on_secret_intel_chips/
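The big.LITTLE idea described above is, at its core, a scheduler policy: light tasks stay on the efficient cluster, heavy tasks migrate to the fast one. A minimal sketch, with a made-up utilization threshold and task list:

```python
# Minimal big.LITTLE-style placement sketch (threshold and tasks made up):
# light tasks stay on the low-power "LITTLE" cluster; heavy tasks migrate
# to the high-performance "big" cluster. Both clusters run the same ISA,
# so migration is just a scheduler decision, not a cross-vendor handoff.

BIG_THRESHOLD = 0.6   # hypothetical utilization cutoff for migrating up

def place_task(utilization):
    """Return the cluster a task with this utilization (0.0-1.0) runs on."""
    return "big" if utilization >= BIG_THRESHOLD else "LITTLE"

tasks = {"mail-idle": 0.05, "video-encode": 0.95, "scroll-ui": 0.4}
for name, util in tasks.items():
    print(f"{name}: {place_task(util)} cluster")
```

The key point for the argument here is the comment about the shared ISA: the same trick works with Atom + Core cores because both speak x86, whereas an ARM + x86 pairing would need every process migration to cross an instruction-set boundary.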
4. The final huge flaw is related to the point immediately above. There is an implicit notion that Intel can't do a low-power x86 implementation; that ARM is the only path to lower power. That is deeply flawed.
It wasn't a high priority for Intel from 2000-2010, but it is now. Intel used to put its "low power" parts on process technology 1-2 generations behind the latest stuff. Now, Intel is targeting the latest process technology for low-power implementations. The ARM threat is bigger than the AMD threat, and they have adjusted accordingly. ARM isn't magically more efficient in the low-power context. There is some overhead in doing the x86-to-micro-op translation, but as the overall transistor budget goes up and the translations are dynamically cached, it is not a show-stopper issue.
Head-to-head implementation competition with Intel... Apple isn't necessarily going to win that in the Mac space. Intel has a larger group of designers who are as good as or better than Apple's.