Rosetta was included there during the transition, but when you take machine code for one architecture and run it on a virtual x86 machine you will probably at least halve the performance (maybe more). PowerPC applications ran considerably slower on the newer hardware for almost a year. Now take that same concept and apply it to a low-power CPU and you will have a machine with likely less than half the performance of the latest iPads. It would take a year for some applications to sort things out with the "fat binary", as you said.
Yeah, but I wasn't talking about emulation, just the fact that Win RT was different in more ways than the CPU it ran on. OS X on two architectures would be the same OS, with the same frameworks and so on.
Then you have the "fat binary", which is basically two copies of the same application, one for each platform - which makes applications... fat. It was only meant for the transition.
A fat binary is a feature of the Mach-O file format: http://en.wikipedia.org/wiki/Fat_binary
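On disk a fat binary is just a small header listing the slices, each of which is an ordinary single-architecture Mach-O image. Purely as an illustration (a minimal sketch, macOS-only, using the structures from <mach-o/fat.h>; the file name and error handling are placeholders), this lists the slices in a given binary:

    /* fat_info.c - print the architecture slices in a Mach-O fat binary.
     * The fat_header/fat_arch fields are stored big-endian on disk,
     * hence the ntohl() calls. */
    #include <stdio.h>
    #include <stdint.h>
    #include <arpa/inet.h>     /* ntohl */
    #include <mach/machine.h>  /* cpu_type_t */
    #include <mach-o/fat.h>    /* struct fat_header, struct fat_arch, FAT_MAGIC */

    int main(int argc, char *argv[]) {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <binary>\n", argv[0]);
            return 1;
        }
        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 1; }

        struct fat_header hdr;
        if (fread(&hdr, sizeof hdr, 1, f) != 1) { perror("fread"); return 1; }

        if (ntohl(hdr.magic) != FAT_MAGIC) {
            printf("%s is a thin (single-architecture) binary\n", argv[1]);
            fclose(f);
            return 0;
        }

        uint32_t n = ntohl(hdr.nfat_arch);
        printf("%s contains %u slices:\n", argv[1], n);
        for (uint32_t i = 0; i < n; i++) {
            struct fat_arch arch;
            if (fread(&arch, sizeof arch, 1, f) != 1) { perror("fread"); return 1; }
            /* Each slice is a complete Mach-O image for one cputype, stored
             * at the given offset - hence "two copies of the same
             * application" and the size penalty. */
            printf("  cputype=%d offset=%u size=%u\n",
                   (int)ntohl(arch.cputype), ntohl(arch.offset), ntohl(arch.size));
        }
        fclose(f);
        return 0;
    }

Running it against a universal binary from the PowerPC-to-Intel transition would show one ppc slice and one i386 slice - exactly the doubling being complained about above.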
If they were to do it again, they would probably move to just compiling down to LLVM bitcode... The effect is that it would significantly increase the testing budget (in one case I was personally involved in at a large shop, it was two million dollars in additional costs to certify each one - and that was just a difference in operating system versions, not even architectures).
Then they would also need a JIT compiler for that LLVM code - I mean, why not just compile it to multiple architectures and solve the same problem?
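To show what "a JIT compiler for that LLVM code" would actually involve, here is a minimal sketch against the LLVM-C MCJIT API (just an illustration, assuming a stock LLVM install with the llvm-c headers; the file name, function name and build command are mine, not anything Apple ships). It builds a trivial add() function as IR in memory and JIT-compiles it for whatever CPU the process happens to be running on:

    /* jit_demo.c - build "i32 add(i32, i32)" as LLVM IR and JIT it for the
     * host CPU.  Built with something like:
     *   clang jit_demo.c $(llvm-config --cflags --ldflags \
     *       --libs core executionengine mcjit native --system-libs) -o jit_demo
     */
    #include <stdio.h>
    #include <stdint.h>
    #include <llvm-c/Core.h>
    #include <llvm-c/ExecutionEngine.h>
    #include <llvm-c/Target.h>

    int main(void) {
        LLVMLinkInMCJIT();
        LLVMInitializeNativeTarget();
        LLVMInitializeNativeAsmPrinter();

        /* Build a module containing: i32 add(i32 a, i32 b) { return a + b; } */
        LLVMModuleRef mod = LLVMModuleCreateWithName("demo");
        LLVMTypeRef params[] = { LLVMInt32Type(), LLVMInt32Type() };
        LLVMValueRef fn = LLVMAddFunction(mod, "add",
            LLVMFunctionType(LLVMInt32Type(), params, 2, 0));
        LLVMBuilderRef b = LLVMCreateBuilder();
        LLVMPositionBuilderAtEnd(b, LLVMAppendBasicBlock(fn, "entry"));
        LLVMBuildRet(b, LLVMBuildAdd(b, LLVMGetParam(fn, 0),
                                        LLVMGetParam(fn, 1), "sum"));

        /* JIT-compile the module for the CPU we are running on right now;
         * this is the per-device step a bitcode-only distribution would need. */
        char *err = NULL;
        LLVMExecutionEngineRef ee;
        if (LLVMCreateExecutionEngineForModule(&ee, mod, &err) != 0) {
            fprintf(stderr, "MCJIT setup failed: %s\n", err);
            return 1;
        }

        int (*add)(int, int) =
            (int (*)(int, int))(intptr_t)LLVMGetFunctionAddress(ee, "add");
        printf("add(2, 3) = %d\n", add(2, 3));

        LLVMDisposeBuilder(b);
        LLVMDisposeExecutionEngine(ee);  /* also frees the module it owns */
        return 0;
    }

Everything MCJIT does here at runtime, a build machine could do ahead of time once per target architecture - which is basically what the fat binary already is.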