The numbers above may be correct, but a CPU does not consist only of a decoder. Current CPUs have billions of transistors, so the decoder gets lost in the total transistor count, and the difference at the end is much smaller. ARM will still have an advantage, but it probably won't be that huge.
ARM is not magic (as some think). The laws of physics also apply to ARM.
Already back in 2005, the following was written on Ars Technica:
"The original Pentium spent about 30% of its transistors on hardware designed solely to decode the unwieldy x86 ISA. Those were transistors that competing RISC hardware like PowerPC and MIPS could spend on performance-enhancing cache and execution hardware. However, the amount of hardware that it took to decode the x86 ISA didn't grow that much over the ensuing years, while overall transistor budgets soared. On a modern processor, if x86 decode hardware takes up twice as many transistors as RISC decode hardware, then you're only talking about a difference of, say, 4% of the total die area vs. 2%. (I used to have the exact numbers for the amount of die area that x86 decode hardware uses on the Pentium 4, but I can't find them at the moment.) That's not a big enough difference to affect performance."
https://arstechnica.com/uncategorized/2005/11/5541-2/
And I'll say it again: that quote is from 2005. At that time CPUs had significantly fewer transistors than today, and even then the difference was only small.
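To make the scaling argument concrete, here is a back-of-the-envelope sketch in Python. Only the ~30% figure for the original Pentium comes from the quote above; the other decoder transistor counts are purely illustrative assumptions. The point is just that a roughly fixed decode budget becomes a tiny share of a growing total.

```python
# Illustrative back-of-the-envelope sketch (not measured data): how a roughly
# fixed decode-hardware budget shrinks as a share of the total transistor count.
# Decoder figures other than the Pentium's ~30% are assumptions for the example.

chips = [
    # (name, total transistors, assumed transistors spent on x86 decode)
    ("Pentium (1993)",          3_100_000,      900_000),   # ~30% per the Ars quote
    ("Pentium 4 (2000)",       42_000_000,    1_500_000),   # assumed
    ("Modern CPU (2020s)", 10_000_000_000,   20_000_000),   # assumed
]

for name, total, decode in chips:
    print(f"{name:22s} decode share: {decode / total:6.2%}")
```

With these (assumed) numbers the decode share drops from roughly 30% to well under 1%, which is the same trend the 2005 article describes.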
ARM may have an advantage, but it should be small rather than huge, and in the end probably negligible, especially for desktop and server CPUs. You can build good CPUs with ARM, and you can build good CPUs with x86_64. You can also build garbage with both architectures.
By the way, Apple's A-series SoCs are also affected by Meltdown (AMD is not affected by Meltdown). I hope Apple didn't sacrifice security for performance.