I think this is kind of what you'd expect to happen.
x86 ruled the roost for a long time. The fact that it ran everything that had run before was valued, and people bought a lot of them. So the companies making x86 were awash in funds to apply to the R&D necessary to make it perform well enough to keep selling.
x86 has been increasingly hobbled by the fact that it has to carry so much backwards compatibility, and we've always known it wasn't the "optimal" design. But what was going to replace it? There's no point investing a ton of money into a design that's merely as good as x86-- who would buy it? It has to be enough better than x86 to justify breaking backwards compatibility.
So it's no surprise that when we finally see something truly compete with x86, it's significantly better. We wouldn't see it at all if it weren't.
So why is Apple the company that did it? Because Apple had a revenue stream to fund R&D on the scale of what x86 sees. To compete with x86, it's not enough to simply have a better architecture; you need to match its implementation. You don't just license Arm, click compile and go to fab-- if you want to win, you need to pay attention to the details.
@cmaier has been hammering on this repeatedly: every other Arm maker has been using ASIC methodologies, while Apple put the time, money and effort into optimizing and tuning the actual implementation. You don't bother to do this if you're building an Arm into a toaster oven, but you can if you're trying to make the world's leading mobile devices and hoovering up the lion's share of smartphone profits year after year.
This put Apple into a position where they had a better architecture, an implementation at least as good as the x86 makers' and, to Intel's great disgrace, access to a superior process. We've known this could happen, and now the planets have aligned to make it happen.