That’s true, but M1 also uses a more advanced process node (TSMC 5nm vs 7nm), big.LITTLE, and has an optimized OS. It’s not a completely fair fight.
If RISC is so much better than CISC, why did Apple have to switch from a RISC architecture (PowerPC) to a CISC one (x86), specifically because of power efficiency? Most of the advantages of ARM would apply to PPC as well.
cmaier has answered some points, let me answer others.
First the issue of RISC vs CISC is mostly unimportant. It's a stupid fight from thirty five years ago that's mostly irrelevant to anything today, and is kept alive by the fact that it allows a certain class of people to feel like they understand an issue they actually do not understand at all.
Let's take Moore's law seriously: 2x the transistors every two years. So thirty years, fifteen doublings, roughly 32 THOUSAND times as many transistors! Do you really think arguments about how best to design a CPU are still relevant when you have THOUSANDS of times more transistors available to throw at the problem?
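Just to make that arithmetic concrete (toy numbers; real process scaling hasn't been a clean 2x every two years, but the order of magnitude is the point):

    // One doubling every two years, over thirty years, is fifteen doublings.
    let doublings = 30 / 2              // 15
    let factor = 1 << doublings         // 2^15 = 32,768
    print("roughly \(factor)x the transistor budget")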
ISA is just not *that* important. Ideally you want an ISA that's a better match to today's languages, SW design methodologies, and CPU design techniques, but you can work around many things. IMHO ARMv8 is substantially better than RISC-V which is substantially better than x86, but these are not dispositive.
More important is a cluster of more nebulous things:
- do you try to obtain speed via IPC (simultaneously executing many instructions in the same cycle) or via GHz? Performance is roughly IPC times clock frequency, so either path can get you there, but they push the design in very different directions.
Intel has always pushed GHz. They were badly burned by this with the P4, appeared to learn some sense in the years immediately after the P4 (precisely the years Apple switched to Intel -- no coincidence there), and then, as a new generation of management and engineers took over, forgot everything they had learned from the P4 debacle.
To use a rough analogy, Apple designs CPUs using Swift, Intel designs them using x86 assembly. In other words Apple designs the CPU (and SoC) at a high level, and uses tools to convert that high level design into, ultimately, a transistor layout; Intel designs at a much lower level, closer to the transistors.
In theory you can write faster code in assembly -- but in practice what happens is:
+ you become incapable of making LARGE changes because that's just too much work. You can make small patches of code run fast, but you can't change the algorithmic structure of your code to a better algorithm.
+ writing and validating your code takes far more manpower and far longer.
That's the trap Intel finds itself in. Because they have pushed GHz so hard, they find it extremely difficult to engage in even small redesigns of their cores, let alone a total rewrite from scratch. Meanwhile Apple can, in a sense, tell the compiler "perform a hash table lookup here", then next year turn that into "let's change that hash table to an array" and make a substantial design change, while the compiler does all the low level work (see the sketch below).
By luck or by design Apple realized at just the right point that going forward transistors were not getting any faster, but they were still getting denser. So Apple built their entire design methodology on "use as many transistors as you like, but don't waste effort in super specialized circuit techniques to run your clock cycle faster".
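To make that compiler analogy concrete, here's a toy Swift sketch -- purely an illustration of the software side of the analogy, nothing to do with Apple's actual design flow, and the names are made up. The design intent changes by one data structure; everything below the source level is regenerated by the compiler, which is the role synthesis and layout tools play in the hardware version of the story.

    // Toy illustration only: swap the data structure, let the tools rebuild
    // all the low-level machinery underneath.

    // "Year N": look up a per-opcode property via a hash table.
    func uopCountV1(_ opcode: Int, table: [Int: Int]) -> Int {
        table[opcode] ?? 1
    }

    // "Year N+1": the opcodes turn out to be small and dense, so an array is
    // faster. One line of design intent changes; the compiler does the rest.
    func uopCountV2(_ opcode: Int, table: [Int]) -> Int {
        opcode < table.count ? table[opcode] : 1
    }

    let hashTable = [0: 1, 1: 2, 2: 4]   // hash-table version of the data
    let array = [1, 2, 4]                // array version of the same data
    print(uopCountV1(2, table: hashTable), uopCountV2(2, table: array))  // 4 4

Hand-written assembly (or hand-tuned circuits) with the hash table baked in would turn the same change from an edit into a rewrite.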
- what do you imagine you are selling? Apple were selling iPhones -- so they added whatever they wanted to the iPhone to make it better. This included not just CPUs, but ISPs (image signal processors), security, a variety of accelerators, always-on coprocessors, etc.
Intel always saw themselves as selling a chip that runs x86 code, and weren't much interested in adding anything else. Eventually they did, but in usual Intel fashion, too little too late. For years their GPUs were a joke. They claimed about ten years ago to have added an NPU, but it was a pathetic little thing that could slightly accelerate a specialized voice model; it was in no way what we think of today as an NPU. The security stuff has been an ongoing disaster because they insist on doing it as part of x86, not as a separate piece of the chip. In the same way, no interest in accelerators, low-power co-processors, functional DMA, all the other stuff.
- Apple don't waste effort. Apple (to simplify) designs one core each year. The pattern APPEARS to be (this is a guess, but it looks like it) that they design something new from scratch every four years. This is, in a sense, a massive overdesign, in that some parts seem way bigger than they need to be. Then, over the next three versions (as more transistors become available) they fill in the pieces that at first seemed too small. Then another design from scratch. The small cores appear to be reparameterized versions of the large cores (same general design, just use one of everything rather than two of everything!). Compare to Intel who (even apart from the idiocy of the zillions of official SKUs) design Xeon, desktop, mobile, and Atom cores, claim they will share between these, and then seem to do a terrible job of the actual sharing.
Apple understand that you don't need to DESIGN a cheap version of a chip, you just use the version from last year or the year before. That works better for customers, for developers, for Apple.
So those are Intel's issues:
- obsession with GHz at the expense of IPC
- obsession with providing a zillion market segments with a zillion different SKUs
- obsession with x86 at the expense of looking beyond the CPU
In technical terms, you need to look at what makes cores fast nowadays. Intel has these problems:
- the design is almighty complex, with all manner of strange interactions, and an Intel promise that none of this will ever change. This makes design and validation take forever (made worse by the low level design techniques) and means they live in terror of large changes
- the variable instruction length makes fetching and decoding ever more difficult as you go wider (see the sketch after this list). I suspect it also makes many of the details of implementing branch prediction more difficult.
- some especially stupid design choices around things like flags and partial registers make the flow of a modern out-of-order (OoO) machine just that much harder
- the memory model (ie the rules around exactly how loads and stores can be re-ordered relative to each other, and when these memory value changes have to be communicated to other CPUs) limits many of the neatest optimizations that can be performed in the load/store/cache system which is probably the most difficult part of a modern machine
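To illustrate the fetch/decode point from the list above, here's a toy Swift sketch. The encodings are invented (a fake "length in the low two bits" rule, not real x86 or ARM): with fixed 4-byte instructions every decode slot knows where its instruction starts immediately, while with variable lengths each start depends on having already decoded the length of every earlier instruction in the fetch block -- a serial chain you have to attack with extra predecode, marker bits, or speculation hardware as you go wider.

    // Toy model: find instruction start offsets within one 32-byte fetch block.
    let fetchBlock: [UInt8] = (0..<32).map { UInt8($0) }

    // Fixed 4-byte instructions (ARMv8-style): every start is known up front,
    // so eight decoders can all begin in parallel.
    let fixedStarts = Array(stride(from: 0, to: fetchBlock.count, by: 4))

    // Variable-length instructions (x86-flavoured, fake rule: low 2 bits of
    // the first byte give a length of 1...4 bytes): each start depends on the
    // decoded length of the previous instruction.
    func toyLength(of firstByte: UInt8) -> Int { Int(firstByte & 0b11) + 1 }

    var variableStarts: [Int] = []
    var offset = 0
    while offset < fetchBlock.count {
        variableStarts.append(offset)
        offset += toyLength(of: fetchBlock[offset])   // serial dependency
    }

    print("fixed-width starts:    \(fixedStarts)")
    print("variable-width starts: \(variableStarts)")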
But ULTIMATELY almost all of this boils down to personality!
Some individuals are comfortable with change, and are willing to make large changes for the sake of an eventual improvement.
Some individuals think continuity is most important, and are willing to engage in a lot of work to maintain continuity.
Apple has ALWAYS been a change company, has attracted change engineers, and has acquired change developers and customers. (If you are unhappy with constant small changes, you just won't be part of the Apple world for more than a few years!)
Intel (and MS) have always been continuity companies.
Obviously I'm on the side of Apple in this. Constant small change is irritating, yes, but not as irritating as the continuity people claim. My view is the continuity people have a dangerous view of reality. By assuming they can create a world of no visible change, they actually create an extremely fragile world, one where code and hardware is not updated annually to match new realities. And so we get situations where twenty year old software fails, and no-one knows how it works. Or companies are attacked because of security issues from twelve years ago that were never fixed.
Regardless of that, these "RISC vs CISC" arguments are REALLY mainly not about technology but about "lifestyle". Should companies and individuals accept constant small pain because it allows for constant improvement? Or should we accept massive inefficiency so that we can all pretend the tech world is static, and code once written never needs to be updated? Don't be fooled -- this is a bikeshedding argument, not a tech argument. If you want to learn technology, you have to ask tech questions; ANY question that can be converted into a "personality" question will devolve into uninteresting nonsense.