Sorry, this is incorrect from start to finish. People do not really understand ARM vs Intel. ARM is a RISC chip, Reduced Instruction Set Computer. The ARM CPU is less complex and has to break up instructions/code into small pieces.
Because it is less complex, it is easier to shrink to 10/7/5 nm. It benefits from using less power, which produces less heat and requires less cooling. This makes it the best choice for mobile devices.
Intel chips are CISC chips, Complex Instruction Set Computer. Intel CPUs are way more complex and thus harder to shrink. At the same time they are way more powerful in many ways. Being more powerful requires more power, which produces more heat, which requires more cooling. This makes it better for computers in terms of space/power/cooling (minus the throttlebook of course). CISC CPUs can run way more complex software and do it faster. The IPC of an Intel CPU over an ARM CPU is many times greater. This is why we do not see full-blown Photoshop, CAD apps, games like Witcher 3, etc. running on ARM. We do see light versions of all of these.
If Apple or Qualcomm (Windows 10 on ARM) keep making their ARM CPUs more powerful to take over for Intel x86, they will eventually run into the same issues with power, heat, and cooling. There is no way around it. Unless, of course, they never want to run anything but light/less complex applications, which might be fine for the majority of people. Powerful, complex applications will require more than ARM can deliver today.
All processors are Turing complete, and as such they can all run the same type of software, regardless of complexity.
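To make that concrete, here's a minimal sketch in C of a "subleq" machine, the classic one-instruction computer (the memory layout and the little 7 + 5 demo program are my own illustration, not anything from a real chip). Even a CPU with a single instruction can, given enough memory and time, compute the same things as any other; instruction set complexity changes how fast and how conveniently, not what is computable.

```c
#include <stdio.h>

/* subleq: the one-instruction computer. Each instruction is three
 * memory cells A, B, C with the semantics:
 *     mem[B] -= mem[A]; if (mem[B] <= 0) jump to C; else fall through.
 * A negative jump target halts the machine (a common convention). */
int main(void) {
    int mem[] = {
        18, 21,  3,   /* Z -= x                      */
        21, 20,  6,   /* result -= Z  (result += x)  */
        21, 21,  9,   /* Z = 0                       */
        19, 21, 12,   /* Z -= y                      */
        21, 20, 15,   /* result -= Z  (result += y)  */
        21, 21, -1,   /* Z = 0, branch to -1: halt   */
        7,            /* cell 18: x                  */
        5,            /* cell 19: y                  */
        0,            /* cell 20: result             */
        0             /* cell 21: Z, a scratch cell  */
    };

    int pc = 0;
    while (pc >= 0) {
        int a = mem[pc], b = mem[pc + 1], c = mem[pc + 2];
        mem[b] -= mem[a];                 /* the single instruction */
        pc = (mem[b] <= 0) ? c : pc + 3;
    }
    printf("7 + 5 = %d\n", mem[20]);      /* prints 12 */
    return 0;
}
```

The point is not that anyone would ship this; it's that "can it run the software at all" is never the question. Performance is.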
Simply speaking, in modern CPUs, the difference between RISC and CISC is the instruction decoder. For x86 this takes up ~20% of the die area. This translates directly into either making the chip smaller and cheaper to manufacture, or being able to pack more stuff onto the die (cores, caches, ...).
This is also the reason why RISC processors may "shrink better": they can skip the ~20% of extra complexity in the instruction decoder. But otherwise CISC transistors shrink the same as RISC transistors.
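To illustrate what that decoder has to deal with, here's one C statement with typical compiler output for both architectures in the comments. The assembly is simplified and the registers are assumed, so treat it as a sketch rather than real compiler output:

```c
/* One C statement, two instruction sets. */
long counter;

void bump(long n) {
    counter += n;
    /* x86-64 (CISC): one variable-length instruction that reads,
     * modifies, and writes memory in a single go:
     *     add qword ptr [rip + counter], rdi
     *
     * AArch64 (RISC): three fixed-size 4-byte instructions
     * (assuming x2 already holds the address of counter):
     *     ldr x1, [x2]        // load counter
     *     add x1, x1, x0      // add n
     *     str x1, [x2]        // store back
     *
     * Internally a modern x86 core cracks its instruction into
     * similar micro-ops anyway; the visible difference lives
     * mostly in the decoder, which is that ~20% above. */
}
```

Variable-length instructions also mean the x86 decoder has to figure out where each instruction even starts, which is a big part of that extra complexity.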
RISC processors were the first superscalar processors, had deeper pipelines, and as such were able to execute more instructions per clock than their CISC counterparts. Over time, CISC processors also got deep pipelines, multiple execution units, and became superscalar. In modern processors it's not clear to me that either technology would fundamentally be capable of higher IPC than the other. There's also no fundamental reason why one should be able to execute software faster than the other.
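You can actually observe superscalar execution from ordinary code. Below is a rough C microbenchmark I sketched for this point (the array size and the choice of four accumulators are my own assumptions, results depend heavily on compiler and microarchitecture, and an optimizer may vectorize the loops and blur the effect; something like gcc -O2 with vectorization off isolates it best). One version is a single dependency chain where each add must wait for the previous one; the other splits the work across independent accumulators so a wide core can execute several adds per cycle.

```c
#include <stdio.h>
#include <time.h>

#define N 100000000L

/* One long dependency chain: each add depends on the previous
 * result, so the core can't use its extra execution units. */
static long sum_chained(const int *a) {
    long s = 0;
    for (long i = 0; i < N; i++)
        s += a[i & 1023];
    return s;
}

/* Four independent chains: a superscalar core can run these
 * adds in parallel, typically finishing noticeably faster. */
static long sum_split(const int *a) {
    long s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    for (long i = 0; i < N; i += 4) {
        s0 += a[(i + 0) & 1023];
        s1 += a[(i + 1) & 1023];
        s2 += a[(i + 2) & 1023];
        s3 += a[(i + 3) & 1023];
    }
    return s0 + s1 + s2 + s3;
}

static double timed(long (*f)(const int *), const int *a, long *out) {
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    *out = f(a);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void) {
    static int a[1024];
    for (int i = 0; i < 1024; i++) a[i] = i;

    long r1, r2;
    double tc = timed(sum_chained, a, &r1);
    double ts = timed(sum_split, a, &r2);
    printf("chained: %ld in %.3fs, split: %ld in %.3fs\n", r1, tc, r2, ts);
    return 0;
}
```

If the split version runs markedly faster, that speedup is instruction-level parallelism at work, and it is available on modern x86 and ARM alike.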
For modern RISC architectures that are quite competitive with x86, look at SPARC or POWER, for example. It has been attempted with the ARM architecture as well; that hasn't yet reached the same level of performance, but not because of RISC vs CISC. I'm sure these powerful RISC chips consume just as much power as CISC chips do.
The one thing you got right was that both RISC and CISC processors are ultimately limited by the laws of physics.
Edit: While I'm not a chip designer, I do program these chips regularly at a low level, both x86 and ARM. And I guess I technically built a one-instruction CPU for a uni lab once, but it's not clear whether that counts for anything. I think I have the 20% figure from cmaier in an earlier discussion; any mistakes there would be mine.