Well, you should: think differently.
ARM CPUs are old. Candidly, they're 1980s-vintage technology (the first ARM silicon dates to 1985). x86 is older still, 1970s technology (the 8086 dates to 1978).
RISC-V is 21st-century technology, dating from 2010.
If you extrapolate from how long mass-market adoption of ARM took, from early desktops in the UK in the 1980s such as the Acorn Archimedes (first released in 1987) up until user-friendly Apple started shipping M1-based Apple Silicon Macs (still really just ARM) in 2020, you can guesstimate that mainstream, user-friendly computing lags research and development by at least 33 years (2020 minus 1987).
We still haven't fully phased out x86, but that is not because "changing the architecture is difficult": most individuals do not program in assembly, and the rewrite of Unix from PDP-11 assembly into C circa 1972 was the beginning of that paradigm shift. LLVM, GCC, Linux, FreeBSD, OpenBSD, (O)KL4, golang, and many more already support RISC-V. Certainly, consumer computers based upon MOS 6502 and MC68k designs still saw a fair amount of assembly programming through the 1980s and even a little into the 1990s, but by the 1990s most high-end R&D was on MIPS (the first to market with a 64-bit CPU, even before DEC's Alpha), as used in Silicon Graphics workstations.
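To make that concrete, here is a minimal sketch of what that post-assembly paradigm shift buys you in practice: ordinary C targeting RISC-V with no RISC-V hardware in sight. The toolchain and emulator named below (Debian's gcc-riscv64-linux-gnu cross compiler and the qemu-user package) are my assumptions about a typical Linux setup, not something from the discussion above.

```c
/*
 * hello.c -- deliberately boring, portable C; nothing here is
 * ISA-specific, which is the whole point of the post-1972 shift
 * away from hand-written assembly.
 *
 * A sketch of building and running it as a riscv64 binary on an
 * x86 box, assuming Debian's cross toolchain and QEMU user-mode
 * emulation are installed:
 *
 *   riscv64-linux-gnu-gcc -static -O2 -o hello hello.c
 *   qemu-riscv64 ./hello    # runs the RISC-V binary, no hardware needed
 */
#include <stdio.h>

int main(void)
{
    printf("hello from riscv64\n");
    return 0;
}
```

The same source recompiles unmodified for x86-64, AArch64, or riscv64, which is exactly why "changing the architecture" is no longer the obstacle it was when software was written in assembly.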
NVidia has been working on transitioning the embedded cores in its GPUs from 32-bit ARM cores to 64-bit RISC-V cores, and has spoken about that publicly since at least 2016 in one of the earlier public presentations on such research; they have continued to present newer findings on RISC-V since. It seems pretty clear that, regardless of whether NVidia's acquisition of ARM from SoftBank is approved or not, their future designs will predominantly be RISC-V based.
Even pedagogy has been shifting to RISC-V for some time, which makes sense, as it was originally created in academia for research and teaching. Of particular interest, I thought MIT's Xv6 (https://pdos.csail.mit.edu/6.828/2019/xv6.html) was especially fascinating, as it is essentially a RISC-V iteration of the "Lions' Commentary on UNIX" style of operating system study for undergraduate students. As you may or may not know, Lions' Commentary was effectively contraband for a while, particularly during the AT&T vs. BSDi lawsuit (which, I might add, AT&T lost; it was settled out of court, with BSDi only having to change a few things in what they shipped in their BSD offerings). BSDi's hardware business later became iXsystems. Full disclosure: iXsystems was a consulting client of mine circa 2013, where I was blessed to work with the likes of jkh (Jordan Hubbard, co-founder of the FreeBSD project and Director of Unix Technologies at Apple for a dozen years, who left after Steve Jobs passed away to become CTO of iX).
You don't have to look very far to see RISC-V already making inroads at other vendors, notably Western Digital, whose SweRV implementation is professed to ship in billions of devices annually. What many seem to ignore is that microcontrollers have largely been replaced by tiny embedded computers running, more often than not, something such as a BSD derivative, and those little per-core licensing costs add up at scale. Apple has a perpetual ARM license, so this worries them less. NVidia? Does not, and how many 32-bit ARM cores are in their recent GPUs? Thousands.
SiFive is widely regarded as something of the "reference" RISC-V vendor, and earlier this year the HiFive Unmatched finally began shipping, built around the Freedom U740, a 64-bit RISC-V SoC fabbed on a 28nm process by TSMC (the same semiconductor fab that Apple, NVidia, and AMD utilize, among others). Since then, TSMC has already demonstrated a proof-of-concept 5nm-fabbed RISC-V chip from one of SiFive's upcoming designs. Intel has even made headlines suggesting they will probably fabricate RISC-V based CPUs in the future, and while they were an early investor in SiFive, they reportedly even made an offer to purchase the fabless firm earlier this year.
As you will recall, even Intel tried to get out from under its inefficient x86 legacy with Itanium. That was a failure, but for a variety of reasons, not the least of which was Intel's insistence upon Rambus for RAM, which media outlets such as Tom's Hardware pooh-poohed at the time. A lot of DIY-minded sorts think they're God's gift to piecing together hardware from off-the-shelf components, when it's pretty clear to anyone who has ever done circuit board design that the shorter the traces, the faster an overall system can be. That is presumably why Intel was gunning for Rambus long ago, with its patents on improved memory timings, and it is not entirely dissimilar to how Apple's M1 silicon puts the memory controller and the memory itself in one package. The DIY gamer-overclocker crowd would have you believe that vendors are trying to deprive us of choices, when the reality is: tighter integration is the key to better performance at scale.
Meanwhile, last year the firm Micro Magic demonstrated a RISC-V chip running at 5GHz while consuming only about 1W of power. Some time before that, in the embedded space, ONiO announced the ONiO.zero, a 24MHz RISC-V implementation for microcontroller and embedded applications that uses energy-harvesting techniques to run without a battery or any wired power source. The BeagleBoard.org® Foundation, vendor of the ARM-based BeagleBone Black alternative to the Raspberry Pi, also has the RISC-V based BeagleV in preliminary samples out to developers before it goes retail (with estimated street prices of $149 and $199, depending on how much RAM is equipped, when it eventually starts shipping).
Long story short: RISC-V appears to be where the puck is heading, to mangle a phrase Steve Jobs borrowed from Wayne Gretzky. x86 (and its grafted-on AMD64 extensions), as well as ARM, are where the puck has been.
Truly, my only reservation with _Spinn_'s comment is that RISC-V is not the Linux of chips so much as the BSD of chips; RISC-V even shares BSD's UC Berkeley provenance.
At least from my vantage, FreeBSD, OpenBSD, and LLVM are "upstream" of macOS and iOS. I sincerely doubt those projects would have already invested the time and energy into RISC-V if they did not think it worth their while. They are, after all, libre/free open source software projects, and they cannot afford to expend constrained developer attention on things which appear to be dead ends.
You can buy a 64-bit ARM based multicore PineBook Pro for around $200 these days. In my experience, it offers a pretty darned good experience, if with less polish and refinement than an Apple M1 based laptop. Why should *anyone*, moving forward, be paying royalties to ARM for a decades-old CPU ISA, given that even a lot of hardware design is now iterated in software? The tooling is largely in place to make a seamless transition to RISC-V today, and I am certain that in 33 years' time, if we are still using ARM-based CPUs, they will be considered old and legacy, and we will hopefully be focusing on some newer iteration which addresses the sorts of things RISC-V failed to foresee. For the time being, though, RISC-V looks really darned good, and we don't even have the 128-bit iterations yet, which are already sketched in the specification. It's the most refreshing CPU ISA I have encountered in my decades in the field.