I still don't get why Google is wasting money on hardware and "custom" chips (read: modified Exynos's). Slavishly copying Apple hasn't worked for anyone else, why should it work for them?
Slavishly copying the Macintosh operating system worked pretty well for Microsoft... :cool:

EDIT: but seriously, the world will move to ARM. If Google, Microsoft, and Apple are all using the same "family" of CPU, Intel will be there too, and so will AMD.
ARM has been around for ages, but Apple was the first to make it the best CPU for mainstream computing. Let's see what comes out of this. It will be great for consumers!
 
They could just take an off-the-shelf chip right now and put Chrome OS on it. You can already run a Linux desktop quite happily on an 8GB Raspberry Pi. Just slap one in with some cooling, job done.
 
You ain't a playa unless you got your own processor... Amazon already has the AWS Graviton... Question is who's next... FB or MS?
 
It was a huge mistake to have Google on Apple's board, and Steve Jobs was running the show back then. His lifelong mission to destroy Google died with him. Now Android phones are dominant worldwide. It's too bad, as Microsoft's phone was not a copy and was really interesting. If Google hadn't had its insider head start, the Microsoft phone might have had a chance.
That's an interesting point. But if the MSFT phone had succeeded, we might have had a monopoly today, where MSFT has all the business customers on Windows, a big share in Azure, and all the users on WinMobile as well.
It's actually strange how slow MSFT was to react, given that they had made so many phones before the iPhone.
 
Not really that strange if you remember Steve Ballmer’s laughing reaction to the iPhone (“it doesn’t have a keyboard which makes it not a very good business machine” etc)
 
I think this is great for the market. Having someone like Google and the Chromebook create their own processor makes for more competition in the market, which will drive innovation and keep prices low. Look what it has done for the TV market: it lets people buy a TV for $100 while state-of-the-art 100-inch 8K TVs are still offered for purchase. The Chromebook allows anyone with little money to own a computer, which was just a dream 30 years ago.
 

WinMobile had a large market share back in the day. It just was, well, carrier driven. It wasn't anything close to being like iOS.


I actually liked the newer Windows Mobile (Metro) phones, and I still think live tiles are great. But the platform died because handset makers went for the cheaper option (which is exactly how Windows beat the Mac OS back in the day: it was the cheaper, more available option).
 
I think not. The EU, UK, and USA are already concerned about the acquisition of ARM, and many companies object to it.

RISC-V isn't widely used and doesn't have any main use. For example, x86 is for computers and ARM is for mobile devices. Of course, Apple is slowly bringing ARM to the computer, but the transition from x86 to ARM is extremely difficult, and even now Apple is the only one doing it. If x86 to ARM is difficult, moving to RISC-V will be way more difficult, because the majority uses x86 and is only slowly moving to ARM. Changing the architecture is very difficult, so it will take more decades to solve the problem.
Well, you should: think differently.

ARM CPUs are old. Candidly, they're 1980s vintage technology. x86 is older, 1970s technology.

RISC-V is 21st century technology, from 2010.

If you extrapolate from how long it took for mass market adoption of ARM, from early desktops in the UK in the 1980s such as the Acorn Archimedes (first released in 1987) up until user-friendly Apple started shipping M1 based Apple Silicon (but still really just ARM) Macs in 2020, you can guesstimate that mainstream user-friendly computing lags research and development by 33 years, at least.

We still haven't fully phased out x86, but it is not a matter of "changing the architecture being difficult", because most individuals do not program in assembly; the rewrite of Unix from PDP assembly into C circa 1972 was the beginning of that paradigm shift. LLVM, GCC, Linux, FreeBSD, OpenBSD, (O)KL4, golang and many more already support RISC-V. Certainly, consumer computers based upon MOS 6502 and MC68k designs were still seeing some writing in assembly up through the 1980s and even into a little bit of the 1990s, but most high end R&D by the 1990s was MIPS (the first to market with a 64bit CPU, even before DEC's Alpha), such as was used in Silicon Graphics workstations.
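To make that concrete (a minimal sketch, assuming a clang/LLVM install with the relevant backends and sysroots available; the target triples and flags below are common examples, not a guaranteed recipe), the exact same C source builds for RISC-V or ARM just by changing the compiler's target:

/* hello_portable.c: illustrative only.
 *
 * Possible cross-compiles, assuming backends and sysroots are present:
 *   clang --target=riscv64-unknown-linux-gnu -O2 hello_portable.c -o hello_riscv64
 *   clang --target=aarch64-unknown-linux-gnu -O2 hello_portable.c -o hello_arm64
 */
#include <stdio.h>

int main(void) {
    /* No architecture-specific code here; the compiler backend does the ISA work. */
    printf("Same C source, different instruction set underneath.\n");
    return 0;
}

Which is the whole point: for the overwhelming majority of software written in portable C (or anything above it), the ISA underneath is a compiler target, not a rewrite.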

NVidia has been working on transitioning its GPU cores from 32bit ARM cores to 64bit RISC-V cores, and has spoken about that publicly since at least 2016 in some of the earlier public presentations on such research; they continue to present newer findings on RISC-V. It seems pretty clear that regardless of whether NVidia's acquisition of ARM from SoftBank is approved or not, their future designs will predominantly be RISC-V based.

Even pedagogy has been shifting to RISC-V for some time, which makes sense as it was originally created as a learning architecture for students. Of particular interest, I thought MIT's Xv6 (https://pdos.csail.mit.edu/6.828/2019/xv6.html) was especially fascinating, as it is essentially a RISC-V iteration of the "Lions' Commentary on Unix" style of operating system research for undergraduate students. As you may or may not know, Lions' Commentary on Unix was effectively contraband for a while, particularly during the AT&T vs BSDi lawsuit (which, I might add, AT&T lost, settling out of court with BSDi, which had to change a few things in what they shipped in their BSD offerings). BSDi later renamed itself to iXsystems. Full disclosure: iXsystems were a consulting client of mine circa 2013, where I was blessed to be able to work with the likes of jkh (Jordan Hubbard, founder of the FreeBSD project and former Director of Unix Technologies at Apple for a dozen years, before he left after Steve Jobs passed away to become CTO of iX).

You don't have to look very far to see RISC-V already making inroads with other vendors, notably Western Digital, whose SweRV implementation is professed to ship in billions of devices annually. What many seem to ignore is that microcontrollers have largely been replaced by tiny embedded computers running, more often than not, something such as a BSD derivative, and those little licensing costs add up at scale. Apple has a perpetual ARM license, so this worries them less. But NVidia? It does not have one, and how many 32bit ARM cores are in their recent GPUs? Thousands.

SiFive can widely be regarded as sort of the "reference" RISC-V vendor, and earlier this year the HiFive Unmatched finally began shipping, using a 28nm-process U740 "Freedom" 64bit RISC-V CPU fabbed by TSMC (the same semiconductor fab which Apple, NVidia, and AMD utilize, among others). Since then, TSMC has already demonstrated a proof-of-concept 5nm-fabbed RISC-V chip from one of SiFive's upcoming designs. Intel has even made some headlines to the effect that they will probably be fabricating RISC-V based CPUs in the future, and while they were an early investor in SiFive, they supposedly even made an offer to purchase the fabless firm earlier this year.

As you will recall, even Intel was trying to get out from under their inefficient x86 legacy with Itanium. That was a failure, but for a variety of reasons, not the least of which was their insistence upon Rambus for RAM, which media outlets such as Tom's Hardware pooh-poohed at the time. A lot of DIY-minded sorts think they're god's answer to piecing together hardware from off-the-shelf components, when it's pretty clear to anyone who has ever done circuit board design that the smaller the traces are, the faster an overall system can be. That is presumably why Intel was gunning for Rambus long ago due to its patents on improved memory timings, not entirely dissimilar to how Apple's M1 silicon has the memory controller and memory all in one package. The DIY gamer overclock crowd would have you believe that vendors are trying to deprive us of choices, when the reality is that tighter integration is the key to better performance at scale.

Meanwhile, last year the firm MicroMagic demonstrated a RISC-V chip running at 5GHz while consuming only 1W of power. Some time before that, in the embedded space, ONiO.zero announced a 24MHz RISC-V implementation for microcontroller and embedded applications which uses energy harvesting techniques to operate without any active power draw. The vendors of the ARM based BeagleBone Black from the BeagleBoard.org Foundation (a Raspberry Pi alternative) also have the RISC-V based BeagleV out in preliminary samples to developers before it goes retail (with estimated street prices of $149 and $199, depending on how much RAM is equipped, when they eventually start shipping).

Long story short: RISC-V appears to be where the puck is heading, to mangle a phrase Steve Jobs borrowed from Wayne Gretzky. x86 (and its grafted-on AMD64 extensions), as well as ARM, are where the puck has been.

Truly, my only reservation with _Spinn_'s comment is that RISC-V is not so much the Linux of chips as the BSD of chips. RISC-V even shares BSD's UC Berkeley provenance.

At least from my vantage, FreeBSD and OpenBSD and LLVM are "upstream" of macOS and iOS. I sincerely doubt those projects would have already invested the time and energy into RISC-V if they did not think it was worth their while. They are, after all, libre/free open source software, and they can't afford to expend constrained developer attention on things which appear to be dead ends.

You can buy a 64bit ARM based multicore PineBook Pro for around $200 these days. In my experience, it offers a pretty darned good experience, if perhaps with less polish and refinement than an Apple M1 based laptop. Why should *anyone*, moving forward, be paying royalties to ARM for a decades-old CPU ISA, given that even a lot of hardware design is iterated in software? The tooling is largely in place to make a transition to RISC-V seamless now, and I am certain that in 33 years' time, if we are still using ARM based CPUs, they will be considered old and legacy, and we will hopefully be focusing on some newer iteration which addresses the sorts of things that RISC-V failed to foresee. For the time being though, RISC-V looks really darned good, and we don't even have 128bit iterations of it yet, though they are already in the specification. It's the most refreshing CPU ISA I have encountered in decades in the field.
 
ARM took several decades to become a major architecture. RISC-V is very far from that. Since many companies still don't want to use ARM as their main chip, RISC-V will have a much harder time.
 
I think I already pointed that out: from the Acorn Archimedes to the Apple M1 Macs was 33 years, so yes, several decades.

ARM was widely used in mobile devices, and even servers (e.g. Google/Alphabet Inc.'s Tensor predecessor has been powering their servers for years, and Google/Alphabet Inc. have more servers deployed than just about any company except perhaps Facebook, last I checked; note: both Google and Facebook were clients of one of my previous employers, namely iSEC Partners/NCC Group), so I would qualify it as a "major architecture" even before Apple began shipping its M1 iterations. Windows also runs on ARM, albeit to a lot less fanfare.

RISC-V may be a long way from being as widespread today, but in two decades? I think ARM based designs will seem positively arcane. To me, they already feel that way. I really do not think it will be "much harder" as you claim. From my vantage, the industry is already rallying around RISC-V as the direction it will be heading, just as x86 and AMD64 are being left in the dust by ARM today. In another two or three decades, ARM will not keep up, and presumably educators and researchers will be looking for something to replace what I estimate will be a much more widely deployed RISC-V footprint. No hardware is eternal.
 
Looks like Apple is at least slightly interested in RISC-V at the moment. I don’t doubt the transition would take quite some time but for a company like Apple that likes to control as much as possible it seems like it could be plausible in the future.

 
Yeah, I saw it. But the problem is that using RISC-V would require ANOTHER TRANSITION. Apple is still struggling with the transition from x86 to ARM, and it has only been a bit more than a year, so RISC-V would be another huge transition. Why would Apple spend its money and time on RISC-V? I don't know, but adopting RISC-V will be difficult unless it works fine with ARM based devices and software.
 
To see what RISC-V could do for them in the future. The first hints that Apple was experimenting with booting Darwin/macOS on ARM can be traced back to 2010.
 
Likely to see if it is even lower power than ARM? It would be ideal for wearables such as AirPods.
Could be, although Apple uses Cortex-Mx CPUs in, for example, AirTags. SiFive's RISC-V CPUs seem to be even more power efficient than these CPUs.
 
Why is "ANOTHER TRANSITION" a problem?

macOS (Apple Silicon and Intel) today was previously OS X (Intel and PPC), which was previously NeXTSTEP (x86), which originally ran on the NeXT Cube (MC68K).

From my vantage, that means that every decade or two, Apple (and by extension, NeXT) got to shift their CPU ISA, get rid of old cruft, and, like a snake shedding its skin, keep growing and innovating. Due to its microkernel-style design, as well as being predominantly written in C rather than architecture-specific assembly, macOS keeps intact the Unix portability paradigm which began back when Unix was rewritten from PDP assembly to C in the 1970s.

A lot of other vendors have been much more "locked in" to their hardware, and predictably, accrue technical debt and security vulnerabilities which are easy enough to leave behind with a bit more intention and awareness.

I'm guessing that if Apple ever decides to embrace RISC-V as a core CPU (something which would be far from now), it would be possible for them to have something akin to Rosetta/Rosetta 2 to translate ARM instructions to RISC-V instructions as well. I very much doubt that they would make a transition difficult, given that the past transitions from PPC to Intel and from Intel to M1 Apple Silicon have gone relatively smoothly, requiring little more than standardizing on Xcode (remember when CodeWarrior was instead prevalent?) and recompiling projects.
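Just to illustrate the general idea of such a translator (purely a hypothetical sketch; this is not how Rosetta 2 actually works, and the tiny "instruction sets" below are invented for illustration rather than real ARM or RISC-V encodings), the core of any binary translation pass is decoding one ISA's operations and emitting the closest equivalents in the other:

/* isa_translate_sketch.c: hypothetical toy, not Apple's design.
 * A decoded "ARM-like" operation is mapped onto a "RISC-V-like" one.
 * Real translators work on actual machine encodings and must also handle
 * condition flags, memory ordering, vector extensions, and much more. */
#include <stdio.h>

typedef enum { SRC_ADD_IMM, SRC_LOAD64, SRC_STORE64 } src_op;  /* invented */
typedef enum { DST_ADDI,    DST_LD,     DST_SD      } dst_op;  /* invented */

typedef struct { src_op op; int rd, rs1, imm; } src_insn;
typedef struct { dst_op op; int rd, rs1, imm; } dst_insn;

/* Map one decoded source instruction to one destination instruction. */
static dst_insn translate(src_insn in) {
    dst_insn out = { DST_ADDI, in.rd, in.rs1, in.imm };
    switch (in.op) {
        case SRC_ADD_IMM: out.op = DST_ADDI; break;
        case SRC_LOAD64:  out.op = DST_LD;   break;
        case SRC_STORE64: out.op = DST_SD;   break;
    }
    return out;
}

int main(void) {
    src_insn in  = { SRC_ADD_IMM, 1, 2, 42 };   /* rd=1, rs1=2, imm=42 */
    dst_insn out = translate(in);
    printf("src op %d -> dst op %d (rd=%d rs1=%d imm=%d)\n",
           in.op, out.op, out.rd, out.rs1, out.imm);
    return 0;
}

In practice the hard parts are the corner cases rather than the bulk mapping, which is presumably part of why these transitions have gone more smoothly in recent years than many people expected.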

If anything, I tend to see a lot of individuals get way more fixated on the transitions than I have ever observed downsides in practice. No one is taking away your old hardware; they just provide new options for the future, and software developers tend to adapt pretty quickly without end users needing to do much more than update as applicable.

To me, it seems indicative of avoiding stagnation. It's also part and parcel of capitalism: how many iterations have the PlayStation, Xbox, or Nintendo consoles been through by now? They *want* to sell new hardware. Even if they may occasionally offer older software in emulated forms, that isn't what drives new sales, nor does it bring new users to their platforms.
 
Because making a huge transition is a huge problem. This is why macOS lacks software compared to Windows, and even now the gaming and virtualization markets are definitely worse than in the Intel Mac era, because Apple Silicon is not compatible with x86.

Apple is not a leader. Its market share proves it, and its many transitions aren't great.
 
I do not think that you have provided any evidence of any of these being "huge" transitions, nor "huge" problems. The gaming market has never been Apple's key demographic, and the virtualization market is predominantly applicable in the server space, which Apple also doesn't really seem to care very much about. The transitions previously mentioned, from MC68K to x86 with NeXTSTEP, to PPC and then later back to Intel with OS X, and most recently to Apple M1 Silicon, have gone remarkably smoothly, particularly as contrasted with Windows' aborted efforts to port their OS to DEC Alpha and Intel's Itanium. Windows does currently run on ARM, but as a second class citizen, whereas Apple has been running iOS on ARM exclusively.

Big Sur seems more or less flawless on Apple M1 Silicon in my experience, with legacy AMD64/x86 apps running without issue thanks to Rosetta 2, for the straggler developers who didn't jump on Apple's DTK (Developer Transition Kit), which was offered well in advance of M1 Silicon hardware being released, with a buy-back program afterwards. I know of no similar programs provided by *any* other vendor in the entire industry to help ease and smooth over hardware transitions, and I have been in this field for longer than Macs or Windows have existed.

I do concur with you insomuch as Apple is not a leader. They are typically conservative with their adoption and deployment of new technology, though they are also not a straggler with regards to adapting R&D into the consumer space. Their market share is just one aspect of the company's health as a whole, though, and last I checked, Apple has $207.06 billion cash on hand, ranking it number one in such metrics, with Alphabet Inc./Google, Microsoft, Amazon, and Facebook all trailing it.

I, for one, am certainly grateful that some companies take a more aggressive approach to adopting new technology and abandoning old cruft. The COBOL technical debt in the industry is a blight that isn't going to be solved by schools so much as by companies sucking it up, jettisoning old code, and migrating to newer alternatives as COBOL developers age out and die. The same goes for companies which still depend on a long-past-its-prime OpenVMS infrastructure. Even HP (who acquired DEC) officially discontinued Itanium support in July of this year, which is where OpenVMS had gone after DEC Alphas were phased out; and while I have read that there is a company (VSI) working on an AMD64 port of OpenVMS, that seems like a waste of effort and folly from my vantage, and it appears to only be in field tests at the moment, not the sort of thing any sane, keeping-up-with-the-times business would ever be able to truly justify running in prod. Having used COBOL and OpenVMS, in my professional opinion they are trash, perpetuated by the sorts of "good old boy", "you scratch my back, I'll scratch yours" cronyism which is best avoided at all costs.

In contrast, some of the libre/free open source software projects such as OpenBSD, are actually significantly more aggressive than Apple with regards to deprecating useless and vulnerable code branches (e.g. OpenBSD deprecated loadable kernel modules long before Apple did, and many other Unix and Linux vendors still offer such things, despite their limited utility and routine use as exploit vectors), but OpenBSD can also typically claim having limited resources and constrained developers as part of its justification, not to mention a significantly smaller user base (though Apple and Microsoft are among companies which routinely borrow code from the OpenBSD project).

Anyway, this isn't the first time I haven't seen eye to eye with your writings, and I doubt it will be the last. We seem to operate from significantly different realms with regards to technology, and we perceive its evolution, and Apple's role in it, differently.
 
lol, both the gaming market and the virtualization market are the proof. How many of them support M1 natively? Only a few. Don't tell me they still need time; they have had enough time to port their games. x86 virtualization is totally gone. The transition is flawless? Tell that to games, virtualization, and more. So far, only Apple-friendly software has migrated flawlessly, not the rest.

After all, you have no idea what you are saying. x86 still dominates the market, and yet Apple ditched it; therefore, a lot of software won't support macOS. macOS already suffered from a lack of software compared to Windows even with x86, so what do you expect?
 