These would simply be recompiled. There's not a lot of ARM assembly language in the source code for any of the things you've listed.

You cannot just recompile Node.js/V8, .NET/Mono or the JDK, as they are all based on JIT and/or AOT engines. To my knowledge none of these exists in a stable, performant, upstreamed version for RISC-V yet, despite a major push from the industry over the last five years.
Good luck if you are on your own for this undertaking, without the industry community behind you.
 
If anyone on here knows what the next big breakthrough in processor tech will be they should definitely go get a job at Apple, Intel, or AMD
 
We've reached 5nm, and even the Arm Cortex-X1 is still on ARMv8.2-A instruction-wise. Are we already close to the limit of ARM? I mean sure, there's still 3nm and 2nm, but there's a literal physics limitation, no?

So what's next beyond ARM? It seems the ceiling would be reached quite fast.
Obligatory mention that I AM NOT AN EXPERT, but as someone above said, ASICs look promising for future performance improvements.

Also, I've read that germanium can be scaled smaller than silicon(?), so maybe there's something there.

We also have IPC improvements to look forward to, I think; maybe not indefinitely, but it'd be foolish to think there isn't room for improvement.

Memory speed increases come to mind: the faster the CPU can access memory, the faster programs become.
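Just to show how much of a program's time can go to waiting on memory, here's a rough back-of-the-envelope Swift sketch; the array size, stride, and the little timing helper are just things I made up for illustration, not a rigorous benchmark. Both loops do exactly the same additions over the same array; only the access pattern differs.

```swift
import Foundation

let count = 1 << 22                                  // ~4M Doubles (~32 MB)
let values = [Double](repeating: 1.0, count: count)

// Tiny timing helper (made up for this sketch).
func time(_ label: String, _ work: () -> Double) {
    let start = Date()
    let result = work()
    let ms = Date().timeIntervalSince(start) * 1000
    print("\(label): \(String(format: "%.1f", ms)) ms (sum = \(result))")
}

// Cache-friendly: walk the array in order, so the prefetcher keeps the CPU fed.
time("sequential") {
    var sum = 0.0
    for i in 0..<count { sum += values[i] }
    return sum
}

// Cache-hostile: jump around with a large odd stride; every element is still
// visited exactly once, but most accesses miss the caches.
time("strided") {
    var sum = 0.0
    var index = 0
    for _ in 0..<count {
        sum += values[index]
        index = (index + 4097) % count
    }
    return sum
}
```

Same arithmetic, same data, very different wall-clock time, purely because of how fast the memory system can deliver the values.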

And sidestepping the question: even as computers get faster every year, a lot of programs seem to get slower and more sloppily made. Maybe if the speed jumps slow down, companies will have an incentive to focus on improving the performance and stability of their software rather than "ship it now, we'll figure it out later!"
 
And once there are no tricks left up the sleeve... no idea.

I've always assumed that in the long run there would be a shift away from local processing to streaming over the internet. It would make a lot of sense with small, mobile devices - they're a tough balancing act between processing power, battery life, and heat dissipation. The Apple Watch in particular would benefit greatly if the only thing it had to do was grab a multimedia stream from a nearby iPhone and show it on the screen, only processing user inputs internally. The iPhone likewise, especially given that it already is an always-on mobile device.

In this world the technical details of the servers on which everything runs would be of interest only to hardcore technical enthusiasts. For something like video editing I understand that cloud-based rendering has been popular for several years. It makes a lot of sense if you're ultimately going to upload the video to YouTube, although in the long run, when it's no big deal to upload and download multi-terabyte files, there won't be a need to render video at all. For local video editing or gaming a network stream adds an unavoidable delay that could perhaps be bypassed by clever branch prediction.

I suppose the ultimate limit is the speed of light. That's one of the limits Seymour Cray hit when he developed his early supercomputers; it's why the Cray-2 was such a compact shape, to keep the data transfer pathways as short as possible. Even if the machine could process data instantly, data transfer would still be limited by the speed of light. But perhaps there's a clever predictive software approach that could eliminate even that, somewhat akin to a branch predictor but on a much larger scale.
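To put a number on the speed-of-light point, here's a trivial back-of-the-envelope calculation in Swift; the distances are just examples I picked, and real networks are worse because light in fibre travels at roughly two-thirds of c and packets get routed and queued along the way.

```swift
import Foundation

let c = 299_792_458.0                          // metres per second, in vacuum

for km in [100.0, 1_000.0, 10_000.0] {         // example one-way distances to a server
    let rtt = 2 * km * 1_000 / c               // best-case round trip, in seconds
    print(String(format: "%6.0f km away -> at least %.1f ms round trip", km, rtt * 1_000))
}
// Even 30 cm of signal path costs about a nanosecond, which is why Cray wanted
// his wire runs as short as physically possible.
```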

I like to think that in the far, far future network latency will no longer be an issue. Our computer will simply bombard us with petabytes of media at the upper limit of the human brain's ability to process it, 24/7, and we will just accept it all. We will consume the filmography of the twentieth century in the blink of an eye.
 
One thing you have to understand is that the Cocoa classes, the CODE that is now being implemented in Swift, are really patented and nailed down. There is quite a bit of overhead in that code, which some programmers, especially in the past, thought was useless, and they were kinda right.

But now, with what the previous poster called "ASICs", in other words smaller and smaller "hardwired extensions" to the main cores, things can really get insane. Imagine not needing to load as much code, with calls that are strictly processor calls: load data and branch. You start to get some really huge speed increases!

The beauty of Swift (and the Cocoa libraries) with ARM is that back in the old days, like with AltiVec on PPC, you had to specifically break your code down into whatever was necessary to use the features of the chip, or chips even; even for basic Intel chip features you had to code directly FOR the chips.

Now you just program in Swift and use the provided frameworks, and Apple has dedicated "ASIC pieces" ready to fire away, without you having to do that old heavy lifting! I mean, sure, Metal is still work, but with machine learning you can take advantage of the ML code/chips without having to format your code for the ML hardware directly; you just use the frameworks, everything is under the hood, and it screams. They are creating the CHIPS for the CODE, a reversal...
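Roughly what that looks like in practice, using Accelerate's vDSP as the example; the vDSP.multiply / vDSP.sum overloads are as I remember the Swift overlay, so treat this as a sketch rather than gospel. The point is that the calling code never mentions the hardware, and Apple routes the work to whatever accelerated paths the chip provides.

```swift
import Accelerate

let a = [Double](repeating: 1.5, count: 1_000_000)
let b = [Double](repeating: 2.0, count: 1_000_000)

// Hand-rolled loop: the compiler may or may not vectorise this well.
var naive = 0.0
for i in 0..<a.count { naive += a[i] * b[i] }

// Framework call: the same maths, dispatched to tuned, hardware-aware routines.
let accelerated = vDSP.sum(vDSP.multiply(a, b))

print(naive, accelerated)   // both ≈ 3,000,000
```

You write against the framework once, and when the silicon underneath changes, the same code keeps getting faster.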

So in general they have three avenues, maybe even four:

Add cores
Add ASICs
Continue decreasing the size and increasing the speed
Oh, and the fourth is our blessed operating system speed increases 🙏

One other thing to mention: IDK how many people have a Watch, a MacBook, and a HomePod or even an iPhone too, but when you say "Hey Siri", I get three devices trying to complete the request. Now I know this is nuts, but:

With Apple Silicon, they are making MORE DEVICES around you! And if they all start sharing the load, that helps in the long run too...

It's like the T2 chip in MacBooks: it's extra, but it does some of the lifting...

So imagine you get:

Glasses
AirPods
Watch
iPhone
MacBook
HomePod

That's SIX devices (in your ecosystem) to help you do whatever it is you gotta do...

I know TL;DR

Laters...
 
What's the future of computer chips... I'm not a chip engineer, but thinking from the perspective of a physicist:
The diameter of a silicon atom is about 0.2nm. I'm not sure how densely they are packed, but that leaves a maximum of about 25 atoms across current chip structures. Quantum tunneling effects are probably already quite noticeable in current 5nm processes. TSMC is reportedly working on 3nm and 2nm structures, but there may be a limit to shrinking sizes in this range; below 1nm sounds very tough to engineer.
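Just to sanity-check that estimate with the ~0.2nm figure above (the exact packing doesn't change the order of magnitude):

```swift
let atomDiameterNm = 0.2                        // rough silicon atom diameter from above

for featureNm in [5.0, 3.0, 2.0, 1.0] {
    let atomsAcross = featureNm / atomDiameterNm
    print("\(featureNm) nm  ≈ \(Int(atomsAcross)) atoms across")
}
// 5nm ≈ 25 atoms, 2nm ≈ 10 atoms: not much headroom before you are counting
// individual atoms (though the marketing node name is no longer a literal feature size).
```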

As someone mentioned before, heterogeneous computing is on the rise, and this is definitely something Apple will push further. Also, in this respect there might be (it would be interesting to hear a chip designer's comment on that) a possibility of applying optical processors to some subtasks of computing.

Quantum computing might be possible in research labs, but I don't know if we will ever see it in personal devices, and if so, it will take decades, so nothing related to ARM.
 
We've reached 5nm, and even the Arm Cortex-X1 is still on ARMv8.2-A instruction-wise. Are we already close to the limit of ARM? I mean sure, there's still 3nm and 2nm, but there's a literal physics limitation, no?

So what's next beyond ARM? It seems the ceiling would be reached quite fast.

Apple is always leading the pack when it comes to the instruction set version. I'm not sure how the Cortex being on v8.2 means the future of the instruction set is stagnant. The yearly updates to the instruction set are mandated by Apple's Arm contract anyway, so it makes sense that others would lag behind on them. Armv9 is coming, possibly next year.

Also, the 5nm marketing number is irrelevant and not indicative of the actual size of transistors. When the industry moved away from 2D (planar) transistors and started making 3D FinFETs, it adopted a new naming scheme in which the node number approximates the performance scaling that a comparable planar feature-size shrink would have delivered. Actual transistor feature sizes haven't changed as much since the move to FinFETs, but the transistors are built differently, allowing denser chips. From a physics perspective we aren't even close to reaching the limits of silicon. According to Jim Keller, a transistor today is on the order of 1000 x 1000 x 1000 atoms and could conceivably be shrunk to 10 x 10 x 10 atoms: roughly a 10,000x reduction in area, and a million-fold reduction in the number of atoms per transistor.
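The arithmetic behind that quote, for anyone who wants to check it (the 1000 and 10 figures are the rough numbers from the quote itself):

```swift
let atomsPerSideToday = 1_000.0    // rough current figure from the Keller quote
let atomsPerSideLimit = 10.0       // his conceivable lower bound

let linearShrink = atomsPerSideToday / atomsPerSideLimit    // 100x per dimension
let areaShrink   = linearShrink * linearShrink              // 10,000x in die footprint
let volumeShrink = areaShrink * linearShrink                // 1,000,000x in atoms per transistor

print(linearShrink, areaShrink, volumeShrink)               // 100.0 10000.0 1000000.0
```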

So the good news is that reports of current transistors literally hitting the barriers of physics at the upcoming 5nm, 4nm, 3nm, and 2nm nodes are not true. The bad news is that the economic cost of shrinking is a massive barrier compared to previous node shrinks. Costs are ballooning rapidly with each new node: it used to cost a few million dollars to build a cutting-edge fabrication plant in the 80s; now it costs a few billion. The market of companies making cutting-edge nodes has rapidly consolidated for this reason. It's now only Intel, TSMC, and Samsung, after GlobalFoundries dropped out of the cutting-edge race because R&D costs were no longer covered by revenue. They now only make older nodes like 14nm, which have their R&D mostly paid off and are profitable for them.
 
Jim Keller ain’t no circuit designer :)
 
We've reached 5nm, and even the Arm Cortex-X1 is still on ARMv8.2-A instruction-wise. Are we already close to the limit of ARM? I mean sure, there's still 3nm and 2nm, but there's a literal physics limitation, no?

So what's next beyond ARM? It seems the ceiling would be reached quite fast.
Loads of money + best engineers + time = surpassing any technical limit
 
There are obvious physical limitations to using silicon as the medium, but eventually, when we've squeezed every conceivable bit of life from it, there's always the possibility of moving to new substrates.

One possibility is graphene and carbon nanotubes, which have their own issues at the moment but are still being actively developed.
Another could be nanomagnetics, again something under active investigation.
Or there’s always the one we hear about most, quantum computing.

One thing is for sure: silicon will continue to be the only real choice, and the one pushed to its absolute limits, until a viable alternative is ready for mainstream use.
 
An Apple chip made by Apple, with a Rosetta 3 to keep Mac M1 applications compatible.
 
M1 is already an Apple chip fully designed and made by Apple...
One wonders how often this exact same point needs to be made. The M1 has nothing in common with Arm's own designs outside of the ISA, which also includes Apple-specific instructions.
 