I don't understand the fascination with fab tech in this community; it doesn't really mean anything to me at all. Just tell me the performance numbers in comparison to other processors, and the expected battery life. That's how I decide what to purchase.
 
This makes me excited about the future. Just got my 14-inch M3 MacBook Pro and I will most likely upgrade in 2026.
 
Introducing the M6 MacBook Pro with the M6 Pro, built on the 1.4nm A14 node, but not that A14.

Introducing our 1.4nm M6 chip, with 8 GB of RAM 😉
Wouldn’t that be M7 not M6
  • iPhone 13 Pro (2021): A15 Bionic (5nm, N5P) - M2
  • iPhone 15 Pro (2023): A17 Pro (3nm, N3B) - M3
  • iPhone 16 Pro (2024): "A18" (3nm, N3E) - “M4”
  • "iPhone 17 Pro" (2025): "A19" (2nm, N2) - “M5”
  • "iPhone 18 Pro" (2026): "A20" (2nm, N2P) - “M6”
  • "iPhone 19 Pro" (2027): "A21" (1.4nm, A14) - “M7”
Not that it really matters
 
No 0.5nm no buy! /s

This stuff is amazing. You really have to appreciate how tiny the features are that they're not only designing but actually manufacturing.
 
OK, thanks for all the snarky responses; now a serious question. If a silicon atom is 0.2 nanometers across, what is the absolute minimum thickness of a silicon chip? Is it 0.2 nanometers, or is there a physics reason why it needs to be at least two or more atoms thick? Answers from people who actually know the science only, please. Thanks.
 
So... how much smaller can it go before quantum mechanics puts an end to our shrinking shenanigans? 🤔 I read a few years ago that 4nm was supposed to be the limit. But now we've got 3nm chips and have started pushing for 1.4nm, yet quantum mechanics hasn't dropped the hammer.
 
Introducing the M7 MacBook Pro with the M7 Pro, built on the 1.4nm A14 node, but not that A14.

Of course if 1.4nm gets delayed the iPhone 19 Pro may use a 2nm N2PeePee

😀
 
7nm to 5nm = 28.6% decrease
5nm to 3nm = 40% decrease
3nm to 2nm = 33.3% decrease
2nm to 1.4nm = 30% decrease

Fairly consistent, but 3nm is the best node-over-node decrease. Really enjoying it in my M3 Max, which blasts through tasks quickly and then returns to being cool, quiet, and power efficient.
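The node-over-node percentages above are just linear decreases of the marketing node numbers. A quick sketch (Python, using the node figures from this thread) reproduces them:

```python
# Linear shrink between successive TSMC node names.
# Note: the "nm" figures are marketing labels, not measured feature sizes.
nodes = [("7nm", 7.0), ("5nm", 5.0), ("3nm", 3.0), ("2nm", 2.0), ("1.4nm (A14)", 1.4)]

def shrink_pct(old, new):
    """Percentage decrease going from the old node number to the new one."""
    return (old - new) / old * 100

for (old_name, old), (new_name, new) in zip(nodes, nodes[1:]):
    print(f"{old_name} -> {new_name}: {shrink_pct(old, new):.1f}% decrease")
```

Which matches the figures quoted: 28.6%, 40.0%, 33.3%, 30.0%.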

Still all these baby steps. Why do they bother with all these in between numbers? Why not just go straight to, say, 0.9nm or 0.5nm instead of mucking around with 1.2nm? They know it'll be coming in a few years anyway, why wait?

It's all relative to the node size, as the performance roughly follows the transistor density improvements and related power efficiency improvements. But there are a few big reasons:
  1. They can keep making steady improvements each year while keeping their margins up. These are businesses, not charities. Perhaps it would be different if there were more competition, but there isn't, so here we are. Even so, these node performance increases are still pretty good compared to much of the 2010s.
  2. We are almost at the end of Moore's Law regarding silicon. None of these silicon-based manufacturers are necessarily racing to pass that finish line first, because that means bad things for them if they don't have a replacement ready, such as graphene or a fundamentally new transistor design.
  3. I'm not 100% sure on this one, but I'm pretty sure they set up factories, or various lines within factories, well in advance. Each process change takes many years of development, from prototyping through Engineering Validation Testing, Design Validation Testing, and Production Validation Testing, and then ramping up for production. As each new process is drawn up and proven in a lab, it gets handed off to teams for implementation, with many phases before it's production-ready. So there is always a team working on the next step, which is probably why this 1.4nm stuff is leaking now: it's likely out of the lab, and a lot more people are learning about it as they prepare to put it into production and build the specialized equipment and tools for the new wafers.
Behind the scenes there is a lot of work being done to research post-silicon methods. I'm sure the major chip companies have invested a lot of time and money into the post-silicon world, and radically new types of computers are being developed at large companies, such as quantum computers, though those aren't great for general computing and currently require a lot of heavy-duty equipment for supercooling, etc.

For silicon, from 1.4nm down to some multiple of the width of a silicon atom (~0.2nm), they've got it figured out. But I don't think anyone has made a transistor smaller than 1nm yet, even in a lab (at least not publicly known), and to minimize quantum effects, it's theorized the gate width needs to stay comfortably above 0.2nm. Gate width is what I believe TSMC's node naming refers to, so the smallest is probably in the 0.4nm to 0.6nm range.

For that reason, here's what we could potentially see for future nodes below 2nm. It gets tricky, because I think the gate width needs to come in multiples of 0.2nm, the width of a silicon atom.
  • 1.4nm (2027, 30% decrease)
  • 1nm (2029, we have seen this in a lab, 28.6% decrease)
  • 0.6nm (2031, might be 0.8nm to stretch things out if needed, but that would only be 20% decrease, this is 40% decrease)
  • 0.4nm (2033, 33.3% decrease, this size might not be possible)
These are all percentages we've seen at the very beginning of my post (30%, 28.6%, 40%, 33.3%), so I think these are likely targets that stay within the range they've recently been hitting. This should get us through probably 2032 on A6+, maybe through 2034 if A4+ is possible, and I think by then it's pretty likely there will be a replacement for silicon ready.
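To make the atom-counting assumption concrete, here's a toy calculation that takes the post's ~0.2nm silicon-atom width at face value. Since node names are marketing labels rather than physical widths, this is illustrative only:

```python
# Toy estimate: how many ~0.2 nm silicon atoms would span a node's nominal width.
# Assumes, as the post does, that the node name maps to a physical width.
SI_ATOM_NM = 0.2  # approximate diameter of a silicon atom, in nanometers

def atoms_across(node_nm):
    """Nominal node width expressed as a count of silicon atoms."""
    return node_nm / SI_ATOM_NM

for node in (1.4, 1.0, 0.6, 0.4):
    print(f"{node} nm ~ {atoms_across(node):.0f} atoms wide")
```

On this (very rough) model, 0.4nm is only two atoms across, which is why the post flags it as possibly not achievable.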

I'm no expert, but I keep reasonably up to date about what is happening in these areas so if anyone has more detailed or recent information, please correct this post! I'm basically trying to ballpark about how many years we have left on silicon and I think it's in the 8-10 years range. Obviously silicon chips will exist far beyond that for simpler applications, but I'm talking about the cutting edge, high end chips only.
 
Any engineers in the house? I understand the need to make transistors smaller, so we can fit more on the die. But how small can we go before quantum tunneling takes over and the gate is no longer a barrier?
 
Oh no! My 2nm MBP that I was planning to buy in 2025 is already obsolete!
Hahaha… indeed, when one reads these announcements, the temptation is to wait indefinitely. But I think the refined P processes (N3P, N2P…) will have the best value overall. This is a wild guess, but I think once they have mastered a new node shrink, the second version lets them take fuller advantage of the technology.
 
I find it very exciting that Moore's law is back on track with Apple Silicon and ARM chipsets. It slowed in recent years, but it's very cool to see it come alive again.
 
It's a race between the law of diminishing returns and Moore's Law.
Moore's Law kinda cracks me up. It was never a physical law of any sort, but a paradigm for the advancement of Intel's chip tech: doubling the transistor count every 18 months would let them meet their targeted advancements in chip capability and financial return as a harmonious pairing. Then we get to the late '90s and Intel starts having doubts about continuing with that paradigm. By the early 2000s Intel had pretty well cast it by the wayside and started saying it no longer applied, but then AMD started giving them some real competition and they kicked Moore's Law back into effect.

These days, it really isn't applicable. New chip designs and shrinking fab tech cause transistor counts to rise and fall. There's an ebb and flow to the design process and there's a whole lot more to how well a chip performs or how powerful it is beyond simply counting transistors. But Intel likes to tout Moore's Law as if it's some grand achievement or universal constant. But whatever.
I think we have a couple more years at 3nm. The 2nm process won't even begin test sampling until late '25, which means 2nm chips in 2027 most likely. OTOH, 1.4nm could prove just as effective in continued testing, and they could skip 2nm altogether if they don't feel the return would justify investment in 2nm fab lines. Or initial samplings of 2nm and beyond could run into trouble and we could be stuck on 3nm until 2030. There are a lot of unknowns at this point. It also makes one wonder how much the industry and the US and allied governments will back TSMC on these advancements to keep ahead of China. There's a bit of a panic in some industry sectors at the moment, as it seems China has its own 5nm chip tech and is advancing fine on its own.
 
Based on this chart I’ve been getting the last phone made on the most improved process node. Seems to be working out so far. By that logic 16 will be the next upgrade from my 13 Pro I’m still perfectly happy with.
 
Introducing our 1.4nm M6 chip, with 8 GB of RAM 😉

So 8GB is now "insufficient"... next year 16GB will be insufficient, followed by 32GB. It will never stop as long as people conflate capacity with efficiency. My self-developed software platform does 100x the work in the same 2GB server configuration it ran in 20 years ago. That's because of a very strong focus on efficiency, the same efficiency Apple encourages by not going crazy with RAM specs on its devices. It's up to developers to do the work to operate efficiently, not be lazy because they have endless resources.

Curious, would you drive around with a trailer filled with gasoline just to have "more capacity"?
 
Any engineers in the house? I understand the need to make transistors smaller, so we can fit more on the die. But how small can we go before quantum tunneling takes over and the gate is no longer a barrier?
Roughly sub-20nm; it has been a concern for a while now.
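To put a rough number on that, here's a back-of-envelope WKB estimate of an electron tunneling through a rectangular barrier. The 3 eV barrier height is an assumed, purely illustrative figure, not a real gate-oxide spec:

```python
import math

# WKB estimate for a rectangular barrier: T ~ exp(-2 * kappa * d),
# where kappa = sqrt(2 * m * phi) / hbar is the decay constant inside the barrier.
HBAR = 1.0545718e-34   # reduced Planck constant, J*s
M_E = 9.1093837e-31    # electron mass, kg
EV = 1.6021766e-19     # joules per electron-volt

def tunneling_probability(width_nm, barrier_ev=3.0):
    """Approximate transmission probability through a rectangular barrier."""
    kappa = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR  # 1/m
    return math.exp(-2 * kappa * width_nm * 1e-9)

for d in (2.0, 1.0, 0.5, 0.2):
    print(f"{d:>4} nm barrier: T ~ {tunneling_probability(d):.1e}")
```

Even this crude model shows the probability climbing by many orders of magnitude as the barrier thins from ~1nm toward atomic widths, which is why leakage current becomes the dominant scaling concern.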
 
and 256GB SSD...

Oh God let's hope not 😭
By then, the minimum specs will be 12 or 16 GB with 512 GB of storage, but people (particularly those in forums like this) will complain the minimum specs should be 32 GB/2 TB.
 