IMHO, Apple should fund a dedicated production facility and staff for TSMC to do this. It seems like Apple is soaking up capacity at the expense of every other company that needs chips. When I see lead times of over a year for chips that used to be readily available, I get grouchy.
 
So the timeline is looking something like this:

2022 - A16 (4nm)
2022/2023 - M2 family chips (N5P)
2023 - A17 (3nm)
2023/2024 - M3 family chips (4nm)
2025 - A18 (2nm)
2025/2026 - M4 family chips (3nm)

In other words, if you own an iPhone 13 and already own an M1-based Mac, wait until the 2023 iPhone and the 2023/2024 Macs for the best bang-for-your-buck upgrade.
I don’t personally agree with that forecast.

I strongly believe that the M2 chips will either be 4nm (based on the A16) or even 3nm if this leak is accurate and we indeed get 3nm in 2023. I don’t think we will have an N5P-based M2, and the M3 isn’t gonna be 4nm.
 
I don’t personally agree with that forecast.

I strongly believe that the M2 chips will either be 4nm (based on the A16) or even 3nm if this leak is accurate and we indeed get 3nm in 2023. I don’t think we will have an N5P-based M2, and the M3 isn’t gonna be 4nm.
Leaks are conflicting right now. 4nm would be nice for the M2. I’ve seen more reports saying the M2 will be based on the A15, which means 5nm.

Pushing the M2 to 4nm and the M3 to 3nm would be ideal, but let’s see what happens. Chip shortages may disrupt the flow.
 
One thing to keep in mind is that these names are just marketing and not representative of actual dimensions. Different companies measure things differently. For example, Intel's "10nm" was actually about as dense as TSMC's N7 (7nm). That's why they renamed their nodes to be more in line with TSMC's (Intel 7, 4, 3, etc.).

TSMC themselves have begun to slide back, as 3nm got delayed to 2023. Funnily enough, Intel recently moved their 18A node forward to H2 2024, so there's a distinct possibility that Intel might be capable of fighting for Apple's contracts.

Presumably, Apple's lineup would go
2022 - A16 (N4), M2 (N4 or N5P)
2023 - A17 (N3), M2 pro/max/ultra (N4P)
2024 - A18 (N3E), M3 (N3E)
2025 - A19 (N2), M3 pro/max/ultra (N3E)
 
Process node "sizes" are kind of a joke right now.

Years ago, the number was meaningful. But now it refers to something like the smallest trace on the chip, not the overall process.
Pretending that Apple is somehow so far ahead of the rest of the industry is a joke. They aren't, won't be, and can't be.
TSMC is in a pretty good position. But they can't work miracles, and they can't overcome physics.
 
At this point ‘3nm’, ‘2nm’, etc. are basically arbitrary labels describing progress in transistor manufacturing processes. They have little to do with actual feature sizes, which can vary widely within a single microchip.

Also, process sizes are not comparable between different CPU manufacturers because they also serve as marketing tags for that particular manufacturer’s processes.

Don’t base expectations or calculations on the ‘Xnm’ label.
While you can't compare device sizes between manufacturers, that doesn't mean the metric is arbitrary, since device sizes within a manufacturer are related to average transistor density:

[Attached chart: average transistor density vs. stated device size, by manufacturer]
If we replot density vs. 1/(device size)^2 for Intel and TSMC, we see there is a break, so it is not linear across the entire range. But again, it's not meaningless either:


[Attached chart: transistor density vs. 1/(device size)^2 for Intel and TSMC]
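To put a number on why the labels can't be read literally (even if they still correlate with density within one vendor), here's a quick back-of-the-envelope sketch in Python. The N5 → N3 figure in the comment is TSMC's roughly stated logic-density guidance, quoted from memory, so treat it as approximate rather than definitive:

```python
# Back-of-the-envelope check: if "Xnm" were a literal linear dimension,
# moving from node `old_nm` to node `new_nm` should multiply transistor
# density by (old_nm / new_nm)^2.

def implied_density_gain(old_nm: float, new_nm: float) -> float:
    """Density multiplier implied by taking the node labels at face value."""
    return (old_nm / new_nm) ** 2

for old, new in [(7, 5), (5, 3), (3, 2)]:
    print(f"{old}nm -> {new}nm: labels imply ~{implied_density_gain(old, new):.1f}x density")

# Output: 7nm -> 5nm implies ~2.0x, 5nm -> 3nm implies ~2.8x, 3nm -> 2nm implies ~2.2x.
# TSMC's own guidance for N5 -> N3 is roughly a 1.6-1.7x logic density gain,
# well short of the ~2.8x the label arithmetic suggests -- hence "marketing names".
```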
 
I think after 1 nm what’s gonna happen is a lot of chip fusion. Think more M1 Ultra-like designs. What will likely need to happen is that more apps get better optimized for parallel computing to take advantage of it.
Because there are no more numbers after 1?
 
I think after 1 nm what’s gonna happen is a lot of chip fusion. Think more M1 Ultra-like designs. What will likely need to happen is that more apps get better optimized for parallel computing to take advantage of it.
We're already there. It's called multi-chip module (MCM) design, and AMD has been doing it for half a decade now, more or less. It comes with its own pitfalls in terms of data locality and latency, which is why I think we'll see more ASIC integration alongside it, meaning specialized fixed-function silicon like encoders and such.

I don’t personally agree with that forecast.

I strongly believe that the M2 chips will either be 4nm (based on the A16) or even 3nm if this leak is accurate and we indeed get 3nm in 2023. I don’t think we will have an N5P-based M2, and the M3 isn’t gonna be 4nm.
A16 == M2, and hence N4 (which is N5++) in 2022
A17 will be out of step, probably moving to N3, 2023
A18 == M3, possibly still on N3, maybe some N3 derivative. 2024
Everything after that is beyond wild speculation.
 
If anyone's interested about the future of chip design, check out this video.


Get it straight from one of the guys building the stuff...
Well, it was his job to motivate, so that’s what he’s trying to do.

What he had to do to create an image of computational power progression was to continuously reduce applicability. As you move to vector processing, to multicore vector processing, to GPUs, to NPUs, you can demonstrate nice progression in computing ”power” or operations per second. What you are also doing is reducing the applicability of the technology to ever more limited use cases.

There is a certain amount of co-evolution going on, of course - if computing power is only significantly growing in a certain small area of computing, software guys are going to see if they might be able to leverage that for at least a portion of their code. That’s better than seeing almost no progress at all, but not only does it suffer from the narrowing applicability of the underlying hardware resource, but also from the mathematical fact of Amdahl’s law.
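For reference, since Amdahl's law gets invoked a lot in these threads, here's a minimal Python sketch of what it implies; the parallel fractions and core counts are chosen purely for illustration:

```python
# Amdahl's law: if a fraction p of the work parallelizes perfectly and the
# rest stays sequential, n processors give a speedup of 1 / ((1 - p) + p / n).

def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.50, 0.90, 0.99):
    cases = ", ".join(f"{n} cores -> {amdahl_speedup(p, n):.1f}x" for n in (2, 8, 64))
    print(f"parallel fraction {p:.0%}: {cases}")

# Even with 90% of the work parallelized, 64 cores top out below 9x,
# because the sequential remainder dominates.
```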

Bottom line - we old-timers have already seen the rate of progress slow down tremendously, and there is nothing really indicating that this ”pushing your way up an exponential rise” won’t be the future as well. Smaller improvements, slower, at higher cost.

To what extent this constitutes a problem depends on where your interest in the industry lies.
 
What he had to do to create an image of computational power progression was to continuously reduce applicability. As you move to vector processing, to multicore vector processing, to GPUs, to NPUs, you can demonstrate nice progression in computing ”power” or operations per second. What you are also doing is reducing the applicability of the technology to ever more limited use cases.
That's spot on.

Considering this, Apple's architecture is actually a pretty impressive feat, because it goes toe to toe with much more power-hungry desktop chips on a sequential, single-thread basis. Sure, Apple has to do all sorts of tricks inside the CPU, including limited parallelism (out-of-order execution), but at least it works in practical scenarios, and not just benchmarks.

People kept dumping on Intel for not going beyond 4 cores on the desktop. Fact is: doing normal computing things, you will be hard-pressed to saturate even two of those. And - I feel I have to say this at this point - that has nothing to do with us programmers being lazy. You can't just "make" a problem of a sequential nature adhere to the rules of parallelism. That's like saying time travel is perfectly possible - you just have to break causality.
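To illustrate the "sequential by nature" point with a toy Python sketch (the functions here are made up purely for illustration, not anyone's real workload):

```python
from multiprocessing import Pool

# A loop-carried dependency: every step needs the previous result, so extra
# cores can't help, no matter how the work is scheduled.
def iterate(f, x0, steps):
    x = x0
    for _ in range(steps):
        x = f(x)        # step i can't start until step i-1 has finished
    return x

# By contrast, an element-wise map has no dependency between items and
# parallelizes trivially.
def square(v):
    return v * v

if __name__ == "__main__":
    print(iterate(lambda x: 3.9 * x * (1 - x), 0.2, 1_000))  # inherently serial
    with Pool() as pool:
        print(pool.map(square, range(8)))                     # embarrassingly parallel
```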

So if we look at actual general-purpose, single-thread CPU performance... Moore's Law, or what people believe it to be, has been dead for quite a while. Probably since the Pentium 4 era - which, interestingly, is right around the time Apple jumped on the Intel bandwagon because PPC wasn't going anywhere either.
 
That's spot on.

Considering this, Apple's architecture is actually a pretty impressive feat, because it goes toe to toe with much more power-hungry desktop chips on a sequential, single-thread basis. Sure, Apple has to do all sorts of tricks inside the CPU, including limited parallelism (out-of-order execution), but at least it works in practical scenarios, and not just benchmarks.

People kept dumping on Intel for not going beyond 4 cores on the desktop. Fact is: doing normal computing things, you will be hard-pressed to saturate even two of those. And - I feel I have to say this at this point - that has nothing to do with us programmers being lazy. You can't just "make" a problem of a sequential nature adhere to the rules of parallelism. That's like saying time travel is perfectly possible - you just have to break causality.

So if we look at actual general-purpose, single-thread CPU performance... Moore's Law, or what people believe it to be, has been dead for quite a while. Probably since the Pentium 4 era - which, interestingly, is right around the time Apple jumped on the Intel bandwagon because PPC wasn't going anywhere either.
For me, the Apple Silicon advantage has very little to do with peak performance, and everything to do with how efficient and smooth everything is running. This is precisely what makes Apple Apple. They didn’t just rely on chips getting faster to make better computers. They innovated on everything around the core. Which, precisely because Moore’s Law is dead, is where the real differentiation is now, if you look at the experience of using a computer, rather than performance numbers.
 
No, because it’s gonna be extremely difficult. Just like Intel went through its 14nm+++++ era for years, the industry will go through the same once it hits 2 nm and 1 nm.
I agree - I just wanted to be sure, because I had a discussion the other day with a guy who literally thought the ”nm” figures were actual measurements…
 
For me, the Apple Silicon advantage has very little to do with peak performance, and everything to do with how efficient and smooth everything is running. This is precisely what makes Apple Apple. They didn’t just rely on chips getting faster to make better computers. They innovated on everything around the core. Which, precisely because Moore’s Law is dead, is where the real differentiation is now, if you look at the experience of using a computer, rather than performance numbers.

I was reading about how much Apple's software optimization improved the speed of the older XR/XS phones in just the past year or two. Those phones are several years old now, and still completely viable simply because Apple keeps improving and tweaking the OS. In many cases those phones have actually gotten faster, thanks to the efficiency of the OS. This is exactly the reason why I switched to Apple after 10 years of Samsung.
 
For me, the Apple Silicon advantage has very little to do with peak performance, and everything to do with how efficient and smooth everything is running. This is precisely what makes Apple Apple. They didn’t just rely on chips getting faster to make better computers. They innovated on everything around the core. Which, precisely because Moore’s Law is dead, is where the real differentiation is now, if you look at the experience of using a computer, rather than performance numbers.
Well, in the end it does come down to raw performance. Although, and I do agree on that, it isn't just a single area where Apple currently has an advantage.

M1 uses a lot of the silicon budget for on-die cache - 24 megabytes total, for 4 performance cores. AMD has 32 megabytes for 8 cores - which also do SMT. The decoders of the performance cores are 8-wide, which, even if it's "just" a RISC architecture, is a lot of logic, and then there is the unheard-of 600+ instruction deep reorder buffer, which, again, even for a true RISC architecture, is outright insane.

Then, M1, especially Pro, Max and Ultra, uses incredibly expensive memory on a very wide bus at very low latency. Apple Silicon Macs, again especially the Pro, Max and Ultra variants, use very expensive, very fast flash storage with incredibly high IOPS, and something equivalent to PCIe4 x4 throughput (probably it's exactly that) on some custom controller hardware - which again isn't cheap.
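As a quick sanity check on that "PCIe4 x4 equivalent" figure, here's a rough Python calculation, assuming the standard published PCIe 4.0 numbers (16 GT/s per lane, 128b/130b line encoding):

```python
# Rough sanity check on the "PCIe 4.0 x4 equivalent" claim, using the standard
# published PCIe 4.0 figures: 16 GT/s per lane with 128b/130b line encoding.
lanes = 4
gt_per_second = 16.0        # gigatransfers per second, per lane
encoding = 128 / 130        # usable bits per transferred bit

gb_per_second = lanes * gt_per_second * encoding / 8   # bits -> bytes
print(f"Theoretical PCIe 4.0 x{lanes} bandwidth: ~{gb_per_second:.2f} GB/s")
# ~7.88 GB/s, which is in the same ballpark as the 7+ GB/s sequential reads
# reported for the fastest Apple Silicon SSD configurations.
```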

Finally: Apple is still a whole node ahead of everyone else - at least until AMD's Zen 4 chips become available, but by that time Apple will have moved to N4, so they are still at least half a node ahead. Apple pays top dollar at TSMC for that advantage. And I'm pretty sure the initial plan for the A16 and M2 was to go N3 this year, which would have meant, again, a full node ahead of everyone else.

All of this is very expensive. Apple does a lot of very expensive things to gain relatively small advantages which compound into quite an impressive package. Whether that justifies the prices Apple puts on SSD and RAM upgrades is very much up for debate. But Apple has pretty much done the opposite of what AMD and Intel have been doing for quite some time: rigorously focus on single-thread performance and high-level integration at the cost of scalability, modularity and, well, cost.

Bottom line: the M1 Macs are the most no-fs-given consumer computer design we've seen in quite a long while. Nothing here is this way or that way because Apple actually wants to sell server CPUs. And while Apple will very much make a very healthy profit on them, at least now you get something for your boutique prices. Because Apple clearly could sell a much less impressive package with much higher profit margins, and people would still buy it. So... without singing Tim Apple's praises too much... the kind of splash the new ARM Macs made is absolutely justified.
 
Well, in the end it does come down to raw performance. Although, and I do agree on that, it isn't just a single area where Apple currently has an advantage.

M1 uses a lot of the silicon budget for on-die cache - 24 megabytes total, for 4 performance cores. AMD has 32 megabytes for 8 cores - which also do SMT. The decoders of the performance cores are 8-wide, which, even if it's "just" a RISC architecture, is a lot of logic, and then there is the unheard-of 600+ instruction deep reorder buffer, which, again, even for a true RISC architecture, is outright insane.

Then, M1, especially Pro, Max and Ultra, uses incredibly expensive memory on a very wide bus at very low latency. Apple Silicon Macs, again especially the Pro, Max and Ultra variants, use very expensive, very fast flash storage with incredibly high IOPS, and something equivalent to PCIe4 x4 throughput (probably it's exactly that) on some custom controller hardware - which again isn't cheap.

Finally: Apple is still a whole node ahead of everyone else - at least until AMD's Zen 4 chips become available, but by that time Apple will have moved to N4, so they are still at least half a node ahead. Apple pays top dollar at TSMC for that advantage. And I'm pretty sure the initial plan for the A16 and M2 was to go N3 this year, which would have meant, again, a full node ahead of everyone else.

All of this is very expensive. Apple does a lot of very expensive things to gain relatively small advantages which compound into quite an impressive package. Whether that justifies the prices Apple puts on SSD and RAM upgrades is very much up for debate. But Apple has pretty much done the opposite of what AMD and Intel have been doing for quite some time: rigorously focus on single-thread performance and high-level integration at the cost of scalability, modularity and, well, cost.

Bottom line: the M1 Macs are the most no-fs-given consumer computer design we've seen in quite a long while. Nothing here is this way or that way because Apple actually wants to sell server CPUs. And while Apple will very much make a very healthy profit on them, at least now you get something for your boutique prices. Because Apple clearly could sell a much less impressive package with much higher profit margins, and people would still buy it. So... without singing Tim Apple's praises too much... the kind of splash the new ARM Macs made is absolutely justified.
And when you combine that with the vertical integration of hardware and software, you magnify the performance effect. One thing Apple was able to do was look at specific tasks the OS does a lot of and optimize those operations in silicon. An example is reference counting, which happens with every operation. Apple said that was an area where they made sure Apple Silicon was doing it as quickly and efficiently as possible. You don’t get that kind of integration with other hardware and software.
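For anyone unfamiliar with what "reference counting on every operation" actually means, here's a minimal Python illustration. Python's refcounting is only used as an analogy for the Swift/Objective-C retain/release traffic the post is describing, not Apple's implementation, and the Widget class is made up for the example:

```python
import sys

class Widget:        # hypothetical class, just to have an object to count
    pass

a = Widget()
print(sys.getrefcount(a))   # 2: the name 'a' plus the temporary reference
                            #    created by the getrefcount() call itself

b = a                       # "retain": another reference to the same object
print(sys.getrefcount(a))   # 3

del b                       # "release": the count drops again; the object is
print(sys.getrefcount(a))   # 2 -- freed only when the count reaches zero
```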
 