A major contributing problem with the 14nm -> 10nm transition was that Intel tried to make up lost ground by compressing more into a shorter timeline. 14nm was a bit late, so they stuffed an extra dose of density improvement into 10nm to catch up. Trying to do all of the catch-up in one big jump was a mistake.

I think folks are not fully appreciating that by shifting to angstrom-named nodes, Intel gets to take smaller increments. Going from 2nm to 1nm is a jump of 10 angstroms; going from 20A to 18A is just 2. So pulling a 2A process increment forward by 3-6 months is a different, smaller hurdle to jump over than pulling a 10A increment forward by 6 months.
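To put that arithmetic in one place, a trivial Python sketch (keeping in mind, as noted later in the thread, that these node names are marketing labels rather than measured dimensions):

```python
# 1 nm = 10 angstroms: restate the node-name gaps in angstroms to compare step sizes.
ANGSTROMS_PER_NM = 10

jump_nm_names = (2 - 1) * ANGSTROMS_PER_NM  # "2nm" -> "1nm": a 10 angstrom step
jump_a_names = 20 - 18                      # "20A" -> "18A": only a 2 angstrom step

print(jump_nm_names, jump_a_names)  # -> 10 2
```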

The bigger jump Intel is talking about is Intel 7 to Intel 4. Intel 4 to Intel 3 will likely be more a refinement than a huge leap.


Samsung has had problems getting gate-all-around (essentially what Intel calls RibbonFET) to work. TSMC hasn't been doing particularly better either; they have simply avoided the transition away from FinFET so far. So one issue is that Intel's two competitors can't necessarily extend their lead quickly either. There are two major inflection points coming (gate-all-around and High-NA EUV tools). What Intel mainly has to do is not shoot themselves in the foot and 'win' one step at a time.

However, if Samsung has a "Eureka" moment after diligently working away at the problem, then Intel will have an issue (one they don't control). Samsung could close the window for Qualcomm's interest in 20A.


Qualcomm has a variety of chips they need to make. They don't need to dump everything onto Intel 20A, but if 20A is tuned for what a subset of their products needs, it could be a better choice there. And Intel could lean on Intel 3 longer on the way to 18A for a broader set of their products.

Similarly, not all customers are interested in ultra-hyper density.

"... TSMC’s N3E node was explicitly designed to improve the process window to speed up time to yield, increase yields, boost performance, and lower power. The report says this would all be accomplished at the cost of lower transistor density compared to the original N3. Initially, TSMC expected to initiate high volume manufacturing (HVM) using N3E about a year after N3 HVM start, sometimes in Q3 2023. But since test production yields of N3E are already high, TSMC wants to start using it commercially earlier, sometimes in Q2 2023. ..."

TSMC seems to be forking off more supplemental variants of each generation now: N5, N5P, N4, N4P, N4HPC (maybe N4N(vidia) on Nvidia slides). There is a decent chance they will get bogged down a bit on that at N3. TSMC trying to make the widest set of variants makes them more vulnerable to a vendor who limits its focus to a more narrowly defined subset (to cherry-pick a subset of clients).

If the super-duper highest transistor density comes with wafer costs that are 2x TSMC N3 wafer costs, some customers are going to pass.
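A rough back-of-the-envelope in Python makes that trade-off concrete. All the figures below are hypothetical, purely for illustration; real wafer prices and yields are closely guarded:

```python
# Hypothetical cost-per-good-die comparison: baseline node vs. a denser node
# at 2x the wafer cost. None of these numbers are real quotes.
def cost_per_good_die(wafer_cost: float, dies_per_wafer: int, yield_rate: float) -> float:
    return wafer_cost / (dies_per_wafer * yield_rate)

baseline = cost_per_good_die(17_000, 600, 0.80)  # mature node, healthy yield
dense = cost_per_good_die(34_000, 960, 0.60)     # 2x wafer cost, 1.6x dies, early yield

print(f"baseline: ${baseline:.0f}/die  dense: ${dense:.0f}/die")
# -> baseline: $35/die  dense: $59/die
```

If the density gain doesn't outrun the wafer-cost and yield penalties, the "better" node is just a more expensive way to ship the same product.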


It wasn't laughable that AMD could catch up 5-6 years ago either. Again: narrow the focus, stop shooting yourself in the foot, and the odds aren't as bad. (And willy-nilly chucking out your fab business was one of those shoot-yourself-in-the-foot moves by AMD. A fab business that is completely addicted to 'lock-in' wafer order contracts isn't really competitive.) Asking the Intel foundry services business to be competitive would actually help the whole ecosystem of Intel products.

Intel isn't guaranteed a win, but not guaranteed a loss either. Few were picking the Cincinnati Bengals to be in the Super Bowl back in October 2021. The Bengals didn't win the Super Bowl, but just getting there was a very good outcome. (And if lucky bounces on 3 or 4 plays had gone the other way, they would have won.)




I think we will probably see less overlap between the processor package products from those five over the next 2-3 years rather than more (as opposed to everyone trying to match Intel's "everything for everybody" processor package approach).

That isn't necessarily going to bring more competition inside specific, narrow product segments (e.g., low-end laptop or extremely high-end workstation).

Consumers in the same product category will probably be paying just as much (if not more). The factor of some consumers being able to move to a lower tier and still get their needs met (e.g., desktop -> laptop with no performance problems) has been in play over the last 40+ years.
Alder Lake's mere existence has disciplined Zen 3 prices. The mighty 5950X has an MSRP of $799 (more than an entire base model Mac mini) and was sold out in much of late 2020 and 2021. When in stock during that period, it routinely sold for above MSRP. Microcenter sometimes had it on sale for $850 or $900, and Amazon and Newegg were even worse. Intel's Comet Lake and Rocket Lake couldn't compete with Zen 3, and AMD reigned supreme for a period of time.

Then Alder Lake came out and the 12900K defeated the mighty 5950X in many tasks (not all, but many, and it was convincingly stronger in gaming). And the 12900K was cheaper too. Just this weekend, I went to Microcenter and the mighty 5950X that once sold in that same store for over $850 was on sale for $519, a 39 percent discount. There's no way the price of Zen 3 would've declined so dramatically if Alder Lake weren't as strong as it is.

That's what I mean when I say competition will benefit us consumers. Strong parts from Intel, AMD, and eventually Qualcomm and Nvidia will discipline the prices they can competitively charge, so long as there aren't any major shortages or collusion among the brands. And if AMD and/or Intel has a stronger part in terms of raw performance and performance per watt than Apple, Apple will feel some competitive pressure too in terms of pricing. This means that Apple will have to innovate and execute in order to continue extracting the profits it is used to. And we will benefit. Apple has added back ports on its machines, fan noise is no longer a problem, and the M1 Max performs exceptionally well in a thin-and-light form factor while being extremely power efficient. Apple wasn't doing this 5 years ago. This is competition and innovation at play, and it is beautiful to see.
 
They may. Many things are possible.
The A16 may ship on N4, while M2 ships on N3.
Hell, we may even see that M2 ships on N4, but M2 Pro, Max (which need separate masks anyway) ship on N3.
Apple have done things in the past equivalent to these sorts of splits.
Unlikely. N4 and N3 are not design-compatible. Having derivative silicon on different nodes would mean they would have to essentially implement it twice. It can be done, but bringing a design to actual silicon maturity is a process that's both long and expensive, and a company will avoid doing it twice if it can.

And frankly, just having two years of architectural improvements (A14 to A15 alone was quite impressive) plus two node shrinks actually doesn't look too shabby. It won't be the M1 revolution all over again, but that was to be expected. The fact is that M1, 1 1/2 years after release, is still very much competitive, and still market-leading in mobile applications. And considering AMD is just arriving at 5nm and Intel 4 is still a year away (and we'll see how that performs, and whether Intel can deliver in volume), releasing an N4-fabbed, A16-based M2 might actually still not look too bad two years from now.

We don't really know what A16-generation silicon brings to the table. A15 has vastly better efficiency cores than A14, far better than an incremental update has any right to be, but the additional cache on the performance cores also shows. It's conceivable that the A15's engineering budget was very much skewed towards things that matter on mobile. If so, it's equally conceivable that A16 will again focus more on the performance side of things. And a denser process should yet again allow for more cache. (And as the 5800X3D demonstrates: cache solves almost anything.)

So, all things considered, M2 might have 20% over M1 for all we know, or 50%, or something in between. It's all speculation. But I have the feeling it will be right in line to stay competitive until 2024, which then might not even be N3, but some derivative node like N3P. For people that just got their Mac Studio (like I did) or their MBP 14/16, I'd recommend not bothering too much, but enjoying their device. It's good, really fast, and probably more than you need. M2 will not be a must-upgrade, and imho not even a should-upgrade, but a rather positive can-upgrade.
 
If so, it's equally conceivable that A16 will again focus more on the performance side of things. And a denser process should yet again allow for more cache. (And as the 5800X3D demonstrates: cache solves almost anything.)

Apple needs to couple extra cache with stronger cores. Even the impressive 5800X3D, although it might beat or tie Alder Lake in gaming, loses handily in content creation and productivity tasks, tasks in which additional cache doesn't net much, if any, gain.

What will be interesting to me is to see what accelerators, if any, Apple adds to its future SoCs. They added accelerators for ProRes, but what if they add hardware to accelerate other codecs like Canon's and others'? What if they add ray tracing cores? There's so much Apple can do because they don't use up their entire silicon budget on CPU cores; they add all kinds of other hardware like neural engines, ProRes engines, etc.
 
What will be interesting to me is to see what accelerators, if any, Apple adds to its future SoCs. They added accelerators for ProRes, but what if they add hardware to accelerate other codecs like Canon's and others'? What if they add ray tracing cores? There's so much Apple can do because they don't use up their entire silicon budget on CPU cores; they add all kinds of other hardware like neural engines, ProRes engines, etc.
That's kinda the conundrum with fixed-function hardware, is it not? You cannot use it for anything else. And supporting proprietary codecs from other companies is something I don't see implemented in the SoC. Canon, however, could implement their own add-in board, or even just a Thunderbolt box.

Another avenue would be to bring FPGAs to the Mac. Afaik the OG Afterburner card was an FPGA design. But if Apple could make those generally part of at least Pro Macs ... man, that would be something. Not as potent and efficient as ASIC solutions could ever be, but still a heck of a lot better than brute-forcing decoders and encoders on the GPU's or even CPU's general-purpose hardware.
 
Intel is going to name their processes 20A and 18A (equivalent to 2nm and 1.8nm). A stands for ångström. It's also worth mentioning that 5nm doesn't stand for anything in particular, and transistors are much bigger than that, so expect node shrinks to continue for a while longer.

Also going below 1nm is not the same as going sub-zero nm. :p

I keep seeing people state this about 5nm and point to Wikipedia, but I've not seen any chip manufacturer specifically state this.
 
I keep seeing people state this about 5nm and point to Wikipedia, but I've not seen any chip manufacturer specifically state this.
The companies themselves do well not to officially comment on their own nodes and how much of the naming is actually marketing. To be fair: explaining why this node is better than that one, and for what application, is hard even for people in the tech space, and would be impossible to communicate to the average customer. The number used to mean gate pitch, but since FinFETs all that got kinda wonky, especially because not all gates were created equal since then, and competing chip makers like TSMC and GlobalFoundries started spinning which Intel process they believed their process was equivalent or superior to, despite them being based on sometimes very different design concepts (tri-gate vs. FinFET vs. planar).

Customers shouldn't bother too much with metrics such as nanometers and gigahertz, and technically not even that much with cores and threads. Semiconductor products these days differ in so many dimensions that the best thing you can do is look at a few reviews, and then get the thing that does your stuff best for the money you are willing to pay.

The most important thing to know about N5 and its derivatives is... it still beats the living snot out of anything else, especially performance-per-watt wise. And while Apple's CPU architecture sure is amazing... half of what makes it so much faster than the competition is node advantage.
 
The first product to be introduced with the 3nm tech is.... an iPad!

The most overpowered and underutilized tech from Apple is still the 2017 iPad Pro - still not needing any upgrade for most needs :)

Maybe it'll be that dual iPad Pro/Mac device.
 
The first product to be introduced with the 3nm tech is.... an iPad!

The most overpowered and underutilized tech from Apple is still the 2017 iPad Pro - still not needing any upgrade for most needs :)

Maybe it'll be that dual iPad Pro/Mac device.
All M2s on 3nm makes sense. Maybe the X1 for the Mac Pro. Announced at WWDC, shipped by October (as per the “2-year transition”)
 
Can you imagine? The distance of 3nm is only 15 atoms of silicon...
Remarkably, it's even less than that: 11 atoms of silicon. While the nearest-neighbor distance in crystalline silicon is 0.235 nm, the average interatomic distance (which can be calculated from the density of the crystal and the atomic weight of silicon) is 0.272 nm. So 3 nm is about 11 atoms:

[(28.085 g/mol) / (2.329 g/cm^3 × 6.02214×10^23 /mol)]^(1/3) × 10^7 nm/cm = 0.272 nm

3 nm / 0.272 nm ≈ 11
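For anyone who wants to check that arithmetic, a quick Python sketch using the same three constants:

```python
# Average interatomic spacing of crystalline silicon from bulk properties.
DENSITY = 2.329        # g/cm^3
MOLAR_MASS = 28.085    # g/mol
AVOGADRO = 6.02214e23  # atoms/mol

volume_per_atom = MOLAR_MASS / (DENSITY * AVOGADRO)  # cm^3 per atom
spacing_nm = volume_per_atom ** (1 / 3) * 1e7        # cm -> nm

print(f"average spacing: {spacing_nm:.3f} nm")     # -> 0.272 nm
print(f"atoms across 3 nm: {3 / spacing_nm:.1f}")  # -> ~11
```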
 