An expectation from whom exactly?
Better performance, better graphics, faster RAM, and better battery life are all more significant than it looking better in Geekbench.
Not sure why *anyone* expected a major leap from M1 to M2.
Noticeable, major performance jumps between every generation of chip are just not something anyone should expect.
Right? The only thing the M2 has to be is “more performant than the M1”. And it will be. Will it destroy AMD/Intel? Nope, and it doesn’t matter as they’ll still be the fastest Macs in the world.
 
Since when has process node been the defining factor in iPhone performance?
Since they decided to post articles to suggest such a thing.

Apple can still release an improved chip under the existing fab process. And they will.

When 3nm comes around, it will be a bigger leap in performance.
 
Do people really think Apple’s plans have changed since March, when the Senior VP of Hardware Engineering said that the Ultra was the last M1? There are not going to be any new M1s. I have no trouble believing that the M2 is a minor update on the M1, but there won’t be any new M1s. I’m also very skeptical that Apple would introduce a new MacBook Air more than 18 months after the M1 MBA with the same SoC. Logic says that having an M2 with the A15 or A16 CPU and GPU cores is going to be a requirement from Apple’s marketing.
I don't understand the point of this article. The M2 was never going to be based on 3nm. That was never in the cards. So it was always going to be a small update: the same fab process with improved cores. This article just demonstrates how inept MacRumors writers are and how easily manipulated the readers are.
 
Lots of folks on the forums seem to want to skip over N4. MediaTek isn't. Qualcomm isn't. They are placing large orders for it. But somehow Apple "has to" skip it (for some reason it can only pick a 'P' version: it either has to be N5P or N4P).

The latest news reports I found (last few months) said that MediaTek and Qualcomm are seeing defect rates between 30% (TSMC) and 60% (Samsung) on their SoCs using these latest processes.

Apple is not going to accept a 30% defect rate on A16 - and that presumes A16 has a similar transistor density and complexity to Snapdragon 801. If it is more complex, then the defect rate could be even higher if they switched to that process.
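Just to put rough numbers on that: here is a minimal sketch using the textbook Poisson yield model (Y = exp(-A x D0)), with made-up die areas and defect density; none of these are actual A16 or Snapdragon figures, it just shows why a bigger/more complex die sees a higher defect rate at the same process defect density.

```python
import math

def poisson_yield(die_area_mm2: float, defect_density_per_cm2: float) -> float:
    """Fraction of good dies under the Poisson yield model Y = exp(-A * D0)."""
    area_cm2 = die_area_mm2 / 100.0  # convert mm^2 -> cm^2
    return math.exp(-area_cm2 * defect_density_per_cm2)

# Hypothetical numbers for illustration only (not real A16/Snapdragon data).
d0 = 0.35  # assumed defects per cm^2
for name, area in [("smaller SoC", 100), ("larger SoC", 130)]:
    y = poisson_yield(area, d0)
    print(f"{name}: {area} mm^2 -> yield {y:.0%}, defect rate {1 - y:.0%}")
```

With those assumed numbers, the smaller die lands around a 30% defect rate and the larger one closer to 37%, even though the process itself didn't change.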
 
The latest news reports I found (last few months) said that MediaTek and Qualcomm are seeing defect rates between 30% (TSMC) and 60% (Samsung) on their SoCs using these latest processes.


"news reports". There are more rumors sites on this an hard news sources.

At what stage of the rollout did they measure TSMC N4? If TSMC's 'at risk' N4 production was doing 30% and Samsung's full production was doing 60% defect rates, then that is a "slam dunk" case for dumping Samsung. It is more than highly dubious that Qualcomm would switch a production run over to TSMC less than halfway through the major part of the lifecycle without first doing some small-to-medium scale runs (at least a couple of hundred wafers). In late '21 N4 was in 'at risk' production (it started in Q3 '21, so Qualcomm could do a hasty port in Q2-Q4 and catch the tail end of the 'at risk' segment). Qualcomm does those runs and is already getting better results than Samsung's full production (which has another quarter or two to find a fix for its problems).

However, that doesn't mean that N4 is going to have a 30% defect rate in July-September of '22 (which is more the timeframe when the Snapdragon 8 Plus Gen 1 will ship (Q3 '22)).


[Image: defect density trend chart from TSMC's "Advanced Technology Leadership" presentation (2020)]



On the diagram, N7+ starts off with a lower defect rate than N7. And N6 starts off low also.

TSMC's pre-HVM (high volume manufacturing) defect rate is much higher than when the node gets to HVM. And even after the initial HVM start it keeps dropping. Perhaps there's a bit of selective hand-waving, but N6 (with an 18% area shrink. Not 8%, but 18%) doesn't show any wild increase in defect rates when it starts off. Even if N6's initial 'at risk' rate was incrementally higher than N7+'s starting point, it would still be below where N5's HVM started off.

So N4, a mild optimization of N5 (similar to N6's relation to N7), shoots off into the 30% zone for HVM defect rates? Errr, probably not. If it did, then N3 is likely very screwed (because it is an even bigger process 'formula' change). Nor could Qualcomm afford to wait until the Q1 '22 port to order up large wafer starts of N4 for the 2H of '22. These days nobody can just walk up to TSMC at the last minute and order up thousands of leading-edge wafers.

So there is a pretty good chance those "February '22" rumor leaks about TSMC vs. Samsung defect rates were from wafer runs done in 2021 at both shops. They caught early N4 before it got to the HVM stage.


Apple is not going to accept a 30% defect rate on A16 - and that presumes A16 has a similar transistor density and complexity to Snapdragon 801. If it is more complex, then the defect rate could be even higher if they switched to that process.

It isn't the 801. The naming scheme has changed: it is just plain 8, then 'Gen', then the generation number. They have thrown in a 'Plus' for this fab switch: Snapdragon 8 Plus Gen 1. Next year it will be the SD 8 Gen 2. The 801 was released back in 2014; its transistor density is 28nm-era, many Moore's law iterations ago.


Apple didn't have to ship the A16 in the 1H of '22, when N4 was closer to 'at risk' defect rates, so that doesn't really matter. Apple doesn't have to start ramping A16 production until the May-June timeframe.

Also, if Qualcomm, MediaTek, and Nvidia are all ramping on TSMC N4 in 2H '22, then limiting the A16 orders to just the iPhone Pro makes more sense. To some extent Apple may not have the option of doing the rest of the iPhone lineup on N4 because there are not enough wafer starts to go around. If the A16 is initially only going into a more expensive phone, then 30% probably wouldn't be that bad. (Pretty doubtful that it would stay at 30% long term, but Apple could have started the ramp in April if necessary.)
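A back-of-the-envelope sketch of the wafer math (assuming a hypothetical ~110 mm^2 phone SoC on a standard 300 mm wafer and the commonly used gross-die approximation; the die size and defect rates are illustrative, not actual A16 numbers):

```python
import math

WAFER_DIAMETER_MM = 300.0  # standard 300 mm wafer

def dies_per_wafer(die_area_mm2: float) -> int:
    """Common gross-die approximation: pi*d^2/(4A) - pi*d/sqrt(2A)."""
    d = WAFER_DIAMETER_MM
    return int(math.pi * d**2 / (4 * die_area_mm2)
               - math.pi * d / math.sqrt(2 * die_area_mm2))

die_area = 110.0  # hypothetical phone-SoC die size in mm^2
gross = dies_per_wafer(die_area)
for defect_rate in (0.30, 0.10):
    good = gross * (1 - defect_rate)
    print(f"defect rate {defect_rate:.0%}: ~{good:.0f} good dies per wafer (of {gross} gross)")
```

At Pro-only volumes the gap between those two numbers is tolerable; trying to feed the whole iPhone lineup at the higher defect rate is where the wafer-start shortfall would bite.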

Apple sticking with N5P would be about "saving a buck" for some reason (e.g., sharing the node with some other Apple product, not getting the 'volume' discount they wanted, etc.) plus discounts wrangled out of TSMC. 'At risk' production for N5P started back in Q4 '20; volume production started in 1H '21. Why would Apple be going with a year-old process node in 2H of '22? The M2 also: why roll it out on roughly year-old process tech? [Unless it was supposed to ship in Feb-March '22 and is backlogged.]



It isn't because N4 is relatively 'bad' once it's a quarter or so into HVM production. N4 is meant to be a relatively easy "hop" from N5 (or N5P). Apple not taking it is more indicative of some problematic issue with Apple than with TSMC. N3 and N4P aren't ready in time for the A16.

Perhaps there is a corner case where Apple has extra-early tape-out access to N4P so it can start early, but it's doubtful that was the plan 2-4 years ago. N4 would have met the late-1H '22 start time. N4 has a good chance of being a longer-lived target, so it makes sense for an A16 on N4 to trickle down into stuff like the Apple TV, entry iPad, etc. As I've said previously, going to N4 would soak up the die bloat, which over the long term has a decent chance of making it more affordable to produce. The lifecycle of the SoC is across the whole set of Apple products they'll use it in, not just the initial iPhone Pro product. (An iPhone 14 Pro with better cameras, video recording, better AI/ML, etc. will sell. It isn't necessarily about single-thread Geekbench porn scores.)


P.S. So the progression for Apple would go:


N5 : A14 , M1
N5P : A15
N4 : A16 , M2 (subset)
N3 : A17 , M3 (probably start with the very large, expensive dies and work down to the small ones over time; cover a refreshed M2 subset last)

Those are all the nodes that are "in HVM" in the May-June timeframe over 2020-2023. The M-series is only picking up transistor-density increases and skipping anything that is just a chance to bump clocks (or no bump and just save power).
[Skipping non-transistor-density increases is the same thing the iPad Pro A-series "X" sequence did over the last couple of iterations. The bigger the die, the longer the wait for a "big enough to be interesting" density increase. It is also a higher-priced product moving at relatively much lower volume, so there are fewer buyers to spread yearly update costs over.]


P.P.S. I am not seeing much in the "news" about MediaTek's yields. There seems to be a presumption here that MediaTek's yields have to be the same as Qualcomm's yields. Probably not. MediaTek did not do a hasty port (the SD 8+ Gen 1 probably isn't highly optimized; Qualcomm just wanted off of Samsung quickly). There's also a pretty good chance the die sizes are different. Not hugely, but also not the same.
 


I of course acknowledge that the TSMC 16nm A10 is bigger than the TSMC 16nm A9. I'm pointing out that the A10 had a more efficient die layout as per Chipworks' analysis. Essentially Apple put more effort laying out the A10 such as placing the blocks closer together to reduce dead space. Apple also made better use of more efficient track libraries when laying out the chip to save space. Chipworks estimates that if the A10 were laid out in the looser style of the A9, it would have been ~150mm^2 instead of the 125mm^2 it ended up being. I don't have the expertise nor access to the full paid report for the details to verify Chipworks' analysis, so I'm relying on their expert opinion provided in their summary unless you can provide evidence to contradict Chipworks.

Re-jiggering the 'floorplan' got Apple die space savings, but that doesn't necessarily turn into substantive performance gains, especially if they started off with a decent floorplan in the first place. They didn't end up at 150mm^2 because they were more focused on doing just one floorplan for TSMC, not a design ported across two fabs. The TSMC A9 was 'looser' in part because Samsung's 14nm was denser (A9 Samsung 96mm^2, A9 TSMC 104mm^2 ... about 8% bigger). The subcomponents could sit closer together on Samsung; they could not on TSMC, so Apple probably had to put in some gaps to use the same general floorplan.
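For reference, a quick check of the die-area ratios being discussed, using the figures quoted above (published teardown numbers vary by a fraction of a mm^2, but the ratios are what matter):

```python
# Die-area figures as quoted in this thread (Chipworks/teardown estimates).
a9_samsung, a9_tsmc = 96.0, 104.0     # mm^2
a10_actual, a10_loose = 125.0, 150.0  # mm^2, actual vs. estimated "loose" A9-style layout

print(f"A9 TSMC vs Samsung: {(a9_tsmc / a9_samsung - 1):.1%} larger")
print(f"A10 layout savings: {(1 - a10_actual / a10_loose):.1%} smaller than the loose-layout estimate")
```

So the TSMC A9 is ~8% larger than the Samsung A9, and Chipworks' estimate amounts to the A10 layout saving roughly 17% of die area versus the looser style.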

If floorplanning could get significant bumps in performance, you would have thought Intel would have done the "magical" floorplan shift when they were stuck on 14nm. That didn't save them. And it didn't make sense for Apple to continue to dual-source the A-series.

A better floorplan can make a small contribution to performance. Using less power generally (some shorter wires) helps free up power budget for peak sprints through some critical core code sections. Slightly better latencies in some cases.


As for whether Apple has ever stuck with the same general process node for 3 generations before, according to the Chipworks analysis of the A10, they actually have and the A10 is the shipping product. Reportedly, the 20nm planar process used in the A8 is technically the same general process generation as the 16nm FinFET process used in the A9 and A10. The 16nm FinFET process has similar gate densities as a 20nm planar process while the addition of FinFET improved performance. "16nm" is primarily a marketing term to indicate you get the performance improvement of a shrink vs 20nm planar without actual physical shrinkage, very much like the difference between N5 and N5P. The SemiWiki article below also provides an explanation.


From the linked article

"... One team has chosen to define the performance of their FinFET as a “half node” improvement (e.g., 20nm ->16nm), whereas the other has chosen to represent the performance of their FinFET as equivalent to a “full-node shrink” (20nm -> 14nm). There will be slightly different fin_height, fin_thickness, and fin_pitch parameters between the two processes but the circuit density is really still the same as 20nm ..."

How is it the "same as 20nm" when it is characterized as a "half-node improvement"? The density is about the same; the effective holistic performance is not. Density isn't everything. It is usually very helpful, but it isn't the whole story, especially when trying to deliver the highest performance/watt (and trying to minimize leakage and other adverse impacts).

So no, Apple really hasn't squatted on the same process node for three cycles before. Going to FinFET made a substantive difference (just like adding 3-4 more layers of EUV to the process would: N5 -> N4. Same general process node family in terms of design, but a substantive difference in effective delivered performance).


I do agree Apple should make use of the N4 process since even minor density improvements are valuable even if there isn't much performance or power improvement compared to N5P. I was just trying to point out that it's possible to improve performance/watt without relying on a process shrink and not trying to diminish the importance of smaller processes.

For one iteration? Yes. When moving at a forced, relatively quick pace to do iterative designs and get them out the door on a yearly basis, there is typically stuff there wasn't time to fully optimize. But three years in a row? Unless they started off with something pretty suboptimal in the first place, there usually isn't much left after the first round of corrections on an already decently good design. (Even Intel went 10nm (Cannon Lake) -> 10nm (partially refactored) Ice Lake -> 10nm SuperFin, and by the end it was a substantively different node. It isn't a full- or half-node increase, but it also really isn't the "same tech". Intel dialed back the density, but also changed other substantive parameters.)



The N4 process probably costs more, but that doesn't seem to be an impediment for Apple.

Well, they all cost more now that TSMC has rolled out price increases. Unless there was some super-long-term contract that Apple got to permanently 'hide' inside of, prices have gone up for all new wafer-start batches.

"... Meanwhile, since N4 further extends usage of EUV lithography tools, it also reduces mask counts, process steps, risks, and costs. ... "

N4 is like N6 (relative to N7) in that it is supposed to be tweaked so that it is easier to produce cost-effectively. It is mainly a process-improvement feedback cycle for those who don't want to be on the bleeding edge. It is not supposed to cost more (so charts from 3-4 years ago would not have had a 'price hike' attached to it as a prominent feature; it would have been the process TSMC was trying to sell price/risk-sensitive folks on back then).

If N4P or N3 were projected to be available around the same time as N4, then yeah, Apple would have skipped it ("We have enough money that we can minimize the risk"). But Apple passing up a "save money" opportunity when there aren't other risk-adjusted options to take? Probably not.

Apple beats on their contractors for cost savings all the time (e.g., the iPad-to-OLED transition stalled in part because nobody wanted to build the higher-cost, double-stacked panel at the prices Apple wanted to pay).
Apple doesn't look for quality to drop to get to lower costs; they just want the contractors to take the margin hit instead of them.


As for why Kuo thinks Apple will stick with N5P vs switching to N4, I guess he believes Apple's greater experience with a mature N5P process provides more opportunities for optimization that can overcome N4's minor density improvement and result in a better overall chip. I don't have the knowledge to judge that.

N5P was in 'at risk' production back in Q4 '20. It went into volume production in '21. There's a decent chance the A15 is N5P; it was already 'speed bumped'. The notion that there is lots of unoptimized, "low hanging fruit" left there is probably not true.

If Apple splits the orders between an iPhone 14 with the 'full' A15 and an iPhone 14 Pro with the 'full' A16, then in some sense Kuo would be right about sticking with N5P, because a bit over half the new phones would be on the non-binned A15. But with volume lower on the higher-priced iPhone Pro "half" of the orders, it only really makes sense to keep the Pro back on N5P if there is some volume-discount reason that Apple would "lose" out on by splitting the orders that way.

Doing a 'new' chip on a process that first went into production back in '20 would be odd now, unless the primary reason was to save money (either they paid way in advance for wafer starts, or there's some kind of 'volume' discount because they had a load of other stuff to do on N5P).

For the A16/M2, I'm not seeing why Apple would have composed a plan 3-4 years ago to be squatting on N5P. Putting the iPhone on a node that is over a year old, when your competitors probably were not going to do that, would make sense how? Apple missing a new TSMC node by 6-9 months (2-3 quarters) because the TSMC and Apple product schedules don't sync up ... sure. (iPhones go every 12 months, Moore's law every 18-24 months; missed syncs are going to happen over time.) But skipping for more than 12 months? That would be odd.
 