Doesn’t it take a lot of time and planning, as well as negotiating and securing capacity?… like way more than a year?

Meaning if they haven’t decided by now it would be much too late for 2023 production? Aren’t roadmaps for this stuff done several years in advance?
Hard to believe a decision like that is made in a few months’ timeframe. Any 3nm-based devices that would ship in 2023 should already be at a late stage of development/testing. From a planning perspective this sounds nuts.
Yeah, this was very likely settled a while back, but Digitimes has to continually release information to maintain relevancy. It’s to their benefit if they can convince folks that years-long processes actually happen in far shorter timeframes. That way, if they’re wrong, they can say “well, they changed their minds last week and decided not to do the thing.”
 
This seems to be a rather dubious, non-contextual take on the situation. MediaTek and Qualcomm just released TSMC N4 SoCs in late 2022:

Qualcomm Snapdragon 8 gen 2

MediaTek Dimensity 9200.


While these two have 'released' them, there is no full-blown mass distribution yet. Q1-Q2 customers will consume these through the product pipeline as the phone rollouts happen.

It was Apple who was growing the die bigger on the move to N4. (Apple has the die-bloat issue.)


The other, bigger issue for Qualcomm and MediaTek is that they have integrated modems. N3 (and future nodes) doesn't necessarily offer many benefits at all to the analog subsection of a modem. The wafers being more expensive makes that problem worse, but there is another whole layer to the 'problem' at the foundation. Pragmatically, Qualcomm and MediaTek make a substantively different kind of phone SoC than Apple does. In this context, having a discrete modem is actually somewhat helpful.


It is somewhat doubtful that Qualcomm and MediaTek were heavily invested in the initial N3 (now N3B) at all anyway. N3E was scheduled a while back to come out about a year after N3. So that would have been Q3-Q4 2023. If pushed super hard maybe they could start in early Q3, but if folks are still 'digesting' the N4 models ... who is really going to buy it? TSMC reporting has moved N3E up to Q2-Q3, but who can move a whole phone rollout earlier by a whole quarter? Apple doesn't. Things go later... but forward a whole quarter? Eh, probably not.


Qualcomm and MediaTek sliding an N3E part into Q1 2024 would not be a huge debacle. That gets them out of the "end of 2023" window. Sure, Apple will have some kind of 'hype train' they can run for an extra quarter, but that isn't going to be all that bad. And frankly, this might give the SoC/phone vendors a chance to catch up on getting the latest Android out (Google drops a new Android and it takes how many quarters to show up on phones? Or OS upgrades for last year's phone after how many quarters?)

Qualcomm and MediaTek could do with a quarter or two off the hype train and more time on the engineering train getting stuff done right.

Rumors are that N3E wafers are incrementally cheaper. And rushing into FinFlex is probably a double-edged sword. And if the die is "too monolithic", N3 presents problems that need to be solved.
[ Apple's M2 Extreme appears to have gotten chewed up in the initial rush to N3 ... it isn't an 'easy as falling off a log' node transition. ]

You’ve explained in terms Apple might understand (quarters) why everyone needs to just take 2023 to refine what we already have. I’ve been saying this for a while in terms of software, but it seems hardware as well.

Apple has been ahead for a few years but it seems they’re slowing down now as well. But it’s slowing down on the Android side too.
 
I hate it when writers say “One of the only.” Either they’re the only, or they’re not. If they’re not, the entire article can be scrapped.
 
As TSMC pointed out, the N3 node did not offer density improvements in on-chip cache over the prior refined N5 (N4), which is a big deal for what should be a really big die shrink - not to mention the increase in prices for the N3 node. It's good to use it, as Apple will, but there are a lot of reasons to just skip it if you can and hope things will be better on a follow-on node.
 
iPhone 6 bending
Agree that Apple makes mistakes, but that bending of the iPhone 6 was a bit manufactured to me. I guess the bigger iPhone 6 Plus was more prone to it, but I know of many iPhone 6 units still in use, I had one myself for a long time, and I didn’t see any problems. Maybe if you deliberately tried to bend it, but why would you do that? :)
 
Expecting to see better battery life with A17 on iPhone 15 Pro models. Competition will be far behind.
Identical performance to the A16 with about 50% more battery life would be the best possible outcome of the 3nm process.

Heck I'd even say the same performance as A15 would be acceptable if it meant 50% more battery life.
 
Doesn’t it take a lot of time and planning, as well as negotiating and securing capacity?… like way more than a year?

Meaning if they haven’t decided by now it would be much too late for 2023 production? Aren’t roadmaps for this stuff done several years in advance?

Qualcomm and MediaTek be like



Whammy "Big Bucks, Big Bucks ... oooohhhh nooooo!"
 
As TSMC pointed out, the N3 node did not offer density improvements in on-chip cache over the prior refined N5 (N4), which is a big deal for what should be a really big die shrink - not to mention the increase in prices for the N3 node. It's good to use it, as Apple will, but there are a lot of reasons to just skip it if you can and hope things will be better on a follow-on node.

Errr. N3E SRAM density is actually worse than N3's. It backslides to almost exactly N5's. In contrast, N3B (N3) has a small improvement in density (but the trade-off is that manufacturing time/complexity is up). The later-stage N3 variants and early N2 aren't going to fix it.

There are alternatives, but not really coming until the 2025-26 timeframe (if everything goes exactly right, maybe late 2024, which is likely too late for any A- or M-series iterations in 2024). This isn't one of those "they are going to fix it real soon now" situations. [ If there is a fabrication-biz bust after the boom, everything going "exactly right" isn't particularly likely. ]

The analog circuits and memory/*RAM are not going to realign to the same levels that logic is going to progress to over the same interval. We might get back to some 'forward' progress on *RAM, but there is likely to be a permanent gap going forward between the progressions. More disaggregated (non-monolithic) RAM hierarchies will likely show up.

And there's a decent chance that when the alternatives do come ... they are not going to make wafers cheaper to process and/or dies cheaper to package. More and more price-sensitive chip progressions are going to fall off of chasing the bleeding edge.

Cache scaling slowing down won't necessarily kill off individual processor improvements. It likely puts a larger damper on the 'homogeneous core count wars', where increased core count is the most hyped marketing feature. If there's not enough cache growth (and die size fixed, or pressure to make it smaller), it gets harder to keep more 'data hungry' cores fed with data.
 
Apple is in control of the design.
Yeah, and I design my pizza from Dominos and Papa John's.
Design doesn't really matter, as it's primarily an economics game of price-yield trade-off rather than a technical one.
It's like saying I asked Dominos to put in Pepperonis on my large pizza, so I should get most of the credit for "designing" the pizza. LOL...

The manufacturing process is the most important and the most intellectually difficult part of chip development. Design is child's play and a rather insignificant part. Anyone can design a chip. It's not difficult asking your manager what their goals are and what your budget is. Back to my analogy of pizzas, your mom might give you $25 for a pizza, but a $25 pizza will get you a large pie with only 2 toppings.

Given there are multiple toppings you can add, you're limited by the 2-topping limit.
Dominos might improve their pizza-making process and can offer a $25 pizza with 3 toppings.
You can now add the 2 toppings you wanted and 1 extra topping, making your pizza better. In this analogy, Apple is the boy with the mom while a fab like TSMC is Dominos. Who should receive credit for improving the pizza? Sure, Apple ultimately made the decision to pick 3 toppings, but anyone could have done that.
 
Yeah, and I design my pizza from Dominos and Papa John's.
Design doesn't really matter, as it's primarily an economics game of price-yield trade-off rather than a technical one. [...]
The PowerPC G5 analogy is still bad.
 
Yeah, and I design my pizza from Dominos and Papa John's.
Design doesn't really matter, as it's primarily an economics game of price-yield trade-off rather than a technical one. [...]
Saying design is irrelevant is absurd when Apple’s “irrelevant design” has them far ahead of anyone else in performance and efficiency, and it isn’t even close.

It matters. Big time. To think otherwise is just intellectually dishonest or a total lack of understanding. You can pick which one.

I haven’t even owned an iPhone in 5 years and I know how bad a take (and absurd analogy) this was.

I’m still in awe of you going on for paragraphs (straight-faced probably) about ordering a pizza and conflating it with SoC design.

Wow.
 
Like when they made the PowerPC G5 chip for the Powerbook?

Oh, wait...
No, like when Apple made the PowerPC G5 chip in the Power Mac G5 and led the industry for 9 months with 'The Fastest Computer Ever', which pissed off the competition so much that Dell complained the moment they finally got their hands on the updated Intel P4/Xeon to catch up:

Apple releases the world's fastest computer - The Power Mac G5
June 23, 2003
“The 64-bit revolution has begun and the personal computer will never be the same again,” said Steve Jobs, Apple’s CEO. “The new Power Mac G5 combines the world’s first 64-bit desktop processor, the industry’s first 1 GHz front-side bus, and up to 8GB of memory to beat the fastest Pentium 4 and dual Xeon-based systems in industry-standard benchmarks and real-world professional applications.”

Delivering the industry’s highest system bandwidth, the Power Mac G5 line offers dual 2.0 GHz PowerPC G5 processors, each with an independent 1 GHz front-side bus, for an astounding 16 GBps of bandwidth. The line also features the industry’s highest bandwidth memory (400 MHz 128-bit DDR SDRAM with throughput up to 6.4 GBps); the industry’s fastest PCI interface available on a desktop (133 MHz PCI-X); and cutting-edge AGP 8X Pro graphics capabilities, all within a stunning new professional aluminum enclosure featuring innovative computer-controlled cooling for quiet operation.


Apple G5 Fastest Desktop Ever
The G5 is 10 percent slower than the P4 and Xeon in SPEC int scores in single-proc units, but 20 percent faster in FPU scores, and the dual-proc G5 beats the dual-proc Xeon in all SPEC scores.


Apple told to halt 'world's fastest' claims for G5
March 26, 2004 12:46 p.m. PT (Exactly 9 months later)

The Council of Better Business Bureaus has recommended that Apple Computer discontinue comparative performance claims regarding its Power Mac G5 desktop.
Acting on a tip from Apple rival Dell, the council's National Advertising Division (NAD) "determined that the evidence provided by Apple did not provide a reasonable basis for its broad unqualified claims that its Power Mac G5 is 'the world's fastest, most powerful personal computer' and that it 'edged out the competition on integer.'"

In a statement Thursday, NAD also said it took issue with Apple's claim regarding the computer's 64-bit processor. The "advertiser's claim, 'the world's first 64-bit processor for personal computers,' could reasonably be interpreted to apply to workstations, in the context in which it was presented." This claim was unsupported by evidence, according to NAD. The organization said that although the advertisement had run its course, it recommended that Apple "modify this claim to effectively limit it to personal computers."

According to NAD, Apple said in a statement that its ad campaign has already run its course and that it "will be mindful of NAD's views in its future advertising." The company was not immediately available for further comment on the issue.

A Dell representative said in an e-mail: We "notified NAD because we felt there were some inaccuracies in Apple's advertisement and wanted to act on behalf of consumers in the marketplace who deserve accurate information on which to base their purchase decisions...Essentially, we felt that clarity in the marketplace benefits consumers, and NAD agreed."

It took Dell 9 months! That's because Apple's Power Mac G5 was indeed used as both a workstation and a personal computer, and Dell waited for Intel to have a more powerful processor before releasing a new product. Since the ad was most likely hurting sales, they complained.

Apple NEVER stated the G5 was going into a PowerBook. Jobs stated, "I'm sure some of you wanted a G5 in a PowerBook." Simple.
 
Is nobody going to ask the question: if a wafer is $20k and it's a 3nm process, how many Arm chips will be on that wafer? General estimates are fine.


Depends upon whose die. Assuming a 300mm wafer and a 0.2 defects/cm^2 rate, the format is: die area (dimensions in mm) --> total dies (defect-free, gross yield %).

A12 -- 83mm^2 (9.89 x 8.42) --> 703 (597, 85%)
A14 -- 88mm^2 (estim 9.45 x 9.35) --> 661 (556, 84%)
A15 -- 108mm^2 (estim 10.5 x 10.3) --> 536 (433, 81%)
A16 -- about the same as the A15, slightly bigger (estim 10.7 x 10.3) --> 520 (418, 80%)
M1 -- 119mm^2 (10.99 x 10.96) --> 479 (378, 79%)
M2 -- 142mm^2 (estim 12 x 11.8) --> 404 (306, 76%)
Snapdragon 855 -- 74mm^2 (8.42 x 8.64) --> 808 (700, 87%)
Snapdragon 8cx -- 113mm^2 (8.3 x 13.5) --> 516 (414, 80%)
Intel Skylake 4+2 -- 122mm^2 (9.19 x 13.31) --> 471 (371, 79%)
M1 Max -- ~452mm^2 (19.96 x 22.66) --> 114 (49, 43%)

Several of those die sizes came from this page.
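For anyone wanting to sanity-check these figures: the quoted yield percentages are consistent with Murphy's classic yield model at 0.2 defects/cm^2. The sketch below is my own reconstruction, not whatever calculator produced the numbers above; the dies-per-wafer estimate uses the standard first-order approximation, which ignores scribe lanes and edge exclusion, so its gross counts will come out somewhat higher than the table's.

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    # First-order approximation: wafer area / die area, minus an
    # edge-loss term. Real calculators also model scribe lanes and
    # edge exclusion, so exact counts will differ.
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def murphy_yield(die_area_mm2, defects_per_cm2=0.2):
    # Murphy's yield model: ((1 - e^(-A*D0)) / (A*D0))^2
    ad = (die_area_mm2 / 100) * defects_per_cm2  # mm^2 -> cm^2
    return ((1 - math.exp(-ad)) / ad) ** 2

for name, area in [("A12", 83), ("M1", 119), ("M2", 142), ("M1 Max", 452)]:
    print(f"{name}: ~{dies_per_wafer(area)} gross dies, "
          f"Murphy yield ~{murphy_yield(area):.0%}")
```

The Murphy yields reproduce the 85/79/76/43% figures quoted for the A12, M1, M2, and M1 Max, which is a good hint that's the model behind the table.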






One 'gap' between the Apple and the Qualcomm/MediaTek SoCs is often the amount of die area that Apple throws at their solution versus the area limitation that the others hold themselves to. Qualcomm's laptop-targeted 8cx SoC is in a similar ballpark to the A16 (granted, the A16 is waaaaaaaaay off the norms for iPhone chip size). Apple needs N3 because the A16 bloated about as big as it can get. Apple needs to implement what they've got ... smaller.

That isn't driven by Apple being "richer" or having more money. That is entirely driven by Apple sucking more money out of their customers' pockets. It isn't Apple's money.


Similar issues with whatever "chiplet" Apple wants to use for Ultras (and above). They need smaller more than some feature-war checkbox. [ The yield percentage there is as 'horrible' as it looks. Flip off some broken CPU and/or GPU cores and have a working die to sell. As long as they can sell it, they make money. ]





Something A14-sized at 88mm^2 then has a 20K/556 cost basis of about $36/die. That is not the end of the world if Apple is charging around $80-90 for the die.



Something M1-sized at 119mm^2 then has a 20K/378 cost basis of about $53/die. Again, if they can charge $110 for the die... far, far, far from the end of the world.



M2-sized at 142mm^2 then 20K/306 ... about $65. Still, if billing at $110, still profitable, but it is eating into margins.



M1 Max-sized at 452mm^2 then 20K/49 is a cost basis of $408, but if charging $800 for it... not particularly a margin problem.



If Apple could shrink the 452mm^2 die so they got 65 dies/wafer, that would knock the cost basis down to $307. If Apple only has access to a fixed-sized allocation of total wafers, then getting the 'large' M-series dies smaller helps a lot. And far more than chasing maximum Geekbench scores.

P.S. Similarly, if they trimmed off the UltraFusion connector, which is useless for the laptop deployment of the M1 Max, they could get 3-5 more dies per wafer. Which, over 10,000 wafers, is another 30-50K dies (and chops overhead expense of $2.00M).
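The cost-basis arithmetic above is just wafer price divided by good dies. A minimal sketch, using the hypothetical $20K wafer price from the question earlier in the thread and the defect-free die counts quoted above:

```python
WAFER_PRICE = 20_000  # hypothetical $20K/wafer figure from the question above

def cost_per_good_die(good_dies_per_wafer):
    # Cost basis: spread the whole wafer price over the sellable dies.
    return WAFER_PRICE / good_dies_per_wafer

# Defect-free die counts quoted earlier in the post
for name, good in [("A14", 556), ("M1", 378), ("M2", 306), ("M1 Max", 49)]:
    print(f"{name}: ~${cost_per_good_die(good):.0f}/die")
```

Note that you divide by the defect-free count rather than the gross count, since the broken dies still consumed wafer area you paid for.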
 
Design doesn't really matter,
And still, Apple’s design nets greater performance per watt than anything else TSMC manufactures. Better yet, Apple’s using an instruction set that other companies ALSO use, so there’s no special benefit there either. The biggest difference is indeed who designed the solution.

Maybe Apple’s the only company sending the order over to TSMC WITHOUT anchovies? Everyone else COULD leave off the anchovies, but for whatever reason, their designs still include anchovies. And mayonnaise.
 
Are there really any users left on Android that care about performance or on-device processing? Most just want the cheapest phone with oled and a decent camera.
 
That isn't driven by Apple being "richer" or having more money. That is entirely driven by Apple sucking more money out of their customers' pockets. It isn't Apple's money.
I have to disagree there, though. Apple's gear has traditionally been pricier than its competitors'. The thing with Apple Silicon, as I see it, is that the middleman's margin has been cut out. That leaves Apple more room to implement more expensive solutions compared to their competitors.
 