Slower ramp than what? I don't think there were any viable alternatives for Apple other than N3B. This seems like a lemons-to-lemonade situation.

Slower than if multiple customers were each paying for full wafers at a faster pace.

If, early on, Apple orders 5K wafers, company B orders 5K, and company C orders 5K, then that is an aggregate pile of 15K wafers providing feedback.

If that falls back to TSMC covering just the 5K wafers that Apple wants, then it is just 5K in aggregate. Apple isn't going to get more early dies back just because no one else is asking for any. And TSMC sure isn't going to throw away even more of its own money handing Apple dies it didn't ask for.

While TSMC has its own 'test mule' designs that it pushes through for development, it gets different feedback by pushing different customer designs through the process. If just building simple SRAM or a simple Arm core were enough to expose all the bugs in N3B, TSMC could have gone live sooner. Testing bias pops up all the time within homogeneous organizations.

The smaller the sample size, the less robust the feedback you are going to get out of the early parts of the ramp. N3B takes months longer to make than N7 or earlier generations, so the whole cycle is slower. Dropping the other folks who would generate useful data early only makes for a much longer feedback cycle before there is enough data to be very useful.
TSMC doesn't need tens or hundreds of different designs, but it needs more than just 1-3.

Additionally, Apple didn't 'need' maximum full-wafer output flow until June. If there were another customer with an earlier timeline, they would have had higher flow demand in December-January (when Apple isn't really all that interested).
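To put a rough number on the sample-size point: if you model defects per die as Poisson, the relative uncertainty of a defect-density estimate shrinks like one over the square root of the silicon observed. A back-of-the-envelope sketch (the wafer counts, dies per wafer, and assumed D0 are illustration values only, not TSMC data):

```python
import math

def d0_uncertainty(wafers: int, dies_per_wafer: int = 600,
                   die_area_cm2: float = 1.0, true_d0: float = 0.2) -> float:
    """Relative standard error of a defect-density (D0) estimate.

    Defects per die are modeled as Poisson(die_area * D0); for a Poisson
    count the standard deviation is sqrt(count), so the relative error
    on the estimate is 1 / sqrt(total expected defects observed).
    """
    total_expected_defects = wafers * dies_per_wafer * die_area_cm2 * true_d0
    return 1.0 / math.sqrt(total_expected_defects)

for wafers in (5_000, 15_000):
    print(f"{wafers:>6} wafers -> ~{d0_uncertainty(wafers):.2%} relative error on D0")
```

Tripling the wafer pile only buys you a sqrt(3) improvement in the estimate, which is part of why more volume and, more importantly, more *distinct* designs matter early in the ramp.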
 
AMD is supposedly going to use the N3B node for their Zen 5 chips


That doesn't say AMD is using N3B at all.

"... It is rumored that "Zen 5" is being designed for the TSMC 3 nm node, and could see an increase in CPU core count per CCD, up from the present 8. TSMC 3 nm node goes into commercial mass-production in the first half of 2023 as the TSMC N3 node, with a refined N3E node slated for the second half of the year. ..."

Given the Zen 5 variants aren't coming until 2024, N3E is just as plausible (if not more so).
There are rumors that AMD did both N4 and some flavor of N3 for Zen 5 (I think Moore's Law Is Dead and others). The N4 parts are what is coming sooner rather than later.

If AMD is doing both, then N3E's SRAM cells being the same size as N5's (N4's) means they won't have to do juggling to deal with differences in cache-size layout. AMD has also already done lots of work on 3D cache, so they can just 'glue' more cache on top if they really need more. Not sure what N3B is going to 'buy them'.
 
"Gripping any mobile phone will result in some attenuation of its antenna performance, with certain places being worse than others depending on the placement of the antennas. This is a fact of life for every wireless phone. If you ever experience this on your iPhone 4, avoid gripping it in the lower left corner in a way that covers both sides of the black strip in the metal band, or simply use one of many available cases."
There WAS that set of videos showing how almost all phones made at the time attenuate when held in ways that folks would consider "normal". It's essentially the same as the iPhone making speakers buzz when placed next to them. The same thing the entire industry expects due to physics is now a critical issue because an Apple logo is attached to the story. :)
 
The pace of chip innovation has leapt past the simultaneous content, software, and hardware advancements that Apple is working on. Our iPhones are only getting marginally better despite mind-blowing chip advances. I don't care how many transistors you put on the chip; Siri will still play the wrong song, or tell us that she can't do that right now.

Apple is about to meet the reality that today's investors, and to some degree consumers, are all in on AI advancements, and Apple is far behind.
 
Using 'older' A-series chips is how Apple lowers the price points. If they dropped the A17 before its planned lifecycle was completed, they would then have to change the product development for 2-4 different other products. That will cost money. And what do they do instead, stick with the A16 for an extra 1-2 years and take less of a competitive performance lead for those products? After the first year of deployment, the typical A-series chip is only about 'half done' with its unit-volume lifecycle (and has multiple years of usage ahead).
Somewhat. They will use a factory as long as possible without retooling. Sometimes, they can revamp a product without requiring significant retooling. This is where SE products come from.

A-series chips are not a challenge if available, and can be tossed into all sorts of products, even monitors.

Pushing the A18 (presuming it uses N3E) into the SE either pushes its launch date out further or raises the price of the product. Neither of those is good. There is also a more than decent chance the A18 will be a larger (with more 'stuff'), and therefore more expensive, die even if it is also on N3E. It won't be a 'cheaper, sooner' option for over a year after release.
Right. The reason they put new technology in the Pro is because there are higher margins to pay for new parts, and they don't need those new parts to be available for the entire onslaught of new phone sales each fall.

They could retool an existing line from, say, the 12 to make an SE, or they could use a new chip, but the part cost and volume aren't a good match. Process cost basically goes up on an exponential curve.
 
Unusual? Been in the semi industry for decades, and a lot of companies pay for "known good die"…
Not from the foundry, they don't. You buy wafers and then you have them tested, diced, qualified, etc. Usually at another facility. Testing facilities will identify "good" and "bad" dies, then dicing vendors will isolate them into groups, but from the foundry you (usually) buy whole wafers.
 
If that were the case, no foundry would have any incentive to actually improve the yield...
 
They pay by the wafer. Part of the reason that binning is a popular practice is because it can make use of some of those dies that would otherwise have to be thrown out. If a company wanted to design a 1600mm^2 die and didn't utilize binning, it would be a financial hit on them rather than on the fab for designing a chip that would be very difficult to manufacture with high yields.
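The economics of that big-die example are easy to sketch with the classic Poisson yield model, Y = exp(-A * D0), where A is the die area and D0 the defect density. The D0 below is an assumed early-ramp value, not any foundry's real number:

```python
import math

def poisson_yield(die_area_mm2: float, d0_per_cm2: float) -> float:
    """Fraction of dies with zero defects under a Poisson defect model."""
    area_cm2 = die_area_mm2 / 100.0
    return math.exp(-area_cm2 * d0_per_cm2)

d0 = 0.5  # assumed defects per cm^2, illustrative early-ramp figure
for area in (100, 400, 1600):
    print(f"{area:>5} mm^2 die: {poisson_yield(area, d0):.2%} perfect dies")
```

Yield falls off exponentially with die area, so a huge die with no binning strategy is almost all scrap at early-ramp defect densities; that financial exposure sits with the chip designer, not the fab.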
Are you suggesting TSMC is selling Apple die through third parties?
Why would any company agree to pay per wafer and not per known good die?
 

Apple moved its wafer-buying business off of Samsung to TSMC. Qualcomm dumped Samsung on the Snapdragon 7+ (basically the same chip, just moved). The idea that these designs are not portable when the yield drops low enough simply isn't true. If you have really bad yields, you get dropped. That is a very bad thing when you have to buy multiple $800M ASML machines plus infrastructure, and an even more costly building to put them inside of, just to be a player. If you have an $800M machine that is only 10% utilized, that is a problem for the fab company.

Even at TSMC, there is much higher customer demand for N3E than there is for N3B. Lower costs and higher yields earlier in the lifecycle have led to more customers.

[Chart: TSMC defect-density ramp comparison across nodes, snapshot from TSMC's "Advanced Technology Leadership" presentation]

If the 'theory' of no incentive were correct, the defect density at -2Q would look the same as at +2Q. It didn't for either N7 or N5, nor is it a flat line for N3.

If TSMC makes its customers successful, then they come back and buy more. That is the core of the feedback cycle. Unsuccessful customers are likely going to be too broke to buy new stuff. Long term, what does that get the fab vendor?

Defects on the wafer itself are also not the only source of bad dies. Wafers are handled after leaving the factory. The wafer is cut. The die is clocked at some rate the foundry doesn't set. Etc.

There are also a number of folks who won't start their orders until the defect density is low enough (see the first chart: volume in months 1 and 2 is nothing like month 7).
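One way to see why per-wafer pricing still gives the foundry an incentive: the customer's effective cost per good die falls as defect density falls, and cheaper good dies are what bring customers back. A toy calculation (the wafer price, die count, and D0 values are all made-up for illustration):

```python
import math

WAFER_PRICE = 17_000      # assumed $/wafer, purely illustrative
DIES_PER_WAFER = 600      # assumed gross dies per wafer
DIE_AREA_CM2 = 1.0        # assumed die area

def cost_per_good_die(d0: float) -> float:
    """Effective cost per good die under per-wafer pricing,
    using a simple Poisson yield model Y = exp(-area * D0)."""
    yield_frac = math.exp(-DIE_AREA_CM2 * d0)
    return WAFER_PRICE / (DIES_PER_WAFER * yield_frac)

# Hypothetical defect densities before, at, and after volume launch
for label, d0 in [("-2Q", 0.6), ("launch", 0.33), ("+2Q", 0.2)]:
    print(f"{label:>7}: D0={d0:.2f}/cm^2 -> ${cost_per_good_die(d0):,.2f} per good die")
```

Every drop in D0 directly lowers what the customer effectively pays per usable chip, which is exactly the feedback loop described above.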
 
My take on this is that TSMC couldn't guarantee a certain yield and Apple turned it back on TSMC. In turn, TSMC clamped down on the process control with more in-line testing, tighter control limits, and stopping the line and taking down tools sooner.
 
In-line testing and defect impact can provide a good-die estimate. Heck, when there are 1,500 to 3,400 dies per wafer with 10K+ wafer starts per week, something will yield.
The question remains whether TSMC will be able to rein in the process control and meet shipments, or whether some customer will be pacing back and forth wondering if their wafers will make the starts schedule.
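For the 1,500-3,400 dies-per-wafer range quoted above, the standard first-order gross-die approximation for a 300 mm wafer gives a feel for the die sizes involved (this is the textbook estimate, not TSMC's actual reticle/layout math):

```python
import math

def gross_dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """First-order gross die estimate for a round wafer:
    (wafer area / die area) minus an edge-loss term that scales with
    the wafer circumference divided by the die's diagonal."""
    wafer_area = math.pi * (wafer_diameter_mm / 2.0) ** 2
    edge_loss = (math.pi * wafer_diameter_mm) / math.sqrt(2.0 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

for area in (20, 45, 100):  # small mobile-class die sizes, mm^2
    print(f"{area:>4} mm^2 -> ~{gross_dies_per_wafer(area)} gross dies")
```

Die sizes in the ~20-45 mm^2 range land right in that 1,500-3,400 window, i.e. small mobile-class dies rather than big SoCs.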
 
It depends on the kind of defect we're talking about.
Binned chips are used on entry-level products all the time, like the 7-core GPU M1 or the 4-core GPU A15.
If these are the yields, we'll almost certainly get the full A17 on the iPhone 15 Pro, and the binned version (fewer CPU/GPU cores, same efficiency) on the iPhone 15.
I thought the iPhone 15 is getting the A16?
 
Around 85% which is not bad, but not perfect/optimal for sure.

They can build other products from the "defective" chips. Look at the A15, for example. It has so many variations because of yield issues:

Full A15 (5 GPU / 6 CPU cores):
  • iPhone 13 Pro/13 Pro Max/14/14 Plus
Binned A15 #1 (4 GPU / 6 CPU cores):
  • iPhone 13/13 Mini, SE (3rd gen)
Binned A15 #2 (5 GPU / 6 underclocked CPU cores):
  • iPad Mini 6
Binned A15 #3 (5 GPU / 5 CPU cores - 1x Blizzard efficiency core disabled):
  • Apple TV 4K (3rd gen)
Intel used to do the same thing (and probably still does).
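Putting rough numbers on the binning idea: under a simple Poisson defect model, a die whose only defects land in a redundant block (e.g. one of the GPU cores) can still be sold as a binned part. All the constants below (die area, GPU area share, defect density) are made-up illustration values, not Apple's or TSMC's figures:

```python
import math

DIE_AREA = 1.0        # cm^2, assumed
GPU_FRACTION = 0.25   # assumed share of die area that is (redundant) GPU cores
D0 = 0.2              # defects per cm^2, assumed

def bin_split(d0: float = D0, area: float = DIE_AREA,
              gpu_frac: float = GPU_FRACTION) -> tuple[float, float, float]:
    """Return (full-spec, binnable, scrap) fractions.

    Full spec: zero defects anywhere on the die.
    Binnable:  zero defects outside the GPU, at least one inside it
               (assuming GPU damage can be fenced off by disabling cores).
    Scrap:     everything else.
    """
    p_clean_non_gpu = math.exp(-area * (1.0 - gpu_frac) * d0)
    p_clean_gpu = math.exp(-area * gpu_frac * d0)
    full = p_clean_non_gpu * p_clean_gpu
    binnable = p_clean_non_gpu * (1.0 - p_clean_gpu)
    return full, binnable, 1.0 - full - binnable

full, binned, scrap = bin_split()
print(f"full: {full:.1%}  binnable: {binned:.1%}  scrap: {scrap:.1%}")
```

With these assumed numbers the sellable total (full plus binned) lands in the mid-80s percent, which is consistent with the "around 85%" figure quoted above; binning recovers dies that would otherwise be pure scrap.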
 
Who said that the defective chips are actually put into products?

This is actually pretty normal practice. The chips are designed so that defective parts can be deactivated without affecting the rest of the chip.

For example, the low-end M1 chips have 7 GPU cores instead of 8. But all 8 cores are physically there. If there is a defect on one of the GPU cores they just deactivate it and ship it as a 7-core chip instead of an 8-core one.
 

I now wish MacRumors would stop reporting anything hardware. We end up with a world of Apple supporters who think they know hardware or manufacturing or supply chains, but in reality they know next to nothing.
 

It makes sense to design the chip to allow one design to meet several performance/price points, thereby cutting waste and costs. I remember when Intel, in the 486 days, disabled math coprocessors to sell the same physical chips at two price points. Of course, people complained back then that Intel was screwing them because the capability was there but Intel disabled it, forgetting they paid less for the device.

That's not unique to computer manufacturing; car manufacturers do it as well. BMWs, for example, have (had?) features not enabled by the build order that were still part of the vehicle. Rewriting the build order could enable them. Sometimes the wiring harness contained wires for features that weren't present because some parts were missing; that is how I added factory Bluetooth to my E90 using factory parts.
 
Correct me if I'm wrong, but TSMC was reportedly charging a fortune for 3nm wafers. Apple probably shouted at them behind the scenes if they were going to deliver a lot of defective product at an insane price.
 