I don’t know, ask those companies that canceled orders last year why they’re not interested in 3nm node processors?
I saw the link the first time, and that is ONE source, of which there are many; I have read a few. You've now completely walked back your original point, which stated 'nobody wanted 3nm'. That is the point I questioned. You've shifted to 'nobody wanted N3B; customers were waiting for N3E / waiting out production issues', as though that's what you said all along. This isn't what you said.
 
This is repeatedly stated in a misleading and inaccurate way here on MR. There are two orthogonal statements here that describe the process improvement in different ways:

1. 35% lower power for the same performance

2. 15% greater performance at the same power

Once you state it correctly, clearly you can't call 15% greater performance at the same power a '35% power efficiency improvement'. You can have 35% greater efficiency at the same performance, OR you can have 15% more performance at a lesser efficiency advantage. You don't get to have both.
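Since the two claims are points on one power/frequency curve, the conversion between them can be sketched numerically. This is a minimal sketch assuming dynamic power scales roughly with the cube of frequency (an illustrative first-order model, not TSMC's actual characterization):

```python
# Minimal sketch of why the two vendor claims are one curve, not a
# combined gain. Assumes dynamic power scales roughly as f^3
# (P ~ f * V^2 with voltage scaling ~ f) -- a first-order
# approximation, not TSMC's actual model.

power_ratio = 1.0 - 0.35   # new node: 65% of old power at the same frequency

# At iso-power: power_ratio * f_new**3 = f_old**3
iso_power_speedup = (1.0 / power_ratio) ** (1.0 / 3.0)

print(f"frequency gain at the same power: {(iso_power_speedup - 1) * 100:.1f}%")
# Under this assumption the 35% iso-performance power saving converts
# to only ~15% more frequency at iso-power: you pick one point on the
# curve, you don't stack the two numbers.
```

Notably, under this crude cubic model the 35% figure and the 15% figure come out mutually consistent, which is another hint they describe the same improvement viewed from two ends.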


More accurately, you can't have both at the SAME TIME. You can get much of both with some power gating and clock control. When the A17 is largely idle, with only background administration to do, the phone can plausibly get more battery life. Similarly for video decoding at the same 60Hz the video ran at before (generally you're not going to try to run the video faster). It is up to the performance manager built into the A17 to dial things back and reap the savings.

What you're not going to get is more time on the same size battery if you're constantly hammering the cores with some very high performance, power-hungry app (i.e., more 3D video game playing time). You may get more elements involved in the game on screen, but the battery will die just as fast as before.

Over the course of 12 hours the user can 'get' some of both if the workloads are balanced. But yes, for the folks who rack up high amounts of screen time because they are constantly 'fussing' at their phones, N3 isn't going to 'save' them from lots of battery drainage problems.
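The "some of both over a day" idea can be put into a toy energy budget. All the wattages and hours below are invented for illustration, not Apple measurements:

```python
# Toy energy-budget sketch with made-up numbers (not Apple's figures):
# over a day, fixed-rate work (video playback, idle housekeeping) banks
# the ~35% power saving, while a core-pegging game draws power at the
# old rate and returns more on-screen work instead of longer runtime.

old_power = {"fixed_rate": 0.5, "pegged": 3.0}   # watts, hypothetical
hours     = {"fixed_rate": 9.0, "pegged": 1.0}   # usage mix per charge

def energy_used(saving_on_fixed_rate):
    """Watt-hours consumed; the saving applies only to fixed-rate hours."""
    fixed = old_power["fixed_rate"] * (1 - saving_on_fixed_rate) * hours["fixed_rate"]
    pegged = old_power["pegged"] * hours["pegged"]  # same draw, more work done
    return fixed + pegged

print(f"old node: {energy_used(0.0):.2f} Wh, new node: {energy_used(0.35):.2f} Wh")
# The saving shows up only in the fixed-rate hours; the larger the
# gaming share of the day, the less extra battery life N3 "gives back".
```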

Apple's sales material for battery life is centered on a fixed-function (fixed-rate) test of just video playback. So there is a pretty good chance that statistic will be goosed by N3, because it really isn't about more performance.
 
Pffft! Mine was the great and mighty Commodore 64, making me computing king of my 'hood for a few months.
Pffft! My H-89 was old when the Commodore 64 came out... 64KB was a home upgrade. So was any sound other than a beep. HD?!? One 5 1/4in floppy. It did come with a soldering iron though... 😉
 
No, they don’t.
Major customers CANCELED orders last year. So, it’s not surprising that one of the remaining customers that’s NOT canceling orders would end up with a greater percentage of their 3nm capacity.

The snippet from the article that gets embedded explicitly states that the customers temporarily canceled orders. That is really more of a "time-shifted order" (please change my room reservation from the 16th to the 22nd) than a "cancel" (we don't want these anymore at all).

From TSMC's perspective, they end up with lots more idle equipment over the short term. But long term they are still just as likely to get to full capacity; it is just taking a bit longer to completely fill the capacity pipeline. Time-shifted orders are still going to make TSMC money. (The money is still coming... they just can't use it to goose the next quarter's, or two quarters', results.)

It isn't necessarily the capacity; it is the percentage of the wafer start orders. There were reports a month ago that the N3 line at TSMC was running at 50% utilization. There were lots of empty wafer slots. Once TSMC starts doing both N3B and N3E there probably won't be any empty wafer slots anymore.
 
Funny, the prior “rumors” had Apple at 100% of the chip capacity. Now it’s 90%.
 
More accurately, you can't have both at the SAME TIME. You can get much of both with some power gating and clock control. When the A17 is largely idle, with only background administration to do, the phone can plausibly get more battery life. Similarly for video decoding at the same 60Hz the video ran at before (generally you're not going to try to run the video faster). It is up to the performance manager built into the A17 to dial things back and reap the savings.

What you're not going to get is more time on the same size battery if you're constantly hammering the cores with some very high performance, power-hungry app (i.e., more 3D video game playing time). You may get more elements involved in the game on screen, but the battery will die just as fast as before.

Over the course of 12 hours the user can 'get' some of both if the workloads are balanced. But yes, for the folks who rack up high amounts of screen time because they are constantly 'fussing' at their phones, N3 isn't going to 'save' them from lots of battery drainage problems.

Apple's sales material for battery life is centered on a fixed-function (fixed-rate) test of just video playback. So there is a pretty good chance that statistic will be goosed by N3, because it really isn't about more performance.

True, this is simply describing two different points on a continuous power/performance curve, but that's the point that is stated misleadingly. It's as if you can somehow get 15% more performance at 35% greater efficiency, the way this is usually (mis)stated here.

I think this also fuels expectations of bigger leaps than are reasonably possible, and I think if people understood this better they might be slightly less grumpy when the A17 and M3 come out with significant but hardly staggering performance gains.

It's also interesting to me at least that this is quite a common spread for new process nodes - the biggest difference between the processes seems to be felt at the point of similar performance (the 35% uptick), rather than at maximum performance (the much more modest 15% uptick). You get more in terms of same-performance efficiency gains than you do in terms of absolute performance gains from most node shrinks it seems. Again something to temper expectations when hoping for massive performance increases from node to node.

You do make a good point though about benefiting from all the different points on that power/performance curve across different workflows. In some you'll get the maximum efficiency benefits and in others the maximum performance benefits.
 
  • Like
Reactions: gusmula
The unlikely characterization is dependent upon Apple brute-forcing the Max into a chiplet role at the M3 stage. The Max is not really a good chiplet design. It is a bit too chunky, has a dubious lack of function decomposition when it comes to scaling past two dies, and carries lots of monolithic laptop baggage. It doesn't scale well. Apple could do a good chiplet design. AMD's desktop/server chiplets have 8 cores. There would be absolutely nothing wrong with just 10 (two 4-core P clusters and a chopped-down 2E cluster). If at least two chiplets are always used in a package, then the number of CPU cores will outnumber the M2/M3 Max (20 vs 12 CPU). [...]
As usual your analysis is spot-on technically.

I would like you to be right about how it all turns out, but I think you won't be, because I'm dubious about how much Apple will be willing to invest in chiplet designs for high-end machines. The pros are a tiny part of their revenue stream. If they were willing to do a truly custom design for the Mac Pro, then I think you'd be right on the money.

There is one possibility though that might push things in favor of your scenario. If they decide they need massively wide CPU/GPU for their xR headset line (future generations, if not the first) then they might see investment in a real chiplet design as paying off for xR devices, not just Mac Pros.

Hopefully we'll get some clarity at WWDC. I'm pretty disappointed we didn't get first M3s already by now, though I won't be truly surprised by them not showing up unless they're still MIA in the fall.
 
As usual your analysis is spot-on technically.

I would like you to be right about how it all turns out, but I think you won't be, because I'm dubious about how much Apple will be willing to invest in chiplet designs for high-end machines. The pros are a tiny part of their revenue stream. If they were willing to do a truly custom design for the Mac Pro, then I think you'd be right on the money.

It doesn't have to be just for the highest end. If it trickled down to the Mac Studio and perhaps a large-screen iMac, there would be a wider base to spread costs over. Part of the problem is the cap on the number of Mac desktop models (to get the Studio they had to kill the large-screen iMac. As long as that kind of dynamic is around, there is never going to be a big enough desktop user space to support mild derivative alternatives. If the MBP 13 stays around after the MBA 16 comes, that also shows how arbitrary the cap on desktop Mac models is.). But yes, there is some likelihood that Apple will shave costs in ways that hurt the Mac Pro's competitiveness in the general workstation market.

Apple could push the M3 Max to 16 cores (two 4-core P clusters and two 4-core E clusters). Apple could cap out an Ultra at 32 cores and just give away more of the old Mac Pro user space. So 32, not 40, and half of them E cores. Throw an I/O chiplet between the M3 Max dies, spawn a small amount of PCI-e provisioning to be the backhaul for some PCI-e slots, and just stop.

There might be a Max-Plus die that tossed in more GPU cores (and memory controllers) and kept the CPU count the same, and then pair those. Kind of doubling down on the "too chunky" chiplet notion to go even more chunky (i.e., whatever shrinkage N3/N2/etc. provide, just keep the same M2 Max die size and stuff more in there). However, that still gives up on the >2 die options. It also gives them a good excuse to avoid implementing ECC on memory, because maximum capacity is going to be limited.


If Apple is going that route, I think there is a way to expand the revenue stream by putting some larger M-series on a PCI-e card and selling "Mac on Card" options for the folks who are otherwise just going to leave for an x86 workstation. 400-600W GPU card provisioning opens the door to sticking a whole computer on a PCI-e card and running a separate OS instance on the card. However, that requires Apple to think outside the box to expand into new markets.


There is one possibility though that might push things in favor of your scenario. If they decide they need massively wide CPU/GPU for their xR headset line (future generations, if not the first) then they might see investment in a real chiplet design as paying off for xR devices, not just Mac Pros.

Reportedly the higher-end headset has two processor packages. That actually makes sense given the limited enclosure. That would be somewhat the opposite of chiplets, though, since they are spread out to fit on either side of the two eyes. Even if they did use chiplets, it would only be a pairing of just two heterogeneous chips (one UltraFusion connector to rule them all... still).

The hurdle for the xR headset is that there is lots of input coming in: 12+ cameras, plus inferencing on that raw data that really, really, really doesn't need to be shared with mostly everything else (besides mirroring for the screens) except for the relatively much smaller results.


Similar with the iPhone once Apple gets a modem deployed. Base A-series die with one, and only one, modem chiplet. (Fab-process-wise it may not make sense to make them use exactly the same stuff.)




Hopefully we'll get some clarity at WWDC. I'm pretty disappointed we didn't get first M3s already by now, though I won't be truly surprised by them not showing up unless they're still MIA in the fall.

WWDC may reveal that Apple has thrown a collectively huge number of transistors at the headset's processing abilities. Just how custom that design is could be a factor in why they don't want to put other highly custom dies into the design workload queue.


I thought that Apple would have used TSMC N3 to solve some of the Mac Pro 'problems' in matching performance and shown something by now. But it looks like they are not being that aggressive. I would be surprised if Apple wanted to build the notion that the M-series was going to refresh on some rigid 12-month cadence. It works for the iPhone because the iPhone generates huge revenues (and sells one and two year old stuff as new; millions and millions of 'hand me down' models to sell into).

So a >12 month cadence isn't surprising. I thought the Mac Pro might be a corner case because it hasn't even started yet. They also are not generating good expectations by being grossly late to even start.

Won't be surprised if the iMac 24" gets M3 first. On some 'wishful thinking' plan worked out years ago, the M3 iMac would have arrived around the same time as the 25th anniversary of the first iMac. N3 production slid out about 6 months from the most optimistic roadmap back then, so that iMac slides back about 6 months. But fixed-in-stone A-series launch dates and even longer inventory-build production times for N3 messed that plan up. There was no pre-demand-bubble inventory slot left to get modest volumes out in the spring. The iMac isn't a strategic product now, so the slide doesn't matter.
 
Intel took a virtual monopoly and blew it. What are we to do? It's free enterprise. We can't have a double standard on capitalism and a free market. Should we give Intel government-funded welfare and subsidies and hope they don't continue to blow it, or do we let the marketplace decide?

decentralisation and anti-trust laws
 
... and still no word on the Mac Pro.
iPhones are still paying the bills at Apple. As of last month (Q2 2023 conference call) the iPhone contributed over 54% of Apple's entire revenue, while Macs contributed only 7.6%. It is clear that the iPhone is a more profitable business for Apple, and therefore more effort is put into it: it refreshes every year, new features are added, new processors are employed, etc.

This really is not about an annual dog-and-pony show or not. Mac Pro 2019 to 2023 is creeping up on 4 years. Apple is in about the same situation the MP 2013 was in come 2017 (another 4-year gap). [Somewhat dissimilar in that Apple had MPX GPU updates in 2020 and 2021, and finally replaced the 580X in 2022. So at least the 'life support' subsystem got updates.]


Most folks aren't asking for annual as much as regular. Pick a freaking iteration cycle, 1.5, 2, or maybe 3 years, and then STICK to it. Apple said in about two years they'd be done and... they are not. The Intel Mac Mini dragged past the two-year deadline also, so this is not really about just the Mac Pro either. How hard would an M1 Pro Mini have been to do? Instead they waited until past the two-year deadline to do an M2 Pro Mini because of how long it all took.

The Rip van Winkle mode of doing product development is what many folks chafe at.

Pointing at iPhone revenue is gross misdirection. The Macs, split off into a separate subsidiary, would still be another Fortune 500 company (even more so if you let the subsidiary take Mac-coupled services revenue with them).
The Mac division all by itself is big enough to do regular Mac Pro updates. It is really a time-and-effort problem, not a money problem. [Not to mention the maybe close to $1B money sinkhole of the Apple Car, or the over $1B sinkhole of the Apple modem. A relatively small fraction of that could have funded another Mac Pro update over the last 4 years.]


The other kicker is that Apple has pretty much set down a track record of doing 6-month-in-advance 'sneak peeks' at Mac Pro class systems. So even if it isn't ready to ship, they can at least show it in a 'look, but don't touch' case. The supply chain for the iPhone is so broad that the only folks who haven't seen an iPhone 15 mock-up are those not looking for one. MacRumors has taken to running front-page articles speculating on the iPhone 16 because the iPhone 15 rumors are just about all worn out. Providing concrete information (directly or indirectly) is the sore point here.


From the mix of rumors over the last 3 years, it appears Apple had planned to do something that was a bust, and likely the pandemic made recovering from that worse. But at this point they should at least have a pretty solid mock-up of what the system is going to be. If they don't, that's the root problem.


Also, something many people are not aware of or don't like to acknowledge is that Macs do not have a big user base outside of the US, partly because of costs, but there are other reasons too, while the iPhone is indeed a sought-after product in many markets outside the US (Europe, Japan, Asia, even LatAm).

20+ million units per year is not small. The Mac Pro entry price was boosted 100% with the MP 2019 model. It has a relatively-low-volume (for the Mac space) tax built into the system price.
 
From the mix of rumors over the last 3 years, it appears Apple had planned to do something that was a bust, and likely the pandemic made recovering from that worse. But at this point they should at least have a pretty solid mock-up of what the system is going to be. If they don't, that's the root problem.
We may never know for sure, but I think there were two major issues:

1) M1 Max, and especially Ultra, has serious scaling deficiencies. CPU and GPU both fail to scale across certain workloads despite some astounding technical achievements, such as the 20tbps link in the Ultra. It was their first try, and a great learning experience, but it definitely wasn't up to what they needed for a Mac Pro. But that's OK, they figured they could take all that learning and hit it out of the park with the M2. Except...
2) TSMC was late on N3. So late that Apple had to shelve the original M2 designs, which were all for N3, and not easily backportable to N5/N4. Thus we got the decidedly so-so M2, with only modest performance improvements, as all the major IP blocks were based on M1 IP. Obviously, that killed their chance at a Mac Pro M2, since the NoC and (potential, as it's not released) UltraFusion tech is barely-tweaked M1 generation.

If I'm right about all this, then we'll see the Mac Pro with M3, which will be an improved version of what they originally intended to be M2. If we don't, that will be a good indicator that either there was another major issue beyond the two I've identified... or that they've given up on this entirely. That certainly wasn't the case as of a couple years ago, based on the patents they've been filing, but it could be now. I reeeeeally hope not though.

I didn't quote the rest of what you wrote, but I basically agree.
 
We may never know for sure, but I think there were two major issues:

1) M1 Max, and especially Ultra, has serious scaling deficiencies. CPU and GPU both fail to scale across certain workloads despite some astounding technical achievements, such as the 20tbps link in the Ultra. It was their first try, and a great learning experience, but it definitely wasn't up to what they needed for a Mac Pro. But that's OK, they figured they could take all that learning and hit it out of the park with the M2. Except...

I doubt that many deep lessons learned from the M1 could be folded back into the M2 if they were both being done in overlapping, concurrent development pipelines.

If Apple wanted to easily fold lessons from M1 into M2, then they should have used the same N5 family for both. I'm skeptical that M2 was targeting N3; N5P, or optimistically N4, would have been far better risk management.

And Apple didn't ship the Xcode tools to really dig into the scaling issues until after the M1 Ultra had already shipped (they came in the following WWDC release). The hardware isn't the complete root cause of some of the problems that earned the Ultra its 'scaling problem' reputation.




2) TSMC was late on N3. So late that Apple had to shelve the original M2 designs, which were all for N3, and not easily backportable to N5/N4. Thus we got the decidedly so-so M2, with only modest performance improvements, as all the major IP blocks were based on M1 IP. Obviously, that killed their chance at a Mac Pro M2, since the NoC and (potential, as it's not released) UltraFusion tech is barely-tweaked M1 generation.

TSMC flagged the N3 timeline that was troublesome for 2022 products pretty far back.
In July 2019 TSMC had not even completely finalized the N3 design roadmap. So it is hugely doubtful that there were highly optimized software tools to do 'eyeball deep' N3-specific design layout by the end of 2019.

" ... to hear the announcement that development of TSMC’s 3nm node is well underway, something the company publicly confirmed last week. As it appears, the manufacturing technology is out of its pathfinding mode and TSMC has already started engaging with early customers. ..."
https://www.anandtech.com/show/1466...t-progress-going-well-early-customers-engaged


And for the software EDA tool delay for N5, back in Q4 (October) 2018:

"... EDA tools for the N5 node will be ready in November, so chip designs may be well underway now. But while many foundation IP blocks for N5 are ready today, there are important missing pieces, such as PCIe Gen 4 and USB 3.1 PHYs, which may not be ready until June. For some of TSMC's clients the lack of these pieces is not a problem, but many will have to wait. ..."

At-risk production didn't start until Q2 2019.




The M2 design pipeline needed to start by the end of 2019. M1 should have been wrapping up bugs and certifications at the end of 2019 into very early 2020.

By about a year later (August 2020), TSMC had already pegged N3 as 2H 2022 in open/public discussion. Very likely those with NDA agreements with TSMC found out about that months earlier.

https://www.anandtech.com/show/1602...technology-details-full-node-scaling-for-2h22

August 2020 is only about 6 months after major lockdowns started happening. In the chart, N5 was scheduled to land in Q2 2020. If the "Moore's law" cycle is 18 months, how would N3 land in 1H 2022? It wouldn't. Minimally it would land maybe end of Q3 if everything went exactly right and N5 started at the very start of Q2. Also, N5P is pegged as 2021. Then throw on top that the bake times for N3 were going to jump up to about 4 months, and volume product on N3 had a very good chance of landing in 2023.
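The cadence arithmetic above can be checked with simple date math. Every input here is this post's assumption (an 18-month cycle, a roughly 4-month N3 cycle time), not a confirmed TSMC figure:

```python
# Date arithmetic behind the argument above; all inputs are the post's
# assumptions (18-month cycle, ~4-month N3 wafer "bake" time), not
# confirmed TSMC figures.
from datetime import date, timedelta

MONTH = timedelta(days=30)

n5_hvm = date(2020, 4, 1)          # N5 volume production, ~start of Q2 2020
n3_naive = n5_hvm + 18 * MONTH     # naive 18-month cycle from N5
print("18-month cycle would put N3 at:", n3_naive)   # late Sep 2021

# TSMC itself said 2H 2022, so the real cycle was well over 18 months.
# Start risk wafers at the very start of 2H 2022 and add the bake time:
first_product = date(2022, 7, 1) + 4 * MONTH
print("earliest volume product:", first_product)     # any slip pushes into 2023
```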


Just how much deeply finished work would Apple have needed to backport, and could that get done in 3-5 months, even if Apple was targeting N3 (which is somewhat doubtful)?

IMHO it is more likely that M2 missed getting onto N4 than N3. Expectations for N4's arrival should have been substantially more impacted by the pandemic's arrival than N3's. N4 would have been much further past the pathfinding stage, but not close to being fully into ramp. That is about the stage where the software tool quality improvement feedback cycle would have been disrupted the most.


IMHO a far more sensible path for Apple would have been to put UltraFusion into the M1-generation SoCs and then just skip the 'feedback improvement' until M3. If there was a minor UltraFusion failure with M1, they could have folded the fix into a 'late M2' add-on if they needed a stopgap to bridge the delay gap to M3.
 