Just the hardware for that is so far away from being cost-effective and affordable for virtually anyone, it isn't funny.

I don't think it's that far away. I'm already running local LLMs (gpt-oss, Gemma) and image models (z-image) on my MacBook Air 24GB. We will find ways to make models effective in smaller memory sizes, and local compute always wins in the end.
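For the curious, here is roughly what that looks like in practice. A minimal sketch assuming the mlx-lm package on Apple Silicon; the model repo name is illustrative, so substitute whatever quantized model fits your memory:

```python
# Minimal sketch: local LLM inference on Apple Silicon via mlx-lm.
# Assumes `pip install mlx-lm`; the model repo below is illustrative --
# pick any quantized model small enough for your unified memory.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/gemma-2-9b-it-4bit")

prompt = "Explain in two sentences why on-device inference preserves privacy."
text = generate(model, tokenizer, prompt=prompt, max_tokens=200)
print(text)
```

A 4-bit quantized model in the 4-12B parameter range leaves plenty of a 24GB machine free for everything else, which is why cost-effective local inference is closer than it looks.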
 
Maybe time for Apple and the US to get into chip manufacturing so they are not dependent on Asia as the demand increases.

It will take a lot of time and money to pull it off. I think there’ve been efforts to move more that way, but it’s not an easy lift.
 
Intel isn't remotely capable of manufacturing the chips Apple needs and won't be for probably a decade, if ever.

Well, it looks like Apple will be getting Intel to produce low-end M-series and A-series chips from next year.

And Intel doesn't need to produce all of the chips Apple requires. That would make Apple reliant on a single supplier, which is exactly the position they're trying to get out of.

If Intel can make chips for nVidia, they can make chips for Apple.
 
Kind of difficult to feel bad for any of them
When they knew computers, chips, etc. would play such a massive, significant role… and they only have 1-2 fabs to make the most crucial parts. They put all their eggs in one basket and refuse to build more. Incredibly expensive? Sure. Incredibly necessary? Absolutely. Apple can spend some of that multi-trillion-dollar value to alleviate their pain.
 
Maybe time for Apple and the US to get into chip manufacturing so they are not dependent on Asia as the demand increases.
Intel actually has the most advanced process with 17A
Intel isn't remotely capable of manufacturing the chips Apple needs and won't be for probably a decade, if ever.
Actually Intel’s 17A which it is using to produce its own latest chips right now is superior to TSMCs N2 which is ready around the middle of this year.
 
Intel had the undisputed lead 2000-2014. But Intel was in deep "14nm" woes by 2015-2016. So even though TSMC might have still been catching up to Intel in 2016, the writing was already on Intel's wall. By 2016 everybody knew the watch had no hands, since Intel themselves said they were dropping the tick-tock model in favor of the new tick-tock-tock one (which didn't last them long, but that's another story). So I'd peg Intel's world lead to the 2000-2015 time frame.

Probably not so coincidentally, 2014-2016 is about the time AMD was struggling with the Bulldozer development path (i.e., pre-Zen turnaround) and GlobalFoundries was wallowing backwards. It was a low point for competition from Intel's classic foes. If there is a historical track record, it is that Intel performs badly when it has little to no competition.

Intel could have stumbled on "14nm" and learned its lessons, but instead it somewhat doubled down on arrogance and tried to do 'twice as much' in one step to 'catch up'.

'Only the paranoid survive' is a somewhat useful mantra when competing to deliver better products/services for customers. It is not so useful when applied to refining monopolistic practices and maximizing office politics.
 
Intel actually has the most advanced process with 17A

Actually Intel’s 17A which it is using to produce its own latest chips right now is superior to TSMCs N2 which is ready around the middle of this year.
1. Intel don't have 17A, they have 18A; their post-18A nodes are 18A-P (ramping up this year) and 18A-PT (2028). The next major node is 14A, planned for risk production in 2027.
2. TSMC N2 is expected to beat 18A in every viable metric. Intel's post-18A nodes are expected to be interesting wrt packaging (e.g. die stacking in 18A-PT), but PT is for 2028; 18A-P is ramping up this year, but given it's a respin of 18A, there's little to praise it for.
3. Intel are desperate to prove their foundry is good, and yet they struggle to find customers for 18A/-P (not 18A-PT, which is still a ways away). Most of their potential customers are eyeing 14A, but that puts Intel in a catch-22 situation -- they need customers for 18A now in order to offer 14A in the future. So praised be tariffs, I guess.
 
and won't be for probably a decade, if ever. ...
Intel actually has the most advanced process with 17A

Actually Intel’s 17A which it is using to produce its own latest chips right now is superior to TSMCs N2 which is ready around the middle of this year.

The numbers (17, 18, 2/20A, etc.) are marketing bragging numbers. They are not actual physical measurements of anything, so saying "number X is lower than Y, so it is more advanced" isn't really technically true.

Intel's 18A isn't uniformly better than TSMC N2. There are a few things 18A is better at (peak compute at substantive power) and a few things N2 is better at (higher logic density, some low-power efficiency cases, and likely higher yields, since it makes more conservative use of features). Intel isn't 'ahead', but they are vastly far from being "10 years behind". Backside power delivery is useful, but it is also more expensive, so Intel is unlikely to win on costs.

If you're primarily looking for the lowest-cost, smallest die, then N2 will likely win for many designs. If that is the definition of "most advanced", then it is the winner. If you want to win desktop/server processor 'drag racing' contests, then 18A should be very competitive. Different customers are going to put the 'win' tag on different things; it depends on the customer's design focus.

Intel deeply underinvested in buying EUV fab machines from ASML, so it can't match TSMC's production volume either. That is the aspect that might take 6-10 years to unwind, but that is wafer volume, not the tech being used to imprint the wafers. There is an inflection point at which High-NA EUV gains traction to keep going 'smaller', but that will come after 14A (Intel), and TSMC is deferring it as long as they can.

Apple won't use 18A, in part because it had development hiccups and Apple is on some relatively hard deadline schedules. Reportedly Apple has an 18A-P design kit, which is an iterative refinement (with most of the bugs worked out). If Apple only uses the 'iterative refinement' nodes from Intel, then Intel will have trouble being 'ahead', as those usually trail the baseline node by roughly 8-12 months.


Decent chance this will repeat for 14A: it will probably be more of a bumpy ride to high volume than 14A-P/14A-E/"whatever suffix they stick on" will be. TSMC has similar issues, in that N6 was an easier ramp than N7 and N4 was an easier ramp than N5, but Intel has wider variation. As long as the variation gets incrementally smaller, that would be progress. Intel and TSMC also don't have perfectly synced-up high-volume release quarters (nor should they, for any rational technical reason).

Intel doesn't have to have the "most advanced" process in all aspects to be successful. They just need to steadily get better at delivering. If TSMC has a significant stumble, perhaps Intel could get back to the old days of having a 'more than a year' lead, but a sufficient outcome would be just being in a leapfrog contest in a substantive number of design spaces for several customers (a substantive move back from having Intel's own products do the 'pipe cleaner' work on process improvements, or worse, pay for fumbled nodes).
 
Not surprising, considering all the chips needed for AI. Apple still has huge negotiating power, and I'm not expecting an immediate hike in device prices due to the change in component pricing.
 
Next logical step is for Apple to build their own chip fabs and buy their own ASML machines. They are a trillion-plus-dollar company; I am sure shareholders will appreciate it.
 
People who keep saying there's an "AI bubble" don't know what they're talking about.

Refer to Taiwan Semi's earnings report yesterday. They cannot meet all the demand they're seeing. There's so much demand that they're increasing their 2026 capital expenditures by up to 40% vs 2025's capex to meet all that demand.


HONG KONG -- Taiwan-based TSMC, the world’s largest computer chip maker, plans to increase its capital spending by as much as nearly 40% this year after it reported a 35% jump in its net profit for the latest quarter thanks to the boom in artificial intelligence, the company said Thursday.

Taiwan Semiconductor Manufacturing Corp., a major supplier to companies including Nvidia and Apple, reported a net profit of 506 billion new Taiwan dollars ($16 billion) for the October-December quarter, a 35% surge from a year earlier, better than analysts’ estimates.

TSMC said it plans to boost its capital expenditure budget to $52 billion - $56 billion for 2026, up from about $40 billion last year.



They wouldn't be investing that much money to build out their manufacturing if there wasn't strong demand that will continue for years and years.



Taipei, Jan. 15 -- Taiwan Semiconductor Manufacturing Co. expects its sales to grow almost 30 percent in 2026 on robust AI demand and a recovery in non-AI applications, and will raise its capital expenditure by up to 37 percent to meet clients' needs.

At a closely watched investor conference Thursday, TSMC Chairman and CEO C.C. Wei forecast sales growth of almost 30 percent in 2026 in U.S. dollar terms, far higher than the 14 percent growth projected for the entire global pure play wafer foundry market.

In the wake of robust demand for AI applications, 5G services and high performance computing devices, Huang said TSMC will raise its capex for 2026 to a range between US$52.0 billion and US$56.0 billion, up 27-37 percent from US$40.9 billion in 2025, when the figure was also 37.4 percent higher than in 2024.
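The quoted ranges are internally consistent, too; a quick sanity check on the capex figures from the articles above:

```python
# Sanity check of the 2026 capex growth range quoted above (US$ billions).
capex_2025 = 40.9
capex_2026_low, capex_2026_high = 52.0, 56.0

def growth_pct(new: float, old: float) -> float:
    return (new / old - 1) * 100

print(f"2026 growth: {growth_pct(capex_2026_low, capex_2025):.0f}% "
      f"to {growth_pct(capex_2026_high, capex_2025):.0f}%")
# -> "2026 growth: 27% to 37%", matching the article's "up 27-37 percent"
```
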
Again, overly clogging the market and causing prices to surge for a product that nobody I know uses.
This means only tech industry companies are using AI, which means it is a bubble since even Dell has admitted that consumers do not care about AI at all.
 
AI didn't screw up RAM. OpenAI, the company, screwed up RAM. They went out and locked up contracts on RAM wafers in the range of 40% of the market. They don't actually make any hardware in substantive volume, but bought gobs of RAM. It is 'AI's' fault in that OpenAI has so much 'drunken sailor' spending money that it can foolishly buy stuff it may not be able to use. However, it is also not really 'AI's' fault that OpenAI is a non-profit that doesn't have to spend money responsibly. They spend money as fast as it comes in, in part because they are a non-profit; legally they can't make a profit.

The ideas that 'LLMs means only OpenAI' and 'the only AI hardware worth having is Nvidia' are not so much a bubble as an over-concentration on just two players in a broader market. Pretty good chance OpenAI's RAM buy was as much to repress competitors as it was to provision hardware builds. So we also have 'asleep at the wheel' government regulators.


(Apple only doing an 'AI query hand-off' to OpenAI also contributed to the problem. The 5-6 biggest tech companies picking the 'AI winner' for everyone is a very dubious move. Safari, by contrast, can optionally 'punt' out to 7-10 search engines.)

Google appears to be doing less work to keep the index inferencing highly tuned. The AI summaries cost 3-4x the work, sucking up resources that could have gone into better indexing. So basically a "rob Peter to pay Paul" situation. (If AI summaries get to the point where they generate 3-4x more money, then maybe that will get better.)

The AI engine attached to search summaries probably isn't the one Apple is getting. Apple needs a more heavyweight AI engine, not a lightweight one (Apple already has a lightweight one with lower power/compute demand). The missing piece for Apple is the more cloud-centric, heavier-resource model, and the better ones outside the Apple space are usually paywalled. (How Apple is going to offer a heavier model for 'free' long term, I'm not sure.)
Dude, you are talking to me about something I do not know anything about, and speaking in complete jargon where I have zero clue what you are talking about.


And I still do not know what use I would have for AI in my daily life as a regional manager over 5 retail stores. I already have Power BI and data collection done for me. I have zero use for AI, and every time I have to interact with it as a support chatbot, I feel like I am talking to a child who is woefully incapable of doing anything other than basically being a CLI that only responds to basic commands, like a dog.
 
What loyalty? Apple didn't choose TSMC because of some grandfathered agreement or romantic reason. Apple chose TSMC because they were head and shoulders above everyone else in technology. TSMC beat out Samsung for the A9. Everyone else, like GlobalFoundries and UMC, was miles behind.

If anyone helped TSMC grow the most, it was Nvidia back in 1997.

Apple approached TSMC in 2010 as a backup to Samsung.

I think you have that backwards: TSMC helped nVidia during a time of need; one could argue they saved that American tech company. No doubt that fostered a partnership, but it would be silly to ignore Apple's spending and agreements both pre- and post-pandemic.

Not discounting what nVidia did, but ignoring the billions Apple spent in the past, and has committed to in the future, is a bit odd to me.

Regardless, Apple has always had strange commitment issues.
 
Next logical step is for Apple to build their own chip fabs and buy their own ASML machines. They are a trillion-plus-dollar company; I am sure shareholders will appreciate it.

Apple doesn't have trillions of dollars. Apple stockholders in aggregate have a trillion-dollar asset. The actual company has about $57B in cash and about $112B in debt (a decent amount of that debt is hocus-pocus games to lower the tax bill, present higher margins, and attract more demand for the stock). Apple doesn't even have more cash than debt, and it is a couple orders of magnitude off of a trillion. Apple needs the cash close to the debt level to lower the interest rate it pays on the debt (since, again, the 'game' is higher margins, and higher interest payments mean lower margins). So pragmatically, the amount of 'free cash' they have is way under $50B.
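To put rough numbers on that, here is a back-of-the-envelope sketch; the cash/debt figures are the approximations above, and the market-cap figure is my own ballpark assumption:

```python
# Back-of-the-envelope: Apple's spendable cash vs. its stock-market valuation.
# All figures in US$ billions; market_cap is a rough ballpark assumption.
cash = 57
debt = 112
market_cap = 3_000

net_cash = cash - debt
print(f"Net cash: {net_cash}B")  # negative: Apple carries more debt than cash
print(f"Market cap is ~{market_cap / cash:.0f}x the cash pile")
```

The trillions belong to stockholders as an asset; they are not a pile of money Apple can spend on fabs.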

Apple going into manufacturing would probably get about exactly the opposite reaction from more than a few stockholders. Lots of folks buy Apple because it has fat margins. One way Apple gets to fat margins is that they do not actually make much of anything. They contract out all the manufacturing, with its associated high capital-equipment expense, and proactively dump all of that from their books.

Folks act as if Apple is paying 100% of TSMC's capital equipment costs. They are not. 100% of the R&D costs? They are not. And as if TSMC adds very little value to the production process. Seriously not true. A leading-edge fab with only one customer is a long-term dead fab. Moore's Law is deeply dying, and each successive leading-edge fab process is likely going to cost even more. So you have constantly, steeply increasing costs to spread over a user base of just one customer. Long term, that will fail.

Strategies that are likely to fail don't draw more stockholder demand. To be a viable leading-edge fab you need dozens of customers, not one.
 
1. Intel don't have 17A, they have 18A; their post-18A nodes are 18A-P (ramping up this year) and 18A-PT (2028). The next major node is 14A, planned for risk production in 2027.

If 14A didn't get external customers, it was on track to be cancelled.

[Of late, there are some hints from Intel that they did get somebody to commit to something. But if not, extended variations/refinements of 18A were going to be the path forward for a while. It made common sense: Intel can't keep kicking the can on customer wins to the 'next gen', and one of the blockers is not having a node that is proven mature (and by now very low risk). A sizable chunk of TSMC's business is selling stuff 3-8 years old.]


2. TSMC N2 is expected to beat 18A in every viable metric. Intel's post-18A nodes are expected to be interesting wrt packaging (e.g. die stacking in 18A-PT), but PT is for 2028; 18A-P is ramping up this year, but given it's a respin of 18A, there's little to praise it for.

Intel doesn't really classify it as a 'respin'.

" ...
  • The new Intel 18A variant, called Intel 18A-P, is designed to deliver enhanced performance to a broader set of foundry customers. Early wafers based on Intel 18A-P are in the fab now. Because Intel 18A-P will be design rule-compatible with Intel 18A, IP and EDA partners have already started updating their offerings for the variant ...."


If 18A-P has incorporated feedback from potential customers about what they want optimized, then that is likely to generate more interest, not less. One issue is timing: complex chip design is a 2-3 year process, so Intel's fab has to synchronize with when folks are starting next-gen designs. If the design kits for 18A solidified much more slowly than the design kits for N2, then it isn't so much about what the process is as about when it can deliver. Some of the folks who gave feedback have probably moved on, but other customers (maybe not as 'sexy' a name to brag about) who are not super-early adopters could show up later.


Intel previously tried to push 20A as an entry point for folks, then eventually dumped it to put more resources into cleaning up 18A. Intel is using the first entry in the design family mainly for Panther Lake (a laptop-only SoC) and Clearwater Forest (their e-core server chip), so it only had to cover a limited product set. (Had they gotten more external commits earlier, it maybe could have been tuned to a broader mix, but that is the catch-22.) Pretty good chance the initial 18A won't be used for very large dies. (Clearwater Forest uses lots of "CPU chiplets" relative to previous Intel server designs.)

Rumors peg Intel as using TSMC N2P for a chunk of the desktop Nova Lake generation's CPU tiles/chiplets. 18A-P should arrive about the same time, so elements of it could easily round out the rest of Nova Lake. Reportedly the "hub"/"SoC" tile/chiplet will be on some form of 18A exclusively (a mix of memory I/O, NPU, display processing, and LP-CPU cores). So Intel can't really 'cancel' 18A.

At least from what I've seen of the desktop Nova Lake rumors, the chiplets are relatively large (for a CPU-core-only 'chiplet'), with a massive L3 cache on a single die (somewhat trying to do what AMD does with 3D cache, just without using 3D). Sending oversized, chunky tiles/chiplets to TSMC I can see being about capacity as much as fab tech: Intel only has so much capacity, and the super-chunky chiplets are probably reserved for the server-class products with potentially higher margins.


3. Intel are desperate to prove their foundry is good, and yet they struggle to find customers for 18A/-P (not 18A-PT, which is still a ways away). Most of their potential customers are eyeing 14A, but that puts Intel in a catch-22 situation -- they need customers for 18A now in order to offer 14A in the future. So praised be tariffs, I guess.

Intel has an 'over-promise and under-deliver' trust hole to get out of. I'd be surprised if they are 'desperate', but they probably do know they have a trust problem to overcome.

If most of the customers Intel is after committed to 14A, then Intel wouldn't have enough capacity. Intel has to solicit more folks than it can get commitments from, and some of them walking away at the end isn't a huge negative, since they all couldn't have gotten slots anyway.

They need a 'goldilocks zone' customer for a baseline 'load': wafer demand that is not too big, not too small, and steady, so they can plan the other Intel usages around it, far enough ahead of time not to disrupt the flow for any of the other potential customers.

What Intel needs is for external customers to come onto the 18A family as Intel's own CPUs/GPUs gradually move along to the next node (whether Intel 14A or TSMC A16). Looking only at the current leading edge misses the view a healthy fab vendor needs to have: they'll need 2nd- and 3rd-iteration customers on 18A as well. So missing out on the customers willing to take the highest relative risk (i.e., getting 14A customers to commit in the last 3-6 months) is only part of the issue. Anyone who hasn't committed to 14A at this point probably isn't a 'first iteration' customer.

TSMC is out getting new customers for N6 all the while they are rolling out N2 (e.g., the recently announced Rivian AI chip on N6). Intel needs that kind of breadth, but is starting out with a relatively narrow set of options for customers. It is just going to take time. (And they aren't going to 'win' just with the name 'Intel'; they are going to have to earn it.)
 
Artificial intelligence is not solving a single problem for humanity. We are not living better nor working less; it’s only creating tension, problems, and difficulties in verifying what is real or not.

This whole AI era will end very badly.
 
Intel doesn't have enough capacity to do all of Apple and all of Intel (at current levels; if Intel keeps losing share, maybe). Apple is only going to be able to offload a subset there. And Intel has lots of reputation-building to do before Apple could exit TSMC.

However, Intel likely isn't 'cheaper' than TSMC. It doesn't appear that Intel is trying to deeply undercut TSMC on costs, and Apple isn't going to arm-twist them into extremely deep discounts. Fab costs are going up if Apple wants to stay close to the bleeding edge of what is possible. It is just harder to do.
They could use Intel for the secondary chips that don’t require cutting-edge manufacturing.

For the flagship chips, though, they will need TSMC.

PS: how is GlobalFoundries doing?
 
So the plan is going accordingly. It will be harder and harder to have our own full-fledged devices. Check the prices of RAM, storage, etc.; that will hit Apple sooner or later. Big companies want a future where consumers buy just terminals, and your data, your apps, and everything else live in the cloud. The only way we can help is by collectively reducing our usage of AI.
 