You do realize that Samsung has their own foundry services, right? They operate the world’s
second largest foundry service. That's why they don't (at the moment) use Intel's. But that's expected to change in the future.

From August 2025: https://www.businesspost.co.kr/BP?command=article_view&num=409163

As Samsung Electronics chairman Lee Jae-yong visited the United States with the economic delegation to the Korea-U.S. summit, he is expected to be actively pursuing a strategic alliance with Intel to strengthen Samsung Electronics' foundry (contract semiconductor manufacturing) business.

Samsung Electronics is reportedly considering investing in the back-end packaging sector of the foundry business, where Intel is relatively strong, and in using Intel's packaging production lines in the United States. Samsung is also reportedly exploring ways to use Intel's semiconductor glass substrate technology.

Samsung Electronics has been considering an additional U.S. investment of $37 billion to build a foundry plant in Taylor, Texas, and has been working with Intel to drive investment in packaging production lines.


Have you already forgotten about Microsoft and Amazon? I told you about them using Intel's foundry services a week back.

Intel also makes SoCs for Ericsson's 5G infrastructure equipment on its 18A process.

MediaTek is another customer of Intel's.

Things don't change overnight. It will take time, but Intel is fixing the mess their foundry service was in.

I'm referring to leading edge chips, not low-risk chips sent to Intel fabs by Microsoft and Amazon in order to please the White House.

Given a free choice, Apple or Nvidia would not pick Intel or Samsung as fab partners.
 
Intel chips should happen. Apple would like to have multiple manufacturers/suppliers for its chips. Still, for the foreseeable future, the majority of chips will most likely be made by TSMC.
 
Early tests of new Intel processors show they’ve made a big jump in performance/efficiency. They still can’t match Apple M-series, but it’s hard to quantify how much of that is the process (TSMC vs Intel) and how much is architecture (Apple vs x86).

Bad news for Qualcomm and ARM Windows laptops, though.

If you can get a Windows x86 laptop with performance and battery life competitive with a Windows ARM laptop, why would anyone buy the ARM version with all its software incompatibilities?
 
I'm referring to leading edge chips, not low-risk chips sent to Intel fabs by Microsoft and Amazon in order to please the White House.
Intel's 18A node is leading edge.


Intel's 18A node isn't just about yields and density; it's also about performance. According to Taiwanese media 3C News, citing TechInsights research and calculations, the new leader in node performance is Intel 18A. On a custom scale used by TechInsights, Intel 18A scores 2.53, while TSMC N2 scores 2.27 and Samsung SF2 scores 2.19. This is all among 2 nm-class nodes, where Intel leads the category.



The 18A production node itself is designed to prove that Intel can not only create a compelling CPU architecture but also manufacture it internally on a technology node competitive with TSMC's best offerings. The node is also the first 1.8 nm-class (or, as Intel brands it, 2 nm-class) process to enter high-volume production anywhere in the world, preceding TSMC's N2 by weeks or even months.


Stop living in the last decade. Intel has.
 
To be honest, the real reason is probably instability and threats in and around chip-producing countries. It's sad that only low-end chips can be produced here. This might also account for Apple producing lower-end MacBooks with A-series chips instead of M-series; it's a smart strategy to ensure supplies for the company.
 
Intel's 18A node is leading edge.

Actions speak louder than words. Where are the Apple, Nvidia, or Qualcomm chips fabbed with 18A?

Do you think those companies rely on TechInsights to provide them info about Intel’s processes? Or maybe they’ve already seen plenty more data about it than we know publicly?
 
Not gonna buy these. Intel Macs were the worst. Many probably don’t remember it but they were running hot af and battery life was ****
Yeah because Motorola and IBM were great..... PowerBook G5!!!!!
Intel back when they switched was AMAZING.
I remember my first Intel mini running circles around previous G4 Macs.
 
Oh yes, the tariffs. We’ve seen how it has worked.
No western power will endanger their commercial relationship (and risk retaliation in rare earth export from China) to save Taiwan.
Pretty sure the US has defense treaties with Taiwan, not sure how involved they are though.

China is just sword rattling, like what most ‘major’ powers do any more. They might do something every once in a while (Russia is sort of an outlier), but for the most part they just puff up their chest and act all tough.
 
Pretty sure the US has defense treaties with Taiwan, not sure how involved they are though.

China is just sword rattling, like what most ‘major’ powers do any more. They might do something every once in a while (Russia is sort of an outlier), but for the most part they just puff up their chest and act all tough.
The U.S. has never had direct defense treaties with Taiwan; the policy has always been strategic ambiguity. Some administrations have been more overt than others one way or another, but fundamentally the policy of ambiguity has never changed. The mere threat the U.S. *might* get involved has been enough.
 
Oh yes, the tariffs. We’ve seen how it has worked.
No western power will endanger their commercial relationship (and risk retaliation in rare earth export from China) to save Taiwan.
This is just entirely, and completely false. The only thing preventing a Chinese takeover of Taiwan is the threat of a western (American) response, and yes the U.S. would respond in some way; most security scholars believe via a blockade rather than any sort of direct attack.
 
The U.S. has never had direct defense treaties with Taiwan; the policy has always been strategic ambiguity. Some administrations have been more overt than others one way or another, but fundamentally the policy of ambiguity has never changed. The mere threat the U.S. *might* get involved has been enough.
Thanks. Like I said, I thought they had treaties, but wasn't sure how involved they were.
What I was probably thinking of was the military equipment that we sell to Taiwan.
Maybe it's some of the other countries in that area that have defense pacts.
I know if the USA steps in, Australia I believe has made commitments about joining, Korea would most likely step in, and Japan would if its constitution gets changed to allow it.
 
They absolutely should. They have multiple suppliers for screens, which allows them to set a very high standard and drop the worst performer. Being fully reliant on TSMC pushes pricing and quality out of Apple’s control.
 
I wish this A.I. craze would settle down, or at least become more efficient. Why do these servers require so much silicon and RAM? Can we not find a way to make more powerful, simple, efficient models that run well on current hardware instead of wrecking the market? This all seems really silly.

It depends on what you are doing. It takes many orders of magnitude more computing power to build models than to run them. For example, building an LLM can require thousands of GPUs operating for weeks to months. It can take on the order of 10^26 floating-point operations (FLOPs), that's 100 septillion operations. On the other hand, using the resulting model to respond to a single request may require only one GPU operating for a few seconds, executing only a few billion or trillion FLOPs. So, somewhere on the order of 10^14 to 10^17 times more compute to build than to run.
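The back-of-envelope arithmetic above can be sketched in a few lines. The figures are illustrative order-of-magnitude assumptions from the post, not measurements of any particular model:

```python
# Back-of-envelope comparison of training vs. inference compute for an LLM.
# Both figures are illustrative order-of-magnitude assumptions, not measurements.

TRAINING_FLOPS = 1e26    # total operations to build a frontier-scale model
INFERENCE_FLOPS = 1e12   # operations to answer one request (~a trillion FLOPs)

# Ratio of build cost to per-request run cost
ratio = TRAINING_FLOPS / INFERENCE_FLOPS
print(f"Training uses ~{ratio:.0e} times the compute of one inference")
```

With a per-request cost of a few billion FLOPs instead of a trillion, the ratio grows by another three orders of magnitude, which is why the gap is quoted as a range rather than a single number.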

The disparity gets worse when you consider power. Running an LLM on your phone or Mac requires only ambient cooling or a small fan. (Macs especially are amazingly power efficient compared to NVIDIA GPUs.) The data centers housing those thousands of GPUs require extensive cooling systems, which consume a lot of energy on top of what it takes to run the GPUs themselves.
 