Apple are never going back to x86, whether it is Intel or AMD.
I wonder if they ever really left. All those iOS devices require a ton of cloud support and data centers. Is anyone here hypothesizing that Apple isn't buying a boatload of either Intel or AMD chips to fill up those huge data center buildings, but is instead scaling out using some form of ARM servers?
 
I wonder if they ever really left. All those iOS devices require a ton of cloud support and data centers. Is anyone here hypothesizing that Apple isn't buying a boatload of either Intel or AMD chips to fill up those huge data center buildings, but is instead scaling out using some form of ARM servers?

Last I heard (and I could be wrong), most of Apple's services are hosted by third parties (e.g., Azure). Datacenter hardware at scale is not really something Apple does; in fact there are surprisingly few big players who have the capacity to scale up/out to what Apple require. Even Microsoft partner with others to build out their Azure DCs in terms of the hardware platform.

My suspicion is that those services (iCloud, App Store, iTunes Store, etc.) are written in either interpreted or JIT-compiled languages on the back-end, talking to database servers running native code on the hosting platform. What that platform is (x86, ARM, Power, etc.) is likely irrelevant.

That's my thought anyway. People aren't writing back-end web-app server code in C or assembler in 2021 :D. It's more likely Swift, Node.js, etc., which are all platform-agnostic.

You'd be silly to tie yourself to a specific architecture for web services because it's such a low-margin, competitive market that you'd probably want the flexibility to run it wherever the hosting cost is cheaper (obviously assuming it meets the same SLA).
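As a small illustration of what "platform agnostic" means in practice (a minimal sketch, not anything Apple actually runs; the StoreItem type and the values are made up):

Code:
// The same Swift source builds and runs unchanged on x86_64 and arm64 hosts.
// Typical back-end request-handling code never touches the ISA underneath.
import Foundation

struct StoreItem: Codable {
    let id: Int
    let name: String
}

// Encode a response payload the way a web service handler might;
// the JSON produced is identical regardless of the host CPU.
let payload = try! JSONEncoder().encode([StoreItem(id: 1, name: "example")])
print(String(data: payload, encoding: .utf8) ?? "")

#if arch(arm64)
print("built for arm64")
#elseif arch(x86_64)
print("built for x86_64")
#endif

The only thing that changes between an x86 host and an ARM host is the compile target; the application logic has no idea which one it's running on.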
 
My suspicion is that those services (iCloud, App Store, iTunes Store, etc.) are written in either interpreted or JIT-compiled languages on the back-end, talking to database servers running native code on the hosting platform. What that platform is (x86, ARM, Power, etc.) is likely irrelevant.
Probably true, but what about XCode Cloud? If all of Apple's platforms are now ARM-native, it makes sense to use ARM servers. I think it's currently on Intel Xeons, but if Apple keeps using them, they're going to have to maintain an x86-compatible version of macOS indefinitely. Maybe they'll switch it to the Apple Silicon Mac Pro in the future?
 
Apple isn't just another customer like AMD but a critical partner. Apple consumes over 25% of all TSMC's output and virtually all of their leading edge process node from risk starts to early ramp. The latter point is critical as TSMC wouldn't be where they are today without Apple and neither would the rest of the fabless semiconductor industry.

Moving a leading edge process node into volume production is very difficult and extremely expensive. This is why companies like AMD are not the first mover on this and prefer to be a node behind. You need a company that can move a very large number of units with high margin, is very well capitalized, and has deep experience and skills to leverage the most advanced nodes. There isn't another company on the planet that can move more units, with sufficient margin, than Apple and meets the other requirements above.

Without Apple TSMC's leading edge process nodes would roll out a lot slower than they have in the past. Apple has been critical in making TSMC what it is today which has the trickle down effect of allowing companies like AMD to benefit from the rapid development of advanced nodes. TSMC would never do anything to damage this close relationship as it would be detrimental to their own health.

All the more reason why TSMC would like to court a second “customer” to play off against Apple. Just like Apple always plays second suppliers against each other. It’s like the pub scene in Inglourious Basterds - everyone has a pistol pointed at everyone else’s balls…
 
Probably true, but what about XCode Cloud? If all of Apple's platforms are now ARM-native, it makes sense to use ARM servers.

But that is a narrow sliver of Apple's "Cloud" services. For this narrow niche, then yes. Although it will need some Intel Macs because the new Rosetta doesn't do AVX. If you're trying to test the correctness of an AVX library, you're going to need a system that can execute it. But sure, iPhone app testing and most mundane Mac apps can get by without it.
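If anyone wants to see how a test harness would tell the difference, Apple documents a sysctl key (sysctl.proc_translated) for detecting Rosetta translation at runtime. A minimal sketch (the helper name is mine, not an Apple API):

Code:
import Darwin

// Returns true if the current process is running translated under Rosetta 2.
// Based on the documented sysctl.proc_translated key; the key does not exist
// on Intel Macs, in which case sysctlbyname returns -1.
func isRunningUnderRosetta() -> Bool {
    var translated: Int32 = 0
    var size = MemoryLayout<Int32>.size
    if sysctlbyname("sysctl.proc_translated", &translated, &size, nil, 0) == -1 {
        return false
    }
    return translated == 1
}

// Example: skip AVX-specific correctness tests when translated,
// since Rosetta 2 does not implement AVX/AVX2.
if isRunningUnderRosetta() {
    print("Running under Rosetta 2 - skipping AVX code paths.")
}

So a build farm can at least detect the situation and route the AVX test jobs to real Intel hardware.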


I think it's currently on Intel Xeons, but if Apple keeps using them, they're going to have to maintain an x86-compatible version of macOS indefinitely. Maybe they'll switch it to the Apple Silicon Mac Pro in the future?

XCode Cloud probably isn't a dense, multi-tenant system. A large rack of Minis is sufficient for probably the bulk of the prospective customers. They don't really need an "ARM server" solution; they just need a "Mac solution". Apple probably has some rack-model Mac Pro 2019s assigned to it, but probably not exclusively. But that isn't going to keep macOS Intel going indefinitely.

XCode Cloud probably already has M1 Minis looped in. Over an extended period of time, cloud service servers are retired for new models (a 4-5 year rotation, perhaps). Over time the Intel Macs will be rotated out, and most of the service growth that occurs before rotation starts will probably have a very high M-series mix to it.

Until Apple stops selling macOS Intel systems altogether, there is something like a 6-7 year support window for that OS branch that keeps sliding forward. When Apple stops selling them, then the "countdown" clock will start.

Since XCode Cloud is likely oriented toward renting Macs to individual developers/organizations, there is lots of incentive to move folks onto something like Minis sooner rather than later. An M2 or M3 Mini that can cover more ground in terms of customer coverage will probably get an aggressive deployment.


Yes, Apple will need a "Mac Pro" sized solution to augment that for a subset of their customers, but the way macOS is licensed for hosting, those systems aren't going to "black hole" drag all of the customers onto those kinds of systems. It isn't about putting the most customers on the fewest Macs possible. It is more skewed toward selling more Macs, even if Apple is running them. (Makes them more money also.)
 
Last I heard (and I could be wrong), most of Apple's services are hosted by third parties (e.g., Azure). Datacenter hardware at scale is not really something Apple does; in fact there are surprisingly few big players who have the capacity to scale up/out to what Apple require. Even Microsoft partner with others to build out their Azure DCs in terms of the hardware platform.

Apple's cloud services aren't monolithic.

A large bulk of the high-bandwidth services - streaming (Apple Music, Apple TV), app/OS downloads, and static website content - is "read only" data. It pays to point folks to mirrored caches that are closer to where they are. That works fine, as 100M users can all share the same copy (or copies). That is the kind of scale of data centers that Apple doesn't have and probably never should.

iCloud is probably a different story. The "at rest" storage costs are going to burn money. Properly backing it up, even more so. Even more so since lots of "freebie" data storage is handed out with iCloud (no money charged, but data stored). [There are services like SmugMug that do "value add" storage on top of Amazon/Google/etc. cloud storage, but they aren't cheap and certainly largely are not free. iCloud could be a hybrid where the backups go to Apple, or some split depending on how much data goes in/out.]

iMessage... probably not, as there is some key user state being managed here. The aggregate transaction rate is quite high, but the data volume is relatively low, and there is pragmatically no revenue being generated to pay for either of those.

Apple ID authentication and security... probably not a good idea to outsource that.
Same thing with Apple Pay (the whole stack needs to be certified to bank security levels; maybe some colocation, but not sharing hardware).


Apple has over 1,000,000 sq ft of data center space. It isn't like they have paltry resources; just not on the super-duper mega scale.



My suspicion is that those services (iCloud, App Store, iTunes Store, etc.) are written in either interpreted or JIT-compiled languages on the back-end, talking to database servers running native code on the hosting platform. What that platform is (x86, ARM, Power, etc.) is likely irrelevant.

The app implementation specifics aren't as important as how to do load balancing (virtual machine/container spin-up/spin-down), system performance tuning, etc. Stuff like iCloud Drive isn't going to be stored in a database (it's not relational data). Being able to live-migrate some services from one machine to another can mean the difference between meeting and missing required service levels.

Not that Apple is permanently stuck on Intel x86_64, but any move is probably from a big pod of X to a big pod of Y for subservice ABC.


Very high transaction rate, low-latency web services like Apple Pay and iMessage are highly unlikely to be interpreted or JIT compiled. It isn't that hard to outsource 150TB per day of grunt data movement and still do 1 billion transactions per day in house.
 
Probably true, but what about XCode Cloud? If all of Apple's platforms are now ARM-native, it makes sense to use ARM servers. I think it's currently on Intel Xeons, but if Apple keeps using them, they're going to have to maintain an x86-compatible version of macOS indefinitely. Maybe they'll switch it to the Apple Silicon Mac Pro in the future?
Doubt it

Commodity servers are almost all x86
Most hosting is on x86

They will run on whatever their host runs.

Apple will not have any mandate for any specific cpu architecture for their cloud services.
 
Doubt it

Commodity servers are almost all x86
Most hosting is on x86

They will run on whatever their host runs.

Apple will not have any mandate for any specific cpu architecture for their cloud services.

They may be doing so through a back door before too long due to their requirements about carbon neutrality for their vendors.
 
I'm sure Intel will recover. The US government simply won't let their foundry side go under, as it's otherwise an epic national security problem if the USA is reliant on third parties/other countries for its semiconductor manufacturing.

But holy hell they've been asleep for way too long to catch up inside the next 2-3 years.

Given they are setting up a 2025 target date (the next-gen EUV fabrication inflection point)... they aren't trying for 2-3 years (it is really 4-5 years).

That is actually part of the better news. 14nm was late, so they in part stuffed "extra stuff" into 10nm to quickly catch up to being out in front. It is more a "tortoise and the hare" business. That is kind of what 'tick/tock' was indicative of: control complexity on each iteration step to get to a steady flow of better outcomes. At some point, some at Intel arrogantly tossed that out the window. They stumbled hard on that, but it isn't something they had no idea about before.

FinFET is running out of steam. Gate-all-around is coming and it will be a major shift for everyone, including TSMC.
If Intel were trying to mirror Samsung and do gate-all-around before everyone else, that would be bigger potential trouble than trying to do it a couple of years later. If Intel can get incremental progress at a steady rate, that is more than half the battle. TSMC could stumble and that could open the door. There is no guaranteed supremacy to being on even footing with TSMC a couple of years out, but Intel can do a much better job of being able to take advantage of opportunities to recover. [Apple could also turn more "Scrooge McDuck" and start looking for not-quite-as-expensive process nodes to be on (e.g., heavily chopped margins on services lead to a cost-cutting round).]

For example, if there is a hiccup with next-gen EUV fabrication machines, TSMC can't outpace what ASML can deliver. We're down to just one company making the key tech. If they stumble... the whole chain slows down.

Similarly, if Intel simply outspends TSMC on next-gen EUV fab machines, TSMC can't out-produce Intel with machines it doesn't have. [Intel can do a wider and longer/deeper pipeline of pathfinding to catch up... it will cost more, but that is tractable to do. That's why largely expanding the fab-services business makes lots of sense, and it was a bonehead move not to do it before.]

It is a gross oversimplification that the US govt is going to bail out Intel. They aren't. If Intel doesn't decouple complexity and keeps making bonehead moves, the current policies won't help long term.
 
Well, if you're TSMC why would you give up any sort of hard-won advantage to a competitor with 1/3 your market cap?

Because making $30 is better than making $0 and giving $30 to your competitor.

The flaw here is the assumption that Intel only had one choice. They don't. If TSMC said "drop dead", Intel could have gone to Samsung for the extra EUV work. Nvidia seems to be doing quite well in the current GPU market with Samsung tech. It isn't like it would be impossible for Intel to build a GPU lineup if turned away at TSMC. Intel would probably have to compete more on value than performance if it went with Samsung, but it was (and is) an option.

It doesn't make lots of sense for TSMC to do custom, out-of-their-mainstream development node work for Intel (e.g., a semicustom fab process just for them). But if Intel wants to take the standard libraries and tools and submit work... that is what TSMC is: a 'fab for anyone' house. It doesn't make much sense to turn folks away or play debilitating favorites, because folks will just walk away when there are competing shop(s).

Giving more money to Samsung is only going to allow them to iterate more. They are incrementally behind, but Samsung isn't a bunch of quitters. They keep plugging away until they get it right.

It is bad enough that we're down to just one bleeding-edge fab equipment maker (ASML). I doubt the major players are going to collectively allow it to go down to just one fab contractor as well. Someone is going to push some work to Samsung. That could be Intel. [I think Intel is more keenly eyeing Samsung's place in the race than TSMC's. But if TSMC forces Intel to boost Samsung... that doesn't take a player off the table for TSMC. Actually at this point Samsung is closer than Intel.]






Especially when that competitor is also a competitor/enemy of 3 of your biggest customers (Apple, AMD and Nvidia).


There is no "enemy... kill them off" notion once you get super big. Apple was spiraling down the drain with the "Windows has to lose for us to win" mentality back in the '90s.

Intel flaked on buying enough EUV fab machines to be a super high volume player for more than several years. It was either TSMC or Samsung. It makes sense for TSMC to take the money since that stuff is extremely expensive to buy and run.
 
They may be doing so through a back door before too long due to their requirements about carbon neutrality for their vendors.
Possible, yeah, but I don't buy it. I could be wrong, but I suspect they'll just let their vendors handle that. Getting into server hardware for Apple is a huge investment and the only customer for it will be Apple. Facebook, Google, Microsoft and Amazon are already heavily invested in that area (never mind Intel and AMD) with a decade-plus lead.

Apple don't appear to have ever been interested in server hardware. Or server software (platform wise) for that matter (look what happened to macOS server :D) outside of very niche stuff for their own requirements. I just don't think they need to throw money at that market, the opportunity cost (vs. what they could be spending time and energy on instead - e.g., car, AR, next generation consumer devices, etc.) is just too great.

In my view at least.

I'm not saying Apple Silicon couldn't do it - but they're already selling pretty much everything they can make. They don't have to win that market to thrive.
 
But if Intel wants to take the standard libraries and tools and submit work... that is what TSMC is: a 'fab for anyone' house.
Neither Intel nor any other TSMC customer has to use their libraries or tools. They need merely submit a GDS or OpenAccess database which complies with TSMC’s design rules. Quite confident Intel, like other high-end customers, will create and characterize their own library.
 
Neither Intel nor any other TSMC customer has to use their libraries or tools. They need merely submit a GDS or OpenAccess database which complies with TSMC’s design rules. Quite confident Intel, like other high-end customers, will create and characterize their own library.

Seeing what Intel does here will say a lot about them. Most companies use TSMC's cells, which may be augmented with their own custom cells. There's no shortage of third-party IP vendors with substantially higher performing libraries, but they are usually a hard pass for many very good reasons. Any fab will be more than happy to take your money and run through anything that passes their PV. However, cells are highly leveraged, so they require special care and long silicon qualification. There are now so many submicron effects that design rules and SPICE won't capture. When you own the fab, like Intel does, you know unpublished details like statistical placement effects.
 
Seeing what Intel does here will say a lot about them. Most companies use TSMC's cells, which may be augmented with their own custom cells. There's no shortage of third-party IP vendors with substantially higher performing libraries, but they are usually a hard pass for many very good reasons. Any fab will be more than happy to take your money and run through anything that passes their PV. However, cells are highly leveraged, so they require special care and long silicon qualification. There are now so many submicron effects that design rules and SPICE won't capture.

Which is exactly what is supposed to separate the ASIC guys from the big boys and girls. If you don’t want to leave significant performance on the table, you better figure out how to accurately characterize a cell library, even if that means fabbing test structures to get it done.
 
Which is exactly what is supposed to separate the ASIC guys from the big boys and girls. If you don’t want to leave significant performance on the table, you better figure out how to accurately characterize a cell library, even if that means fabbing test structures to get it done.

I usually see the big boys and girls as the ones who can bring their own 112G PAM4 SERDES to school. The little kiddies use lunch money to buy from Broadcom and Mediatek.
 
Neither Intel nor any other TSMC customer has to use their libraries or tools. They need merely submit a GDS or OpenAccess database which complies with TSMC’s design rules. Quite confident Intel, like other high-end customers, will create and characterize their own library.

Speaking of custom cells, you may recall AMD's infamous TLB bug. That was caused by a std cell engineer's mis-characterization of a simple MUX, where someone thought it would be clever to have bare source/drain directly on the input pin instead of oxide. Due to the pin cap dependency on the select line, the lib file tables were incorrect, resulting in a hold violation even though PT said everything was awesome. This was back in the days of easy design rules, where you could sign your name with poly and pass DRC, due to the degrees of freedom. This was so long ago I doubt AMD cares about keeping this skeleton locked in their confidentiality closet. This should be a cautionary tale.
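For anyone who hasn't stared at a hold report in a while, the check that got fooled is roughly the standard STA hold constraint (generic, nothing AMD-specific):

t_{cq,\min} + t_{comb,\min} \ge t_{hold} + t_{skew}

Presumably something along those lines happened here: if the .lib overstates the pin capacitance on that MUX input for one select state, the tools compute a larger minimum data-path delay than the silicon actually has, so a path that really violates hold looks like it passes in PT.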
 
Speaking of custom cells, you may recall AMD's infamous TLB bug. That was caused by a std cell engineer's mis-characterization of a simple MUX, where someone thought it would be clever to have bare source/drain directly on the input pin instead of oxide. Due to the pin cap dependency on the select line, the lib file tables were incorrect, resulting in a hold violation even though PT said everything was awesome. This was back in the days of easy design rules, where you could sign your name with poly and pass DRC, due to the degrees of freedom. This was so long ago I doubt AMD cares about keeping this skeleton locked in their confidentiality closet. This should be a cautionary tale.

Actually, I have no recollection of that. After my time? Pretty sure the team I worked with wouldn’t have allowed that mistake to happen. Also makes me wonder what they were doing in PrimeTime - for hold analysis you’d better build yourself a bunch of margin into those runs.
 
As explained by Anand himself before he was abducted by the UFO


Ah, K10. Yep, after my time.

The problem you noted is a great example of the siloing that happens when management decides they’d rather throw massive numbers of interchangeable bodies at a design rather than have a few people who understand many aspects of the design (e.g., the interrelationship between the extraction flow and layout with cell characterization, and how PrimeTime uses the .lib data).
 
Perhaps a reason that we no longer have bleeding-edge chip manufacturing in the US is because of Intel. AMD sold off GlobalFoundries, which eventually didn’t have enough capital to compete - maybe they wouldn’t have had to if Intel hadn’t pulled anticompetitive crap on them. Intel also got into a big lawsuit with DEC that ended up with them taking control of DEC’s fabs.
Right, Intel is at fault for AMD's decisions. And Nvidia's. And Qualcomm's. Etc. pp. :rolleyes:

Of course, if Intel had shared their technology with others like TSMC did, instead of using it as a competitive advantage, there would be more chips fabbed in the USA at Intel’s foundries.
TSMC doesn't share technology. On the contrary.
But they thought they would never lose their lead and kept it to themselves. Now they want the US gov to fund them because they are the only US leading-edge chipmaker. It’s almost funny.
They are mostly asking the US government to step in because TSMC and Samsung receive massive subsidies from their respective governments.
 
Actually, ex-Intel employee Francois Piednoel stated: "The quality assurance of Skylake was more than a problem. It was abnormally bad. We were getting way too much citing for little things inside Skylake. Basically our buddies at Apple became the number one filer of problems in the architecture. And that went really, really bad. When your customer starts finding almost as many bugs as you found yourself, you're not leading into the right place."
That's one guy's opinion. May or may not be factual.
As for switching to AMD, their chips had the same problems the x86 architecture as a whole does - x86 runs very hot and the electrical power to computer power ratio blows goats out of a catapult.
We'll see how power-efficient x86 can be when AMD and later Intel get access to the comparable manufacturing processes that Apple can use thanks to TSMC. And M1 is primarily efficient because it is derived from a design made for phones. It remains to be seen how well it scales up to higher-performance platforms.

Then you had the fact everybody and his brother is slowly moving to ARM due to its mammoth energy savings (a big part of server cost is AC/cooling systems to deal with the heat the energy gobbling x86 CPUs produce.)
In fact, ARM has a single-digit market share and it's unclear whether that will change. Amazon's Graviton3, for example, seems rather disappointing in preliminary benchmarks.
 
M1 is primarily efficient because it is derived from a design made for phones.

That's… circular reasoning.

It remains to be seen how well it scales up to higher-performance platforms.

It already has scaled up plenty, though? There's those two rumored higher-end variants that are essentially two or four M1s Max, yes, but Apple's design already spans most of the range Apple cares about, from smartwatch to phone to tablet to laptop/AIO.

It's far more interesting whether Intel can scale down. Jury's out.

In fact, ARM has a single-digit market share

The vast majority of CPUs out there are ARM.

(edit) I guess the above statement may have been about servers, specifically?
 
That's… circular reasoning.
Like that is a surprise
It already has scaled up plenty, though? There's those two rumored higher-end variants that are essentially two or four M1s Max, yes, but Apple's design already spans most of the range Apple cares about, from smartwatch to phone to tablet to laptop/AIO.

It's far more interesting whether Intel can scale down. Jury's out.
So far Intel's efforts to scale down have not been that promising.
The vast majority of CPUs out there are ARM.
I think they were referring to the server market, which does indeed sit at 4%. Part of that is likely due to inertia, as a lot of server software is written for other CPUs.

ARM Servers: Mobile CPU Architecture For Datacentres? shows a bright future for ARM.
 
So far Intel's efforts to scale down have not been that promising.

Because it's not their primary focus. Intel and Nvidia prioritize raw performance first and power second (if not third). It guides how each proposed design change is evaluated and implemented. This is why the rumored top-spec RTX 4000 GPU may exceed half a kilowatt and Alder Lake client has ~270 watt P-states. People may not realize that Intel's E-cores, unlike Apple's E-cores, were not primarily included for power purposes but for performance. Power efficiency is just a side effect. Apple prioritizes power efficiency, then performance. It might guide decisions like what to include on the interposer and memcache size. There's no right or wrong, just a difference in market focus. BTW, this subtle but important distinction is missed by people doing product reviews.
 