In contrast, while x86-64 cleaned up a lot about x86, chip designers still cite the cruft in each, especially the latter, as one reason why x86 chips are bigger and hotter than they otherwise would be.
Meanwhile, Apple uses precious chip real estate for niche applications like ProRes transcoding. ;)
I would agree however that an emphasis on tight integration between hardware and software and custom chip design is a benefit of ARM’s business model rather than ISA.
I think Apple's primary reason why they are switching to their own CPUs (which are the result of a decade of development on the mobile device side) is not so much performance, but their ability to customize them for their very specific target group. But many of the things they are doing don't easily translate to other segments. For example, they get significant performance advantages by integrating the (unified) RAM on the CPU package, but that doesn't easily scale to the amounts of RAM that are needed for workstations or servers. Similarly, while apparently a sufficiently high percentage of Apple users use ProRes to justify baking it in silicon, in the overall market that is a small niche.
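To be fair to the integration argument: from the software side the ProRes block is invisible. An app just asks VideoToolbox for a ProRes session and opts into hardware acceleration, and the OS routes the work to the dedicated engine if one exists. A rough sketch (example dimensions, simplified error handling; whether the dedicated encoder is actually used depends on the chip and OS):

```swift
import CoreMedia
import VideoToolbox

// Request a ProRes 422 compression session and opt into hardware
// acceleration; on chips with a ProRes engine the OS can route encoding
// there, otherwise it falls back to a software encoder.
let encoderSpec = [
    kVTVideoEncoderSpecification_EnableHardwareAcceleratedVideoEncoder as String: true
] as CFDictionary

var session: VTCompressionSession?
let status = VTCompressionSessionCreate(
    allocator: kCFAllocatorDefault,
    width: 3840,                      // example frame size
    height: 2160,
    codecType: kCMVideoCodecType_AppleProRes422,
    encoderSpecification: encoderSpec,
    imageBufferAttributes: nil,
    compressedDataAllocator: nil,
    outputCallback: nil,
    refcon: nil,
    compressionSessionOut: &session)

if status == 0, let session = session {   // 0 == noErr
    print("ProRes session ready (hardware if available): \(session)")
} else {
    print("Could not create ProRes session, status \(status)")
}
```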
That x86 has 90% of the market share everywhere else is a function of ARM only just starting to seriously compete outside of mobile rather than an x86 processor advantage. And indeed x86 may yet win out. An entrenched ecosystem is difficult though not impossible to overcome and as you say AMD and Intel won’t stand still.
Well, there's been talk about ARM taking over for more than a decade and it hasn't happened. Have things changed enough to make it happen now? We'll see. But I think a lot of the perception is just driven by Intel's stumbles on the manufacturing side. They still have very good designs. Next week we'll learn more about Alder Lake, which looks like the most innovative CPU they've launched in years.
 
Yes, there’s been a noticeable shift to a more provocative tone from MacRumors editors of late.

For example, look at some of the article headlines. They often seem intentionally worded to generate strong reactions in the comments, and polarise debate – instead of encouraging a rational middle ground approach.
They stir the pot when the bias is in their favor. When it’s not, they disable comments or moderate like crazy to keep it in their favor. Case in point, disabled comments on the article about daily covid testing for Apple employees.

And yes, that is worthy of debate. For anyone who disagrees, if you’re so certain that your opinion is superior, surely you wouldn’t have a problem defending it. But no, you have to hide behind the block button for those of us “inferiors” who dare to disagree with the groupthink.

It’s ok, all of the current event issues as of late will come to a head very soon and common sense will prevail. Even the most politically motivated tire once they see no return on investment, both metaphorically and fiscally.
 
Meanwhile, Apple uses precious chip real estate for niche applications like ProRes transcoding. ;)

I think Apple's primary reason why they are switching to their own CPUs (which are the result of a decade of development on the mobile device side) is not so much performance, but their ability to customize them for their very specific target group. But many of the things they are doing don't easily translate to other segments. For example, they get significant performance advantages by integrating the (unified) RAM on the CPU package, but that doesn't easily scale to the amounts of RAM that are needed for workstations or servers. Similarly, while apparently a sufficiently high percentage of Apple users use ProRes to justify baking it in silicon, in the overall market that is a small niche.

Hardly the same thing. That silicon doesn’t cause, say, the CPU to be inefficient when it isn’t even in use. ;)

Actually the business model and even things like memory fit *really* well with servers and *especially* hyperscalers. That’s why the hyperscalers are going this route. They may be using off-the-shelf ARM cores, but they can custom design the chip to fit exactly what they need, including accelerators and memory. They’re designing for themselves, so they don’t need the modularity to fit a bunch of different product lines hawked at others’ use cases.

Well, there's been talk about ARM taking over for more than a decade and it hasn't happened. Have things changed enough to make it happen now? We'll see. But I think a lot of the perception is just driven by Intel's stumbles on the manufacturing side. They still have very good designs. Next week we'll learn more about Alder Lake, which looks like the most innovative CPU they've launched in years.

Sure I’m not dismissing either AMD or Intel. And both will try to grow their custom chip divisions to match ARM as well. AMD has always had that (mostly the consoles off the top of my head) and Intel 2.0 claims to be going all in on that as well to combat the rise of ARM. But the business side is more difficult. ARM cores in addition to being more power efficient, especially in certain workloads, are also cheaper to make and to buy, and not by a small margin.

The danger for Intel is that ARM will do to them what Intel did to IBM. Remember, IBM’s position in the server, HPC, etc… markets looked similarly unassailable. Then came along new internet companies like Google who said “why buy expensive and reliable/powerful but small numbers from IBM when you can buy cheap and scale to huge numbers from Intel?” Now Google and Microsoft and Amazon are building their own processors and that’s a big threat. (People talking about MS leaving Intel in consumer products are premature; the reports are that their in-house chips are for servers, probably Azure.) Those three alone account for a massive number of server chips. This does also cause both tailwinds and headwinds for third party ARM server chips, but ultimately it’s lost market share for Intel and AMD.

Does this make anything inevitable? No, of course not. Intel has their own history of doing this to others as a cautionary warning of what not to do, and market dominance is an advantage all its own. But x86 is now actually facing a challenger that is not only putting out a competitive product technologically, but is also cheaper to make and buy and can be completely customized by the biggest (and some smaller) customers. Usually x86 has at least held the price advantage or the fabrication advantage or something. Right now, they don’t … beyond dominating the current market share. Is that a big enough broom to hold back the tide? Maybe … maybe not.

Edit: oh and yes Intel’s missteps in fabrication definitely let this happen … at least sooner than it might’ve.
 
Time is running short. Intel needs to outsource to TSMC or Samsung 5nm/3nm.
 
It’s smart not to have all your manufacturing in one basket.
Sure, if Intel can actually compete with TSMC’s current 5nm+ node (let alone the upcoming 4/3nm ones Apple will be using soon)… Samsung couldn’t and got cut out of the picture.
 
Maybe not related to this, but I'm waiting for a Mac mini with the M1 Max.
And it would be super nice to have an option for at least one M.2 SSD drive in it. That would be awesome, and pro users will appreciate it. :cool:
It's nice to have a soldered-in, super fast SSD, but a slot for a second SSD as working storage would be welcome.

What do you think? Would that be great? I have a feeling it won't happen, though.
 
Hardly the same thing. That silicon doesn’t cause, say, the CPU to be inefficient when it isn’t even in use. ;)
It increases the die size (and thus cost) and/or takes away precious room on the die that could be used for performance enhancements with broader applications, such as more CPU/GPU cores. I think something like baked-in ProRes support only works for Apple.
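To put the cost point in more concrete terms: under the usual first-order yield model (purely illustrative, where D_0 is defect density and A is die area, not anyone's actual numbers), the cost of a working die grows a bit faster than linearly with area, because a larger die means both fewer candidates per wafer and a higher chance that each one catches a defect:

```latex
Y \approx e^{-D_0 A}
\qquad\Rightarrow\qquad
\text{cost per good die} \;\approx\;
\frac{C_{\text{wafer}}}{\left(A_{\text{wafer}}/A\right)\, Y}
\;\propto\; A\, e^{D_0 A}
```

Every square millimetre spent on a fixed-function block is paid for on every chip, whether or not the buyer ever touches ProRes.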

Actually the business model and even things like memory fit *really* well with servers and *especially* hyperscalers.
I was referring to Apple's specific design and packaging. Obviously it is possible to design ARM-based server CPUs, but they don't look like the M1.

But x86 is now actually facing a challenger that is not only putting out a competitive product technologically, but is also cheaper to make and buy and can be completely customized by the biggest (and some smaller) customers.
This has been true for years. But most companies aren't Amazon or Apple and can't afford their own chip design shop, much less compete with the big guys in this space. ARM's biggest problem besides software compatibility is that there currently isn't a standardized ecosystem around it (not just the CPU itself but also platform components etc.). So unless you have the resources to develop your own custom design (plus drivers for it etc.), it isn't a viable alternative. That might change if someone (e.g. Nvidia, if the ARM acquisition goes through) develops such an ecosystem. But then it probably wouldn't be cheaper anymore.
 
It increases the die size (and thus cost) and/or takes away precious room on the die that could be used for performance enhancements with broader applications, such as more CPU/GPU cores. I think something like baked-in ProRes support only works for Apple.


I was referring to Apple's specific design and packaging. Obviously it is possible to design ARM-based server CPUs, but they don't look like the M1.

Well yeah … hence the term customization … I’m not arguing that all chips will look like M1’s … they can be targeted to your customers’ workloads. ***That’s the point***.

This has been true for years. But most companies aren't Amazon or Apple and can't afford their own chip design shop, much less compete with the big guys in this space. ARM's biggest problem besides software compatibility is that there currently isn't a standardized ecosystem around it (not just the CPU itself but also platform components etc.). So unless you have the resources to develop your own custom design (plus drivers for it etc.), it isn't a viable alternative. That might change if someone (e.g. Nvidia, if the ARM acquisition goes through) develops such an ecosystem. But then it probably wouldn't be cheaper anymore.

Ah, but that’s one of the tailwinds I was referring to: hyperscalers adopting ARM en masse (currently only Amazon, with MS and Google still in plans or rumors) drives software compatibility forward *as well as standardization*. In this case standards don’t mean the exact same elements, but rather elements connected in a regularized way. That’s why MS and Amazon, and so far everyone outside Apple, have adopted ARM’s SystemReady even in consumer chips. They’ve learned from MIPS what not to do. ;) This doesn’t increase costs but decreases them, as everything gets better with scale. ARM chips aren’t cheaper because they lack a software or standards ecosystem; they’re cheaper because, well, they’re cheaper: smaller, cheaper dies and a different business model, similar to Intel vs IBM back in the day.

That said, it’s not all roses. I did refer to headwinds as well. The biggest one is not anything you mentioned, but the fact that if the biggest customers like the ones above are making their own ARM chips for themselves, third-party server makers will have far fewer opportunities for big sales. That makes it harder to survive selling to all the smaller customers out there, who will also be buying more off the shelf with similar customization to what they might get from AMD or Intel, i.e. different SKUs targeting different workloads. That increases costs and decreases advantages (though it should be pointed out that Ampere, which operates on this principle, is currently still far cheaper than AMD or Intel). That’s the biggest disadvantage.
 
That said, it’s not all roses. I did refer to headwinds as well. The biggest one is not anything you mentioned, but the fact that if the biggest customers like the ones above are making their own ARM chips for themselves, third-party server makers will have far fewer opportunities for big sales. That makes it harder to survive selling to all the smaller customers out there, who will also be buying more off the shelf with similar customization to what they might get from AMD or Intel, i.e. different SKUs targeting different workloads. That increases costs and decreases advantages (though it should be pointed out that Ampere, which operates on this principle, is currently still far cheaper than AMD or Intel). That’s the biggest disadvantage.
I think you have actually said pretty much the same thing as I did. ;)

When it comes to customized SoCs, I believe the future is not that everyone has their own chip design, but that foundry customer interfaces, IP licensing and packaging technologies will allow much more flexible mix-and-match than is currently possible.

Regarding the hyperscalers, it will be interesting to see what Google does. They have been dipping their toes into CPU design, but a little birdie told me that they are working on some interesting new datacenter technologies with Intel. Interesting times in tech. :)
 
I think you have actually said pretty much the same thing as I did. ;)

When it comes to customized SoCs, I believe the future is not that everyone has their own chip design, but that foundry customer interfaces, IP licensing and packaging technologies will allow much more flexible mix-and-match than is currently possible.

Regarding the hyperscalers, it will be interesting to see what Google does. They have been dipping their toes into CPU design, but a little birdie told me that they are working on some interesting new datacenter technologies with Intel. Interesting times in tech. :)

Oh absolutely, and also, just because Amazon et al. make their own chips doesn’t mean they’ll abandon x86 anytime soon in their data centers either. They’ll still be buying Intel and AMD into the foreseeable future. Yeah, CPU development outside of mobile was pretty boring for most of the 2010s. I agree it, and tech in general, is about to get *a lot* more interesting.
 
Oh absolutely, and also, just because Amazon et al. make their own chips doesn’t mean they’ll abandon x86 anytime soon in their data centers either. They’ll still be buying Intel and AMD into the foreseeable future. Yeah, CPU development outside of mobile was pretty boring for most of the 2010s. I agree it, and tech in general, is about to get *a lot* more interesting.
x86 doesn't have to die for Intel to be absolutely screwed. It just has to grow slower than the costs of fabs rise. If custom silicon becomes a thing, it can cut into x86 market share, although I doubt it will replace it. And at some point, the volumes of x86 alone will not justify investment into bleeding edge fabs. That's when the fun really starts for Intel.
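Put crudely (symbols only, no real figures implied): if a leading-edge fab costs C_fab to build and that cost has to be amortized over V wafers of demand, every wafer carries roughly

```latex
c_{\text{wafer}} \;\approx\; \frac{C_{\text{fab}}}{V} + c_{\text{processing}}
```

C_fab keeps climbing each node; if V for x86 alone stops climbing with it, the per-wafer burden rises until either margins absorb it or the line has to be filled with other people's designs.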
 
x86 doesn't have to die for Intel to be absolutely screwed. It just has to grow slower than the costs of fabs rise. If custom silicon becomes a thing, it can cut into x86 market share, although I doubt it will replace it. And at some point, the volumes of x86 alone will not justify investment into bleeding edge fabs. That's when the fun really starts for Intel.

True, Intel is in a unique position, strong in many ways yet also precarious, and we'll just have to see how "Intel is now a fab for others" goes.
 
x86 doesn't have to die for Intel to be absolutely screwed. It just has to grow slower than the costs of fabs rise. If custom silicon becomes a thing, it can cut into x86 market share, although I doubt it will replace it. And at some point, the volumes of x86 alone will not justify investment into bleeding edge fabs. That's when the fun really starts for Intel.
Which, along with scaling effects, is probably a reason why they are getting back into foundry services (which, BTW, include licensing of x86 IP for custom SoCs).
 
Which, along with scaling effects, is probably a reason why they are getting back into foundry services (which, BTW, include licensing of x86 IP for custom SoCs).

There’s also a reason they got out of the foundry business. Hopefully they’ve learned from that experience!
 
In this case I don't think it would, as Intel could then reverse-engineer the M1 and modify it a little to come up with their own version. I highly doubt that Apple will reach a deal with Intel on this.
As much as I dislike Intel, I highly doubt they'd do this.

Apple could decimate them legally for such an egregious act - and they may even face criminal actions as well.

As desperate as they are, they're not insane.
 
This is not how the business world works. Here's a real example - Samsung produces the displays in each and every iPhone despite their own Galaxy phones being direct competitors. Samsung is also a sprawling company; their smartphone division is separate from their display division which is separate from their semiconductor division. Despite being in competition with Apple in the smartphone arena Samsung isn't going to steal Apple's secrets in areas where they work together because of ironclad NDAs, and because Samsung's display division would lose vital business from Apple if they breached the NDA.
I'm sorry, but this example holds no validity. Samsung is a component supplier to Apple. They are not building Apple-designed displays; Apple is buying Samsung-designed and Samsung-built displays. This provides no useful analogy to an Apple/Intel relationship where Intel would be given Apple's chip designs to be the chip manufacturer.
 
I'm sorry, but this example holds no validity. Samsung is a component supplier to Apple. They are not building Apple-designed displays; Apple is buying Samsung-designed and Samsung-built displays. This provides no useful analogy to an Apple/Intel relationship where Intel would be given Apple's chip designs to be the chip manufacturer.
It's actually a very apt analogy, given that Samsung manufactured Apple's chip designs up to the A7 before Apple switched to TSMC.

Foundries use compartmentalization and confidentiality agreements to ensure that no IP leaks to potential competitors served by the same foundry, including internal customers in conglomerates like Samsung. The Intel foundry service will presumably use the same approach. An early example is Amazon AWS, which competes with Intel in the server space but is also one of the first announced clients for their foundry services.
 
Remember, the Mac Pro professional equipment runs on a Xeon processor, which has been a tried and true server and workstation platform for many years, built to run 24x7, 365 days a year, in all kinds of conditions and environments, so Apple still has an interest in an Intel processor. Apple has come a long way with their M-series processors but has not yet reached the level of enterprise-class production and operations required by many large companies.

So if you want to play with the big boys, you've got to have hardware that can play with the big boys, with the software their IT production needs, which still means Boot Camp and Windows 10. So Apple still has a lot of work to do on their processors, and Intel and the Mac are still relevant with an Intel processor.
Sorry, what? Have you missed all the benchmarks? The M1 Max is already competitive with the Xeon platform Apple uses in their 2019 Mac Pro.
And that's a laptop chip. What do you think their bigger chip designed for the desktop will do?

We have seen how much performance and efficiency can be achieved just from having the memory next to the CPU, like an L3 cache.
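To put rough numbers on that (assuming the widely reported 512-bit LPDDR5-6400 interface on the M1 Max, back-of-the-envelope only):

```latex
6400\ \tfrac{\text{MT}}{\text{s}} \times \frac{512\ \text{bits}}{8\ \text{bits/byte}}
\;=\; 409.6\ \tfrac{\text{GB}}{\text{s}} \;\approx\; 400\ \tfrac{\text{GB}}{\text{s}}
```

which lines up with Apple's advertised 400 GB/s, and that bandwidth is shared by the CPU and GPU out of the same pool.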
 
Sorry, what? Have you missed all the benchmarks? The M1 Max is already competitive with the Xeon platform Apple uses in their 2019 Mac Pro.
And that's a laptop chip. What do you think their bigger chip designed for the desktop will do?

We have seen how much performance and efficiency can be achieved just from having the memory next to the CPU, like an L3 cache.
Yeah, but how will that scale to hundreds of gigs, which are required for many professional workstation uses (the Mac Pro supports up to 1.5 TB of RAM)? That SoC would have to be as big as a hand to carry that much RAM, not to mention the connection fabric.
 
Sorry, what? Have you missed all the benchmarks? The M1 Max is already competitive with the Xeon platform Apple uses in their 2019 Mac Pro.
There's little point in comparing it to an 8-core Xeon. I've always wondered why people buy an 8-core Xeon unless it's for ECC. Throw in a 24-core or 28-core or multi-socket with an additional 2 TB to 4 TB of RAM and a few Nvidia GPUs and Apple has a Game Boy in comparison. At that point, we're back to "... but the power consumption...".
Doesn't mean ARM can't be in the server or HPC world, just not from Apple. Nvidia will have that covered with Grace and provide the necessary software as well. Eventually that will trickle down to smaller desktops and laptops; they already have a Jetson platform which is performing very well for their use case.
 
Yeah, but how will that scale to hundreds of gigs, which are required for many professional workstation uses (the Mac Pro supports up to 1.5 TB of RAM)? That SoC would have to be as big as a hand to carry that much RAM, not to mention the connection fabric.
Could be simple: as a Mac Pro likely needs the ability to expand, you might be limited to xx GB of soldered RAM modules and expand with DDR5 sticks to get more.
But it will likely support 128-256 GB of RAM without any design constraints with just 2-4 more RAM modules on either side of the CPU, unless they stack them like HBM2/3.

I don’t think it’s logical to expect it to be limited like the SoC on their compact computers.
There's little point in comparing it to an 8-core Xeon. I've always wondered why people buy an 8-core Xeon unless it's for ECC. Throw in a 24-core or 28-core or multi-socket with an additional 2 TB to 4 TB of RAM and a few Nvidia GPUs and Apple has a Game Boy in comparison. At that point, we're back to "... but the power consumption...".
Doesn't mean ARM can't be in the server or HPC world, just not from Apple. Nvidia will have that covered with Grace and provide the necessary software as well. Eventually that will trickle down to smaller desktops and laptops; they already have a Jetson platform which is performing very well for their use case.
It’s more a comparison of what the current M1 can achieve. Nothing says they can’t use ECC memory with their Mac Pro solution or use the low-power laptop chips.

Apple has said they will replace them over a two-year period, likely ending with the Mac Pro on their own silicon.
 
Could be simple: as a Mac Pro likely needs the ability to expand, you might be limited to xx GB of soldered RAM modules and expand with DDR5 sticks to get more.
Hm, but then the external RAM no longer has the bandwidth and latency advantages, and it probably can't be used as unified memory for both CPU and GPU.
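To put rough numbers on the gap (standard JEDEC figures, back-of-the-envelope): a single 64-bit DDR5-4800 DIMM channel moves about

```latex
4800\ \tfrac{\text{MT}}{\text{s}} \times 8\ \tfrac{\text{bytes}}{\text{transfer}}
\;=\; 38.4\ \tfrac{\text{GB}}{\text{s}}\ \text{per channel}
```

so even an 8-channel workstation board lands around 307 GB/s, under a single M1 Max package, before you even get to the latency difference. The DIMMs would likely end up as a slower second tier rather than part of the unified pool.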
 