
FramboosMatroos (original poster, Aug 9, 2020):
Sorry if I'm dumb, but Apple silicon is, according to rumors, supposed to:
  • be more power efficient
  • generate less heat
  • be more powerful (an A14X better than a Core i9?)
  • be cheaper.
Then why did x86 ever become popular/the standard if Apple silicon/ARM is, I guess, better in every way?
 
Well, x86 CPUs have been - and still are - performance leaders for personal computing, and they are a standard.

High-performing ARM CPUs are a very new development. Apple managed to design a new series of devices that seem to surpass anything before them in several key areas, so they are challenging x86's superiority.
 
x86 and ARM are instruction set architectures (ISAs) that define how a CPU works at a fundamental level.

Both evolved over the years, and both have been implemented in different ways by different chip designers.

For a long time, Intel x86 processors were the most powerful and efficient, but the improvements Intel has delivered in the last five years have been marginal at best.

In the meantime, ARM evolved at a rapid pace, and Apple invested massively in research and development to design its custom ARM-based SoCs, striving to squeeze as much performance as possible out of the low-power chips that went into iPhones and iPads.

The technologies at the core of the Apple SoCs developed for the iPhone and iPad can be scaled up to build much more powerful and efficient processors for both desktops and notebooks.

And this is why, in 2020, Apple has the chance to build computers with an unprecedented level of performance at a fraction of the TDP of their Intel x86 counterparts.

One last note about price: Apple Silicon Macs have the opportunity to be cheaper not because developing an ARM-based CPU is cheaper, but because Apple designs and produces its own processors instead of paying for Intel parts, on which Intel, of course, charges a premium so that it can make a profit.
 
Well, x86 CPUs have been - and still are - performance leaders for personal computing, and they are a standard.

High-performing ARM CPUs are a very new development. Apple managed to design a new series of devices that seem to surpass anything before them in several key areas, so they are challenging x86's superiority.
Yes, true, but they have not yet proven a challenge to x86 in mid- to high-end computing devices. We will need to wait and see whether they can do it.
 
Yes, true, but they have not yet proven a challenge to x86 in mid- to high-end computing devices. We will need to wait and see whether they can do it.

I don’t think we will see many surprises there. Given how much performance they can extract from relatively low CPU frequencies, and how little power their cores consume, it all boils down to how big Apple wants to make its chips. Intel and co. need to dramatically reduce the clocks of their cores in larger clusters to prevent the chip from melting - Apple does not. Just 16 A14 high-performance cores in a package with a TDP under 80 watts, given sufficient cache and memory bandwidth, would wipe the floor with any Xeon W currently on the market.

Based on that, I fully expect sustained multi-core performance of the upcoming Macs to be excellent to the point of being boring. I am much more curious about single-core performance (i.e., max clock and power), the GPU, and the memory.
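
As a rough sanity check on that 80-watt figure, here is a back-of-the-envelope sketch in Python. The ~5 W per high-performance core is an assumption based on third-party estimates for the A14's big cores, not an Apple specification:

# Back-of-the-envelope power budget for a hypothetical 16-core chip.
# per_core_watts is an assumed figure, not an official specification.
per_core_watts = 5.0   # assumed draw of one high-performance core under load
cores = 16
print(f"CPU cores alone: ~{per_core_watts * cores:.0f} W")  # ~80 W, in line with the TDP guess above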
 
I don’t think we will see many surprises there. Given how much performance they can extract from relatively low CPU frequencies, and how little power their cores consume, it all boils down to how big Apple wants to make its chips. Intel and co. need to dramatically reduce the clocks of their cores in larger clusters to prevent the chip from melting - Apple does not. Just 16 A14 high-performance cores in a package with a TDP under 80 watts, given sufficient cache and memory bandwidth, would wipe the floor with any Xeon W currently on the market.

Based on that, I fully expect sustained multi-core performance of the upcoming Macs to be excellent to the point of being boring. I am much more curious about single-core performance (i.e., max clock and power), the GPU, and the memory.
If they are as powerful as you imply, I wonder if they will be able to run software written for x86 in emulation at similar speeds. That would be great for adoption of the technology, I would think.
 
If they are as powerful as you imply, I wonder if they will be able to run software written for x86 in emulation at similar speeds. That would be great for adoption of the technology, I would think.

I don’t think there is a simple answer to that. It depends on the software and which machines you are comparing, I suppose. I’m also very curious about that.
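
To give a flavor of what x86-to-ARM translation involves, here is a deliberately toy sketch in Python of ahead-of-time binary translation - the rough idea behind translators like Apple's Rosetta 2, whose actual internals Apple has not published. The instruction names are simplified placeholders, not real encodings:

# Toy sketch of ahead-of-time binary translation.
# Mnemonics are simplified placeholders, not real x86 or ARM instructions.
X86_TO_ARM = {
    "mov": "mov",   # register moves map almost directly
    "add": "add",
    "imul": "mul",
    "jmp": "b",     # unconditional branch
}

def translate(x86_program):
    """Translate a list of (opcode, operands) tuples, one instruction at a time."""
    arm_program = []
    for opcode, operands in x86_program:
        arm_opcode = X86_TO_ARM.get(opcode)
        if arm_opcode is None:
            raise NotImplementedError(f"no translation for {opcode!r}")
        arm_program.append((arm_opcode, operands))
    return arm_program

print(translate([("mov", "r0, 1"), ("add", "r0, r0, 2"), ("jmp", "loop")]))

Real translators must also handle differences in memory ordering, flags, and calling conventions, which is where the performance cost comes from.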
 
Sorry if I'm dumb, but Apple silicon is, according to rumors, supposed to:
  • be more power efficient
  • generate less heat
  • be more powerful (an A14X better than a Core i9?)
  • be cheaper.
Then why did x86 ever become popular/the standard if Apple silicon/ARM is, I guess, better in every way?

Intel x86 is popular due to developments going back to 1971, when Intel introduced the first microprocessor, the 4-bit 4004. Although Intel was a market leader, it faced major competition from companies you may have never heard of, like MOS Technology; companies you have heard of, like Motorola; and companies you deal with on a daily basis, like Exxon. However, IBM's choice of the 8088 - the 8-bit-bus version of Intel's 16-bit 8086 - changed the market in a major way. Business customers eclipsed hobbyists and home computer and education users. Businesses demanded IBM compatibility, and that meant Intel x86 and nothing else. IBM lost control of its own platform while Intel marketed itself to a fare-thee-well.

The microprocessor market is 49 years old. The consumer wing is what gets the publicity, but it is the tail that wags the dog; the embedded-processor market is where the real sales are. Intel, however, dominates the consumer end with x86 and x86-64, processor families with roots in the 8080. Intel has made attempts to move on, but it has found itself unable to let go of x86. x86 has had issues going back to the 386, when the heat it generated delayed IBM's release of computers featuring the processor. IBM's clone competitors redesigned the PC case to improve heat dissipation and forced IBM out of control of its own platform. Eventually IBM was forced to abandon the PC market.

Intel also had trouble with the Pentium and its flaws in floating-point math. The company had a major breakthrough in controlling heat generation back in the 2000s; this is why Apple switched to Intel at a time when IBM dismissed heat as a concern. But soon Intel was back to its old self - incremental improvements to milk the most out of x86. In the meantime, its customer base grew frustrated over the fact that those incremental improvements didn't seem like improvements at all.

In the meantime, ARM was under active development and showing dramatic improvements in technology and performance. The top supercomputing cluster on Earth is ARM-based. Users are amazed at the performance of the latest Apple processors based on the ARM ISA. The power and efficiency of ARM and its derivatives seem to have crystallized pent-up frustration with Intel.
 
After reading all this (which I found very interesting), I have another basic/stupid question.

Intel makes the chips that manufacturers put in their products; the manufacturer chooses the chips it needs to give its product the required performance, and that's basically it.

That said, I read/heard somewhere that Apple Silicon chips are not entirely an in-house Apple thing, but that Apple licenses something (the ARM instruction set?) from someone (ARM Ltd.?) and designs its processors on that basis.

What does that mean? Thanks for any reply.
 
After reading all this (which I found very interesting), I have another basic/stupid question.

Intel makes the chips that manufacturers put in their products; the manufacturer chooses the chips it needs to give its product the required performance, and that's basically it.

That said, I read/heard somewhere that Apple Silicon chips are not entirely an in-house Apple thing, but that Apple licenses something (the ARM instruction set?) from someone (ARM Ltd.?) and designs its processors on that basis.

What does that mean? Thanks for any reply.
Apple pays to use the ARM instruction set, which is basically the language the chip speaks. Its early A-series chips used modified versions of standard ARM core designs, but today's cores are fully custom-designed by Apple. So Apple builds its own custom chips, but it pays so that those chips can use the ARM instruction set.
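
If you're curious which instruction set your own machine speaks, a two-line Python check will tell you:

import platform
# Prints 'arm64' on an Apple silicon Mac, 'x86_64' on an Intel Mac or typical PC
print(platform.machine())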
 
What the OP said has only been true for the last 2-3 years at most, and some might still disagree.
That said, once expectations for laptops shift, there is little reason for x86 laptops to exist.
 
There's no such thing as Apple Silicon vs. x86; "Apple Silicon" is just Apple's fancy name for its own SoCs based on ARM. So it's really about what ARM offers that x86 can't: a fairly modern ISA without the lackluster legacy-support features that x86 carries, no need for complex instruction decoding, and more competition, since ARM is open for licensing while x86 is closed to everyone but Intel and AMD.
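
To illustrate the decoding point with a toy model (instruction lengths here are illustrative, not real encodings): with fixed-length ARM instructions every instruction boundary is known up front, while variable-length x86 has to decode one instruction before it knows where the next one starts.

# Toy model of why fixed-length instructions are easier to decode in parallel.
# Lengths are illustrative; real x86 instructions are 1-15 bytes, ARM64's are 4.

def arm_instruction_starts(code_size):
    # Fixed 4-byte instructions: every start offset is known immediately,
    # so many decoders can work on the stream in parallel.
    return list(range(0, code_size, 4))

def x86_instruction_starts(code_bytes):
    # Variable length: the start of instruction N+1 is only known after
    # instruction N has been (at least partially) decoded - inherently serial.
    starts, offset = [], 0
    while offset < len(code_bytes):
        starts.append(offset)
        length = max(1, code_bytes[offset])  # pretend the first byte encodes the length
        offset += length
    return starts

print(arm_instruction_starts(16))                         # [0, 4, 8, 12]
print(x86_instruction_starts(bytes([2, 0, 3, 0, 0, 1])))  # [0, 2, 5]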

Well, x86 CPUs have been - and still are - performance leaders for personal computing, and they are a standard.

High-performing ARM CPUs are a very new development. Apple managed to design a new series of devices that seem to surpass anything before them in several key areas, so they are challenging x86's superiority.

The only new thing about high-performance ARM CPUs is that they may become mainstream in laptops/computers pretty soon thanks to Apple, but they have existed for years in other areas: ARM supercomputers, Amazon's SoCs for AWS that trade blows with Intel's Xeons and AMD's Epycs, etc.
 
Sorry if I'm dumb, but Apple silicon is, according to rumors, supposed to:
  • be more power efficient
  • generate less heat
  • be more powerful (an A14X better than a Core i9?)
  • be cheaper.
Then why did x86 ever become popular/the standard if Apple silicon/ARM is, I guess, better in every way?
x86 has been popular/the standard since 1981, when IBM released its PC. The first Mac was introduced in 1984 and used a Motorola 68000.

The first ARM computer was not introduced until 1987, in the Acorn Archimedes. It was a 32-bit desktop computer running an OS called RISC OS. Apple's first device with an A-series ARM CPU wasn't released until 2010 (it was the first iPad). Things didn't get interesting until about three years later, when Apple's first 64-bit design shipped in the iPhone 5s.
 
The only new thing about high-performance ARM CPUs is that they may become mainstream in laptops/computers pretty soon thanks to Apple, but they have existed for years in other areas: ARM supercomputers, Amazon's SoCs for AWS that trade blows with Intel's Xeons and AMD's Epycs, etc.

ARM server chips are still generally significantly slower core for core than Intel or AMD (but they are gaining). They are used because they end up cheaper to operate. And sure, there are supercomputers - but frankly, most supercomputers suck at single-core performance. That's not what they are built to do - they are massively parallel processors. When you look at normalized per-core performance, they won't be very impressive.
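
A trivial worked example of that normalization, with made-up numbers:

# Normalizing aggregate throughput by core count (numbers are made up).
# A huge parallel machine can top the charts while each core is unremarkable.
machines = {
    "many-core ARM node":  {"gflops": 3000.0, "cores": 96},
    "8-core desktop chip": {"gflops":  600.0, "cores": 8},
}
for name, m in machines.items():
    print(f"{name}: {m['gflops'] / m['cores']:.1f} GFLOPS per core")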
 
Also, add that for a long time x86 wasn't the powerhouse CPU. It just gained an enormous market share thanks to the IBM PC (and its clones), which served a much bigger market in the corporate space.

I would claim that the Amiga and other Motorola 68000-based computers wiped the floor with x86 from the mid-'80s to the early '90s. After that, PowerPC beat x86 for a good number of years.

But because of the IBM PC's firm hold on the corporate market and x86's legacy layer (which is one of its problems today), x86 is the one architecture that has survived. It has not survived because it has always been better; it has survived because it has had a bigger market share.
 
ARM server chips are still generally significantly slower core for core than Intel or AMD (but they are gaining). They are used because they end up cheaper to operate. And sure, there are supercomputers - but frankly, most supercomputers suck at single-core performance. That's not what they are built to do - they are massively parallel processors. When you look at normalized per-core performance, they won't be very impressive.
^This. The challenge for Apple is to win the mid- to high-end consumer market, where good single-core and multi-core performance with a "moderate" number of cores (<= 16) is most important.

The absolute performance of individual cores is still lower than high-end Intel/AMD for most ARM-based CPUs, but Apple may change this.

So far, the solution from ARM-based chip vendors such as Ampere, Marvell & Amazon has been to increase core counts within comparable TDP boundaries, which is why we have 80-, 96- & soon 128-core CPUs (mostly single-threaded cores). While these compete well with Intel & AMD (especially in performance per watt and overall running costs in data centers), not all applications can effectively use large core counts, as the sketch below illustrates. So part of the solution lies in how software needs to evolve.
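
Amdahl's law makes that limit concrete: if part of a workload is serial, adding cores quickly stops paying off. A minimal sketch:

# Amdahl's law: speedup with n cores when fraction p of the work is parallel.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Even with 90% parallel work, 128 cores give less than a 10x speedup.
for n in (8, 16, 128):
    print(n, "cores:", round(amdahl_speedup(0.9, n), 2), "x")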
 
Then why did x86 ever become popular/the standard if Apple silicon/ARM is, I guess, better in every way?
You could ask the same thing about Windows vs. macOS/UNIX, and the reasons would be strikingly similar. Basically: it was better in some way (or at least cheaper) at a given point in time, so it became widely adopted, and now moving away from it would make a lot of old but Very Important Software incompatible. Which would make some managers scream in terror.
 
One other thing to consider: when the PC industry was making the jump from 32-bit to 64-bit, Intel brought its own solution, Itanium (IA-64), which was incompatible with the x86 instruction set. Meanwhile, AMD built its own solution (AMD64), which was compatible with x86. When Windows XP x64 was in beta, both Itanium and AMD64 builds were available. Intel also focused IA-64 on the server market, completely overlooking the consumer side, which is where the majority of the market's revenue comes from. This led Microsoft to officially announce in 2010 that it would stop supporting IA-64. Today, Intel licenses AMD's 64-bit instruction set across its entire lineup of processors (including Xeon). Pull up Explorer on any modern PC running an Intel CPU and do a search for "AMD64" - you'll literally see hundreds of folders with that prefix.
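
You don't even need Explorer: a few lines of Python do the same count, assuming the default Windows component-store path:

# Count Windows component-store entries whose names mention amd64.
# Assumes the default install path; run this on Windows.
import os

winsxs = r"C:\Windows\WinSxS"
hits = [name for name in os.listdir(winsxs) if "amd64" in name.lower()]
print(f"{len(hits)} entries contain 'amd64'; first few: {hits[:3]}")
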
The same focus that allowed Intel to become the main CPU player in the PC industry is what has caused it to falter in the last 5-7 years. Intel has a tendency to look at the future of the industry only from its own perspective, and it either overlooks or outright discounts competing views (i.e., AMD, ARM-based processors, etc.). Part of that is hubris: Intel has such a dominant position in the market that it doesn't view anyone as a legitimate competitor, even though it went out of its way during the 11th-gen Core announcement to throw shade at a three-year-old NVIDIA iGPU and a six-year-old AMD iGPU, as if they were the only competitors to Intel's integrated graphics.
 
It exists because of legacy and inertia. If you were designing a platform from scratch today, it would not be x86 - let's just put it that way.
 
This is like asking why we had combustion engines when electric cars are so much more efficient and reliable.
 