
Jmausmuc

macrumors 6502a
Original poster
I know a little bit about processor technology and have read up on the x86 and ARM architectures, but I still do not really understand what makes ARM so superior to Intel or x86 technology in general that people believe the new ARM Macs will be much better and faster than Intel-based Macs.

I understand that the advantages of ARM are power efficiency and the ability to have many more cores, but isn't Intel still better in raw power for multi-threaded operations?
Will ARM at first be a replacement for Intel's mobile processors, which are arguably already worse in many ways than an A12Z or A13, or will Apple also be able to create a processor that can beat i9 and even Xeon processors?
Can we really expect a "night and day" difference?


By the way - just yesterday, it was announced that the world's fastest supercomputer is now ARM-based. It uses ARM processors made by Fujitsu: https://www.arm.com/company/news/2020/06/powering-the-fastest-supercomputer
It fits perfectly with Apple's announcement.
 
Who says it is superior? It's different; it should provide some performance, thermal, and other advantages over Intel/AMD, but it is more about bringing all Mac devices entirely under Apple's control, for many reasons, some good, others not so much.
 
Who says it is superior? It's different; it should provide some performance, thermal, and other advantages over Intel/AMD, but it is more about bringing all Mac devices entirely under Apple's control, for many reasons, some good, others not so much.

Not necessarily me, but a lot of people in the comments seem to be expecting huge performance gains.
Somebody said that an A12Z matches a 3.5 GHz Skylake Intel chip at 1/6 the power consumption.
Is that really true, or only for specific benchmarks and circumstances?
How hard will it be to scale this to match Xeon chips and beyond?
 
When you build your own chip, to work only on your hardware and only with your software, there are going to be gains in performance, no doubt about that. That comes simply from the control you have over it.

The problem with claims about an A12Z matching a 3.5 GHz Skylake is that people are comparing tablet performance; nobody actually knows yet how any Apple chip will perform in a desktop, because right now the actual desktop chip is being kept under wraps.

Until there is an ARM device running macOS, any comparisons are going to be flawed.
 
Not necessarily me, but a lot of people in the comments seem to be expecting huge performance gains.
Somebody said that an A12Z matches a 3.5 GHz Skylake Intel chip at 1/6 the power consumption.
Is that really true, or only for specific benchmarks and circumstances?
How hard will it be to scale this to match Xeon chips and beyond?
Many people expect massive performance gains because they are assuming a rather linear power increase once there are fewer thermal constraints, higher voltages, higher clocks, etc.
So far we've seen no evidence of how this scales, especially to the high end.

And comparing Geekbench or other benchmarks isn't real life; there's no way my hacked Nintendo Switch is comparable to my 2015 rMBP in single- and multi-core tasks in the real world, but on benchmark numbers they are comparable.


Personally, given the lack of evidence from Apple at this point, I'm cautiously pessimistic until we see more data on Apple Silicon at the high end.
I have more faith that my Threadripper with one or several high-end GPUs can emulate an ARM Mac better than an ARM Mac can emulate the high-end PC tasks I need to do.

I've seen ARM emulators on the PC side and can gauge what kind of PC hardware is needed to emulate what, but we haven't seen the performance of the Rosetta 2 translation layer or Apple's full OS virtualization.
 
What sort of high-end PC tasks?

Once upon a time, 4K H.265 required a 4-core i7. And then CPU manufacturers decided that the algorithm was important enough to merit custom silicon. Apple will be in a position to decide which tasks are important and which ones aren't.
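To make that last point concrete: once the dedicated silicon exists, the operating system simply reports it to applications. A minimal Swift sketch of how an app might check for a hardware H.265 decoder; VTIsHardwareDecodeSupported is the real VideoToolbox call, the rest is just illustration:

```swift
import CoreMedia
import VideoToolbox

// Ask VideoToolbox whether this Mac has a dedicated hardware HEVC (H.265) decoder.
// Where that block is missing, playback falls back to a much more power-hungry
// software decode on the general-purpose CPU cores.
let hevcInHardware = VTIsHardwareDecodeSupported(kCMVideoCodecType_HEVC)
print("Hardware HEVC decode available: \(hevcInHardware)")
```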
 
Who says it is superior? It's different; it should provide some performance, thermal, and other advantages over Intel/AMD, but it is more about bringing all Mac devices entirely under Apple's control, for many reasons, some good, others not so much.

this

Apple can optimize better
 
The instruction set wars finished long ago; this hasn't mattered since the early '90s or even before that.

Apple picked ARM because they have an architecture license and have built ARM chips for their other devices. At this point, the performance of a processor depends on the workload, long-term investments, technology node (7nm, 10nm, etc.), and the know-how of the chip manufacturer. From this perspective, in 2020 you could expect some 20-30% better performance per watt from Apple chips than from comparable Intel processors in certain cases, simply because Intel has been stuck at 14nm for years, and maybe somewhat longer battery life, again in certain workloads. If I were you, I'd temper my expectations. There's no free lunch in semiconductor manufacturing.
 
Just look at this from a performance-per-watt perspective and you can see how great Apple has become at designing their own SoCs.

The average power usage for the AMD Ryzen 3900X and Intel Skylake 9900K is left out for good reason.

With the move to ARM for their full line-up, Apple is no longer constrained by the low TDP of a smartphone or tablet form factor, and that gives them tremendous freedom to further optimise Apple Silicon for their hardware and software needs.

[Attached charts: spec2006-a13.png and spec2006-global-overview.png (SPEC2006 benchmark results)]
 
Many people expect massive performance gains because they are assuming a rather linear power increase once there are fewer thermal constraints, higher voltages, higher clocks, etc.
So far we've seen no evidence of how this scales, especially to the high end.

And comparing Geekbench or other benchmarks isn't real life; there's no way my hacked Nintendo Switch is comparable to my 2015 rMBP in single- and multi-core tasks in the real world, but on benchmark numbers they are comparable.


Personally, given the lack of evidence from Apple at this point, I'm cautiously pessimistic until we see more data on Apple Silicon at the high end.
I have more faith that my Threadripper with one or several high-end GPUs can emulate an ARM Mac better than an ARM Mac can emulate the high-end PC tasks I need to do.

I've seen ARM emulators on the PC side and can gauge what kind of PC hardware is needed to emulate what, but we haven't seen the performance of the Rosetta 2 translation layer or Apple's full OS virtualization.

I think some of this aforementioned expectation is driven by the concept of how hard it would be to sell machines that were objectively slower than the current state of affairs. By and large, Apple doesn't use the bottom-of-the-line parts. This MBP16 I'm typing on is 8c/16t, which is pretty respectable. There is no Celeron Mac, or Atom Mac.

So, how do Tim and co. take the stage with confidence if, within the next two years, they're going to replace this machine with one that isn't as performant? Ignoring things like keyboards, Touch Bars, etc., each generation has gradually (and generally) gotten more performant. Yes, there are some outliers: dGPUs came and went for some models, some models went from higher-wattage CPUs to lower ones, etc., but nothing outright dramatically worse. I fully expect they've run every iteration through a skunkworks at this point, and have chipsets and/or Apple Silicon "platforms" identified that will perform at "Good", "Better", and "Best" levels, for both laptop and desktop configurations.

A pretty good win here (IMHO) would be to come within reasonable parity of the current state on generic benchmarks, but for specific software tasks, have more efficiency (aka, this has better battery life) and better hardware support/acceleration for APIs commonly used in Mac software (aka, this feels snappier).

Apple has the opportunity to tailor and optimize both the software and hardware experience, and from what we've seen on mobile, they seem to be able to do that pretty well. That is not really something afforded to Intel or AMD with the x86 platform.
 
My superficial understanding is: Apple Silicon with ARM is inherently more efficient because of the instruction set, and draws lower power because Apple has access to 5nm fabrication; or, put another way, it's the inefficiency of x86-64 plus Intel being slow BOTH to update their chip design and to keep up on fabrication tech.

The magnitude of the technical benefits, and for which use cases, is unknown, particularly how it will be used in conjunction with a powerful desktop-class GPU, whether of Apple's own design or from a partner.

Assuming the software you need becomes optimized for the Mac, I would still speculate, in the absence of data, that the best indication of what Apple Silicon will do for the Mac is the x/y chart from the keynote: it showed Apple Silicon weighted toward desktop-like performance and beyond (but not by an order of magnitude), with notebook-like power draw.

Equivalent hardware should be less expensive for Apple, but there is no reason it will be less expensive for the end user.

Again, assuming the software you need is available, Apple laptops should perform particularly well: longer battery life, cooler running, and ever more "sleek" designs.

For desktops there will be power savings, but I do not expect Apple to charge less for performance equivalent to what's available elsewhere, and therefore for desktops the move benefits Apple more than it benefits the user, in my opinion. The desktop user loses x86-64 flexibility and likely saves no money (but will reduce their carbon footprint, which of course is non-trivial); Apple gets higher margins. For example, power supplies will be cheaper for Apple, less RAM may be needed, and of course the chips themselves are less expensive.
 
It's not superior or inferior from a technical standpoint, just different. It'll be up to Apple's engineers to design great CPUs. They've done a good job with iPad and iPhone CPUs so far. They may very well match or exceed Intel's high-performance CPUs soon. You have to imagine they think they can, if there is to be an ARM Mac Pro within two years.

It’s beneficial for Apple to have all of its devices running on the same architecture. When they control everything, they can optimize everything to run better.

However, none of this means that they won’t hit a roadblock in the future and then have Intel, AMD or someone else surpass them in performance. If that happens we’ll be back to the days of comparing a lower MHz PPC vs a higher MHz Pentium.
 
The x86 instruction set was designed for ease of assembly-language programming and a non-pipelined, multi-cycle implementation. The arm64 instruction set was designed for newer optimizing compilers, and thus does not require a bunch of power-sucking logic for instructions and difficult instruction sequences that *almost* nobody uses anymore (but that x86 *must* include for compatibility). Thus arm64 chips can spend the extra power and die area (that an x86 of the same size and wattage must waste) on stuff (hidden registers, parallel logic, etc.) that potentially makes the ARM chip run even faster.
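One way to picture part of that legacy cost, as a toy sketch only: arm64 instructions are always 4 bytes, so instruction boundaries are known up front and many decoders can work in parallel, while x86 instructions are 1 to 15 bytes, so each instruction's length has to be worked out before the next one can even be located. The Swift below is purely illustrative (the length function is a stand-in, not real x86 decoding):

```swift
import Foundation

// Toy model of instruction fetch, purely for illustration.
let code = Data([0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0A, 0x0B, 0x0C])

// Fixed-width (arm64-style): every instruction is 4 bytes, so all boundaries
// are known immediately and can be decoded side by side.
let fixedBoundaries = Array(stride(from: 0, to: code.count, by: 4))

// Variable-length (x86-style): the boundary of instruction N+1 depends on first
// decoding instruction N. This length function is a placeholder, not real x86.
func toyLength(at offset: Int, in bytes: Data) -> Int {
    1 + Int(bytes[offset] % 3)
}

var variableBoundaries: [Int] = []
var offset = 0
while offset < code.count {
    variableBoundaries.append(offset)
    offset += toyLength(at: offset, in: code)
}

print("fixed-width boundaries:     \(fixedBoundaries)")
print("variable-length boundaries: \(variableBoundaries)")
```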
 
It goes far beyond iDevices and Macs.

At some point, Apple will likely kick Intel/AMD to the curb in their datacenters. The days of big server iron or even off-the-shelf blade servers are fading into the sunset. Companies like SuperMicro (just an example) will be left staring at a 2019 purchase order from Apple and wondering if it was their last.

They will replace Intel/AMD/whoever hardware with their own homegrown Apple Silicon-powered devices and achieve better performance-per-watt ratios. That will eventually mean that Apple will pare down their contracts with outsiders like Google, Amazon EC2, Microsoft Azure, whatever.

Apple will be able to run more of their service infrastructure on their own systems and they won't have wasted circuits sitting around doing nothing. And they can save a bunch of electricity by balancing low-power cores with high-performance cores, just like they started doing on their A-series SoCs years ago.

A lot of this isn't pure performance but it is performance-per-watt. Johny Srouji highlighted this key point in the first minute of his appearance during Monday's keynote.
 
I understand that the advantages of ARM are power efficiency and the ability to have many more cores, but isn't Intel still better in raw power for multi-threaded operations?

That's not how it works. There are no real inherent advantages or disadvantages to the instruction sets. ARM is easier on the instruction decoder, which can simplify that part of the chip, and that's what historically gave it an advantage in low-power scenarios. People talk about RISC and CISC, but this distinction long ago lost its meaning with modern CPUs.

The advantage of Apple's ARM CPUs is that they are very fast and efficient. They can perform more operations simultaneously than any x86 desktop CPU, they have humongous caches, they are asymmetric (specialized cores for performance and power efficiency), and of course Apple has an entire bag of goodies built around them (power management, GPU, ML hardware accelerator, etc.).
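That asymmetry is mostly invisible to applications; the usual way code influences it is by tagging work with a quality-of-service class, which the scheduler can then steer toward performance or efficiency cores. A rough Swift sketch using plain Grand Central Dispatch, nothing exotic:

```swift
import Dispatch

// QoS classes are the main hint application code gives the scheduler. On an
// asymmetric design, user-facing work tends to land on the performance cores
// while background work can be parked on the power-efficient cores.
DispatchQueue.global(qos: .userInitiated).async {
    // latency-sensitive work, e.g. decoding the photo the user just opened
}

DispatchQueue.global(qos: .background).async {
    // deferrable work, e.g. re-indexing a library while the machine is idle
}
```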
 
It goes far beyond iDevices and Macs.

At some point, Apple will likely kick Intel/AMD to the curb in their datacenters. The days of big server iron or even off-the-shelf blade servers are fading into the sunset. Companies like SuperMicro (just an example) will be left staring at a 2019 purchase order from Apple and wondering if it was their last.

They will replace Intel/AMD/whoever hardware with their own homegrown Apple Silicon-powered devices and achieve better performance-per-watt ratios. That will eventually mean that Apple will pare down their contracts with outsiders like Google, Amazon EC2, Microsoft Azure, whatever.

Apple will be able to run more of their service infrastructure on their own systems and they won't have wasted circuits sitting around doing nothing. And they can save a bunch of electricity by balancing low-power cores with high-performance cores, just like they started doing on their A-series SoCs years ago.

A lot of this isn't pure performance but it is performance-per-watt. Johny Srouji highlighted this key point in the first minute of his appearance during Monday's keynote.

As far as I know, Apple has not revealed any intention to become a hardware supplier for global markets; they are building out their own ecosystem instead.
 
My thoughts:

Apple has never been concerned with ultimate performance. Their goal has been to offer
a balance between performance and form factor/noise (i.e., performance/watt, since it's TDP that determines the needed size and fan noise).

Thus when Apple could get ultimate performance without needing more power, they delivered it -- for a while their notebooks offered faster SSD performance than anyone else's. But where ultimate performance required high TDP, Apple has demurred (Apple consumer products haven't featured the highest-end graphics chips or Intel's extreme CPUs, and their notebooks also haven't had 4K screens because of the power draw).

Given this, there's a reasonable expectation that AS (Apple Silicon) will offer better perf/watt than what Intel offers, because that's clearly been a design goal for Apple, and they wouldn't have abandoned Intel without knowing they could reach that design goal. Plus the graphic in their keynote emphasized better perf/watt than current designs.

But whether it will outperform Intel desktop chips at the high end for single-core performance, or NVIDIA desktop/enterprise GPUs at the high end for graphics performance, is an open question. We won't really know until we see independent, real-world benchmarks (as opposed to synthetic ones) on applications we actually use.

AS is certainly scalable to higher powers. But when scaling to higher powers, does it lose its perf/watt advantage? That's what we don't yet know. If using AS to offer, in an iMac, the single-core performance of the fastest overclocked Intel extreme desktop processor requires the TDP of that processor, then we won't see AS with this performance in the iMac. Likewise, if offering, say, NVIDIA 2080 Ti performance in an iMac requires NVIDIA 2080Ti TDP (~280W), then we probably won't see AS graphics with this performance in the iMac.
 
I know a little bit about processor technology and have read up on the x86 and ARM architectures, but I still do not really understand what makes ARM so superior to Intel or x86 technology in general that people believe the new ARM Macs will be much better and faster than Intel-based Macs.

I understand that the advantages of ARM are power efficiency and the ability to have many more cores, but isn't Intel still better in raw power for multi-threaded operations?
Will ARM at first be a replacement for Intel's mobile processors, which are arguably already worse in many ways than an A12Z or A13, or will Apple also be able to create a processor that can beat i9 and even Xeon processors?
Can we really expect a "night and day" difference?


By the way - just yesterday, it was announced that the world's fastest supercomputer is now ARM-based. It uses ARM processors made by Fujitsu: https://www.arm.com/company/news/2020/06/powering-the-fastest-supercomputer
It fits perfectly with Apple's announcement.

This is somewhat of a loaded question...

Intel chips have a number of issues that have been band-aided together to give us the chips we have today. The promise of ARM is that it gives Apple somewhat of a clean slate in terms of how they design Macs in general.

The T1 and T2 chips were initial stabs at doing things that the Intel processors weren't necessarily doing, or weren't doing well. Things like security, hardware-accelerated video, encryption, and I/O are all powered by the T2 chip.

Largely, Intel has been slow to adopt hardware features that exist in mobile, namely AI/ML. By offloading those from the processor and using either a dedicated hardware block or the GPU for those types of functions, the CPU cores are left free to execute highly serialized instructions.
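On the Mac side that kind of offload is already exposed through frameworks like Core ML. The sketch below uses the real computeUnits setting; the model path is just a placeholder for whatever compiled model an app might ship:

```swift
import CoreML
import Foundation

// Tell Core ML it may run the model on the GPU or Neural Engine rather than
// the general-purpose CPU cores.
let config = MLModelConfiguration()
config.computeUnits = .all          // CPU + GPU + Neural Engine
// config.computeUnits = .cpuOnly   // force the CPU cores, useful for comparing power draw

do {
    let url = URL(fileURLWithPath: "/path/to/SomeModel.mlmodelc") // hypothetical path
    let model = try MLModel(contentsOf: url, configuration: config)
    print("Loaded model with compute units: \(config.computeUnits.rawValue)")
} catch {
    print("Could not load model: \(error)")
}
```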

Sure, you can make the argument that an A12Z chip isn't as powerful as a Xeon. But how would, say, 8 high-performance CPU cores paired with 8 or 12 low-power cores, twice the ML hardware blocks, and twice the GPU of the A12Z compare? That's a very different story. The more interesting thing is that a chip like that could very well end up in something like a MacBook Pro...

I don't think anyone is necessarily saying that ARM is better than x86/x64. Though comparing how CISC works to how RISC works is always a fun conversation.

Regardless of whether ARM is superior to x86/x64, the fact is that Intel has been behind for a long time. Intel has been notoriously slow to shrink its manufacturing processes.

I think when Apple showed off the MacBook, they were showing what could be done with Intel chips that they expected to continue to shrink and become more power efficient.

That clearly changed after Intel delayed shrinking their process.
 
The instruction set is not necessarily superior, nor is RISC over CISC.

What makes ARM "better" than x86 really has more to do with market forces than raw performance.

x86 has two big licensees, Intel and AMD, and VIA has no real presence. AMD has done wonders with the Zen architecture, but that's mostly because of TSMC's superior manufacturing and AMD's unique chiplet design. Not to downplay the technological marvel they achieved, of course.

The ARM instruction set is licensed out to whoever, and is extendable however they want. This allows companies like Samsung, Qualcomm, Google, Amazon, Apple, et cetera to design and extend processors to suit their needs.

Using Apple as an example, they've added all sorts of things to their processors, like the Secure Enclave, neural processing, and new video codecs, and they've done so faster than Intel or AMD. Things that previously had to rely on brute force can now be accomplished by adding ASICs to the processor itself.
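The Secure Enclave is a good example of what that looks like from the software side. A small CryptoKit sketch (the API is real, the example is trivial) where the private key is generated and used inside the enclave rather than in main memory:

```swift
import CryptoKit
import Foundation

// Keys created this way live inside the Secure Enclave; the app only ever
// holds an opaque reference, never the raw private key bytes.
if SecureEnclave.isAvailable {
    do {
        let key = try SecureEnclave.P256.Signing.PrivateKey()
        let signature = try key.signature(for: Data("hello".utf8))
        print("Signature is \(signature.rawRepresentation.count) bytes")
    } catch {
        print("Secure Enclave operation failed: \(error)")
    }
} else {
    print("No Secure Enclave on this machine")
}
```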

On top of this, modern programs are built to be as platform-agnostic as possible (with all the benefits and drawbacks that entails) and rely on compilers to do the optimization for them. So long as the compilers for Apple Silicon are good, we're likely to see great speedups without major efficiency hurdles.
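For the rare cases where code does care about the architecture, the compiler exposes it directly; the rest of the time the same Swift source just gets built for arm64 or x86_64. A trivial sketch:

```swift
// Most source never mentions the architecture at all; the compiler emits arm64
// or x86_64 as appropriate. When you genuinely need per-architecture behaviour,
// conditional compilation is the escape hatch.
#if arch(arm64)
print("Built for Apple Silicon")
#elseif arch(x86_64)
print("Built for an Intel Mac (or running under Rosetta 2 translation)")
#endif
```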

I believe if Apple pulls off this transition, other companies like Qualcomm, Samsung, and potentially others are gonna try and up their game too. They just haven't had the clout or design expertise that Apple has built up.
 
In a nutshell, perf per watt due to efficiency.

CISC was originally preferred (and arguably correctly) because storage was expensive, and using complex instructions minimised the number of instructions needed. Now that storage cost is largely irrelevant, RISC seems the more obvious choice for performance. Of course, the biggest barrier RISC faced to becoming widespread was software compatibility.

Intel is of course a hybrid of RISC/CISC, which probably contributed to their success over the years (okay, maybe they've lagged since 14nm).

I should note Intel's problems aren't due to CISC vs RISC; as you can see, AMD managed to get to 7nm on x86 while Intel struggled. Not to go into too much detail why, but it's probably easier for AMD to go to 5nm than for Intel to go to 10nm on their high-end CPUs, due to how different their processes are.

Apple maybe could have switched to AMD, but there are some key considerations:

1. Do you switch vendors when AMD has arguably had one good CPU year?
2. There are some workflows that Intel is still better at.
3. It won't solve the issue of product cycles still being reliant on a CPU vendor.
4. Making their own CPUs costs them hundreds of dollars less per device! (Far from trivial.)

So it seems it was a good time for them to rip off the band-aid, suffer a bit with compatibility, etc., but gain a long-term homogenised ecosystem and, in a way, drive forward what maybe should have happened decades ago: moving away from x86.
 
Well, one thing to remember is that this is not a function of ARM at all. ARM is just the instruction set. Apple is one of the few companies holding an architecture license, and as such they use the instruction set (plus a lot of custom instructions they created), while the core designs and everything else are designed by Apple. And Apple built that design team over several years, in part by poaching high-end design talent from Intel (especially the group that designed Conroe), NVIDIA, and Qualcomm, among others. In fact, their A series is in some respects like a RISC version of the Core microarchitecture.
 