Maybe I'm looking at this from the wrong direction, but it says UP TO 20%. It needs to be qualified so we're comparing Apples to Apples.
Single core, for example, goes up by 11.56%, and whilst it's a little simplistic, 3.2GHz increased by 9.1% gives you the 3.49GHz the M2 is running at. So how much is down to architectural changes and how much to just increasing the clock?
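
To make that split concrete, here's a rough back-of-the-envelope using just the numbers quoted above (same simplification: treat the score as clock times everything else):

```python
# Rough decomposition of the quoted M2 single-core gain into clock vs. the rest.
# Inputs are the figures quoted in this thread, not official Apple numbers.
m1_clock = 3.2     # GHz
m2_clock = 3.49    # GHz
single_core_gain = 0.1156   # ~11.56% higher single-core score

clock_gain = m2_clock / m1_clock - 1                            # frequency alone
residual_gain = (1 + single_core_gain) / (1 + clock_gain) - 1   # left for IPC/architecture

print(f"clock contribution: {clock_gain:.1%}")      # ~9.1%
print(f"residual (IPC etc): {residual_gain:.1%}")   # ~2.3%
```

On that crude split, only a couple of percent is left over for everything that isn't clock.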

There's also the issue that you only see the improvements you measure...

The biggest jump by far in performance of the A15 vs A14 was omnetpp: 37%.
Why is that interesting? Because omnetpp is very sensitive to TLB performance.

So that's one datapoint that TLB processing has improved on the A15.
A second datapoint is a set of recent Apple patents that describe various improvements to a TLB including
- substantially faster invalidation of multiple pages (meaning, among other things, faster virtual machine teardown)
- provision for what are essentially large pages (I haven't yet read enough to see if these are ARM large pages or COLT-style coalesced pages).

Point is: something like TLB handling makes the machine faster, but for very specialized purposes. If you aren't actually testing virtual machine metrics, or code that is known to have a massive TLB footprint, then you won't see these improvements.
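
For a sense of what "massive TLB footprint" code looks like: anything that touches far more pages than the TLB can map. Here's a toy sketch (Python purely to show the access pattern; a real microbenchmark would be written in C and timed carefully, since interpreter overhead would swamp the actual TLB cost here):

```python
import random

# Touch one byte in each 16KB page of a 1GB buffer, in random order.
# 65,536 distinct pages is far more than any L1/L2 TLB can map, so on real
# hardware this access pattern is dominated by TLB misses and page walks.
PAGE = 16 * 1024          # Apple Silicon uses 16KB pages
NPAGES = 64 * 1024        # 64K pages * 16KB = 1GB working set
buf = bytearray(PAGE * NPAGES)

order = list(range(NPAGES))
random.shuffle(order)     # random order defeats prefetching and page locality

total = 0
for i in order:
    total += buf[i * PAGE]   # one touch per page
```

That miss-on-almost-every-access profile is roughly what omnetpp stresses, which is why it's such a good canary for TLB changes.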

I'm not saying A15 is awesome *because* it has a better TLB; I'm saying that A15 was a "cleanup and consolidate" chip, where the goal was to fix up a variety of minor pain points while the substantial redesign happens for the next chip. This isn't to say that the improvements (especially on the energy-saving side) were trivial; it's to say that CPU *performance* improvements were not the goal of this design.

Apple has a lot of "big iron" design concepts lined up (one is this TLB stuff, another is a new, scalable, cache protocol). But in a way it's easier to test these on a smaller chip like A15 (where they are not as essential, and where they can be hidden behind chicken bits if necessary), and then M2, rather than immediately placing them on the sort of large Ultra-style chip that they actually target
( *cough* Sapphire Rapids, 9 quarters late and counting *cough* ).
 
It looks like the majority of the target audience for the Air is better off with the original Air. They won't benefit much from the M2, since single-core performance is what really matters for them, and that's only about 12% faster; as a rule of thumb you need at least a 20% increase to notice a real-world difference.

I just bought a 2020 M1 Air for that very reason. The M2 seems like an incredibly impressive machine, but so is the M1 for quite a bit less.
 
The M2 is not using less energy. It is using significantly more. Look at the comparison graphs between the M1 and M2: at idle, the M2 is using at least 1W extra on the CPU side, and around 4W extra on the GPU side.

That's a 5W increase at idle.

These graphs are not measuring idle; they're measuring 100% load at a set power limit. Apple (or Intel or AMD or whoever makes perf/W graphs) sets a maximum power draw, runs a heavy CPU workload at 100% load, and then measures the resulting performance.

Every data point here is measured at 100% CPU load. There is no idle measurement, at least from Apple.
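
For anyone who hasn't seen how these curves get made, the recipe is roughly the sketch below. Both helper functions are hypothetical placeholders (the real knobs are vendor-specific, e.g. RAPL-style limits on Intel); the point is just that every plotted dot is a full-load run at a capped power, never an idle reading.

```python
# Sketch of how a perf-vs-power curve is built: cap the package power,
# run the same fixed workload flat out, record the score, repeat at a new cap.
# Both helpers below are hypothetical placeholders, not real APIs.

def set_package_power_limit(watts: float) -> None:
    """Stand-in for the vendor-specific power-cap knob."""
    ...

def run_cpu_workload() -> float:
    """Stand-in for a fixed benchmark run at 100% CPU load; returns a score."""
    return 0.0

power_caps_watts = [2, 3, 4, 6, 8, 10, 15, 20]

curve = []
for cap in power_caps_watts:
    set_package_power_limit(cap)   # constrain the chip, not the workload
    score = run_cpu_workload()     # CPU pinned at 100% for the whole run
    curve.append((cap, score))     # one (watts, perf) dot on the chart

# There is no "idle" point anywhere in `curve`; the lowest dot is simply
# the lowest cap the vendor chose to publish.
```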

Why is the M2 curve cut short? Perhaps it's slower than the M1 at lower power (which seems less likely), or perhaps Apple only likes to show data with "big improvements" and didn't want the M2 = M1 region on the chart (which seems a bit more likely).

That is, we can confirm that under 4W, the M2 is not any faster than the M1. Under 4W, M1 perf = M2 perf.

[Image: Apple WWDC22 chart of M2 CPU performance vs. power]


To go into more detail than is probably helpful now, most CPUs idle at very low power, much less than 1W. For example, the M1 MacBook Air (the entire laptop) can idle as low as 1.9W, and the Acer Swift 3 (again, the entire laptop) can idle as low as 1.8W, about 5% less than the M1 MacBook Air.

 
The graph is measuring CPU performance as it ramps up in power to its maximum constraint. However, right off the bat the M2 starts at a higher power consumption. Call it idle or call it the start of the test; the M2 is consuming more power.

Also, wrong: the M1 != M2 in terms of power and performance. The M2 starts off at a higher relative performance than the M1 and also starts off 1W above it. Not sure how you can equate the two?
 

Frankly, you have completely misunderstood this graph (and all perf/W graphs).

It is not a CPU "ramping up", there is no "start of test", it is not "idle". Every single test there is at 100% CPU load. This is not one test. This is 20+ tests, run at different TDPs.

The M2 does not start at a "higher power rating". Apple simply did not include the lower power draw data points for the M2. Just like M1, M2, Intel Alder Lake, AMD Zen3, Qualcomm, Arm, NUVIA, etc.: the CPU idles in milliwatts and after ~500mW (see chart below), the CPU is out of idle and running computations.

There is not a modern laptop CPU on Earth that idles at 1W or more. 1W is compute/stress-test power draw. Idle power draw is always measured in milliwatts.

To make this plainly obvious,

M1
1W - Apple refused to share (somewhere between 0% and 40%)
2W - 40% perf
3W - 50% perf
4W - 70% perf

M2
1W - Apple refused to share (somewhere between 0% and 65%)
2W - Apple refused to share (somewhere between 0% and 65%)
3W - 65% perf
4W - 75% perf
5W - 85% perf

They idle in milliwatts. That is the base power draw: all power added more than idle is directly used for compute (powering the CPU cores, the IMC, the caches, the NoC, the I/O, etc.).
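
Taking the eyeballed readings above at face value (they're estimates off Apple's chart, not official figures), the implied efficiency at each published point works out like this:

```python
# Implied efficiency at each published point, using the chart readings above
# (eyeballed estimates from Apple's graph, not official numbers). Since the
# chips idle in milliwatts, essentially all of each wattage figure is compute.
m1 = {2: 40, 3: 50, 4: 70}   # package watts -> relative perf (%)
m2 = {3: 65, 4: 75, 5: 85}

for label, points in (("M1", m1), ("M2", m2)):
    for watts, perf in sorted(points.items()):
        print(f"{label} @ {watts}W: {perf / watts:.1f} perf%/W")
```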

A more complete perf-watt graph example (note the Apple A9). Note how the performance flattens at very low power draw: CPU firms don't like to show this low-power part because it makes their product look "not as great".

The omission isn't a technical limitation; they have the data, and all modern CPUs can scale below 1W. Cutting those data points out is pure marketing.

[Image: VISC perf/W comparison chart including the Apple A9]


//

Apple's refusal to share isn't unusual, either; Intel did the same thing to Apple. It's pure marketing shenanigans. Think of it as the semiconductor version of a beauty filter: there are imperfections and flat, unflattering curves at very low power draw, so they just crop that part out.

 
Everybody remembers those graphs from Apple with a steep ramp in performance.

Single thread:
  • A8 to A9 - 55% increase
  • A9 to A10 - 40% increase
  • A10 to A11 - 20% increase
And keep in mind A10 was riding on the same 16nm process as A9.

We're just seeing smaller and smaller deltas. The kind of 20% increases we used to get in single-thread now only show up in multi-thread.
Yes, but even so, most of those improvements weren't automatically noticed on a day-to-day basis.
Sure, the A9 was better than the A8, but when the phone was released, the speed differences in most comparisons were minimal at best.
It's only years later that we truly see what those upgrades actually brought.
 
Yes, but it should also be noted that the baseline is also increasing so the net performance gain is still big. For example, if the A8 was 100, the net increase from A8 to A9 would be 55, while A9 to A10 net increase would be 62, despite the lower percentage.
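
Spelling that compounding out with the same numbers (A8 normalized to 100):

```python
# Compounding the quoted generational gains, with the A8 normalized to 100.
a8 = 100
a9 = a8 * 1.55    # +55% -> 155
a10 = a9 * 1.40   # +40% -> 217
a11 = a10 * 1.20  # +20% -> ~260

print(round(a9 - a8))     # net gain A8 -> A9:  55 points
print(round(a10 - a9))    # net gain A9 -> A10: 62 points
print(round(a11 - a10))   # net gain A10 -> A11: ~43 points
```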
 
Unless somebody figures out how to use electrons or quarks.


Not within the next 5 years for mass production.


Yep, but I don’t deal in absolutes such as “You can’t go smaller than an atom”. Not yet we can’t.

Just 15 years ago, 3nm seemed a ridiculous pipe dream and Moore's law was being proclaimed dead; yet here we are. Quantum computing is already showing some promise, and research is well underway on optical CPUs. In the future, we're probably not going to be using standard interconnects in a number of areas of CPU design.
 
Well, Moore's Law has effectively been dead for the past ~4 years, as several CPU makers have struggled to deliver 2x density every 2 years.
 
I'm waiting for actual hardware to get released and some hardcore reviews, like the ones from AnandTech. However, these initial benchmarks are reassuring, as they show decent performance improvements.

What I really want is the rumored 15" Air which is a year away. I wonder if that'll be M2 or M3.
 
macOS has specific scheduling support for efficiency and performance cores; Apple built it into the OS to improve battery life.

Windows, on the other hand, didn't have that kind of support, since Intel and AMD had only been shipping CPUs with a single type of (performance) core.

Alder Lake changed that, so for Intel to reach its claims, it had Microsoft add Alder Lake-specific scheduling support so that Windows takes advantage of the new chips. As a result, AMD's CPUs initially performed terribly when those changes landed and Alder Lake was tested against Ryzen. I believe a patch has since been released and AMD's Ryzen is once again walloping Intel.

I mean, just how do you expect the kernel to know what hardware it has at its disposal? Magic?

Not trying to sound condescending but I think you just don't understand much about how modern kernels work.

Basically, every modern OS (iOS, Android, and even Raspberry Pi OS) has static hardware profiles made available to the kernel; these profiles are colloquially called Device Trees (at least on Linux).

So this isn't unique to Apple, not even remotely. As new chipsets get released, the OS receives updates that add these profiles to the kernel... How else do you think older operating systems can take advantage of newer hardware?

There's no Apple magic going on here... Just regular operating system development. The Windows kernel has profiles for most chipsets (but not all due to how ubiquitous Windows is in the modern world). Intel didn't ask Microsoft for any favors... This is just the status quo of kernel development.
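
If you're curious what one of these hardware descriptions looks like, any device-tree based Linux box exposes it to userspace. A minimal sketch, assuming a Raspberry Pi or similar (the path won't exist on a typical ACPI/x86 machine):

```python
# Read the board model string the kernel picked up from the device tree.
# /proc/device-tree is a symlink to /sys/firmware/devicetree/base and only
# exists on device-tree platforms (e.g. Raspberry Pi OS), not on ACPI x86 PCs.
with open("/proc/device-tree/model", "rb") as f:
    model = f.read().rstrip(b"\x00").decode()

print(model)  # e.g. "Raspberry Pi 4 Model B Rev 1.4"
```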
 

I'm not sure what you're on about here.

Windows 11 includes a new scheduler that works better with Alder Lake's heterogeneous cores. I presume that's what @jav6454 was saying.

There's no "magic", but Apple absolutely benefits from developing both their CPU cores and their OS in-house.
 
You'd think they would benefit more, considering they're unable to support a 2-year-old Apple chip with new features.
 
In fact I think Apple missed a trick by not adding the second external display output and 16GB RAM as standard (and therefore widely available outside of expensive build-to-order), since those are things that might have drawn M1 Air owners to trade up to the M2.
At this point, I suspect it may be more a limitation of the M1 chip design. Maybe after adding all the RAM and storage, there just isn't enough bandwidth left to support two displays.
 
Interested to see what M2 Pro and M2 Max bring.
Also very interested to see what they do with the Mac Pro. I'm still on a 2017 rMBP, so M2 Pro/M2 Max will likely be my switching point. I do wish eGPU support existed with the M-series chips; it's a damn shame they dropped that. For some it might not make a difference, but for workloads that need it, Apple's GPU, good as it may be, cannot compete.
 