
nquinn

macrumors 6502a
Original poster
Jun 25, 2020
The M1 bump was one of the most impressive CPU bumps I've ever seen, and I think it came from a combination of factors:

- Apple accidentally "overshooting" with how good their mobile chip was
- It's a strange point in time where desktops and laptops have similar performance at the same core counts
- It's so quiet and cool, which is more impressive to me than the raw performance

The M2 is already giving hints that at higher performance the laptop chassis can't keep up with the extra heat, and I suspect that moving forward, even with a drop to 3nm, 2nm, or below, the sheer amount of performance simply won't be able to be cooled as passively as the M1.

I have the 16" 24-core GPU variant and it's just incredible.
 
  • Like
Reactions: Brad7
Based on the A-series SoCs, probably not year after year. I don't know about you, but I keep my MBP for 4-5 years before upgrading. I'm sure the performance increase over 4-5 years will be impressive.
 
  • Like
Reactions: PauloSera
We likely will, but not for a long time. The next time Apple shifts to a dramatically new node, instruction set (I’m sure they’ll eventually go to a proprietary one), or some combination of the two, we’ll see a bigger jump. Otherwise we can expect to see similar jumps to the annual improvements in the A-series.

The M2 is already giving hints that at higher performance the laptop chassis can't keep up with the extra heat, and I suspect that moving forward, even with a drop to 3nm, 2nm, or below, the sheer amount of performance simply won't be able to be cooled as passively as the M1.
Keep in mind that when Apple moves to 3nm for the M3 it will result in some combination of performance and efficiency improvements, and Apple is likely to lean more towards efficiency, meaning the chip will consume less power and generate less heat.

Also, throttling isn’t a massive concern with the Air. Its customer base primarily won’t be pushing it to its limit (if you are then the MacBook for you is the Pro), and the chip scales wonderfully to small and large workloads, so if you do nothing more than simple productivity tasks you won’t notice any throttling at all.
 
Hopefully not with the M-series since, if we do, that means Apple's not updating the chips often enough ;).
 
  • Like
Reactions: Brad7
TSMC N5 was a good lead while the competition was on N7. Apple needs to leapfrog to N3E while the competition is on N5P/N5.

[Image: WikiChip comparison of TSMC logic nodes, Q2 2022]
 
The following is an extreme, ten-thousand-foot simplification, but the ideas are meaningful: the huge bump in moving from Intel to Apple Silicon is broadly due to two factors: the CISC-to-RISC move, and the money Apple could invest in realising the RISC potential.

The industry as a whole has known since the late 80s that RISC processors had the potential to provide such speed bumps, but the money available to Intel (and AMD) was multiple orders of magnitude higher than the money available to its CPU competitors (because Intel was playing in the huge PC market, while the competitors played in the restricted workstation market), so in the end Intel was able to get high performance even from its architecture (at a high development cost and, as we know, high power consumption). When an Intel competitor (Apple) had a similar amount of R&D money, coming from the smartphone market, it could finally realise the RISC promise.

It is interesting to note that the first wave of RISC machines (SPARC, MIPS, Alpha, PowerPC) gave a similar speed bump over the chips of their time. Other big speed bumps have happened for other reasons too, for example the widening of CPUs from 8 to 16 to 32 and, marginally, 64 bits.

So, while the Apple implementation is amazing, the fact that a company with enough R&D money could surpass Intel was not completely surprising.

Ideas in computer architecture take a long time to materialise.

So, coming to the OP's question: will we see another 3x jump in a single Mac generation? I doubt that any incremental change (architecture, processor, node) can give this kind of performance improvement again.

You need a change of paradigm; while there are a few other computing/architecture paradigms currently being investigated, none is known to be mature enough to be considered for a production computer in the short/medium term; of course, there may be a secret laboratory somewhere working to prove me wrong :).

So, I think that as far as the CPU is concerned we are stuck with incremental improvements for the medium term; but if you consider the SoC as a whole, the result can be wildly different.

Maurizio
 
The following is an extreme, ten-thousand-foot simplification, but the ideas are meaningful: the huge bump in moving from Intel to Apple Silicon is broadly due to two factors: the CISC-to-RISC move, and the money Apple could invest in realising the RISC potential.

The industry as a whole has known since the late 80s that RISC processors had the potential to provide such speed bumps, but the money available to Intel (and AMD) was multiple orders of magnitude higher than the money available to its CPU competitors (because Intel was playing in the huge PC market, while the competitors played in the restricted workstation market), so in the end Intel was able to get high performance even from its architecture (at a high development cost and, as we know, high power consumption). When an Intel competitor (Apple) had a similar amount of R&D money, coming from the smartphone market, it could finally realise the RISC promise.

It is interesting to note that the first wave of RISC machines (SPARC, MIPS, Alpha, PowerPC) gave a similar speed bump over the chips of their time. Other big speed bumps have happened for other reasons too, for example the widening of CPUs from 8 to 16 to 32 and, marginally, 64 bits.

Things have changed a lot since the 80s. RISC and CISC were useful labels back in those days, when CISC CPUs were entirely microcode-driven and therefore slow, but they have become quite pointless in a modern superscalar world where front-end instructions get decomposed into micro-instructions and executed out of order. So please, let's just retire these labels and instead speak of the actually meaningful design choices in an ISA (such as load/store vs. reg/mem, fixed-length vs. variable-length encoding, flags register vs. compare-and-branch, etc.). As usual, the devil is in the details. Saying "it's fast because it's RISC" is like saying "a Tesla is fast because it uses an electric motor". It completely misses the point.
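
To make the load/store vs. reg/mem and fixed- vs. variable-length points concrete, here's a rough sketch in C. The assembly in the comments is hand-written from memory rather than verified compiler output, and the register use just follows the standard SysV/AAPCS calling conventions; treat it as illustrative only.

    /* One of the ISA design choices mentioned above: load/store vs. reg/mem,
       and fixed- vs. variable-length encoding. */

    int add_from_memory(const int *p, int x) {
        return x + *p;
    }

    /* x86-64 (reg/mem, variable-length encoding, 1-15 bytes per instruction):
     * a single instruction can read memory and add at the same time.
     *
     *     add  esi, dword ptr [rdi]   ; x += *p
     *     mov  eax, esi               ; return value
     *     ret
     *
     * AArch64 (load/store, fixed 4-byte instructions, which makes very wide
     * parallel decode much simpler):
     *
     *     ldr  w8, [x0]               // w8 = *p
     *     add  w0, w1, w8             // return x + w8
     *     ret
     */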

The reason why M1 was such a bump basically boils down to the following:

- Apple's extreme focus on mobile-first performance
- Use of a modern ISA (released just a decade ago) that has been designed from the ground up for efficient superscalar execution
- Access to the most advanced semiconductor manufacturing process
- Higher allocated budget per chip
- Smart design choices (and a lot of money spent on recruiting top chip design talent)

And let's not forget that Apple Silicon is not a panacea; it makes some deliberate tradeoffs to achieve this level of efficiency. For example, Apple has deliberately decided to target flexible data processing over SIMD throughput. One effect of this is that Apple's cache bandwidth is significantly lower than that of x86 CPUs. This probably allows Apple to save tons of energy during operation, but it also means that Apple Silicon has no chance of competing with x86 on tightly optimised vector-throughput code.
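
As a rough illustration of that tradeoff (a minimal sketch, not Apple's actual design): a simple streaming loop like the one below is auto-vectorised by most compilers at -O2/-O3 (NEON on Apple Silicon, SSE/AVX on x86), and once the arrays outgrow the L1 cache its throughput is set by cache bandwidth rather than by how wide the SIMD units are.

    #include <stddef.h>

    /* Streaming kernel: y[i] = 2 * x[i]. Most compilers vectorise this at
     * -O2/-O3. For arrays that do not fit in L1, throughput is limited by
     * how many bytes per cycle the cache hierarchy can deliver, not by the
     * width of the SIMD units. */
    void scale2(float *restrict y, const float *restrict x, size_t n) {
        for (size_t i = 0; i < n; ++i)
            y[i] = 2.0f * x[i];
    }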
 
We need to remember that software is a big piece of the M1 story. Getting rid of 32-bit support helped, and adding specific functionality to speed up things commonly done on a Mac helps too. That was done over a period of many years and wasn't painless.
 
  • Like
Reactions: Brad7
The M1 bump was one of the most impressive CPU bumps I've ever seen, and I think it came from a combination of factors:

- Apple accidentally "overshooting" with how good their mobile chip was
- It's a strange point in time where desktops and laptops have similar performance at the same core counts
- It's so quiet and cool, which is more impressive to me than the raw performance

The M2 is already giving hints that at higher performance the laptop chassis can't keep up with the extra heat, and I suspect that moving forward, even with a drop to 3nm, 2nm, or below, the sheer amount of performance simply won't be able to be cooled as passively as the M1.

I have the 16" 24-core GPU variant and it's just incredible.
No.

We were comparing 5nm Apple Silicon to 14nm Intel CPUs.

In the future, we will be comparing Apple Silicon to Apple Silicon.

So no, we won't see a jump like this again unless Apple waits 4-5 years to release a new Apple Silicon chip.
 
  • Like
Reactions: PauloSera
Nah. Intel was just so far behind at the time. We'd need Apple to greatly stumble and somehow come out of that stumble splendidly in order to get a jump like that again.
 
Nah. Intel was just so far behind
I don't think so. A quick google gives me this:

Intel's Flagship Alder Lake Chip Beats Apple's M1 Max, But There's a Catch
[Attached benchmark chart]


That doesn't look like Intel is too far behind; in fact, it looks like Intel is beating the M1 ¯\_(ツ)_/¯

If you're talking about power consumption, battery life and heat generated, definitely, but if you're talking about performance, well, the numbers speak for themselves.

I'm not knocking the M1; it's a great chip and my MBP is awesome, but Intel has some very good CPUs.
 
I don't think so. A quick google gives me this:

Intel's Flagship Alder Lake Chip Beats Apple's M1 Max, But There's a Catch
View attachment 2096984

That doesn't look like Intel is too far behind; in fact, it looks like Intel is beating the M1 ¯\_(ツ)_/¯

If you're talking about power consumption, battery life and heat generated, definitely, but if you're talking about performance, well, the numbers speak for themselves.

I'm not knocking the M1; it's a great chip and my MBP is awesome, but Intel has some very good CPUs.

I think the comparison was meant for 2020. My tenth-generation Intel desktop is pitiful compared to the computers of 2022.
 
  • Like
Reactions: ahurst
I think the comparison was meant for 2020. My tenth-generation Intel desktop is pitiful compared to the computers of 2022.
Perhaps, and yes, Intel was struggling all the way up to Alder Lake. I took the context of the post to be that Intel was too far behind, and only if Apple stumbled could they get back into the race; from that perspective, that's not true. Alder Lake is faster than the M1 (but not as efficient).
 
I don't think so. A quick google gives me this:

Intel's Flagship Alder Lake Chip Beats Apple's M1 Max, But There's a Catch
View attachment 2096984

That doesn't look like Intel is too far behind; in fact, it looks like Intel is beating the M1 ¯\_(ツ)_/¯

If you're talking about power consumption, battery life and heat generated, definitely, but if you're talking about performance, well, the numbers speak for themselves.

I'm not knocking the M1; it's a great chip and my MBP is awesome, but Intel has some very good CPUs.
Yep. The new Raptor Lake chips are putting up some even better benchmarks than this, too. Intel is clocking them at almost 6GHz out of the gate, and apparently overclockers are taking them even higher.

Apple of course has a massive IPC advantage and doesn't need to clock their chips anywhere near this high (and I'm glad for that; my battery life thanks them). But if, say, they were to make some architectural changes to support 5GHz clocks (admittedly easier said than done), their existing M2 chip would score about 2,700 single-core in Geekbench, a good ~500 points or so above Raptor Lake.
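
As a back-of-the-envelope check of that figure (purely for illustration: it assumes an M2 single-core score of roughly 1,900 at roughly 3.5GHz and naive linear scaling with clock, neither of which is a measured number):

    #include <stdio.h>

    int main(void) {
        /* Assumed baseline, for illustration only: M2 single-core Geekbench
         * score of ~1900 at ~3.5 GHz. */
        const double base_score   = 1900.0;
        const double base_clock   = 3.5; /* GHz */
        const double target_clock = 5.0; /* GHz */

        /* Naive linear scaling with clock; ignores memory latency, power and
         * the pipeline redesign a real 5 GHz part would need. */
        double projected = base_score * target_clock / base_clock;
        printf("Projected single-core score: %.0f\n", projected); /* ~2714 */
        return 0;
    }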

If Apple wanted to play the clock-speed race, they would win, but my guess is that they are planning to take it slower on this front. It makes more sense to increase clocks incrementally as new fabs come along than it does to make a frying-pan CPU and throw it in a laptop (looking at you, Intel). I'm looking forward to the M3; I think that will give us a better idea of Apple's longer-term strategy.
 
Perhaps, and yes, Intel was struggling all the way up to Alder Lake. I took the context of the post to be that Intel was too far behind, and only if Apple stumbled could they get back into the race; from that perspective, that's not true. Alder Lake is faster than the M1 (but not as efficient).

I'm on my first long trip since getting the M1 Pro MacBook Pro last November, and it's a monster of a laptop for travel. It runs Windows in Parallels faster than my i7-10700 desktop, and it can handle my full workloads without any heat or fan noise and with all-day battery life. I'm thinking about running all of my Windows stuff on Parallels now.

I've been looking at laptops this week for a gamer and I don't think that there's anything in the Windows world that's comparable at this time.

So Apple is probably going to be the most efficient for mobile while they are underwater in desktop.
 
  • Like
Reactions: ArkSingularity
Yep. The new Raptor Lake chips are putting up some even better benchmarks than this, too. Intel is clocking them at almost 6GHz out of the gate, and apparently overclockers are taking them even higher.

Apple of course has a massive IPC advantage and doesn't need to clock their chips anywhere near this high (and I'm glad for that; my battery life thanks them). But if, say, they were to make some architectural changes to support 5GHz clocks (admittedly easier said than done), their existing M2 chip would score about 2,700 single-core in Geekbench, a good ~500 points or so above Raptor Lake.

If Apple wanted to play the clock-speed race, they would win, but my guess is that they are planning to take it slower on this front. It makes more sense to increase clocks incrementally as new fabs come along than it does to make a frying-pan CPU and throw it in a laptop (looking at you, Intel). I'm looking forward to the M3; I think that will give us a better idea of Apple's longer-term strategy.

It's a winning strategy on mobile for sure. It's a tougher sell on the desktop, even though I think most people like the idea of efficiency. The US as a whole, though, doesn't care about energy efficiency in its cars, and I'd guess that extends to desktop computers.
 
  • Like
Reactions: ArkSingularity
So Apple is probably going to be the most efficient for mobile
Agreed, my 14" MBP gets insane battery life (compared to my Razer), Its also faster then Razer - when my razer is on battery. I don't think it holds up to my razer when both are powered but for my needs that's not a factor. What it doesn't do that my Razer does, is play the games I want.

I've taken my Razer on some trips knowing that I'll have time to myself and will want to play games, or that I'll be connecting to work. I've taken my MBP other times, when I know I won't be playing as many games and/or want that battery life.

My MBP is an awesome laptop.
 
It's a winning strategy on mobile for sure. It's a tougher sell on the desktop, even though I think most people like the idea of efficiency. The US as a whole, though, doesn't care about energy efficiency in its cars, and I'd guess that extends to desktop computers.

Agreed 100%. What's funny is that a few months ago, I (foolishly) wrote a giant post about reasons Apple likely shouldn't release desktop processors that are clocked higher than their mobile counterparts. After seeing what Intel and AMD are getting ready to do on the desktop, I think I've changed my mind.

Of course that's easier said than done; it's not like Apple can just flip a switch and clock their processors this high (pipelines would have to be redesigned). But I think it'd make sense for them to say "Here, this is the M1 Extreme, it's going into the Mac Pro, and we are clocking it to 3.9GHz instead of 3.5GHz" - power consumption is certainly less of an issue there.
 
Of course that's easier said than done; it's not like Apple can just flip a switch and clock their processors this high
That's the issue as I see it. First let me just say that I'm not a chip designer, and I know next to nothing regarding chip design. Apple's processors have always been about squeezing the most performance with the least amount of power.

Now having them pivot 180 degrees and design something outside of their expertise is no small feat.
 
  • Like
Reactions: ArkSingularity
I don't think so. A quick google gives me this:

Intel's Flagship Alder Lake Chip Beats Apple's M1 Max, But There's a Catch
View attachment 2096984

That doesn't look like Intel is too far behind; in fact, it looks like Intel is beating the M1 ¯\_(ツ)_/¯

If you're talking about power consumption, battery life and heat generated, definitely, but if you're talking about performance, well, the numbers speak for themselves.

I'm not knocking the M1; it's a great chip and my MBP is awesome, but Intel has some very good CPUs.
Did the M1 replace Alder Lake? What was the point of all this?
 
  • Like
Reactions: MayaUser