Actually, Intel released many iterative "generations" of processors that only had small single-digit percentage performance improvements, some as low as 5%, and many others only gained performance through aggressive frequency and power increases (heat and battery, anyone?). Intel was essentially stuck on their Skylake architecture for YEARS as they struggled to move from 14nm to 10nm. Now they have gotten a bit back on track. Ironically, some of Apple Silicon's current meandering has to do with TSMC's node improvement schedule and scaling. It's not reasonable to expect more than a 10-15% increase in performance year to year or gen to gen within the same processor family; you will generally get larger performance bumps when new process nodes become available, or every few years when a major architectural redesign is released. You are not going to see the x86-to-arm64 performance shift double every year; you need to reset your expectations.
It is true that Intel meandered around for years while it was stuck on 14nm, but it has moved on to the next node and is no longer standing pat. Rocket Lake was built on 14nm (a backported design), and within a mere 6 months, Alder Lake rendered it obsolete and increased multicore performance by +70% or more. Then a mere 11-12 months later, top Raptor Lake brought a +40% multicore perf increase over top Alder Lake. Raptor Lake can boost to 6 GHz on some of its cores, no questions asked. It is an overclocking beast. Neither is AMD standing pat on its Zen adventure... look at the bad Bulldozer days versus Zen 4 now; AMD has been crushing it. AMD is stacking cache in the third dimension on top of the CPU core complex... the 5800X3D is a gaming monster.

Hackintoshes built with Raptor Lake exceed the performance of all M1 silicon, including the M1 Ultra, in both Geekbench and Cinebench. Of course, Apple has Intel beat in performance per watt, and Apple has custom accelerators, for sure. But future Intel lakes will embed accelerators and other custom silicon into the design. Intel is moving toward chiplets with Meteor Lake (which should be coming in 2023 and will be built on the next node, Intel 4, its first EUV adventure) and beyond... Arrow Lake will be built on an even newer node and looks to be a beast of a processor coming in 2024.

Even Snapdragon is catching up to Apple... Leaked Snapdragon 8 Gen2 multicore scores in Geekbench are approaching A16 levels of performance. 5200 for Snapdragon vs 5400 for A16. Qualcomm is trying its hardest to enter the laptop market with Windows on ARM (using Nuvia).

Apple is falling behind... M2's performance increase over M1 is paltry and an outlier compared to previous years. Competitors are trying their hardest to catch up. We'll see what Apple does with 3nm... but M2 is a lost generation in my opinion if you already have M1... hopefully in the 3nm generation, Apple returns to form, because Intel, AMD, and Qualcomm are not standing pat at all.
 
Then a mere 11-12 months later, top Raptor Lake brought a +40% multicore perf increase over top Alder Lake. Raptor Lake can boost to 6 GHz on some of its cores, no questions asked. It is an overclocking beast.
I have an i7-13700K; running something like Cinebench instantly pushes it to 100°C and it throttles for the duration. This is with a massive dual-tower Noctua heatsink. There is literally no way this processor could have gone into an iMac. The Mac Pro heatsink would probably have problems with it.

It's faster in complete isolation from the fact that you probably need a 360mm radiator minimum for the i9. Anything with worse cooling is going to be severely gimped.
 
I'm not surprised.

Apple knows that the "refresh cycle" for laptops is generally 3-5 years, instead of the 1-2 years for phones.
They need the M3 Pro/Max/Ultra to be a compelling upgrade for M1 users in 2024, so that will use the new 3nm process.

This will be marketed as better media encoding, aimed directly at the youtubers that market the things for them ;)
That’s exactly it. What a wise analysis that was.
 
Now that's what I'm talking about. I'm running a reasonably future-proofed Intel mini from January 2019, and it's still serving me well. I bought it knowing it was running Intel integrated graphics, which has always been the weakest point of the Intel minis, so any serious rendering in Blender etc. goes over to the gaming PC. The only thing that will move me off it is a *substantial* increase in graphics capability. Let's see what the line-up looks like around March of next year.
Having a 2018 mini might be key - maxed out, they have 6 cores. But wow if my M1 mini isn't quiet and cool, which were the two things I wanted for its intended use.
 
Having a 2018 mini might be key - maxed out, they have 6 cores. But wow if my M1 mini isn't quiet and cool, which were the two things I wanted for its intended use.

Mine's six core, 12 threads. Unless I'm doing a major recompile, that's enough for my non-graphics work. So I'm not really feeling a pinch anywhere that isn't graphics-related.
 
Mine's six core, 12 threads. Unless I'm doing a major recompile, that's enough for my non-graphics work. So I'm not really feeling a pinch anywhere that isn't graphics-related.
So performance in and of itself isn't an issue, only heat and noise, and if you're OK with those, then again, no issue. I still have a 2012 mini on my desk, but it is quiet most of the time now. Then again, compared to the Intel beast of a desktop, it's quiet regardless.
 
I am not surprised by any of this. People need to recognize a few things here. First, this is a pre-release Mac and may not be the final version; it may still get tweaks to performance (clock speeds, cooling, etc.). Also, Geekbench is a synthetic benchmark and will not translate 100% to real-world performance. Further, there is some variability between runs, as the second set of benchmarks with a higher score has shown. And we do not know whether thermal throttling affected these scores.

But a big misconception I see others have is that this is a 3nm chip; it is not. The M2 chips are still 5nm SoCs, and we should not expect the gains associated with a die shrink. TSMC had issues with its early 3nm process, N3, which pushed timelines back. N3 ended up as effectively a dead-end node: N3 and its successor N3E are not design-compatible, so any chip developed for N3 would have to be redesigned to move forward to all future 3nm nodes, and no company will design a chip and then throw all that work out to redesign it a second time. N3E is still about a year out from volume production. Apple was not going to wait another year to release an M2 Mac on N3E, nor was it going to spend the R&D budget on a dead-end node and then redevelop the chip a second time. I was a bit disappointed, as I was looking forward to the M2 being a 3nm chip, but when the timelines slipped, Apple chose not to leave its product line stagnant any longer than necessary, and the M2 was developed on TSMC's N5P node (which is 7% faster than the N5 the M1 is made with). The M2 Pro/Max/Ultra/etc. should all be made on TSMC's N4P node (also a 5nm process), which is 11% faster than N5. Using TSMC's numbers, N4P would be only about 3.7% faster than the N5P the base M2 chip is made on, so we should not expect any significant gains; and these are TSMC's numbers, which may be slightly optimistic. Looking at clock speeds, Apple has clocked this M2 Max exactly 10% faster than the M1 Max of the previous generation, which is about what I would expect considering TSMC claims an 11% increase between their respective processes.
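A quick sketch of the node math above, using the relative-performance figures TSMC has claimed (these are marketing claims, not measurements):

```python
# TSMC's claimed performance gains relative to N5 (the M1's node)
N5P_VS_N5 = 1.07  # N5P (base M2): +7% over N5
N4P_VS_N5 = 1.11  # N4P (expected for M2 Pro/Max): +11% over N5

# How much faster the M2 Pro/Max node is than the base M2's node
n4p_vs_n5p = N4P_VS_N5 / N5P_VS_N5
print(f"N4P over N5P: +{(n4p_vs_n5p - 1) * 100:.1f}%")  # prints "N4P over N5P: +3.7%"
```

Since the M1 Max sits on N5, the full +11% N4P claim is the relevant comparison there, which is roughly consistent with the reported 10% clock bump.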

If one is waiting for a 3nm die shrink, it will most likely come in a year or so with the M3 chips. In about a year, the revised N3E node should hopefully be in volume production, followed by N3P six months to a year later, which should bring some nice gains.
 
So performance in and of itself isn't an issue, only heat and noise, and if you're OK with those, then again, no issue. I still have a 2012 mini on my desk, but it is quiet most of the time now. Then again, compared to the Intel beast of a desktop, it's quiet regardless.

Performance is an issue, but specifically graphics performance. Given the limitations of the internal GPU of the 2018 mini, I move anything graphics-intensive to a PC with a graphics card. I'd like not to have to do that.
 
Let’s not forget that the M2 GPU had a very significant increase in cores and performance. We are expecting something like 40 cores on the M2 Max.

Even if CPU perf is up by 10 to 15%, the update might be worth it for the GPU alone.
That impressive GPU performance increase over the M1 was mostly due to the jump in memory bandwidth from 68GB/s to 100GB/s (faster LPDDR5 on the same 128-bit bus). The M2 Pro will probably have 200GB/s of memory bandwidth like the M1 Pro, so the performance increase per GPU core won't be as high.
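Those bandwidth figures fall straight out of data rate times bus width. A minimal sketch, assuming the commonly reported memory parts (LPDDR4X-4266 for M1, LPDDR5-6400 for M2 and M1 Pro; the specific speed grades are my assumption, not from the post):

```python
def peak_bandwidth_gb_s(data_rate_mt_s: int, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: transfers per second times bytes per transfer."""
    return data_rate_mt_s * (bus_width_bits // 8) / 1000

print(peak_bandwidth_gb_s(4266, 128))  # M1:     ~68.3 GB/s
print(peak_bandwidth_gb_s(6400, 128))  # M2:     ~102.4 GB/s (marketed as 100GB/s)
print(peak_bandwidth_gb_s(6400, 256))  # M1 Pro: ~204.8 GB/s (marketed as 200GB/s)
```

So the M1 Pro's 200GB/s comes from doubling the bus width, not from faster memory than the M2's.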
 
I wouldn't mind Apple being behind so much if they had better interoperability. I'll reiterate: even if x86-64 computers were slightly worse, the fact that they are so flexible more than makes up for it. And now, with Thunderbolt 4, you'll be able to plug and play dedicated 3D cards (no more 3D cards required inside the case)!

Apple has had Thunderbolt 4 for a while now, but because they are so strict and want to control their upgrade cycle with an iron fist, they are falling behind here too. What happens when x86-64 / AMD users have the freedom to plug and play 3D cards over Thunderbolt 4, but Macs don't?
 
I wouldn't mind so much Apple being behind if they had better interoperability. I'll reiterate: even if X86-64 computers were slightly worse, the fact they are so flexible more than make up for it. And now, with Thunderbolt 4, you'll be able to plug and play dedicated 3D cards (no more 3D cards required to be inside the case)!

Apple has had Thunderbolt 4 for at least a while, but because they are so strict and want to control their upgrade cycle with an iron fist, they are falling behind here too. What happens when x86-64 / AMD users have the freedom to plug and play 3D cards on Thunderbolt 4, but Apple PCs don't?
I was under the impression you could connect them. Look at the AKiTiO Node Titan from OWC; they say that over 1,200 different cards will work with a Mac.

 
That impressive increased GPU performance compared to M1 was mostly due to the increased memory bandwidth from 68GB/s to 100GB/s. M2 Pro will probably have 200GB/s memory bandwidth like M1 Pro so the performance increase won't be as high per GPU core.
Good point. You don't think the bandwidth would increase in the higher-end chips?
 
These numbers don't seem likely to be the final Geekbench 5 results. On my M2 MacBook Air I get 1947 single-core at a CPU clock frequency of 3.491 GHz. This is supposedly the same Avalanche performance core, but at a slightly higher 3.544 GHz clock, so I would expect a single-core score closer to 1975. Multi-core is harder to compare because of the difference in the number of performance vs. efficiency cores, but I get around 9000 on my M2 MBA, so I would expect at least 15000 from just the 8 performance cores, with another 1200 from the 4 efficiency cores.

TLDR, the numbers for a theoretical Avalanche/Blizzard SoC seem too low compared to existing M2 MacBooks.
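The single-core estimate above is just linear clock scaling, assuming identical Avalanche cores and a score proportional to frequency (a rough first-order sketch; real scaling is usually slightly sublinear due to memory latency):

```python
m2_air_score = 1947   # measured GB5 single-core on the M2 MacBook Air (from the post)
m2_air_clock = 3.491  # GHz
leaked_clock = 3.544  # GHz, reported clock of the leaked chip

expected_single_core = m2_air_score * (leaked_clock / m2_air_clock)
print(round(expected_single_core))  # 1977, in line with the "closer to 1975" estimate
```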
 
That impressive increased GPU performance compared to M1 was mostly due to the increased memory bandwidth from 68GB/s to 100GB/s. M2 Pro will probably have 200GB/s memory bandwidth like M1 Pro so the performance increase won't be as high per GPU core.

I expect the M2 Pro to step up to LPDDR5X memory, which would make it 233GB/s, or 266GB/s if they go for the latest and greatest. Still not the same 50% bump as from M1 to M2, but it should help keep the added GPU cores working.
 
It looks on the surface to be a disappointing increase; however, I don't think CPU performance was really the issue with the M1. The fact that there are more GPU cores is probably the more important aspect, and it remains to be seen what performance increase there will be in that regard.
 
I wouldn't mind so much Apple being behind if they had better interoperability. I'll reiterate: even if X86-64 computers were slightly worse, the fact they are so flexible more than make up for it. And now, with Thunderbolt 4, you'll be able to plug and play dedicated 3D cards (no more 3D cards required to be inside the case)!

Apple has had Thunderbolt 4 for at least a while, but because they are so strict and want to control their upgrade cycle with an iron fist, they are falling behind here too. What happens when x86-64 / AMD users have the freedom to plug and play 3D cards on Thunderbolt 4, but Apple PCs don't?
We have had that with Thunderbolt 3 external GPUs but it has remained a super niche market with very little market uptake.
 
I was under the impression you could connect them. Look at the AKiTiO Node Titan from OWC they say that over 1200 different cards will work with a mac.


That only works for Intel-based Macs. M1 Macs have no eGPU support, even though they have Thunderbolt 4.
 
We have had that with Thunderbolt 3 external GPUs but it has remained a super niche market with very little market uptake.

That's because Thunderbolt 3 is a specialized, high-performance connector. USB 4 is a high-speed connector that is much cheaper, so it'll eventually be available everywhere, not just on high-performance machines. And when it is, the argument for external GPUs will be much more compelling.
 
So it looks like the Apple Silicon cadence will use the same TSMC node for all 'flavors' of a chip version. For example, M2 => M2/M2 Pro/M2 Max/M2 Extreme: same perf/efficiency cores as the A15 (iPad Mini 6 FTW!) on TSMC's N5P node.

I'd think the next Apple M3 chips wouldn't be based on the A16/TSMC N4 node, but on a future A17/TSMC N3(E?) node instead. That would mean no M chip using the A16 cores.

TSMC will reportedly ship N3 in 2H22. This is the low-volume 3nm node that TSMC is not encouraging customers to use; maybe Apple is the only customer in that line. So why couldn't they introduce a new 'MX1' chip in early 2023 specifically for the Mac Pro? A low-volume product they can cut their teeth on for N3 in general, helping to prep for the wave of A17/M3 N3E-node products in 2H23.

I'm sure Tim Cook feels the pressure of 'blowing' the 2-year transition, all the more reason to squeeze out a pricey N3 Mac Pro in early 2023.
 
The A14 was released in September 2020, roughly 25 months ago. I was hoping that Apple would have the next generation of P-cores ready to ship by now, but both the A15 and A16 offer only small evolutionary updates without any IPC improvements, a first in Apple's history. Maybe they are moving to a 36-48 month schedule (understandable given the scale of the effort), which would be fine, except that it makes them much more vulnerable to competition that is advancing quickly.
The A14 was released smack in the middle of a pandemic, with the world adjusting to working from home and starting to experience supply chain issues. I would think Apple's SoC plans were majorly disrupted. It will likely take them a couple more years to get back on track.
 
If they come out with this type of rubbish again, it could be the start of Apple's very quick decline 😏... Cook needs to go; he has nothing left to offer Apple!
You realize there have been several challenges in chip fabrication in the parts of the world where our chips are manufactured and even designed. We don't know why the chip's spec bump isn't as grand as you'd like.
 