My point was that there was no CPU differentiation on the base M1/M2/M3 before, and it remains valid. GPU differences, or CPU variations on the higher-spec Pro/Max chips, are a different thing. I was well aware of those.
There is, though... The two variants of the M3 Pro have different CPU core counts; that's the same kind of binning, and the same goes for GPU yield on the base M3 and M2. They're likely tossing a lot of M3s that aren't binning well for CPU cores right now, but I'd bet they see that as acceptable because yield is generally lower on the transitional node than on the longer-lived node the M4 is on, so they're tossing a lot anyway. On the M4 it seems, based on this, that they're binning more on CPU than GPU, like the current M3 Pros. This is pretty standard across the industry for CPUs and *very* standard for GPUs... and has been for a long time.
 
M3.5 with 256GB and 512GB
M4.0 with 1TB
Score that.
Add more muddying of the waters to that.
Which test, the one without the extra core running?
Nothing like more confusion around Apple products.
Just saying... :)
 
  • Haha
Reactions: NetMage
There is, though... The two variants of the M3 Pro have different CPU core counts; that's the same kind of binning, and the same goes for GPU yield on the base M3 and M2. They're likely tossing a lot of M3s that aren't binning well for CPU cores right now, but I'd bet they see that as acceptable because yield is generally lower on the transitional node than on the longer-lived node the M4 is on, so they're tossing a lot anyway. On the M4 it seems, based on this, that they're binning more on CPU than GPU, like the current M3 Pros. This is pretty standard across the industry for CPUs and *very* standard for GPUs... and has been for a long time.
Like I said: no CPU difference in base M-series chips before, was there?

Edit: Also, removing 1/4 of the performance cores is a big deal.
 
  • M4 - 3,695/14,550
  • M1 - 2,272/8,208
Fourth-generation chip and less than a 2x performance increase over the first-gen chip? Are my expectations too high? Seems... meh?
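For what it's worth, the "less than 2x" claim can be checked directly from the two Geekbench scores quoted above. A quick sketch (scores as posted in the thread, not independently verified; the per-generation figure assumes three equal compound steps, M1→M2→M3→M4):

```python
# Generational speedup from the Geekbench 6 scores quoted above.
m1_single, m1_multi = 2272, 8208
m4_single, m4_multi = 3695, 14550

single_gain = m4_single / m1_single  # ~1.63x over three generations
multi_gain = m4_multi / m1_multi     # ~1.77x

# Implied average compound gain per generation (three steps: M1->M2->M3->M4)
per_gen_single = single_gain ** (1 / 3)  # ~1.18x per generation

print(f"single: {single_gain:.2f}x, multi: {multi_gain:.2f}x, "
      f"per generation (single): {per_gen_single:.2f}x")
```

So yes, under 2x cumulatively, but roughly 18% per generation in single-core.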

You are witnessing the asymptotic approach to the end of Moore's law, well past the end of Dennard scaling. Atoms are just getting too big with respect to the gate sizes. The exponential Moore's law line on a graph doesn't suddenly stop, but gradually tails off flatter and flatter. No more doubling of some single-core performance number every 2 or so years.

But annual new multi-rack multi-core MLperf numbers are still going through the roof like crazy (especially the power consumed in data centers number), so you just have to jump to the next more interesting application to find the next upward curve.
 
Very valid points. It's just that we've seen essentially no evidence of IPC uplifts (I think it's around <5% from M1 to M4), so it will be very interesting to see how Apple tackles it, especially as it's supposedly losing a lot of its chip talent.

Yes, the last part is particularly interesting/worrying, whatever you want to call it.
Because I thought it would be fun, I chose a GB 6.3 result for each processor (so run-to-run variation comes into play; I'm not trying to get exact numbers from averages) and compared the change against clock speed. Given my results I can say it's more than that, but far more importantly, it depends strongly on the workload.

For instance, since the M1 (again, give or take run-to-run variation and my rounding): IPC for GB's HTML5 and Background Blur tests has increased roughly 40%; IPC for PDF Renderer, Photo Library, Object Remover, and Ray Tracer has increased ~20%; IPC for Clang, HDR, and Photo Filter by 11-15%; Text Processing, Asset Compression, and Structure from Motion are about 7%; File Compression and Navigation are about 3-4%; while Horizon Detection is completely flat. Object Detection, prior to SME, went up 18% between the M1 and M2 but was flat between the M2 and M3; obviously it's unknown what it would've done in the M4 without SME.

Now if you want, you could create "an average" of those by taking the geometric mean of the FP and INT tests and a weighted arithmetic mean over the two, but that would conceal everything interesting, which is why I don't like averages. This shows that Apple is in fact iterating quite strongly in the areas of CPU performance they care about, and leaving to clock speed those they don't; the average is brought down by the latter. Rather than the criticism that Apple "studies for the test," I would argue they have their own design priorities for what's most important to improve for their users, and those differ from GB's. Table attached below. Unfortunately I can't share the full spreadsheet so it could be checked for errors, but I think it's right.

[Attached image: per-subtest IPC comparison table]
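The normalization described in that post boils down to dividing a subtest's score ratio by the clock-speed ratio. A minimal sketch of that calculation (the scores and clocks in the example are illustrative placeholders, not values from the poster's spreadsheet):

```python
# A subtest's fractional IPC change is its score ratio divided by the
# clock-speed ratio, minus one. Example inputs below are hypothetical.
def ipc_change(score_old, score_new, clock_old_ghz, clock_new_ghz):
    """Return fractional IPC change, e.g. 0.40 for +40%."""
    score_ratio = score_new / score_old
    clock_ratio = clock_new_ghz / clock_old_ghz
    return score_ratio / clock_ratio - 1.0

# A subtest whose score doubled while clocks rose ~38% (e.g. 3.2 -> 4.4 GHz)
print(f"{ipc_change(1000, 2000, 3.2, 4.4):+.0%}")  # prints "+45%"
```

This is why a flat subtest score across a clock bump actually implies an IPC regression: the same score at a 38% higher clock is about a 27% drop in work per cycle.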


With a ~10% clock increase
Well, 8.6% if we're being pedantic... but the point of that post was about absolute performance increases, for someone considering whether to upgrade, not where those increases come from. If you want to talk about clocks, I should also point out that big clock-speed increases like the ones we've been getting often necessitate microarchitecture changes, as otherwise IPC falls as clocks rise. If you raise clocks 38% from M1 to M4, IPC would go down without changes to the underlying microarchitecture. So part of this is that Apple has been so aggressive with clocks, particularly in the M3 and M4, that it has eaten into the IPC gains.
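As a back-of-envelope check, the overall gain can be split into a clock part and an IPC part using the single-core scores quoted earlier in the thread and the ~38% clock increase mentioned above (the 3.2 GHz and 4.4 GHz figures are assumptions for illustration; this is a crude aggregate, and as the table shows, the per-subtest picture varies widely):

```python
# Decompose the M1 -> M4 single-core gain into clocks vs. implied IPC.
# Scores are the GB numbers quoted in the thread; clocks are assumed.
score_m1, score_m4 = 2272, 3695
clock_ratio = 4.4 / 3.2            # ~1.38, the ~38% clock raise
perf_ratio = score_m4 / score_m1   # ~1.63 overall

implied_ipc_gain = perf_ratio / clock_ratio - 1.0  # ~+18% aggregate
print(f"implied aggregate IPC change: {implied_ipc_gain:+.0%}")
```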
 
I wonder if they could have squeezed more than 10 hours of battery life out of the more recent iPad models if they used an A series chip instead of an M series chip. I bet if they did some focus groups, they’d probably find that most iPad users would trade extra processor strength for more hours of usage before a charge.
 
I wonder if they could have squeezed more than 10 hours of battery life out of the more recent iPad models if they used an A series chip instead of an M series chip. I bet if they did some focus groups, they’d probably find that most iPad users would trade extra processor strength for more hours of usage before a charge.
Honestly, no need to downgrade to an A series chip to enhance battery life when you already have an M4 with SIX efficiency cores.
 
  • Like
Reactions: NetMage
Not sure where Apple will go next (unless losing a lot of silicon designers to Nuvia did hurt them a little), but this post is interesting.

Nuvia engineers have been a dud without Apple. Wake me up when an actual product ships out of Nuvia/Qualcomm.
The guy who tweeted should compare the Nvidia 4090 and the H100 (a ridiculous comparison). If you look at that comparison, most of the improvements in the H100 over the A100 came from accelerators. By that logic, Groq AI should shut down because their chips are heavily optimized for running inference. Welcome to the future: heavily optimized hardware acceleration for certain tasks. In the past it was video and audio codecs that used hardware acceleration; now it's going to be more specialized.
 
You are witnessing the asymptotic approach to the end of Moore's law, well past the end of Dennard scaling. Atoms are just getting too big with respect to the gate sizes. The exponential Moore's law line on a graph doesn't suddenly stop, but gradually tails off flatter and flatter. No more doubling of some single-core performance number every 2 or so years.
No, Moore's law refers to the number of transistors in an integrated circuit. Reducing gate size is just one way to increase transistor count, but it is not the only way: there is increasing die size, chiplets, and 3D stacking. That doesn't even include more exploratory techniques like spintronics, quantum-well transistors, graphene electronics, etc.
 
For the last three years I've just gotten gladder every year that Intel execs can't sleep well whenever a new Apple chip smashes records.
Intel was finally catching up on single threaded performance too. Apple came out with another monumental leap forward at the right time.

(Frankly, I think the M3 was a great generational bump as well. Double digit gains for a single generation aren't bad by any means, and Apple did it twice in a row with two successive releases less than a year apart from each other.)
 
Siri is not Apple's AI. It's like asking what the point of Nvidia is because their chatbot app doesn't work well.
I know that Siri isn't AI, but Apple has had all the time in the world to make a better experience with Siri. It wasn't their priority, yet they kept trying to show how great Siri is in their keynotes, while both Google Assistant and Alexa made real progress.
Last year alone Apple purchased 32 AI startups. I do hope Apple will release some features across all the devices with a Neural Engine.
 
Because I thought it would be fun, I chose a GB 6.3 result for each processor (so run-to-run variation comes into play; I'm not trying to get exact numbers from averages) and compared the change against clock speed. Given my results I can say it's more than that, but far more importantly, it depends strongly on the workload.

For instance, since the M1 (again, give or take run-to-run variation and my rounding): IPC for GB's HTML5 and Background Blur tests has increased roughly 40%; IPC for PDF Renderer, Photo Library, Object Remover, and Ray Tracer has increased ~20%; IPC for Clang, HDR, and Photo Filter by 11-15%; Text Processing, Asset Compression, and Structure from Motion are about 7%; File Compression and Navigation are about 3-4%; while Horizon Detection is completely flat. Object Detection, prior to SME, went up 18% between the M1 and M2 but was flat between the M2 and M3; obviously it's unknown what it would've done in the M4 without SME.

Now if you want, you could create "an average" of those by taking the geometric mean of the FP and INT tests and a weighted arithmetic mean over the two, but that would conceal everything interesting, which is why I don't like averages. This shows that Apple is in fact iterating quite strongly in the areas of CPU performance they care about, and leaving to clock speed those they don't; the average is brought down by the latter. Rather than the criticism that Apple "studies for the test," I would argue they have their own design priorities for what's most important to improve for their users, and those differ from GB's. Table attached below. Unfortunately I can't share the full spreadsheet so it could be checked for errors, but I think it's right.

[Attached image: per-subtest IPC comparison table]


Well, 8.6% if we're being pedantic... but the point of that post was about absolute performance increases, for someone considering whether to upgrade, not where those increases come from. If you want to talk about clocks, I should also point out that big clock-speed increases like the ones we've been getting often necessitate microarchitecture changes, as otherwise IPC falls as clocks rise. If you raise clocks 38% from M1 to M4, IPC would go down without changes to the underlying microarchitecture. So part of this is that Apple has been so aggressive with clocks, particularly in the M3 and M4, that it has eaten into the IPC gains.
Oh, I get it... not that I was going to send back the M3 Max I bought a week ago, but it's obviously good to know that most of the claimed 25% performance boost comes down primarily to just 2 of many tests, plus clocks.
 