I really don’t follow you here. If I compile the same code on two machines using the same toolkit and one machine does it twice as fast, I say that machine has twice the performance. If I run a machine learning algorithm to categorize images on two machines and one does it twice as fast, I say that machine has twice the performance. The list continues. Which machine consumes more power is secondary to this question; I care about how fast it can run things.

These benchmarks take some common tasks (like compiling code, categorizing images, compression etc.) and see which CPU can run them faster. The higher the score, the faster the CPU.

Incidentally, Apple CPUs can do the same amount of work as fast or faster while consuming less energy than Intel CPUs. This is what makes them more powerful according to every definition of “powerful” I am aware of (unless you mean to say that an Intel CPU is more powerful because it wastes more power).

Clock speed and wattage are irrelevant. Apple CPUs have more execution units than Intel CPUs, so they can do more work at a lower clock speed, which in turn gives them lower power consumption.
You keep describing architecture, which is exactly what I am saying.

The misinterpretation of Geekbench is when people look at a 5W CPU running at 1.9Ghz and think it's as fast as a 105W CPU running at 5.3GHz with similar architecture efficiency because they get similar Geekbench scores. Anyone who understands physics knows why that is literally impossible, and why your iPhone isn't faster than a 105W Desktop grade CPU or similar.
 
If you compare one Geekbench score to another, you are comparing architecture efficiency only, and not raw performance in an application. That is a very important distinction. Apple's CPUs are extremely efficient, but that does not mean you could pop one in your desktop PC and it would be as fast as a desktop CPU.

As an example, Apple's 3-5W iPhone CPU running at 1.8 GHz or whatever might have a Geekbench score of 1600 single core. A 105W desktop CPU running at 5.3 GHz might have the same Geekbench single-core score of 1600. This means their architectures are equally efficient; it does not mean the iPhone has the same processing power as the 105W desktop CPU, as the wattage and clock speeds are not being accounted for. The creator of Geekbench used a similar example to explain how it is an architecture benchmark.

What Apple does is take these results, and tell people their iPhones are as fast as a PC/notebook, which is at best misleading and at worst blatantly false.

Geekbench is a great benchmark, it just doesn't work how people often think it does, and you can certainly be forgiven for that because the way companies like Apple use Geekbench scores in marketing materials is highly misleading. Just like how they compared their M1 chip to one of the worst possible 4-core Intel chips running on 6-year-old architecture when they made their "3X" "5X" faster claims in their presentation. They may be technically correct, but they actively tried to hide from people what they were really comparing to.

I hope I am explaining myself clearly.
You don’t know what you’re talking about.

Geekbench is made to be cross-platform/architecture.
A 1600 score means it performs 1.6x better than an Intel i3-8100, which is the baseline of 1000; this is regardless of architecture, power consumption, etc. It’s raw (peak) performance.
A score of 2000 is 2x better than a score of 1000.
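The arithmetic described here can be sketched in a few lines of Python. The 1000-point Core i3-8100 baseline is Geekbench 5's documented calibration; the other scores are illustrative:

```python
# Geekbench 5 calibrates its scores so that a Core i3-8100 scores 1000.
BASELINE_SCORE = 1000  # Core i3-8100 single-core score, by definition

def relative_performance(score: float, reference: float = BASELINE_SCORE) -> float:
    """How many times faster a given score is than the reference score."""
    return score / reference

print(relative_performance(1600))        # 1.6x the baseline i3-8100
print(relative_performance(2000, 1000))  # 2.0x
```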
Get your facts straight before you start talking nonsense.
 
You don’t know what you’re talking about.

Geekbench is made to be cross-platform/architecture.
A 1600 score means it performs 1.6x better than an Intel i3-8100, which is the baseline of 1000; this is regardless of architecture, power consumption, etc. It’s raw performance.
A score of 2000 is 2x better than a score of 1000.
Get your facts straight before you start talking nonsense.

Again, you are agreeing with me that it is an architecture benchmark. I am not sure where the confusion is here.

Geekbench is cross-platform because it only compares architectures. Comparisons would be meaningless otherwise because of x86 vs ARM differences along with dozens of other variables.

So you're telling me that a 5W iPhone CPU running at 2 GHz is more powerful than a desktop-class CPU getting 105W of power, with the same architecture efficiency? That is physically impossible. This is how the creator of Geekbench explains it.

To further illustrate this, here is a 32 core server grade CPU with a 170W TDP (scroll down and click multi core):

And here is an Apple A12X which has a higher multicore score:

Why do you think server farms aren't full of A12X CPUs?
 
You keep describing architecture, which is exactly what I am saying.

The misinterpretation of Geekbench is when people look at a 5W CPU running at 1.9Ghz and think it's as fast as a 105W CPU running at 5.3GHz with similar architecture efficiency because they get similar Geekbench scores. Anyone who understands physics knows why that is literally impossible, and why your iPhone isn't faster than a 105W Desktop grade CPU or similar.
You're way off on this. You could start by reading the way Geekbench itself characterizes its outputs: http://support.primatelabs.com/kb/geekbench/interpreting-geekbench-5-scores.

Andrei's write-up on the A14 goes into detail on Geekbench as well, and notes that it tracks other benchmarks such as Spec fairly closely: https://www.anandtech.com/show/16226/apple-silicon-m1-a14-deep-dive.

You could also just spend some time investigating examples of "real world" usage if you want. We don't have them for M1 yet, of course, but even a little research on things like encoding video or processing batch photo edits on the two-year-old design A12Z should demonstrate that its performance vis-a-vis so-called desktop-class processors is competitive and often better.
 
You keep describing architecture, which is exactly what I am saying.

The misinterpretation of Geekbench is when people look at a 5W CPU running at 1.9Ghz and think it's as fast as a 105W CPU running at 5.3GHz with similar architecture efficiency because they get similar Geekbench scores. Anyone who understands physics knows why that is literally impossible, and why your iPhone isn't faster than a 105W Desktop grade CPU or similar.

Erm, this is really not how things work. A CPU can consume 100 watts and still be dead slow, because most of that energy is wasted. Apple simply needs less energy to do the same amount of work, because their CPUs work “smarter”. I am not sure what about this concept is so difficult to understand.

Another example. Imagine a huge steam engine burning an incredible amount of coal to output two horsepower. A similar amount of power can be achieved with today’s ultra-compact electric motors at a much lower energy input. Why? Because the electric motor wastes less energy. It’s much more efficient. The steam engine will transform most of that burned coal into heat that radiates into the environment; only a fraction will be used to turn the turbine. If you want, the Intel CPU is the steam engine. The Apple CPU is the modern motor. Much less energy needed for the same amount of work. As you say, it’s physics.
 
If you compare one Geekbench score to another, you are comparing architecture efficiency only, and not raw performance in an application. That is a very important distinction. Apple's CPUs are extremely efficient, but that does not mean you could pop one in your desktop PC and it would be as fast as a desktop CPU.

As an example, Apple's 3-5W iPhone CPU running at 1.8 GHz or whatever might have a Geekbench score of 1600 single core. A 105W desktop CPU running at 5.3 GHz might have the same Geekbench single-core score of 1600. This means their architectures are equally efficient; it does not mean the iPhone has the same processing power as the 105W desktop CPU, as the wattage and clock speeds are not being accounted for. The creator of Geekbench used a similar example to explain how it is an architecture benchmark.

What Apple does is take these results, and tell people their iPhones are as fast as a PC/notebook, which is at best misleading and at worst blatantly false.

Geekbench is a great benchmark, it just doesn't work how people often think it does, and you can certainly be forgiven for that because the way companies like Apple use Geekbench scores in marketing materials is highly misleading. Just like how they compared their M1 chip to one of the worst possible 4-core Intel chips running on 6-year-old architecture when they made their "3X" "5X" faster claims in their presentation. They may be technically correct, but they actively tried to hide from people what they were really comparing to.

I hope I am explaining myself clearly.

I don’t want to be rude but you have no idea what you are talking about.
A certain sub-score in GB5 means that a certain number of calculations was completed in a certain time in that test. It tests raw CPU performance.
 
You keep describing architecture, which is exactly what I am saying.

The misinterpretation of Geekbench is when people look at a 5W CPU running at 1.9Ghz and think it's as fast as a 105W CPU running at 5.3GHz with similar architecture efficiency because they get similar Geekbench scores. Anyone who understands physics knows why that is literally impossible, and why your iPhone isn't faster than a 105W Desktop grade CPU or similar.
Also, you keep attributing your position to John Poole, but he himself is on record in many places explicitly saying they're comparable: “the short answer is yes, that the scores are comparable across platforms, so if an iPhone 8 scores higher than an i5, then the iPhone 8 is faster than the i5.”

The caveat he provides is that one must account for thermal overhead, by which he means that a really fast phone will inevitably throttle and won't sustain. But it still really is "as powerful" for the several minute duration of the benchmark.
 
Geekbench is a great benchmark, it just doesn't work how people often think it does, and you can certainly be forgiven for that because the way companies like Apple use Geekbench scores in marketing materials is highly misleading. Just like how they compared their M1 chip to one of the worst possible 4-core Intel chips running on 6-year-old architecture when they made their "3X" "5X" faster claims in their presentation. They may be technically correct, but they actively tried to hide from people what they were really comparing to.

Actually, Apple is rather transparent on its website, in the footnotes, about the basis for its two/three/four-times-speed claims. No mention of Geekbench as far as I can see:

“Testing conducted by Apple in October 2020 using preproduction MacBook Air systems with Apple M1 chip and 8-core GPU, as well as production 1.2GHz quad-core Intel Core i7-based MacBook Air systems, all configured with 16GB RAM and 2TB SSD. Tested with prerelease Final Cut Pro 10.5 using a 55-second clip with 4K Apple ProRes RAW media, at 4096x2160 resolution and 59.94 frames per second, transcoded to Apple ProRes 422. Performance tests are conducted using specific computer systems and reflect the approximate performance of MacBook Air.”

M1 has therefore been compared to the i7 (not the i3, or Atom, or...), from the 10th gen Intel Ice Lake lineup released just one year ago.
 
Another set of benchmarks (not verified):


If these are accurate then the GPU sits between the Radeon Pro Vega 20 and the 5300M which is very impressive for an integrated GPU.
 
Another set of benchmarks (not verified):


If these are accurate then the GPU sits between the Radeon Pro Vega 20 and the 5300M which is very impressive for an integrated GPU.
That's slightly higher than I would have anticipated, but it's perfectly plausible it's legit. A12Z tended to be in the ballpark of, or a bit above, 50fps on Aztec Ruins High, Offscreen. Assuming a 30% improvement to the individual GPU cores as with A12-A14 would suggest 65-70fps, and the M1 is likely a bit faster than that per-core.

EDIT: In the context of other mobile chips, that's about 45-50% faster than the top-end Tiger Lake Xe, and a similar margin faster than the MX450.

EDIT #2 out of excitement: Note that, as @leman and others have speculated for a while now, a doubling of GPU cores from 8 -> 16 should land, at least in GFXBench, about 15% above the Metal score of the 5600m in the top-end MBP16. Assuming no other fundamental changes to the cores / memory / etc. It also suggests that the shipping M1 in a sufficiently cooled chassis like the MBP and Mini may be, as insane as it sounds, faster for gaming than the 1650 Ti Max Q in the Razer Blade Stealth.
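The back-of-envelope projection above can be written out explicitly. This is a naive linear-scaling sketch under the post's stated assumptions (roughly 50 fps for A12Z on Aztec Ruins High Offscreen, ~30% faster GPU cores, perfect scaling with core count), not measured data:

```python
def projected_fps(base_fps: float, per_core_gain: float, core_multiplier: float = 1.0) -> float:
    # Naive model: fps scales with per-core speed and linearly with core count.
    # Real GPUs rarely scale perfectly (memory bandwidth, thermals, etc.).
    return base_fps * (1.0 + per_core_gain) * core_multiplier

a12z = 50.0                               # ~Aztec Ruins High Offscreen, from the post
m1_est = projected_fps(a12z, 0.30)        # 8-core M1 estimate: ~65 fps
m1x_est = projected_fps(a12z, 0.30, 2.0)  # hypothetical 16-core part: ~130 fps
print(m1_est, m1x_est)
```

Under those assumptions the doubled-core part lands around 130 fps, which is roughly where the 15%-above-5600M speculation comes from.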
 
Even the GPU seems to be competitive. At 10-15W, M1 performs the same as AMD/NVIDIA parts at much higher TDP. There would be no problem adding a 32-core GPU for nearly 4X the performance, corresponding to a 5700 XT, even if the chip draws perhaps around 50W.

We already knew the M1 CPU is a winner, but the GPU may be just as competitive. Also, the neural engines will offload some compute from the GPU, and there the gains are even larger. Seems good to me on the GPU/compute front as well.

I wish benchmarks reported performance per watt.
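For what it's worth, deriving performance per watt from any benchmark score is a one-liner. The numbers below are purely illustrative placeholders, not measurements of any real chip:

```python
def perf_per_watt(score: float, watts: float) -> float:
    """Benchmark score per watt of power draw (higher is better)."""
    return score / watts

# Illustrative only: equal scores at one third the power = 3x the perf/watt.
integrated = perf_per_watt(18000, 15)
discrete = perf_per_watt(18000, 45)
print(integrated / discrete)  # 3.0
```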
I think they should use a dedicated GPU instead of an SoC in larger products like the 16-inch and iMacs.
 
EDIT #2 out of excitement: Note that, as @leman and others have speculated for a while now, a doubling of GPU cores from 8 -> 16 should land, at least in GFXBench, about 15% above the Metal score of the 5600m in the top-end MBP16. Assuming no other fundamental changes to the cores / memory / etc. It also suggests that the shipping M1 in a sufficiently cooled chassis like the MBP and Mini may be, as insane as it sounds, faster for gaming than the 1650 Ti Max Q in the Razer Blade Stealth.

I don't necessarily trust GFXBench that much, but these results are very impressive. It is on the upper end of my estimates. If M1 can consistently match a 1650 Ti Max Q (a 35W dedicated Nvidia Turing GPU with GDDR6!), that would be absolutely insane.
 
Again, you are agreeing with me that it is an architecture benchmark. I am not sure where the confusion is here.

Geekbench is cross-platform because it only compares architectures. Comparisons would be meaningless otherwise because of x86 vs ARM differences along with dozens of other variables.

So you're telling me that a 5W iPhone CPU running at 2 GHz is more powerful than a desktop-class CPU getting 105W of power, with the same architecture efficiency? That is physically impossible. This is how the creator of Geekbench explains it.

To further illustrate this, here is a 32 core server grade CPU with a 170W TDP (scroll down and click multi core):

And here is an Apple A12X which has a higher multicore score:

Why do you think server farms aren't full of A12X CPUs?
Hoping not to sound too harsh, I'll just say you are wrong. Benchmarks measure work over time; they are just a lot of short tests. They use an array of tasks a CPU performs every day and weight the score as the team thinks it should be prioritized (integer/floating point/crypto). If a CPU does something faster, then it does it faster; if the microarchitecture and/or the architecture itself is better, then it's better. What you should care about when comparing architectures is the performance with which they perform tasks. If I invent a new architecture and it's losing in all benchmarks, it means it's a bad architecture and will perform badly in the real world. I have never heard of a low-performing general-purpose CPU that does amazingly well in GB, for example (not discussing an ASIC that targets specific workloads).

It's hard to grasp, and I see it everywhere on the internet: people just can't get past the fact that this is the new reality, and instead of understanding what's going on, they just go "are you telling me that...? That's impossible!"
It's an engineering feat, and a measurable one; all you need is to absorb, analyze and deduce.

If Intel/AMD had created such a CPU (regardless of architecture, I mean a 10W monster), they would be immediately praised and no one would doubt their breakthrough. Because it's Apple, folks can't handle it, seemingly believing that an iPhone company can't possibly compete with the Intel/AMD giants of the silicon world. But as you see with Tesla, sometimes the giants (German/American/Japanese automotive) sleep too long before realizing someone is coming for their meal.

TL;DR: you cannot believe it, but as we are talking about computers, all you need to do is LOOK at and ANALYZE the data, and if after that you STILL cannot believe it, welp, that's a different kind of talk we need to have :)
Be open-minded and understand what's going on instead of ignoring what's being presented to you. Have a good one.
 
If you compare one Geekbench score to another, you are comparing architecture efficiency only, and not raw performance in an application. That is a very important distinction. Apple's CPUs are extremely efficient, but that does not mean you could pop one in your desktop PC and it would be as fast as a desktop CPU.

As an example, Apple's 3-5W iPhone CPU running at 1.8 GHz or whatever might have a Geekbench score of 1600 single core. A 105W desktop CPU running at 5.3 GHz might have the same Geekbench single-core score of 1600. This means their architectures are equally efficient; it does not mean the iPhone has the same processing power as the 105W desktop CPU, as the wattage and clock speeds are not being accounted for. The creator of Geekbench used a similar example to explain how it is an architecture benchmark.

What Apple does is take these results, and tell people their iPhones are as fast as a PC/notebook, which is at best misleading and at worst blatantly false.

Geekbench is a great benchmark, it just doesn't work how people often think it does, and you can certainly be forgiven for that because the way companies like Apple use Geekbench scores in marketing materials is highly misleading. Just like how they compared their M1 chip to one of the worst possible 4-core Intel chips running on 6-year-old architecture when they made their "3X" "5X" faster claims in their presentation. They may be technically correct, but they actively tried to hide from people what they were really comparing to.

I hope I am explaining myself clearly.

I don’t know how to say this nicely but you are completely wrong. Geekbench measures real performance, end of story.

I’ve never once heard anyone explain your theory of architecture and how a 1600 score is relative to the wattage. It doesn’t even make sense. Why would any benchmark measure this? People care about real performance. Especially in a cross platform and cross architecture benchmark. Do you think a 5 watt chip with a score of 1600 is 1/20th the performance of a 100 watt chip? That’s basically what you’re saying.

A 100-watt Intel chip uses much more power because it runs at a much higher clock speed, 4.5-5.2 GHz. That uses massively more power than an Apple chip at 3 GHz. Apple just flat out has a better architecture. Andrei Frumusanu measured the A12 vs the 9900K and found the A12 has 70% higher IPC. A 3 GHz Apple chip is faster than an Intel chip at 5 GHz regardless of how much more power Intel uses. It might be hard to believe, but it's true. And yes, the chip in iPhones is as fast as Apple says.
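The clock-versus-IPC point can be made concrete with a first-order model: single-core throughput is roughly instructions per cycle times cycles per second. Using the ~70% IPC advantage cited above, with Intel's IPC normalized to 1.0 (units are arbitrary, and the model deliberately ignores memory stalls, turbo behavior, and workload mix):

```python
def single_core_throughput(ipc: float, clock_ghz: float) -> float:
    # First-order model: instructions/second ~ IPC * clock frequency.
    # Useful only for intuition; real performance depends on workload.
    return ipc * clock_ghz

intel = single_core_throughput(1.0, 5.0)  # 5 GHz, baseline IPC
apple = single_core_throughput(1.7, 3.0)  # 3 GHz, ~70% higher IPC
print(apple > intel)  # the 3 GHz chip comes out ahead: ~5.1 vs 5.0
```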

As for your comment about Apple comparing it to a 4 core Intel... why wouldn’t they? That was the computer they were replacing. Of course they’ll compare it to that as that is what is relevant to customers of the Mac mini.

I'd be extremely interested in more detail here. You're basically saying, I cannot use Geekbench to compare the performance of two different laptops with two different amounts of energy consumption?

Yes you can. That post is flatly untrue, don’t believe it.
 
You keep describing architecture, which is exactly what I am saying.

The misinterpretation of Geekbench is when people look at a 5W CPU running at 1.9Ghz and think it's as fast as a 105W CPU running at 5.3GHz with similar architecture efficiency because they get similar Geekbench scores. Anyone who understands physics knows why that is literally impossible, and why your iPhone isn't faster than a 105W Desktop grade CPU or similar.

You’re making the mistake of thinking an Intel chip uses 100 watts on a single core when you’re doing a single-core benchmark. It doesn’t. It probably uses 25 watts or so at most for that core.

The only “physics” here is the clock speed. A higher clock will use more energy, but the performance is higher if your IPC is higher, which Apple’s is far higher.


That’s Metal and the other score is OpenCL. We haven’t seen a Metal score for the M1 from what I’ve seen.

I hope so too, still don’t understand why the 4-core GPU in the iPad Air 4 gets a 12,000 Metal score. Really wanted the M1 to double that.

It probably will. We haven’t seen it yet.
 
Not wanting to start a new thread:
Tim Millet has some great answers there (one of the top tech folks in the chip team).
 
Can someone explain these results to me? I'm confused. The two screenshots seem to suggest the older i7 outperforms the new M1, which is meant to be "up to 3x faster than previous Minis" (especially with apps like Logic etc). Have I got this right? I also ran my own benchmark and got the following results. I thought the results must be wrong, but maybe not?


Cinebench R23:
Intel Core i7-8700B CPU
MacMini 6 Cores, 12 Threads @ 3.2 GHz (Single Core @ 4.3 GHz, Multi Core @ 3.9 GHz est.)
Multi: 7084 pts
Single: 1160
 
Can someone explain these results to me? I'm confused. The two screenshots seem to suggest the older i7 outperforms the new M1, which is meant to be "up to 3x faster than previous Minis" (especially with apps like Logic etc). Have I got this right? I also ran my own benchmark and got the following results. I thought the results must be wrong, but maybe not?


Cinebench R23:
Intel Core i7-8700B CPU
MacMini 6 Cores, 12 Threads @ 3.2 GHz (Single Core @ 4.3 GHz, Multi Core @ 3.9 GHz est.)
Multi: 7084 pts
Single: 1160
1. The screenshotted Mac is not M1. It is the A12Z dev kit. That's essentially a 2018 part, two generations behind.

2. M1's performance advantage is single-core. It will not compete as well in multi-core against high end 8-core Intel parts or whatever. But that's OK, because M1 is an entry level part.

3. The SoCs for the high-end MacBook Pros and iMacs won't be out until 2021, and will likely have double the number of performance cores as M1, which will dramatically improve multi-core performance... but likely won't affect single-core performance that much.

4. Cinebench is not a broad spectrum benchmark. Cinebench is a pure CPU rendering benchmark. Other apps may not necessarily behave as Cinebench might suggest. Specifically, there is other silicon on M1 which may give additional boosts to certain applications, esp. multimedia content creation applications.
 
Can someone explain these results to me? I'm confused. The two screenshots seem to suggest the older i7 outperforms the new M1, which is meant to be "up to 3x faster than previous Minis" (especially with apps like Logic etc). Have I got this right? I also ran my own benchmark and got the following results. I thought the results must be wrong, but maybe not?


Cinebench R23:
Intel Core i7-8700B CPU
MacMini 6 Cores, 12 Threads @ 3.2 GHz (Single Core @ 4.3 GHz, Multi Core @ 3.9 GHz est.)
Multi: 7084 pts
Single: 1160

You understand that Apple was referring to the Mac Mini that this one replaced, the one with the i3.
 
Can someone explain these results to me? I'm confused. The two screenshots seem to suggest the older i7 outperforms the new M1, which is meant to be "up to 3x faster than previous Minis" (especially with apps like Logic etc). Have I got this right?

No results for M1 have been posted yet, so no, I don’t think you got this right :) The screenshot on the first page is not M1.
 
1. The screenshotted Mac is not M1. It is the A12Z dev kit. That's essentially a 2018 part, two generations behind.

2. M1's performance advantage is single-core. It will not compete as well in multi-core against high end 8-core Intel parts or whatever. But that's OK, because M1 is an entry level part.

3. The SoCs for the high-end MacBook Pros and iMacs won't be out until 2021, and will likely have double the number of performance cores as M1, which will dramatically improve multi-core performance... but likely won't affect single-core performance that much.

4. Cinebench is not a broad spectrum benchmark. Cinebench is a pure CPU rendering benchmark. Other apps may not necessarily behave as Cinebench might suggest. Specifically, there is other silicon on M1 which may give additional boosts to certain applications, esp. multimedia content creation applications.


Thanks for your response;

1. Ah ok. So are people predicting the M1 chip just released will be faster than the A12Z, and by how much?

2. Is the M1 likely to exceed the performance of my 6-core i7-8700B (in particular for Logic)?

3. Do you think they will release a higher-end SoC for the Mac mini also?
 
You understand that Apple was referring to the Mac Mini that this one replaced, the one with the i3.

Yes, but I was thinking the 2020 i3 might be similar in performance to my 2018 6-core i7, in which case it would be 3x faster than my current machine (i7). Maybe the wrong assumption...

Is the 2020 i3 similar in performance to the 2018 i3? I can only find benchmarks for the 2018 i3: https://www.macworld.com/article/3318501/799-mac-mini-review.html

If so, these figures would suggest the 2020 i3 provides approx. half the performance of my 2018 i7. In the above article it says the 2018 i3 scored 588 (I presume this is single core?) vs my 1160 pts for the 2018 i7 (7084 pts multi).
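Taking the two single-core numbers quoted here at face value (and assuming they come from the same benchmark version, which is worth double-checking against the linked review), the ratio works out as:

```python
i3_2018_single = 588   # figure from the linked Macworld review
i7_2018_single = 1160  # poster's own i7-8700B single-core result
print(round(i3_2018_single / i7_2018_single, 2))  # ~0.51, i.e. roughly half
```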
 
1. Ah ok. So are people predicting the M1 chip just released will be faster than the A12Z, and by how much?
50% faster? Not sure.

2. Is the M1 likely to exceed the performance of my 6-core i7-8700B (in particular for Logic)?
In Logic? Dunno, but Apple claims M1 does really well in Logic.

I suggest waiting for the real world comparisons.

3. Do you think they will release a higher-end SoC for the Mac mini also?
Most definitely, but likely not until 2021.

My guess is iMac, MacBook Pro, Mac mini in 2021, and Mac Pro in 2022. Not sure if they'll drop the iMac Pro or not.
 