You are increasing power for performance gains. Yes, the chip is more efficient, but it's drawing more power regardless. More efficiency actually means doing more with the same or less power. In this case it just moderates the increase in energy use, but there still is one.
Again, it is only drawing more power at peak, and the tests showing it lasting longer already demonstrate that. They could have capped it so it never exceeds the prior peak draw, but why would they do that if the peak is only hit when needed and the non-peak usage saves enough energy that overall battery life ends up longer? The true test will be when someone hammers the old and new models with a full-day multi-core render and compares what percentage of the render each has completed when its battery dies, to see whether the peak-consumption increase is larger than the speed increase. If so, then sure, it is less efficient at peak. The average-use-case review I linked shows it as more efficient in normal usage.
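To make that proposed test concrete, here's a minimal sketch with invented numbers (the battery capacity, peak draw and render speed below are placeholders, not real figures for any machine). If the speed increase outruns the peak-power increase, the newer machine finishes more of the render even though its battery dies sooner:

```python
# Rough sketch of the render-until-dead comparison described above.
# battery_wh, peak_watts and frames_per_hour are invented placeholders,
# not measurements of any real M1/M2 machine.

battery_wh = 50.0                                     # hypothetical battery capacity
old = {"peak_watts": 20.0, "frames_per_hour": 100.0}  # hypothetical previous model
new = {"peak_watts": 24.0, "frames_per_hour": 130.0}  # +20% peak power, +30% speed

for name, chip in (("old model", old), ("new model", new)):
    hours = battery_wh / chip["peak_watts"]       # runtime at sustained peak draw
    frames = hours * chip["frames_per_hour"]      # work finished before the battery dies
    print(f"{name}: battery dies after {hours:.1f} h with {frames:.0f} frames rendered")
```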
 
Again, it is only drawing more power at peak, and the tests showing it lasting longer already demonstrate that. They could have capped it so it never exceeds the prior peak draw, but why would they do that if the peak is only hit when needed and the non-peak usage saves enough energy that overall battery life ends up longer? The true test will be when someone hammers the old and new models with a full-day multi-core render and compares what percentage of the render each has completed when its battery dies, to see whether the peak-consumption increase is larger than the speed increase. If so, then sure, it is less efficient at peak. The average-use-case review I linked shows it as more efficient in normal usage.
It draws more power during non-peak times as well. Look at the graphs.
 
It will likely be the update that replaces my 2018 Mac mini when Apple releases an M2 Mac mini. I'm expecting that to be announced in September. Should be a huge upgrade for me, though very few files that I work with are large enough or complicated enough for me to notice a difference. Honestly, I will be doing it partly for the electricity savings and the heat reduction in my home office. My 2018 Mac mini with only 8GB of RAM works fine for my office workflow, so it's just a nice-to-have upgrade and not a need-to-have upgrade.

Still debating what will replace my Mac mini i7 (late 2018). I have an eGPU and 32GB of RAM on it, but the new minis kill it very easily with just the M1. Can only imagine what an M2-equipped mini will bring to the line.

My mini will become a server, but I don't know if I'll go MacBook Air, 13-inch Pro or iMac (holding out for a larger-screen version). Not sure I want to go mini next time around, but if it's compelling enough then I might add it to my list.

What kills me about the 2018 mini is that if you want to run more than one monitor, you definitely need an eGPU. I was running it with 32GB for about a year, and the mini just couldn't handle one 4K monitor plus a 1080p secondary.

Anyhow, I hope the M2 mini will be everything you’re hoping for!
 
Wrong. Any increase to frequency via multipliers or base clock requires a new, higher voltage setting. Have you never overclocked?
They're not the same cores. They're Avalanche and Blizzard cores, as opposed to Icestorm and Firestorm cores.
 
That's understood, but given that the power requirements increased for the M2, there is only one thing that can give: voltage.
The chip is bigger, there are more GPU cores. I'm not sure what you're saying.

You kept saying because the M2 cores run at a higher frequency than the M1 cores, the voltage must be higher. That's simply not true if the cores themselves are different and are more efficient.
 
The chip is bigger, there are more GPU cores. I'm not sure what you're saying.

You kept saying because the M2 cores run at a higher frequency than the M1 cores, the voltage must be higher. That's simply not true if the cores themselves are different and are more efficient.
In all chips, higher voltages are correlated with higher frequencies. There is no way around that; it's physics. Voltage is what pushes the on/off (0 or 1) through all stages of a CPU. Not enough voltage? Your data becomes corrupted, as the CPU doesn't understand a 0.5, 0.25 or 0.33 (only 1 or 0). Again, that's the physics of transistors and how logic gates are built.

That said, efficiency is about how the execution pipeline of a CPU works: how much work can be done on 1W of power. That means you can do more with less. In other words, if the M2 were clocked exactly like the M1, voltage and hence power consumption would be (relatively) the same, but work output would be better than the M1's. However, while the M2's higher efficiency means it can consume less power for the same work, achieving its faster speed requires a higher voltage and hence more power.

All that happened is that the efficiency gains of the architecture allowed the M2 to be, say, a 5W chip instead of a 10W chip (throwing numbers out just to illustrate the point). In actuality, the wattage will still rise with the clock, because the voltage rises along with it.
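For anyone who wants to see why the voltage matters so much, the usual back-of-the-envelope model for switching power is P ≈ C·V²·f. A minimal sketch with made-up capacitance, voltage and clock figures (none of these are actual M1/M2 numbers):

```python
# Hedged illustration of the dynamic-power approximation P ≈ C * V^2 * f.
# C_EFF and the voltage/clock pairs below are invented purely to show the trend.

def dynamic_power(c_eff, volts, freq_hz):
    """Approximate switching power: effective capacitance * V^2 * f."""
    return c_eff * volts**2 * freq_hz

C_EFF = 1.0e-9  # hypothetical effective switched capacitance in farads

lower_clock  = dynamic_power(C_EFF, volts=1.00, freq_hz=3.2e9)
higher_clock = dynamic_power(C_EFF, volts=1.05, freq_hz=3.5e9)  # small voltage bump for the higher clock

print(f"lower clock : {lower_clock:.2f} W")
print(f"higher clock: {higher_clock:.2f} W")
print(f"increase    : {(higher_clock / lower_clock - 1) * 100:.0f}%")
```

Because the voltage enters squared, even a small voltage bump makes peak power climb faster than the clock does, which is the point being made above.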
 
In all chips, higher voltages are correlated with higher frequencies. There is no way around that; it's physics. Voltage is what pushes the on/off (0 or 1) through all stages of a CPU. Not enough voltage? Your data becomes corrupted, as the CPU doesn't understand a 0.5, 0.25 or 0.33 (only 1 or 0). Again, that's the physics of transistors and how logic gates are built.

That said, efficiency is about how the execution pipeline of a CPU works: how much work can be done on 1W of power. That means you can do more with less. In other words, if the M2 were clocked exactly like the M1, voltage and hence power consumption would be (relatively) the same, but work output would be better than the M1's. However, while the M2's higher efficiency means it can consume less power for the same work, achieving its faster speed requires a higher voltage and hence more power.

All that happened is that the efficiency gains of the architecture allowed the M2 to be, say, a 5W chip instead of a 10W chip (throwing numbers out just to illustrate the point). In actuality, the wattage will still rise with the clock, because the voltage rises along with it.
I can't help but feel we're talking past each other. What you're saying makes sense if you're talking about clock speeds within the same chip (if we ignore undervolting), but when we are talking about the M1 and M2, aren't we talking about different chips?

You understand where you're losing me, right? Are you saying hypothetically that in 10 years Apple can release the M6 chip and if it's clocked the same as the M2 it's going to use the same voltage?

Unless I have a fundamental lack of understanding of electricity (which is possible), I'm pretty sure my old AMD Phenom II X4 CPU clocked at 3.2GHz has a higher voltage than the M2 clocked at 3.49GHz. That's a 125W CPU, by the way.
 
I can't help but feel we're talking past each other. What you're saying makes sense if you're talking about clock speeds within the same chip (if we ignore undervolting), but when we are talking about the M1 and M2, aren't we talking about different chips?

You understand where you're losing me, right? Are you saying hypothetically that in 10 years Apple can release the M6 chip and if it's clocked the same as the M2 it's going to use the same voltage?

Unless I have a fundamental lack of understanding of electricity (which is possible), I'm pretty sure my old AMD Phenom II X4 CPU clocked at 3.2GHz has a higher voltage than the M2 clocked at 3.49GHz. That's a 125W CPU, by the way.
The general supply voltage is almost always the same across the same type of main architecture, say ARM or x86. CPUs usually have common power-up voltage inputs of either 5V or 3.3V. Unsure what Apple uses.

What varies is the core voltage (the one that determines the 1s and 0s inside the transistors/gates). In Intel chips this can go from 1.1V (or less) up to 1.35V, in steps of 0.01V or 0.02V, to keep the clock/frequency stable. With Apple's M-series it is unknown to me. This is what determines the power consumption. There is also the matter of the amperage/coulombs through the chip, which also affects power draw.

Per your Phenom II CPU, that 125W is usually the TDP and is not equal to power consumed.
 
The general supply voltage is almost always the same across the same type of main architecture, say ARM or x86. CPUs usually have common power-up voltage inputs of either 5V or 3.3V. Unsure what Apple uses.

What varies is the core voltage (the one that determines the 1s and 0s inside the transistors/gates). In Intel chips this can go from 1.1V (or less) up to 1.35V, in steps of 0.01V or 0.02V, to keep the clock/frequency stable. With Apple's M-series it is unknown to me. This is what determines the power consumption. There is also the matter of the amperage/coulombs through the chip, which also affects power draw.
Hmm I think we are just talking past each other.

Per your Phenom II CPU, that 125W is usually the TDP and is not equal to power consumed.
OK sure, but that chip still consumes many times more power than the M2. My question remains unanswered: the Phenom II X4 consumes vastly, vastly more power than the M2 despite being clocked lower and having fewer cores. Doesn't this disprove your assertion that higher clock speed always equals more voltage?
 
Hmm I think we are just talking past each other.


OK sure, but that chip still consumes many times more power than the M2. My question remains unanswered: the Phenom II X4 consumes vastly, vastly more power than the M2 despite being clocked lower and having fewer cores. Doesn't this disprove your assertion that higher clock speed always equals more voltage?
It doesn't, because power is also a function of the amperage inside the chip. From my knowledge, the amps in a chip are determined by the architecture (by the way the logic gates are arranged).

Also, recall that chips have two or three voltages in them: one or two steady-state voltages and one variable voltage mainly used for frequency. So the bigger the chip, the more power you'll need to sustain voltages across it. More so if the transistors are bigger, which in the case of the Phenom II X4 were 45nm vs 5nm on the M1/M2.
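To illustrate the amperage point with throwaway numbers (neither line below corresponds to a real chip):

```python
# Hedged illustration that power is voltage times current (P = V * I), so two
# chips at similar core voltages can draw wildly different power if the
# architecture and chip size pull different current. All values are invented.

def rail_power(volts, amps):
    """Power drawn on a single supply rail."""
    return volts * amps

big_desktop_chip = rail_power(volts=1.40, amps=90.0)  # hypothetical large, older-node part
small_mobile_soc = rail_power(volts=1.00, amps=15.0)  # hypothetical small, newer-node SoC

print(f"big desktop-style chip: {big_desktop_chip:.0f} W")
print(f"small mobile-style SoC: {small_mobile_soc:.0f} W")
```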
 
Hmm I think we are just talking past each other.


OK sure, but that chip still consumes many times more power than the M2. My question remains unanswered: the Phenom II X4 consumes vastly, vastly more power than the M2 despite being clocked lower and having fewer cores. Doesn't this disprove your assertion that higher clock speed always equals more voltage?
Just to add a bit more:

[Attached screenshot: Intel ultra-low-power CPU lineup, showing base clocks and base power]


That's a screenshot of Intel's offerings in ultra-low-power CPUs. Notice anything? The chips with higher base clocks always consume more base power. This is due to the voltage increase needed at higher frequencies.
 
It doesn't, because power is also a function of the amperage inside the chip. From my knowledge, the amps in a chip are determined by the architecture (by the way the logic gates are arranged).

Also, recall that chips have two or three voltages in them: one or two steady-state voltages and one variable voltage mainly used for frequency. So the bigger the chip, the more power you'll need to sustain voltages across it. More so if the transistors are bigger, which in the case of the Phenom II X4 were 45nm vs 5nm on the M1/M2.
The more you respond, the further into the weeds we get without addressing the original argument.

OK, so the AMD Phenom II X4 is a 3.2GHz chip running at 1.4V. The AMD Ryzen 2600 is a 3.4GHz chip that doesn't go above 1.3V.

Based on what you're saying that shouldn't be possible. You keep saying higher clock speed equals higher voltage across all CPUs. Why can the Ryzen 2600 run at a higher clock speed with less voltage?
 
The more you respond, the further into the weeds we get without addressing the original argument.

OK, so the AMD Phenom II X4 is a 3.2GHz chip running at 1.4V. The AMD Ryzen 2600 is a 3.4GHz chip that doesn't go above 1.3V.

Based on what you're saying that shouldn't be possible. You keep saying higher clock speed equals higher voltage across all CPUs.
Architecture changes help reduce/increase power requirements and determine new voltage settings. Yet, increasing frequency will always require an increase in voltage.

Also, you are comparing a chip made on 45nm to a chip made on 12nm. Voltage will drop as power demands decrease with node reductions. However, as soon as you increase the frequency, the voltage has to go up.
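A toy model of the claim, with invented curves (the offsets and slopes below are not real silicon data): on a given node the minimum stable voltage climbs with frequency, while a newer node shifts the whole curve down, which is how a 3.4GHz chip on a smaller node can still need less voltage than a 3.2GHz chip on an older one.

```python
# Hedged sketch: minimum stable core voltage vs frequency on two hypothetical
# process nodes. The linear curves below are invented for illustration only.

def min_voltage(freq_ghz, base_v, slope_v_per_ghz):
    """Toy V/f curve: minimum stable core voltage at a given clock."""
    return base_v + slope_v_per_ghz * freq_ghz

older_node = {"base_v": 0.90, "slope_v_per_ghz": 0.16}  # hypothetical "45nm-like" node
newer_node = {"base_v": 0.70, "slope_v_per_ghz": 0.12}  # hypothetical "12nm-like" node

for freq in (3.2, 3.4):
    print(f"{freq} GHz -> older node {min_voltage(freq, **older_node):.2f} V, "
          f"newer node {min_voltage(freq, **newer_node):.2f} V")
```

Within either toy curve, raising the clock raises the required voltage; moving to the newer curve lowers it at the same clock, which covers both halves of the argument above.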
 
Architecture changes help reduce/increase power requirements and determine new voltage settings. Yet, increasing frequency will always require an increase in voltage.
The frequency increased from 3.2GHz on the Phenom II X4 to 3.4GHz on the Ryzen 2600, and the voltage and power consumption went down. The frequency increased, the voltage decreased.
 
The frequency increased from 3.2GHz on the Phenom II X4 to 3.4GHz on the Ryzen 2600, and the voltage and power consumption went down. The frequency increased, the voltage decreased.
Also, you are comparing a chip made on 45nm to a chip made on 12nm. Voltage will drop as power demands decrease with node reductions. However, as soon as you increase the frequency, the voltage has to go up.
 
Also, you are comparing a chip made on 45nm to a chip made on 12nm. Voltage will drop as power demands decrease with node reductions. However, as soon as you increase the frequency, the voltage has to go up.
So reducing the voltage without a node reduction is categorically impossible? I am trying really hard to understand your argument and you are not making it easy.
 
So reducing the voltage without a node reduction is categorically impossible?
Yes, you'll break physics otherwise. You can't have a frequency increase without the accompanying power draw on the same node.

Now, if you ask about the efficiency increase on the same/similar node, that just means how much more number crunching can be done using the same power. In other words, how much more math the CPU can do using the same 1W.
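Put another way, with throwaway numbers (the instructions-per-clock and wattage figures below are invented, not real chip specs):

```python
# Hedged sketch of "efficiency" as work per watt on the same node: a design that
# executes more instructions per clock gets more math done from the same 1 W.
# All figures below are invented for illustration.

def giga_instructions_per_watt(instr_per_clock, freq_ghz, watts):
    """Billions of instructions executed per second, divided by power draw."""
    return instr_per_clock * freq_ghz / watts

older_design = giga_instructions_per_watt(instr_per_clock=4, freq_ghz=3.2, watts=5.0)
newer_design = giga_instructions_per_watt(instr_per_clock=5, freq_ghz=3.2, watts=5.0)

print(f"older design: {older_design:.2f} Ginstr per watt")
print(f"newer design: {newer_design:.2f} Ginstr per watt")
```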
 
Yes, you'll break physics otherwise. You can't have a frequency increase without the accompanying power draw on the same node.

Now, if you ask about the efficiency increase on the same/similar node, that just means how much more number crunching can be done using the same power. In other words, how much more math the CPU can do using the same 1W.
So even if Apple stayed on 5nm for 10 years and made all sorts of other architecture changes, 3.49GHz would always require the same voltage? A node shrink is literally the only way to lower the voltage?
 
So even if Apple stayed on 5nm for 10 years and made all sorts of other architecture changes, 3.49GHz would always require the same voltage? A node shrink is literally the only way to lower the voltage?
You can move stuff around to decrease power draw, but without a node reduction the decrease in power draw will be minimal, next to negligible. The theoretical case you just stated is why Intel got stuck, needing ever-higher power usage in their chips to achieve further performance gains. It's the reason Apple ditched them.

Also, here is a short video that is still relevant to this day. It's a bit off-topic, but it'll give you a nice overview of how else efficiency is achieved, and why several factors go into determining performance. Filmed in 2001 by Apple.

 
You can move stuff around to decrease power draw, but without a node reduction the decrease in power draw will be minimal, next to negligible.
Thanks. Although hopefully you can see why a few people mistook your initial statements to mean that a frequency increase always equals a voltage increase, even across different nodes and architectures, since when challenged you made little effort to clarify your position.
 
Still debating what will replace my Mac mini i7 (late 2018). I have an eGPU and 32GB of RAM on it, but the new minis kill it very easily with just the M1. Can only imagine what an M2-equipped mini will bring to the line.

My mini will become a server, but I don't know if I'll go MacBook Air, 13-inch Pro or iMac (holding out for a larger-screen version). Not sure I want to go mini next time around, but if it's compelling enough then I might add it to my list.

What kills me about the 2018 mini is that if you want to run more than one monitor, you definitely need an eGPU. I was running it with 32GB for about a year, and the mini just couldn't handle one 4K monitor plus a 1080p secondary.

Anyhow, I hope the M2 mini will be everything you’re hoping for!
The M2 mini will work for me. I just have an office-type workflow, and gaming is done on the console, so my 2018 i5 is sufficient. Only the largest and most complicated PDFs slow it down, or there's a bit of a pause as the most complicated Excel files compute. Heck, I'm still running it with 8GB (though I manage that because I never got in the habit of keeping tons of browser tabs open, and I don't use Chrome). My second monitor is my work PC laptop sitting next to it, so I don't even need it to run a second monitor. Honestly, I'm fine continuing to run the 2018 mini. But since I can sell that "last of the Intel Mac minis" at such a good price, net it will only cost me a modest amount of money to upgrade.

I might make a different decision if I had an eGPU that wouldn't be compatible with it, or if I already had upgraded RAM installed (speaking of which, I should take a look at how much it would cost me to upgrade the RAM; I'd probably make that money back reselling it).
 
M2 is a decent upgrade on the low end: still 5nm, but with higher clocks, more cache and upgraded cores. Nothing super major. I'm interested in seeing how, or if, M2 scales up to an M2 Pro / Max. Also curious whether Apple's strategy is to introduce new CPU and GPU cores on the iPhone first, then scale them to the iPad and now the Mac many months later.

Apple is about to release the A16 on the iPhone Pro with new cores, possibly with a 3nm design, yet the iPad and Mac are getting the "older" cores from 2021.

Maybe. M2 comes 19 months after M1; maybe a year and a half is supposed to be the future cadence, so we'll see the M3 based not on the A16 but the A17.
 
So will the next MBP 14 / 16 have an M2 chip in it, do we think? Presumably it will have some incarnation of one. Will it be an M2 or an 'M2 Pro', etc.? Is it worth waiting for that compared to an M1 Pro? My 2015 MBP is creaking, so it's a question of when to upgrade, not if!

It'll presumably be an M2 Pro / M2 Max, and I'm guessing we won't be seeing that until about 19 months after the M1 Pro, so ca. May of next year.

Just get the M1 Pro.
 