I have to say that I don’t quite understand why this has generated so much discussion and controversy. Yes, the design is not optimal from the perspective of enabling high sustained thermal dissipation (and thus performance). But why would one want sustained performance in a phone? A phone is almost exclusively operated on battery, and I don’t know how many people would want their already frame-limited games to run 30% faster at the expense of running out of battery in under an hour.

According to Andrei's benchmarks, the iPhones are still significantly faster than any competition after throttling, while their sustained power usage is slightly lower than the competition's. What we have here is a deliberate design that obviously prioritizes usable battery life over sustained performance. And in the typical Apple fashion, they wield thermal throttling as a precision tool for managing power and performance, using a hardware layout that is most certainly optimized for board space while reaching the precise performance and power consumption targets they want.

Or, to make it short: if Apple used a less thermally constrained design, their phones would run faster, but also hotter, and their battery life under load would be nonexistent. In fact, the main reason this is being discussed at all is that Apple does not soft-limit their GPUs, so there is a big disparity between burst and sustained performance. And about that: duh, that's a desktop-class GPU we are talking about; of course it will scale.
I understand your argument about the need to manage power consumption in a mobile device. I am surprised at this particular sentence, though:
"And in the typical Apple fashion, they wield thermal throttling as an precision tool for managing power and performance."

I've never heard of thermal throttling used as a "precision tool" for managing power and performance. I would think a much better engineering approach would be to simply limit the clock speed of the GPU, rather than relying on it overheating, for two reasons:

1) Thermal throttling doesn't give consistent power management. E.g., you have very different thermal throttling, and thus very different power consumption, in a hot phone vs. a cold phone.

2) Letting chips heat up to the point where they need to be thermally throttled seems like poor engineering design. Isn't it far better for the reliability and longevity of the device to implement better thermals, such that chips run cooler and, if needed, manage power consumption by throttling clock speed after a certain period of time?
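
To make (2) concrete, here is a toy sketch of the kind of policy I have in mind; the clock values and burst window are invented for illustration, not anything any vendor actually uses:

```python
# Toy sketch of open-loop "clock speed x time" throttling (all numbers invented).
# After a fixed burst window the governor caps the clock, independent of temperature.

BURST_WINDOW_S = 30.0     # hypothetical burst budget under heavy load
F_BURST_MHZ = 1200        # hypothetical maximum GPU clock
F_SUSTAINED_MHZ = 800     # hypothetical long-term clock cap

def allowed_frequency(heavy_load_elapsed_s: float) -> int:
    """Return the permitted GPU clock given how long the heavy load has run."""
    if heavy_load_elapsed_s < BURST_WINDOW_S:
        return F_BURST_MHZ
    return F_SUSTAINED_MHZ
```

The point is that the resulting power draw, and hence battery life, is the same on a hot day and a cold one.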

As for my personal experience, I can't speak to iPhones, but I have had a lot of thermal problems with the poor GPU cooling on my 15" 2014 MacBook Pro. It ran fine for about 1.5 years after purchase, but then, if the room got warm (say 29 C = 84 F), my computer would throttle and immediately become unusable (kernel task ~600%) when I had a single 4K monitor connected to it. I had to have a fan blow directly onto it, or turn on my A/C. So Apple had to repair it under warranty.

[To be greener I would use a window fan rather than A/C, so my apt. would often get to that temp in the warmer months, which was certainly within my comfort zone, and far below the upper operating temp Apple specifies.]

Then it worked fine for another 1.5 years, when the same thing happened, and Apple had to repair it again. Then it happened a third time and, even though it was out of warranty by this point, because it was a pre-existing problem, they repaired it again (no complaints about Apple's CS support, which is great).

The reason this kept happening was because the GPU was so poorly cooled that it was thermally overstressed by being continuously connected to a 4K monitor. [It seems that the third time they repaired it they did something different, since I've not experienced that problem again.] [Well, now I only experience it when I have three displays connected and the room temp gets into the 80's, and since I can no longer get it repaired I only run it in that configuration when I'm using my A/C.] So sorry to say this, but I almost had to laugh when you wrote "in the typical Apple fashion, they wield thermal throttling as a precision tool". More like a medieval bludgeon.
 
I understand your argument about the need to manage power consumption in a mobile device. I am surprised at this particular sentence, though:
"And in the typical Apple fashion, they wield thermal throttling as an precision tool for managing power and performance."

I've never heard of thermal throttling used as a "precision tool" for managing power and performance. I would think a much better engineering approach would be to simply limit the clock speed of the GPU, rather than relying on it overheating, for two reasons:

1) Thermal throttling doesn't give consistent power management. E.g., you have very different thermal throttling, and thus very different power consumption, in a hot phone vs. a cold phone.

2) Letting chips heat up to the point where they need to be thermally throttled seems like poor engineering design. Isn't it far better for the reliability and longevity of the device to implement better thermals, such that chips run cooler and, if needed, manage power consumption by throttling clock speed after a certain period of time?
So, I’m not trying to answer for leman, however:

Designing things top-down offers only certain easy guarantees. Thermodynamics is well understood, so using that (bottom-up) feedback is an extremely useful tool for maximizing the energy dynamics of a pocket device, i.e., a device that is heavily constrained by, e.g., ambient temperature.

However, the same constraints don't necessarily apply to a desk-bound device, or even a laptop, let alone a high-end desktop.

TL;DR: thermals are an extremely useful control input, far more so than just for preventing thermal failure.
 
So, I’m not trying to answer for leman, however:

Designing things top-down offers only certain easy guarantees. Thermodynamics is well understood, so using that (bottom-up) feedback is an extremely useful tool for maximizing the energy dynamics of a pocket device, i.e., a device that is heavily constrained by, e.g., ambient temperature.

However, the same constraints don't necessarily apply to a desk-bound device, or even a laptop, let alone a high-end desktop.

TL;DR: thermals are an extremely useful control input, far more so than just for preventing thermal failure.
I'm afraid I'm not following. I know of various techniques used to attain thermal management in smartphones, such as dynamic voltage and frequency scaling, use of materials with good heat transfer, proper positioning of subcomponents, etc. [See, e.g., https://www.researchgate.net/publication/262210852_Power_and_thermal_challenges_in_mobile_devices ].

But these are all techniques to address the heat produced by smartphones, either by improving heat dissipation or by reducing the need for it. None of them are what Leman seems to be saying, which is that Apple has knowingly implemented a poor thermal design in order to limit power consumption. I just don't understand why the latter would be a good way to limit power consumption, as opposed to repositioning components so everything would run cooler, and then limiting power consumption as needed using clock speed x time throttling, which gives both better longevity and more precise control.
 
I've never heard of thermal throttling used as a "precision tool" for managing power and performance.

And yet it seems this is how Apple has been doing it since, well, forever. Not only on the phones but on the laptops as well. My understanding of all this is very limited and I am not a thermal design engineer, and information is very difficult to come by. But the Asahi Linux people write about a semi-autonomous power management system (likely running some sort of AI to optimize the system), and if you look at the temperature curves of Intel MacBooks, it is very obvious that they have been meticulously designed to reach their maximum sustained design power close to Tjunction (the maximum operating temperature).


I would think a much better engineering approach would be to simply limit the clock speed of the GPU, rather than relying on it overheating, for two reasons:

1) Thermal throttling doesn't give consistent power management. E.g., you have very different thermal throttling, and thus very different power consumption, in a hot phone vs. a cold phone.

I disagree, for reasons @altaic mentions. First, a power management system obviously has to rely on temperature sensors, otherwise you just have arbitrary performance limits. Cold vs. hot doesn't matter; what matters is a) safe operation and b) the sustained equilibrium state. Every phone will eventually become warm, and that's where the game is. In the end, this is about efficiency. You don't want to throttle your system too early, or you won't get the maximal possible performance. You also don't want your cooling system to be more powerful than your power dissipation target, or you are wasting resources.
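
To illustrate what I mean, here is a minimal sketch of a temperature-feedback power governor. Every constant is invented, and Apple's real controller is certainly far more sophisticated; this only shows how temperature feedback naturally steers the system into a sustained equilibrium:

```python
# Minimal proportional controller steering power toward a temperature target
# (all constants invented; a real controller would be far more elaborate).

T_TARGET_C = 95.0              # hypothetical target near Tjunction
P_MIN_W, P_MAX_W = 1.0, 8.0    # hypothetical power limit range
GAIN_W_PER_C = 0.5             # how strongly power tracks the temperature error

def next_power_limit(current_limit_w: float, die_temp_c: float) -> float:
    """Raise the limit while the die is cool; lower it as it nears the target."""
    error_c = T_TARGET_C - die_temp_c
    new_limit = current_limit_w + GAIN_W_PER_C * error_c
    return max(P_MIN_W, min(P_MAX_W, new_limit))
```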

Apple is excellent at this game, probably even the best in the world, having done it for so long. They carefully match their performance targets and their hardware designs so that the maximal performance is exactly what the system can handle, no more, no less. In fact, Apple completely disables Intel's power level limits (PL) in their laptops, instead relying on their own power management to optimize the system. Other laptop makers, who lack this precision, do not have this efficiency and often throttle the CPU too early (this has changed a bit in recent years, when all Intel CPUs run too hot anyway).

And by the way, when I talk about thermal throttling, I don't mean overheating. The system does not overheat at any point, as is evident from the figures Andrei posted. It simply settles into a sustained equilibrium, and is likely able to stay there for a long while. This is just about using the full operational range of your hardware.

2) Letting chips heat up to the point where they need to be thermally throttled seems like poor engineering design. Isn't it far better for the reliability and longevity of the device to implement better thermals, such that chips run cooler and, if needed, manage power consumption by throttling clock speed after a certain period of time?

I don't think there is any practical evidence that running components hot actually has any real, relevant effect on system longevity. Apple has been running Intel CPUs at their Tjunction since, well, since they first started selling Intel MacBooks, and somehow MacBooks were always regarded as one of the most reliable laptops in the industry. Same for the iPhone. Intel themselves say that running their CPUs at 100C is safe and not a problem for long-term operation. I've seen a research paper showing that an (admittedly much simpler) CPU can run at 120C for over a decade.

I have a strong suspicion that this "heat kills" thing is still a very prevalent myth from the golden age of overclocking, when people would relentlessly overvolt and burn out their hardware. It's not the heat that kills, it's the voltage. A highly optimized system like an Apple device, which already runs very low voltages, can afford to run hotter.
Sure, temperature is a huge factor for circuit decay, but these are not timescales at which one operates consumer hardware. I mean, do you really care whether the theoretical lifespan of your CPU is 30 years or 50 years? You are going to get rid of your phone (or it will break for a different reason) within 5 years tops anyway...
 
But these are all techniques to address the heat produced by smartphones, either by improving heat dissipation or by reducing the need for it. None of them are what Leman seems to be saying, which is that Apple has knowingly implemented a poor thermal design in order to limit power consumption. I just don't understand why the latter would be a good way to limit power consumption, as opposed to repositioning components so everything would run cooler, and then limiting power consumption as needed using clock speed x time throttling, which gives both better longevity and more precise control.

To clarify: this is not poor thermal design. This is precise thermal design. The heat dissipation ability of the chassis is exactly matched to the performance and power consumption targets. There is no reason to over-engineer your thermal system if you are not going to use it anyway. Apple obviously wants their GPU to run at ~3-4 watts sustained. Why would they build a chassis that can dissipate, say 6W? Just a waste of space and resources.
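
Back-of-the-envelope, with invented numbers: the chassis you need follows directly from the steady-state relation P = ΔT / R, so sizing it beyond the power target buys you nothing.

```python
# Steady state: P = (T_case_max - T_ambient) / R_chassis  (numbers invented).

P_TARGET_W = 4.0      # hypothetical sustained dissipation target
T_CASE_MAX_C = 43.0   # hypothetical comfortable skin-temperature limit
T_AMBIENT_C = 25.0

r_needed_k_per_w = (T_CASE_MAX_C - T_AMBIENT_C) / P_TARGET_W
print(f"Chassis needs ~{r_needed_k_per_w:.1f} K/W; anything better is unused headroom")
```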
 
None of them are what Leman seems to be saying, which is that Apple has knowingly implemented a poor thermal design in order to limit power consumption.

I think that this is where our disconnect is: it’s not intentional thermal mismanagement, it’s managing different subunits within a dynamic thermal envelope. If that’s not sensible, I have to reread the thread (my apologies if so).

I just don't understand why the latter would be a good way to limit power consumption, as opposed to repositioning components so everything would run cooler, and then limiting power consumption as needed using clock speed x time throttling, which gives both better longevity and more precise control.

I’m unsure of what components you’re suggesting repositioning, so I don’t understand your point.
 
To clarify: this is not poor thermal design. This is precise thermal design. The heat dissipation ability of the chassis is exactly matched to the performance and power consumption targets. There is no reason to over-engineer your thermal system if you are not going to use it anyway. Apple obviously wants their GPU to run at ~3-4 watts sustained. Why would they build a chassis that can dissipate, say 6W? Just a waste of space and resources.
I’d just like to add one point: there are many strategies to decrease energy consumption while maintaining performance. For example, a couple versions of macOS ago, the software scheduler was redesigned to batch computations and sleep in the off time. If you consider the many units in the SoC, such a strategy can have enormous benefits— in hardware.
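
A crude illustration of the batching idea (a toy sketch, not how macOS actually implements it): coalesce wakeups that fall within a slack window, so the hardware idles longer between batches.

```python
# Toy timer coalescing: timers may fire up to `slack_s` late, which lets nearby
# wakeups be merged into one batch and lengthens the idle gaps between them.

def coalesce(deadlines_s, slack_s=0.05):
    """Group sorted deadlines whose spread from the batch leader fits in slack_s."""
    batches = []
    for t in sorted(deadlines_s):
        if batches and t - batches[-1][0] <= slack_s:
            batches[-1].append(t)
        else:
            batches.append([t])
    return [max(b) for b in batches]  # fire each batch once, at its latest deadline

print(coalesce([0.010, 0.013, 0.058, 0.061, 0.200]))  # -> [0.058, 0.061, 0.2]
```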
 
In the context of the thread title - the fear is that
a) Apple will use a thermal design for a new Mini that is incapable of continuously dissipating the power generated by the M1X, hamstringing its performance for any continuous load.
and/or
b) pointlessly using a design for the device that requires noisy cooling for it to operate within desired thermal parameters.

I'm mostly in the b) camp myself. Obviously, we don't even know if a Mini Plus will even be produced, much less its configuration and thermal characteristics. But it's not as if Apple hasn't produced devices with less-than-ideal thermal and noise characteristics before, by shoehorning silicon into enclosures that arguably weren't well matched to the requirements of the silicon. So there is some historical cause for concern, along with the "thinner" part of the new Mini rumour.

It would be a shame though. There is no good reason to make a mains powered desktop device with the expected power draw either throttle or make any noise whatsoever.
 
I’d just like to add one point: there are many strategies to decrease energy consumption while maintaining performance. For example, a couple versions of macOS ago, the software scheduler was redesigned to batch computations and sleep in the off time. If you consider the many units in the SoC, such a strategy can have enormous benefits— in hardware.

Exactly! And I think that Andrei's article really drives this point home. This is also why I am saying that thermal design is all about sustained performance and the likely reason why Apple does not limit the burst performance to the sustained levels — it can still have a lot of uses and might even occasionally save energy.
 
Apple is excellent at this game, probably even the best in the world, having done it for so long. They carefully match their performance targets and their hardware designs so that the maximal performance is exactly what the system can handle, no more, no less. In fact, Apple completely disables Intel's power level limits (PL) in their laptops, instead relying on their own power management to optimize the system.
What's your basis for this?

As far as I know, Apple works with the tools Intel provides, same as anybody else. I've never heard of there being any way to take over control of Intel CPU DVFS. It's always run by their built-in DVFS microcontroller, and they don't provide any way to run alternate firmware.

The only thing like that is in the unlocked "K" suffix chips. You don't ever have direct control, but you can give the DVFS control loop different policy and limit configuration than the factory defaults. (this is the PL stuff you mention)

If a system designer doesn't provide extreme cooling and VRM performance, it's not a good idea to raise all these settings to their max values and just rely on Intel's failsafe damage protection features. You don't ever want Intel failsafes to trigger during normal operation; they tank performance like nothing else. (Intel didn't stop at cutting voltage and frequency to minimums, they also artificially restrict instruction issue rate in each CPU core's front end - or something to that effect.)

That much-reported scandal where i9 16" MacBook Pro models had some problems? That was one of the extremely rare examples of Intel's failsafes being activated in an Apple-designed computer. The root cause was a bad firmware image with a bug that set an outrageously high power limit - well over 100W in a system designed for 45W. Under those misconfigured limits, all-core loads could and did attempt to draw far more power than the MBP's VRMs were designed to provide.

Fortunately, Intel CPUs provide a path for VRMs to tell the CPU when they're in an overcurrent condition. The resulting throttling did its job by preventing permanent damage, at the cost of making the i9 rMBP benchmark below its i7 siblings on all-core loads. Once Apple released a firmware update reducing PL settings to the intended values, order was restored and the i9 benchmarked faster.
 
What's your basis for this?

As far as I know, Apple works with the tools Intel provides, same as anybody else. I've never heard of there being any way to take over control of Intel CPU DVFS. It's always run by their built-in DVFS microcontroller, and they don't provide any way to run alternate firmware.

Apple has traditionally set the relevant MSR registers to essentially disable any power limit (which sets all PL levels to their maximum value of 100 W). This was definitely the case a couple of years ago when I last checked. This is also something reviewers have noted, e.g.:

Apple usually removed the TDP limit for Intel processors, so the temperature was the only limiting factor...


As per usual, Apple removes the standard TDP limit and does not restrict the processor – only the temperature is a limiting factor

Apple basically removes the usual TDP limit and allows a continuous power consumption of 100 watts. The temperature is therefore the only limiting factor.

I mean, one could check it on modern hardware (via https://github.com/sicreative/VoltageShift) but I just can't be bothered with dealing with unsigned kexts just to prove a forum point :)
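
For reference, on Intel the package limits live in MSR_PKG_POWER_LIMIT (0x610), with PL1 in bits 14:0 and PL2 in bits 46:32, scaled by the RAPL power unit from MSR 0x606. Decoding looks roughly like this (a sketch; the raw value below is constructed for illustration, not read from a real machine):

```python
# Decode the PL1/PL2 fields of Intel's MSR_PKG_POWER_LIMIT (0x610).
# The raw value below is fabricated for illustration.

RAPL_POWER_UNIT_W = 1 / 8   # typical: MSR 0x606 bits 3:0 == 3  ->  1/2**3 W

def decode_pkg_power_limit(raw: int):
    pl1_w = (raw & 0x7FFF) * RAPL_POWER_UNIT_W           # bits 14:0
    pl1_enabled = bool(raw & (1 << 15))                  # bit 15
    pl2_w = ((raw >> 32) & 0x7FFF) * RAPL_POWER_UNIT_W   # bits 46:32
    pl2_enabled = bool(raw & (1 << 47))                  # bit 47
    return pl1_w, pl1_enabled, pl2_w, pl2_enabled

# A hypothetical "limits effectively removed" configuration: both PLs at 100 W.
raw = (1 << 47) | (800 << 32) | (1 << 15) | 800
print(decode_pkg_power_limit(raw))   # (100.0, True, 100.0, True)
```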

That much-reported scandal where i9 16" MacBook Pro models had some problems? That was one of the extremely rare examples of Intel's failsafes being activated in an Apple-designed computer. The root cause was a bad firmware image with a bug that set an outrageously high power limit - well over 100W in a system designed for 45W. Under those misconfigured limits, all-core loads could and did attempt to draw far more power than the MBP's VRMs were designed to provide.

Exactly. That happened because Apple's power management system didn't activate, due to a firmware bug (a wrong encryption signature, if I remember correctly). Which again goes to show that Apple doesn't use Intel's built-in PL mechanisms but relies on its own solution. There is good reason to assume that their basic approach on their own hardware follows similar logic (though of course it's likely much more advanced, as they have direct control over everything).
 
I disagree, for reasons @altaic mentions. First, a power management system obviously has to rely on temperature sensors, otherwise you just have arbitrary performance limits. Cold vs. hot doesn't matter; what matters is a) safe operation and b) the sustained equilibrium state. Every phone will eventually become warm, and that's where the game is. In the end, this is about efficiency. You don't want to throttle your system too early, or you won't get the maximal possible performance. You also don't want your cooling system to be more powerful than your power dissipation target, or you are wasting resources.
I believe you've misread my post. There seems to be a disconnect here. I'm not saying thermal management (with its attendant sensors, etc.) is unnecessary. That would of course be ridiculous, and worlds away from what I wrote, so I don't know where you're getting that.

I'm happy to hear your counterarguments, but you need to understand what I'm saying before you can argue against it, and I don't think you do at this point, so let me give it one more shot. [This reminds me of how you didn't originally understand my point about the value of upgradeable RAM until we had some back and forth about it: (https://forums.macrumors.com/threads/rumored-pro-chip.2296634/page-22?post=30038803#post-30038803)]

I'm not talking about Apple's thermal management to avoid overheating--that may be precisely done, within the constraints of what those thermal limits are (whether Apple has given their devices good thermal limits is another question entirely). I'm instead talking about your idea that it makes sense for Apple to use thermal limits to extend hours of battery life between charges (by reducing power consumption), which is something entirely different.


There is no reason to over-engineer your thermal system if you are not going to use it anyway. Apple obviously wants their GPU to run at ~3-4 watts sustained. Why would they build a chassis that can dissipate, say 6W? Just a waste of space and resources.

My point is simply this: Anandtech seems to be claiming that better thermals are achievable with no significant downside (no more space or resources) by simply repositioning the SoC. If this is the case (and it's this supposition that my whole discussion is based on), it would allow Apple to manage power consumption far more precisely by using clock speed x time control, rather than relying on parts warming up to the phone's thermal limits (not overheating, just warming up to the thermal limits) to control power consumption. The former is clearly more precise than the latter, because in the former case you get the same power consumption control regardless of whether the ambient temperature is 40 F or 90 F (4 C or 32 C). If you're instead relying on how fast you reach the phone's thermal limits to limit power consumption, then you get much more throttling when the temperature is warm, and how is that precise? [Yes, there could be a 2nd-order effect in which you actually need more throttling at warmer temps because of decreased battery performance, but you can just add that to the throttling algorithm, and do so far more precisely than just using thermal throttling.]

I.e., using arbitrary numbers:

My way (power management purely based on clock speed x time) (ignoring 2nd-order effects from differential temperature-dependent battery life):
Battery life with heavy load at 4 C: 4 hours (same performance at high and low T)
Battery life with heavy load at 32 C: 4 hours (same performance at high and low T)

With what you describe as Apple's way (relying on when the phone reaches its thermal limits):
Battery life with heavy load at 4 C: 3 hours, faster performance
Battery life with heavy load at 32 C: 5 hours, slower performance

So, at least given the above simple picture, what I'm suggesting gives far more precise control of power consumption and battery life than what you are describing Apple as doing.
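
The arithmetic behind those numbers, for clarity (battery capacity and power draws are, again, arbitrary):

```python
# Energy-budget arithmetic for the example above (all numbers invented).

BATTERY_WH = 12.0

# Clock x time cap: power draw is fixed, so runtime is ambient-independent.
print(BATTERY_WH / 3.0)   # 3.0 W cap -> 4.0 h at both 4 C and 32 C

# Thermal throttling: the equilibrium power depends on ambient temperature.
print(BATTERY_WH / 4.0)   # cold day, little throttling: 4.0 W -> 3.0 h
print(BATTERY_WH / 2.4)   # warm day, heavy throttling:  2.4 W -> 5.0 h
```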


And yet it seems this is how Apple has been doing it since, well, forever.
Saying that 'Apple's been doing this forever' is not an engineering argument. It's just hand-waving. I'm looking for an engineering argument.


I don't think there is any practical evidence that running components hot actually has any real, relevant effect on system longevity. Apple has been running Intel CPUs at their Tjunction since, well, since they first started selling Intel MacBooks, and somehow MacBooks were always regarded as one of the most reliable laptops in the industry. Same for the iPhone. Intel themselves say that running their CPUs at 100C is safe and not a problem for long-term operation. I've seen a research paper showing that an (admittedly much simpler) CPU can run at 120C for over a decade.

I have a strong suspicion that this "heat kills" thing is still a very prevalent myth from the golden age of overclocking, when people would relentlessly overvolt and burn out their hardware. It's not the heat that kills, it's the voltage. A highly optimized system like an Apple device, which already runs very low voltages, can afford to run hotter.
Sure, temperature is a huge factor for circuit decay, but these are not timescales at which one operates consumer hardware. I mean, do you really care whether the theoretical lifespan of your CPU is 30 years or 50 years? You are going to get rid of your phone (or it will break for a different reason) within 5 years tops anyway...
1) It's not just about the CPU, it's about the associated components as well.

2) I checked on Google Scholar, and it is hard to find info about this, but I wouldn't dismiss it as a myth before checking the literature. There are reports (e.g., https://people.iiis.tsinghua.edu.cn/~weixu/Krvdro9c/dsn17-wang.pdf ) of higher failure rates in servers placed in upper racks, or positioned directly above power modules, when the cooling comes from the floor, which the authors presume is due to higher temperature (but they don't investigate this in detail).

Since it's a server farm, I'm going to assume the servers are monitored and kept within their thermal limits. Thus the higher failure rate, if temp-dependent, would be from running closer to the thermal limits, not from running above them.
 
I think that this is where our disconnect is: it’s not intentional thermal mismanagement, it’s managing different subunits within a dynamic thermal envelope. If that’s not sensible, I have to reread the thread (my apologies if so).



I’m unsure of what components you’re suggesting repositioning, so I don’t understand your point.
There's definitely a disconnect. For that, please read the reply I just posted for Leman, immediately above. Hopefully that will help.

For the latter (what components I'm suggesting can be repositioned), please see the post from me that started this sub-discussion, which references a quote from Anandtech: https://forums.macrumors.com/thread...or.2307038/page-3?post=30416853#post-30416853
 
(kidding) it's not real hardware, it's just a presumption that may fall short of the overclocked Intel 11800H, and that would be incredible since that CPU draws over 110W

So this could get real... ok
As Leman wrote, the numbers are plausible.

The 4 efficiency cores in the M1 are approx. equiv to one perf core. Thus the M1 is effectively 5 perf cores, so 7779/5 = 1556/perf core.
If the M1X is 8 perf cores + 2 eff cores, where the 2 eff cores do the job of the 4 eff cores in the M1, then the M1X is effectively 9 perf cores, so: 15975/9 = 1775/perf core, which is a plausible 14% per-core improvement over the M1.

[If we edit this based on the post from @deconstruct60 , we have:

15975/8.5 = 1880/perf core, which is a still-plausible 21% per-core improvement over the M1.]
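
Spelled out as arithmetic (scores are the estimates discussed above):

```python
# Per-"effective perf core" arithmetic from the estimates above.

m1_score, m1x_score = 7779, 15975

m1_eff = 4 + 1            # 4 P cores, plus 4 E cores counted as ~1 P core
print(m1_score / m1_eff)            # ~1556 per perf core

m1x_eff = 8 + 1           # 8 P cores, plus 2 E cores doing the work of the M1's 4
print(m1x_score / m1x_eff)          # ~1775, i.e. ~14% over the M1

m1x_eff_alt = 8.5         # per @deconstruct60's correction below
print(m1x_score / m1x_eff_alt)      # ~1879, i.e. ~21% over the M1
```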


I'd actually expect a somewhat higher per-core improvement over the M1. Maybe 10-15% for better process/design, and 10-15% for higher clock, for a total of ~25% higher. But we shall see soon :).
 
I believe you've misread my post. There seems to be a disconnect here.

Oh, I don't think that we are misunderstanding each other (and of course I would never assume that you are arguing for a purely arbitrary software-based power management solution). I believe this is a case of two (hopefully) rational people looking at things from such different perspectives that they are confused about why they are in disagreement.

My point is simply this: Anandtech seems to be claiming that better thermals are achievable with no significant downside (no more space or resources) by simply repositioning the SoC. If this is the case (and it's this supposition that my whole discussion is based on), it would allow Apple to manage power consumption far more precisely by using clock speed x time control, rather than relying on parts warming up to the phone's thermal limits (not overheating, just warming up to the thermal limits) to control power consumption.

I see what you mean, and of course, it makes sense. It would certainly be a "safer" solution. I just don't think that Apple is that much about safe solutions :) They are confident that they know what they are doing, and they are usually right. If you can design a system that passively reaches equilibrium at the targets you want, why wouldn't you do it? It is definitely more elegant than using active solutions.

The former is clearly more precise than the latter, because in the former case you get the same power consumption control regardless of whether the ambient temperature is 40 F or 90 F (4 C or 32 C). If you're instead relying on how fast you reach the phone's thermal limits to limit power consumption, then you get much more throttling when the temperature is warm, and how is that precise?

Hm, are you sure that the system is limited by the dissipation into the environment? It seems to me that it is constrained by the internal heat transfer capacity, so I am not too sure that it will depend on the external temperature as much as you suggest it would.

This should be easy to test though...

Saying that 'Apple's been doing this forever' is not an engineering argument. It's just hand-waving. I'm looking for an engineering argument.

I don't think that you will get a compelling engineering argument from anyone who is not on Apple's engineering team. But I disagree that this is hand-waving. The point being, Apple engineers are not stupid. In fact, Apple probably knows more about power management than anybody else in the world, because they are pretty much the only company who has direct access to the full stack, from the basic chip design to the power controllers to the power system to the chassis. They likely have an unimaginable amount of data on this at a granularity other companies can only dream of. We outsiders have it easy criticizing Apple for what we see as engineering flaws, but I am certain that many of those "flaws" are actually intentional choices that make sense for the targets the designs are supposed to meet.

In the end, it seems obvious (at least to me) that Apple has a preference for designing their cooling systems in a way that utilizes temperature-based throttling. This tells me that it works well for what they want to do.
 
There's definitely a disconnect. For that, please read the reply I just posted for Leman, immediately above. Hopefully that will help.

For the latter (what components I'm suggesting can be repositioned), please see the post from me that started this sub-discussion, which references a quote from Anandtech: https://forums.macrumors.com/thread...or.2307038/page-3?post=30416853#post-30416853
Do you have a graphic of "sandwich PCBs"? I fail to pick up what you're putting down from your self-referenced link. All PCBs are sandwiches. SoCs use various technologies to layer things. 2.5D is pretty good, yeah? I'm not sure what our fundamental disagreement is that you're trying to describe.

PS Please don’t reference a long post. I attempt to make my points digestible, so I have the same expectation (due to time constraints).
 
Do you have a graphic of "sandwich PCBs"? I fail to pick up what you're putting down from your self-referenced link. All PCBs are sandwiches. SoCs use various technologies to layer things. 2.5D is pretty good, yeah? I'm not sure what our fundamental disagreement is that you're trying to describe.

PS Please don’t reference a long post. I attempt to make my points digestible, so I have the same expectation (due to time constraints).
I wasn't self-referencing*, I was referencing an article by Anandtech. And, for the convenience of the reader, I did digest Anandtech's long article, picking out the two key paragraphs. With all due respect, I'm not sure what more you expect in terms of my presenting what Anandtech wrote on this topic.

[*Well, it was trivially self-referenced, in that I was directing you to the post in which I linked the Anandtech article and extracted the two key paragraphs, but that would be a very odd use of self-reference since, substantively, I'm referencing Anandtech's work, not my own.]
 
I wasn't self-referencing, I was referencing an article by Anandtech. And, for the convenience of the reader, I did digest Anandtech's long article, picking out the two key paragraphs. With all due respect, I'm not sure what more you expect.
Apologies, I suppose I don’t agree with Anandtech’s conclusions on that, then. However, if they are correct, a reasonable explanation may be that keeping the phone case a bit cooler is important for the user experience. OTOH, I’d think Apple would accomplish that some other way 🤷‍♂️
 
Apologies, I suppose I don’t agree with Anandtech’s conclusions on that, then. However, if they are correct, a reasonable explanation may be that keeping the phone case a bit cooler is important for the user experience. OTOH, I’d think Apple would accomplish that some other way 🤷‍♂️
Also, regarding your question about "sandwich PCBs": It would have been helpful if Anandtech had included a graphic, but they didn't. Having said that, it sounds like what they're saying is that while many vendors (Apple, Samsung, etc.) use two PCBs sandwiched together, most attach their SoC to the outside of the sandwich assembly (which puts it into direct contact with the heat spreader). By contrast, Apple's SoC is placed between the two layers of the sandwich.

Then again, it sounds like that was your interpretation as well, since you opined that perhaps Apple did that to avoid hot spots on the case.
 
Also, regarding your question about "sandwich PCBs": It would have been helpful if Anandtech had included a graphic, but they didn't. Having said that, it sounds like what they're saying is that while many vendors (Apple, Samsung, etc.) use two PCBs sandwiched together, most attach their SoC to the outside of the sandwich assembly (which puts it into direct contact with the heat spreader). By contrast, Apple's SoC is placed between the two layers of the sandwich.

Then again, it sounds like that was your interpretation as well, since you opined that perhaps Apple did that to avoid hot spots on the case.
I’m still a little confused— are they just talking about Package on Package, or something else?
 
I believe this is a case of two (hopefully) rational people looking at things from such different perspectives that they are confused about why they are in disagreement.
Who, me, rational? I think the jury is still out on me for that! :D
Hm, are you sure that the system is limited by the dissipation into the environment? It seems to me that it is constrained by the internal heat transfer capacity, so I am not too sure that it will depend on the external temperature as much as you suggest it would.

This should be easy to test though...
Well, instead of giving me the "Hm", you might have checked the literature. Here's an article from IEEE (about smartphones generally, not Apple specifically) in which they say:

"For one workload, we found that throttling occurred within seconds when ambient temperature was 35°C and in just two minutes when ambient was 10°C."

The point being, Apple engineers are not stupid. In fact, Apple probably knows more about power management than anybody else in the world, because they are pretty much the only company who has direct access to the full stack, from the basic chip design to the power controllers to the power system to the chassis. They likely have an unimaginable amount of data on this at a granularity other companies can only dream of. We outsiders have it easy criticizing Apple for what we see as engineering flaws, but I am certain that many of those "flaws" are actually intentional choices that make sense for the targets the designs are supposed to meet.
Sure, that's certainly often the case -- they know a lot more than we do. But remember that you can have both smart engineers that know exactly what they are doing, and bad engineering design decisions (that may be made above the engineers' heads). E.g., Apple's trashcan Mac Pro, for which Apple itself admitted they painted themselves into a thermal corner. We didn't get that design because the engineers were stupid -- they could have given Apple whatever thermal headroom it wanted. It was because of a design decision. So even with smart engineers, Apple can still make stupid (or shortsighted) thermal design decisions.

I.e., if you and I were discussing the thermal headroom of the trashcan Mac Pro back when it came out, I imagine you'd be arguing that the Mac Pro certainly must have a very smart thermal design, because "Apple engineers are not stupid. In fact, Apple probably knows more about power management than anybody else in the world...etc., etc.." I hope the example of the trashcan Mac Pro demonstrates the limitations of this type of argument.

I'd also put the current large iMac in that class (poor thermal design decision). Apple knew that, with an i7, they'd have a very noisy machine under load. Yet they went ahead with that design anyway, even though their smart engineers were perfectly capable of creating a quiet iMac with a minimal increase in BOM (an extra fan, a larger heatsink, some extra plastic ducting, and a slightly deeper case), as evidenced by what they did with the iMac Pro.
 
I’m still a little confused— are they just talking about Package on Package, or something else?
Sorry, I'm at the limit of my knowledge on this. I thought PoP was a way to stack components within a single PCB, while sandwich construction combines two different PCBs. But I'm just guessing. Perhaps @leman can shed some light--he's more knowledgeable about some of the engineering details than I am. You could always ask @cmaier, since he's a former chip designer.

One thing I do find confusing about this is that, according to the pics from the iFixit teardown of the iPhone 13, the A15 chip is pictured as being on the outer surface of the sandwich, not buried within (https://www.ifixit.com/Teardown/iPhone+13+Pro+Teardown/144928)


[iFixit teardown photo of the iPhone 13 Pro logic board]
 
As Leman wrote, the numbers are plausible.

The 4 efficiency cores in the M1 are approx. equiv to one perf core. Thus the M1 is effectively 5 perf cores, so 7779/5 = 1556/perf core.
If the M1X is 8 perf cores + 2 eff cores, where the 2 eff cores do the job of the 4 eff cores in the M1, then the M1X is effectively 9 perf cores, so: 15975/9 = 1775/perf core, which is a plausible 14% per-core improvement over the M1.

The A15 E cores do not do twice as much as the A14 E cores (we don't know whether the M1X gets A15 E cores, but presuming it does...):

"... The efficiency cores this year don’t seem to have changed their cache sizes, remaining at 64KB L1D’s and 4MB shared L2’s, however we see Apple has increased the L2 TLB to 2048 entries, now covering up to 32MB, likely to facilitate better SLC access latencies. Interestingly, Apple this year now allows the efficiency cores to have faster DRAM access, with latencies now at around 130ns versus the +215ns on the A14, again something to keep in mind of in the next performance section of the article.
...
....The A15’s E-cores are extremely impressive when it comes to performance. The minimum improvement varies from +8.4% in the 531.deepsjeng_r, essentially flat up with clocks, to up to again +46% in 520.omnetpp_r,....The core has a median performance improvement of +23%, resulting in a median IPC increase of +11.6%. ...

The "2 do the work of 4" was mainly forum hand waving rather than any empirical measurement. Apple took the "handcuffs" off the E cores a bit in the A15 ( faster clocks more energy consumption, better TLB , and opened up the throttle to access to memory). That doesn't come anywhere is remotely close to twice the performance.

The 8P+2E core arrangement is more likely motivated by die-space limits and a willingness to spend more power/energy than by magical performance jumps in the E cores. If Apple is doubling the P cores but quadrupling the GPU cores, then die-area consumption is probably going to come into play. So is the need for a bigger system cache if they're not scaling the memory controllers at the same rate as the GPU-core increase (more data consumers to feed and not as many lines at the "fast food" counter).


Pretty good chance that > 15,500 is an over-optimistic estimate (barring Apple doing something major in memory bandwidth).

If there are just two E cores, what you're doing is swapping two E cores for just two new P cores, and then adding on two "full" P cores. The swapped cores won't add a "full" P core's worth each, because the old E cores made a contribution you'd have to "make up". So probably closer to a multiple of 8.5 or less.

The major effort is likely being thrown at the GPUs and other uncore improvements.


I'd actually expect a somewhat higher per-core improvement over the M1. Maybe 10-15% for better process/design, and 10-15% for higher clock, for a total of ~25% higher. But we shall see soon :).

That presumes the increase in memory bandwidth will hold up.
 
Sorry, I'm at the limit of my knowledge on this. I thought PoP was a way to stack components within a single PCB, while sandwich construction combines two different PCBs. But I'm just guessing. Perhaps @leman can shed some light--he's more knowledgeable about some of the engineering details than I am. You could always ask @cmaier, since he's a former chip designer.

One thing I do find confusing about this is that, according to the pics from the iFixit teardown of the iPhone 13, the A15 chip is pictured as being on the outer surface of the sandwich, not buried within (https://www.ifixit.com/Teardown/iPhone+13+Pro+Teardown/144928)


[iFixit teardown photo of the iPhone 13 Pro logic board]
That just looks like a many-layer stack-up with controlled-impedance signal layers (top and bottom). IIRC, the SoC has the RAM module integrated as a PoP, so maybe that's what they're talking about. I don't know if the SoC uses additional substrates as an interposer or heat-transfer layer, though; alumina, for instance, has very good thermal characteristics. It'd be pretty odd to me to call that a "PCB sandwich", though.

Edit: I had a look at the high res photo. Yeah, I’m wrong. It looks like three PCBs stacked. Is the SoC tucked in that center layer where the gold plating is?!
 
That just looks like a many-layer stack-up with controlled-impedance signal layers (top and bottom). IIRC, the SoC has the RAM module integrated as a PoP, so maybe that's what they're talking about. I don't know if the SoC uses additional substrates as an interposer or heat-transfer layer, though; alumina, for instance, has very good thermal characteristics. It'd be pretty odd to me to call that a "PCB sandwich", though.

Edit: I had a look at the high res photo. Yeah, I’m wrong. It looks like three PCBs stacked. Is the SoC tucked in that center layer where the gold plating is?!
Don't know. Also, I assume "alumina" (Al2O3) is a typo, and you meant aluminum.
 