I am not sure whether results from the Nexus 6 translate to the latest iPhones. Nor do the authors offer a definition of "thermal throttling" or any source for their claim (they mention the PhoneLab paper, but I don't find any discussion of throttling there).
One could easily test this: run a long, sustained-performance benchmark (e.g., the 3DMark Wild Life Extreme stress test) on an iPhone in both a warm and a cold environment and see whether there are significant differences in the equilibrium performance levels reached.
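For anyone inclined to actually try this, here's a minimal sketch in Swift of what the measurement loop could look like. To be clear about the assumptions: the fixedWorkload function below is a synthetic CPU-bound stand-in for a real GPU benchmark like Wild Life, and ProcessInfo.thermalState is only the OS's coarse public thermal reading, not a direct measure of clock speed:

```swift
import Foundation

// Sketch of a sustained-load test: run a fixed workload for ~20 minutes,
// logging throughput and the OS's coarse thermal state once per minute.
// Repeat the whole run in a cold room and a warm room and compare the
// equilibrium iterations-per-minute figures.

/// A fixed, deterministic CPU-bound workload (purely illustrative;
/// a stand-in for a real GPU benchmark).
func fixedWorkload() {
    var x: UInt64 = 0x9E37_79B9_7F4A_7C15
    for i in 0..<5_000_000 {
        x = (x ^ UInt64(i)) &* 0xBF58_476D_1CE4_E5B9
        x ^= x >> 31
    }
    if x == 0 { print("unreachable") }  // keep the optimizer honest
}

/// Human-readable label for ProcessInfo's thermal state.
func describe(_ state: ProcessInfo.ThermalState) -> String {
    switch state {
    case .nominal:  return "nominal"
    case .fair:     return "fair"
    case .serious:  return "serious"
    case .critical: return "critical"
    @unknown default: return "unknown"
    }
}

for minute in 1...20 {
    var iterations = 0
    let end = Date().addingTimeInterval(60)
    while Date() < end {
        fixedWorkload()
        iterations += 1
    }
    let state = ProcessInfo.processInfo.thermalState
    print("minute \(minute): \(iterations) iterations, thermal state \(describe(state))")
}
```

If the per-minute throughput settles to a noticeably lower equilibrium in the warm environment (with the thermal state climbing faster), that's exactly the ambient-temperature throttling effect at issue.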
Here are the results they show for performance on an unspecified machine-learning workload as a function of ambient temperature. Yes, it's unacceptably lacking in explanatory detail (the IEEE peer reviewers were asleep at the wheel here), and the paper should certainly have been better written--but I don't think we need a precise definition of "thermal throttling" to understand what's being shown here: with the stock Nexus phone, there's a drop in performance when ambient temperature exceeds 25C, due to thermal effects (and the overclocked variant is even more sensitive to ambient temperature, as one would expect):
And yes, a Nexus isn't an iPhone, and it would be good to have iPhone-specific data, which I don't have. But let's be reasonable here:
Isn't it more plausible than not that, if an iPhone's performance on a task is thermally limited, that performance is going to be reduced more at 32C than at 4C (to use the example ambient temps I gave in my last post)?
Well, wait a minute, that is not the same thing at all. The thermals of the trashcan Mac Pro were excellent, and its cooling system was nothing short of beautiful. The "thermal corner" referred to the generations of CPUs/GPUs that came afterwards: Apple apparently didn't anticipate that the average TDP of high-end chips would go up the way it did. If you like, the problem with the cylinder MP was its inability to scale--a strategic mistake. For its contemporary hardware it was perfectly adequate. No engineering mistakes, just managerial ones.
Edit: the cylinder MP had no problem cooling a 130W CPU and a 250+W GPU. But later higher-core-count Xeons brought that up to over 200W, and the W6800X Duo has a whopping 400W TDP...
I think we agree here. I said in my last post that these aren't engineering mistakes--the engineers are smart enough to know what's right. I'm saying these are design mistakes--not implementing their best engineering.
[I do grant you that the Mac Pro example didn't correspond in its specifics to what we discussed (though it did in a general sense, which is why I supplied it--but never mind that; arguing the point would send us down a rabbit hole).]
But I did provide another example that did correspond specifically to a poor thermal design decision (not for a future product, but for the then-current one): the large iMac. Why didn't you comment on that, since it's more directly germane? The iMac's high noise levels under load obviously compromised the user experience, and could have been addressed by an easy, low-cost engineering modification--which Apple didn't implement because of a design decision, rather than for engineering reasons. And if Apple made a suboptimal thermal design decision for the iMac, couldn't they also have made one for the iPhone?
Just to clarify: I acknowledge we don't know enough to conclude that what Apple did with the iPhone was necessarily a poor thermal design decision, in spite of what AnandTech said. [As you rightly noted, Apple knows far more than we do, so what may not make sense to us may have been done for a good reason that we can't see with our current fact set.] [Though I regard the AnandTech writers as experienced industry professionals, so they wouldn't have written that without good reason.] What I'm saying is that one can't simply conclude the opposite, i.e., that "It's Apple, so it must have been a good decision." Apple's decisions are made by humans, and humans can make mistakes and misjudgments.