Huh. It wasn't on any roadmaps I've seen; it was meant to be Rocket Lake after Comet Lake. Maybe Intel came to the conclusion they couldn't rehash the 14nm design any further, so they just needed to get something else out. It will be interesting to see if it can outperform the 14nm equivalent, which Ice Lake struggles to do, though what I've heard about Tiger Lake-U makes it sound like finally a solid, if not particularly groundbreaking, 10nm release.
A solid 10nm release would be wonderful, but it just seems like Intel cannot get their arms wrapped around this process. This is like an order of magnitude worse than Broadwell, which was just a complete ****show.
 
AMD’s new processors, as good as they are, almost certainly won’t beat these. AMD’s chips are likely more energy efficient, as you said, but these new Intel processors should keep Intel ahead.

The good news is we can’t go wrong with either of these new generations of processors. It’s great AMD is killing it on the desktop front and looking to kill it on the mobile front. Intel needs competition.
The benchmarks of the Ryzen Mobile 4000 8-core model are already out, and it mops the floor with Intel’s 14nm 8-core mobile CPUs. Knowing that the 10th-gen 45W line is essentially a minor revision of the previous generation, it’s certainly looking like AMD is the better option.

Regardless, laptop manufacturers will churn out way more Intel models for various reasons and it is uncertain (albeit not impossible) if Apple will make a switch to AMD.
 
I think it’s fair to scoff at re-releasing the same CPU as last year’s with only minor changes to clock speed.

Yep. Comet Lake-H (this one), and even Rocket Lake-H about a year from now, are going to be quite a bummer. It's only with Alder Lake that things will get interesting again. Assuming Intel doesn't change plans, which they might. (They've skipped most of Cascade Lake, so maybe they'll skip Rocket Lake?)

Basically, what I wrote a few days ago: https://forums.macrumors.com/thread...2.2228208/page-21?post=28327886#post-28327886

So the 16-inch MBP? Might be a good candidate for AMD.
What's probably the most interesting thing is the iGPU that's coming with the 10th-gen CPUs on the 16" laptops. This is going to be the first update to the iGPU in a couple of years. It should help greatly in video editing, as the Quick Sync performance should be a lot better.

I don't believe there's any GPU change in any Comet Lake product. You're thinking of Ice Lake, which doesn't exist in the H variant.
Anyway, I'm sad Apple doesn't ship "full" mobile CPUs in the 13" Pro anymore. By offering Intel's "-U" spec processors instead of the beefier "-H" CPUs, they position the MacBook Pro 13" more as an ultrabook than as a true workstation "pro" notebook... which is weird considering the MacBook Air was the *original* ultrabook and is much more competitive in the thin-and-light space.

Here's hoping the "14" that's being rumored brings the beef when it comes to CPU wattage.

It's basically a non-starter because Intel doesn't offer H CPUs with Iris Plus any more (let alone Iris Pro), so Apple would have to put a dGPU on there. So you not only have much more CPU power draw, but also much more additional GPU power draw. You'd be talking an entirely different product design.
The 16” is actually a better deal than the 13” by far. The 14” will hopefully change the value equation a bit. I like the idea of using the H parts cTDP'd down.

For one, that still leaves the GPU problem.

And two, why? Much nicer Comet Lake-U and Ice Lake-U chips exist. Comet Lake-H is awfully iterative by comparison.
 
The $64,000 question is: where are the “10th Gen” U-series parts that Apple would typically use for the 13” MacBook Pro? I have not seen any rumors regarding 15W or 28W U-series parts containing Iris Plus GPUs, either at 10nm or at 14nm+++. Actually, the lack of rumors around these CPUs has been deafening.

Ice Lake-U has multiple Iris Plus 15W SKUs, and I do believe those have been shipping?

The 28W seems DOA, though.
I was wondering the same - but maybe they could try to set the TDP of the Ice Lake parts up to their max 20W and use the MBP's better cooling infrastructure to eke better performance out of virtually the same chips as the MBA? The Air does leave a fair chunk of those chips' performance on the table with its semi-passive cooling setup.

Well, the Air basically has cTDP'd up Ice Lake-Y, from 9W to 10W. (Yeah, yeah, they're weird special "NG" SKUs.) But even that is only 10W.

I don't see why Apple wouldn't use Ice Lake-U at 15W and cTDP that up to 25W. Or they could get custom Intel parts once more. It's not quite the same as the 1068G7 at 28W, but that one doesn't seem to be happening.

The one problem: none of those are six-core. Comet Lake-U is, but doesn't have Iris Plus SKUs.
People don’t really understand this.

It doesn't really matter. That's great that they were ambitious, but that was promised for 2015(!). They failed. Real artists ship.
If Apple’s going to A-series next year they don’t have any reason to use AMD parts.

I can see the case for moving some lower-end Macs to ARM, but if they move the 16-inch MacBook Pro to ARM at this point, that seems dumb to me.

(It also likely means I'm outta here.)
Except that their A-series supports one and only one I/O port. Not even Thunderbolt. No discrete GPU connection possible. If they want to build a one-port MacBook follow-on (iBook or iPadOSBook), they've got something.

But if you look at an Intel PCH chip, Apple really doesn't have much of anything comparable. Well, they could just buy a PCH-like chip from ASMedia like AMD does. They could, but... there's no connection for that either on the A-series. Can they add all that stuff? Yes. Do they have copious spare time and resources? Hmm, A12X -> A12Z and a warmed-over iPad Pro in early 2020. Maybe not.

Apple designed their own chipsets in the PowerPC era. They can do so with a much larger staff in the ARM era.
No mention of LPDDR4? I thought that was coming to Comet Lake?

Only to Comet Lake-U. Not -Y and not -H.
As far as I'm aware, LPDDR4 will only be used in Intel's 10nm mobile processors. This processor is still on 14nm

No. LPDDR4 does come with Sunny Cove / Ice Lake, but it was backported to Comet Lake-U. (But only to U for some reason.)
 
The problem is there isn't any software or app that uses only a single core. We are living in a multicore world.

Basically all JS on the web is single-core. Tons of stuff is still single-threaded. Multithreading is hard.
The T2 is primarily a security chip, not a PCH. Dumping the "kitchen sink" of PCH functionality onto the security chip is just plain dubious.

Err.

The T2 is the successor to the T1. The T1, among other things, runs bridgeOS, which drives the Touch Bar (both its touch controller and its display; it's basically a custom variant of watchOS that drives a very long, narrow display instead of the square one on the Watch). And yes, it also does Touch ID.

The T2 also acts as the SSD controller, however. And yes, it also controls the boot chain.

So you've got it running an entire OS, driving a display and an input device, controlling storage, and handling security. Hardly "primarily a security chip".

There's an argument that it's already got too much of a hodgepodge of stuff hanging off of it. Even more would just open up more attack vectors, especially if those vectors are outside the main system, on entirely random external devices. There's no way to do physical information isolation at that point.

What are you talking about?

Technically, directly hard-wired security sensors (Touch ID scanners, Face ID cameras, webcams (to prevent low-level hijacking), microphones (Siri voice identification)) can be attached to the T2 as a security measure.

Except that the T2, just like the Apple A series, cannot actually read from Touch ID. Only the Secure Enclave can do that. And I don't know what you mean by Face ID when that doesn't exist on the Mac.

Touch ID (and yes, Face ID) only provides a very basic API of "add finger/face" and "verify finger/face". It doesn't have "hey, how about you give me all the fingers you know".
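For what it's worth, that narrow surface is exactly what developers get too. A minimal sketch against the real LocalAuthentication framework (the "example vault" reason string is just made up):

```swift
import LocalAuthentication

// The API answers "is this the enrolled finger/face?" with yes/no.
// There is no call that hands back enrolled fingerprints or face data;
// that never leaves the Secure Enclave.
let context = LAContext()
var error: NSError?

if context.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics, error: &error) {
    context.evaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                           localizedReason: "Unlock the example vault") { success, failure in
        // All the caller ever learns is success or failure.
        print(success ? "verified" : "failed: \(String(describing: failure))")
    }
} else {
    print("biometrics unavailable: \(String(describing: error))")
}
```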
Intel's 10nm fab is not for consumers. It's a low-yield process with lots of issues. You won't see 10nm for the next 12-18 months; only in the server space.

The MacBook Air would beg to differ.
That's a MacBook limitation, not the CPU's. It's a result of Apple's emphasis on "slick" designs.

If Intel doesn't think a design that houses a 45W CPU is appropriate for a 45W CPU, maybe Intel needs to change their TDP rating?
 
For one, that still leaves the GPU problem.

And two, why? Much nicer Comet Lake-U and Ice Lake-U chips exist. Comet Lake-H is awfully iterative by comparison.
Yeah, I was thinking it had the improved GPU, but that’s the 10nm parts. (When will they come at 45W?) It’s the same GPU as the Mac mini, i.e. marginal.

Why? Clock rates are pretty sad on both Comet Lake-U and Ice Lake-U, and the one 6-core part I see is 1.1GHz base. Maybe there is or will be something better?
 
"Suitable"

If you want boost speed for a couple of seconds before your machine either catches fire or downclocks to 50% speed to cool down.
Basically all JS on the web is single-core. Tons of stuff is still single-threaded. Multithreading is hard.

Sure. But each tab is a new process. People continually go on about how they have hundreds of Chrome tabs open - well, guess what? Each Chrome tab runs in its own process. And the process itself is multi-threaded.

And that's just your web browsing. Never mind stuff going on in the background like Time Machine backups, FaceTime, system updates, network processes, disk encryption, any of the libraries invoked by JavaScript (or by the browser, to, say, decompress an image that it loads), etc.

Just look at Activity Monitor on a typical machine and see how much of the load isn't just running on one core.

Sure, some things are single-threaded, but there are a HEAP of threads running concurrently on a modern machine, even doing basic workloads.
 
To everyone contemplating halting their purchase of a 16" MBP: I honestly wouldn't indulge your FOMO here. I have been using one for a week and it is a phenomenal upgrade over Apple's previous generation of laptops. The fan no longer runs constantly, meaning the processor is (ostensibly) not regularly throttled, and Apple seems to have improved the reliability of the TB3 ports considerably. My 2018 model had a devil of a time reliably connecting to my eGPU/hub and then constantly disconnected at the merest bump, sending my workflow into a tailspin. To say nothing of the keyboard...

The current top-end i9 already offers Turbo Boost frequencies up to 5.0GHz, so this seems like a heavy dollop of marketing hoopla on Intel's part. The iGPU on a pro laptop is, IMO, a moot point, as pro users are more likely concerned with the discrete graphics anyway. And the current 16" already has a build-to-order option of 64GB of memory, which offers ample headroom. I doubt Apple is going to double it again, because anyone who genuinely needs more than that almost certainly needs desktop processors to handle whatever insane 3D workflow they have.

So all told, a late 2020 16" MBP is going to net you, a pro user, a whopping 0.3GHz higher frequency, (maybe) an improved iGPU that you will almost never use, and Wi-Fi 6, which hardly seems like a dealbreaker relative to everything else. So what does an extra 0.3GHz of frequency do for your Photoshop render? Shave off 0.0001 of a second?

You are perfectly safe buying a 16" MacBook right now and will probably be very, very happy with it. I 100% am.
 
Earlier this week I put a MBP 16" into my Apple cart but did not purchase.

The estimated delivery date was April 9th.

Now, I just checked, and the estimated delivery date is April 24th.

Could this mean that a spec bump is about to happen?
 

Linus reviewed the 4900HS, which is better than the 9980HK in terms of both performance and power consumption.
 
It would be nice if they introduced the powerful ones first instead of the silly first wave of low-power chips.

Blame Intel.

They're two entirely different product lines.

Also be aware that the high-power "tenth generation" Core CPUs (above 35-45W - those with 5-digit model codes after i7/i9, etc.) are STILL just a new revision of Skylake. The lower-power ones with 4-digit codes (e.g., i7-1060NG7) are an entirely new architecture.

i.e., don't expect a massive difference from the past couple of years in that segment.

Intel is dead. AMD's APU consumes only 50W for 8 cores; Intel consumes 67.5W per core, based on Reddit.

I wouldn't say "Intel are dead".

But yeah. They're behind.

A lot. Like... more than at any time in the company's history, genuinely. Perfect storm of Intel having massive problems with their new fabrication process for the past 5 years, and AMD doing far better than even they expected with Ryzen's performance. AMD expected to be competing with stuff from Intel that now isn't due out until 2021-2022.
 
I wouldn't say "Intel are dead".

But yeah. They're behind.

A lot. Like... more than at any time in the company's history, genuinely. Perfect storm of Intel having massive problems with their new fabrication process for the past 5 years, and AMD doing far better than even they expected with Ryzen's performance. AMD expected to be competing with stuff from Intel that now isn't due out until 2021-2022.

The question is: when will Apple ditch Intel? I just don't wanna buy a Mac until they replace the Intel CPUs.
 
The question is: when will Apple ditch Intel? I just don't wanna buy a Mac until they replace the Intel CPUs.

I suspect they will stick with Intel until they switch wholesale to ARM. It will re-unify all their products under the one architecture. Expect the MacBook/MacBook Air first, then the MacBook Pros, then the iMacs and Mac Pro. However, once the MacBook Pros are switched, the rest of the line will switch pretty much at the same time, I would bet. Remember/look up how they changed from PPC to Intel. It started with the MacBooks, but the rest followed pretty quickly after that - and that was a much smaller Apple with fewer resources (money, engineers) to throw around.

The Mac Pro just got updated, and I suspect that won't be revised until, say, 2025 at least, at which point Apple should have been able to scale up their AXX series to the laptop/desktop/workstation performance range, especially given that in those form factors they'll have up to 45W for mobile and 100W for desktop to play with (and up to 300W or more for the Mac Pro!). The iPad Pros are already pretty competitive with laptops at 5-10 watts or so. I don't think many realise just how impressive the iPad/iPhone CPUs are. Considering the minimal power draw and passive cooling, the performance they're getting is insane vs. Intel. They're already punching far above their weight.

Unleash those designs with more power (for higher clocks and more cores) and more cooling and they could be very impressive indeed.

I'd love to see an AMD Ryzen-based machine, as I'm sure you would, but I don't think Apple are likely to jump from being dependent on one CPU manufacturer (Intel, which has continually delayed their products since 2015) to a smaller one (AMD, who are currently doing well, but had their own issues between 2010 and 2016) when they have their own processor design team and the flexibility that provides.

Apple are big enough to buy AMD, though, which would be interesting. But I don't think that will happen either.


Edit:
I didn't really want a new Intel Apple machine either, but I need to update my 2015 MBP for more RAM/storage/CPU, and there's no AMD alternative yet that runs macOS, so...
 
Dude, I know how it works. Do you really think I don't know that?

Bro, I have no idea what you know. When you ask “Can you sustain the clock speed at 5GHz with the 16-inch MBP? The sustainable clock speed is actually 3.3GHz or near, not even close to 5GHz”, you quite clearly don’t understand how it works. Apparently your understanding is all jumbled up.

You must think the 5GHz turbo clock speed is an all-core sustainable rating, since you say it can’t reach 5GHz, only 3.3GHz—as if that’s a bad thing. Since when is a 2.4GHz chip running at 3.3GHz a problem lol?

Then you say “What's the point of having a maximum clock speed which can not be used mostly?” Well, depending on the workload, it might or might not matter. Some workloads are mostly single-core, and/or very intermittent and/or bursty. They might do a lot of their work in the range of 4.0-5.3GHz and take good advantage of dynamic frequency/voltage scaling in a race-to-idle type of scenario. Then again, if you’re running a workload that continuously pegs all cores, a max turbo speed spec is irrelevant 🤷‍♂️
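To illustrate the race-to-idle idea with some invented numbers: finishing a burst of work in 1s at 5GHz and then idling can cost about the same energy as grinding through it in roughly 1.7s at 3GHz (say 25W x 1s = 25J vs. 15W x 1.7s ≈ 25J), but the machine feels far snappier in the first case. That's the scenario where a high turbo number genuinely pays off. (The wattages here are purely illustrative, not measured figures.)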

So the answer to your question is, maybe it matters a lot, but maybe it doesn’t matter at all. Is that helpful? Maybe.

Mostly, people don't know that. They believe it is better. Would you buy 2.4~4GHz or 2.4~5.3GHz even if they have the same sustainable clock speed?
See above.

If your point is that Intel should be responsible for educating people about how to choose the best CPU for their requirements, well, ok. But how far can that realistically go?

The best Intel might be able to do is to provide some specs, ratings and benchmarks based on some hypothetical workloads. They could certainly be more clear and transparent about what kind of performance to expect from a given CPU. But it may or may not apply to a particular user. Would that help them? Maybe. Or possibly just confuse the hell out of them.

In reality, there are lots of factors that affect performance. In large part it depends on system design. You can’t just look at the CPU specs and power requirements in isolation.

What is the cooling capacity of the system, on average? At peak? What’s the ambient operating temp? And a bunch of other relevant parameters related to cooling and thermals. What mix of instructions does the workload of the synthetic benchmark use? What percentage is AVX-512?

Oh, your workload isn’t similar? Intel’s ratings with those benchmarks aren’t valid or applicable in that case. The provided info may be quite misleading, resulting in an unsophisticated user making the exact wrong choice, depending on their understanding of certain concepts.
 
The question is: when will Apple ditch Intel? I just don't wanna buy a Mac until they replace the Intel CPUs.
It depends on which Mac. The rumor is next year for maybe a few models. But who knows?

As always, if you don’t need a new machine now, you should wait. Something better, faster or cheaper will be available someday.

But if your current machine needs replacing, buy whatever best meets your requirements. If that’s a system with a couple TB of RAM and a $7,000 EPYC 7742 running Linux, awesome. If it’s a base MacBook Air for $999, that's cool too 🙂
 
lol, I know how it works. Then again, did Intel even mention anything about the maximum clock speed with how many cores? No. What about the power consumption based on the clock speed and core usage? No info.

The "power consumption base" is 45W.

The Turbo Boosts per core are typically not documented. Some say this is because Intel likes to keep it as secret sauce, but I'm not sure such fixed values even exist — Thermal Velocity Boost in particular is quite flexible.

What's the point of having a maximum clock speed which can not be used mostly?

The entire point of Turbo Boost is to temporarily, in short bursts, raise the clock above its regular speed. And to turn off a few cores and raise it even further, which is quite a common scenario, because so much code is effectively single-threaded.
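You can even watch that second effect from userland. A rough sketch (mine, nothing official): run the same integer loop on one thread, then on all cores, and compare per-thread throughput. One active core can clock higher than all of them together, so the single-thread run usually wins per-thread. (SMT and scheduler noise also drag the loaded number down, so this shows the direction of the effect, not a clean clock measurement.)

```swift
import Foundation

// Trivial integer workload; the result is returned so the optimizer
// can't simply delete the loop.
@inline(never)
func work(_ iterations: Int) -> UInt64 {
    var x: UInt64 = 0x9e3779b97f4a7c15
    for _ in 0..<iterations {
        x = x &* 6364136223846793005 &+ 1442695040888963407
    }
    return x
}

// Per-thread throughput (iterations/sec) with `threads` cores kept busy.
func perThreadRate(threads: Int, iterations: Int) -> Double {
    let start = DispatchTime.now().uptimeNanoseconds
    DispatchQueue.concurrentPerform(iterations: threads) { _ in
        _ = work(iterations)
    }
    let seconds = Double(DispatchTime.now().uptimeNanoseconds - start) / 1e9
    return Double(iterations) / seconds
}

let n = 500_000_000
let single = perThreadRate(threads: 1, iterations: n)
let loaded = perThreadRate(threads: ProcessInfo.processInfo.activeProcessorCount,
                           iterations: n)
print("single-thread rate:", single)
print("per-thread rate with all cores busy:", loaded) // usually noticeably lower
```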
Yeah, I was thinking it had the improved GPU, but that’s the 10nm parts. (When will they come at 45W?) It’s the same GPU as the Mac mini, i.e. marginal.

When will 10nm parts come at 45W? Apparently in Alder Lake-H, around two years from now. It's possible they skip Rocket Lake-H altogether (they did skip many Cascade Lake SKUs). So maybe not quite as far away.

When will the improved GPU come to 45W? I believe that's planned for Rocket Lake-H.

Why? Clock rates are pretty sad on both Comet Lake-U and Ice Lake-U, and the one 6-core part I see is 1.1GHz base. Maybe there is or will be something better?

But Ice Lake brings a much nicer memory controller, a way better GPU, and other modernizations. It's really not as bad as it looks.

Comet Lake, in turn, would allow six-core options.

Sure. But each tab is a new process. People continually go on about how they have hundreds of Chrome tabs open - well, guess what? Each Chrome tab runs in its own process. And the process itself is multi-threaded.

Sure, but most of those processes are dormant in the background. And the point is they don't really speed up your current tab. They don't impede it either, sure.

And that's just your web browsing. Never mind stuff going on in the background like Time Machine backups, FaceTime, system updates, network processes, disk encryption, any of the libraries invoked by JavaScript (or by the browser, to, say, decompress an image that it loads), etc.

Yes, but that has diminishing returns. It's wonderful at two cores, and quite helpful at four, but how much does it help at six? How about eight?
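To put rough numbers on that intuition: Amdahl's law gives the speedup on n cores for a workload whose parallelizable fraction is p as

S(n) = 1 / ((1 - p) + p/n)

Assuming, purely for illustration, that half of a typical desktop workload parallelizes (p = 0.5): S(2) ≈ 1.33, S(4) = 1.60, S(6) ≈ 1.71, S(8) ≈ 1.78, and never better than 2x however many cores you add. Going from four cores to eight buys you about 11%.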

Just look at Activity Monitor on a typical machine and see how much of the load isn't just running on one core.

Sure, some things are single-threaded, but there are a HEAP of threads running concurrently on a modern machine, even doing basic workloads.

I'm quite aware. ;)
 
The entire point of Turbo Boost is to temporarily, in short bursts, raise the clock above its regular speed. And to turn off a few cores and raise it even further, which is quite a common scenario, because so much code is effectively single-threaded.

Yeah, people don't seem to understand boost.

It's for short-term spikes to make the machine feel snappier, because most of the time the typical laptop is idle, doing nothing, waiting for input. It can use that time to basically go to sleep and cool the CPU down until you click that button, load something from disk, scroll the page, etc. - then briefly spike to max speed to get that done and then go back to sleep.

It doesn't help that most OS boost algorithms are stupid on laptops (and Apple is pretty terrible here), boosting to max speed and then ramping the fan like crazy when in most cases the user might, say, want to render that video in the background at 3/4 speed without 80 decibels of high-pitched fan noise.

Whether I'm waiting for it for 30 minutes or 40 minutes... it's a background-type task - don't scream fan noise at me BY DEFAULT. I think most people would rather take 3/4 performance and 1/4 noise if they're sitting in front of the machine working - but that doesn't win performance benchmarks. IMHO, most portable machines should start dropping boost once the fan hits about 50% RPM and the temperature is still climbing, because that seems to be when laptop fans start getting real loud.
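Something like this, as a purely illustrative sketch - not any real OS interface; the tick inputs are hypothetical hooks a power-management layer would have to provide:

```swift
// Toy model of a "back off boost once the fan gets loud" policy.
struct BoostGovernor {
    let maxFanRPM: Double
    var boostCap = 1.0  // 1.0 = full turbo allowed, 0.75 = ~3/4 speed

    // Called periodically with the current fan speed and temperature trend.
    mutating func tick(fanRPM: Double, temperatureRising: Bool) {
        if fanRPM > 0.5 * maxFanRPM && temperatureRising {
            // Fan past ~50% and still heating up: trade boost clocks
            // for noise, stepping down toward ~3/4 speed.
            boostCap = max(0.75, boostCap - 0.05)
        } else {
            // Cool or quiet again: let turbo back in gradually.
            boostCap = min(1.0, boostCap + 0.05)
        }
    }
}
```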

The problem occurs when people try to do things like video rendering or gaming on thin/light/portable machines and expect them to perform like a desktop because the box/spec says they boost to the same clock speed.

Sure, they are designed to boost the same but...

A typical desktop tower has cooling capable of 100-200 watts of heat dissipation for the CPU alone, plus another 70-300 watts for the GPU, and they have space for big slower rpm fans.

A laptop is really, really not suited for sustained high-throughput workloads - if you're doing that, you REALLY want to be using a desktop if at all possible, because getting a laptop to perform anywhere near a desktop on that kind of load without being hot and loud is really freaking hard, thanks to the physics of shedding heat.

You can have a choice of:
  • quiet
  • sustained high speed
  • small
but you can only really have two....
Yes, but that has diminishing returns. It's wonderful at two cores, and quite helpful at four, but how much does it help at six? How about eight?

It's definitely noticeable between 4 and 8, based on upgrading my primary desktop from 4 cores/8 threads to 8 cores/16 threads last year (and I still own the 4-core box). 2 cores are crippling in 2020. Yes, I'm writing this from a dual-core MBP right now (I've got a lot of machines).
 
It's definitely noticeable between 4 and 8, based on upgrading my primary desktop from 4 cores/8 threads to 8 cores/16 threads last year (and I still own the 4-core box). 2 cores are crippling in 2020. Yes, I'm writing this from a dual-core MBP right now (I've got a lot of machines).

I’d say it depends a lot on the workload. I would also wonder how much your 8-core is faster due to boost rather than more threads.

Swift doesn’t even have a good concurrency story yet (basically still 10.6 Snow Leopard’s Grand Central Dispatch). But even if you go async/await, most of it is really just about freeing the current thread during I/O-heavy rather than CPU-heavy loads. Rather little of it is about parallelism. (Some like to go heavy on spawning detached tasks everywhere, and then realize they’ve actually made their code slower due to the context switches.)
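To make that distinction concrete, a hedged sketch in modern Swift (the report URL is made up): async/await hides I/O latency by freeing the thread at each await, while it's still GCD's concurrentPerform that actually spreads CPU work across six or eight cores.

```swift
import Foundation

// 1) Concurrency for I/O: the thread is freed while requests are in
// flight. Great for latency hiding, but it adds no CPU parallelism.
func loadReports(ids: [Int]) async throws -> [Data] {
    try await withThrowingTaskGroup(of: Data.self) { group in
        for id in ids {
            group.addTask {
                let url = URL(string: "https://example.com/report/\(id)")!
                let (data, _) = try await URLSession.shared.data(from: url)
                return data
            }
        }
        return try await group.reduce(into: [Data]()) { $0.append($1) }
    }
}

// 2) Parallelism for CPU: Grand Central Dispatch fans the loop out across
// all cores; this is the part that actually scales with core count.
func checksums(of chunks: [[UInt8]]) -> [UInt32] {
    var results = [UInt32](repeating: 0, count: chunks.count)
    results.withUnsafeMutableBufferPointer { buffer in
        DispatchQueue.concurrentPerform(iterations: chunks.count) { i in
            // Each index is written by exactly one thread, so this is safe.
            buffer[i] = chunks[i].reduce(0) { ($0 &* 31) &+ UInt32($1) }
        }
    }
    return results
}
```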
 
What does the die size matter? They still have the beefiest CPUs on the market and these clock speeds are nuts, especially for gaming. If you’re into VR, this is what you’ve been waiting for.

Their 10nm chips are incredibly dense, much more so than any so-called 7nm process.

People don’t really understand this.

Again, don’t get caught up in the marketing; it’s all about performance. Intel’s 10nm is still denser than processes claimed to be 7nm.

Intel’s 9th- and 10th-gen CPUs run hotter, slower and cost more than AMD’s 4000 series mobile CPUs. AMD’s Vega-based iGPU performs better than even Intel’s Iris Plus. Worse, Intel forces OEMs to choose between higher but unsustainable clock speeds (Comet Lake) and better graphics performance (Ice Lake). AMD’s Ryzen 4000 mobile series is better than both, with less heat, and approaches their own 3700X desktop-class CPU in performance.

Apple could use AMD and still integrate Thunderbolt, but they are fixated on their more profitable A-series. I am not holding my breath. I need x86 software compatibility, and Apple’s lackluster iPad Pro update after 2 years (which is no better than Intel’s one-year update from 9th Gen CPUs to 10th Gen Ice Lake) is an indication of what will come when Apple controls every aspect of their own hardware. It will be no panacea (except perhaps for shareholders).

The only A-series chips in which I am interested are made by “A”-MD. As you say, “it’s all about performance.”


I have been waiting on more news for the rumored 14” MacBook Pro. Any ideas if this would be suitable or if they’d use a different CPU?

I am waiting to compare Apple’s upcoming 14” MB(P) with Lenovo’s upcoming T14. Unfortunately even if Apple releases a 14” model, it will likely be Intel-based and have a much shorter support cycle.
No mention of LPDDR4? I thought that was coming to Comet Lake?

AMD’s Ryzen 4000 mobile series supports LPDDR4.
 
Alder Lake will be more than 4 cores and is queued up to arrive in 2021. Intel is adding public notification of the new instructions, which means it likely isn't 2022; prep for that is 2020 work.

https://www.anandtech.com/show/1568...-for-alder-lake-also-bf16-for-sapphire-rapids

You may not like the other cores (the small ones), but it's more than four.


Battery life at 105-115W? The MBP 16" chassis doesn't even draw 105-115W from the wall when plugged in. It would need a fixed, dedicated power port. (USB PD only goes to 100W.)

You can still drain the battery while on the charger if the draw is high enough; it usually means you're exceeding the charger capacity. Quite a few thin laptops I've seen with higher-end dedicated GPUs do this under combined heavy CPU+GPU loads. I'm almost certain even the current MBP 16 does too. I've seen it happen in Boot Camp, where Apple's tweaks/limits on the CPU are probably not enabled.
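The arithmetic backs that up: USB PD tops out at 100W, so a sustained 115W combined draw leaves a 15W deficit that has to come out of the battery. Against the 16-inch's roughly 100Wh pack, that's about 100Wh / 15W ≈ 6.7 hours to flat while plugged in - gradual enough that plenty of people never notice. (115W is the worst case quoted above, not a measured figure.)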
 