Intel Unveils 10th-Generation 'Comet Lake' Chips With Speeds Over 5GHz

I simply can’t make heads or tails out of Intel marketing speak anymore. Too many lakes, for one thing, and they all seem to overlap in the market. Every article like this should include some sort of roadmap for context.

And their marketing department needs to talk with their engineering department so they stop looking like idiots. In a world where everyone is fixated on process node, and where Intel seems to be lagging, naming a process 14++nm is a mistake. To me that’s 15nm... And what the hell is “thermal velocity” and how do you “boost” it? Isn’t “thermal velocity boost” really just “thermal acceleration”? When I say it that way, it sounds bad... I don’t want my thermals accelerated.

I half expect their next gen tech to be called “thermal runaway” because it sounds techy and cool...
 
I just hope and pray Apple stays with Intel chips. ARM hardware is great for phones and tablets, power efficiency, and light tasks, but it simply does not perform the same or sustain peak compute for as long. As a gamer and engineer who does a lot of simulation and uses taxing CPU/GPU design software, ARM devices are not for me.

I hope Apple switches to much faster ARM processors ASAP.
Intel chips are slow and power hungry.

ARM CPUs like Amazon's N1-based server chips are faster than Xeon/EPYC in single-core performance while maintaining their all-core boost clock across 32 cores.

So in total that is already way faster than the 18-core Xeon W in every direction.
 
Even $50 Athlon CPUs support ECC memory.


PCIe 4.0 NVMe SSDs have been on sale for about six months now. 1TB will set you back about $150.

Huh, I stand corrected, although Apple does it a bit differently; I assume such a move would be rather easy for the engineers. Well, if you need it then that's totally understandable, and I assume a good proportion of MacBook Pro owners do need the faster speeds.

Thank you for pointing that out, good sir :)
 
Both AMD and Intel have ultra-mobile parts.

And they aren't great. Intel is stuck at dual core, with Tiger Lake getting bumped to four cores. AMD has the most power-hungry mobile CPUs on the market; they're behind Intel by a few years (AMD is rumored to announce its Zen 2 mobile CPUs today, which should improve things a lot).

You're going to tell me that if Apple scaled up their A-series CPUs to 7-10W, they wouldn't be able to get close to AMD or above them? I'm willing to bet they can. The A14 is going to be on TSMC's 5nm process later this year, which doesn't mean it's better than Intel's 14nm++++ process node, but they'll be much closer than they ever were in the past.
 
I hope Apple switches to much faster ARM processors ASAP.
Intel chips are slow and power hungry.

ARM CPUs like Amazon's N1-based server chips are faster than Xeon/EPYC in single-core performance while maintaining their all-core boost clock across 32 cores.

So in total that is already way faster than the 18-core Xeon W in every direction.
Is it faster than a 64-core Threadripper?
 
They can't do marathons either in the form factors of the iPad Pro or iPhone.

Apple's A-series chips are specifically designed for those form factors; Intel/AMD chips are not.

Build A-series chips for laptops and you're going to get better sustained performance.

After all, look at what happened with the 2018 rMBP with i9 CPUs, where it was severely throttling itself most of the time.

That's Apple not understanding what Intel sold them! If Apple had stayed with the larger case design until 2019, the issue would have been less likely, as the needed chips would have been available given the cooling limits of the case design.

Apple failed their customers selling the 2016 > 2019 15" models with such crappy CPUs. But if you look at the iMacs, the issue is not present, as they have much better cooling.
 
AMD just pushed their 4000 series, which focuses heavily on laptop/mobile... and they blow Intel out of the water.
To be as fair as I was to Intel, until we’re talking about a shipping product, it’s just what they SAY they’re going to do. :) From a cursory look at the performance, it looks to be better than its own prior iteration, but not enough ahead of Intel to really matter. But we’ll see who actually ships first soon enough, and then we’ll be able to look at the real performance.
 
This sounds patently incorrect, as ARM is just an architecture, and the insane performance of the iPad Pro and iPhone 11 surpasses most of Apple's Intel laptop line.

You are comparing benchmarks executed on very optimized hardware/software machines (iPhone and iPad Pro, both of which I own) to full-on computers with a complete OS. As much as I love my iPad Pro, it simply cannot do what my computer does, no matter what the benchmarks say.

Look at the Surface Pro X (ARM) and Surface Pro 7 (Intel Core i3 to i7). Even the mid-range Pro 7 beats the fully specced Pro X in performance on Photoshop and engineering design software.
 
Even $50 Athlon CPUs support ECC memory.


PCIe 4.0 NVMe SSDs have been on sale for about six months now. 1TB will set you back about $150.

Did a quick check but couldn't find any MLC PCIe 4.0 SSDs; doesn't Apple use MLC exclusively? Most that I can find so far (even Samsung's) are TLC.
 
Even Microsoft is working on this with the Surface Pro X and Windows on ARM.

It is nice that you mention the Surface Pro as an example. Look at the many YouTube reviews (Linus Tech Tips being one of them) where they show how the mid-range Surface Pro 7 (Intel) beats the fully specced Surface Pro X (ARM), even though the benchmarks favor the latter and it is considerably more expensive.
 
That's Apple not understanding what Intel sold them! If Apple had stayed with the larger case design until 2019, the issue would have been less likely, as the needed chips would have been available given the cooling limits of the case design.

Apple failed their customers selling the 2016 > 2019 15" models with such crappy CPUs. But if you look at the iMacs, the issue is not present, as they have much better cooling.

You're validating my point: Intel CPUs work when they're in the "proper form factors"; you can't compare the two and say that ARM sucks because it can't sustain well. If Apple designed their A-series CPUs to match laptop form factors, they'd get much better, higher sustained performance. Right now they're designed for short bursts, for battery life, in a form factor with no active cooling, so you can't compare the two. Intel's dual-core Y series could be compared against Apple's A12X, but I don't recall seeing anything like that; the Y series kinda sucks.

The iMac has the same issues; they're just less noticeable because of the larger form factor. They did improve the cooling in the iMac Pro.
 
Is it faster than a 64-core Threadripper?

They were comparing the 32-core N1 to the 32-core Zen 1 EPYC.
So obviously it's not faster than the current 64-core Zen 2 EPYC (half the core count, lower per-core performance).

But that 32-core CPU is only 105W.

It just serves as proof that ARM server chips can outperform Xeon/EPYC in both single-core and multi-core performance.
 
I simply can’t make heads or tails out of Intel marketing speak anymore. Too many lakes, for one thing, and they all seem to overlap in the market. Every article like this should include some sort of roadmap for context.

And their marketing department needs to talk with their engineering department so they stop looking like idiots. In a world where everyone is fixated on process node, and where Intel seems to be lagging, naming a process 14++nm is a mistake. To me that’s 15nm... And what the hell is “thermal velocity” and how do you “boost” it? Isn’t “thermal velocity boost” really just “thermal acceleration”? When I say it that way, it sounds bad... I don’t want my thermals accelerated.

I half expect their next gen tech to be called “thermal runaway” because it sounds techy and cool...

Intel is lost in its own vortex of marketing spin!

They stepped in a deep puddle trying to get to 10nm, and they're only now getting to the point of climbing out of it. But they didn't have the products needed to hold them over until the process was working at production levels.
So they created tweaked CPUs to hold their market until things worked out, but as these chips were based on the older fab process, they ran hotter than Intel signaled to its product partners (i.e., Apple). So Apple got burnt by not having a laptop design able to run these hotter chips.
 
Just because they release one ARM model, it doesn't mean Intel is all of a sudden out of support. Microsoft has both ARM and Intel Windows, and they're releasing all sorts of hardware. For us engineers and creative professionals, Intel will stay here. Especially now that the Mac Pros have just been released, they're not going to stop supporting those anytime soon.

An ARM MacBook Air may or may not be released as an ultra-portable, but it will be a long time before software vendors can catch up. It's the same with the Surface Pro X: you can't use Photoshop or Premiere. It's obviously for lawyers, journalists, and businessmen. Many people wouldn't mind the iPad Pro's performance and battery life in a real MacBook format; I know it's useless with no software on it.

Microsoft's own Windows on ARM is pretty limiting, with x86 32-bit emulation only. They're betting that developers will eventually catch up, but Windows Mobile was the same bet and it failed before. There's no way Apple can pull off an instant transition and stop supporting Intel, including the brand-new Mac Pros.

Finally someone who really understands the point.
 
You are comparing benchmarks executed on very optimized hardware/software machines (iPhone and iPad Pro, both of which I own) to full-on computers with a complete OS. As much as I love my iPad Pro, it simply cannot do what my computer does, no matter what the benchmarks say.

Look at the Surface Pro X (ARM) and Surface Pro 7 (Intel Core i3 to i7). Even the mid-range Pro 7 beats the fully specced Pro X in performance on Photoshop and engineering design software.

This is something that most Apple fanbois don't get. There are many functions that the A-series processors have been optimized for, hardware accelerated, etc. When you introduce CPU-heavy workloads that don't have that optimization, and are essentially being done "in software" (aka CPU grunt work), you'll find out pretty quickly why our main desktop/laptop machines are still using Intel CPUs.

Will that eventually change? Probably. Apple has shown remarkable progress on the A-series chips. But I suspect it's going to be a while yet.
 
Even the mid-range Pro 7 beats the fully specced Pro X in performance on Photoshop and engineering design software.
I don’t think you can compare ARM in that instance, as those ARM processors are nowhere close to the performance of what Apple is shipping in their phones. Apple has also added undocumented instructions to their processor instruction set (part of their license agreement with ARM). So I think looking at the performance of a Qualcomm part just shows what the absolute worst performance could be.
 
And their marketing department needs to talk with their engineering department so they stop looking like idiots. In a world where everyone is fixated on process node, and where Intel seems to be lagging, naming a process 14++nm is a mistake. To me that’s 15nm...
It's actually called "14nm++", and it's not really a marketing term. It's simply the third generation of their 14nm process (after 14nm and "14nm+").
And what the hell is “thermal velocity” and how do you “boost” it? Isn’t “thermal velocity boost” really just “thermal acceleration”? When I say it that way, it sounds bad... I don’t want my thermals accelerated.
It's just their trademark for an additional (temperature-dependent) boost on top of "Turbo Boost". Here you can blame the marketing department. But then, you should also talk to AMD regarding names like "thread ripper". :p
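For anyone wondering what that boost actually does in practice, here's a minimal sketch of the idea in Python. The threshold and bin values are illustrative placeholders, not Intel's published numbers:

```python
def effective_clock_mhz(base_mhz: int, turbo_headroom_mhz: int,
                        die_temp_c: float,
                        tvb_bin_mhz: int = 100,
                        tvb_temp_limit_c: float = 70.0) -> int:
    """Toy model of Turbo Boost plus Thermal Velocity Boost.

    Turbo Boost raises the clock above base when power/thermal headroom
    allows; TVB opportunistically adds one more frequency bin, but only
    while the die stays below a temperature threshold. All numbers here
    are made up for illustration.
    """
    clock = base_mhz + turbo_headroom_mhz   # ordinary Turbo Boost
    if die_temp_c < tvb_temp_limit_c:       # cool enough for the extra bin
        clock += tvb_bin_mhz                # Thermal Velocity Boost kicks in
    return clock

# A hypothetical 3.0 GHz part with 1.9 GHz of turbo headroom:
print(effective_clock_mhz(3000, 1900, die_temp_c=55.0))  # 5000 MHz with TVB
print(effective_clock_mhz(3000, 1900, die_temp_c=85.0))  # 4900 MHz, too warm
```

So the "over 5GHz" headline number only holds while the die is cool; once it warms up, the extra bin goes away.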
 
Just because they release one ARM model, it doesn't mean Intel is all of a sudden out of support. Microsoft has both ARM and Intel Windows, and they're releasing all sorts of hardware. For us engineers and creative professionals, Intel will stay here. Especially now that the Mac Pros have just been released, they're not going to stop supporting those anytime soon.

An ARM MacBook Air may or may not be released as an ultra-portable, but it will be a long time before software vendors can catch up. It's the same with the Surface Pro X: you can't use Photoshop or Premiere. It's obviously for lawyers, journalists, and businessmen. Many people wouldn't mind the iPad Pro's performance and battery life in a real MacBook format; I know it's useless with no software on it.

Microsoft's own Windows on ARM is pretty limiting, with x86 32-bit emulation only. They're betting that developers will eventually catch up, but Windows Mobile was the same bet and it failed before. There's no way Apple can pull off an instant transition and stop supporting Intel, including the brand-new Mac Pros.
Microsoft has .NET. Apple does not even use Java.
 
It is nice that you mention the Surface Pro as an example. Look at the many YouTube reviews (Linus Tech Tips being one of them) where they show how the mid-range Surface Pro 7 (Intel) beats the fully specced Surface Pro X (ARM), even though the benchmarks favor the latter and it is considerably more expensive.

Yeah, I did, and I'm not surprised. They also did a comparison showing how well the iPad Pro does against the Surface Pro X; even the limited Photoshop app on iPad was better than Photoshop on the Surface Pro X.

We are talking about a custom first-gen CPU from MS and a Windows 10 that is pretty much constantly a work in progress, with very little of the ecosystem supporting Windows on ARM (ARM versions of Photoshop are coming). Wait a few years and see if Microsoft is serious about the promised software optimizations and about pushing their custom CPUs further.

As everyone just pointed out, iPadOS and the A12X are optimized for each other, and you get excellent performance. Apple needs to push iPadOS further and allow devs to do more.
 
You are comparing benchmarks executed on very optimized hardware/software machines (iPhone and iPad Pro, both of which I own) to full-on computers with a complete OS. As much as I love my iPad Pro, it simply cannot do what my computer does, no matter what the benchmarks say.

Look at the Surface Pro X (ARM) and Surface Pro 7 (Intel Core i3 to i7). Even the mid-range Pro 7 beats the fully specced Pro X in performance on Photoshop and engineering design software.

The Surface Pro X in those benchmarks is running x86 software, not native ARM64 binaries.
Native benchmarks like 3DMark or Geekbench usually run about twice as fast compared to x86 emulation mode.
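If you want to check that on your own machine, Windows exposes IsWow64Process2 for exactly this (Windows 10 1709 and later). A rough Python/ctypes sketch; the machine constants come from the PE format spec:

```python
import ctypes
from ctypes import wintypes

IMAGE_FILE_MACHINE_UNKNOWN = 0x0     # "not running under WOW64/emulation"
IMAGE_FILE_MACHINE_ARM64 = 0xAA64

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.GetCurrentProcess.restype = wintypes.HANDLE
kernel32.IsWow64Process2.argtypes = [wintypes.HANDLE,
                                     ctypes.POINTER(wintypes.USHORT),
                                     ctypes.POINTER(wintypes.USHORT)]
kernel32.IsWow64Process2.restype = wintypes.BOOL

process_machine = wintypes.USHORT()
native_machine = wintypes.USHORT()

# Reports the architecture this process was built for, plus the native
# architecture of the machine it is actually running on.
if not kernel32.IsWow64Process2(kernel32.GetCurrentProcess(),
                                ctypes.byref(process_machine),
                                ctypes.byref(native_machine)):
    raise ctypes.WinError(ctypes.get_last_error())

if (native_machine.value == IMAGE_FILE_MACHINE_ARM64
        and process_machine.value != IMAGE_FILE_MACHINE_UNKNOWN):
    print("x86 binary running under emulation on an ARM64 machine")
else:
    print("running natively")
```

Run it under a native ARM64 Python build and then under an x86 build on the same device and you'll see both cases.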
This is something that most Apple fanbois don't get. There are many functions that the A-series processors have been optimized for, hardware accelerated, etc. When you introduce CPU-heavy workloads that don't have that optimization, and are essentially being done "in software" (aka CPU grunt work), you'll find out pretty quickly why our main desktop/laptop machines are still using Intel CPUs.

Will that eventually change? Probably. Apple has shown remarkable progress on the A-series chips. But I suspect it's going to be a while yet.

Actually, you've got it wrong.
Apple's A-series CPUs are already matching desktop x86 performance, based not on Geekbench scores but on SPEC2006, an industry-trusted benchmark with official score submissions from Intel, AMD, and IBM.

That's a complex workload, and not something running accelerated by DSPs; it's exactly the "all work done in software" case.
 
You are comparing benchmarks executed on very optimized hardware/software machines (iPhone and iPad Pro, both of which I own) to full-on computers with a complete OS. As much as I love my iPad Pro, it simply cannot do what my computer does, no matter what the benchmarks say.

Look at the Surface Pro X (ARM) and Surface Pro 7 (Intel Core i3 to i7). Even the mid-range Pro 7 beats the fully specced Pro X in performance on Photoshop and engineering design software.

The idea that iOS is not “a complete OS” seems like more of a feature-based critique of iOS than a correct evaluation of the hardware the OS runs on. Benchmarks like Geekbench are designed to be platform-agnostic, and any functional problem you have with iOS is irrespective of the hardware that drives it. Render a 4K project in iMovie on a fan-less, super-thin iPad Pro head-to-head with a significantly thicker and louder 13” MacBook Pro and then try to tell me that the A-series ARM processors have no potential in a Mac. Additionally, implying that the “complete OS” is the one which is NOT “very optimized” makes little sense at all.
 
Hmmmmm. More like Intel told Apple one thing, shipped a different thing, and Apple played the cards they were dealt.

It's likely a bit of both! The lead time Apple needs meant designing around what Intel claimed it would deliver. But Apple was foolish not to have a plan B just in case. It was clear early on that Intel was likely to come up short, so Apple should have been smart enough to have other options. If they had altered the case to fit a bigger cooling solution, forgetting the 'thin is in' mantra, that would have served them better. Someone failed to think it through!
 
'7nm' and '5nm' are marketing terms. I remember news of a TSMC representative stating that the 'nm' is more like a symbol, just like '10th gen'.
However, Intel's inability to release 10nm on schedule should still be considered a management disaster.


My observation of this thread as a whole is that everyone is fixated on the nm's, because lower is better... but I'm also noticing that the same people aren't engineers.

What is missing from this discussion is that the "process" is about the wavelength of the laser used to expose the wafers.
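To put rough numbers on that: printable feature size follows the Rayleigh criterion, CD = k1 * lambda / NA. A back-of-the-envelope sketch in Python, using representative textbook values for 193nm ArF immersion tools (not any particular fab's figures):

```python
# Rayleigh criterion: smallest printable half-pitch in a single exposure.
wavelength_nm = 193.0      # ArF excimer laser used for deep-UV lithography
numerical_aperture = 1.35  # typical water-immersion projection optics
k1 = 0.28                  # process-dependent factor; ~0.25 is the practical floor

cd_nm = k1 * wavelength_nm / numerical_aperture
print(f"single-exposure half-pitch: {cd_nm:.0f} nm")  # ~40 nm
```

Anything finer than that single-exposure limit needs multi-patterning or a shorter-wavelength (EUV) source, which is also why node names like "14nm" and "7nm" stopped mapping to the laser wavelength, or to any single drawn dimension, a long time ago.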

Sure, they can fit more onto a chip with a shorter wavelength, but is it always beneficial to make the circuitry as small as possible? I don't think so. I know there are issues that pop up when traces get too small.

And with AMD, just because they advertise that they are using a process with higher resolution, does that mean they are actually making designs that push the limits? My guess is no... a 1080p TV can play 720p content just fine...

I would like to hear an engineer's take, but I anticipate there is a complex set of factors behind the choices made in these chips. I bet the nanometers aren't the biggest factor.
 