Would it be fair to say that there's a disjunction in that progression because of what Geekbench measures? The disjunction I'm thinking of is the point at which Apple started adding additional functional units into the A-series outside of the processing cores, the biggest example being the Neural Engine.

Sure. These benchmarks measure the CPU. (Well, the Compute benchmark measures a mix of the CPU and GPU.)

But that's what most apps will use. Almost nothing outside Apple's own stuff (such as the Camera app) benefits from the Neural Engine.
 
I do not think it is needed either - that is why I was surprised by your "useless" statement. I personally would not consider a hypothetical AS Mac with A14/iPad Air single-core performance but much higher multi-threaded performance useless.

Agreed. I'm just saying there are diminishing returns from adding more cores.
 
Ahhh... my mind. Are you saying a mobile chip without fans, running on battery, outperforms a full-fledged Intel CPU with fans running off a wall socket, while using 1/10th of the power? My brain can't comprehend this. This is akin to saying a bicycle is faster than a car.

Yes, an Apple A14 outperforms any Intel and AMD CPU on the market right now, in single-threaded tasks, in quick bursts.

And, if you really need an analogy, no, it's akin to saying a smaller CPU from 2020 is faster than a bigger CPU from 2015.
 
My generalization isn't "wrong." The A12X and A12Z are in a line of products that don't update on a yearly cadence, so obviously 20% per year couldn't apply. And if you'd like multi-core scores, you'll see the data works for that, too (I'm not your personal Google; all this data is out there). And I'm not sure what your point is re the A9 and A10. I never suggested that Apple achieved *only* 20% per year. (And I said 2015, which would have made the A9 the baseline, so the 70% improvement wouldn't be on the list.)

Being overly pedantic doesn’t advance the conversation.
Making generalisations that don't fit the data and then making excuses for it doesn't advance the conversation.
 
Making generalisations that don't fit the data and then making excuses for it doesn't advance the conversation.

It actually fits the data remarkably well. Here's a fit of the Geekbench scores in single-core, multi-core, and Metal from 2015 (A9) to 2019 (A13 Bionic) using the least-squares method: 22% year-over-year improvement in single-core, 29% YoY in multi-core, and 27% YoY in Metal.

[Attachment: Figure_1.png, least-squares fits of single-core, multi-core, and Metal scores, 2015-2019]
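
For anyone who wants to reproduce that kind of fit, here's a minimal sketch of the approach: a log-linear least-squares fit whose slope gives the year-over-year growth rate. The scores below are rough, illustrative Geekbench 5-style single-core numbers, not the exact data behind the attached figure.

```swift
import Foundation

// Rough, illustrative Geekbench 5-style single-core scores for the A9...A13.
// These are placeholders, not the exact data behind the attached figure.
let years: [Double]  = [2015, 2016, 2017, 2018, 2019]
let scores: [Double] = [540, 740, 920, 1110, 1330]

// Fit log(score) = a + b * year by least squares; the yearly growth factor
// is then exp(b), so the YoY improvement is exp(b) - 1.
func yearlyImprovement(years: [Double], scores: [Double]) -> Double {
    let n = Double(years.count)
    let y = scores.map { log($0) }
    let sumX  = years.reduce(0, +)
    let sumY  = y.reduce(0, +)
    let sumXY = zip(years, y).map { $0 * $1 }.reduce(0, +)
    let sumXX = years.map { $0 * $0 }.reduce(0, +)
    let slope = (n * sumXY - sumX * sumY) / (n * sumXX - sumX * sumX)
    return exp(slope) - 1
}

let yoy = yearlyImprovement(years: years, scores: scores)
print(String(format: "Single-core YoY improvement: %.0f%%", yoy * 100))
```

The same fit, run separately on multi-core and Metal scores, gives the other two percentages.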
 

But that's what most apps will use. Almost nothing outside Apple's own stuff (such as the Camera app) benefits from the Neural Engine.

You might want to check the App Store for the number of apps, especially pro apps, that now include Core ML models. The Neural Engine accelerates inference for many typical ML layers (CNNs, etc.) in these models.
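
As a rough illustration of how little an app has to do to benefit: with Core ML, targeting the Neural Engine is mostly a configuration choice. A minimal sketch (the `MyClassifier` model class is hypothetical, standing in for any Xcode-generated Core ML model):

```swift
import CoreML

// Hypothetical model class ("MyClassifier") as generated by Xcode from a
// bundled .mlmodel file; the pattern is the same for any Core ML model.
let config = MLModelConfiguration()

// .all lets Core ML schedule each layer on the CPU, GPU, or Neural Engine,
// whichever it judges best on the current device.
config.computeUnits = .all

do {
    let model = try MyClassifier(configuration: config)
    // ... call model.prediction(...) with the model's input type
} catch {
    print("Failed to load model: \(error)")
}
```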
 
Yes, an Apple A14 outperforms any Intel and AMD CPU on the market right now, in single-threaded tasks, in quick bursts.

And, if you really need an analogy, no, it's akin to saying a smaller CPU from 2020 is faster than a bigger CPU from 2015.

Not really, smartphone CPUs are different from desktop CPUs. We all know mobile CPUs are never more powerful than desktop ones... at least historically. For example, I don't think the original iPhone CPU was more powerful than a G4 tower from 2002?
 
Not really, smartphone CPUs are different from desktop CPUs. We all know mobile CPUs are never more powerful than desktop ones... at least historically. For example, I don't think the original iPhone CPU was more powerful than a G4 tower from 2002?
There are at least three logical fallacies in this post.
 
You might want to check the App Store for the number of apps, especially pro apps, that now include Core ML models. The Neural Engine accelerates inference for many typical ML layers (CNNs, etc.) in these models.

Fair enough.

Not really, smartphone CPUs are different from desktop CPUs.

Not really. They have less thermal headroom, obviously, and tend to integrate more chips on the SoC. That's about it.

We all know mobile CPUs are never more powerful than desktop ones... at least historically. For example, I don't think the original iPhone CPU was more powerful than a G4 tower from 2002?

Why does that history matter?

The original iPhone CPU was an ARM11. Very old core.

The iPhone 3GS through 4 used ARM Cortex-A8 derivatives. Much newer, but still fairly mediocre.

Starting with the A6 in the iPhone 5, Apple began using custom CPU cores, and they've been well ahead of ARM's own Cortex designs.

On top of that, Intel has basically lost half a decade to their Skylake microarchitecture, for a number of reasons. They're only now starting to catch up with Ice Lake and Tiger Lake. But even Tiger Lake, at thrice the thermal headroom, barely competes against the A14.
 
I've done some math. Given the performance ratio of the A10X to the A10, and of the A12Z to the A12, we can fairly expect the A14Z to post xbench scores as high as 2350 in single-core and 6200 in multi-core. But given that those scores are for a chip with four high-performance cores, for the chip with eight high-performance cores rumored to be used in the Macs we can expect a 12,400 multi-core xbench result. That would be much faster in single-core than any Intel chip money can buy, and faster in multi-core than the most expensive chip available for the iMac.

And this does not account for higher clock speeds that could very well be used in the Macs, thanks to better cooling.

For me, that much higher performance guaranteed by ARM chips is a fact. I'm more curious to understand how Apple will replace the high-performance discrete graphics cards available for the iMacs, and what performance we could expect.
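
For what it's worth, here's a minimal sketch of the ratio-based extrapolation described above. Every input is an illustrative placeholder (assumed baseline scores and uplift ratios, not measured values).

```swift
// Ratio-based extrapolation, as described above: scale a phone chip's scores
// by the historical X/Z-variant uplift, then scale multi-core again for the
// larger rumored core count. All inputs are illustrative placeholders.
let a14SingleCore = 1600.0        // assumed A14 single-core score
let a14MultiCore  = 4000.0        // assumed A14 multi-core score

let xVariantSingleUplift = 1.05   // assumed: X/Z variants barely change single-core
let xVariantMultiUplift  = 1.55   // assumed multi-core uplift from the extra big cores
let bigCoreScaling       = 2.0    // 8 big cores instead of 4

let estSingle   = a14SingleCore * xVariantSingleUplift
let estMulti    = a14MultiCore  * xVariantMultiUplift
let estMacMulti = estMulti * bigCoreScaling

print("Estimated single-core: \(Int(estSingle))")   // 1680
print("Estimated multi-core:  \(Int(estMulti))")    // 6200
print("Estimated 8-core Mac:  \(Int(estMacMulti))") // 12400
```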
 
I've done some math. Given the performance ratio of the A10X to the A10, and of the A12Z to the A12, we can fairly expect the A14Z to post xbench scores as high as 2350 in single-core and 6200 in multi-core.

A10X vs. A10 or A12X vs. A12 is virtually identical in terms of single-core performance. It's in multi-core scores that the X variant shines.

But given that those scores are for a chip with four high-performance cores,

I don't know about xbench, but generally speaking, I don't think an app running on iOS can specifically address only the high-performance cores.

Also, the A10 Fusion can only address either the high-performance cores or the low-performance cores, but the A11 and newer can address both at the same time, so especially with an A12X, you won't know at all to what extent the score comes from which cores. You can kind of extrapolate it (say, take a single-core score and multiply it by the number of cores times a factor like 0.8), but that's guesswork.
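
Spelled out, that back-of-the-envelope guess looks something like the sketch below; the 0.8 scaling factor and the sample numbers are purely illustrative, as said above.

```swift
// Rough multi-core estimate from a single-core score. This is the guesswork
// described above; the 0.8 scaling factor is purely illustrative.
func estimatedMultiCore(singleCore: Double,
                        coreCount: Int,
                        scalingFactor: Double = 0.8) -> Double {
    return singleCore * Double(coreCount) * scalingFactor
}

// Example: a hypothetical chip scoring 1600 single-core with 8 big cores.
let estimate = estimatedMultiCore(singleCore: 1600, coreCount: 8)
print("Estimated multi-core score: \(Int(estimate))")  // 10240
```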

for the chip with eight high-performance cores rumored to be used in the Macs we can expect a 12,400 multi-core xbench result.

I really think that's throwing too many numbers at the wall. We know neither the core count nor the multi-core behavior. It could also be 9,000. Or 18,000.

That would be much faster in single-core than any Intel chip money can buy, and faster in multi-core than the most expensive chip available for the iMac.

True, although the gap will shrink a little with Rocket Lake (because it moves from Skylake to Cypress Cove cores), and possibly a fair bit with Alder Lake (because it's 10nm, and because it also has heterogeneous cores).

And this does not account for higher clock speeds that could very well be used in the Macs, thanks to better cooling.

True.
 
I'm more curious to understand how Apple will replace the high-performance discrete graphics cards available for the iMacs, and what performance we could expect.
Look for info about what Imagination Technologies has been working on. Some have said that Apple’s renewed agreement had to do with licensing the use of some of their newer tech.

 
And this does not account for higher clock speeds that could very well be used in the Macs, thanks to better cooling.

True.

I’m guessing we won’t see much higher clock speeds. We’ll just see the top clock speeds be sustained longer. But who knows.
 
The whole RISC vs. CISC thing is an anachronism. Nobody takes it seriously any more.

Intel's complex instructions are split into simpler micro-ops internally, until the result looks very RISC-like. At the same time, ARM has introduced all kinds of extensions such as pointer authentication (not to mention other instructions introduced by licensees), which look very CISC-like.

Ultimately it's all about the chip designers, and where they feel the appropriate balance is for each chip they design.

Thanks for correcting me and straightening things out. I just don't understand why everyone keeps comparing a small, low-power chip with a chip that can easily draw 250+ watts under load. I believe technology has moved fast and Intel had been slacking before Ryzen came out. But expecting a first-generation chip, even if it's coming from Apple, to make something like the highest-end Intel chip irrelevant is mad. AMD seems to have succeeded in beating Intel with Zen 3, but even that doesn't make Intel chips suddenly irrelevant.
 
Thanks for correcting me and straightening things out. I just don't understand why everyone keeps comparing a small, low-power chip with a chip that can easily draw 250+ watts under load. I believe technology has moved fast and Intel had been slacking before Ryzen came out. But expecting a first-generation chip, even if it's coming from Apple, to make something like the highest-end Intel chip irrelevant is mad. AMD seems to have succeeded in beating Intel with Zen 3, but even that doesn't make Intel chips suddenly irrelevant.

People said a car with a simple electric motor, no transmission, and a drivetrain weighing a fraction of an internal combustion engine's couldn't possibly keep up with a Ferrari while costing a fraction of the price.

They were wrong about that, too.

RISC vs. x86 is exactly the sort of change where one should expect a big difference like this, especially because Intel is also doing a bad job at x86, and because Intel can't fab anything well anymore.
 
It is. Skylake-derived Intel CPUs are fairly mediocre at this point.

You'd have to look at Tiger Lake to see some competitive numbers again.

There aren't any Tiger Lake CPUs on the desktop yet, so the 9900K is going to stay in my workstation for now. While I could upgrade to some 10th-gen part, that's a marginal improvement for the cost of a CPU + motherboard.

On the laptop front, however, I could stand an upgrade. My current laptop is a Surface Book 2 with an i7-8650U. Tiger Lake scores are nearly double. I need to get my hands on one of those laptops and check build times for my source repo against the 9900K.
 
That’s not presumptuous, just plain ignorant, considering what cmaier has already said.

No, rejecting your misunderstanding of what I wrote doesn't make me ignorant.

I took the time, and did the work, to construct a careful, logical argument concerning cmaier's post. And as can be seen from his reply, he himself understood and acknowledged the point I was making.

You, by contrast, did a lazy, sloppy analysis of my careful argument that got things completely wrong, ignored my explanations, and then childishly lashed out and called me ignorant because I won't accept your faulty thinking. The problem isn't just your complete lack of understanding, it's that you have no idea how much you don't understand.

I'm sure the people who believe you can build perpetual motion machines also think I'm ignorant, because I lack their profound insight. So congratulations, you're in good company.
 
The problem here is your own expectations and what you understand by "best". The world's best pro display is the one that professionals can actually afford to buy. Someone could say that the Tesla Model S is the world's best car, but this does not necessarily mean it is the fastest or most luxurious one, or the one with the most features. The XDR monitor from Apple is the BEST display for the vast majority of professionals, except for a few individuals/companies who actually need or can afford a Sony BVM-HX310. Overall, the XDR monitor is the world's best display for pro users, regardless of the fact that there may be some other monitor out there, with an insane price tag, that provides a better feature set for a specific task, such as true HDR. Something just can't be the "best" if it is virtually inaccessible to ordinary mortals, so I consider Apple's claim absolutely valid in this case.
You're really spinning hard to help out Apple's marketing team, and I'm sure they appreciate the effort! But no. It's not about my personal expectations.

You need to watch (or re-watch) the WWDC presentation where they introduced the XDR. They were very specific in saying it was designed to do what the Sony did, as well as having capabilities the Sony did not. And that's simply a lie. Particularly since the Sony is specifically built to meet the Dolby Vision specifications for a professional HDR mastering monitor, and Apple certainly knew the XDR would not be able to do so. So Apple's claim was objectively a lie, independent of my expectations. [I'm assuming here that Apple was aware of the capabilities of its own product, and that this was not a mistake made out of ignorance.]

I.e., the Sony's ability to meet those specifications is its *defining feature*. It's why the Sony is so expensive. So it's a lie to say that a monitor that can't meet those specifications can do what the Sony can, and more. This seems so obvious I don't understand why I have to explain it. Consider the Nikonos cameras. Their defining feature was that they could work underwater without a case. So what Apple did is like introducing a camera that can't be submerged and saying it has the capabilities of the Nikonos.

And the professional colorists (the kind of people who use a grading monitor) are saying the exact same thing. Here are some posts from a discussion thread at liftgammagain.com, a professional colorists' forum. These pretty much sum up the pros' reaction: they find the monitor may or may not be good for other things, but were disgusted by the phony claim that it can do what the BVM-X300 can do. These are all industry professionals, and all use their real names. You can check out their backgrounds by googling their names followed by "colorist":

[Attached screenshots: three posts from the liftgammagain.com colorist thread]
 