I'm not doubting you but, rather, just trying to educate myself. Did Apple confirm that they don't intend to ship the prior A12Z chip in future Macs but, instead, a new range that has not been used in the iPad/iPhone?
Apple hasn't confirmed anything.

Currently Apple sells Macs with 2, 4, 6, 8, and 12 to 28 cores. Common sense says there won't be _one_ ARM chip for all Macs. Common sense says the chip going into the lowest-end Macs (replacing two cores) will be whatever mobile chip Apple has in six months' time, clocked to the maximum the chip can handle. That will be a huge improvement for dual-core Macs even with Rosetta. Take these benchmarks, add 5% for an improved chip, 5% for an improved Rosetta, 40% for running at 3.5 GHz, and another 50% for running native ARM code.
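Those guessed factors compound multiplicatively rather than additively, so the combined effect is larger than their sum suggests. A quick sketch of the arithmetic (every percentage here is the speculative guess from above, not a measured number):

```python
# Compound the guessed improvement factors over the DTK's Rosetta
# benchmark numbers. All four percentages are speculation, not data.
factors = {
    "improved chip": 1.05,
    "improved Rosetta": 1.05,
    "clocked at 3.5 GHz": 1.40,
    "native ARM code": 1.50,
}

combined = 1.0
for name, factor in factors.items():
    combined *= factor

print(f"combined speedup: ~{combined:.2f}x")  # about 2.3x over these scores
```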

Common sense is also that Apple will combine two or four of these chips into one package, at which point we will have a huge improvement for all the Macs with four to eight cores. We will have a huge improvement running x86 code through Rosetta, and native code will fly.
 
It's not that I don't understand all that, but it's still bad considering the 9 year gap in technology.
Why is it bad? (It's a 7-year gap, btw.) You take a 2-year-old chip that is designed for tablets, not personal computers. You take away half its cores. You reduce its clock speed. You take a benchmark tool designed for the wrong architecture and run it through a translator. And it still runs pretty competitively with current machines. That's impressive, and it tells us that when we see chips from Apple that are MEANT for Macs, with more cores, all of which work, running at the intended speed, with a native version of Geekbench, it will kick your 9-year-old chip's ass.
 
True, but his 9-year-old PC costs 200 euros. I want to see you paying ten times more to Apple for comparable performance.

That's the whole point. You can't pay Apple any amount of money for this even if you wanted to, because it's not the release hardware. It's a bare-bones dev kit slapped together from parts they had lying around so developers can get their software ready ahead of time. They rent them out to approved devs for $500 (so significantly less than 10 × $200). It's a stupid comparison to begin with, and continuing to argue it as if you have a valid point only makes it worse.

For a more reasonable comparison, put his old CPU up against a 2020 iPad Pro (which runs this same processor, although with less RAM). There you'll see it get stomped by the same CPU even under the power and heat constraints of a fanless tablet, which should give you a much better idea of what these chips are capable of in a fair setup.
 
Apple hasn't confirmed anything.

Currently Apple sells Macs with 2, 4, 6, 8, and 12 to 28 cores. Common sense says there won't be _one_ ARM chip for all Macs. Common sense says the chip going into the lowest-end Macs (replacing two cores) will be whatever mobile chip Apple has in six months' time, clocked to the maximum the chip can handle. That will be a huge improvement for dual-core Macs even with Rosetta. Take these benchmarks, add 5% for an improved chip, 5% for an improved Rosetta, 40% for running at 3.5 GHz, and another 50% for running native ARM code.

Common sense is also that Apple will combine two or four of these chips into one package, at which point we will have a huge improvement for all the Macs with four to eight cores. We will have a huge improvement running x86 code through Rosetta, and native code will fly.
Apple has absolutely said these will be new chips and not the A12Z.
 
A 30% hit to single-core performance is actually efficient for running non-native code; I'm impressed, unlike everyone here. Also, I have a feeling the 14X/Z or whatever will outperform Intel's equivalent offerings by more than 30%.
Either the emulation is better than I would have thought or the native performance is better than I thought.
 
Anyone who knows anything about emulation is going to be seriously impressed with those numbers. A 27.5% performance loss for Rosetta over native code single-threaded and 40% multi-threaded is pretty darned insane. With the CPU clock speed being slightly lower and the low-power cores not being utilised, an overall performance loss of about 30% for emulated versus native code looks possible. That is insane. Things are looking pretty good for even emulated x86 code once Apple actually starts rolling out production hardware with current-generation ARM chips optimised for desktop use. This thrown-together developer machine, using 2-year-old hardware originally designed for an iPad, is already matching the entry-level x86 Mac mini on x86 code.
You would be absolutely correct if Rosetta used simulation. It does not. It translates x86 code into ARM code. The benchmark was executing native ARM code. The code may not be optimized by the translator, but it is still native ARM code.
 
The translator must make conservative conversions to ensure that the produced ARM code is correct. Translating machine code optimally is in many ways more difficult than translating the source program written in, say, C or Swift. Not only is the machine code targeted at a different architecture, so some patterns just won't be optimal for the ARM host, but the high-level language code contains more information that signals the developer's intent. There are other issues, like certain indirect operations, which often can't be translated directly and most likely have to be trapped and handled specially.

Not to mention that Rosetta has to be fast, so it can't spend too much time optimizing.
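For intuition, the difference between per-instruction emulation and Rosetta-style ahead-of-time translation can be sketched with a toy stack machine. Everything here, the mini "ISA", the opcode names, the translator, is invented for illustration and has nothing to do with real x86 or ARM encodings:

```python
# Toy "guest" program for an invented stack-machine ISA: computes (2 + 3) * 4.
program = [("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PUSH", 4), ("MUL", None)]

def interpret(prog):
    """Emulator style: decode and dispatch every instruction on every run."""
    stack = []
    for op, arg in prog:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            stack.append(stack.pop() + stack.pop())
        elif op == "MUL":
            stack.append(stack.pop() * stack.pop())
    return stack[-1]

def translate(prog):
    """Rosetta style: pay the decode cost once up front, emit 'native' code
    (here: Python source), then run the translated artifact directly."""
    lines = ["def translated():", "    stack = []"]
    for op, arg in prog:
        if op == "PUSH":
            lines.append(f"    stack.append({arg})")
        elif op == "ADD":
            lines.append("    stack.append(stack.pop() + stack.pop())")
        elif op == "MUL":
            lines.append("    stack.append(stack.pop() * stack.pop())")
    lines.append("    return stack[-1]")
    namespace = {}
    exec("\n".join(lines), namespace)  # build the translated function
    return namespace["translated"]

native = translate(program)              # one-time translation cost
assert interpret(program) == native() == 20
```

Note that the translated function still mirrors the guest's stack discipline instruction by instruction, which is why translated code is correct but rarely as fast as code compiled for the host from source.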

But again, these first results are encouraging. Not only is the performance quite good, but, more importantly, it can run a complex x86 software suite like Geekbench correctly! Which is quite a feat.

Ok, that is fair enough. Native ARM code is being run (and not run-time emulation as everyone seems to be insinuating in this thread), but the actual code has not been fully optimized and is a lowest-common-denominator translation.
 
About 10% slower than a Core i3 mini at 3.6 GHz.

For a developer rig, that is impressive.

I would expect production Macs to have 8+8 cores at a high clock rate in the latest generation.

If Rosetta can emulate the speed of a Core i7 while native ARM apps are closer to a Core i9 × 200%, then it's a game changer.

I am not sure why performance would mean anything at this point. Isn't this just for developers to test their apps? Apple isn't going to be showing off their desktop chips at this point so they just used their iPad Pro chip instead. Seems pretty simple.
My assumption is the rig is designed to provide the minimum acceptable performance for a developer to optimize their code. Like a base-model Intel mini.

“If it works well enough on this, it will scream on the better production silicon.”
 
  • Like
Reactions: DNichter
Knowing this is just something they put together to get developers ready, I’m excited to see what the first production Mac-specific SoCs are capable of. I’m already postponing a needed iPad upgrade in case the money is better spent on one of these new Macs.
 
"3) Only using 4 out of 8 cores for some reason"

Isn't that just big.LITTLE? Does the iPad actually utilize all cores when running Geekbench? Wouldn't it even be worth it (from a scheduling/emulation perspective) to split tasks between those cores on the fly?

Intel code assumes that a processor with eight cores has eight fast cores. So if Rosetta told macOS that this iPad chip has eight cores, macOS and macOS software would wrongly believe it has eight fast cores. It would try to use eight cores and might slow down. Better to report 4 cores. They have six months to sort this out.

I have run some benchmarks on an iPhone XR (2 + 4 cores), and each little core has about 10% of the performance of a fast core. But iOS knows how to handle this, so you get the total performance of 2.4 fast cores, that 20% difference again. Note that this didn't work with an A11, because running the little cores at max slowed down the fast cores a little bit, so you ended up gaining very little.
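That throughput arithmetic as a quick sketch (the 10% figure per little core is my own rough benchmark result, not an official spec):

```python
def relative_throughput(fast_cores, little_cores, little_ratio=0.10):
    """Total throughput in units of one fast core, assuming each little
    core delivers little_ratio of a fast core's performance."""
    return fast_cores + little_cores * little_ratio

# iPhone XR: 2 fast + 4 little cores
xr_all = relative_throughput(2, 4)                 # 2.4 fast-core equivalents
xr_gain = xr_all / relative_throughput(2, 0) - 1
print(f"XR gain from little cores: {xr_gain:.0%}")   # prints 20%

# The DTK's A12Z: 4 fast + 4 little cores
dtk_all = relative_throughput(4, 4)                # 4.4 fast-core equivalents
dtk_gain = dtk_all / relative_throughput(4, 0) - 1
print(f"DTK gain from little cores: {dtk_gain:.0%}")  # prints 10%
```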

You do realise 4 of the 8 cores are low-power, high-efficiency cores not meant for demanding tasks like a benchmark?
It's not their purpose to run "demanding tasks like a benchmark", but every bit helps. On an iPhone XR (2 + 4) it makes a 20% difference; on this machine it would be a 10% difference. Same as raising the clock rate from 2500 to 2750 MHz. And since the A12 you can run the slow cores as hard as you like for as long as you like, without slowing anything else down.
 
The A12Z is a 7W chip.

That doesn't mean much, because performance scaling isn't always linear. I have yet to see an ARM CPU (including Graviton 2) scale well at high TDP.

Until I see a high-TDP ARM chip from Apple, I will be skeptical regarding its performance scalability with a higher power envelope.
 
What it doesn't tell us is how well Apple can repurpose their chips from a power-constrained portable environment to a heat constrained desktop one, [ ...]
Heat and power use are exactly the same thing: every single watt of power used by the CPU becomes heat.
The A-series CPUs are great in the iPad/iPhone because they provide significant computing resources that don't draw much (electric) power and hence keep the cooling requirements low.

Translate that to a laptop:
- either they throw in a lot more cores and keep the current battery and cooling architecture, giving you a much more potent computer,
- or they give you a laptop with the same computing power as today but a much smaller battery (and hence reduced weight and bulk) and lower thermal needs: essentially the power of an iPad Pro in the size of an iPad Pro, but running macOS,
- or somewhere in between,
- or both: Air vs. Pro ...

From a pure macOS point of view: this is GREAT!

From the point of view of an owner of Intel-based Macs: if these new ones are this good, how long until 3rd parties drop support for the Intel Macs?

From somebody who runs Parallels to get access to multiple native copies of wintendo (to check, e.g., how my websites look in crappy old MSIE): that needs to get solved before I can move to ARM-based machines, and not just for W10 or Linux, but for everything that can run on a typical x86 machine not from Apple.

Boot Camp: be gone, and that's good IMHO: don't compromise to keep MSFT's OS happy on Apple hardware.
 
It's not that I don't understand all that, but it's still bad considering the 9 year gap in technology.

But it's already comparable to the current entry-level Mac mini, on a hacked-together development unit using iPad hardware from two years ago. This is impressive stuff. I'm not surprised your 9-year-old computer can run things better. My 11-year-old Mac Pro still likely outperforms this development machine. This is part of the reason why Apple is changing their approach: CPU improvements have stalled in recent years, particularly from Intel. A new computer isn't much faster than a five-year-old computer.

These kinds of benchmarks remind me of the early days of the Intel Pentium M processor. They were mobile CPUs that ended up competing with Intel's mainstream Pentium 4 processors of the time, despite being developed purely for mobile. Overclockers got crazy performance out of them too. The Pentium M was an absolute game changer, developed by a different research branch from the main laboratories and based more on the Pentium III than the Pentium 4. Within short order, Intel pivoted and pumped everything into that branch. They added a second core to the Pentium M and the Core Duo was born.

Here we have a mobile chip again, being used in a desktop computer, and it's already keeping up with current hardware despite being handicapped by a translation layer, a lower clock speed and two-year-old technology. I can't wait to see what Apple comes up with when they go all out and create current-generation, higher-performance variants for actual production Macs.
 
You would be absolutely correct if Rosetta used simulation. It does not. It translates x86 code into ARM code. The benchmark was executing native ARM code. The code may not be optimized by the translator, but it is still native ARM code.

You clearly don't understand how this works. It's not compiling C code down to native ARM with optimizations. It's taking optimized x86-64 code and making the most conservative guesses at how to translate it into ARM, which produces code that is NOT optimized to run fast. Translated code will always run significantly slower than natively compiled code.
 

Looking at another x86 emulation, I can tell that Microsoft does a much better job of x86 emulation than Apple.
According to Geekbench, the Surface Pro X through its x86 emulation scored the same as native.


Surface Pro X runs Geekbench natively, not emulating x86.

There's no version of Geekbench compiled for ARM on macOS (yet), but there is one for Windows.
 