Why would apple show their cards or what their chips are really doing right now?

Why wouldn't they? It could help developers see how much reengineering they need to do to make their code run on future ARM Macs as well.

They grab every chance to boast about their "superior performance" don't they?
 
So, these benchmarks are meaningless, being based upon old chips.

That said, an 811 average is about 73% of native performance when emulated (maybe a bit higher, if it's running 2.4 GHz vs 2.5 GHz).

In single thread, it matches:

iMac (27-inch Late 2012) Intel Core i7-3770 @ 3.4 GHz (4 cores)

In multi-core, which is limited to the four performance cores (so 4 threads), it matches:

MacBook Pro (15-inch Mid 2012) Intel Core i7-3720QM @ 2.6 GHz (4 cores)
MacBook Air (Early 2020) Intel Core i5-1030NG7 @ 1.1 GHz (4 cores)

Although these will have had 8 threads.

So the A12Z, emulating x86, is already as fast as the most recent MacBook Air in multi-thread (although single-thread for that one is 1070).

Add 20% (A13) and 20% (A14) and the next ARM MacBook Air is going to be far, far faster - and it might be 8-core as well!

Also of note: this is still the standard A12Z at 5-8 W (whole SoC) compared to 15-45 W+ for laptop/desktop CPUs, plus their turbo headroom.

The A13 added about 20% within the existing TDP. We don't even know what it'll be like when they scale it up to 15 W.

Plus the A14 will jump to 5 nm (the A12/A13 are on 7 nm).
 
Honestly, forget the raw performance; let's look at the delta between what one would expect natively and what we actually get, to get a rough idea of Rosetta performance. A 30% performance penalty for Rosetta x86 -> ARM emulation is better than I would have expected. That bodes well for this transition.
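
A quick sanity check on that delta, using the 811 figure from earlier in the thread and treating the penalty as roughly 27-30% (both numbers are estimates from this thread, not measurements):

```swift
// Back out the implied native single-core score from the translated one.
// The 0.70-0.73 efficiency range is the thread's guess, not measured data.
let translatedScore = 811.0
print(translatedScore / 0.73)   // ~1111 implied native score
print(translatedScore / 0.70)   // ~1159 if the penalty is a full 30%
```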

Yup.
 
Just 30% is blowing my mind; usually you're in the range of 50 to 100% overhead for x86 emulation on ARM.

This isn't emulation, it's binary translation (transpilation) - you can get some pretty good results out of this technique if you put the work into the binary translator and give it enough information to make good choices when creating the resulting ARM code from the x86 code. I would expect Geekbench to be a worst-case scenario, by the way, as the benchmark should run entirely within its own kernels. Most applications make huge numbers of OS calls, and those are already native and fast.
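
To make the distinction concrete, here is a toy sketch in Swift rather than machine code, with made-up instruction types; the only point is that translation happens once up front, so execution afterwards pays no per-instruction decode cost:

```swift
// Purely illustrative; not how Rosetta 2 is implemented. "X86Op" and "ArmOp"
// are invented stand-ins for real instruction encodings.
enum X86Op { case add(Int), mul(Int) }
enum ArmOp { case add(Int), mul(Int) }

// One-time (ahead-of-time) pass: emit a native instruction stream...
func translate(_ program: [X86Op]) -> [ArmOp] {
    program.map { (op) -> ArmOp in
        switch op {
        case .add(let n): return .add(n)
        case .mul(let n): return .mul(n)
        }
    }
}

// ...so running it later is just executing native code, with none of the
// per-instruction dispatch overhead an interpreter-style emulator pays.
func run(_ program: [ArmOp], start: Int = 0) -> Int {
    program.reduce(start) { (acc, op) -> Int in
        switch op {
        case .add(let n): return acc + n
        case .mul(let n): return acc * n
        }
    }
}

print(run(translate([.add(2), .mul(10)])))   // 20
```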
 
"3) Only using 4 out of 8 cores for some reason"

Isn't that just big.LITTLE? Does the iPad actually utilize all cores when running Geekbench? Would it even be worth it (from a scheduling/emulation perspective) to split tasks between those cores on the fly?

Apple doesn't use big.LITTLE, just something similar in concept. As far as I can tell, big.LITTLE is a feature of Cortex chip designs, not something specced in the ISA. So Apple can't use it.

Anyway, both big.LITTLE and Apple's Fusion support running both the low-power and high-power cores at the same time (though I believe not in the A10 Fusion). I'm not sure that applies to one and the same process, though.
 
Apple needs to do better than this for desktop performance. I am a little scared now.

EDIT: I retract this statement due to all the negative reactions. But Apple still needs to do better than this for desktop performance. I am not scared though.
Don't retract it because it got negative reactions; retract it because it's wrong. This is not an indication of what we will be getting on desktops and laptops, because it is a gimped mobile chip from two years ago. It has no bearing on what they are doing for the desktop, but one thing is for sure: the shipping design will be more modern and will run in a different environment (more RAM, fan cooling, not underclocked, more cores accessible). This was to get hardware out the door for testing, not an example of a commercial product.
 
A good way to make sure code runs fast on the new machines is to give developers slower machines.

And a good way to make development quick and broaden app availability is to provide sufficient hardware for emulation.

If you are a developer with an app for Mac, it is pretty important for you to know what tier of hardware your users will get. It's a big difference between:
1. No action, emulation is fine
2. Recompile
3. Reengineer for ARM
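
For the third case, the usual pattern is that most code simply recompiles into a universal binary and only architecture-specific paths need real work. A hypothetical Swift sketch of such a branch (the function and its body are made up for illustration):

```swift
// Hypothetical example of "reengineer for ARM": only code with architecture
// assumptions (hand-tuned SIMD, inline assembly, JITs) needs a branch like
// this; everything else just builds for both slices of a universal binary.
func fastChecksum(_ data: [UInt8]) -> UInt64 {
    #if arch(arm64)
    // arm64-specific implementation (e.g. NEON-friendly) would go here
    return data.reduce(0) { ($0 &* 31) &+ UInt64($1) }
    #elseif arch(x86_64)
    // existing x86_64 implementation stays for Intel Macs during the transition
    return data.reduce(0) { ($0 &* 31) &+ UInt64($1) }
    #else
    // generic fallback
    return data.reduce(0) { ($0 &* 31) &+ UInt64($1) }
    #endif
}
```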
 
Just 30% is blowing my mind; usually you're in the range of 50 to 100% overhead for x86 emulation on ARM.
Makes one wonder how long they’ve been preparing for this transition. I read an article last week where an ex-Intel engineer put the turning point at the Skylake release, when Apple filed as many chip bug reports as Intel's internal team did.
 
I knew this was going to happen, but I wish it didn’t. Now, for however many months until the first Apple Silicon Mac is released, we’re going to hear about how Apple is going to fail.
 
And a good way to make development quick and broaden app availability is to provide sufficient hardware for emulation.

If you are a developer with an app for Mac, it is pretty important for you to know what tier of hardware your users will get. It's a big difference between:
1. No action, emulation is fine
2. Recompile
3. Reengineer for ARM

It’s in Apple's interest to prod developers to optimize more than is likely required, especially for when this all launches.
 
On the iPad it uses all cores, including the low-power ones.

Also, it's not big.LITTLE. big.LITTLE is Arm's version of heterogeneous cores, and it works quite differently in terms of how it decides to use powerful and efficient cores, which cores it can run simultaneously, etc.

How do we know that apart from the app itself telling us it sees 8 cores?

Do we know emulation will work on the low-power cores? As someone without much insight here: is it worth adding extra complexity to emulation by supporting two different types of cores?
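
For what it's worth, you don't have to take the benchmark's word for it; the OS reports its own view of the core count. A trivial check with Foundation:

```swift
import Foundation

// What macOS itself reports, independent of what Geekbench displays.
// On an 8-core SoC with every core enabled, both values should be 8.
print("processorCount:", ProcessInfo.processInfo.processorCount)
print("activeProcessorCount:", ProcessInfo.processInfo.activeProcessorCount)
```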
 
And LOL at the forum posters who were saying that there wouldn't be any benchmarks because "The EULA forbids it". 🤣

Yes. 🤣
In the beginning, I was not sure. Then, I saw the EULA and I thought: since Apple forbids it, now we can be sure it will be done one way or another. 🤣
 
How do we know that apart from the app itself telling us it sees 8 cores?

Do we know emulation will work on the low-power cores? As someone without much insight here: is it worth adding extra complexity to emulation by supporting two different types of cores?
Why bother building Rosetta to work with the low-power cores if you don't plan to use them in the released version of Apple Silicon? I'm thinking these low-power cores won't be part of Apple's laptop/desktop chips.
 
These results were uploaded on June 20, so likely someone at Apple posted them. However, theoretically Geekbench for iOS would give more accurate results.
 
Also of note: this is still the standard A12Z at 5-8 W (whole SoC) compared to 15-45 W+ for laptop/desktop CPUs, plus their turbo headroom.

The A13 added about 20% within the existing TDP. We don't even know what it'll be like when they scale it up to 15 W.

Plus the A14 will jump to 5 nm (the A12/A13 are on 7 nm).

In native code, I would not be surprised to see the first Apple Silicon MacBook be twice as fast as the previous Intel MacBook Air.

That's assuming the "A14Z" gets +25% from going native, +20% from the A13 generation (same clock), +20% from the A14 generation (same clock), and +20% from a higher clock (could be more).

But really we're in a multiplicative numbers game here, so small differences can multiply out to big differences.
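
Spelling that multiplication out with the factors assumed above (every one of them a guess, not a confirmed number):

```swift
// The assumed per-step gains, compounded. None of these are official figures.
let nativeVsRosetta = 1.25   // dropping the translation penalty
let a12ToA13        = 1.20   // A13 generational gain at the same clock
let a13ToA14        = 1.20   // A14 generational gain on 5 nm
let higherClock     = 1.20   // laptop-class clocks and cooling
print(nativeVsRosetta * a12ToA13 * a13ToA14 * higherClock)   // ~2.16x
```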
 
This isn't emulation, it's binary translation (transpilation)

My understanding is Rosetta 2 does a mix of both: code that it can statically disassemble and recompile, it will; other code it will translate at runtime. Basically a mix of AOT and JIT, depending on what's appropriate.

Whether you consider that "emulation" is a rather academic question. The app that runs is being made to believe it runs on x86 even though it doesn't. Sounds like emulation to me.
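
Incidentally, a process can ask the OS whether it is being translated; Apple documents a sysctl.proc_translated flag for exactly this. A minimal sketch with simplified error handling:

```swift
import Darwin

// Returns true if the current process is running translated under Rosetta 2,
// false if it is native, and nil if the sysctl is unavailable.
func isRunningUnderRosetta() -> Bool? {
    var flag: Int32 = 0
    var size = MemoryLayout<Int32>.size
    guard sysctlbyname("sysctl.proc_translated", &flag, &size, nil, 0) == 0 else {
        return nil
    }
    return flag == 1
}
```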
 
How do we know that apart from the app itself telling us it sees 8 cores?

Do we know emulation will work on the low-power cores? As someone without much insight here: is it worth adding extra complexity to emulation by supporting two different types of cores?

There is no emulation. Rosetta uses translation. It does a one-time translation either at app installation or when the app runs the first time. (There are some specific rare exceptions to this that we don’t need to get into).

So what happens is you now have an ARM binary. To the thread scheduler it looks no different than any other binary. So it’s not like there is some emulation process running where the process is locked to 4 cores or anything.

Additionally, there is no work to be done to support the two types of cores - they handle the same opcodes. The thread scheduler just has to decide on which core to execute which thread, and it can move them at will without compatibility issues.
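
Which is also why a translated app can just use the ordinary QoS machinery and leave core placement to the kernel. A small GCD sketch (illustrative, nothing Rosetta-specific about it):

```swift
import Dispatch

// After translation the binary is ordinary arm64; thread placement is the
// scheduler's job. On Apple silicon, lower-QoS work tends to be placed on the
// efficiency cores and higher-QoS work on the performance cores.
DispatchQueue.global(qos: .userInitiated).async {
    // latency-sensitive work, likely to run on performance cores
}
DispatchQueue.global(qos: .utility).async {
    // background work that can happily run on efficiency cores
}
```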
 
I knew this was going to happen, but I wish it didn’t. Now, for however many months until the first Apple Silicon Mac is released, we’re going to hear about how Apple is going to fail.

Listen less to "pundits" and "experts". It's healthy!
 
Not really. Intel has been offering mild performance increases with crazy heat and power efficiency costs for a while now. This is a solution to a very real problem.

At the cost of breaking compatibility across the entirety of their product line, losing dual-boot, and hacking off everyone who has bought a machine that will now be completely useless when the transition completes in 2 years.
 
Apple doesn't use big.LITTLE, just something similar in concept. As far as I can tell, big.LITTLE is a feature of Cortex chip designs, not something specced in the ISA. So Apple can't use it.

Anyway, both big.LITTLE and Apple's Fusion support running both the low-power and high-power cores at the same time (though I believe not in the A10 Fusion). I'm not sure that applies to one and the same process, though.

Sure, I got the terminology wrong. It's called heterogeneous multi-processing, apparently.

What I was trying to get at is that the cores used are likely the high-performance ones. The marginal benefit of using low-power cores will not result in a doubling.
 
I'm not doubting you but, rather, just trying to educate myself. Did Apple confirm that they don't intend to launch the prior A12Z chip in future Macs but, instead, a new range that has not been used in the iPad/iPhone?

Yes, Apple confirmed that in the WWDC Keynote.
 
My understanding is Rosetta 2 does a mix of both: code that it can statically disassemble and recompile, it will; other code it will translate at runtime. Basically a mix of AOT and JIT, depending on what's appropriate.

Whether you consider that "emulation" is a rather academic question. The app that runs is being made to believe it runs on x86 even though it doesn't. Sounds like emulation to me.

The difference is that "emulation," in the context being discussed here, implies there is some "emulator app" running in the background, doing things, and that it can only access 4 cores (or whatever). That's not at all how it works. And the JIT aspect appears to be very rare in Rosetta 2, only necessary for some very quirky cases (because, for example, Intel allows writeable code pages, so code can actually be modified on the fly). In most situations it's just a static one-time translation.
 