Why would you run a dev kit that hides the potential of the chips? Give the power of the chip to the devs. If they don't see the potential, they won't put in the effort.

Won't put in the effort for what? These are dev kits that they have to enter a lottery just for the chance to rent; anyone jumping through that many hoops is going to put in the effort. And those who don't now can catch up later when the final hardware launches.
 
You clearly don't understand how this works. It's not compiling C code down to native ARM with optimizations. It's taking optimized x86-64 code and making the most conservative guesses at how to translate it into ARM, which produces code that is NOT optimized to run fast. Translated code will always run significantly slower than natively compiled code.
What exactly did I get wrong? Rosetta produces native ARM code. I did not claim it was optimized. Interestingly enough, the code may actually inherit some high-level optimization delivered by the C++ compiler on x86.
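To make that inheritance point concrete, here's a toy Swift sketch (nothing like Rosetta 2's real internals; the op tables and expansion factors are invented). The idea is only that a translator working on already-optimized machine code keeps whatever loop unrolling the x86-64 compiler did, while still paying some per-instruction expansion cost:

```swift
// Toy model only - not how Rosetta 2 actually works internally.
enum X86Op { case load, add, store, inc, cmp, jump }
enum ArmOp { case ldr, add, str, subs, bne, setFlags }

// Hypothetical translation table: each x86-style op becomes one or more
// ARM-style ops (preserving x86 flag semantics is a classic source of expansion).
func translate(_ op: X86Op) -> [ArmOp] {
    switch op {
    case .load:  return [.ldr]
    case .add:   return [.add, .setFlags]
    case .store: return [.str]
    case .inc:   return [.add, .setFlags]
    case .cmp:   return [.subs]
    case .jump:  return [.bne]
    }
}

// Loop body the x86 compiler already unrolled 4x before we ever saw the binary.
var unrolledBody: [X86Op] = []
for _ in 0..<4 { unrolledBody += [.load, .add, .store] }   // 4x unrolled body
unrolledBody += [.inc, .cmp, .jump]                        // loop control

let translated = unrolledBody.flatMap(translate)
print("x86-64 ops per iteration: \(unrolledBody.count), translated ARM ops: \(translated.count)")
// The unrolling (12 body ops per iteration instead of 3) survives translation;
// only a roughly constant per-instruction factor is added on top.
```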
 
You are so right; if Apple does decide to launch a CPU with a two-year-old design, run it on just 4 of the 8 cores, under-clock it slightly, and push everything through Rosetta, then this benchmark will support your musings.

Many hold a view that Apple will not do any of the above. But you never know, you could be right.

Spot-on analysis. I'm actually impressed with the performance given the limited cores, Rosetta, etc.
 
Why in the world would you want “benchmarks” from an underclocked version of the A12 meant for developers to test their apps, knowing that Rosetta adds overhead and, more importantly, that the chip itself will never see the light of day in a product that ships to consumers?
 
Spot-on analysis. I'm actually impressed with the performance given the limited cores, Rosetta, etc.
Someone posted his own benchmarking results for an iPad a few posts earlier. His observation was that small-core performance is about 10% of a big core's. If that's correct, adding 4 small cores would add roughly 10% overall performance at best. The impact of Rosetta is hard to estimate.
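As a back-of-the-envelope check on that "~10% at best" claim, taking the quoted figure (small core ≈ 10% of a big core) at face value:

```swift
// All figures are the ones quoted in the thread, not measurements.
let bigCores = 4.0
let smallCores = 4.0
let smallCoreRelativePerf = 0.10                 // figure quoted earlier in the thread

let bigOnly = bigCores * 1.0                     // 4.0 "big-core units" of throughput
let withSmall = bigOnly + smallCores * smallCoreRelativePerf   // 4.4 units

print("Best-case gain from the efficiency cores: \((withSmall / bigOnly - 1) * 100)%")
// ~10%, and that assumes the workload scales perfectly across all 8 cores.
```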
 
I'm actually blown away that a binned chip two generations old can run macOS and translate code this well. I suspect the new iMac will blow socks off in terms of performance with its new form design.
 
The move away from Intel is a solution in search of a problem to solve.

The problem they want to solve is that they can't make their laptops so THIN that you can't see them when viewed in profile with anything Intel is currently manufacturing.

All they seem to care about is making everything, even desktops like the iMac, obnoxiously thin, regardless of how it affects basic function (see the butterfly keyboard, proper cooling in most iMacs for the past 6 years, and now losing the option to run Windows on the same machine).

It wouldn't shock me if the first ARM based Mac laptop came with a zero-key-travel keyboard, just to make it as obnoxiously thin as possible. Don't think for a second there isn't a worse version of the POS butterfly KB in the works.

Form over function. And Apple has shown time and time again, thin is the only form they care about.

I'm just going to sit back and enjoy the show. I'm pretty sure I'm long since blocked, but maybe some of you who still care to ... could ...

apple.com/feedback

?
 
Macs are going to be downgraded in performance and software availability, lol. Emulation is just a band-aid, and in the case of Macs... it's going to be a way of life. Emulate everything that matters, but hey, you get native crappy iOS apps on a $4,000 machine as a trade-off! Woohoo!
 
FWIW, my 2011 iMac i5, native:
[Geekbench screenshot attachment]
 
I can understand not using the "efficiency" cores versus the "performance" cores, but why would Apple underclock them in a Mini compared to an iPad Pro? There's more room in the Mini enclosure for airflow, so there ought not to be a thermal reason, and it's always plugged into a wall socket, so saving power ought not to be an issue either.
Because this is not a real product, so what difference does it make? Or perhaps, since the chip will throttle down far less often than in an iPhone and will run a high percentage of the time at this higher clock rate, there is some sort of limitation on whatever power-supply parts they are using, or some other issue.
I'd say that it's at least possible that these benchmarks provide a close approximation of how non-native apps (running in Rosetta) will perform?
I disagree entirely. We now know that at least one app takes something like a 30% hit. We don't know anything about any other apps. And it's only a percentage - the real chips may run 5x faster than this. We have no idea. So a 30% hit on something 5x faster is as much a possibility as anything.
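A quick sketch of that reasoning with placeholder numbers (both the 30% penalty and the 5x factor are assumptions from this thread, not data):

```swift
// Made-up numbers to illustrate the point: a fixed Rosetta penalty on
// hardware that is some unknown factor faster than the DTK's A12Z.
let rosettaPenalty = 0.30        // the ~30% hit observed for one app
let hardwareSpeedup = 5.0        // purely hypothetical "shipping chip" factor

let emulatedOnNewChip = hardwareSpeedup * (1 - rosettaPenalty)
print("Emulated on the faster chip ≈ \(emulatedOnNewChip)x the DTK running natively")
// 3.5x - the DTK percentage alone tells you very little about final hardware.
```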
 
I'm still not excited for or in favor of this transition, but I will say I don't know why some people are so negative on these results (perhaps they're just being brought back down to reality... LOL).

Keep in mind Geekbench 5's baseline is a Core i3-8100, a 3.6GHz 4C/4T CPU that scores roughly 1000 in ST (duh, it's the baseline) and 3000-3500 in MT, so a score of 830/2800(ish) for non-native code on the A12Z running at 2.4GHz on only 4 cores is actually pretty damn impressive IMHO.

Does it mean this is an acceptable level of performance for a Mac in late 2020/early 2021? No, but assuming that the Mac chips will be A14 derivatives, that the A14 delivers a measurable jump over the A13, and that they will likely feature more cores and higher clocks/power envelopes vs. the A12Z, it's not a bad initial showing. While I'm going to need to see more benchmarks (Cinebench please!) to really come to a conclusion, these initial Geekbench scores are encouraging, and Apple at least gets a passing grade for Rosetta 2 in my book (certainly better than I was expecting).
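A rough sanity check on those ratios in Swift, using the figures quoted above (the Geekbench 5 calibration of ~1000 ST for a Core i3-8100 plus the approximate DTK scores from this thread, not fresh measurements):

```swift
import Foundation

let i3Single = 1000.0, i3Multi = 3250.0        // midpoint of the 3000-3500 range
let dtkSingle = 830.0, dtkMulti = 2800.0       // A12Z under Rosetta, 4 cores @ 2.4GHz

print(String(format: "Single-core: %.0f%% of the i3-8100", dtkSingle / i3Single * 100))
print(String(format: "Multi-core:  %.0f%% of the i3-8100", dtkMulti / i3Multi * 100))
// Roughly 83% and 86% - while translating x86-64 and using only four cores.
```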
 
So, these benchmarks are meaningless, being based upon old chips.

That said, an 811 average is about 73% of native performance when emulated (maybe a bit higher, given 2.4GHz vs 2.5GHz).

In single thread, it matches:

iMac (27-inch Late 2012) Intel Core i7-3770 @ 3.4 GHz (4 cores)

In multi-core, which is limited to the good cores, so 4 threads, it matches:

MacBook Pro (15-inch Mid 2012) Intel Core i7-3720QM @ 2.6 GHz (4 cores)
MacBook Air (Early 2020) Intel Core i5-1030NG7 @ 1.1 GHz (4 cores)

Although these will have had 8 threads.

So the A12Z, emulating x86, is already as fast as the most recent MacBook Air in multi-thread (although ST for that one is 1070).

Add 20% (A13) and 20% (A14), and the next ARM MacBook Air is going to be far, far faster - and it might be 8-core as well!
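Compounding those assumed gains with the ~73% Rosetta figure gives a purely speculative projection (every factor here comes from this thread; none is a measurement):

```swift
let dtkTranslated = 811.0            // average ST score reported for the DTK
let rosettaFactor = 1.0 / 0.73       // if 811 really is ~73% of native
let a13Gain = 1.20
let a14Gain = 1.20

let projected = dtkTranslated * rosettaFactor * a13Gain * a14Gain
print("Speculative native A14-class single-core score ≈ \(Int(projected))")   // ~1600
// Clocks, core counts and thermals in a real MacBook Air could move this a
// long way in either direction.
```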

I find it ironic that two of those Macs are Macs I use daily, right now. The 2012 iMac is my workshop Mac, and I'm typing this on a 2012 non-Retina 15" MBP.


Like déjà vu all over again ;-)
 
Absolutely true. That's where you need caches. ARM has 128 KB of data and 128 KB of instruction L1 cache per core, something Intel can only dream of, and 8 MB of L2 cache. Some Intel chips don't have that much L3 cache.
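If you want to see roughly why those cache sizes matter, a crude working-set sweep like this one shows throughput falling off once the data no longer fits in L1/L2 (only a sketch; the sizes and repeat counts are arbitrary, and absolute times are machine-dependent):

```swift
import Foundation

// Walk arrays of increasing size and watch time per pass grow once the
// working set stops fitting in cache.
func sumPass(_ data: [Int]) -> Int {
    var total = 0
    for value in data { total &+= value }    // wrapping add; we only care about timing
    return total
}

for kb in [64, 128, 1_024, 8_192, 65_536] {              // 64KB ... 64MB working sets
    let count = kb * 1_024 / MemoryLayout<Int>.size
    let data = Array(0..<count)
    var sink = 0
    let start = Date()
    for _ in 0..<10 { sink &+= sumPass(data) }           // repeat passes to smooth noise
    let elapsed = Date().timeIntervalSince(start)
    print("\(kb)KB: \(String(format: "%.4f", elapsed))s (checksum \(sink))")
}
```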

Hyperthreading has caused huge vulnerabilities in the last two years. And it gives you very little extra performance.

Means little to Intel. ARM needs more memory for instructions because they are larger; x86 has a smaller memory footprint for instructions.
 
It would be much more interesting if some developer posted the results of a professional application natively compiled for ARM vs. the same software on a comparable Intel machine.
 
I think the lowest core count with Mac silicon will be 10-12, on a MacBook Air. And Mac Pros will have 80+ cores. Some of these cores might be efficiency cores.

I know Apple and Tim Cook. They are masters of supply chain. They want to minimize costs and risk while maximizing throughput. IMO, the reason to go to ARM is to vertically integrate their supply chain.

An idea for fun - Apple will create one chip for mobile, one chip for laptops, and one chip for desktops; all chips will have the maximum cores available, but Apple will disable some of them for the cheaper machines. However, imagine being able to pay for performance when you need it. Like, I purchase an MBP with 4 cores running at 2GHz for $1k. But if I need a little more juice, and if Apple is using a one-chip design, then I can pay to unlock more cores, boosting performance.

Just an idea, but this move will open up many cool options for Apple to explore. Tim Cook, if you're reading this - DM me. I can make your company fly ;)
 
I can understand not using the "efficiency" cores versus the "performance" cores, but why would Apple underclock them in a Mini compared to an iPad Pro? There's more room in the Mini enclosure for airflow, so there ought not to be a thermal reason, and it's always plugged into a wall socket, so saving power ought not to be an issue either.
There is no intent here to make a powerful machine. What Apple is producing here is something that works, that's all. They are not selling these machines.
 
Means little to Intel. ARM needs more memory for instructions because they are larger; x86 has a smaller memory footprint for instructions.
ARM working sets are not much bigger than Intel working sets. Given current compiler technology, and assuming we are talking about x86-64 rather than 32-bit instructions, the main ARM overhead is just additional bits for the predicate field and occasionally more opcode bits.
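A very rough illustration of that code-size point, assuming fixed 4-byte AArch64 encodings versus an assumed ~3.8-byte average for x86-64, with invented instruction counts (real ratios depend entirely on the compiler and the workload):

```swift
let armBytesPerInstruction = 4.0             // AArch64 instructions are a fixed 4 bytes
let x86AverageBytesPerInstruction = 3.8      // assumed average; x86-64 varies up to 15 bytes

let x86InstructionCount = 100.0              // hypothetical function
let armInstructionCount = 110.0              // assume a modest instruction-count overhead

let x86Size = x86InstructionCount * x86AverageBytesPerInstruction
let armSize = armInstructionCount * armBytesPerInstruction
print("x86-64 ≈ \(Int(x86Size)) bytes vs AArch64 ≈ \(Int(armSize)) bytes")
// 380 vs 440 bytes here - a real but usually modest difference in I-cache
// footprint, which is the "not much bigger" point above.
```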
 