From somebody who runs Parallels to get access to multiple native copies of wintendo (to check, e.g., how my websites look in crappy old MSIE): that needs to get solved before I can move to ARM-based machines, and not just for W10 or Linux, but for everything that can run on a typical x86 machine not from Apple.

TBH, if even half this kind of performance is possible once optimised virtualisation software comes out, then I think most people who use virtual machines will be just fine. Ask yourself this: how much performance do you actually *need* in your virtual machines? You say you use them to check your websites in different versions of Windows. Do you really need your virtual machine to run with the performance of a top-end, latest-gen i7 processor? Websites aren't exactly processor intensive, or at least they (largely) shouldn't be.
 
These Apple Silicon MacBooks will be such a nice upgrade from my mid-2010 MacBook. Fingers crossed that they price them slightly more sanely than their current offerings.
 
Launching tiny consumer apps is not a comparison to heavy compute tasks on desktop creative apps.
Absolutely true. That's where you need caches. This ARM chip has 128 KB of data and 128 KB of instruction L1 cache per core, something Intel can only dream of, plus 8 MB of L2 cache. Some Intel chips don't have that much L3.
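If you want to see the cache hierarchy on your own machine, macOS exposes these sizes through sysctl. A minimal sketch in Python (the hw.* keys below are standard macOS sysctl names; a key that doesn't apply to the local chip, such as a missing L3, simply returns nothing):

```python
# Minimal sketch: read per-level CPU cache sizes on macOS via sysctl.
import subprocess

def cache_size(key: str):
    """Return the cache size in bytes for a sysctl key, or None if absent."""
    result = subprocess.run(["sysctl", "-n", key],
                            capture_output=True, text=True)
    try:
        return int(result.stdout.strip())
    except ValueError:
        return None  # key not present on this machine (e.g. no L3 cache)

for key in ("hw.l1icachesize", "hw.l1dcachesize",
            "hw.l2cachesize", "hw.l3cachesize"):
    size = cache_size(key)
    print(f"{key}: {size // 1024} KB" if size else f"{key}: n/a")
```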
Both this ARM chip and my 2600 have 4 cores. But my 2600 has the advantage of hyper-threading.
Hyper-threading has caused huge vulnerabilities in the last two years, and it gives you very little extra performance.
 
That's just ridiculous. My system consumes about 230 watts under full load, which is maybe 3-4x more.
For what a CPU benchmark is worth: this thing is essentially an iPad Pro in a Mac mini enclosure, even underclocked!
To compare power draw: it's a device that runs happily on a few watts, not hundreds.

What this benchmark shows, however, is how efficient the Rosetta layer is: less than 30% overhead is VERY impressive.
And that's encouraging. The released system will have nothing in common with this rig, which is just designed to let developers (the only ones getting one, on loan) port their software to the native ARM CPU so it'll run at full speed once the new hardware is being sold.
 
The A12Z is a 7W chip designed for a tablet, slightly underclocked here (compared to the iPad Pro), yet it's almost there in single and multi-core compared with the 28W range of i5s designed for laptops (which are far less thermally and energy constrained). Keep in mind the A12Z CPU is basically the same as 2018's A12X, so we are talking about a two-year-old tablet chip if we only look at CPU benchmarks.

So knowing all this, how is that underwhelming?
And it's running the benchmark under emulation on top of all that!
 


While the terms and conditions for Apple's new "Developer Transition Kit" forbid developers from running benchmarks on the modified Mac mini with an A12Z chip, it appears that results are beginning to surface anyhow.

[Image: Apple Developer Transition Kit box]

Image Credit: Radek Pietruszewski

Geekbench results uploaded so far suggest that the A12Z-based Mac mini has average single-core and multi-core scores of 811 and 2,781 respectively. Keep in mind that Geekbench is running through Apple's translation layer Rosetta 2, so an impact on performance is to be expected. Apple also appears to be slightly underclocking the A12Z chip in the Mac mini to 2.4GHz versus nearly 2.5GHz in the latest iPad Pro models.

[Image: Geekbench results for the A12Z Mac mini running under Rosetta 2]

It's also worth noting that Rosetta 2 appears to only use the A12Z chip's four "performance" cores and not its four "efficiency" cores.

By comparison, iPad Pro models with the A12Z chip have average single-core and multi-core scores of 1,118 and 4,625 respectively. This is native performance, of course, based on Arm architecture.
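For anyone who wants to sanity-check the sub-30% overhead figure mentioned earlier in the thread, the arithmetic from these two sets of averages is straightforward. A rough sketch; note the multi-core gap also folds in the idle efficiency cores and the lower clock, so it overstates pure translation cost:

```python
# Rough Rosetta 2 overhead computed from the Geekbench averages above.
native = {"single": 1118, "multi": 4625}   # iPad Pro A12Z, native Arm
rosetta = {"single": 811, "multi": 2781}   # DTK Mac mini, via Rosetta 2

for kind in ("single", "multi"):
    overhead = 1 - rosetta[kind] / native[kind]
    print(f"{kind}-core overhead: {overhead:.0%}")

# Output: single-core overhead: 27%, multi-core overhead: 40%.
# The multi-core figure mixes in the four unused efficiency cores and the
# 2.4 vs 2.5 GHz clock, so single-core is the fairer translation estimate.
```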


Article Link: Rosetta 2 Benchmarks Surface From Mac Mini With A12Z Chip
I can understand not using the "efficiency" versus the "performance" cores, but why would Apple underclock them in a Mini compared to an iPad Pro? There's more room in the Mini enclosure for airflow, so there ought not to be a thermal reason, and it's always plugged into a wall socket, so saving power ought not to be an issue either.
 
I'm not doubting you but, rather, just trying to educate myself. Did Apple confirm that they don't intend to launch the prior A12Z chip in future Macs but, instead, a new range that has not been used in the iPad/iPhone?

They didn't give much away; however, the implication was that new dedicated desktop chips are in production, and it makes complete sense that the chips we'll see will be different from those used in mobile devices.
 
I can understand not using the "efficiency" versus the "performance" cores, but why would Apple underclock them in a Mini compared to an iPad Pro? There's more room in the Mini enclosure for airflow, so there ought not to be a thermal reason, and it's always plugged into a wall socket, so saving power ought not to be an issue either.
It's a dev kit. The performance is largely irrelevant as long as it lets dev test their apps.
 
They didn't give much away; however, the implication was that new dedicated desktop chips are in production, and it makes complete sense that the chips we'll see will be different from those used in mobile devices.
The chip lead (whatever the title is) explicitly stated in his short Platform State of the Union appearance that a whole new family of chips designed specifically for the Macs is forthcoming. I think it starts 8 minutes in.
 
Did people look at the Pentium Geekbench scores in the SDK during the Intel transition? How much faster was the first Mac that shipped with an actual Core 2 Duo CPU compared to the SDK? :)
 
They didn't give much away; however, the implication was that new dedicated desktop chips are in production, and it makes complete sense that the chips we'll see will be different from those used in mobile devices.

No, Craig outright said in this interview with Gruber that that chip will not be used, and there is something else coming.
 
Not really. Intel has been offering mild performance increases at crazy costs in heat and power efficiency for a while now. This is a solution to a very real problem.

Architectural differences aside (and they are quite huge), that's what a worse node does, whether it's 14nm or a 10nm process performing worse than their super-refined 14nm.
As Intel, you are behind competitors (mainly AMD) who have a non-monolithic, highly efficient architecture and a node lead (7nm/7nm+), so to be able to compete you just push the frequency as high as you can. But power doesn't scale linearly with frequency; it grows roughly quadratically, because higher clocks also need higher voltage and power scales with voltage squared, so a little more frequency costs a lot more watts. Ironically, your already inefficient CPU forces you to make it even more inefficient to stay competitive, and you end up with a toaster.
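The relationship described above is the classic dynamic-power rule of thumb, P ≈ C·V²·f: raising frequency usually also requires raising voltage, so power climbs superlinearly. A tiny illustration with made-up numbers (the frequencies, voltages, and wattage here are hypothetical, not measured Intel figures):

```python
# Dynamic CPU power scales roughly as P ∝ C * V^2 * f. Hypothetical numbers:
base_f, base_v, base_p = 4.0, 1.20, 95.0   # GHz, volts, watts at stock
boost_f, boost_v = 5.0, 1.40               # aggressive boost needs more voltage

# P2 / P1 = (V2 / V1)^2 * (f2 / f1); the capacitance term C cancels out.
boost_p = base_p * (boost_v / base_v) ** 2 * (boost_f / base_f)
print(f"+{boost_f / base_f - 1:.0%} frequency -> +{boost_p / base_p - 1:.0%} power")
# +25% frequency -> +70% power: the "toaster" effect in one line.
```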
 
Apple hasn't confirmed anything.

Currently Apple sells Macs with 2, 4, 6, 8, and 12 to 28 cores. Common sense says there won't be _one_ ARM chip for all Macs. Common sense says the chips going into the lowest-end Macs (replacing two cores) will be whatever mobile chip Apple has in six months' time, clocked to the maximum the chip can handle. That will be a huge improvement for dual-core Macs even with Rosetta. Take these benchmarks, add 5% for an improved chip, 5% for an improved Rosetta, 40% for running at 3.5 GHz, and another 50% for running native ARM code.

Common sense also says that Apple will package two or four of these chips into one package, at which point we will have a huge improvement for all the Macs with four to eight cores. We will have a huge improvement running x86 code through Rosetta, and native code will fly.
I think the lowest core count with Apple silicon will be 10-12 on a MacBook Air, and Mac Pros will have 80+ cores. Some of these cores might be efficiency cores.
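Out of curiosity, compounding the earlier poster's back-of-the-envelope factors onto the DTK's translated single-core score of 811 gives a feel for the headroom being claimed (every factor here is that poster's guess, not a measurement):

```python
# Compound the guessed improvement factors onto today's Rosetta score.
score = 811  # DTK single-core Geekbench score under Rosetta 2
for label, gain in [("improved chip", 1.05), ("improved Rosetta", 1.05),
                    ("3.5 GHz clock", 1.40), ("native ARM code", 1.50)]:
    score *= gain
    print(f"after {label}: {score:.0f}")
# Ends around 1878, i.e. roughly 2.3x the DTK's translated score.
```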
 
Why would you run a dev kit that hides the potential of the chips? Give the power of the chip to the devs. If they don't see the potential, they won't put in the effort.
 
1) Down-clocked below the iPad Pro
2) Running the benchmark in Rosetta
3) Only using 4 out of 8 cores, for some reason
4) Not the chip that will be used in Macs

These benchmarks mean absolutely nothing.

I'd say that it's at least possible that these benchmarks provide a close approximation of how non-native apps will perform (running in Rosetta).

If nothing else, this should add to the pressure from customers on developers to deliver native apps asap. I know two programs that I use daily that are not likely to be updated in a timely manner, based on the transition from PPC to Intel, so I welcome any additional pressure for them to update.

I'm not worried, as I've already budgeted the purchase of at least one "backup" Intel system to provide the best possible performance with those important apps that I doubt will make the transition to native in a timely manner, if at all.

I'm just sitting back and enjoying the show at this point.
 