The move away from Intel is a solution in search of a problem to solve.
The problem is how many times Intel has underperformed and under-delivered on their roadmap. If a partner tells you they will deliver a core component to spec, and then you have to make 11th-hour changes to product designs to accommodate parts that didn't meet spec — and that happens enough times (iMac, MBP, Mac Pro, MacBook — multiple times in the last 3-5 years) — I would look for alternatives too. It was Jobs's mantra that if you can do the whole thing in-house rather than rely on a partner to deliver the core experience, you bring it in-house. The A-series chips were always a plan B, and that was obvious to anyone with foresight.
 
It’s in Apple’s interests to prod developers to optimize more than is likely required, especially for when this all launches.

Well, in that case they wouldn't have launched Rosetta 2, right? It does some of the work instead of the developers.
 
Apple needs to do better than this for desktop performance. I am a little scared now.

EDIT: I retract this statement due to all the negative reactions. But Apple still needs to do better than this for desktop performance. I am not scared, though.

I don't think the comments were negative reactions; most of us agree that Apple has to do better in real-world desktop performance. In my case, I just couldn't understand why you were scared.

The results are based on a developer kit that doesn't exist as a consumer product - it's an iPad's chip shoved into a Mac mini with more RAM, pretending to be an Intel Mac. Geekbench has to run translated, so the results are naturally going to be worse. For now I'd say the real results are on par with a 2020 iPad Pro - so more powerful than a lot of machines out there today.

As a Mac user who remembers the transitions from Motorola 68000 to PowerPC to Intel, trust me when I say these machines are going to be amazing.
 
Sure, got the terminology wrong. It's called Heterogeneous multi-processing apparently.

What I was trying to get at is that the cores used are likely the high-performance ones. The marginal benefit of using low-power cores will not result in a doubling.

They may or may not. Certainly at least 50%. It depends on the instruction stream - a lot of instructions will run more or less equally fast on a low-power or high-performance core. A lot won’t. And we don’t know much about how the little cores behave in an environment where they don’t need to be throttled for thermal reasons.
 
Sure, got the terminology wrong. It's called Heterogeneous multi-processing apparently.

What I was trying to get at is that the cores used are likely the high-performance ones. The marginal benefit of using low-power cores will not result in a doubling.

If you had read beyond the first sentence, you would have seen that my point wasn't the nomenclature, but that those techniques are not limited to running on either high-performance or low-power cores. They can be combined, depending on whether the scheduler decides that it'll help.
 
At the cost of breaking compatibility down the entirety of their product line, losing dual-boot, and hacking off everyone that has bought a machine that will now be completely useless when the transition completes in 2 years.
Well, I don’t know how to address concerns that are based on the willful misunderstanding of “completely useless when the transition completes in 2 years”.

Why do so many people here hold this opinion that is completely unfounded? It’s just as bad as the “my two month old computer is now obsolete because a new model has come out, thanks apple 🤬” posts.
 
Why bother building Rosetta to work with low power cores if you don’t plan to use them in the released version of Apple Silicon? I’m thinking these low power cores won’t be part of Apple’s laptop/desktop chips.

That is possible, especially for desktop chips.
 
At the cost of breaking compatibility down the entirety of their product line,
How so?

losing dual-boot,
This really sucks for the people affected, but it's such a small group that I think Apple will be fine with it.

and hacking off everyone that has bought a machine that will now be completely useless when the transition completes in 2 years.
What are you talking about? Intel machines will still continue to run 2 years from now, and Apple has hinted strongly that they'll continue supporting them long after the ARM transition is complete.
 
At the cost of breaking compatibility down the entirety of their product line, losing dual-boot, and hacking off everyone that has bought a machine that will now be completely useless when the transition completes in 2 years.
Why's it going to be useless in 2 years, can't you dual-boot into Windows? ;)

Seriously though, this isn't Apple's first or second chip transition and they've been killing it with the iPad/iPhone chips. If they had small plans, they could have presumably gone with AMD.
 
The difference is that “emulation,” in the context being discussed here, implies there is some “emulator app” that is running in the background, doing things, and it can only access 4 cores (or whatever). Not at all how it works. And the JIT-aspect appears to be very rare in Rosetta 2, and is only necessary when doing some very quirky stuff (because, for example, Intel allows writeable code pages, so code can actually be modified on-the-fly). In most situations it’s just a static one-time translation.

OK, but if there isn't a background task at runtime, then the 30% overhead kind of seems like a lot? Is it some kind of register misalignment?
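The static-versus-JIT distinction above can be sketched in a few lines. This is a purely conceptual model (the caching scheme and "translator" are invented for illustration, not how Rosetta 2 is implemented): translate a block of code once and reuse the result, which works fine until the program rewrites its own code, at which point a fresh, runtime (JIT-style) translation is forced.

```python
# Conceptual sketch only: why ahead-of-time translation suffices for
# ordinary code, but self-modifying code (writable code pages on x86)
# forces a JIT-style fallback. Not Rosetta 2's actual mechanism.

translation_cache = {}

def translate(code_bytes: bytes) -> str:
    """Pretend x86 -> ARM translation, cached by the exact code bytes."""
    if code_bytes not in translation_cache:
        translation_cache[code_bytes] = f"arm({code_bytes.hex()})"
    return translation_cache[code_bytes]

# Static case: the code never changes, so one translation is reused.
static_code = b"\x90\x90\xc3"
first = translate(static_code)
assert translate(static_code) is first  # cache hit, no runtime work

# Self-modifying case: the code bytes were rewritten at runtime, so the
# cached result no longer applies and a new translation must be made.
patched_code = b"\x90\x91\xc3"
assert translate(patched_code) != first
```

The second lookup missing the cache is the JIT path the post describes as rare: it only happens when the code itself changes under the translator's feet.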
 
I think many people rightly noted that these results are meaningless given that this is not a product meant for the market.

But just out of curiosity, what are the regular benchmarks of MacBooks and iMacs again?
 
I knew this was going to happen, but I wish it didn’t. Now, for however many months until the first Apple Silicon Mac is released, we’re going to hear about how Apple is going to fail.
We will see people mocking this old underclocked iPad chip running emulation for not being as fast as the Core i9 in their neon-lit PC tower. :D
 
I’m thinking these low power cores won’t be part of Apple’s laptop/desktop chips.

Maybe not on the desktops, but for a laptop those would make perfect sense.

"Always on" with days of battery life, wake it up and all your files have been synced with iCloud, Mail is up2date etc etc.
 
Just 30% is blowing my mind, usually you're in the range of 50 to 100% overhead for x86 emulation on ARM.

30% slower is just under 50% overhead.

But, this is also a bit of a worst case scenario. Apps don’t sit there churning tight CPU code all the time. So real apps sitting on AVFoundation, Metal, CoreAnimation, AppKit/UIKit, etc will get better perf the more they lean on the system frameworks which will run natively.
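The "30% slower is just under 50% overhead" arithmetic works out as follows: running at 70% of native speed means the same job takes 1/0.7 of the time.

```python
# If a translated binary runs 30% slower, it does 70% as much work per
# unit time, so an identical workload takes 1 / 0.7 of the native time.
native_time = 1.0
translated_time = native_time / 0.7   # ~1.43x the native time

overhead = translated_time / native_time - 1
print(f"{overhead:.0%}")  # 43% extra time: "just under 50% overhead"
```

So a 30% score drop and a ~43% time overhead are the same measurement expressed two ways.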

Maybe not on the desktops, but for a laptop those would make perfect sense.

"Always on" with days of battery life, wake it up and all your files have been synced with iCloud, Mail is up2date etc etc.

Even on desktops it makes sense if you can save power while sleeping, and still have the ability to do system upkeep and pull down network data so that everything is up to date when you sit down. The Mac Pro idles at 30W to do this today. With the efficiency cores it could probably do it in under 5W.

I fully expect the efficiency cores to still be there on the larger chips. That’s where the biggest gains are.
 
All the people saying this is useless are not thinking long term. This is very useful: we currently see a ~27% drop in single core performance when running through Rosetta. Not surprising, but now we have a solid metric to compare to future developments. If we see this figure decrease in the future, we’ll have higher confidence that the move to ARM will work well for programs that won’t be refactored to take advantage of whatever native instruction sets will be included with Apple silicon.
 
I'm not surprised at the high performance on Rosetta.


The API calls haven't changed, and the APIs are all native. Most application execution time really is inside of system frameworks - everything from graphics to UI to audio to disk access.... Connecting screens together and processing click events uses very little application code. Those frameworks are already running optimized and native for Mac. Unless your x86 code ran in complex computational loops and barely ever made a system call, Rosetta should perform very well. It all depends on the workload, but most workloads should do well.
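The effect of native frameworks can be put into numbers with an Amdahl's-law-style weighting. The 80/20 split below is a hypothetical illustration, not a measurement; the ~1.43x factor is the translated-code overhead implied by a 30% slowdown.

```python
# Hypothetical split (illustrative numbers, not measured): an app spends
# 80% of its time inside native system frameworks and 20% in its own
# translated x86 code, which takes ~1.43x as long as native.
framework_fraction = 0.80    # runs native, no translation penalty
app_code_fraction = 0.20     # runs translated
translation_factor = 1.43    # translated code takes 1.43x as long

total = framework_fraction * 1.0 + app_code_fraction * translation_factor
print(f"overall slowdown: {total:.3f}x")  # ~1.086x, i.e. ~9% slower
```

This is why a benchmark that churns tight CPU loops (almost no framework time) shows close to the full translation penalty, while a typical app leaning on AVFoundation or Metal shows far less.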
 
Just for fun, I'm writing this on a 2017 MacBook Air, which has been my daily driver for the last 3 years. Single core is 667, and multicore is 1372, according to Mactracker.

The closest reference machine for this new ARM Mac is a 2016 MacBook Pro 15" (about 800 single-core and 3255 multi-core), per Mactracker.

I don't think that's too bad at all for using an outdated chip on an emulator.
 
Well, I don’t know how to address concerns that are based on the willful misunderstanding of “completely useless when the transition completes in 2 years”.

Why do so many people here hold this opinion that is completely unfounded? It’s just as bad as the “my two month old computer is now obsolete because a new model has come out, thanks apple 🤬” posts.

Your machine isn't useless because a new model using the same basic architecture comes out. You might not enjoy performance benefits of a new machine, but you can still take advantage of new software and software / OS updates that get released.

When they change the architecture fundamentally, it closes off all future software from your machine from the moment the transition completes.

This was how it was during the last PPC to Intel transition. Support for the PPC Macs dropped like a stone.
 
If it is a one-time translation, then it is fairly meaningless that the scores are less than people expected. I have no idea why anyone had any expectations other than "I hope it works OK." Translations like Rosetta's are not optimized and cannot compete with a natively compiled application, and never will be able to. If it were easy and efficient to do this, no one would need to build for a specific platform. We do have something like that, though, and it is the piece of crap known as Java. The phrase I use is "smells like Java" when I come across an unusual interface or weird behavior, and I'm right most of the time.

Wait for native applications running on the released processors to see what we will get. I am optimistic; they would never attempt this if they thought the move would be questionable. I do expect emulators like Parallels to be a lot less great than they are running on Intel.
 
There is no emulation. Rosetta uses translation. It does a one-time translation either at app installation or when the app runs the first time. (There are some specific rare exceptions to this that we don’t need to get into).

So what happens is you now have an ARM binary. To the thread scheduler it looks no different than any other binary. So it’s not like there is some emulation process running where the process is locked to 4 cores or anything.

Okay, Geekbench likely uses the "install-time" translation plus the special-case "dynamic" translation. It would be interesting to see the iOS version of Geekbench for comparison on the same machine.

Additionally, there is no work to be done to support the two types of cores - they handle the same opcodes. The thread scheduler just has to decide on which core to execute which thread, and it can move them at will without compatibility issues.
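That "same opcodes, scheduler just picks a core" point can be modeled with a toy placement function. This is a deliberately simplified sketch (the QoS names echo Apple's Dispatch QoS classes, but the policy here is invented for illustration):

```python
# Toy model of heterogeneous (big.LITTLE-style) scheduling. Both core
# types execute the same instruction set, so any thread can run on
# either kind; a priority/QoS hint only steers the speed/power
# trade-off. Purely illustrative, not Apple's actual scheduler.

def pick_core(qos: str) -> str:
    # Latency-sensitive work goes to performance cores; background
    # work goes to efficiency cores. Either placement is always
    # functionally correct because the ISA is identical on both.
    if qos in ("user-interactive", "user-initiated"):
        return "performance"
    return "efficiency"

assert pick_core("user-interactive") == "performance"
assert pick_core("background") == "efficiency"
# A running thread can later migrate to the other core type with no
# compatibility issue, since both cores accept the same binary code.
```

The key property is in the comments: placement affects only speed and power, never correctness, which is why no Rosetta-specific work is needed to support the two core types.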

Do you have info on relative performance of the two types of cores?
 