As a tech enthusiast but someone who doesn't necessarily understand the finer side of this kind of thing, can anyone confirm if my understanding is correct—the M1 in the lowest type of configuration (a MacBook Air), emulating Intel-based macOS applications, performs better than the best high-end Intel iMac running x86 natively?

Is there a way to see how the iMac Pro and Mac Pro perform compared to the computers in the graphic?

If I am reading the Geekbench numbers right, the M1 MacBook Pro is faster than both the Mac Pro and iMac Pro on single core tasks! You mean to tell me that the M1 MacBook Pro is faster than my $7500 Mac Pro? 😲
 
Yes, we are sure.

Parallelizing single threaded code automatically is one of the hardest computer science problems. No way Rosetta is doing that.
Yes, it is absolutely a very hard problem, but not impossible.

Modern CPUs devote a lot of engineering and silicon to extract as much parallelism as possible from single-threaded code. Given its performance, the M1 probably devotes even more to it than other CPUs. However, something like Rosetta 2 potentially has even more information to use in making decisions about dependencies, out of order and speculative execution, etc. It would not be surprising if they are trying to extract even more parallelism at that layer.
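To make that concrete, here's a rough C sketch (nothing Rosetta- or Apple-specific, just a made-up micro-example with invented function names) of the kind of single-threaded parallelism being talked about: a serial dependency chain leaves a wide out-of-order core mostly idle, while independent operations let it keep several instructions in flight per cycle, all without any extra threads.

/* Illustration only: the kind of instruction-level parallelism a wide
 * out-of-order core (or a binary translator scheduling translated code)
 * can exploit. Not Apple's code. */
#include <stdio.h>

#define N 1000000

/* Serial dependency chain: each add must wait for the previous one,
 * so the core's extra execution units sit idle. */
static double sum_chained(const double *a) {
    double s = 0.0;
    for (int i = 0; i < N; i++)
        s += a[i];            /* s depends on the previous iteration */
    return s;
}

/* Four independent accumulators: the adds have no dependency on each
 * other, so an out-of-order core can keep several in flight per cycle,
 * still on a single thread. */
static double sum_unrolled(const double *a) {
    double s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    for (int i = 0; i < N; i += 4) {
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    return s0 + s1 + s2 + s3;
}

int main(void) {
    static double a[N];
    for (int i = 0; i < N; i++) a[i] = 1.0;
    printf("%f %f\n", sum_chained(a), sum_unrolled(a));
    return 0;
}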
 
Something interesting to me is that all the Intel processors score about evenly regardless of generation or clock speed. A 27" iMac with an i9 @ 3.6 GHz performs about the same as a MacBook Pro with an i7 @ 2.3 GHz. There might be a software limitation with the different silicon. I'd like to see performance comparisons with Adobe products or something to really understand what's happening.
 
Yes, but the issue here is that you are looking at the CPU only. Apple is delivering a package. Now, you may or may not have experience in this field, and as it's the internet anyone can claim to be anything, but you are narrowing the focus onto the CPU design, the specific core. The fact of the matter is that a computer is the sum of its parts, which you should well know. Apple packaging everything together yields efficiencies that other chips do not have and that are outside the norms of computer design. Perhaps I should have said it that way rather than give the impression I was talking merely about CPU core design. This influences benchmarks. Whether just the CPU is similar or not isn't really the point; it's the bigger picture.

Also, not to be harsh, but UltraSPARC V was cancelled. Are there shipping processors with that sort of design? It may have been planned to be similar, but if it never shipped it doesn't show that design shifted that way at all. Unless, of course, features of it were incorporated into other designs; for that I would have to defer to you.

The UltraSPARC V I worked on was actually sold as UltraSPARC IV. The code name was Millennium. And both the Athlon 64 and Opteron also had deep reorder buffers.

And there is nothing unusual about Apple's SoC. There are LOTS of competing x86 processors with integrated GPUs, encryption units, and secure cores. Of course they are all different, but there is nothing about the M1 that is "outside the mainstream" of CPU architecture, which is what the claim was.

And as for your accusation that I may be faking who I am, I’ve been here for a VERY long time, and my user name corresponds to this guy, so either I’m him or this is a very long con I’m playing:


etc. Etc.
 
As a tech enthusiast but someone who doesn't necessarily understand the finer side of this kind of thing, can anyone confirm if my understanding is correct—the M1 in the lowest type of configuration (a MacBook Air), emulating Intel-based macOS applications, performs better than the best high-end Intel iMac running x86 natively?

Is there a way to see how the iMac Pro and Mac Pro perform compared to the computers in the graphic?
Yes, in single-threaded performance, that is, using one core only, it beats any Intel system that uses only one core. Since it has four fast cores, you would expect it to beat any Intel system with four cores in multi-threaded performance, using all cores, and it does. Under emulation it doesn't beat 6-core Intel systems in multi-threaded performance (the native benchmark does), and it doesn't come close to systems with 8-28 cores (the native benchmark beats older 8-core systems but not the latest ones), but it _is_ the lowest configuration, with only four fast cores.

So what happens when you run a multi-threaded program like FFmpeg?
It will use as many threads as possible, and will run faster than an Intel quad core CPU.

But for many video applications Apple has heavily optimised libraries. So a video application using those libraries would automatically run Apple's native ARM code for that work, rather than translating Apple's Intel code.
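For anyone wondering what "use as many threads as possible" looks like in practice, here's a minimal C/pthreads sketch (not FFmpeg's actual code; the worker function and names are made up) of a program sizing its worker pool to however many cores the machine reports:

/* Minimal sketch: a multi-threaded program spawning one worker per
 * available core. Not FFmpeg's implementation, just the general idea. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define MAX_THREADS 64

static void *encode_chunk(void *arg) {
    long id = (long)arg;
    /* ... per-thread share of the work would go here ... */
    printf("worker %ld running\n", id);
    return NULL;
}

int main(void) {
    long ncores = sysconf(_SC_NPROCESSORS_ONLN);  /* logical cores available */
    if (ncores < 1) ncores = 1;
    if (ncores > MAX_THREADS) ncores = MAX_THREADS;
    pthread_t workers[MAX_THREADS];

    for (long i = 0; i < ncores; i++)
        pthread_create(&workers[i], NULL, encode_chunk, (void *)i);
    for (long i = 0; i < ncores; i++)
        pthread_join(workers[i], NULL);
    return 0;
}

The point is simply that a program written this way picks up whatever cores the machine offers, whether that's a quad-core Intel chip or the M1.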
 
Yes, it is absolutely a very hard problem, but not impossible.

Modern CPUs devote a lot of engineering and silicon to extract as much parallelism as possible from single-threaded code. Given its performance, the M1 probably devotes even more to it than other CPUs. However, something like Rosetta 2 potentially has even more information to use in making decisions about dependencies, out of order and speculative execution, etc. It would not be surprising if they are trying to extract even more parallelism at that layer.

This is ridiculous. There is no way that Rosetta is taking a single thread of x86 code and splitting it into multiple parallel Arm threads. Stop it.
 
If I get an ARM Mac it will be next year or the year after
16 GB RAM only option. NOT GOOD. NOT ENOUGH
By next year all the problems and defects will be worked out.
Why is it "NOT GOOD. NOT ENOUGH"? Not enough for what? 8GB is plenty for the majority of common IT tasks, and 16GB is still the maximum RAM in a lot of consumer laptops.

You can't say whether 16GB is sufficient or not unless you specify your usage. For the vast majority of people who would buy these computers, it will be enough. Maybe not for you, but you didn't say what you need to do.

What "problems and defects" are that need to be worked out? So far, we don't know of any. There may well be problems, but until they are shown to exist, they do not need to be fixed.

I have to say that I am mightily sick of the modern tendency for people to quote opinion and subjectivity as though it were fact. It's rife within society, and has been greatly amplified by Internet and social media use.

You need to prefix every single statement with "I believe", "in my opinion", or "I think", and then PUT FORWARD YOUR ARGUMENT FOR YOUR BELIEF.

Is this basic mental discipline no longer taught in schools????
 
If you need Windows, then you will eventually likely need a Windows machine. Luckily for Apple, only 1% of users use Boot Camp, and something like 5% use VMs, so even if they lose those customers, they will more than make up for it with new buyers who want to run iOS software on their laptop or desktop.
I asked the guys at my company writing our server software, and at the moment I'm told everything we use except one application could be run on an ARM VM. Not that it matters, because iOS and Android developers don't run those VMs on their development machines.
 
You know that "VirtualApple" result is not anymore on Geekbench website, right?

Was it fake or unintentional mistake IDK, but in any case the test result and "news" are not valid any more
 
Small detail: compiling with Rosetta doesn't throw the old Intel code away. Apple fully intends to ship future macOS versions with an improved version of Rosetta which will recompile your code.
Yep, I was thinking that as well. Particularly if some recalcitrant developer lags in providing a native binary, Apple may be motivated to look into how to make Rosetta handle that app better. But, of course, they don't want to make Rosetta TOO good, because that just discourages developers from providing native code.
 
Without a full version of the Windows OS for ARM (which MS has no financial incentive to produce beyond Windows for their tablets), Apple silicon Macs don't seem appealing.
Why do you buy Macs at all then? Buying Macs to run Windows (or based on spending a lot of time in Windows on it) is just getting the worst of both worlds imho.

As I see it... unless you have a very specific niche they're very appealing - they run MS office well enough, they run cloud apps as well or better than any other platform, they run native macOS applications which are generally better than Windows apps at what they do.

Windows? Personally I don't want Windows. I don't need Windows. Given I don't need/want Windows, the realistic choices right now are Linux or macOS, and macOS is nicer. Couple that with 2x battery life and 3x processing speed in a portable... it's a no-brainer imho. Not running Windows is a feature.

Even Microsoft is supporting other platforms better than Windows these days - iPad Office 365 is actually touch friendly - runs way better than anything you can get for the MS Surface.

Hopefully these speed improvements will get more developers to jump ship. Even MS are more of an application/cloud company now - don't be surprised to see them pushing "better than PC" apps on the Mac if these chips are as good as they appear. It's just another application platform for MS now - one they don't even have to maintain.
 
The UltraSPARC V I worked on was actually sold as UltraSPARC IV. The code name was Millennium. And both the Athlon 64 and Opteron also had deep reorder buffers.

And there is nothing unusual about Apple's SoC. There are LOTS of competing x86 processors with integrated GPUs, encryption units, and secure cores. Of course they are all different, but there is nothing about the M1 that is "outside the mainstream" of CPU architecture, which is what the claim was.

And as for your accusation that I may be faking who I am, I’ve been here for a VERY long time, and my user name corresponds to this guy, so either I’m him or this is a very long con I’m playing:


etc. Etc.
It wasn't an accusation, so check your statement please. It was a statement of fact that people online may not be who they say they are. Posting a link doesn't matter. There are people in the world who spend their lives pretending to be someone they are not. Therefore your comments regarding who you are fall on deaf ears because, as stated, there is insufficient proof other than that you may have convinced others you are who you say you are. You seem to have taken offence at this. Don't. It's a fact of the online world. I don't believe what most say online unless it can be independently verified by a trusted third party, and I expect others to view anything I say in the same light.

I will note that, on the count of refuting arguments, other than vague hand-waving you haven't really identified how other processors are all the same and following the same design path. You did state knowledge a year ago of what Apple was going to do, but chip design is far more long-term than that, so surely you knew years ago? It's also of note that other CPU designers are expressing some surprise at the design and apparent performance of the M1. This doesn't really indicate that it's the normal path of chip design.
 
That is super zippy for entry level machines.

Those scores don’t mean I can load up one of my 50-60 layer After Effects projects though. Even a dedicated ultra fast desktop with an RTX card struggles with this load. Lots of RAM and a discrete GPU are the only choice until SOCs take a much more gigantic leap forward.
The integrated GPU in the M1 is pretty impressive as well.

 
How the heck is Apple so far ahead in performance? It's incredible how much of a lead they have; it's like alien technology.
Intel has the anchor of backwards compatibility with code going back to the 1978 8086 CPU. Then there are decades of cruft on top of an operating system called Windows, which has backwards compatibility with 16-bit Windows 3.1 code. All this adds up to massive efficiency issues.

Until Microsoft and Intel partner together to work on a new, from-scratch 21st-century hardware/software architecture, you will see little progress.
 
Well, that's just ridiculous. Not exactly surprising given what the last generation of Rosetta was capable of and the general performance numbers coming out of the M1, but still ridiculous.

Nitpick with the headline "Apple Silicon M1 Emulating x86 is Still Faster Than Every Other Mac in Single Core Benchmark", though:

Rosetta is not an emulator, at least in the standard technical use of the term. This kind of performance out of an emulator would be mind-boggling. Rosetta is a dynamic binary translator, which is a big reason the performance hit is so minor, but also why it only works on certain things.
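For anyone unclear on the distinction, here's a toy C sketch (an invented two-opcode "guest" ISA, nothing to do with how Rosetta is actually implemented) contrasting an emulator-style interpreter, which pays decode-and-dispatch overhead on every instruction every time it runs, with a translator, which converts the guest code once and then runs a native routine at full speed:

/* Toy illustration (not how Rosetta actually works) of interpretation
 * versus translation. The "guest" ISA is a made-up two-opcode bytecode. */
#include <stdio.h>

enum { OP_ADD, OP_MUL, OP_END };

/* Emulator-style: fetch, decode and dispatch every guest instruction,
 * every time it executes. The dispatch overhead is paid on each pass. */
static long interpret(const int *code, long x) {
    for (int pc = 0; code[pc] != OP_END; pc += 2) {
        switch (code[pc]) {
        case OP_ADD: x += code[pc + 1]; break;
        case OP_MUL: x *= code[pc + 1]; break;
        }
    }
    return x;
}

/* Translator-style: walk the guest code once, emit an equivalent host
 * routine (here a plain C function standing in for generated machine
 * code), then run it at native speed with no per-instruction dispatch. */
static long translated_body(long x) { return (x + 2) * 3; }

typedef long (*host_fn)(long);

static host_fn translate(const int *code) {
    (void)code;               /* a real translator would emit host code here */
    return translated_body;   /* stand-in for the generated routine */
}

int main(void) {
    const int guest[] = { OP_ADD, 2, OP_MUL, 3, OP_END };
    printf("interpreted: %ld\n", interpret(guest, 10));
    host_fn fn = translate(guest);
    printf("translated:  %ld\n", fn(10));
    return 0;
}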

I'd still be a little wary. Although the silicon appears impressive, we don't know how load over time is dealt with. The architecture of the chip is so different from the normal train of thought (when it comes to chip design) that it may possibly be that some quirk of the M1 layout makes it better at performing with these short workloads. Not saying that the performance isn't real, just don't assume that, because a handful of Geekbench runs show it as powerful, it actually is.
Those who remember the G4/G5 days know what I mean. Apple was good at demonstrating superiority that actual users were often unable to reproduce in normal workloads. I'd wait for actual reviews from trusted sources before buying into the hype.
I remember the G4/G5 days quite well--before things started to stagnate later in the PPC era, I remember running a parallelized scientific data-processing application I built in a cross-platform environment on one of our x86 Windows machines and one of our shiny new PPC Macs, and the Mac completed the data analysis so much faster that I at first thought I had done something wrong.

That didn't necessarily carry through the entire PPC era, and I won't argue that the PPC's best performance areas weren't necessarily consumer-oriented, nor did they carry through toward the end of the era. But I've seen nothing in the past several years to indicate that the extreme difference between A-series benchmarks and Android CPU benchmarks is inaccurate in real-world use, so I don't see why the case would be any different with M1 versus x86.

I mean, I can go to Browserbench.org and run the exact same (single-core) benchmark on the exact same browser on my phone and my 15" i9 laptop with active cooling, hear the fan ramp up on my laptop, watch both do the same thing, and watch my phone outperform my laptop by a factor of 1.5 to 2. The i9-8950HK does outperform the A14 in a couple of specific tests, but the edge goes to the A-series in most cases, sometimes by a huge margin.

More importantly, and very differently from the PPC, Apple has been designing their chips for a single, consumer-use platform for a decade now, so if anything I'd expect them to perform better in some real world use cases than more general-purpose Intel chips. The chip design literally has one customer, and that customer only makes a handful of products, most of which have a similar target market.
 
I'm still running a mid-2013 MacBook Air and am keen to update but I've held off knowing that the transition to Apple silicon was coming. That appears to have been a wise choice as these numbers are looking great, and that's coupled with the improved battery life. However, as great as these numbers are, I'm going to wait until the rumoured 14" MacBook Pro drops. I think Apple has just used the 2020 Air and 13" Pro, in which the Apple SoC is the only meaningful change, as a platform to demonstrate the performance gains as a result of the M1 versus the same setup on Intel. As such, I'll wait for the more exciting refresh, which will hopefully drop before mid-2021. After all, I've already persevered with my 2013 MBA for a long time, so what's another 6 months or so of waiting for a device that I expect to hold onto for another 6-7 years.
 
Intel has the anchor of backwards compatibility with code going back to the 1978 8086 CPU. Then there are decades of cruft on top of an operating system called Windows, which has backwards compatibility with 16-bit Windows 3.1 code. All this adds up to massive efficiency issues.

Until Microsoft and Intel partner together to work on a new, from-scratch 21st-century hardware/software architecture, you will see little progress.
Yeah, and it's tough to drop any sort of backward compatibility if you are the majority. I mean, we can read how many people are still unwilling to upgrade from Windows 7. Even now Intel is forcing some people not to use an old OS by requiring Windows 10 on some newer CPUs.

But Intel's entrance into discrete GPUs might at least give a glimpse that they finally care about GPUs. A bit late, of course.
 
It's not really that hard. All modern CPUs do out of order execution. If you're already executing stuff in the "wrong" order, then you can do it concurrently.
You are giving me a headache. With out-of-order execution, everything runs on _one_ core, and that core guarantees that the results are the same. The hardware can pick the next instructions that can be executed within 330 or so picoseconds out of a few hundred instructions that are waiting. With multiple threads, that's absolutely impossible.
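As a hypothetical illustration of why that is: out-of-order hardware can only overlap work whose single-threaded results are provably unchanged, so a loop-carried dependency like the one below can't simply be split across threads by Rosetta or by the hardware, no matter how many cores are free.

/* Hypothetical example: each element depends on the one before it, so
 * iteration i cannot start until iteration i-1 has finished, regardless
 * of how many cores are available. */
#include <stdio.h>

#define N 16

int main(void) {
    double x[N];
    x[0] = 1.0;
    for (int i = 1; i < N; i++)
        x[i] = x[i - 1] * 0.5 + 1.0;   /* loop-carried dependency */
    printf("%f\n", x[N - 1]);
    return 0;
}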
 
The M1 is optimized for one thing: running apps. x86 processors are optimized for many, many different things, some of which conflict with each other.

The larger instruction caches on the M1 suggest to me that Apple discovered that speculative execution wins and code cache misses are more problematic than memory misses. That's something you can only figure out with profiling. In this multiple task/process world that might be more true than it used to be.
 