That’s highly debatable. It might be a very important capability to you and a handful of others, but in terms of the overall Mac landscape it’s surely only relevant to a tiny fraction of Mac users. In the grand scheme of things, and especially in light of Apple’s own GPU investments, it’s not hard to imagine that Apple may decide that it’s not worth the effort to continue eGPU support.
You are right. Even though I use one, I'm not sure it's the best solution, honestly.
 
So... from what I've gleaned so far, the bitterest complaint appears to be that Boot Camp is dead. But this new benchmark would appear to contradict that. So is this a legit sign that once these machines are in the real world, they WILL be able to run Windows under emulation, at speeds quicker than anything previously achieved? Overnight?!
If it runs Windows, it will be the ARM version, not x86. If it runs x86 under emulation inside that ARM version, it will be slow. If people are expecting to play Windows games on this, I think you'll be disappointed.
 
But what are all the Apple-haters and Me-doubters going to complain about if that’s the case?

Oh, I know. They’ll fall back to “but it doesn’t virtualize x86.”

Anyway, that’s actually more of a Rosetta-speed hit than I expected, but we’ll see when we get real world data.
I’m sorry, but I take issue with the dismissive nature of this comment. Virtualizing x86 IS a big deal. I have to deal with several proprietary Windows applications for work, and being able to run Windows in a VM is the difference between needing and not needing a separate computer for work. Plus I like gaming on my MBP. The ability to virtualize Windows - or run Boot Camp - might literally be what tips me into buying a last-gen 16” Intel MBP over an upcoming M1-based one.

I’m hopeful for a third-party solution, though. Back in the PowerPC days, VirtualPC was originally an x86 emulator in addition to handling virtualization. Maybe Parallels will add x86 emulation to a future product.

Also, the fact that you are disappointed in these Rosetta benchmarks is kind of astounding to me. They are phenomenal by emulation standards.
 
How the heck is Apple so far ahead in performance? It's incredible how much of a lead they have; it's like alien technology.
I think this is kind of what you'd expect to happen.

x86 ruled the roost for a long time. The fact that it ran what always ran before was valued. And people bought a lot of them. So the companies making x86 were awash in funds to apply to the R&D necessary to make it perform sufficiently well to keep selling.

x86 has been increasingly hobbled by the fact that it had to carry so much backwards compatibility, and we've always known it wasn't the "optimal" design. But what was going to replace it? There's no point investing a ton of money into a design that's just as good as x86-- who would buy it? It has to be enough better than x86 to justify breaking backwards compatibility.

So it's no surprise that when we see something truly compete with x86, that it's significantly better. We wouldn't see it if it weren't.

So why is Apple the company that did it? Because Apple had a revenue stream to fund R&D on the scale of what x86 sees. To compete with x86, it's not enough to simply have a better architecture, you need to match its implementation. You don't just license Arm, click compile and go to fab-- if you want to win, you need to pay attention to the details. I've seen @cmaier hammering on this repeatedly: every other Arm maker has been using ASIC methodologies, while Apple put the time, money and effort into optimizing and tuning the actual implementation. You don't bother to do this if you're building an Arm into a toaster oven, but you can if you're trying to make the world's leading mobile devices and hoovering up the lion's share of smartphone profits year after year.

This put Apple into a position where they had a better architecture, an implementation that's at least as good as the x86 makers and, to Intel's great disgrace, access to a superior process. We've known this could happen, but now the planets aligned to make it happen.
 
I never said the iPad version was feature complete. I said Microsoft is supporting the iPad better than their own products.

Depends what you want. If you want a touch-friendly version of Office (as you do when running on a tablet), the iPad version works better than the desktop version pretending to be a touch-based application on an MS Surface used as a tablet.

MS released the Surface, crippled Windows 10 with touch-based garbage, and then didn't support Office on it properly.

Hence: Microsoft is supporting the iPad better than they are supporting their own Surface products.


edit:
besides, most people never use 90% of the Office suite's features anyway - but if you do, you run it on a desktop, not on a tablet in touch mode, which is what Microsoft would like you to THINK the Surface is good at.
Here's the thing... the Surface is more than a tablet, so I can use FULL FAT Office in desktop mode [using a docking station, as I do] or laptop mode, making real Office far better than anything that can be achieved on an iPad [touch friendly or not].

Notwithstanding the above, using touch Excel on a Surface is perfectly achievable, easily on par with an iPad and includes full functionality.
 
Wrong. AMD had HSA-based SoCs ten years ago. The problem with this approach is that the memory is fixed, i.e., not expandable or scalable. The M1 shows promise in single-core performance for a subset of processes and application spaces.

In reality, every application running today is multi-core/multithreaded, and this CPU will not be used for large applications that require large data sets and are memory- and computationally intensive. It would crash.

We have a fixed 16GB shared memory space. That's it. I routinely run 30GB rendering samples. This CPU would crash in Blender.

AAA games now require 12GB of dedicated VRAM for 4K gameplay; this CPU would crash and lock up the entire system.

These systems are disposable, consumable, low-power, efficient little boxes for basic web browsing and office work. Heavy, high-core/high-thread, memory-intensive applications that thrive the more resources you throw at them--computational fluid dynamics, finite element analysis, large data sets that need to be held in memory--none of these and more will work with the M1.

If you're betting this solution has some linear, 1:1 scalable capability of adding more cores and more shared memory, as if the memory on the SoC were limited only by your imagination, then you'll be sorely disappointed.

Back in 2019, when Apple demoed the Mac Pro onstage with 1,024 channel strips running a full 100-piece orchestra, they could max out Logic Pro [X at the time, 10.5] without taxing the 28-core Xeon with 56 threads, an Afterburner, and dual Radeon Pro Duos with 128GB of HBM memory - barely pushing the cores, though it utilized all 28 of them.

That same score wouldn't even load on any of today's M-series SoCs, or those for years to come. They'll have to offer a completely different workstation-class set of chips for that ever to become a reality.

By the way, AMD's Zen 4 will include their own neural engine FPGAs (drawing on the Xilinx merger), their own tensor cores, and RDNA 3.0 on a 5nm fab, in SoCs that won't be limited to 16/32/64/128GB of HBM2e memory [which they can leverage even now, seeing as they were first to market with it]. Zen presently supports up to 2TB, and Zen 4 on DDR5/LPDDR5 will expand that memory footprint for the supercomputing and big data center markets.

People seem to delude themselves that this SoC is the future when it's a specialized solution.
You're right, of course, but those heavy apps concern about 10% of customers. The three models released with the M1 account for more than 90% of Mac sales volume, where 16GB of RAM is plenty. From a sales-volume perspective, the transition is already almost complete.
 
That’s highly debatable. It might be a very important capability to you and a handful of others, but in terms of the overall Mac landscape it’s surely only relevant to a tiny fraction of Mac users. In the grand scheme of things, and especially in light of Apple’s own GPU investments, it’s not hard to imagine that Apple may decide that it’s not worth the effort to continue eGPU support.
You're right, and it's a "niche" feature, but when they started to support eGPUs they had to know that Apple Silicon Macs were on the horizon, and they kept improving support for them across several macOS versions. Why bother adding support if they were going to drop it in a year?

I’m sorry, but I take issue with the dismissive nature of this comment. Virtualizing x86 IS a big deal. I have to deal with several proprietary Windows applications for work, and being able to run Windows in a VM is the difference between needing and not needing a separate computer for work. Plus I like gaming on my MBP. The ability to virtualize Windows - or run Boot Camp - might literally be what tips me into buying a last-gen 16” Intel MBP over an upcoming M1-based one.

I’m hopeful for a third-party solution, though. Back in the PowerPC days, VirtualPC was originally an x86 emulator in addition to handling virtualization. Maybe Parallels will add x86 emulation to a future product.

Also, the fact that you are disappointed in these Rosetta benchmarks is kind of astounding to me. They are phenomenal by emulation standards.
I used to think like this. It *would* be nice to have a machine that could do all three (Mac, gaming, and Windows). But Mac GPUs are not very powerful, there are almost no games on macOS, and more apps have been ported to macOS now, so losing Windows is not as much of an issue (for me) as it was a few years ago.

It was nice that there was a do-it-all Mac for a while, but in the (near) future we'll need either Mac + gaming PC or Mac + console. The Mac will be a better Mac thanks to Apple Silicon and either the gaming PC or the console will be much better at games than any Mac.
 
The integrated GPU in the M1 is pretty impressive as well.

😂😂😂

I don’t know why you answered unless you are referring only to integrated graphics performance. I said a high end RTX card (with 24GB memory) that scores over 200,000 isn’t even fast enough for real time interactions with a very heavy After Effects comp. A GPU that does less than 20,000 is not suitable for that. There isn’t even a high end CPU or GPU in existence that can do certain high end workflows in real time and that is years away.

Please have realistic expectations and educate yourself about how heavy some workloads are if you want to talk tech ✌️
 
Personal attacks are not permitted on these forums. My reaction score is almost 20,000 on here, so I guess many people like my posts. I’m happy to address any on-topic objections you have to what I’ve written.

Almost every one of your posts encourages people to get off topic and argue. Instead of just posting that you were surprised there was more of a Rosetta-speed hit than you expected, you add in crap like:
But what are all the Apple-haters and Me-doubters going to complain about if that’s the case?

Oh, I know. They’ll fall back to “but it doesn’t virtualize x86.”

Go count your internet points 😂
 
Something I'm puzzled about.
It was always the cheap, low-end machines that had graphics systems sharing graphics memory with main memory.
Of course, you then also have the problem of the memory you want to hold programs in being taken by the graphics chip.
We then moved on to higher-end graphics cards with their own super-fast dedicated memory, to stop them hitting the processor.
So the processor could get on with what it was good at, with its own memory, and the graphics cards could push ahead with their own dedicated memory too.
This allowed graphics performance to storm ahead.
Now we seem to be going back to shared memory again.
Can anyone explain why this is not going backwards?
 
I think this is kind of what you'd expect to happen.

x86 ruled the roost for a long time. The fact that it ran what always ran before was valued. And people bought a lot of them. So the companies making x86 were awash in funds to apply to the R&D necessary to make it perform sufficiently well to keep selling.

x86 has been increasingly hobbled by the fact that it had to carry so much backwards compatibility, and we've always known it wasn't the "optimal" design. But what was going to replace it? There's no point investing a ton of money into a design that's just as good as x86-- who would buy it? It has to be enough better than x86 to justify breaking backwards compatibility.

So it's no surprise that when we see something truly compete with x86, that it's significantly better. We wouldn't see it if it weren't.

So why is Apple the company that did it? Because Apple had a revenue stream to fund R&D on the scale of what x86 sees. To compete with x86, it's not enough to simply have a better architecture, you need to match its implementation. You don't just license Arm, click compile and go to fab-- if you want to win, you need to pay attention to the details. I've seen @cmaier hammering on this repeatedly: every other Arm maker has been using ASIC methodologies, while Apple put the time, money and effort into optimizing and tuning the actual implementation. You don't bother to do this if you're building an Arm into a toaster oven, but you can if you're trying to make the world's leading mobile devices and hoovering up the lion's share of smartphone profits year after year.

This put Apple into a position where they had a better architecture, an implementation that's at least as good as the x86 makers and, to Intel's great disgrace, access to a superior process. We've known this could happen, but now the planets aligned to make it happen.
Fundamentally, Apple has better people and is more motivated. That makes a big difference.
 
The article is incorrect.

Apple Silicon M1 Emulating x86 is Still Faster Than Every Other Mac in Single Core Benchmark

No. It doesn’t emulate x86. It translates the binary to ARM before you run it. While it runs, it is native ARM code.

Since this version of Geekbench is running through Apple's translation layer Rosetta 2, an impact on performance is to be expected.

Yes, it is a translation layer (not emulation), but Geekbench is not “running through” that layer, as the translation happens before you run the application.

These mistakes significantly misrepresent what is actually being shown. Yes, the M1 chips are extremely fast, and sure - Apple’s translation technology is seriously impressive. But they are not running rings around those x86 chips to the extent that they can outperform them during emulation.
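To make the emulation-vs-translation distinction concrete, here's a toy Swift sketch. It is nothing like Rosetta's actual internals - the mini instruction set and both functions are invented purely for illustration - but it shows why decoding once up front beats decoding on every run:

```swift
// Toy mini-ISA, invented for illustration only.
enum Instruction {
    case add(Int)
    case mul(Int)
}

let program: [Instruction] = [.add(3), .mul(4), .add(1)]

// Emulation: decode and dispatch every instruction on every run.
func emulate(_ program: [Instruction], start: Int) -> Int {
    var acc = start
    for op in program {
        switch op {
        case .add(let n): acc += n
        case .mul(let n): acc *= n
        }
    }
    return acc
}

// Translation: decode once up front into native closures; later runs
// execute the translated code with no per-instruction decode cost.
func translate(_ program: [Instruction]) -> (Int) -> Int {
    let ops: [(Int) -> Int] = program.map { op in
        switch op {
        case .add(let n): return { $0 + n }
        case .mul(let n): return { $0 * n }
        }
    }
    return { start in ops.reduce(start) { acc, f in f(acc) } }
}

let translated = translate(program)   // done once, "at install time"
print(emulate(program, start: 2))     // 21
print(translated(2))                  // 21, with no re-decoding
```

The translated version pays the decode cost once, which is essentially the trick Rosetta 2 pulls at install or first launch, so what executes afterwards is native code.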
 
Nice single core scores. Now let's see the multi core emulated experience, as this will be the most relevant factor for more recent apps.
My 2014 rMBP is ready to be replaced with the 14" AS MBP next year, after nearly 7 years of good service.
 
Now we seem to be going back to shared memory again.
Can anyone explain why this is not going backwards?

It's a really complex topic, but the simplest way to put it is that modern CPU designs have mostly overcome the limitations of shared memory, to the point where it's "good enough" for a large majority of use cases.
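For the curious, here's a minimal Swift/Metal sketch of the unified-memory idea: with a shared storage mode, the CPU and GPU see the same allocation, so there's no copy across a bus the way there is with a discrete card's dedicated VRAM. The API calls are standard Metal; the example data is made up.

```swift
import Metal

// Grab the system GPU (on an M1 this is the integrated GPU sharing
// memory with the CPU).
guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("No Metal device available")
}

let input: [Float] = (0..<1024).map { Float($0) }

// .storageModeShared: one allocation visible to both CPU and GPU.
// On a discrete-GPU system you'd typically use .storageModeManaged
// or .storageModePrivate plus an explicit blit, and pay for a copy.
let buffer = device.makeBuffer(bytes: input,
                               length: input.count * MemoryLayout<Float>.stride,
                               options: .storageModeShared)!

// The CPU can mutate the very memory the GPU will read -- no upload.
let ptr = buffer.contents().bindMemory(to: Float.self, capacity: input.count)
ptr[0] = 42.0
```

The old downside (the GPU stealing bandwidth and capacity from the CPU) is what modern designs have largely mitigated; the upside is that nothing ever needs to be shuttled between two memory pools.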
 
Here is what Apple didn't discuss at the November event, and what happens if Intel is ready to launch a CPU built on 7nm.



View attachment 1668982
Whenever that will be... Apple got into a similar situation in 2005, when they were unable to deliver a G5 in the PowerBook and needed a leap forward. I don't think anyone regrets that transition; Apple usually nails transitions.
Let's be honest, Intel has been nothing but a disappointment for the last five years. Even AMD managed to overtake them, after public opinion had already declared AMD dead.
 
This put Apple into a position where they had a better architecture, an implementation that's at least as good as the x86 makers and, to Intel's great disgrace, access to a superior process. We've known this could happen, but now the planets aligned to make it happen.

Apple is also in this unique position because they kept third-party developers active, and they are not afraid to make compatibility-breaking changes in service of their long-term goals (like removing 32-bit support in Catalina, enforcing sandboxing, notarization, etc.).
In the other camp, Windows has accustomed users to expecting apps from 10+ years ago to still work.
 
No, we aren't sure; you are. And you're wrong, by the way. No one said Rosetta 2 can't run on multiple cores. The article just emphasises the single-core perf because that alone already exceeds all the Intel Macs, which makes for a funnier article. Reading comprehension, people.

There is a multi-core score on the benchmark site. It just happens to be beaten by Intel Macs, because apparently multi-core emulation doesn't scale nearly as linearly as a native run, where the multi-core score for a proper multi-threaded executable with easily parallelised tasks is almost exactly the single-core score times the core count. Here the single-core perf is 1313, so you'd expect a multi-core score of around 10504, yet in the real world it only scores 5888. So multi-core scalability isn't great, but it does exist.

Apple will work hard either to improve on that, or push ARM ports for the most important apps. Or both.
It's the same processor in all 3 products.
It looks like they pull further apart on the multi-core tests, which makes sense: running more cores generates more heat, which will favour the systems with more cooling.
 
No, we aren't sure; you are. And you're wrong, by the way. No one said Rosetta 2 can't run on multiple cores. The article just emphasises the single-core perf because that alone already exceeds all the Intel Macs, which makes for a funnier article. Reading comprehension, people.

There is a multi-core score on the benchmark site. It just happens to be beaten by Intel Macs, because apparently multi-core emulation doesn't scale nearly as linearly as a native run, where the multi-core score for a proper multi-threaded executable with easily parallelised tasks is almost exactly the single-core score times the core count. Here the single-core perf is 1313, so you'd expect a multi-core score of around 10504, yet in the real world it only scores 5888. So multi-core scalability isn't great, but it does exist.

Apple will work hard either to improve on that, or push ARM ports for the most important apps. Or both.
 
The native multi-core Air performance scales by 4.3x (at least partly due to the M1 having 4 "fast" cores and 4 "slow" cores). Rosetta appears to scale 4.5x, which seems pretty good.
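As a quick sanity check on those ratios, using only the numbers quoted in this thread (1313 single-core and 5888 multi-core under Rosetta), here's a rough Swift calculation:

```swift
import Foundation

// Numbers quoted earlier in the thread (Rosetta 2 Geekbench run).
let singleCore = 1313.0
let multiCore  = 5888.0
let coreCount  = 8.0   // 4 "fast" + 4 "slow" cores

// The naive ideal assumes 8 identical cores, which the M1 doesn't
// have, so real scaling will always fall short of this upper bound.
let idealMulti = singleCore * coreCount            // 10504
let scaling    = multiCore / singleCore            // ~4.48x
let efficiency = 100 * multiCore / idealMulti      // ~56%

print(String(format: "scaling %.2fx, %.0f%% of the naive ideal",
             scaling, efficiency))
```

That works out to about 4.5x, matching the figure above; the ~56% "efficiency" against the naive 8-core ideal is mostly an artifact of counting the slow cores as if they were fast ones.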
 
If we had VirtualPC for PowerPC back then, I assume there will be something for Apple Silicon. Didn't Microsoft take over that app from Connectix back in the day? So Microsoft could just cash in by bringing out VirtualPC for Apple M systems. Sure, it probably wouldn't do much for games, but for other stuff it might.
 
If we had VirtualPC for PowerPC back then, I assume there will be something for Apple Silicon. Didn't Microsoft take over that app from Connectix back in the day? So Microsoft could just cash in by bringing out VirtualPC for Apple M systems. Sure, it probably wouldn't do much for games, but for other stuff it might.
There are already a number of better solutions. Parallels and VMware work great, and Parallels is just about to get its Big Sur update.
 