You appear not to understand the massive gap between a CPU as seen by a modern 64-bit app and the "full" CPU as seen by the OS.

What do you expect when you demand "compatibility"? Do you expect the M1 to support SMM mode? To support x86 hypervisors? To support x86 debugging and performance monitoring? Do you expect it to be able to run 8086 DOS, or 286 OS/2 (with the A20-line hack), or Windows 95 (with lots of tasty 386 virtual DOS boxes)?
THAT is what you are asking for if you demand "compatibility"...

Can it be faked? Sure, anything can be faked -- if neither cost nor performance matter.
Does it make SENSE for it to be faked? Well...

You're assuming what? That all applications can only be written in C, ObjC, or Swift? That nothing is ever written in assembly? While assembly is still a step removed from writing pure binary (does anyone in their right mind want to write raw binary? I think it would drive most of us to the brink), it still has benefits over a high-level language.

Nonetheless, to address what you seem to take odd issue with: ‘compatibility’ is a generalist term, used in this case to describe interoperability with non-native instruction sets. As in, Rosetta 2 offers x86 compatibility on the native ARM64 architecture of Apple Silicon. Seems relatively self-explanatory to me.
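To make the translation idea concrete, here is a deliberately tiny sketch in Python. This is not Rosetta 2's actual mechanism; the two-instruction guest "ISA", the opcodes, and the cache are all invented for illustration. The point it shows is the shape of the technique: guest code is translated once into host code, cached, and then executed natively on every subsequent run.

```python
# Toy binary translation: a made-up two-instruction guest ISA is
# translated once into Python closures ("host code"), cached, and then
# executed directly. Rosetta 2 does something analogous at vastly
# greater scale, translating x86-64 code to ARM64.

def translate(block):
    """Translate a guest basic block into a list of host callables."""
    host_code = []
    for op, arg in block:
        if op == "ADD":    # guest: add immediate to accumulator
            host_code.append(lambda st, n=arg: st.__setitem__("acc", st["acc"] + n))
        elif op == "MUL":  # guest: multiply accumulator by immediate
            host_code.append(lambda st, n=arg: st.__setitem__("acc", st["acc"] * n))
        else:
            raise ValueError(f"unknown guest opcode {op!r}")
    return host_code

translation_cache = {}  # block id -> translated host code

def run(block_id, block, state):
    if block_id not in translation_cache:          # translate once...
        translation_cache[block_id] = translate(block)
    for host_insn in translation_cache[block_id]:  # ...execute many times
        host_insn(state)
    return state["acc"]

result = run("b0", [("ADD", 4), ("MUL", 3)], {"acc": 1})
print(result)  # (1 + 4) * 3 = 15
```

The translate-once, run-many-times structure is why translated code can approach native speed: the per-instruction decoding cost is paid up front rather than on every execution.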

The creation of a complete x86 abstraction layer is no mean feat, though as we are well aware, not an impossible one. Whether such a task ‘makes sense’ for any company boils down to one simple thing entirely unrelated to computers: money.

Does it make sense to create an entire subsystem which will run x86, rather than virtualise an ARM system which then performs the x86 translation itself? To be perfectly honest, not a single one of us can answer that question.

What it amounts to for companies such as Parallels is: what might the competition be aiming for? Do we want to risk market share by not aiming higher than our competitors? And most importantly of all, does the potential revenue justify the expenditure?

To be perfectly honest, I’d be surprised if it did. It would be significantly more cost-effective to pursue the avenue they currently are: virtualise ARM and let software such as the ARM variant of Windows worry about the rest.

That assumes, of course, either that Parallels knows something we don’t or that it is making a big gamble: that Microsoft will even offer Windows on ARM licences to end users who wish to purchase one separately from hardware. That’s an entirely different debate.

Without Microsoft offering that option to end users, making their own x86 dynamic recompilation system becomes a different prospect. But, yet again, it’s all assumption. None of us knows the end game for Parallels, or for VMware, who have been even more tight-lipped.

However, there are at the moment three options for executing x86 Windows that we are aware of:
1. Virtualise Windows on ARM, as they are doing.
2. Create x86 recompilation from the ground up.
3. Allow Rosetta 2 to do the heavy lifting, as CrossOver has already done.
One of those options will be the most efficient and most cost-effective way for them to proceed, at least in the short term, and I think we can all guess which that will be.


Obviously Windows isn’t the be-all and end-all; many of us will be significantly more interested in running Linux. But Linux is already quite well catered for on ARM, so that makes it much less of an issue.
 
With all the 'it is not possible to...', it doesn't really seem it can do much!
Waaaaah!
Virtualisation is something that will improve over time, even on these early chips. It might be accelerated or better supported as the new processor range matures, too, so don’t write anything off just yet. There is a reason they released the basic 13” Pro and the Air first; not everything for pros is quite there yet.
Thank you! The whining is strong with this group, at least on the first few pages. But you're right, it will improve over time.
It's a free beta at the moment; they're far from a finished product at this point.
That's right.
I'm going to release a competing emulator that also doesn't run Windows. In fact, mine won't run anything! Success!
Okay, that was funny. :)
Do not forget to sell it via subscription only... :)
Now 30 years ago this would have had the same chance of success as making lemons out of lemonade. But today we're addicted to subscriptions, so...hey, go make some money. After all, people have proven that they will just give it to you! Every month! lol
Yeah when it's killed by the M1X or M2
Nicely done.
 
I'm guessing you're overestimating how much code is written in assembly in 2020.
 
Not at all, though rather obviously the vast majority of code these days is written in a high-level language. After all, Assembly has naturally become more complex than when we were writing code for something like a Zilog Z80. However, it does still remain a viable option.

As a way for anyone needing that extra efficiency to get as close to the metal as realistically possible, Assembly will always be around. As for its use today: look beyond Apple and Windows and you'll find it's still popular when coding for ARM devices such as the Raspberry Pi.

Is it as popular as Python, for example? Of course not, and it never will be; most people just can't be bothered to learn it. But it is most definitely still in use today by many people. Do a search for ARM Assembly and you'll find several hundred million matches, many of which are tutorials (there's a rather nice one here for beginners), and of course reference guides. Along with the (probably) millions of articles declaring you to be crazy to learn it :D
 
The issue is not recompiling, and is not assembly vs higher level languages.
I get the feeling that you're demanding "perfect compatibility" without actually knowing anything about what it is that you are demanding. I raised a bunch of points, things like SMM or the handling of 386 VMs, or the handling of 8086 mode at boot time. You simply ignored those.

I suspect you would be better off looking into the complexities of exactly what it is you are demanding. Providing the ability to execute user-mode-only x86 binaries is one thing. Providing enough of a virtualization of an x86 CPU to run an unmodified x86 OS or hypervisor or even driver is something very different.
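That distinction can be sketched as a toy, with every name invented for illustration: a user-mode translation layer only ever sees unprivileged computation and system calls, which it can run directly or forward to the host OS, while privileged machine state (SMM, ring 0, device I/O) simply never appears because the guest is an ordinary application. A full-system emulator would have to model all of that itself.

```python
# Toy contrast between user-mode emulation and full-system emulation.
# A user-mode layer (like Rosetta 2) executes unprivileged guest
# computation and forwards system calls to the host OS. Privileged
# operations have no host-side equivalent to forward to; a full-system
# emulator must implement them as a model of the whole machine.

def run_user_mode(instructions, host_syscalls):
    log = []
    for insn in instructions:
        if insn == "add":
            log.append("executed add")            # plain computation: just run it
        elif insn == "syscall:write":
            log.append(host_syscalls["write"]())  # forward to the host kernel
        elif insn.startswith("priv:"):
            # No SMM, no ring 0, no device models in a user-mode layer:
            raise NotImplementedError(f"{insn!r} needs full-system emulation")
    return log

host = {"write": lambda: "host kernel handled write"}
print(run_user_mode(["add", "syscall:write"], host))
```

Running `run_user_mode(["priv:smm_enter"], host)` raises immediately, which is the whole point: the gap between the two approaches is everything behind that exception.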
 

And you consistently confuse possibilities with demands. I am personally demanding nothing; I do not need the ability to run x86 on anything but my Windows desktop. As for ignoring 386, 8086, or anything else: why debate hypotheticals? Even though, again, it would be theoretically achievable. Impractical? Without a doubt. Improbable? Absolutely without question. Impossible? Of course not.

The initial response, and everything following, is simply stating that it is possible. Not that it should be attempted.

Whether anyone would attempt such an extraordinarily mammoth and complex task is not in question here. It is obviously highly unlikely anyone would, at this time.

It is, however, a simple fact that any architecture can be implemented by a non-native alternative, if you have both the computational capability and the ability (not to mention the time) to code such a difficult prospect.
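As a trivial illustration of that fact, here is a made-up accumulator machine (nothing like any real ISA) run by a software fetch-decode-execute loop. An architecture is just state plus transition rules, and both can be reproduced in software on any sufficiently fast host:

```python
# Minimal fetch-decode-execute loop for an invented accumulator
# machine. Real emulators (of the Z80, the Amiga chipset, or x86)
# are this same loop scaled up by orders of magnitude.

def emulate(program):
    acc, pc = 0, 0
    memory = {}
    while pc < len(program):
        op, arg = program[pc]              # fetch + decode
        if op == "LDI":
            acc = arg                      # load immediate
        elif op == "ADD":
            acc += arg                     # add immediate
        elif op == "ST":
            memory[arg] = acc              # store accumulator to memory
        elif op == "HLT":
            break                          # halt the machine
        pc += 1                            # advance to next instruction
    return acc, memory

acc, mem = emulate([("LDI", 10), ("ADD", 32), ("ST", 0), ("HLT", None)])
print(acc, mem)  # 42 {0: 42}
```

The cost is the multiplier: every guest instruction becomes many host instructions, which is exactly why the Amiga could only be emulated once hosts were unimaginably faster than it.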

30 years ago we couldn’t imagine that the architecture of the Amiga, very complex for its time, would be implemented as a piece of software. Time, advances in technology, and computational power unimaginable to an Amiga user proved otherwise.

Nothing is impossible.
 
A recent Dev Build allows x64 apps to run. Hard to admit, but MS is getting better and better. They must be taking their Azure ARM infrastructure seriously.
 