You appear not to understand the massive gap between a CPU as seen by a modern 64-bit app and the "full" CPU as seen by the OS.
What do you expect when you demand "compatibility"? Do you expect the M1 to support SMM mode? To support x86 hypervisors? To support x86 debugging and performance monitoring? Do you expect it to be able to run 8086 DOS, or 286 OS/2 (with the A20-line hack), or Windows 95 (with lotsa tasty 386 virtual DOS boxes)?
THAT is what you are asking for if you demand "compatibility"...
Can it be faked? Sure, anything can be faked -- if neither cost nor performance matter.
Does it make SENSE for it to be faked? Well...
You're assuming what? That all applications can only be written in C, Obj-C, or Swift? That nothing is ever written in assembly? While assembly is still a step removed from writing in pure binary (does anyone in their right mind want to do binary? I think it would drive most of us over the brink), it still has benefits over a high-level language.
Nonetheless, to address what you seem to take odd issue with: ‘compatibility’ is a general term, used in this case to describe interoperability with non-native instruction sets. As in, Rosetta 2 offers x86 compatibility on the native ARM64 architecture of Apple Silicon. Seems relatively self-explanatory to me.
The creation of a complete x86 abstraction layer is no mean feat, though as we are well aware, not impossible. Whether such a task ‘makes sense’ for any company boils down to one simple thing, entirely unrelated to computers: money.
Does it make sense to create an entire subsystem which runs x86, rather than virtualise an ARM system which then performs the x86 translation itself? Well, to be perfectly honest, not a single one of us can answer that question.
What it amounts to for companies such as Parallels is: what might the competition be aiming for? Do we want to risk market share by not aiming as high as our competitors may be? But most importantly of all, does the potential revenue justify the expenditure?
Frankly, I’d be surprised if it did. It would be significantly more cost-effective to pursue the avenue they currently are: virtualise ARM and let software such as the Windows ARM variant worry about the rest.
That assumes, of course, that Parallels knows something we don’t, or is making a big gamble: that Microsoft will even offer Windows on ARM licences to end users who wish to purchase them separately from hardware. That’s an entirely different debate.
Without Microsoft offering that option to end users, building their own x86 dynamic recompilation system becomes a different prospect. But, yet again, it’s all assumption. None of us knows the end game for Parallels, or for VMware, who have been even more tight-lipped.
However, there are at the moment three options for executing x86 Windows that we are aware of:
1. Virtualise Windows on ARM, as they are doing now.
2. Build x86 recompilation from the ground up.
3. Let Rosetta 2 do the heavy lifting, as CrossOver has already done.
One of those options will be the most efficient and cost-effective way for them to proceed, at least in the short term, and I think we could all guess which that will be.
Obviously Windows isn’t the be-all and end-all; many of us will be significantly more interested in running Linux. But Linux is already quite well catered for on ARM, so that makes it much less of an issue.