The biggest hurdle to any x86 emulation on ARM will be Intel. They don't like licensing x86 to anyone. This is why even Windows 10 on ARM doesn't support x86 emulation. I also don't see Intel supporting or encouraging the end of their reign...
Except it does provide x86 emulation:
https://docs.microsoft.com/en-us/windows/uwp/porting/apps-on-arm-x86-emulation
It just isn't particularly good.
Wouldn't that only be hardware x86 emulation? Software x86 emulation surely must be possible without Intel interfering?
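At its simplest, software emulation is just a fetch-decode-execute loop over the guest's machine code. Here's a toy Python sketch using three real x86 opcodes, just to illustrate the idea; a real emulator like QEMU also has to handle memory, flags, syscalls, and thousands more instructions:

    import struct

    def emulate(code):
        # Toy interpreter for a tiny x86 subset:
        # 0xB8 = mov eax, imm32; 0x05 = add eax, imm32; 0xC3 = ret.
        eax, pc = 0, 0
        while pc < len(code):
            op = code[pc]
            if op == 0xB8:                      # mov eax, imm32
                eax = struct.unpack_from("<I", code, pc + 1)[0]
                pc += 5
            elif op == 0x05:                    # add eax, imm32
                eax = (eax + struct.unpack_from("<I", code, pc + 1)[0]) & 0xFFFFFFFF
                pc += 5
            elif op == 0xC3:                    # ret: hand back eax
                return eax
            else:
                raise NotImplementedError(f"opcode {op:#04x}")
        return eax

    # mov eax, 40; add eax, 2; ret  ->  prints 42
    print(emulate(bytes([0xB8, 40, 0, 0, 0, 0x05, 2, 0, 0, 0, 0xC3])))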
It does, but Intel has already started firing warning shots at it and at other attempts to emulate x86. Obviously, some parts of x86 are patent-free, but sorting that out is likely a non-trivial task.

https://arstechnica.com/information...t-claims-x86-emulation-is-a-patent-minefield/

Ok, that went under my radar. That will make it quite interesting to see what Apple does with its ARM transition. I guess the remaining option is for either Intel or AMD to make hybrid x86/ARM processors. Or, I guess it's still possible to have an ARM-based machine and put a tiny x86 chip in there somewhere for legacy apps.
You know what, Geekbench isn't biased toward Apple or ARM at all; if anything, it only measures peak performance, which doesn't take sustained performance and thermals into account. So if you just look at Geekbench, yes, the A12X is just as fast as an 8th-gen quad-core i7, with the GPU being about as fast as Intel's best integrated graphics as well.

Sure, there are some caveats to ARM performance, mostly with complex floating-point and vector operations, but that's completely irrelevant for any modern consumer application.
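The peak-versus-sustained gap is easy to demonstrate yourself: run the same workload back to back and watch the scores sag once the chip heats up and throttles. A rough Python sketch (the toy loop and run count are placeholders; a real test would need minutes of heavy, ideally multithreaded work):

    import time

    def workload(n=2_000_000):
        # CPU-bound busy work; only the relative timings matter here.
        acc = 0
        for i in range(n):
            acc += i * i
        return acc

    for run in range(20):
        start = time.perf_counter()
        workload()
        elapsed = time.perf_counter() - start
        # On a thermally limited machine, later runs report lower rates.
        print(f"run {run:2d}: {1.0 / elapsed:6.2f} runs/sec")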
Couldn't there just be some sort of a licensing agreement, like the one AMD and Intel have going on?

AMD and Intel Patent Cross License Agreement:
https://www.sec.gov/Archives/edgar/data/2488/000119312509236705/dex102.htm
What I never liked about Geekbench is that it's this weird collection of tasks, often relying on third-party frameworks that can be implemented differently between architectures. I get that they wanted an idea of "real-world" performance, but this approach makes for a poor benchmark of CPU performance.

I've found Geekbench to give different results on the exact same hardware when booting different operating systems. I haven't tried it on the MBP yet, but possibly I should. I don't think Geekbench results are very reliable cross-platform, even though that seems to have been a design goal.
I've been resisting the urge to write solid and simple benchmarks that test various hardware aspects of cross-platform devices in a clear and simple way (roughly along the lines of the sketch below). One of these days I'm sure my willpower will fail. Sadly, it's probably a pretty big task.
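The skeleton, at least, is simple enough; something like this rough Python sketch, where the kernels are toy placeholders and fair cross-ISA workload selection is the genuinely hard part:

    import time

    def int_ops(n=1_000_000):
        # Integer ALU placeholder kernel.
        acc = 1
        for i in range(1, n):
            acc = (acc * 31 + i) % 1_000_003
        return acc

    def float_ops(n=1_000_000):
        # Floating-point placeholder kernel.
        acc = 0.0
        for i in range(1, n):
            acc += 1.0 / i
        return acc

    def mem_copy(passes=64, size=1 << 20):
        # Memory-bandwidth placeholder: copies a 1 MiB buffer each pass.
        buf = bytearray(size)
        for _ in range(passes):
            buf = bytearray(buf)
        return len(buf)

    def bench(fn, repeats=5):
        # Best-of-N with a monotonic clock to reduce scheduling noise.
        times = []
        for _ in range(repeats):
            start = time.perf_counter()
            fn()
            times.append(time.perf_counter() - start)
        return min(times)

    for kernel in (int_ops, float_ops, mem_copy):
        print(f"{kernel.__name__:10s} {bench(kernel) * 1000:8.1f} ms")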
Oh I forgot, IBM has a license too, don't they? Or at least used to. Anyway, AMD and Intel have each other grabbed by the b*lls; that's why it works there. Any other player would probably end up having to pay so much that it's no longer worth it.
And that would accomplish what exactly? I don't think that Apple would be interested in developing an Intel-compatible CPU.
Frankly, I wouldn't mind if the x86 instruction set just died. It has become extremely messy and unwieldy. Ideally, I'd love the industry to move to Agner Fog's ForwardCom (https://forwardcom.info), but that probably won't happen!

Is that really so unlikely? Apple has always been outspoken about how flexible the kernel and whatnot are when it comes to other platforms. And in recent years they've shown that they actually prefer to go their own way instead of using already existing things (A#, Metal, APFS, and likely other major things I can't remember). If they want to leapfrog their chips for high-end machines, they might as well go all the way.
Well, a *lot* of people rely on that compatibility. And the major software companies are so stuck in that world that they'd also like to retain it, from a porting point of view, I guess.
Ah, but there are simpler ways to achieve compatibility that wouldn't need any licensing or hardware support. They could simply recompile x86 code to ARM on demand. Machine code is just another programming language, after all, and LLVM is perfectly suited for a task like this.
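A toy Python sketch of that translation step, assuming the x86 has already been decoded into (mnemonic, register, immediate) tuples; a real implementation would lift to LLVM IR and let the AArch64 backend do the heavy lifting:

    # Maps a few decoded x86 instructions to AArch64 assembly text.
    # Register mapping assumed here: eax -> w0.
    X86_TO_A64 = {
        ("mov", "eax"): lambda imm: f"mov  w0, #{imm}",
        ("add", "eax"): lambda imm: f"add  w0, w0, #{imm}",
        ("ret", None):  lambda imm: "ret",
    }

    def translate(decoded):
        # decoded: list of (mnemonic, register-or-None, immediate-or-None).
        lines = []
        for mnem, reg, imm in decoded:
            emit = X86_TO_A64.get((mnem, reg))
            if emit is None:
                raise NotImplementedError(f"{mnem} {reg}")
            lines.append(emit(imm))
        return "\n".join(lines)

    # mov eax, 40; add eax, 2; ret
    print(translate([("mov", "eax", 40), ("add", "eax", 2), ("ret", None, None)]))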
Wouldn't that violate the same patents that prevent hardware and software emulation?
https://forums.macrumors.com/thread...-their-own-chips.2153468/page-2#post-26806050

Oh, does the patent also prevent software emulation? At any rate, what even counts as emulation? That legal stuff is way above my pay grade...
In the end, I think it depends on the exact wording of the patent. I would be surprised if it forbade other parties from doing symbolic manipulation/transformation of x86 bytecode, because then almost every compiler and disassembler would be illegal...
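Disassembly, for instance, is exactly that kind of routine symbolic transformation of x86 bytecode. The Capstone library's Python bindings do it in a few lines (sketch assumes capstone is installed):

    from capstone import Cs, CS_ARCH_X86, CS_MODE_64

    CODE = bytes([0xB8, 0x28, 0x00, 0x00, 0x00,   # mov eax, 0x28
                  0x05, 0x02, 0x00, 0x00, 0x00,   # add eax, 2
                  0xC3])                          # ret

    # Turn raw x86-64 bytes back into symbolic mnemonics.
    md = Cs(CS_ARCH_X86, CS_MODE_64)
    for insn in md.disasm(CODE, 0x1000):
        print(f"{insn.address:#06x}: {insn.mnemonic} {insn.op_str}")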
Everyone here is waiting on ARM, and all I want is a no-Touch-Bar option, a better keyboard, and maybe FaceTime authentication/an FHD camera. That would make the laptop more than perfect. I doubt anyone will be like "wish I had some ARM performance, at the cost of bugs/emulation issues/compatibility issues" - these machines are pretty darn fast as they are. If I had to bet, I reckon the majority of users don't even use half the performance.

You can be pretty sure that for 99% of all personal computers in the world, it's mostly the computer waiting for the human, rather than the human waiting for the computer.
TLDR. I don't know how far Apple has really advanced with it, but if it was good enough to be in a MacBook, it would be in one.

Is it really that far behind? Especially for general purpose? I'd argue the complete opposite. The new iPad Pro has as much computing power as many 15" MacBook Pros (the 2016 Touch Bar models with the 2.9 GHz i7 and Pro 460!), whether or not you believe benchmarks are comparable across architectures. The power is clear as day when doing real-world CPU-intensive stuff in iOS, like handling a 42-megapixel RAW image. And what's wrong with its power and temperature needs? The A12X runs cooler (with no heatsink or fan) and draws less power than an equivalent x86 chip. Intel is where power and temperature are a huge issue, not ARM. Hell, I bet you could run two or three overclocked A12X chips simultaneously where one 45 W quad-core i7 sits today and still make less heat and consume less power than the Intel setup.
We're on the verge of a computing boom again, I think. I can't wait to see what an A-series chip can do with a proper cooling setup and a laptop-sized battery. What's difficult - and what's holding it back - is software. Just like the move from PowerPC to x86 back in the day, it's a big ask for legacy software (like Adobe's CC apps) to be rewritten for a new ARM-based desktop OS. But they did it before, and they'll do it again very soon. We're already getting full Photoshop for iPad.