
shibboleth

macrumors newbie
Original poster
Mar 4, 2025
Here's a quote from an Apple Insider article: "They both pulled off an immensely complex move of processors — with Jobs taking Apple from PowerPC to Intel — and both did it so well that it's easy to forget what a task it was."

What would this task entail? How did they do it so well?
 
The issue is that going from one processor family to another means existing code would not work. What made it seem flawless was Rosetta, which allowed PPC code to run on Intel without a huge performance penalty. They did it again with Rosetta 2, allowing x86 code to execute on ARM processors.

Rosetta and Rosetta 2 are bridging technologies allowing the legacy code to run on the new processors as Apple pushed the developers to develop native (for the new processor) apps.
 
@maflynn has mentioned Rosetta - one of the key technical solutions - and it's also worth noting that Apple had already managed a (fairly seamless) processor transition - from 68000 to PPC - in the early 90s. Not Rosetta that time, but a technically different yet similar approach: a 68k emulator to run old apps, combined with the "fat binary" approach of allowing an application bundle to contain multiple versions of the compiled code for different processors.
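The fat-binary idea described above can be sketched in a few lines. This is a toy illustration, not the real Mach-O fat file format: one "image" carries a code slice per architecture, the loader picks the native slice if it exists, and otherwise falls back to a slice the host can emulate (68K under the PPC emulator, in this sketch). All names and the byte values are made up for illustration.

```python
# Toy sketch (NOT the real Mach-O fat format): a "fat binary" bundles one
# code slice per architecture; the loader picks the slice matching the host.

FAT_IMAGE = {
    "m68k":    b"\x4e\x75",          # placeholder machine code for 68K
    "powerpc": b"\x4e\x80\x00\x20",  # placeholder machine code for PPC
}

def select_slice(fat_image, host_arch):
    """Return the native slice if present, else fall back to a slice the
    host can emulate (68K code under the emulator, in this sketch)."""
    if host_arch in fat_image:
        return host_arch, fat_image[host_arch]
    if "m68k" in fat_image:          # emulation fallback
        return "m68k (emulated)", fat_image["m68k"]
    raise RuntimeError("no runnable slice for this machine")

arch, code = select_slice(FAT_IMAGE, "powerpc")
print(arch)   # native PPC slice chosen
```

On a real Mac the same inspection is done with tools like `file` or `lipo -info` against a universal binary.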

However, I think more significant was the shift from "classic" MacOS to OS X, which was a totally different OS and meant that, ultimately, every MacOS app had to be substantially re-written. Apple did provide a "Classic" emulator (effectively running a copy of MacOS 9) to smooth the transition and the "Carbon" API/library which provided an easier route for developers to port MacOS 9 applications without starting from scratch - but these were only temporary solutions, with "Classic" being killed off with the Intel transition and Carbon being phased out until the axe finally fell in 2012.

Microsoft did do a roughly comparable shift from the original DOS-descended Windows 9x to the all-new Windows NT-based series, but it was far more gradual and cautious - NT ran alongside "classic" Windows as a server/workstation OS for years. Windows XP, the version that eventually got pushed to the mass market as a replacement for 9x, was effectively the 6th major release of Windows NT and was still compromised by the desire for backwards compatibility with 9x and DOS - some of the infamous security issues were down to many legacy applications needing admin privilege to run, knobbling Windows XP's more modern security model. A lot of the DOS/9x-era backwards compatibility only disappeared with Windows 11 (it survived in the 32-bit builds of Windows 10). Plus, the very early Windows stuff had a lot of baggage from the early x86 processors, which weren't really true 32-bit, so even code written in high-level languages like C had kludgey processor-specific stuff. The Mac started out with a true 32-bit (internal) processor.

What has this got to do with changing processor? Basically, the Mac ecosystem is a lot smaller, more flexible and less legacy-obsessed than the only real basis for comparison - Windows. Nobody expects 20-year-old Mac applications to install and run - you're lucky if a 5-year-old application still runs. There has been a string of "software mass-extinction events" in the history of the Mac: 68k->PPC, Mac OS 9->Mac OS X, PPC->Intel, dropping Carbon, dropping 32-bit, Intel->Apple Silicon/ARM - plus a lot of applications get broken by every major MacOS release. That would probably not be acceptable to the conservative, corporate users of Windows - plus, it simplifies things when Apple wants to make a big change like Intel->ARM because there are fewer awkward legacy cases to cater for (e.g. 32-bit apps had been totally killed off by the time ARM came along, so Apple Silicon, Rosetta 2 etc. could be pure 64-bit).

Credit to Apple where it is due for things like Rosetta, but some of these smooth transitions are down to the Mac not having all of Windows' legacy drag-anchors. Other platforms - like Unix and Linux - have long offered source-code-level compatibility across multiple different processors.
 
Here's a quote from an Apple Insider article: "They both pulled off an immensely complex move of processors — with Jobs taking Apple from PowerPC to Intel — and both did it so well that it's easy to forget what a task it was."

What would this task entail? How did they do it so well?

Basically it comes down to practice.

Apple has had literally DECADES of experience moving its code bases from one processor architecture to the next. The Macintosh was originally on Motorola's 68000 family, then PowerPC, then x86, and now ARM. Undoubtedly they have internally prototyped moves to other architectures to explore possible advantages and disadvantages — DEC Alpha, Intel's Itanium and i860/i960, MIPS, and now RISC V have probably all had versions of Mac operating systems running. Apple might be the best in the industry at this.
 
The other advantage Apple has is that they often run production builds of existing OSes on new hardware prior to even announcing the switch. Both the PPC-Intel and Intel-ARM announcements included demos running on the new architecture; they dropped that in as an aside after demoing new features coming to the Mac. Meanwhile, Microsoft's "transitions" have largely been along the lines of DOS - Windows on DOS - standalone Windows, and 16-bit to 32-bit to 64-bit. The x86 underpinnings have been there all along, and Microsoft has ironically been like IBM with respect to not tearing everything down in order to move forward.
 
Microsoft was actually quite forward-looking with portability in Windows NT, which is the tech foundation of all modern Windows. Historically NT has been ported to 9 different ISAs, only two of which are x86 family:

x86-32
x86-64
Arm
64-bit Arm
Intel i860
DEC Alpha
Intel Itanium
MIPS
PowerPC

The thing is, x86-32 and x86-64 ground every one of these except Arm into dust, so Microsoft abandoned most of these ports long ago.

There are things not to like about how NT does its multi-architecture support (no equivalent to Apple's universal binaries, for example), but there's also no denying that it does it. Much cleaner than Apple's first ISA transition, too.

(to expand a bit on that, 68K to PowerPC was a mess under the hood. They were unable to port all of the Mac System Software to PPC prior to launch of the first PowerMacs. This was mostly because large and important chunks of the System were written in hand-optimized 68K assembly, which made porting it to PPC a huge effort - you'd basically be completely rewriting every line of code, instead of recompiling and fixing a few bugs. Some mad geniuses at Apple came up with a crazy yet brilliant design that let both the OS and applications be essentially arbitrary mixes of PPC and 68K code, switching between emulation and native execution on the fly. This wasn't an approach that was good in the long term, and was only possible by happy accident, and it kind of set classic MacOS up for some medium-term problems, but it let them ship on time.)
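The "arbitrary mixes of PPC and 68K code" trick can be sketched as a dispatcher: every cross-ISA call goes through a descriptor that records which world the target routine lives in, and a stub switches between native execution and the emulator as needed. This is a hypothetical toy illustration of the idea (the names `RoutineDescriptor`, `run_native`, `run_emulated` are my own stand-ins, loosely inspired by classic MacOS terminology), not the actual Mixed Mode Manager implementation.

```python
# Toy sketch: each callable carries a tag saying which ISA its code is in,
# and every call site dispatches through a stub that switches between
# native execution and the 68K emulator as needed.

def run_native(fn, *args):
    return fn(*args)            # PPC-native code: just call it

def run_emulated(fn, *args):
    return fn(*args)            # stand-in for entering the 68K emulator

class RoutineDescriptor:
    def __init__(self, isa, fn):
        self.isa = isa          # "ppc" or "m68k"
        self.fn = fn

def call(desc, *args):
    if desc.isa == "ppc":
        return run_native(desc.fn, *args)
    return run_emulated(desc.fn, *args)

# OS and apps can freely mix: here an "emulated" routine calls a native one
native_add = RoutineDescriptor("ppc", lambda a, b: a + b)
emulated_double = RoutineDescriptor("m68k", lambda x: call(native_add, x, x))
print(call(emulated_double, 21))   # 42
```

The cost of the real thing was that every mode switch had overhead, which is part of why this design was fine for shipping on time but not great in the long term.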
 
The thing is, x86-32 and x86-64 ground every one of these except Arm into dust, so Microsoft abandoned most of these ports long ago.
Not always on technical merit, though! I recall there were some quite nice DEC Alpha-based PC workstations at some point - but DEC failed, sold out to Compaq who put their money on Itanium (AKA "Itanic") instead and killed it.

Windows-on-ARM has persisted, but it still doesn't seem to have really taken off, even with the Snapdragon-based machines. There are two big stumbling blocks that have kept PCs on x86:

1. The PC market's reliance on legacy software going back to 16-bit 8086 (if not before - IBM chose x86 over 68k partly because it was semi-compatible with 8-bit 8080 assembly language... ) - as per my previous post. New NT architectures often got judged on how well they could run 286/DOS/16-bit-Windows code.

2. (Something I omitted first time): Apple gets to define what a Mac is - when Apple decides to switch processor or OS, within a year or so the old platform is off the market. Certainly, the early PPC Mac was a bit so-so while waiting for native versions of applications that really took advantage of it, and might not have succeeded if customers could still have bought 68k Macs. With PCs, though, nobody can really decide that "PCs shall henceforth move to ARM" - Microsoft - who are only a bit player in PC hardware and rely mostly on third-party PC makers to make platforms for their software - can nudge but they can't force - short of the extreme option of dropping x86 support from Windows (and look at the trouble they have with even minor changes in support with, e.g., Windows 11). Even Apple doesn't drop support for an ISA until years after a successful transition, but their hardware transitions are usually done and dusted in a year.

(to expand a bit on that, 68K to PowerPC was a mess under the hood. They were unable to port all of the Mac System Software to PPC prior to launch of the first PowerMacs.
Yes, I had an early PowerMac and it was initially underwhelming. Also, ISTR, it relied on old school software emulation of 68k, which is inherently slow, rather than Rosetta-style code translation, which can produce near-native performance.

However - to make a fair comparison - Windows NT was MS's ground-up rewrite "modern" OS (albeit with lots of subsystem stuff to handle legacy software) with platform independence baked in, and didn't take over as the main PC operating system until the 00s, while PPC Macs had to run System 7(?) which was written for 80s tech (when lovingly hand-crafted assembly was often needed for performance).

Also, "Classic" MacOS on PPC was only ever supposed to be a stopgap (if it existed at all) because Apple's own "new technology" OS, Copland, was coming real soon now.

Let's just say... that didn't go well :) and Classic MacOS on PPC was always a bit of a mess.

The original Mac OS lineage was eventually killed and replaced with a descendant of NeXTSTEP, an already existing UNIX-like OS (and, as such, built on a tradition of high-level source code portability - I believe that its x86 support even pre-dates its PPC support!)
 
@theluggage "I had an early PowerMac and it was initially underwhelming...
...PPC Macs had to run System 7(?) which was written for 80s tech (when lovingly hand-crafted assembly was often needed for performance)."


System 7.1 for 68K, 7.5 for the first PPCs.
I moved (from DOS) to Mac in 1993.
I had worked for twenty years in British TV production as a BBC film, video editor and producer, but my ITV production company suddenly 'downsized' and shed 300+ jobs, mine included... :rolleyes:

I heard from a British Government's Training and Enterprise Council's employee that the TEC had become inspired by Al Gore's vision of 'On-ramps to the Information SuperHighway', and there was gov't money to be had for people who could develop that 'vision'.

The upshot was that I was made leader of a small team with an equipment grant of £50K to 'research' digital Television production strategies...

So, having someone else's money, I came to purchase a fully loaded Mac Quadra 840AV (32MB RAM @ £1600, 6.5 GB SCSI RAID HDDs @ £7000), a full-size Sony Hi-8 camcorder and a Betacam (pre-SP) recorder...
Inside, in the NuBus slots, was an ATTO RAID card, and a Radius digital video capture card with a 19" rack breakout-box input processor (640x480/768x576) - another £6000. Non-square pixel DV would follow later haha... 😉



QuickTime was v1.6 and Premiere (pre-Pro) was v3.13.
Nothing worked properly, but everything could be 'worked-around'.
AVID at the time was 'offline' only, so nothing else (especially on the PC) could do long-form 'Full SD' video.
I loved my Apple gear. :)

But: QT video/audio wouldn't remain locked in sync and the SCSI bus was infinitely delicate.

Then Apple suddenly abandoned 68K and the new top-of-the-line PowerMac 8100 was available, running System 7.5.
Nothing worked even more, but with endless (snail-mail+floppy disk) collaboration with the various company reps, all the necessary 'hand-crafted assembly' software patches were painstakingly installed on the new PPC Mac, and made it work (with work-arounds, and updating to 7.5.1, nothing further...).

Radius collaborated with Apple (and Randy Ubillos) to come up with a new stable version of QuickTime (v1.85), and Ubillos developed Premiere v4 to improve it.

But having lived through that era of pre-internet software downloads, it is a marvel to see how OS X/MacOS became the stable experience that we enjoy today. 👍
 
But having lived through that era of pre-internet software downloads, it is marvel to see how OS X/MacOS became the stable experience that we enjoy today.
Well, yes - mainly because Classic MacOS died an unseemly death and modern OS X/MacOS has more in common with NeXTStep and BSD. The originally intended "modern" MacOS, Copland, is a horror story that software developers tell their children if they want them to grow up to be organic chicken farmers instead.

By the time OS X came along, "Classic" MacOS was long past its sell-by date (not as bad as Windows ME, though...) - the inexorable rise of Windows/Intel was the main reason for Apple's near failure in the 90s, but the OS situation didn't help...
 
With PCs, though, nobody can really decide that "PCs shall henceforth move to ARM" - Microsoft - who are only a bit player in PC hardware and rely mostly on third-party PC makers to make platforms for their software - can nudge but they can't force - short of the extreme option of dropping x86 support from Windows (and look at the trouble they have with even minor changes in support with, e.g. Windows 11). Even Apple doesn't drop support for an ISA until years after a successful transition, but their hardware transitions are usually done and dusted in a year.
Yep! This isn't something Microsoft can just dictate. (That said, they could've done more nudging than they have been. It feels like there's internal politics going on, so there's never been the full weight of Microsoft behind Windows-on-Arm.)

Yes, I had an early PowerMac and it was initially underwhelming. Also, ISTR, it relied on old school software emulation of 68k, which is inherently slow, rather than Rosetta-style code translation, which can produce near-native performance.
Yeah, Rosetta 2's style of ahead-of-time translation was not possible with the runtime model of the classic MacOS 68K emulator.

They did improve it over time. The first shipping version was a simple "interpretive" emulator, where every 68K instruction executed was re-decoded in software each time it was executed. Later versions switched to what was then called a dynamic recompilation emulator (today the more common term is a JIT), in which the emulator caches translation results dynamically as it steps through 68K code, and whenever a piece of code hits in the translation cache it just executes from cache instead of re-translating. Even with a limited cache size, many types of code spend a lot of time executing loops repeatedly, which means JIT emulators can get much closer to the ideal of zero overhead than interpretive emulators.
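The interpretive-vs-JIT distinction above can be sketched concretely. This is a deliberately tiny toy (a made-up "instruction set" of one opcode, invented names throughout), just to show the structural difference: the interpreter re-decodes every instruction on every execution, while the dynamic-recompilation emulator translates a block once and then re-executes the cached translation.

```python
# Toy sketch: interpretive emulator (decode every instruction, every time)
# vs a dynamic-recompilation / JIT emulator (translate once, run from cache).

PROGRAM = [("add", 1), ("add", 2), ("add", 3)]   # stand-in "68K" code

def interpret(program, loops):
    total, decodes = 0, 0
    for _ in range(loops):
        for op, val in program:          # decoded afresh on every pass
            decodes += 1
            if op == "add":
                total += val
    return total, decodes

def jit(program, loops):
    cache, total, decodes = None, 0, 0
    for _ in range(loops):
        if cache is None:                # translate once, on first pass
            cache = []
            for op, val in program:
                decodes += 1
                if op == "add":
                    cache.append(val)
        for val in cache:                # cached translation: no re-decode
            total += val
    return total, decodes

print(interpret(PROGRAM, 1000))   # (6000, 3000) - 3 decodes per loop
print(jit(PROGRAM, 1000))         # (6000, 3)    - 3 decodes total
```

The decode counts are the point: in loop-heavy code the JIT pays the translation cost once, which is why it can approach native speed where the interpreter cannot.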

Speaking of JIT, Rosetta 2 isn't pure AOT (ahead-of-time). Any web browser compiled for x86 contains its own JIT engine which translates JavaScript to x86 on the fly. Rosetta 2 can't pretranslate code that doesn't exist yet, so it needs a JIT to handle runtime-generated x86 code. JIT-inside-JIT inception is horrible for performance, but Apple needed to make sure compatibility was as broad as possible.

However - to make a fair comparison - Windows NT was MS's ground-up rewrite "modern" OS (albeit with lots of subsystem stuff to handle legacy software) with platform independence baked in, and didn't take over as the main PC operating system until the 00s, while PPC Macs had to run System 7(?) which was written for 80s tech (when lovingly hand-crafted assembly was often needed for performance).
Absolutely true, although you might be surprised to know that NT got its start in the late 1980s. It was originally called OS/2 3.0, because Microsoft and IBM hadn't yet had their messy breakup over OS/2 3.0's direction. (Briefly, after they'd both agreed on what OS/2 3.0 would be, Windows 3.0's popularity took off. Microsoft wanted to change OS/2 3.0 to include Windows-compatible APIs, while IBM wanted to continue with the original plan.) NeXTSTEP (which was going to become the future Mac OS X) was also a 1980s OS.

There was a tension in the 1980s and 1990s between early microcomputer operating systems - which, as you say, were very resource limited and thus needed minimalist software with lots of assembly - and older more mature operating systems that came from a world of bigger machines. Both NT and NeXTSTEP fell into that latter category, and it would take the prices of high performance CPUs, reasonable amounts of RAM, and reasonable amounts of disk all dropping in the 1990s for them to become suitable for use in mass-market personal computers.

Also, "Classic" MacOS on PPC was only ever supposed to be a stopgap (if it existed at all) because Apple's own "new technology" OS, Copland, was coming real soon now.
Yup. And that wasn't the only "real soon now"... 1990s Apple loved dysfunctional and doomed next generation OS / other software tech projects, many of which were at cross purposes.

The original Mac OS lineage was eventually killed and replaced with a descendent of NeXTSTEP, an already existing UNIX-like OS (and, as such, built on a tradition of high-level source code portability - I believe that it's x86 support even pre-dates its PPC support!)
I'm not completely sure but I think it was PPC, then x86.

Just like Macintosh, NeXT started out shipping 68K machines, though since they were later to the market and a lot higher end, they started with the 68030 and moved up to the 68040. When Apple decided to jump to PPC, that forced NeXT into its own transition away from 68K, and they chose to go with PPC too. NeXT did a lot of work on designing and prototyping a dual-PowerPC workstation, and also a lot of the work on porting NeXTSTEP, but before any of this could ship they ran out of money, couldn't find enough new investment, and were forced to sell off everything related to their hardware business and continue as a software-only company.

That was when they started doing things like porting NeXTSTEP to x86, rebranding it as OPENSTEP in an attempt to glom onto 1990s Open Systems marketing vibes (exactly what was "open" in any of these systems was always a little hazy), and so on. x86 wasn't even the only place they ported it; they also attempted to get some momentum by porting to a few MIPS and SPARC workstations. None of these attempts to continue the OS on other people's hardware worked out very well, which is why NeXT was very interested in being acquired by Apple a few years down the road.
 
<quote maflynn>
Rosetta and Rosetta 2 are bridging technology allowing the legacy code to run on the new processors as Apple pushed the developers to develop native (for the new processor) apps
</quote>

Yes, but not only legacy code. It also lets development continue while pretending to be on the old processor, using much of the latest frameworks, libraries, tools etc. - of course, only until the moment Apple axes the Rosetta in use.

<quote theluggage>
Nobody expects 20-year old Mac applications to install and run -
</quote>

Well...

<quote theluggage>
you're lucky if a 5 year old application still runs.
</quote>

Huh? That's certainly not my experience at all. Far from it. Besides, why would Apple put any effort into the Rosettas if that were the case? Here, some 64-bit apps from Mac OS X 10.7 still run in macOS 15.5, with and without Rosetta 2.

<quote theluggage>
plus a lot of applications get broken by every major MacOS release.
</quote>

Apple's doing or the app maker?

So funny, one of the tools I use has a rather interesting life span:
1984 System 1 - 2018 macOS 10.14 (end of 32-bit and Carbon)
WaitNextEvent (sys 6?) in Mojave, ha ha! But let's not get into language wars, so silly. So no name. It clearly went through all the processors apart from AS, and Rosetta 1 is used for the PPC version - runs acceptably, lovely.

<quote Admiral>
Apple has had literally DECADES of experience moving its code bases from one processor architecture to the next.
</quote>

This is true for many programmers and toolmakers going back many decades. With or without Apple. Like a 3rd-party programmer on Atari, Mac, and NeXT for the 680x0. Then Mac only with PPC, Intel and AS. Her/his experience grew alongside Apple for the same decades. And a lot of assembler was and is used, evidence of different processors, no doubt. And also no problem ;-)

<quote Admiral>
Apple might be the best in the industry at this.
</quote>

Perhaps, don't know. And what industry?
What about companies providing programming tools, like cross-compilers, for (very) different processors used in the embedded world? Think for instance ARM, ColdFire, MSP430, AVR, 68HCS08, 68HC12, 68K, 68HC11, 8051 processor cores. But the top level of the instruction sets for the cross-compilers can be all the same. We're talking here about totally different quantities, another world.

There are so many techniques used and still in use for emulators, simulators, Rosetta-likes, with and without JIT and whatnot. So many experiments outside our (and Apple's) view. If Apple's the best, time will tell. But I don't know if that's relevant and important in the here and now.
 
<quote maflynn>
Rosetta and Rosetta 2 are bridging technology allowing the legacy code to run on the new processors as Apple pushed the developers to develop native (for the new processor) apps
</quote>

Yes, but not only legacy code. It also lets development continue while pretending to be on the old processor, using much of the latest frameworks, libraries, tools etc. - of course, only until the moment Apple axes the Rosetta in use.
Two points. First, there's a quote button that makes life a lot easier for the rest of us when quoting - use multi-quote for many parts of a post or for multiple members' posts. This improves readability and notifies the person being quoted.

Secondly, this is why Apple pressured developers to switch their code. During the transition from legacy Mac OS to OS X, it was harder for Apple to convince developers to switch to native OS X apps, so it took longer. Now, being as dominant as they are, and with the tools they provided to ease and simplify the transition, it was not a problem for Apple.

There are so many techniques used and still in use for emulators, simulators, Rosetta-likes, with and without JIT and whatnot. So many experiments outside our (and Apple's) view. If Apple's the best, time will tell. But I don't know if that's relevant and important in the here and now.
I think you're reading too much into it, and missing the forest for the trees. Apple was best at moving from one platform to another while providing compatibility. It was generally viewed as a failure if a company had to switch platforms, and here Apple not only did it once, but twice (thrice if you count 68K).

It's also not really relevant at this point; Apple has signaled that support for Intel processors is coming to an end, so it's a moot point.
 
JIT-inside-JIT inception is horrible for performance, but Apple needed to make sure compatibility was as broad as possible.

Fortunately, browsers and language runtimes with JITs are usually the first things to go native and in most cases only need fixing in one place - things like Safari, Chrome, Python and node.js (and their underlying JITs) were already ported to ARM64 when Apple switched the Mac. I guess the main problem was Electron (and similar) apps bundled with x86 versions of chromium/node.

Huh? That's certainly not my experience at all. Far from it. Besides, why would Apple put any effort in the Rosetta's if that is the case? Here, some 64bit apps running in Mac OS X 10.7 still run in macOS 15.5 with and without Rosetta 2.
Maybe you've been lucky :) Plus, I'm thinking of users who keep their hardware and OS reasonably up-to-date (in case you're still rocking Snow Leopard).

Things like Rosetta, Rosetta2, Carbon, Classic etc. are only transitional solutions, which Apple kills as soon as feasible.
Classic: 2001-2006 (Died with Intel Macs)
Carbon: 2001-2012 (the long-runner!)
Rosetta: 2005-2011
Rosetta 2: 2020-2027 (mostly - deprecation announced at WWDC).
x86-32 apps: 2005-2019 (another long-runner)

C.f. 16-bit Windows/DOS support in NT: 1993 - 2021
You could run Win16 apps and non-extended DOS apps in the 32-bit version of Windows 10; the 32-bit build was only dropped with Windows 11. If anybody wants to revise that to some point in time when "everybody" was using 64-bit Windows, fine, but we're still probably talking sometime in the 2010s.

Apple's doing or the app maker?

A mixture - and irrelevant to the home user. Also depends what the App was doing - no reason for a text editor to get broken by changes to kernel security etc. but (e.g.) Parallels only used to last for maybe 1 or 2 major OS upgrades and I gave up on Photoshop Elements because it kept getting broken by OS upgrades (file that one as the developer's fault, I think).

This is true for many programmers and toolmakers going back many decades. With or without Apple. Like a 3rd-party programmer on Atari, Mac, and NeXT for the 680x0.
Well, firstly, the current Mac platform has more in common with NeXT than "classic" Macintosh (already been discussed).

Then anything with a *nix heritage (NeXT, MacOS, Linux) is built around an ethos of API/source-code-level portability rather than binary compatibility. The C language is particularly handy in its support for macros that can help deal with byte order, word length etc. issues at compile-time, and the whole thing is descended from mini-computers and fast workstations that were less dependent on lovingly hand-crafted assembly language.

Also, advantage to anything that started with 68k rather than early (pre-386) x86 - although some versions of the 68k only had 16- or 8-bit buses, the code was "clean" 32-bit, where the 8086 was initially only 16-bit, with abominations such as near and far pointers that even infected high-level code, and latterly the whole real/protected mode malarkey. Even early ARM only used 26-bit addressing, and the move to true 32 bits broke older RISC OS binaries.

Not sure that Atari PPC was ever really a mainstream contender... Although things like the ST, Amiga and RISC OS are still going in some form or other today (mostly using Raspberry Pis, last time I looked) they're tiny hobbyist niches happy to waste time getting stuff to work, just for fun.

Perhaps, don't know. And what industry?
Well, yeah, if we're talking viable desktop/laptop personal computing it's really only a 2-or-3-horse race between Mac, Windows PC and maybe Linux (which kinda covers ChromeOS and Android). If you count Linux, then it wins, since it has been multi-platform since the 90s and has the *nix tradition of API-level compatibility plus is dominated by open source software that anyone can port and distribute.

Windows comes last - partly thanks to (until relatively recently) legacy roots going back to CP/M and the first microcomputers.
 
Secondly this is why apple pressured developers to switch their code, during the transition from legacy macOS to OSX, it was harder for apple to convince developers to switch to native OSX apps, so it took longer.
I'd expect that porting an application from Carbon to Cocoa was much more work than taking a relatively current application and checking a box in Xcode to compile it for ARM in addition to X86.
 
Even early ARM only used 26-bit addressing, and the move to true 32 bits broke older RISC OS binaries.
The same problem showed up on the Atari ST and Macintosh because the original 68000 only had a 24-bit address bus. It wasn't as bad on the Amiga because Commodore told people not to use the upper 8 bits of addresses right from the start.
 
The same problem showed up on the Atari ST and Macintosh because the original 68000 only had a 24bit address bus.
Yeah, but the wrinkle on 26-bit ARM was that the program counter (R15, which could be read/written like any other register) was 32 bits, but the spare bits were used as the processor status flags. So they got preserved over subroutine calls (which was cool), but "just don't use them" wasn't an option.
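The R15 packing described above can be shown with a couple of bit masks. This is a rough sketch of the early (26-bit) ARM layout as I understand it - the word-aligned PC lived in bits 2-25, with the condition flags in the top bits and the processor mode in bits 0-1 - so saving and restoring R15 carried the flags along "for free", and code that relied on that broke when the PC later grew to a full 32 bits.

```python
# Rough sketch of 26-bit ARM's R15: program counter and status flags
# packed into one 32-bit register.

PC_MASK    = 0x03FFFFFC   # bits 2-25: word-aligned program counter
FLAGS_MASK = 0xFC000003   # bits 26-31 (condition/IRQ flags) and 0-1 (mode)

def pack_r15(pc, flags):
    """Combine a (word-aligned, <64MB) PC with status bits into one word."""
    return (pc & PC_MASK) | (flags & FLAGS_MASK)

def unpack_r15(r15):
    """Split R15 back into its PC and status-flag components."""
    return r15 & PC_MASK, r15 & FLAGS_MASK

r15 = pack_r15(0x00008000, 0x80000000)   # PC = 0x8000, N flag set
pc, flags = unpack_r15(r15)
print(hex(pc), hex(flags))   # 0x8000 0x80000000
```

A subroutine return that simply restored a saved R15 therefore restored the caller's flags too, which is exactly the behaviour that could not survive a move to a 32-bit PC.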
 
I'd expect that porting an application from Carbon to Cocoa was much more work than taking a relatively current application and checking a box in Xcode to compile it for ARM in addition to X86.
Well, yes, hence the Carbon framework hanging around for 10-11 years vs. 5-6 years for Rosetta 1 (and Rosetta 2 is already scheduled for deprecation by 2027).

Superficially, migrating to Windows-on-ARM should be even easier, considering that Microsoft has been promoting .NET/Common Language Runtime to developers since 2002, and that compiles to bytecode which runs on a virtual machine (a bit like Java) - however, it turns out that it's not that simple in the real world (mainly because of legacy issues, I believe). There do exist .NET-based apps where the same bytecode runs on x86 Windows, x86+arm64 Mac and x86+arm Linux (I use one - NextPVR - but that's server-side with a web interface rather than a traditional GUI - although I think Plan A, circa Windows Vista, was for the Windows UI to become HTML-based...)

I'd predict that if Apple ever shifts processor again, the change will be no big deal. The App Store already lets developers distribute apps as LLVM bytecode that gets translated on delivery to the target processor (maybe just for different ARM variants at the moment), which could be expanded or - if not - Apple are fairly ruthless in killing off legacy APIs and frameworks & discouraging "bare metal" coding. Even things like virtual machines are now handled with standard frameworks, so it really will be "just tick the box".

In terms of "killing off legacy apps" the end of 32-bit support and Carbon probably had more casualties than the move to Apple Silicon.
 
Mac is one of Apple's most important brands and product lines compared to something like Microsoft's own ARM powered Surface brand. They don't lack the resources to do what Apple did but to them the Surface brand isn't nearly as important. Surface sales make up a small fraction of their business model and revenue. To this day these devices remain a sad joke and it's because they don't care.

Apple succeeded because this is part of their core business and they care about getting it right. They didn't come up with Apple silicon to then ignore the software so half the apps would crash. They didn't create a fantastic compatibility layer only to throw in some garbage Snapdragon chips either. Nobody else designs their own silicon to run their own OS on (at least not for the consumer electronics we're talking about).

Mac might barely be 10% of Apple's revenue across all their brands but cultivating each one so carefully is what made iPhone the most popular smartphone and Airpods the most popular bluetooth headphones, and they've been doing this since the iPod over 20 years ago. As Tim would say, they know how to take it to the next level.
 
Both Rosetta and Rosetta 2 did something nobody else has done, which is convert instructions for one ISA into another ISA while running the applications in question. With the original Rosetta, PPC instructions were converted into x86 instructions upon execution. With Rosetta 2, the conversion is done on first launch of the app, with no need to recompile or translate on the fly while executing the code. There is no "pretending to be on" an older processor as Pipo2 suggested, it is literally converting instructions to run on the new architecture.

I also doubt Apple will need to shift architectures again, because now they finally control all aspects of the product design. During the 68k, PPC and Intel eras, Apple had to rely on whatever processors their suppliers were producing, and was often caught flat-footed when those partners started lagging in their own CPU development. Apple learned a lot about CPU and SoC design with the A-series used in every iPhone since the iPhone 4, and was able to leverage that experience when building Apple Silicon for the Mac.
 
I feel it helps that Apple makes both the hardware and the OS, so you don't have to deal with getting vendors on board. At the time, Apple also enjoyed a particularly ardent fan base who happily and willingly moved with Apple (part of the advantage of having a smaller user base is that it's easier to herd the flock and get them to move with you, so to speak). Developers were also more invested, and had fewer qualms about supporting a new OS and a new platform.

To put it another way, everything worked because everybody believed.
 
Both Rosetta and Rosetta 2 did something nobody else has done, which is convert instructions for one ISA into another ISA while running the applications in question.
The Rosettas worked really well, but to say they were the first? FX!32 wasn't that far behind.
 
Both Rosetta and Rosetta 2 did something nobody else has done, which is convert instructions for one ISA into another ISA while running the applications in question. With the original Rosetta, PPC instructions were converted into x86 instructions upon execution. With Rosetta 2, the conversion is done on first launch of the app, with no need to recompile or translate on the fly while executing the code. There is no "pretending to be on" an older processor as Pipo2 suggested, it is literally converting instructions to run on the new architecture.
@Basic75 mentioned FX!32, but I also want to mention that Rosetta's translated binaries actually do still pretend to be an x86 CPU. The translated program is still structured to emulate the actions of each individual translated x86 instruction. There is little or no cross-x86-instruction optimization, and canonical x86-compatible state is always maintained. There are tables mapping x86 entry points (jump targets) to Arm entry points. It also provides precise exceptions.

All these properties mean that if you want to, you can attach a complete standard x86 debugger to a Rosetta 2 process, single-step through x86 instructions, and observe program execution that should look exactly like how a real x86 would be running the x86 instructions.

This is all very different to what you'd get if you recompiled the source code with the target CPU set to Arm v8 or v9.
 
By the way, this does mean Rosetta 2 leaves some potential performance on the table. An optimization pass that was permitted to blur the boundaries between translated x86 instructions would improve performance, but Apple chose to prioritize observability and formal correctness over best possible performance.
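The trade-off can be sketched with a toy example (invented 8-bit mini-ISA, Python; this has nothing to do with Rosetta's actual code): faithful per-instruction translation keeps the canonical state — here a register plus an overflow flag — correct and observable at every instruction boundary, while a fused version computes the same final result in one go but materializes nothing in between.

```python
# Illustrative sketch only: per-instruction translation vs. a fusing
# optimizer, on a made-up 8-bit "ISA" with one register and one flag.

def run_faithful(insns, state):
    """One translated unit per source instruction: the canonical state
    (8-bit register r0 plus an overflow flag) is correct after every
    step, so a debugger could single-step and inspect each snapshot."""
    snapshots = []
    for _, imm in insns:                  # each insn: ("add", immediate)
        total = state["r0"] + imm
        state["flag"] = 1 if total > 255 else 0
        state["r0"] = total & 255
        snapshots.append(dict(state))     # observable at every boundary
    return snapshots

def run_fused(insns, state):
    """A fusing optimizer collapses the whole run into one host add:
    faster, and the final state happens to match here, but the
    intermediate flag/register values are never materialized."""
    total = state["r0"] + sum(imm for _, imm in insns)
    state["flag"] = 1 if total > 255 else 0
    state["r0"] = total & 255
    return []                             # nothing to single-step through

insns = [("add", 200), ("add", 100)]
faithful_state = {"r0": 0, "flag": 0}
fused_state = {"r0": 0, "flag": 0}
steps = run_faithful(insns, faithful_state)
run_fused(insns, fused_state)
print(faithful_state == fused_state)  # True: same final state
print(steps[0])                       # intermediate state only the faithful run can show
```

Note that for other instruction sequences this naive fused flag computation wouldn't even match the faithful final state, which is exactly why a translator that blurs instruction boundaries has to prove each such optimization safe first.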
 
Oops, a lot of critique. No problem. Right, what can I do? Perhaps something very simple, regarding Rosetta 2, which is given a bit too much credit in this thread. Fine, but I hesitate to agree 100%.

Testing the results from two different development systems for the Mac.
The results are app1 and app2. Both bump a counter as fast as they can for 4 seconds, within reasonable constraints: similar interrupt masks, just one thread, no GUI interaction, all in a terminal shell. The apps return the final value of the counters after 4 seconds.

Originally it was used to test code generation for Intel processors between two similar but different dev systems, i.e. two different code generators. Later it was used again to see the result with Rosetta 2.

Higher number is better.
tested on 2020 M1 MBP Ventura 13.2 with Rosetta2
app1 76956
app2 5264808

tested on 2018 i7 Mini Ventura 13.2 (without Rosetta2)
app1 15988000
app2 16352180

Brilliant, quite a result for Rosetta2... :(
This is the real world, can't help it.

Sure, AS native versions would certainly give us superior results, but that was not what we tested or wanted to see.
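The two apps themselves aren't shown, so here's a rough sketch (Python, single-threaded; my guess at the structure — the real apps, their "reasonable constraints" and interrupt-mask handling are unknown) of the kind of counter-bumping benchmark described:

```python
# Rough approximation of the benchmark described above: one thread bumps
# a counter for a fixed wall-clock window and reports the total.
# Higher is better; run the same binary natively and under translation.
import time

def bump_counter(seconds=4.0):
    counter = 0
    deadline = time.monotonic() + seconds
    # Check the clock only every so often, so timing calls don't dominate.
    while time.monotonic() < deadline:
        for _ in range(10_000):
            counter += 1
    return counter

if __name__ == "__main__":
    # The post used a 4-second window; a shorter one shows the shape too.
    print("counter:", bump_counter(0.5))
```

A gap as large as the one reported between app1 and app2 under Rosetta 2 would point at something in app1's generated code that translates particularly badly, rather than at the loop structure itself.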

Apple never said Rosetta2 was flawless. Certainly don't blame them. It's just that we create code which doesn't go well with Rosetta2.

I'm a simple programmer and have to deal with what certain code is said to do, and what it actually does. And sometimes it's "aha, well no, it doesn't". Fine if you leave performance on the table, but do you mind if we're not using it because of that? Actually it's a stick to enforce the creation of AS-native versions of our systems!
Agree with maflynn: just another 2 years, and Rosetta's gone. Another stick; we might get a wood.

And yes, I'm the one with 680x0, PPC, Intel and some AS experience. All on the metal, as they say. Atari, Mac and NeXT, and thus UNIX, thank you very much, and not to forget Open Firmware. Also diverse DSPs and embedded processors. Sorry, never a PC.
I don't read books and articles, no armchair, but I do check how the effing code runs. Unlike others. And that's all you'll get.

Have fun!
 
Oops a lot of critique. No problem. Right, what can I do? Perhaps something very simple. Simple regarding Rosetta2, which is given a bit too much credit in this thread. [...]

Without knowing exactly what you were running, it's impossible to know how accurate or relevant those test results are, let alone try to recreate the tests to assess the validity of your results. It's also just one example compared to the numerous examples, tests, and benchmarks specifically relating to software development which show results that contradict your own.

You say you don't read books and articles, which would be concerning to me, as coding is in a constant state of change. How can you keep up with updates to programming languages, regulatory compliance and best practices if you aren't reading any books or articles?
 