bousozoku said:
It's possible but unlikely.
If Apple considered making the operating system even more hardware-independent, they could craft a virtual machine like that which IBM has already implemented on PowerPC hardware as a 512-bit processor. This would allow applications to be written for the virtual machine and require only minimal changes to the executable application from time to time, but nothing would have to be re-compiled. e.g., applications written for the original IBM System/38, which was released in 1979, are still able to run on the latest iSeries machine, even though the hardware has been changed many, many times, even changing from CISC to RISC.
The trouble with this is the overhead. It takes a strong machine to put forth reasonable performance even when the virtual machine code is in tune with the actual processor instruction set.
Gee, there's so much to write about this that it's hard to know where to start. It could just be the way you've written your post (meaning, you may well be aware of what I'm about to write).
With the S/38 and the AS/400 up to the NPM/NMI (new programming model, new machine interface) there is a multi-layered OS architecture. The operating system (CPF on the S/38, OS/400 on the AS/400) is written to a specification and API called the MI - the machine interface. Beneath the MI is another whole body of code (the lower half of an OS, you could say) called VMC on the S/38 and SLIC on the AS/400. The scheduler, resource management, etc. are all down there, as is the database code (which is what led to the S/38 claims of a database in hardware - which is stretching the truth to breaking point). But file management, for example, is up in CPF or OS/400. Beneath the VMC/SLIC is the microcode (it used to be called HMC on the S/38) and the hardware (which used to be 48-bit CISC chips).
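Just to make that layering concrete, here's a rough C++ sketch - my own toy model, with invented names like MachineInterface and Slic, nothing like IBM's real code - of the idea that the upper half of the OS is written only to the MI, and only the lower half knows about the actual hardware:

[code]
#include <cstdio>
#include <string>

// The MI: the only "instruction set" the upper half of the OS ever sees.
// These operation names are invented purely for illustration.
class MachineInterface {
public:
    virtual ~MachineInterface() = default;
    virtual void createObject(const std::string& name) = 0;   // e.g. create a database file
    virtual void dispatchTask(const std::string& task) = 0;   // scheduling lives below the MI
};

// The lower half (VMC on the S/38, SLIC on the AS/400): implements the MI
// on whatever microcode/hardware happens to be underneath (CISC originally, PowerPC later).
class Slic : public MachineInterface {
public:
    void createObject(const std::string& name) override {
        std::printf("SLIC: creating object '%s' on the real hardware\n", name.c_str());
    }
    void dispatchTask(const std::string& task) override {
        std::printf("SLIC: dispatching task '%s' (the scheduler is below the MI)\n", task.c_str());
    }
};

// The upper half (CPF / OS/400): written only to the MI, never to the hardware.
void fileManagement(MachineInterface& mi) {
    mi.createObject("CUSTOMER_FILE");
    mi.dispatchTask("OPEN CUSTOMER_FILE");
}

int main() {
    Slic lowerHalf;              // swap in a different lower half and nothing above changes
    fileManagement(lowerHalf);
}
[/code]

The whole point is that you can replace the lower half (and the hardware under it) without touching anything written to the MI.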
When you compile an RPG program (for example), it is a multi-stage compilation process. First, an MI representation is created, which also carries all the symbolic information. This is called the program template. This is the "above the MI" part of the compile. Next, a VMC/SLIC program called the translator runs and compiles the MI representation down to the executable (a program object). This is the "beneath the MI" part of the compile. However, the full MI representation and all the symbolic information are stored with the program object in an associated space. This is what allowed any user S/38 or AS/400 program (prior to NPM) to be symbolically debugged without having to do a special compile. The symbolic info was always kept.
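Here's a similarly rough sketch of that two-stage compile, again purely illustrative (ProgramObject, MiTemplate and the function names are all mine): the language compiler emits MI plus symbols, the translator emits native code, and the template stays attached to the program object in its associated space:

[code]
#include <cstdio>
#include <string>
#include <vector>

// Toy model only; real program templates and objects are far richer than this.
struct MiTemplate {                       // the "above the MI" output of the RPG compiler
    std::vector<std::string> miInstructions;
    std::vector<std::string> symbols;     // symbolic debug info kept alongside the MI
};

struct ProgramObject {
    std::vector<unsigned char> nativeCode;  // what the translator produced for this machine
    std::string targetIsa;                  // the ISA that native code was built for
    MiTemplate observability;               // the "associated space": template + symbols
};

// Stage 1: the language compiler emits MI, not machine code (instructions here are fake).
MiTemplate compileRpgToMi(const std::string& /*source*/) {
    return { {"CRTOBJ", "CALLX", "RTX"}, {"MAIN", "CUSTNO", "TOTAL"} };
}

// Stage 2: the "beneath the MI" translator turns MI into native code for one ISA.
std::vector<unsigned char> translate(const MiTemplate& t, const std::string& /*isa*/) {
    return std::vector<unsigned char>(t.miInstructions.size() * 4, 0x90);  // fake native code
}

ProgramObject createProgram(const std::string& source, const std::string& isa) {
    MiTemplate tmpl = compileRpgToMi(source);
    return { translate(tmpl, isa), isa, tmpl };   // the template is stored, not thrown away
}

int main() {
    ProgramObject pgm = createProgram("... RPG source ...", "CISC");
    std::printf("built for %s; %zu MI instructions kept in the associated space\n",
                pgm.targetIsa.c_str(), pgm.observability.miInstructions.size());
}
[/code]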
So, when the AS/400 came along and replaced the S/38, the AS/400 SLIC translator could compile from AS/400 MI or from S/38 MI. The same thing happened when the AS/400 switched from the OPM (original programming model) to the NPM (when the SLIC was rewritten from PL/MP to C++): S/38 programs (and OPM AS/400 programs) appeared to run without any recompilation.
What actually happened was that when an S/38 program object was run for the first time on an NPM AS/400, the second half of the compiler - the SLIC translator - was re-run, taking the still-stored S/38 MI representation and compiling that S/38 MI into an NPM AS/400 program object. In other words, the NPM SLIC translator could compile NMI or OMI (the AS/400 or S/38 versions of the MI).
Now, all of this stored MI and symbolic stuff became known as a program's observability info. For a long time you have been able to strip the observability info from a program object (to save disk space or for confidentiality). If you do remove the observability info, then you lose the ability to debug symbolically (without specifically recompiling the source code for that option) and, of course, you lose the ability for the SLIC translator to do any cross-architecture compilations for you. For example, an OPM AS/400 program without observability info will NOT run on a PowerPC AS/400 without a full recompile from source code.
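To tie those last two paragraphs together, here's one more toy sketch (again, my own invented names, not real OS/400 interfaces) of what first-run re-translation and observability stripping amount to conceptually:

[code]
#include <cstdio>
#include <optional>
#include <stdexcept>
#include <string>
#include <vector>

// Toy model again (all names invented): a program object that may or may not
// still be carrying its observability info.
struct MiTemplate {
    std::vector<std::string> miInstructions;
    std::vector<std::string> symbols;
};

struct ProgramObject {
    std::vector<unsigned char> nativeCode;
    std::string targetIsa;                    // the ISA the stored native code was built for
    std::optional<MiTemplate> observability;  // MI template + symbols, unless stripped
};

// Stand-in for the SLIC translator: MI in, native code out.
std::vector<unsigned char> translate(const MiTemplate& t, const std::string& /*isa*/) {
    return std::vector<unsigned char>(t.miInstructions.size() * 4, 0x90);
}

// Stripping observability saves space but forfeits symbolic debug and re-translation.
void stripObservability(ProgramObject& p) { p.observability.reset(); }

// Roughly what happens the first time a program object is run on a newer machine.
void runFirstTime(ProgramObject& p, const std::string& machineIsa) {
    if (p.targetIsa != machineIsa) {
        if (!p.observability)
            throw std::runtime_error("no observability info: full recompile from source needed");
        p.nativeCode = translate(*p.observability, machineIsa);   // silent re-translation
        p.targetIsa  = machineIsa;
    }
    std::printf("running native code built for %s\n", p.targetIsa.c_str());
}

int main() {
    ProgramObject oldPgm{ {0x01, 0x02}, "CISC",
                          MiTemplate{ {"CRTOBJ", "RTX"}, {"MAIN"} } };
    runFirstTime(oldPgm, "PowerPC");                // re-translated transparently on first run
    stripObservability(oldPgm);
    try { runFirstTime(oldPgm, "SomeFutureIsa"); }  // now it cannot follow the hardware
    catch (const std::exception& e) { std::puts(e.what()); }
}
[/code]

As long as the MI template travels with the object, the program can follow the hardware; strip it, and you're back to needing the source.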