
Transitive's CEO discusses Rosetta


MacBytes
Aug 10, 2005, 09:13 AM

Category: Opinion/Interviews
Link: Transitive's CEO discusses Rosetta (http://www.macbytes.com/link.php?sid=20050810091300)

Posted on MacBytes.com (http://www.macbytes.com)
Approved by Mudbug

outerspaceapple
Aug 10, 2005, 09:40 AM
How's that different from plain old emulation?
Emulation typically involves converting one instruction at a time. We do something very different. We look at blocks of instructions and convert them to an intermediate representation that allows us to understand the higher-level semantics of the code. This is what allows us to achieve breakthrough performance.

can i be the first to say... huh?

this really is a breakthrough in emulation. maybe with this new concept we'll finally see n64 roms running at full speed on an 800MHz G4.

emulation is so slow usually, but wow. I bet these guys are gonna make a lot of money if they can do this for other types of architecture as well.

mkrishnan
Aug 10, 2005, 09:53 AM
can i be the first to say... huh?

I understand what he means, but I don't understand how they are able to make it work both reliably and quickly -- I'd suspect such a scheme would frequently fail to parse a block and be forced to drop back down to traditional emulation (basically "parsing" single instructions at a time), but it seems like Rosetta doesn't have that issue....

Whatever mojo they're working, I'm duly impressed! :D

nagromme
Aug 10, 2005, 10:25 AM
Maybe it just parses a handful of really common, really useful code structures, handles the rest one at a time, but still sees a big boost?

broken_keyboard
Aug 10, 2005, 10:51 AM
What rude questions from the interviewer.
No wonder it was so short.

iMeowbot
Aug 10, 2005, 11:09 AM
This is a refinement of the way DEC dealt with the VAX to Alpha migration.

Back then, there was a lot of code written in Macro32 (VAX assembly language). It was decided to write a Macro32 compiler to get that old software ported. In that case it was source rather than binary compatibility, but it's the same idea: treat the machine language as a high-level language and turn it into something that runs efficiently on the target processor. HP repeated the trick for the migration to Itanic, and much of VMS is still written in VAX assembly language.

The same sorts of optimizations that can be applied to more traditional compiled languages can be used with this, so performance isn't bad.

Transitive has taken the idea to the next step, using the object code as source for a compiler and doing the translation on the fly. It's one of those things that makes you think "duh, why didn't anyone think to do it that way before?" (all the really cool ideas seem to be like that, don't they?)

mkrishnan
Aug 10, 2005, 11:38 AM
What rude questions from the interviewer.
No wonder it was so short.

I think that is the standard format of this feature in Wired (Hot Seat) -- both in tone and in length. Kind of like Crossfire or McLaughlin Group....

Lacero
Aug 10, 2005, 11:41 AM
Would it be safe to use the analogy of someone translating individual words from a foreign language, and then trying to piece those translated words into a sentence? The words may not fit, so a trial-and-error process follows, which slows down the translation.

Whereas with Rosetta, whole sentences are translated at once, which avoids those errors and is fast.

iMeowbot
Aug 10, 2005, 12:14 PM
Would it be safe to use the analogy of someone translating individual words from a foreign language, and then trying to piece those translated words into a sentence? The words may not fit, so a trial-and-error process follows, which slows down the translation.
Kind of. Machine language is baby talk. You can't simply say "make toast" and get toast. Instead you have to say "Retrieve the bag from the cabinet. Unfasten the plastic thingy. Place the plastic thingy aside. Remove two slices of bread from the bag. Place one slice in the left slot. Place the other slice in the right slot. Twist the end of the bag. Pick up the plastic thingy. Reattach the plastic thingy to the bag. Return the bag to the cabinet. Press the toaster lever. Wait for the bread to pop up. Remove one piece of toast from the left slot. Remove the other piece of toast from the right slot." A good compiler will see that familiar sequence and say "Oh! Make toast!" and translate that whole idea to its native form of baby talk, rather than translate each of those steps one at a time.
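
To put the same idea in (very loose) code terms, here's a toy sketch I made up -- not Transitive's actual approach, just the general shape of it: a translator that recognizes a whole familiar sequence and emits one native operation for it, falling back to step-by-step translation when it doesn't recognize the pattern.

# Toy sketch: translate a recognized block as one unit,
# otherwise fall back to one-step-at-a-time translation.
# (Names and "idioms" are invented for illustration.)
KNOWN_IDIOMS = {
    ("load_bread", "insert_slot_left", "insert_slot_right",
     "press_lever", "wait", "remove_toast"): "make_toast",
}

def translate_one(step):
    # Slow path: translate a single low-level step literally.
    return "native_" + step

def translate_block(steps):
    """Return native operations for a block of source steps."""
    key = tuple(steps)
    if key in KNOWN_IDIOMS:
        # Fast path: the whole familiar sequence becomes one native op.
        return [KNOWN_IDIOMS[key]]
    # Fallback: translate each step one at a time.
    return [translate_one(s) for s in steps]

print(translate_block(["load_bread", "insert_slot_left", "insert_slot_right",
                       "press_lever", "wait", "remove_toast"]))
# -> ['make_toast']

The point is just that the fast path handles a whole block at once, while the slow path is the classic one-instruction-at-a-time emulation.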

Eric5h5
Aug 10, 2005, 12:52 PM
can i be the first to say... huh?


As far as I can tell, that's just JIT emulation, which has been around for ages. Some examples of programs that use this technique to run much faster than the old-fashioned "one instruction at a time" emulation: UAE (on x86 anyway), SheepShaver and Basilisk II (ditto), VirtualPC, and MacOS. Yes, that's how early PPC machines could run 68K code anywhere near as fast as a real 68K machine. The only real drawback to JIT is that it uses up lots of memory to store those blocks of translated code. That's one reason why MacOS memory requirements suddenly shot way up on PPC machines compared to 68K machines.
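
If you want a feel for what that looks like, here's a rough, made-up sketch of JIT block caching (not taken from any of those emulators): translate a block once, stash it by its start address, and reuse it -- and that cache is exactly where the extra memory goes.

# Made-up sketch of a JIT block cache (illustration only).
translated = {}  # guest start address -> translated host code

def translate(guest_code, start):
    """Translate one guest basic block into a list of host 'instructions'."""
    block, pc = [], start
    while pc < len(guest_code):
        op = guest_code[pc]
        block.append("host_" + op)   # pretend per-instruction translation
        pc += 1
        if op.startswith("branch"):  # a branch ends the basic block
            break
    return block

def execute(guest_code, start):
    if start not in translated:
        # Translate once and keep the result around -- this cache is
        # where the memory cost comes from.
        translated[start] = translate(guest_code, start)
    return translated[start]         # later visits reuse the translation

print(execute(["load", "add", "branch_if_zero"], 0))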

this really is a breakthrough in emulation. maybe with this new concept we'll finally see n64 roms running at full speed on an 800MHz G4.

If someone would port the Mupen64 JIT code (which has existed for quite some time) from x86 to PPC, that would happen.

--Eric

michaelrjohnson
Aug 10, 2005, 01:36 PM
Kind of. Machine language is baby talk. You can't simply say "make toast" and get toast. Instead you have to say "Retrieve the bag from the cabinet. Unfasten the plastic thingy. Place the plastic thingy aside. Remove two slices of bread from the bag. Place one slice in the left slot. Place the other slice in the right slot. Twist the end of the bag. Pick up the plastic thingy. Reattach the plastic thingy to the bag. Return the bag to the cabinet. Press the toaster lever. Wait for the bread to pop up. Remove one piece of toast from the left slot. Remove the other piece of toast from the right slot." A good compiler will see that familiar sequence and say "Oh! Make toast!" and translate that whole idea to its native form of baby talk, rather than translate each of those steps one at a time.
Thank you very much for that example. It really makes a whole lot of sense... now. :) Very cool technology.

iMeowbot
Aug 10, 2005, 01:57 PM
As far as I can tell, that's just JIT emulation, which has been around for ages.
It is partly based on JIT, with some neat optimization twists. There are about a dozen patent applications (none apparently granted yet) on file with the USPTO (search on inventor name Rawsthorne) explaining their additions. Among other goodies, they are being very smart about mapping registers only as much as necessary, ignoring side effects that aren't going to be used, keeping track of entry and exit points so those shortcuts don't backfire, and so on.
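
Just to illustrate the "ignoring side effects that aren't going to be used" part with a toy example of my own (not anything from the actual patent text): a block translator can look ahead and skip computing condition flags that no later instruction ever reads.

# Toy sketch of dead-side-effect elimination in a block translator.
# Each instruction is (name, sets_flags, reads_flags); format is invented.
block = [
    ("add", True,  False),
    ("sub", True,  False),   # flags set here are overwritten before use
    ("cmp", True,  False),
    ("beq", False, True),    # only this instruction actually reads flags
]

def flags_needed(block, i):
    """Does any later instruction read the flags before they are set again?"""
    for name, sets, reads in block[i + 1:]:
        if reads:
            return True
        if sets:
            return False      # overwritten first, so this flag write is dead
    return False

for i, (name, sets, reads) in enumerate(block):
    if sets and not flags_needed(block, i):
        print(name + ": skip flag computation (result never used)")
    else:
        print(name + ": translate fully")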

nagromme
Aug 10, 2005, 02:14 PM
Retrieve the bag from the cabinet. Unfasten the plastic thingy. Place the plastic thingy aside. Remove two slices of bread from the bag. Place one slice in the left slot. Place the other slice in the right slot. Twist the end of the bag. Pick up the plastic thingy. Reattach the plastic thingy to the bag. Return the bag to the cabinet. Press the toaster lever. Wait for the bread to pop up. Remove one piece of toast from the left slot. Remove the other piece of toast from the right slot.

Whoah... that works GREAT! Bookmarked!

michaelrjohnson
Aug 10, 2005, 02:25 PM
Whoah... that works GREAT! Bookmarked!
What? Now you know how to make toast?! :D

mkrishnan
Aug 10, 2005, 02:30 PM
It is partly based on JIT, with some neat optimization twists. There are about a dozen patent applications (none apparently granted yet) on file with the USPTO (search on inventor name Rawsthorne) explaining their additions. Among other goodies, they are being very smart about mapping registers only as much as necessary, ignoring side effects that aren't going to be used, keeping track of entry and exit points so those shortcuts don't backfire, and so on.

In any event, the fact remains that, without the kinds of limitations involved in the WINE approach of emulating APIs (which, of course, can be very fast), Rosetta is managing to robustly deliver a level of speed that is pretty unheard of. So I tend to agree that whatever they're doing, it isn't completely old hat, and that there are at least some very innovative elements to it.

SiliconAddict
Aug 10, 2005, 04:12 PM
Hmmm, I wonder how this tech will use dual-core or dual-processor systems? Or will it even take advantage of such systems? Hmmm. :confused:

GodBless
Aug 10, 2005, 04:15 PM
I've been wanting to hear more about this technology and its repercussions. This is a good article for explaining some of the possible results of extended use of this technology.

jhu
Aug 10, 2005, 08:42 PM
This is a refinement of the way DEC dealt with the VAX to Alpha migration.

Back then, there was a lot of code written in Macro32 (VAX assembly language). It was decided to write a Macro32 compiler to get that old software ported. In that case it was source rather than binary compatibility, but it's the same idea: treat the machine language as a high-level language and turn it into something that runs efficiently on the target processor. HP repeated the trick for the migration to Itanic, and much of VMS is still written in VAX assembly language.

The same sorts of optimizations that can be applied to more traditional compiled languages can be used with this, so performance isn't bad.

Transitive has taken the idea to the next step, using the object code as source for a compiler and doing the translation on the fly. It's one of those things that makes you think "duh, why didn't anyone think to do it that way before?" (all the really cool ideas seem to be like that, don't they?)

so they've basically made a java compiler, but using ppc code instead of java byte-code.

mkrishnan
Aug 10, 2005, 08:46 PM
so they've basically made a java compiler, but using ppc code instead of java byte-code.

Well, except Java compilers take source code and convert it into something intermediate between code and binary (bytecode, I think, is the word they use?). Rosetta takes binaries and back-converts them to something like bytecode.
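
As a made-up illustration of that back-conversion (not Rosetta's real IR, just the general idea): the translator decodes raw machine instructions into higher-level IR operations, which can then be optimized and re-emitted for the target CPU, roughly the way a JVM handles bytecode.

# Hypothetical sketch: "lifting" machine instructions into a simple IR.
# The encodings and IR names here are invented for illustration.
def lift(instruction):
    """Decode one fake PPC-style instruction word into an IR operation."""
    opcode = instruction >> 24           # top byte selects the operation
    ra = (instruction >> 16) & 0xFF
    rb = (instruction >> 8) & 0xFF
    rd = instruction & 0xFF
    if opcode == 0x01:
        return ("IR_ADD", rd, ra, rb)    # rd = ra + rb
    if opcode == 0x02:
        return ("IR_LOAD", rd, ra)       # rd = memory[ra]
    return ("IR_UNKNOWN", instruction)

# A Java compiler goes source -> bytecode -> native;
# a binary translator instead goes native (PPC) -> IR -> native (x86).
program = [0x01020304, 0x02050006]
print([lift(insn) for insn in program])
# -> [('IR_ADD', 4, 2, 3), ('IR_LOAD', 6, 5)]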

Willie Sippel
Aug 10, 2005, 09:13 PM
mkrishnan,

except that Wine Is Not an Emulator... It's a mere (?! ;-) ) re-implementation of the complete Windows API on top of UNIX, and therefore performs _natively_. There's no way emulation could ever reach native speeds (on comparable hardware, of course).

BTW, Wine is no small hack; it has been in active development for more than 12 years now, with several hundred coders working on it - and it's not even beta code yet (the first beta in Wine's history is scheduled for later this year). Writing an emulator (or a VM like VMware or Bochs) would have been much faster and easier, but wouldn't give you native performance, direct hardware access and full integration... Those are the reasons why games or apps like Maya, Reason or FCP suck big time on Rosetta.

Plus, with 'standard' emulation like Rosetta, VPC or VMWare, you'd still need the OS, or parts of the OS, you want to emulate, which (for running Windows apps) means paying at least 200 dollars...

mkrishnan
Aug 10, 2005, 09:28 PM
except that Wine Is Not an Emulator... It's a mere (?! ;-) ) re-implementation of the complete Windows API on top of UNIX, and therefore performs _natively_. There's no way emulation could ever reach native speeds (on comparable hardware, of course).

Willie, WINE is not an emulator in the traditional sense (hence its name) -- it doesn't emulate the x86 hardware architecture, but it does emulate the Windows APIs. Or reconstruct them, or whatever you want to call it. Which is what I said the first time. And yes, that's exactly why it's faster. And why it's surprising that Rosetta seems to get almost comparable speed to this approach using something much more like traditional emulation.

Eric5h5
Aug 11, 2005, 01:20 AM
Willie, WINE is not an emulator in the traditional sense (hence its name) -- it doesn't emulate the x86 hardware architecture, but it does emulate the Windows APIs. Or reconstruct them, or whatever you want to call it. Which is what I said the first time. And yes, that's exactly why it's faster. And why it's surprising that Rosetta seems to get almost comparable speed to this approach using something much more like traditional emulation.

The problem is that you're talking about apples and oranges. Getting a program for one OS to run on a different OS (WINE) hasn't the least thing in the world to do with translating CPU instructions from one architecture to another (Rosetta).

In fact, you can combine the two, and have "WINE + Rosetta = run Windows programs on current PPC Macs without buying Windows". (Apparently Transitive says they made a version that does x86 to PPC last fall.) It would be interesting to compare that to Darwine, which is "WINE + Qemu = run Windows programs on current PPC Macs without buying Windows", and see how much faster Rosetta actually is.

Anyway, I still don't see Rosetta as particularly groundbreaking, because even with clever optimizations (thanks for that, iMeowbot), it still seems essentially like JIT, and the speed estimates I've seen aren't THAT far out of line with JIT engines that have been around for a long time. Faster, yeah, but not orders of magnitude or anything. Drawing a line between the hype and the skepticism, I'm going to guess that typical average performance--when looking at all different types of apps together--will be around 50%-60% native speed. Which is impressive but not industry-transforming.

--Eric