Just off the top of my head, here are three differences between direct (touch) and indirect (mouse/trackpad) interfaces:
1. A mouse allows you to hover. There's no such function in a direct interface system.
2. A direct interface system has at most one cursor on the screen at a time: the text cursor. A mouse-based system can have two cursors: the mouse cursor and the text cursor. Believe it or not, this is confusing for beginners ("I put my cursor on it, but when I type, the stuff appears over there!").
3. A direct interface (touch) system allows multitouch: you can directly manipulate two (or more) objects on the screen at the same time. While trackpads and the Magic Mouse support multi-finger gestures, with an indirect interface you can still manipulate only one object at a time, because there's only one mouse pointer on the screen.
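As a rough sketch of that third point (assuming a plain UIKit iPad app; the class names here are made up for illustration), each draggable view can track its own touch, so two fingers can move two objects at once. A single mouse pointer can only ever drive one of them at a time:

```swift
import UIKit

// Sketch only: each DraggableView follows whatever touch lands on it,
// so two fingers on a touch screen can drag two views simultaneously.
// An indirect pointer (mouse/trackpad) can only ever be "on" one view.
class DraggableView: UIView {
    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first, let container = superview else { return }
        // Move this view by the distance its touch travelled since the last event.
        let current = touch.location(in: container)
        let previous = touch.previousLocation(in: container)
        center.x += current.x - previous.x
        center.y += current.y - previous.y
    }
}

class TwoObjectViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // Two independent draggable squares; with touch you can move both at once.
        for (index, color) in [UIColor.systemBlue, UIColor.systemRed].enumerated() {
            let square = DraggableView(frame: CGRect(x: 60 + index * 160, y: 120, width: 100, height: 100))
            square.backgroundColor = color
            view.addSubview(square)
        }
    }
}
```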
There's probably more, but the upshot is that if you're building a truly hybrid app, you need to take these factors into account. For example, you can't use hover for critical functions, because you can't depend on a mouse pointer being available. You also can't rely on two-object manipulation (move this thing left while moving that thing up), because it isn't possible when the input device is a mouse.
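To make the hover caveat concrete, here's a minimal sketch (again UIKit, hypothetical names): hover only adds a cosmetic highlight when a pointer happens to be attached, while the actual action stays on tap, so the control still works with touch alone:

```swift
import UIKit

// Sketch only: hover is an enhancement, never the only path to the function.
class HybridButtonController: UIViewController {
    let actionButton = UIButton(type: .system)

    override func viewDidLoad() {
        super.viewDidLoad()
        actionButton.setTitle("Delete", for: .normal)
        actionButton.frame = CGRect(x: 40, y: 80, width: 120, height: 44)
        view.addSubview(actionButton)

        // Baseline: a tap (or click) triggers the critical function.
        actionButton.addTarget(self, action: #selector(performDelete), for: .touchUpInside)

        // Enhancement only: highlight on hover when an indirect pointer is present.
        let hover = UIHoverGestureRecognizer(target: self, action: #selector(hoverChanged(_:)))
        actionButton.addGestureRecognizer(hover)
    }

    @objc func performDelete() {
        // The critical function lives here, reachable by touch alone.
        print("delete confirmed")
    }

    @objc func hoverChanged(_ recognizer: UIHoverGestureRecognizer) {
        switch recognizer.state {
        case .began, .changed:
            actionButton.alpha = 0.6   // purely cosmetic hover feedback
        default:
            actionButton.alpha = 1.0
        }
    }
}
```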
Having said that, I'm doing a bunch of work in Keynote today - on my rMBP, not on my IPP, because I have a trackpad on the rMBP. When I have some time, I'll move the project over to the IPP and see how it works, but I'm guessing it'll be much tougher to get pixel-precise moves with the touch-first interface. We'll see.