The voice analysis is done on remote servers, specifically Nuance's servers. The audio is processed there and returned to the device as text. The Siri app then determines what you meant and acts on it. Siri's value-add, beyond what the Nuance servers provide, is the UX plus the interpretation step: taking what was said and working out what was meant, and that part happens locally. So yes, if you took the actual Siri app on a non-iPhone-4S device and pointed it at an equally capable voice-analysis service (for instance, a Nuance server not provisioned for Apple), you would have full-blown Siri. The local app would be Siri and the remote servers would be Nuance (or Nuance-class), which is exactly the full Siri experience.
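To make the split concrete, here is a minimal sketch of the pipeline as described above: the heavy speech-to-text step lives on a remote (Nuance-class) server, while intent interpretation and the resulting action stay on the device. All the function names and the stubbed recognizer are purely illustrative assumptions, not Apple's or Nuance's actual API.

```python
def remote_speech_to_text(audio: bytes) -> str:
    """Stand-in for the remote Nuance-class recognizer.

    In the real system the device uploads compressed audio and receives
    text back; here it is faked so the sketch is self-contained.
    """
    fake_transcripts = {b"audio-weather": "what is the weather tomorrow"}
    return fake_transcripts.get(audio, "")

def interpret_intent(text: str) -> dict:
    """Local step: map the recognized text to an intent the app can act on."""
    if "weather" in text:
        return {"intent": "get_weather", "query": text}
    if "call" in text:
        return {"intent": "place_call", "query": text}
    return {"intent": "unknown", "query": text}

def handle_utterance(audio: bytes) -> dict:
    # 1. Remote: audio -> text (the only server-side step in this model)
    text = remote_speech_to_text(audio)
    # 2. Local: text -> intent; the app then acts on the intent
    return interpret_intent(text)
```

Under this model, swapping `remote_speech_to_text` for any equally capable recognition service leaves the local Siri experience unchanged, which is the point being made above.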