But how do you know this for a fact? You only know that Siri was unable to process your request, not that, during the outage, Siri could not analyse your voice into text at all.
As stated earlier, I have no knowledge of how Siri actually works, but it makes sense to me that it does straightforward voice recognition on the device and sends the resulting data to Apple's servers. Case: the user asks, "Siri, will I need an umbrella tomorrow?", and the device sends something like the following (a sketch of such a payload appears after the list):
Text: "WILL.I.NEED.UMBRELLA.TOMORROW"
Location: GPS-position
Time: Timestamp
Device-ID: UDID(?)
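Purely to make the idea concrete, here is a minimal sketch of what such a payload could look like if it were serialised as JSON. Every type, field and value in it is my own assumption for illustration, not anything I know about Apple's actual protocol.

```swift
import Foundation

// Hypothetical payload a Siri-like client might send after doing the
// speech recognition on the device. The names are assumptions.
struct RecognitionPayload: Codable {
    let text: String        // recognised words, e.g. "WILL I NEED UMBRELLA TOMORROW"
    let latitude: Double    // GPS position
    let longitude: Double
    let timestamp: Date     // when the request was made
    let deviceID: String    // some per-device identifier (UDID-like)
}

let payload = RecognitionPayload(
    text: "WILL I NEED UMBRELLA TOMORROW",
    latitude: 59.33,
    longitude: 18.07,
    timestamp: Date(),
    deviceID: "ABC123-EXAMPLE"
)

let encoder = JSONEncoder()
encoder.dateEncodingStrategy = .iso8601
if let data = try? encoder.encode(payload) {
    print(String(data: data, encoding: .utf8) ?? "")
    print("Payload size: \(data.count) bytes")   // a few hundred bytes at most
}
```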
This is obviously simplified, but let's say information akin to the above is sent to Apple. Apple's server will chew through the string of words and check the timestamp and device ID to see whether this is part of an ongoing "conversation". Does it need to take location into account? Yes: no part of the string or the earlier conversation seems to present alternative location information, so it checks the GPS position, fetches the forecast for that area, and presents the result to the user. Even if the task the user wants to perform is done locally, the deciphering of the request needs to be done on the server.
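Here is a rough sketch of that server-side decision flow. Again, every function and type name is a hypothetical stand-in for illustration, not anything Apple has documented; it only shows the order of checks I have in mind.

```swift
import Foundation

// Hypothetical shape of an incoming, already-recognised request.
struct IncomingRequest {
    let text: String
    let latitude: Double
    let longitude: Double
    let timestamp: Date
    let deviceID: String
}

// State of an ongoing "conversation" with the same device (assumed).
struct Conversation {
    var locationOverride: String?   // e.g. the user earlier said "in Paris"
}

// Look up an ongoing conversation by device ID and recency (stubbed).
func ongoingConversation(for deviceID: String, at time: Date) -> Conversation? {
    return nil
}

func handle(_ request: IncomingRequest) -> String {
    // 1. Timestamp + device ID: is this part of an ongoing conversation?
    let conversation = ongoingConversation(for: request.deviceID, at: request.timestamp)

    // 2. Does the text or the earlier conversation name another place?
    //    If not, fall back to the GPS position sent with the request.
    let place = conversation?.locationOverride
        ?? String(format: "%.2f,%.2f", request.latitude, request.longitude)

    // 3. Fetch the forecast for that place (stubbed) and phrase an answer.
    let forecast = "light rain"   // placeholder for a real weather lookup
    return "Forecast for \(place): \(forecast) – yes, bring an umbrella."
}

print(handle(IncomingRequest(text: "WILL I NEED UMBRELLA TOMORROW",
                             latitude: 59.33, longitude: 18.07,
                             timestamp: Date(), deviceID: "ABC123-EXAMPLE")))
```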
My reason for thinking a model such as this makes a lot more sense is threefold: first, the insane amount of data that would have to be sent if Apple were to receive sound samples rather than compact data sets; second, the added expense for the user, who pays for all that traffic; and third, the time delay, which would make Siri extremely slow to respond.
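As a very rough illustration of the first and third points, here is a back-of-envelope comparison under assumed numbers: about three seconds of 16 kHz, 16-bit mono audio versus a text payload of a few hundred bytes. The exact figures are guesses; the order-of-magnitude gap is the point.

```swift
// Back-of-envelope comparison; the sample rate, bit depth, utterance length
// and payload size are all assumptions chosen only to show the scale.
let seconds = 3.0
let sampleRate = 16_000.0      // samples per second, assumed
let bytesPerSample = 2.0       // 16-bit mono, assumed
let audioBytes = seconds * sampleRate * bytesPerSample   // = 96,000 bytes

let textPayloadBytes = 300.0   // roughly the JSON sketch above

print("Raw audio:    \(Int(audioBytes)) bytes")
print("Text payload: \(Int(textPayloadBytes)) bytes")
print("Ratio:        \(Int(audioBytes / textPayloadBytes))x")   // ≈ 320x
```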