I mentioned Siri because it is a very sophisticated interface: it understands and computes natural-language queries. I think it's part of what drives current developments in search technology, just like Google Voice and Cortana.
It seems to understand, but all of that is just guessing. Consider every phrase you say as a symbol: some point towards a specific topic, and a score is calculated for each topic. Siri then looks for a target phrase (like "where can I...", which leads to finding a location) and uses the topic score (e.g. Japanese) to pick a predefined action.
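To make that concrete, here's a minimal sketch of the kind of keyword-scoring model I mean. All the keywords, weights, and action names are invented for illustration; nobody outside Apple knows Siri's actual pipeline:

```python
# Toy sketch of a keyword-scoring intent matcher.
# Keywords, weights, and intents here are invented for illustration;
# Siri's real implementation is not public.

QUERY = "where can i find japanese food"

# Each word votes for one or more topics with some weight.
TOPIC_WEIGHTS = {
    "japanese":  {"cuisine": 1.0, "language": 0.6},
    "food":      {"cuisine": 1.0},
    "translate": {"language": 1.0},
}

# Target phrases that map to a predefined action template.
TARGET_PHRASES = {
    "where can i": "find_location",
    "how do i say": "translate_phrase",
}

def classify(query: str) -> tuple[str, str]:
    # Score every topic by summing the votes of each word.
    scores: dict[str, float] = {}
    for word in query.lower().split():
        for topic, w in TOPIC_WEIGHTS.get(word, {}).items():
            scores[topic] = scores.get(topic, 0.0) + w
    topic = max(scores, key=scores.get) if scores else "unknown"

    # Pick the first target phrase that appears in the query.
    action = next((a for p, a in TARGET_PHRASES.items() if p in query.lower()),
                  "web_search")
    return action, topic

print(classify(QUERY))  # -> ('find_location', 'cuisine')
```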
This might seem like a good approach, until you try to add things that sound alike, or something like this:
(Using this model you can build a prototype extremely fast, but you need a giant database, and you give up any chance of third-party extensions.)
I built my own HomeKit light, and since beta 2 of iOS 8 I've been having trouble turning off the light in my room: Siri's first guess is "Set the light", which doesn't fail since it's a legitimate command (I added an intensity control), but it automatically cuts off the second guess, "Turn off the light".
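Here's a hedged sketch of that failure mode (the patterns and action names are hypothetical; they only mirror the behavior I observed, not Apple's actual code). A first-guess-wins matcher locks out the correct command as soon as a looser pattern also counts as a legitimate match:

```python
# Hypothetical sketch of the "first legitimate guess wins" problem.
import re

# Guesses are tried in order; the first pattern that matches "wins",
# even if a later guess fits the user's intent better.
GUESSES = [
    (re.compile(r"\bset the light\b|\bthe light\b"), "set_light"),   # loose pattern
    (re.compile(r"\bturn off the light\b"),          "turn_off_light"),
]

def dispatch(utterance: str) -> str:
    for pattern, action in GUESSES:
        if pattern.search(utterance.lower()):
            return action  # stops here: later guesses are never considered
    return "no_match"

# "Set the light" matches first because "the light" appears in the utterance,
# so the better guess "turn_off_light" is never reached.
print(dispatch("Turn off the light in my room"))  # -> 'set_light'
```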
In fact, fixing this problem with Siri actually became my graduate project, but that turned out to be problematic because there's no literature on what Siri is actually using. All I could do was trace back every single error Siri made.
Don't get me wrong, I love Siri. It's the highest standard I have encountered so far. (Google Now is basically voice search, which makes sense since their stock price depends on how many searches people run per day. Cortana is just sad; its only advantage is the voice recognition engine, and Apple can beat that either with their own team or by upgrading their backend servers to Nuance NaturallySpeaking 13, not the 11/12 that OS X downloads for local dictation.)
But now that my iPhone 4s has become the oldest phone on duty in my family, I would expect Tim Cook to have at least made some of the changes I've been complaining about in email these past few years.
(I know I have gone off topic, so if you want, I can just stop replying.)