I claim no expertise in the field, which is why I specifically cited the Wired article, so you should direct your questions to Robert McMillan (the article's author) or the comments section in the Wired piece. I could have misquoted the article, so please refer to it. Whatever technology is employed, some of the recent improvements sound impressive, which is why I mentioned it.
I don't think you misquoted the article; I just think the article is naive. Work on neural net stimulus recognition exploded in the mid-1980s. It has been well funded by the military, businesses, medicine, etc. It doesn't really matter whether the problem is to recognise tanks, patterns of stock fluctuations, tumours or phonemes. The problem is the same: the stimuli within a category can vary so much in comparison to the variation among categories that subtle features need to be extracted, features that cannot be captured by simple (linear) equations. The neural nets essentially learn what the more sophisticated equations should be in order to categorise the stimuli. The Wired article cited Hinton, who's been around for ages. I am not an expert either, although I know something about the topic, but my impression is that the main breakthroughs were made in principle ages ago. The issue now is implementation on a small device like a mobile phone.
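To make that point concrete, here's a minimal sketch (plain Python/NumPy, a toy illustration and nothing to do with any actual speech system) of a one-hidden-layer net learning XOR, the classic case where no single linear equation can separate the categories:

```python
import numpy as np

# XOR-style data: within-category variation defeats any single
# linear boundary, so a linear classifier cannot solve this.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# One hidden layer of 4 units; weights start small and random.
W1 = rng.normal(scale=1.0, size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(scale=1.0, size=(4, 1))
b2 = np.zeros(1)

lr = 1.0
for step in range(5000):
    # Forward pass: the hidden layer builds the "more sophisticated
    # equation" (a nonlinear combination of features) automatically.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: plain gradient descent on squared error.
    grad_p = (p - y) * p * (1 - p)
    grad_h = (grad_p @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ grad_p
    b2 -= lr * grad_p.sum(axis=0)
    W1 -= lr * X.T @ grad_h
    b1 -= lr * grad_h.sum(axis=0)

print(np.round(p, 2).ravel())  # should approach [0, 1, 1, 0]
```

The principle is the same whether the inputs are 2-D points or acoustic features, which is why the category of problem hasn't really changed since the 1980s; what's changed is scale and where the computation runs.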
I too am suspicious of any claim that neural network models weren't used previously.
The Wired article suggests that a new technique using neural nets was developed by a Canadian researcher who has gone on to help MS and Google dramatically improve their existing tech. Meanwhile, Nuance/Apple have apparently stuck with their in-house approach and so far not adopted the newer methodology. The article does seem to imply that Siri's existing makeup is not neural network-based, but I think that's just the author not being as clear as they should be.
Since Google can do some translating offline, I wonder if hardware neural nets are what's being hinted at. Given the history of desktop dictation requiring as much processing power as possible, and Apple's reliance on cloud-based computing for Siri, I wouldn't think offline recognition would work very well absent dedicated silicon.
I wonder if Apple is working on a parallel processing chip that would do speech recognition....