I claim no expertise in the field, which is why I specifically cited the Wired article; you should direct your questions to Robert McMillan (the article's author) or the comments section on the Wired piece. I could have misquoted the article, so please refer to it. Whatever technology is employed, some of the recent improvements sound impressive, which is why I mentioned it.

I don't think you misquoted the article; I just think the article is naive. Work on neural-net stimulus recognition exploded in the mid-1980s, and it has been well funded by the military, business, medicine, etc. It doesn't really matter whether the problem is to recognise tanks, patterns of stock fluctuations, tumours or phonemes. The problem is the same: the stimuli within a category can vary so much relative to the variation among categories that subtle features need to be extracted, and these cannot rely on simple (linear) equations. The neural nets essentially learn what the more sophisticated equations should be to categorise stimuli. The Wired article cited Hinton, who's been around for ages. I am not an expert either, although I know something about the topic, but my impression is that the main breakthroughs were made in principle ages ago. The issue now is implementation on a small device like a mobile phone.
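To make that concrete, here's a toy sketch of my own (nothing from the article, just plain backprop on the classic XOR pattern, which no linear equation can separate). The net starts with random weights and effectively learns the nonlinear "equation" that categorises the four stimuli:

```python
import numpy as np

rng = np.random.default_rng(0)

# Four stimuli, two categories; XOR is the textbook linearly inseparable case.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One small hidden layer is enough to carve out the XOR boundary.
W1 = rng.normal(size=(2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))
b2 = np.zeros((1, 1))

lr = 1.0
for _ in range(5000):
    # Forward pass: hidden features, then category output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradient of squared error through the sigmoids.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # should converge toward [[0], [1], [1], [0]]
```

Swap the four toy inputs for acoustic features and scale the layers up massively and you're in the neighbourhood of the speech work the article describes; the principle is the same, the implementation is where the work is.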

I too am suspicious of any claim that neural network models weren't being used previously.

The Wired article suggests that a new neural-net technique was developed by a Canadian researcher who has gone on to help MS and Google dramatically improve their existing tech, while Nuance/Apple have apparently kept to their in-house approach and so far not adopted the newer methodology. The article does seem to imply that Siri's existing make-up is not neural-network-based, but I think that's just the author not being as clear as they should be.

Since Google can do some translating offline, I wonder if hardware neural nets are what's being hinted at. Given the history of desktop dictation requiring as much processing power as possible, and Apple's reliance on cloud-based computing for Siri, I wouldn't think offline would work very well absent dedicated silicon.

I wonder if Apple is working on a parallel processing chip that would do speech recognition....
 
Is this all because Apple doesn't like that Nuance may be acquired by Samsung?

Oh yeah, sure, in-house is much better. What could possibly go wrong?

Another Maps-style "back to the basement" issue. Maps may have improved, but look how long it took to get there.

The same will happen to Siri.

I dunno what it is, but if Apple's going to start doing as many things "in-house" as possible, then they're not making the iPhone better, they're making it worse. Nuance had experience with voice; Apple doesn't, which is why they must bring in people who do.
 
Could be coming.

Many of those in Apple's Boston voice R&D group came from a company called VoiceSignal Technologies, which had created a standalone recognizer for commands like that.

I remember reading about Apple having small R&D teams in buildings in MA and in TX. Do you happen to have any more information to share?

Thanks.
 
You forgot processor design, the biggest such undertaking. I was extremely sceptical about it, but they certainly aced that one. LLVM, too.

That I did, and that seems to have been a success :) That's hardware, though, and Apple does seem to do a good job with hardware. But software seems to be a whole other kettle of fish.
 
For those of you who don't remember, Siri was an app on the App Store.

I still have it on my 4; it no longer works, but I have it.

The only problem was that it was tied to Nuance. So this will make it all in-house.

But we all know how well that worked out for Apple Maps :rolleyes:

And Apple Maps has become a lot better and more proficient. I use it more than Google Maps now.
 