I can't really say much about Siri considering Sprint truly sucks in my area and I typically can't get "her" to help me much. That said, when I'm in big cities, she understands everything I ask, dictates like a champ and gets me the info I need. I think the service has gotten a bad rap, actually, probably from a lot of people in spotty coverage areas.
Ugh...seems like a bit of a late scramble, not good. Did Samsung pull the rug out from under them and they weren't ready for it? Not the type of thing you can throw together at the last minute, I hope this isn't going to be the Maps debacle all over again.
Google's voice recognition already has offline support.
Nuance would have been a better acquisition than Beats. Voice recognition is not just about having smart people think things up. There will be patents and proprietary algorithms involved that are owned by Nuance, so simply poaching ex-Nuance employees will be useless. Additionally, Nuance probably has the largest database of spoken language that they can use to train their AI networks to recognise speech (no doubt each time you send off a verbal request to Siri, it gets added to Nuance's database to improve their recognition networks).
Simply reinventing speech recognition in-house will probably work out as poorly as Apple Maps. Apple will try, they'll struggle, and then they'll buy second-rate help from other companies. If Samsung get Nuance, Apple will have really fumbled the ball. Don't believe me? Then why is Apple using Nuance now given it has been working on speech recognition (and synthesis) for years?
It's limited, which makes it even more useless as you try to figure out what works and what doesn't. I also feel like voice recognition doesn't work as well offline.
Apple's tradeoff in this case makes more sense IMO. Either you have wireless access and Siri works or you don't and Siri doesn't work. Simple.
Why not just buy Nuance rather than build a team that would take years to get to the same level of quality we are at now?
Let's check Apple's recent in-house service success rate for a second here.
MobileMe Sync = Fail!
Apple Maps = Fail!
iCloud = average
iMessage = average
iAds = Fail!
Photo book printing = SUCCESS!
Voice recognition is going to turn out great!
TBH I can see this going the way of Apple Maps.
Mac OS always used to have pretty decent speech recognition; it's a shame it never got developed any further beyond a couple of newer voices a while ago. So it's good to see Apple returning to it (even if it's because of the imminent threat of losing Siri).
I hope this also means more of the voice recognition taking place on the device itself, with only queries that need to be outsourced being sent elsewhere. That could also make for easier app integration.
So yeah, it could actually be great news!
I wish it didn't need the internet for everything. Like adding reminders or shuffling songs.
Could be coming.
Many of those in Apple's Boston voice R&D group came from a company called VoiceSignal Technologies, which had created a standalone recognizer for commands like that.
I'd be surprised if Nuance's contract with Apple allows a buyer to cancel it.
You mean Apple Maps today or Apple Maps at launch? Because Apple Maps today is far better (at least in the US) than it was at launch. And with recent acquisitions it should get even better.
For those who didn't read the Wired article, MacRumors neglected to mention a major development. The speech recognition team that Apple is building is made up of well-known people from the neural network research community, and neural network-based speech recognition recently gave Google a 25% boost in accuracy. That's a huge improvement. Microsoft has also demonstrated big improvements using this technology.
The article points out that Apple is the only major speech recognition player that hasn't released a neural network-based product yet, but suggests that Siri will soon adopt this technology and get a much needed boost in accuracy. I'm personally hoping that this will make it into iOS 8 come the fall, but Apple has been known to take their time.
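For anyone wondering what "neural network-based speech recognition" means concretely: the acoustic model maps short audio feature frames (e.g. MFCCs) to probabilities over phonemes, and those probabilities feed a decoder. Here's a toy sketch of that forward pass; the phoneme set, layer sizes, and random weights are all made up for illustration:

```python
import math
import random

random.seed(0)

PHONEMES = ["sil", "ah", "t", "s"]   # tiny hypothetical phoneme set
IN, HID = 13, 8                      # 13-dim frame (MFCC-like), 8 hidden units

# Random weights stand in for a trained model.
W1 = [[random.gauss(0, 1) for _ in range(HID)] for _ in range(IN)]
W2 = [[random.gauss(0, 1) for _ in range(len(PHONEMES))] for _ in range(HID)]

def phoneme_probs(frame):
    """Forward pass: probability per phoneme for one audio frame."""
    hidden = [max(0.0, sum(frame[i] * W1[i][j] for i in range(IN)))
              for j in range(HID)]                    # ReLU hidden layer
    scores = [sum(hidden[j] * W2[j][k] for j in range(HID))
              for k in range(len(PHONEMES))]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]          # numerically stable softmax
    total = sum(exps)
    return [e / total for e in exps]

frame = [random.gauss(0, 1) for _ in range(IN)]       # stand-in for real features
probs = phoneme_probs(frame)
best = PHONEMES[probs.index(max(probs))]
```

A real system runs this (with far bigger layers and trained weights) over every ~10 ms frame, then searches for the most likely word sequence; the accuracy gains come from the trained net producing sharper frame probabilities than the older Gaussian-mixture models.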
You have to consider how much space [local storage of voice recognition decoder] that would take up locally. It's obviously much more important for them to include more required apps many people don't want, like Podcasts.
Huh? How can speech recognition be achieved without neural nets? Frankly I cannot believe that there is a commercial voice recognition product that is not based on neural nets (either locally on a desktop or on a server). I knew some of the people who started the ball rolling with neural nets and speech recognition. The poor quality and extremely high variability of the signal pretty much mathematically precludes any other approach.
Are you talking about neural nets that adapt to the individual user?
You don't have to buy all of a business. You can just buy the speech recognition patents and engineers and let Nuance keep the rest.
You mean, considerably worse than its competitor, just less considerably worse than it was, but still with a completely unassailable gap and over two years of unhappy customers?
Or where it's a complete catastrophe in the rest of the world, where the majority of Apple's revenue comes from?
This is a terrible idea. It combines the worst elements of the Maps approach, Apple trying to build a huge database on the cheap, with entering an area where the company they're trying to break away from owns huge numbers of important patents that are going to significantly hamper development.
Seriously, this is a stupid, stupid idea.
Everyone here keeps telling people that the Maps are horrible without providing a shred of recent information to back those claims up. Apple Maps debuted a long time ago, and since then there has been plenty of improvement.
Let us not forget that before Apple delivered their own mapping solution, we iOS users were at a deficit with respect to the native Maps app on iOS versus Google's Maps app on Android. Google didn't seem to find the time to deliver vector maps, night mode and more.
The reality is that Apple Maps is improving and closing the gap. Nothing is standing still, and developers are taking notice.
With acquisitions like HopStop, Embark and BroadMap, it appears that mapping technologies are a topic of interest for Apple. Guess that kind of dovetails with the backend improvements that are being done.
If you're still pushing the "Apple Maps is a debacle" meme, I seriously suggest taking a look at current events and leaving the past where it belongs: in the past.
The only solution for a mobile phone would be to use a chip with dedicated memory rather than software that uses RAM. This could be done provided the speech recognition neural net is fixed (by training with a huge dataset that covers human variation in speech), but I doubt this would work for a network that learns the user's speech patterns.