If Nuance technology is what drives my T9/QWERTY "dumb" phone... well, let's just say it has a while to go. I realize it's not the same as a server farm running the stack (i.e., Google, Dragon, etc.), but I mean really, this POS can't even get a simple "Call XXXX" right.
 
If Nuance technology is what drives my T9/QWERTY "dumb" phone... well, let's just say it has a while to go. I realize it's not the same as a server farm running the stack (i.e., Google, Dragon, etc.), but I mean really, this POS can't even get a simple "Call XXXX" right.

Apple's put a lot into this, it seems. They bought Siri, which was already cool if a bit slow. They've had the vision; just look at that concept video (you can find it on YouTube) showing how they see the future. Voice recognition is a vital component of that vision. Now they're in a deal with the main company that has the best voice recognition, and they possibly built a billion-dollar server farm for this vision.

It's not a half-assed attempt to tack on a half-baked solution and call it innovative, or a feature. Well, how the hell do I know; I'm just saying I trust Apple with the digital future more than someone like Microsoft, Google, or BlackBerry. Give me a mother lode of a break; what a nightmare that would be. What I'm saying is, I believe Apple will do better than what you've got going on your current phone. We can hope.
 
telling you people

I am telling you people, the new data center is to hold Steve Jobs' consciousness after he passes away. They just need Nuance to tune the voice recognition.
 
So they will charge you a monthly fee rather than just putting the software on...

"More specifically, we're hearing that Apple is running Nuance software - and possibly some of their hardware - in this new data center. Why? A few reasons. First, Apple will be able to process this voice information for iOS users faster. Second, it will prevent this data from going through third-party servers. And third, by running it on their own stack, Apple can build on top of the technology, and improve upon it as they see fit."

BS! I doubt they are going to let you access their remote hardware for free. I don't want to pay, and will not pay, monthly fees for remote services. Why not just put the damn software on your device? Because they want to charge you monthly, that's why!
 
Other languages

I wonder how well it will handle non-English languages.
I love voice and speech recognition when it works, but being limited to English limits its usability.

"You are saying it wrong"
 
I predict that it would not work as well on the iPod touch. Apple loves to limit iPod touch speech recognition.
 
I can't see voice technology ever being MUCH use, other than just commands/phrases to turn things on/off type of stuff, until there is some reasonable brain/AI behind it.

Until a machine can understand what you actually mean, as opposed to just recognising a waveform and running a command, we're not going to get very far.

Getting to know the person, how words are said, the context they are said in, and the mood in the voice are all things it's going to have to deal with; seeing you say the words, and your facial expressions, would help too.

We are years, probably decades, away from this.
 
OMG, all our conversations can be recorded and transcribed to searchable clear text on American soil. That's an enormous amount of data for American intelligence and the corporate espionage they conduct. The police can now track your phone and transcribe everything around it. Mind-boggling.

Corporations now have to keep in mind that their emails and transcripts of their phone conversations can be used in court. There is definitely room for a secure foreign solution.
 
OK, you may agree or disagree that voice recognition is a smart thing that requires a lot of computing power

but...
will it require 12 PB of storage?
https://forums.macrumors.com/threads/1132229/
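For a rough sense of scale, here's a back-of-envelope sketch (the ~1 MB per minute figure for compressed speech is my own assumption, not anything from the article):

```python
# Back-of-envelope: how much speech audio would 12 PB actually hold?
# Assumption: compressed speech at roughly 1 MB per minute (~130 kbps).

PETABYTE_MB = 1_000_000_000        # 1 PB ~= 10^9 MB (decimal units)
storage_mb = 12 * PETABYTE_MB      # 12 PB expressed in MB
mb_per_minute = 1                  # assumed bitrate for compressed speech

minutes = storage_mb / mb_per_minute
years = minutes / (60 * 24 * 365)
print(f"{minutes:,.0f} minutes ~= {years:,.0f} years of continuous audio")
# -> 12,000,000,000 minutes, on the order of 20,000 years of speech,
#    so raw voice audio alone probably isn't what fills 12 PB.
```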

Is it possible that MobileMe is reading our emails to us and playing "Beat It" as a karaoke version...?

Perhaps that is what the deals with the major labels are all about... host karaoke versions of songs and let your iOS device sing along. :rolleyes:
 
I can't see voice technology ever being MUCH use, other than just commands/phrases to turn things on/off type of stuff, until there is some reasonable brain/AI behind it.

Until a machine can understand what you actually mean, as opposed to just recognising a waveform and running a command, we're not going to get very far.

Getting to know the person, how words are said, the context they are said in, and the mood in the voice are all things it's going to have to deal with; seeing you say the words, and your facial expressions, would help too.

We are years, probably decades, away from this.

Then you haven't seen what Siri is capable of doing today... or should I say, what it was capable of doing a year ago when Apple bought them. Add in the vast resources available to Apple, put them into improving Siri, and you've got something very much like what you described.

Siri is already capable of parsing words, picking out the relevant ones, and making sense of them, then in turn determining which actions, if any, you'd like to perform. It'll ignore all the filler you put in a sentence and, for example, pick out a calendar date and look for a request you may have associated with that date. If so, it can create an entry in iCal and notify you of what it has done.

This is similar to -- but much more powerful than -- what iOS already does in Mail for example.
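To make that concrete, here's a toy sketch of the kind of thing I mean. This is just a crude keyword-and-regex illustration, not how Siri actually works; the `parse_request` function, the date pattern, and the example sentence are all made up:

```python
import re
from datetime import datetime

def parse_request(utterance: str):
    """Crude sketch: pull an intent and a date out of a free-form sentence."""
    text = utterance.lower()

    # Pick out the relevant words and ignore the filler.
    intent = None
    if any(word in text for word in ("remind", "appointment", "meeting")):
        intent = "create_event"

    # Look for a date like "June 3" anywhere in the sentence.
    match = re.search(
        r"(january|february|march|april|may|june|july|august|"
        r"september|october|november|december)\s+(\d{1,2})",
        text,
    )
    when = None
    if match:
        month = datetime.strptime(match.group(1), "%B").month
        when = datetime(datetime.now().year, month, int(match.group(2)))

    return intent, when

intent, when = parse_request(
    "Hey, um, could you maybe set up a meeting with Bob on June 3rd?"
)
print(intent, when)  # -> create_event, with June 3 of the current year picked out
```

Obviously the real thing uses statistical language models rather than keyword matching, but that's the shape of it: words in, structured intent out, and then the result gets handed to something like iCal.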

We're not decades away from conversational-level speech recognition. We're merely weeks away. WWDC 2011.
 
I wonder how well it will handle non-English languages.
I love voice and speech recognition when it works, but being limited to English limits its usability.

"You are saying it wrong"

Even with the English language, you've still got the problem of regional accents.

I'll be mighty impressed if it can work accurately with the likes of the Scouse, Geordie, Cockney and Glaswegian accents.
And where I come from, A is for Opple. :D

[Attached image: a is for opple.jpg]
 
Even with the English language, you've still got the problem of regional accents.

I'll be mighty impressed if it can work accurately with the likes of the Scouse, Geordie, Cockney and Glaswegian accents.
And where I come from, A is for Opple. :D

I would think that processing the West Midlands/Black Country accent is going to require a whole new data centre.
 
It's not just about speech recognition... it's not about the words. That's 100% Nuance, and there's no way Apple can improve on that. Nuance is pretty good at picking up the words.

It's about what is done with those words. It's the natural language processing. The artificial intelligence. The computer trying to understand what it is you want to do and then doing it.

That's something no one has done to any degree of success. That's why Apple bought Siri, and that's where Apple is going to innovate.

Sorry, you're wrong. Cisco Unity did it back in 2002. It could convert voicemail to text e-mail, and text e-mail into voice-synthesized messages in your voicemail.

It was usable. It wasn't made by Apple.
 
Sorry, you're wrong. Cisco Unity did it back in 2002. It could convert voicemail to text e-mail, and text e-mail into voice-synthesized messages in your voicemail.

It was usable. It wasn't made by Apple.

The guy said "It's about what is done with those words. It's the natural language processing. The artificial intelligence. The computer trying to understand what it is you want to do and then doing it."

How is converting voicemail to text e-mail and vice versa what he said?
 
I don't care for voice recognition; I've never used it. I find it stupid.

Well, for those of us who have built-in integration in our cars, we feel a lot differently than you do about good voice recognition...
 
I can't see voice technology ever being MUCH use, other than just commands/phrases to turn things on/off type of stuff, until there is some reasonable brain/AI behind it.

Until a machine can understand what you actually mean, as opposed to just recognising a waveform and running a command, we're not going to get very far.

Getting to know the person, how words are said, the context they are said in, and the mood in the voice are all things it's going to have to deal with; seeing you say the words, and your facial expressions, would help too.

We are years, probably decades, away from this.

I have Sync in my car. I can tell it "give me directions to 123 Main St, Cleveland, OH"... and if it's in a good mood, it will do it :)
 
Do they mean Voice Recognition?

Voice Recognition is recognising WHO is talking; Speech Recognition is recognising WHAT they are saying. I know a lot of people get confused, but trust me, I am right. Dragon is SPEECH Recognition software, not Voice Recognition software. I don't believe Apple are interested in Voice Recognition - well, not in this context.
 