I find this whole discussion rather odd.
The whole point of using two microphones is that it lets you partially triangulate the location of a sound: the time difference between the two channels tells you its direction relative to the axis drawn between the microphones, and adding a third or fourth microphone lets you pin down the actual location. Knowing where a sound comes from is what lets you filter out everything else. This is nothing new.
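For the curious, here is roughly what that looks like in practice. This is just a generic time-difference-of-arrival sketch in Python, not anything lifted from the patent; the mic spacing and sample rate are made-up numbers.

```python
import numpy as np

# Minimal sketch of two-microphone direction finding (not Apple's method):
# cross-correlate the two channels to estimate the time difference of
# arrival (TDOA), then convert that delay into an angle off the mic axis.

SPEED_OF_SOUND = 343.0   # m/s at room temperature
MIC_SPACING = 0.20       # metres between the two microphones (assumed)
SAMPLE_RATE = 44100      # Hz (assumed)

def angle_from_tdoa(left: np.ndarray, right: np.ndarray) -> float:
    """Estimate the angle (radians) of a far-field source off the mic axis."""
    # The lag of the cross-correlation peak is the delay between channels.
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)
    delay = lag / SAMPLE_RATE
    # Far-field geometry: delay = spacing * cos(angle) / speed of sound.
    cos_angle = np.clip(delay * SPEED_OF_SOUND / MIC_SPACING, -1.0, 1.0)
    return float(np.arccos(cos_angle))
```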
But the original example of sound localization is already sitting on either side of your head: your ears. It's part of what lets you pick out something quiet even in a noisy room, and it's the reason music is recorded in stereo.
Noise cancellation is related but not exactly the same. There, the second microphone is in effect the listener's ear: as long as the relative positions of the mic and the ear stay fixed, a speaker can emit an inverted copy of the outside noise and cancel it out.
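The standard way to do that in software is an adaptive (LMS) filter that learns how the noise at the reference mic shows up at the ear and subtracts the estimate. A minimal sketch, with made-up filter length and step size, and nothing to do with Apple's filing specifically:

```python
import numpy as np

# Toy LMS noise canceller: the reference mic hears the outside noise,
# the filter learns the mic-to-ear transfer path, and the "speaker"
# subtracts the predicted noise from what arrives at the ear.

def lms_noise_canceller(reference: np.ndarray, at_ear: np.ndarray,
                        taps: int = 32, mu: float = 0.01) -> np.ndarray:
    """Return the residual heard at the ear after cancellation."""
    w = np.zeros(taps)                      # learned mic -> ear filter weights
    residual = np.zeros(len(at_ear))
    for n in range(taps, len(at_ear)):
        x = reference[n - taps:n][::-1]     # most recent reference samples
        estimate = w @ x                    # predicted noise at the ear
        residual[n] = at_ear[n] - estimate  # speaker subtracts the estimate
        w += mu * residual[n] * x           # LMS weight update
    return residual
```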
The only thing semi-original in Apple's patent seems to be that the system takes into account the changing location of the microphones due to moving axes of rotation (think the hinge on your PowerBook or the neck of your iMac) and compensates for those changes. It does not seem to account for the changing location of your mouth, which also matters.
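Presumably that compensation amounts to recomputing the mic geometry whenever a hinge angle changes, something along these lines. This is pure guesswork on my part; the function, the offset, and the axis choice are all illustrative:

```python
import numpy as np

# Hypothetical hinge compensation: a mic mounted in the display lid moves
# relative to the base as the hinge angle changes, so the array geometry
# used for localization has to be recomputed. Numbers are illustrative.

def mic_position(hinge_angle_deg: float,
                 offset_along_lid: float = 0.25) -> np.ndarray:
    """Position of a lid-mounted mic relative to the hinge (metres)."""
    theta = np.radians(hinge_angle_deg)
    # The lid rotates about the hinge (x-axis); the mic sits a fixed
    # distance up the lid, so its y/z coordinates follow the rotation.
    return np.array([0.0,
                     offset_along_lid * np.cos(theta),
                     offset_along_lid * np.sin(theta)])

# Opening the lid from 90 to 120 degrees moves the mic, and the
# localization code would simply be handed the new coordinates.
```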
Nothing is mentioned about tracking your voice, which would be one solution to that, or about the exact nature of the software side of the system. The patent does say that the front end heavily filters the signal before matching it against an acoustic model database, while the back end actually does something with the result (presumably varying with the situation, and with support for multiple back ends).
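Reading between the lines, the software split would look something like this. To be clear, the names and the toy "matching" are mine, not the patent's; it only sketches the shape of a filtering front end feeding pluggable back ends:

```python
from typing import Callable, Dict
import numpy as np

# Speculative sketch of the front-end / back-end split: the front end
# cleans up the signal and scores it against acoustic models, then hands
# the recognized result to whichever back end is registered for it.

def front_end(samples: np.ndarray, models: Dict[str, np.ndarray]) -> str:
    """Filter the signal, then pick the best-matching acoustic model."""
    filtered = samples - samples.mean()   # stand-in for the real filtering
    scores = {}
    for name, template in models.items():
        n = min(len(filtered), len(template))   # toy similarity score
        scores[name] = float(np.dot(filtered[:n], template[:n]))
    return max(scores, key=scores.get)

def dispatch(result: str, back_ends: Dict[str, Callable[[str], None]]) -> None:
    """Hand the recognized result to the appropriate back end."""
    handler = back_ends.get(result, back_ends.get("default"))
    if handler is not None:
        handler(result)
```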
I'd like to add that the present USPTO appears to be staffed with idiots who grant patents for pretty much anything, without checking for prior art and without regard for sheer obviousness or current use. Furthermore, they don't even know how to scan images properly, or have the brains to publish patents online in a format that can actually be printed (i.e. PDF).