It also records everything you say and potentially reports you.

My Echo Dots do this, and they cost £25 each.....
https://www.apple.com/shop/question...-record-everything-it-hears/QATF4Y4HX72CTJT7D
So I actually figured out why Siri does this. If you ask Siri to control something in the same room as the HomePod, it won't say anything: it assumes you can see the action occurring, since you're in that room. When you ask Siri to do something in another room, it tells you, because you aren't there to see it.
Yeah, I get that, and I also get why Apple made the decision. I have found that Apple Music has most of the music I request, though, so when people come over I just tell them to request a song. Always works.
Agreed, it's safe to assume they are working on something better though.

The physical technology behind the assistant is best in class. After training my iPhone, Hey Siri works every time. But voice transcription and capabilities are far behind, I'm afraid. Siri gets what I say wrong all the time, and when she understands what I said she often gets the interpretation wrong. I'm actually surprised when she gives me the answer I need beyond the simple questions they demo during keynotes.
I'm so impressed by what Amazon can do, and I'm so impressed by how much effort you put into posting it on this site. Bezos will be thanking you and including you in his nighttime prayers.

There's nothing magical about far-field voice capture. Amazon is very open about it for developers.
I'm sure they sound great too.
To your first response: nope. I can ask the same thing over and over and get different behavior each time, whether I'm in the room or not.
It has indeed little meaning other than demonstrating what Apple overhyped.

I'm so impressed by what Amazon can do, and I'm so impressed by how much effort you put into posting it on this site. Bezos will be thanking you and including you in his nighttime prayers.
There's nothing magical about far-field voice capture. Amazon is very open about it for developers.
https://developer.amazon.com/alexa-voice-service/dev-kits/
If you think that compares to Apple's work on Machine Learning, you obviously didn't look at Apple's blog that's linked to from this article.
One of the main differences is that Apple is doing the work on the device in low power, while Amazon and Google are sending the data home to the mothership so it can be analyzed on their servers.
In addition to obvious security and privacy benefits, Apple's approach has implications for future low power wearable devices that might not always be connected to the cloud.
I guess I should watch out! However, I always have Alexa.

Sounds like your GF is quite a pleasantly humorous gal to be with. Now, the problem is: Heyyy Siri, how ya doing? I'm jealous too!!!
There is a huge market out there... Include it in the AirPods and we will look cool while hearing better.

I wonder if they can use the results of their research to improve hearing aids. The boomers are getting older...
In a new entry in its Machine Learning Journal, Apple has detailed how Siri on the HomePod is designed to work in challenging usage scenarios, such as during loud music playback, when the user is far away from the HomePod, or when there are other active sound sources in a room, such as a TV or household appliances.
[Image: an overview of the task]

To accomplish this, Apple says its audio software engineering and Siri speech teams developed a multichannel signal processing system for the HomePod that uses machine learning algorithms to remove echo and background noise and to separate simultaneous sound sources to eliminate interfering speech.
Apple says the system uses the HomePod's six microphones and is powered continuously by its Apple A8 chip, including when the HomePod is run in its lowest power state to save energy. The multichannel filtering constantly adapts to changing noise conditions and moving talkers, according to the journal entry.
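Apple doesn't publish the internals of this front end, but its overall shape per frame is: cancel acoustic echo against the playback reference, dereverberate, mask out noise, and combine or select among the six channels before scoring for the trigger phrase. A toy Python/NumPy sketch of just that data flow follows; every number and helper here is illustrative, not Apple's:

    import numpy as np

    NUM_MICS = 6    # the HomePod's microphone count, per the article
    FRAME = 512     # hypothetical frame size; Apple doesn't publish theirs

    def process_frame(mics: np.ndarray, playback: np.ndarray) -> np.ndarray:
        """One frame through a toy always-on front end.

        mics: (NUM_MICS, FRAME) raw microphone samples
        playback: (FRAME,) reference copy of what the speaker is emitting
        """
        # Crude echo removal: subtract a scaled copy of the playback
        # reference. A real canceller adapts a filter per microphone
        # (see the NLMS sketch below).
        echo_free = mics - 0.5 * playback
        # Naive combination of the six channels; a real system beamforms
        # and then selects among several processed streams.
        return echo_free.mean(axis=0)

    frame_out = process_frame(np.random.randn(NUM_MICS, FRAME),
                              np.random.randn(FRAME))
    print(frame_out.shape)  # (512,)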
Apple goes on to provide a very technical overview of how the HomePod mitigates echo, reverberation, and noise, which we've put into layman's terms:

Echo Cancellation: Since the speakers are close to the microphones on the HomePod, music playback can be significantly louder than a user's "Hey Siri" voice command at the microphone positions, especially when the user is far away from the HomePod. To combat the resulting echo, Siri on HomePod implements a multichannel echo cancellation algorithm.
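The entry names a multichannel echo cancellation algorithm without giving its internals. The textbook building block for any such canceller is an adaptive filter that learns the speaker-to-microphone path from the playback reference and subtracts the predicted echo; here is a minimal single-channel NLMS sketch, with tap count and step size chosen arbitrarily:

    import numpy as np

    def nlms_echo_cancel(mic, ref, taps=128, mu=0.5, eps=1e-8):
        """Normalized LMS echo canceller: adaptively model the
        speaker-to-mic path from the playback reference `ref` and
        subtract the predicted echo from the microphone signal `mic`."""
        w = np.zeros(taps)                   # current echo-path estimate
        out = np.zeros_like(mic)
        for n in range(taps, len(mic)):
            x = ref[n - taps:n][::-1]        # most recent reference samples
            y_hat = w @ x                    # predicted echo at the mic
            e = mic[n] - y_hat               # residual: voice + leftover echo
            out[n] = e
            w += mu * e * x / (x @ x + eps)  # NLMS weight update
        return out

    # Toy check: the "mic" hears a delayed, attenuated copy of the
    # playback (the echo) plus a quiet near-end voice.
    rng = np.random.default_rng(0)
    ref = rng.standard_normal(16000)
    echo = 0.8 * np.concatenate([np.zeros(40), ref[:-40]])
    voice = 0.1 * rng.standard_normal(16000)
    cleaned = nlms_echo_cancel(echo + voice, ref)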
Reverberation Removal: As the user saying "Hey Siri" moves further away from the HomePod, multiple reflections from the room create reverberation tails that decrease the quality and intelligibility of the voice command. To combat this, Siri on the HomePod continuously monitors the room characteristics and removes the late reverberation while preserving the direct and early reflection components in the microphone signals.
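The entry doesn't spell out the dereverberation method either. A common way to get the effect it describes is to work on short-time spectra, predict each frame's late-reverberation power as a decayed copy of an earlier frame's power, and attenuate only that predicted tail, leaving the direct sound and early reflections largely intact. A rough sketch with made-up decay parameters:

    import numpy as np
    from scipy.signal import stft, istft

    def suppress_late_reverb(x, fs, delay_frames=4, decay=0.4):
        """Attenuate the late reverberation tail: per frequency band,
        predict the tail's power as a decayed copy of an earlier frame's
        power, then apply a floor-limited Wiener-style gain."""
        f, t, Z = stft(x, fs=fs)
        P = np.abs(Z) ** 2                         # per-bin power
        gain = np.ones_like(P)
        for m in range(delay_frames, P.shape[1]):
            late = decay * P[:, m - delay_frames]  # predicted tail power
            gain[:, m] = np.clip(1.0 - late / (P[:, m] + 1e-12), 0.1, 1.0)
        _, y = istft(Z * gain, fs=fs)
        return y

    fs = 16000
    dereverbed = suppress_late_reverb(np.random.randn(fs), fs)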
Noise Reduction: Far-field speech is typically contaminated by noise from home appliances, HVAC systems, outdoor sounds entering through windows, and so forth. To combat this, the HomePod uses state-of-the-art speech enhancement methods that create a fixed filter for every utterance.
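A "fixed filter for every utterance" suggests the filter is estimated once per command rather than adapted sample by sample. A minimal stand-in for that idea: estimate the noise spectrum from the first few frames (assumed speech-free), derive one gain per frequency bin, and apply that single gain across the whole utterance:

    import numpy as np
    from scipy.signal import stft, istft

    def fixed_filter_denoise(x, fs, noise_frames=10, floor=0.05):
        """Per-utterance fixed noise filter: one Wiener-like gain per
        frequency bin, estimated once and applied to every frame."""
        f, t, Z = stft(x, fs=fs)
        noise_psd = np.mean(np.abs(Z[:, :noise_frames]) ** 2, axis=1)
        signal_psd = np.mean(np.abs(Z) ** 2, axis=1)
        # One gain per bin, fixed for the utterance.
        gain = np.clip(1.0 - noise_psd / (signal_psd + 1e-12), floor, 1.0)
        _, y = istft(Z * gain[:, None], fs=fs)
        return y

    fs = 16000
    denoised = fixed_filter_denoise(np.random.randn(2 * fs), fs)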
Apple says it tested the HomePod's multichannel signal processing system in several acoustic conditions, including music and podcast playback at different levels, continuous background noise such as conversation and rain, and noises from household appliances such as a vacuum cleaner, hairdryer, and microwave.
During its testing, Apple varied the locations of the HomePod and its test subjects to cover different use cases. For example, in living room or kitchen environments, the HomePod was placed against the wall and in the middle of the room.
Apple's article concludes with a summary of Siri performance metrics on the HomePod, with graphs showing that Apple's multichannel signal processing system led to improved accuracy and fewer errors. Those interested in learning more can read the full entry on Apple's Machine Learning Journal.
Article Link: Apple Details How HomePod Can Detect 'Hey Siri' From Across a Room, Even With Loud Music Playing
They may not have all the same fancy technology.

You're literally proving my point. They don't have the fancy technology, which means they can't do these key things that the machine learning article talks about:

Mask-Based Echo Suppression
Reverberation Removal
Mask-Based Noise Reduction
Unsupervised Learning with Top-Down Knowledge to Mitigate Competing Speech
Competing Talker Separation
Deep Learning–Based Stream Selection

So, no, your little £25 Dots do not do these things. And Amazon Dots can't even play loud music at max volume (I have one), so they don't even do what part of the title suggests.
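The journal entry lists "Deep Learning–Based Stream Selection" among those techniques; a natural reading is that several processed candidate streams are produced and the trigger detector's own confidence picks among them. A toy sketch of that selection step, with a meaningless energy-based scorer standing in for the trained "Hey Siri" detector:

    import numpy as np

    def wakeword_score(stream: np.ndarray) -> float:
        """Placeholder for a trained trigger detector: returns a
        confidence in [0, 1]. Here it just rewards louder streams."""
        energy = np.mean(stream ** 2) + 1e-12
        return float(1.0 / (1.0 + np.exp(-np.log10(energy))))

    def select_stream(streams):
        """Keep the processed stream the detector is most confident in."""
        scores = [wakeword_score(s) for s in streams]
        return streams[int(np.argmax(scores))]

    # Example: candidates from different (hypothetical) beams/filters.
    candidates = [np.random.randn(16000) * g for g in (0.1, 0.5, 1.0)]
    best = select_stream(candidates)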
I don't care about the steering rack in my car, I just care that when I turn the steering wheel the front wheels turn, like it does on other cars.

That works for you. Some people do care about that, so they buy better cars because of performance.
I've got a grandson who makes all kinds of crazy sounds. This will be a good test.
Best Buy literally sold out of them with that sale....

Even with heavy discounts ($250.00 during Black Friday), the HomePod still stiffed.
You cannot polish a turd even if Siri hears you with the music playing.
As they should, as the music they play isn't loud at all.

But I don't care what fancy names Apple gives this stuff or how they fluff it up, I just care how it affects my usage. My Echos can hear me and process what I say from other rooms when I have them playing music, or my Sonos, or my TV.