Google Home Smart Speaker Now Supports Multiple Users

Discussion in 'iOS Blog Discussion' started by MacRumors, Apr 21, 2017.

  1. MacRumors macrumors bot

    MacRumors

    Joined:
    Apr 12, 2001
    #1


    Google Home received a major update to its voice recognition system on Thursday that lets owners set up the smart speaker to recognize multiple account holders.

    The software update means that up to six people can connect their Google account to one speaker and Google Assistant will be able to distinguish users by the sound of their voice. Amazon is said to be working on a similar feature for its Echo range of devices.


    The feature works by listening to how individual users say the phrases "Ok Google" and "Hey Google", and then runs the samples through a neural network that can detect certain voice characteristics and match vocal analyses in a matter of milliseconds. Google says the process happens "only on your device" and the samples aren't sent anywhere else.
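
    Google hasn't detailed the matching step beyond that description, but in rough terms it amounts to turning each hotword utterance into a compact voiceprint and comparing it against the voiceprints enrolled on the speaker. The sketch below is a hypothetical illustration of that idea only; embed() is a deliberately trivial stand-in for the neural network, and the names and threshold are assumptions rather than anything Google has published.

    ```python
    import numpy as np

    # Hypothetical sketch only: embed() is a trivial stand-in for the neural
    # network that would turn an "Ok Google" recording into a fixed-length
    # voiceprint, and the 0.75 threshold is an arbitrary assumption.

    def embed(audio, dims=64):
        """Toy voiceprint: coarse spectral envelope, L2-normalised."""
        spectrum = np.abs(np.fft.rfft(np.asarray(audio, dtype=float)))
        bands = np.array_split(spectrum, dims)
        vec = np.array([band.mean() for band in bands])
        return vec / (np.linalg.norm(vec) + 1e-9)

    def enroll(recordings):
        """Average a few repetitions of the hotword into one stored template."""
        template = np.mean([embed(r) for r in recordings], axis=0)
        return template / (np.linalg.norm(template) + 1e-9)

    def identify(hotword_audio, enrolled, threshold=0.75):
        """Return the enrolled user whose template best matches, or None."""
        query = embed(hotword_audio)
        best_user, best_score = None, threshold
        for user, template in enrolled.items():
            score = float(np.dot(query, template))  # cosine similarity of unit vectors
            if score > best_score:
                best_user, best_score = user, score
        return best_user
    ```

    In this picture, each of the up to six account holders enrolls by repeating the hotword a few times, and the best match (or no match at all) decides whose Google account the request runs against.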

    Ars Technica asked Google how confident it was in the speaker's ability to distinguish users only by voice. Google responded by explaining that the feature was still being refined. "We don't recommend that users rely upon voice identification as a security feature," said the company.


    To enable multi-user support, owners need the latest version of the Google Home app. If the app doesn't highlight the new feature, tap the icon in the top right to see all connected devices. After selecting the Google Home speaker from the list, tap "Link your account" and the app will run through the process that teaches Google Assistant to recognize your voice.

    The feature began rolling out in the U.S. yesterday, and Google says it will expand to the U.K. "in the coming months".

    Article Link: Google Home Smart Speaker Now Supports Multiple Users
     
  2. H2SO4 macrumors 68040

    Joined:
    Nov 4, 2008
    #2
    Ars Technica asked Google how confident it was in the speaker's ability to distinguish users only by voice. Google responded by explaining that the feature was still being refined. "We don't recommend that users rely upon voice identification as a security feature," said the company.

    Funny that. HSBC are trying to push voice recognition for logging on. Why have they cracked it while Google has failed?
     
  3. orbital~debris macrumors 6502a

    orbital~debris

    Joined:
    Mar 3, 2004
    Location:
    England, UK, Europe
    #3
    Would like to read more rumours about Apple's entry into this product category.

    I've already suggested a multiple-user feature for Siri via Apple's feedback form, in the hope it would also be included on a future home assistant device.
     
  4. freezah macrumors newbie

    Joined:
    Aug 28, 2012
    #4
    Regarding Alexa,

    They could at least let customers set a location other than a US or UK address.
     
  5. konqerror macrumors 6502

    Joined:
    Dec 31, 2013
    #5
    Isn't the difference that HSBC is using it as a second factor, a password where the user is already known, whereas Google is using it as a single factor? The former is a 1:1 comparison, while here you need 1:several.

    The other difference is that over the phone, the acoustics are far better. I've listened to Alexa captures in the app from across the room and they're definitely not phone quality.
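
    To put the 1:1 versus 1:several point in concrete terms, here's a minimal sketch (assuming unit-length voiceprints; the thresholds are made up, and this is neither HSBC's nor Google's actual system):

    ```python
    import numpy as np

    def verify(sample, claimed_template, threshold=0.8):
        """1:1 (HSBC-style second factor): the caller states who they are,
        so only that single claim has to be checked."""
        return float(np.dot(sample, claimed_template)) >= threshold

    def identify(sample, enrolled_templates, threshold=0.8):
        """1:N (Google Home-style): nobody states an identity,
        so every enrolled voiceprint has to be searched."""
        scores = {user: float(np.dot(sample, t)) for user, t in enrolled_templates.items()}
        user, best = max(scores.items(), key=lambda kv: kv[1])
        return user if best >= threshold else None
    ```

    The 1:N case also has to tell apart people who live together and may sound similar, a problem the 1:1 case never faces.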
     
  6. H2SO4 macrumors 68040

    Joined:
    Nov 4, 2008
    #6
    I'm not sure. I declined the offer when it was suggested to me. I’ll have to find out.
     
  7. Relentless Power macrumors G4

    Relentless Power

    Joined:
    Jul 12, 2016
    #7
    I'm actually hoping Apple makes an announcement on a home automation device within the next year. In the same respect, hopefully Siri is revamped accordingly when they release their version. For me, it's just one more product to add to Apple's ecosystem.
     
  8. jacksmith21006 macrumors newbie

    jacksmith21006

    Joined:
    Aug 5, 2016
    #8
    If you look at how Google has done this compared to how Amazon has, you can see the core difference between the two devices.

    The Echo has a code you use for different accounts and the Google Home (GH) just uses your voice.

    The Echo is really more of a computer interface that has you do the work, while the GH is intelligent and far more human in how it does things.

    I really love the voice authentication on the GH, as it makes so many use cases possible now.

    For example, in our home I'd prefer that some of my kids not be able to lower the AC thermostat. Now some of them can do it when they ask the GH in the kitchen and others can't, with no awkward passcodes or anything.

    The other is that I'm fine with guests being able to do some things, while other things I only want "privileged" users to be able to do. With the Echo it was trivial for people to learn the passcode. Now with the GH, when I say it, it works; when they say it, it doesn't.
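
    A toy sketch of the kind of per-user rules being described here; the household names, action strings and guest fallback are all hypothetical, not an actual Google Home or smart-home API:

    ```python
    # Hypothetical per-user permissions keyed off whoever voice identification returns.
    PERMISSIONS = {
        "parent_1": {"thermostat.set", "lock.unlock", "music.play"},
        "parent_2": {"thermostat.set", "lock.unlock", "music.play"},
        "kid_1":    {"music.play"},   # not allowed to touch the AC thermostat
        "guest":    {"music.play"},   # fallback for unrecognised voices
    }

    def handle_command(speaker, action):
        """speaker is whatever voice identification returned; None means unrecognised."""
        allowed = PERMISSIONS.get(speaker or "guest", PERMISSIONS["guest"])
        return f"OK, {action}" if action in allowed else "Sorry, you can't do that here."

    print(handle_command("kid_1", "thermostat.set"))     # -> Sorry, you can't do that here.
    print(handle_command("parent_1", "thermostat.set"))  # -> OK, thermostat.set
    ```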

    But everything is like this with the GH versus the Echo. A huge one is that the Echo has commands you memorize, while with the GH you just talk to it like a human.

    So a little kid can use the GH as well as a grandma. Kind of like Google Search: the same text box for a 5-year-old, a rocket scientist, or a grandma. Exactly how technology should be. Why on earth should we still have to use passcodes?

    Amazon needs to replace the foundation of the Echo to have intelligence if they want to be competitive.
     
  9. JRobinsonJr macrumors regular

    Joined:
    Aug 20, 2015
    Location:
    Arlington, Texas
    #9
    Excellent! I - and I suspect **many** others - have suggested the same thing. Hopefully Apple is listening, but I have my doubts.
     
  10. NT1440 macrumors G4

    NT1440

    Joined:
    May 18, 2008
    Location:
    Hartford, CT
    #10
    Soooooo... in order for these user-detection systems to work properly and reliably, you need an array of microphones that supports beamforming so it can pinpoint the user. Google Home doesn't have them, Alexa doesn't have them, nobody is using these yet.

    This means that whoever upgrades to the Vesper-manufactured piezo MEMS microphones is going to have a major advantage in user recognition (and therefore functionality/reliability).

    I'll never understand why these companies, who know damn well that they don't have the hardware in place to do it right, have instead put out a few million units of a device that won't be replaced often just to get this type of device out first. Why? Why not make it great and give it a reason for existing, instead of rushing into this half-baked market just to get there first?
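
    For what it's worth, here's a minimal delay-and-sum sketch of what a beamforming mic array buys you: delay each channel so sound from a chosen direction adds up coherently while off-axis sound partially cancels. This is purely illustrative, assuming a simple 2-D far-field geometry; it is not the Echo's, Google Home's or Vesper's actual processing.

    ```python
    import numpy as np

    SPEED_OF_SOUND = 343.0  # m/s

    def delay_and_sum(channels, mic_positions, direction, sample_rate=16000):
        """channels: (num_mics, num_samples) array of synchronised recordings.
        mic_positions: (num_mics, 2) coordinates in metres, array centre at origin.
        direction: 2-D unit vector pointing from the array toward the talker."""
        direction = np.asarray(direction, dtype=float)
        direction /= np.linalg.norm(direction)
        out = np.zeros(channels.shape[1])
        for mic, pos in zip(channels, mic_positions):
            # A mic closer to the talker hears the wavefront early; delay its
            # channel so everything lines up for that direction (wrap-around
            # from np.roll is ignored for brevity).
            lag = int(round(np.dot(pos, direction) / SPEED_OF_SOUND * sample_rate))
            out += np.roll(mic, lag)
        return out / len(channels)
    ```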
     
  11. WBRacing macrumors 6502a

    WBRacing

    Joined:
    Nov 19, 2012
    Location:
    UK
    #11
    How long have they had Siri out in the wild for? And still it is rubbish. I certainly wouldn't recommend that you hold your breath whilst you are waiting.
     
  12. jacksmith21006 macrumors newbie

    jacksmith21006

    Joined:
    Aug 5, 2016
    #12
  13. NT1440 macrumors G4

    NT1440

    Joined:
    May 18, 2008
    Location:
    Hartford, CT
    #13
    They need that for it to work seamlessly. Google has already stated that the feature shouldn't be relied on to work 100% of the time. By using beamforming you can isolate voices at the hardware level, instead of with a software implementation that is merely trying to compensate for the lack of that ability.
     
  14. jacksmith21006 macrumors newbie

    jacksmith21006

    Joined:
    Aug 5, 2016
    #14
    But who says hardware would be needed? I have my doubts, since putting intelligence in software is pretty powerful today.
     
  15. 6836838 Suspended

    Joined:
    Jul 18, 2011
    #15
    It's called 'first to market'.
     
  16. usamaah macrumors regular

    usamaah

    Joined:
    Sep 23, 2008
    Location:
    Chicago
    #16
    I think Amazon's Echo does in fact use beamforming; it has an array of microphones (7, to be exact). Google tries to solve this in software rather than spending the money on hardware, as per usual for Google (see the Pixel's lack of hardware OIS).

    https://www.xmos.com/blog/xmos/post/introducing-xcore-voice-smart-microphone-applications

    EDIT: speaking from personal experience, as I own both, the Echo is far better at picking my voice out no matter how far away I am or how loud the room is, even when the Echo itself is playing music.
     
  17. coolfactor macrumors 68040

    Joined:
    Jul 29, 2002
    Location:
    Vancouver, BC CANADA
    #17
    I'm confused.

    First they say it runs the samples through a neural network, and then they say the process happens "only on your device" and the samples aren't sent anywhere else.

    So this "neural network" is on the device, instead of in the cloud?
    --- Post Merged, Apr 21, 2017 ---
    A simpler way would be a different trigger keyword for each user; a toy sketch of the idea follows the examples below.

    "Hey Google"
    "Yo Google"
    "Listen up Google!"
    "My lovely Google..."
    "Google, sweetheart..."
     
  18. kdarling macrumors demi-god

    kdarling

    Joined:
    Jun 9, 2007
    Location:
    Cabin by a lake
    #18
    Yeah, the quoted explanation is confusing. I'll bet that some marketing person messed with the wording, trying to make it sound fancier.

    I read it as two different possibilities.

    1. The neural network stores a pre-analyzed template on the device, which it can then use locally to find a match, similar to the way fingerprint sensors do matching. Or...

    2. The on-device part is the same as always, where nothing is sent until it hears a trigger phrase. Then the entire request is sent, and the neural network compares the initial trigger sequence to see which registered user it was.
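
    Rough, runnable pseudocode for the two readings, with empty stand-in helpers marking where the real work would happen; all of it is guesswork, since Google hasn't documented which pipeline it uses:

    ```python
    # Both readings below are guesses; the helpers are empty stand-ins.

    def embed_locally(hotword_audio):
        return hotword_audio                    # stand-in for an on-device model

    def best_local_match(voiceprint, templates):
        return next(iter(templates), None)      # stand-in for a local 1:N search

    def detect_hotword(hotword_audio):
        return True                             # stand-in for today's trigger spotting

    def reading_1_on_device(hotword_audio, request_audio, local_templates):
        """Match locally, fingerprint-sensor style; only the request plus the
        matched user id would leave the device."""
        user = best_local_match(embed_locally(hotword_audio), local_templates)
        return {"user": user, "audio": request_audio}       # payload for the cloud

    def reading_2_cloud_match(hotword_audio, request_audio):
        """Device only spots the trigger phrase, as today; the cloud compares
        the trigger audio against registered users and decides who spoke."""
        if detect_hotword(hotword_audio):
            return {"trigger": hotword_audio, "audio": request_audio}
        return None
    ```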
     
  19. bozzykid macrumors 68020

    Joined:
    Aug 11, 2009
    #19
    That isn't practical and sounds like more of a hack. Thankfully Google did it the right way and will recognize the user by voice. It remains to be seen how accurate it is though.
     
