Concerned about Apple's face sync / machine learning approach

Discussion in 'macOS High Sierra (10.13)' started by mark-vdw, Jun 10, 2017.

  1. mark-vdw macrumors regular

    Joined:
    Jan 20, 2013
    #1
    In John Gruber's The Talk Show interview with Federighi and Schiller at WWDC 2016, Federighi said:

    A year later when the face sync feature was actually introduced, he said:

    I'm a bit concerned that that is a VERY different approach to syncing machine learning info about photos, especially since not all my devices have access to the same set of photos: some will have my full library and some only part of it. I was expecting that my Mac, which holds the full library, would do the most detailed analysis, since it has the most data points for running the clustering algorithms and training the neural networks or SVMs or whatever it is they use, and that the iPhones and iPads would then simply get a copy of those results.

    If anyone with more insight into machine learning techniques knows why they might have chosen to sync only the ground truth data rather than the analysis results, I would love to hear more.
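    One plausible argument (this is a toy sketch, not Apple's actual pipeline; the embeddings, threshold, and clustering method here are all invented for illustration) is that cluster IDs produced by unsupervised clustering depend on which photos a device has and in what order it processes them, so raw analysis results from two devices don't line up, whereas user-confirmed labels do:

    ```python
    # Hypothetical illustration: why raw cluster IDs from per-device face
    # clustering are hard to sync, while user-confirmed labels are not.

    def cluster(embeddings, threshold=1.0):
        """Greedy single-pass clustering: assign each embedding to the first
        cluster whose centroid is within `threshold`, else start a new cluster.
        The resulting cluster IDs depend on input order and on which photos
        the device happens to have."""
        clusters = []  # list of (centroid, members)
        labels = []
        for e in embeddings:
            for cid, (centroid, members) in enumerate(clusters):
                if abs(e - centroid) < threshold:
                    members.append(e)
                    clusters[cid] = (sum(members) / len(members), members)
                    labels.append(cid)
                    break
            else:
                clusters.append((e, [e]))
                labels.append(len(clusters) - 1)
        return labels

    # Toy 1-D "face embeddings": values near 0.0 are person A, near 5.0 person B.
    full_library = [0.1, 5.0, 0.2, 5.1, 0.0]  # Mac: full library
    partial_library = [5.0, 5.1, 0.0]         # iPhone: a subset, different order

    mac_labels = cluster(full_library)        # person A -> cluster 0, B -> cluster 1
    phone_labels = cluster(partial_library)   # person B -> cluster 0, A -> cluster 1

    # The same person ends up with different cluster IDs on each device, so
    # syncing the analysis results would mean reconciling incompatible cluster
    # numberings. Syncing ground truth instead (e.g. the user tagging the photo
    # with embedding 0.1 as "Alice") lets every device map whichever local
    # cluster contains that photo to "Alice" and re-derive consistent names
    # on its own.
    ```

    Under that reading, syncing only the ground truth keeps the per-device analysis independent while still converging on the same names everywhere.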
     
