Hi all,

The Photos announcements left me a bit confused. I'm hoping someone with access to the beta can clear some things up for me:

- Is the "People" tab significantly different from the "Faces" feature we previously had? Is the process of tagging people any different than before (other than better auto-detection), and do existing tags carry over?

- How does this device-level computer vision combine with iCloud Photo Library? Does each device perform its object recognition independently, or are the recognized features stored in the photo metadata and synced back down to other devices? My wife & I have 2 Macs, 2 iPads, and 2 iPhones linked to our iCloud Photo Library; I hope they're not all recognizing faces & features independently of each other.

- I also wonder what happens with photos that are no longer stored locally on any device and are only available in the cloud. Do they get temporarily downloaded, examined, tagged, and sent back up? Or are they simply no longer searchable? With 20,000 photos in the library and fewer than 1,000 on most devices, having this be purely device-level could be a big issue.

- Finally, is this object recognition only happening on iOS, or can I let the Macs handle most of the work? The thought of my poor iPhone grunting through the entire library is not very appealing; I'd much rather let the workhorse handle that. On the other hand, it would be most welcome if I could start doing some face tagging on the iPad. So far we've had to use the laptop for that, and the face tags wouldn't even sync to the other Mac.

Thanks for any clarification!