Something along these lines was discussed on the latest ATP podcast.
I mentioned the same thing in a recent post: simplifying things for this discussion, there are three behaviors:
1) Face ID returns no match, and the difference is outside a specific threshold: the user can enter the passcode, and the enrolled facial data (i.e., the mathematical representation of the face) is not affected.
2) Face ID returns no match, but the difference is within a specific threshold (some combination of ongoing ML plus "facial logic" [all that data analysis by Apple]); when the user then enters the passcode, the enrolled facial data is updated.
3) Face ID is a clear, positive match; the white paper indicates it still performs some difference comparison against the enrolled Face ID data.
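The three behaviors above can be sketched as a single decision. To be clear, the threshold values and the function name here are invented for illustration; Apple doesn't publish the real matching logic:

```python
# Hypothetical sketch of the three Face ID outcomes described above.
# Both thresholds are assumptions, not Apple's real values.
MATCH_THRESHOLD = 0.90   # assumed: scores at/above this are a clear positive match
UPDATE_THRESHOLD = 0.75  # assumed: near-misses at/above this may trigger an update

def attempt_unlock(match_score: float, passcode_ok: bool) -> tuple[bool, bool]:
    """Return (unlocked, template_updated) for a given similarity score."""
    if match_score >= MATCH_THRESHOLD:
        # Case 3: clear positive match; no passcode needed, no update here.
        return True, False
    if match_score >= UPDATE_THRESHOLD and passcode_ok:
        # Case 2: near miss plus correct passcode; enrolled data is updated.
        return True, True
    # Case 1: outside the threshold; the passcode unlocks, data untouched.
    return passcode_ok, False
```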
So it's #2 where there's potential for the facial data to get, let's say, "contaminated" with someone else's face data. A few thoughts:
- The data is only updated within the threshold, which I'd imagine is a pretty tight tolerance
- The passcode also has to be known (this isn't really an attack vector; why bother to hack Face ID if you already know the passcode? <wink>)
- The enrolled data might tilt back to the original user as they unlock the phone over time (i.e., the repeated stream of successful matches reinforces the original face as valid)
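That last point can be sketched as a simple moving-average update. This is purely a toy model, assuming the enrolled representation is a vector that gets nudged slightly on each within-threshold unlock (the blend rate and the 2-D vectors are invented for illustration):

```python
ALPHA = 0.05  # assumed blend rate: each update nudges the template only slightly

def update_template(template: list[float], sample: list[float]) -> list[float]:
    """Blend a new face sample into the enrolled template (exponential moving average)."""
    return [(1 - ALPHA) * t + ALPHA * s for t, s in zip(template, sample)]

owner = [1.0, 0.0]     # toy stand-in for the enrolled user's representation
intruder = [0.0, 1.0]  # a single within-threshold "contamination" event

template = update_template(owner, intruder)  # contaminated once
for _ in range(50):                          # then the owner keeps unlocking
    template = update_template(template, owner)
```

After one contamination event and fifty normal unlocks, the intruder's contribution decays to a fraction of a percent, which is the "tilts back to the original user" intuition.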