True, but I think in this case the source data would be similar enough to match. My reasoning is this:
- The standard storage resolution for fingerprint images (e.g. in NIST/FBI systems) is 500 DPI, which is the same resolution as the AuthenTec sensor.
- The FBI photographic database (for example) captures the skin's surface ridges, whereas the Apple source image captures the same ridges' subdermal foundations. From what I can tell from the RF patents, that means the Apple source image will simply be cleaner, free of surface cuts. (*)
Hence my conjecture: running the same feature-extraction/matching algorithm across a photographic database would likely produce results that match the templates Apple stores.
In other words, I think there's a good reason Apple said only that the data cannot be used to recreate the original image: recreating the image is not necessary for a photo-DB match.
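To make the conjecture concrete, here's a minimal sketch of why a stored feature template (not the image) could still be matched against an external photo database: both sides are reduced to the same minutiae-style representation first, and matching happens in feature space. Everything here, including the point format, thresholds, and scoring, is an illustrative assumption, not Apple's or the FBI's actual algorithm.

```python
# Toy minutiae matcher: templates are lists of (x, y, angle) points,
# with no raw image retained. All thresholds are made-up for illustration.
from math import hypot

def match_score(template_a, template_b, dist_tol=10.0, angle_tol=15.0):
    """Fraction of minutiae in A that have a close counterpart in B."""
    matched = 0
    for (xa, ya, ta) in template_a:
        for (xb, yb, tb) in template_b:
            if hypot(xa - xb, ya - yb) <= dist_tol and abs(ta - tb) <= angle_tol:
                matched += 1
                break
    return matched / max(len(template_a), 1)

# Template from a (cleaner) subdermal scan...
sensor = [(100, 50, 30), (120, 80, 45), (200, 150, 90)]
# ...vs. a template extracted from a surface photograph of the same
# finger, with small positional noise from cuts/distortion.
photo = [(102, 52, 32), (118, 79, 44), (205, 148, 92)]

print(match_score(sensor, photo))  # prints 1.0: same ridges, same features
```

The point is that "cannot reconstruct the image" and "cannot be matched against another image's features" are different guarantees; only the first was claimed.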
Regards.
Edit: again, this is just a technical discussion. I don't think Apple will send anything anywhere. However, I do think they're understandably glossing over some things in order to calm people's worries.
(*) Which brings up all the talk about "only detecting live skin". Yes, the sensor reads down to the depth where live skin usually is, but because of the variation among real humans, the base RF sensor by itself cannot determine whether it is reading live skin. According to the AuthenTec patents, other methods must be used for that purpose.