No matter how good it eventually becomes, you still have to point the camera at your face for the scan and comparison.
By the time I pull the phone out of my pocket, it's already unlocked with my finger and I'm on the home screen.
While I have it lying flat on my desk, the camera points at the ceiling. I don't have to lift it up to my face to do the things I need done.
At what point in the sequence the unlock happens is irrelevant. You still have to look at your device to use it. Whether you look at it after it's unlocked (in your example, placing your finger on the reader as you pull the phone out of your pocket) or before the unlock happens, at the end of the day it's the same steps.
TouchID:
- pull out phone
- place finger on sensor
- look at phone to use it
(Steps one and two can be done in either order.)
FaceID:
- pull out phone
- look at phone
- swipe up
Both the documentation and an image shown during the keynote indicate that it will work at angles, such as when the phone is lying flat on your desk.