
MacRumors

macrumors bot
Original poster
Apr 12, 2001

A new feature in the latest iOS 13 beta makes users appear to be making eye contact during FaceTime calls by looking directly at the camera, when they're actually looking away from it at the image of the other person on their screen.

[Image: The FaceTime Attention Correction feature as demoed by Will Sigmon (@Wsig)]

The new "FaceTime Attention Correction" feature, first spotted by Mike Rundle on Twitter, can be turned on and off in the FaceTime section of the Settings app, although it only appears to work on iPhone XS and XS Max devices in the third iOS 13 beta sent out to developers on Tuesday.

Why the feature is limited to these devices for now remains unknown. It clearly relies on some form of image manipulation to achieve its results, so the algorithms may simply require the more advanced processing power of Apple's latest devices.

Rundle predicted in 2017 that FaceTime attention correction would be introduced by Apple in "years to come," but its apparent inclusion in iOS 13, due to be released this fall, has surprised and impressed him.

For more details on the many features coming to iPhones with iOS 13, be sure to check out our comprehensive MacRumors roundup.

Update: As demonstrated by Dave Schukin, the feature uses ARKit depth maps to adjust eye position so that the user appears to be looking at the camera.

How iOS 13 FaceTime Attention Correction works: it simply uses ARKit to grab a depth map/position of your face, and adjusts the eyes accordingly. Notice the warping of the line across both the eyes and nose. pic.twitter.com/U7PMa4oNGN - Dave Schukin 🤘 (@schukin) July 3, 2019
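ARKit's face tracking on TrueDepth-equipped devices already exposes the kind of per-frame depth and face-geometry data Schukin refers to. Apple hasn't said how FaceTime itself performs the warp, but a minimal sketch of reading that raw data in a third-party app could look like this (the `FaceDepthCapture` class is illustrative; only the ARKit calls are documented API, and the actual eye-warping step is omitted):

```swift
import ARKit
import AVFoundation

/// Minimal sketch: read the per-frame face depth data and mesh that an
/// attention-correction effect could warp. Only the data access is shown.
final class FaceDepthCapture: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        guard ARFaceTrackingConfiguration.isSupported else { return } // needs a TrueDepth camera
        session.delegate = self
        session.run(ARFaceTrackingConfiguration())
    }

    // Per-frame callback: the TrueDepth camera's depth map rides along with each ARFrame.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        if let depth = frame.capturedDepthData {
            _ = depth.depthDataMap // CVPixelBuffer a warp shader could sample
        }
    }

    // Face-anchor updates: the tracked face mesh, including the eye region.
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let face as ARFaceAnchor in anchors {
            _ = face.geometry.vertices
        }
    }
}
```

From there, the eye region of the mesh would presumably be re-rendered or warped toward the camera, which is likely where the heavy per-frame processing comes in.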


Article Link: Attention Correction Feature in iOS 13 Beta Enables Appearance of Eye Contact During FaceTime Calls [Updated]
 
There would be a lot of complex AI work involved in doing this, I imagine. Basically the phone needs to track where you're looking relative to the screen, mask out your eyes and then generate a new set of eyes looking elsewhere, all in real time as a processed FaceTime video feed.

I reckon only the neural engine in the A12 can do this well enough, but I'm surprised the XR isn't supported, since it has the same Face ID and SoC hardware.
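For what it's worth, ARKit already reports a per-frame gaze estimate on TrueDepth devices, so the "track where you're looking" step wouldn't necessarily need custom neural-network work. A rough sketch of reading that estimate, assuming a standard ARSession delegate (the `GazeReader` class is hypothetical; the ARFaceAnchor properties are real):

```swift
import ARKit

// Sketch of the "where is the user looking" step: ARKit exposes per-eye poses
// and an estimated fixation point on every face-anchor update.
final class GazeReader: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let face as ARFaceAnchor in anchors {
            let leftEye = face.leftEyeTransform   // pose of the left eyeball (simd_float4x4)
            let rightEye = face.rightEyeTransform // pose of the right eyeball
            let fixation = face.lookAtPoint       // point the gaze converges on, in face space
            // An attention-correction effect would compare `fixation` with the
            // camera's position and re-render the eye region accordingly.
            _ = (leftEye, rightEye, fixation)
        }
    }
}
```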
 
I think it might have slipped in, and they wanted to keep it secret as a major new feature for the new iPhone this fall... we'll see if they remove it from the next beta.
 
Probably "needs" the faster neural processor, forced obsolescence, blah blah blah.
 
Now if they can only make it look like I'm there when I'm in the next room getting a snack.

Complete with audio adjustments so it sounds like I'm still there rather than shouting from the next room, and lip syncing? I'm in. Opens up the possibility of FaceTiming on the loo.

Not sure those images explain the feature very well at all.

Hopefully someone with blue or green eyes can chime in with some pictures.
 