
Mansu944

macrumors 6502a
Mar 11, 2012
746
1,921
This is just the beginning.
In the near future, AI combined with quantum computers will be able to alter our physical reality in similar ways, bringing everything we see into question.

Near future? There’s a real possibility this is already being done.
 

matrix07

macrumors G3
Jun 24, 2010
8,226
4,891
Yes, but to illustrate the feature convincingly, the subject needs two screen captures without moving - one with the feature turned off, one with it turned on.

That’s exactly what it is. One, on the left, without the feature (taken with the Camera app, which doesn’t have it), and another, on the right, with the FaceTime app, which does. Both are looking at the same spot, the screen (captured with the front-facing camera).
 

ddtmm

macrumors regular
Jul 12, 2010
223
739
"so maybe the software algorithms require the more advanced processing power of Apple's latest devices."

So in other words, drains your battery faster...
 

grjj

macrumors 6502
Apr 5, 2014
269
530
That’s exactly what it is. One, on the left, without the feature (taken with the Camera app, which doesn’t have it), and another, on the right, with the FaceTime app, which does. Both are looking at the same spot, the screen (captured with the front-facing camera).

No, that's not what these two images are. The subject moved and changed expression, so it's hard to discern exactly what the algorithm changed vs. what changed in reality.
 

code-m

macrumors 68040
Apr 13, 2006
3,638
3,398
There would be a lot of complex AI work involved in doing this, I imagine. Basically, the phone needs to track where you’re looking relative to the screen, mask out your eyes, and then generate a new set of eyes looking elsewhere. All in real time, to become a processed FaceTime video feed.

I reckon only the Neural Engine in the A12 can do this well enough, but I’m surprised the XR isn’t supported, since it has all the same Face ID and SoC hardware.

DeepFake is nearing fulfillment, and it will be marketed as fun, similar to Memoji/Animoji.
"so maybe the software algorithms require the more advanced processing power of Apple's latest devices."

So in other words, drains your battery faster...

The minimum requirement, I suspect, is an A12 with 4GB of RAM or more.
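
For illustration, the pipeline quoted above might look roughly like this minimal Swift sketch, assuming ARKit's TrueDepth face tracking. The correctedEyeRegion helper is a hypothetical placeholder for the non-public eye-synthesis step, not an Apple API.

import ARKit

// Minimal sketch of the quoted pipeline: track gaze, mask out the eyes,
// synthesize replacements, and hand the result to the outgoing video feed.
final class GazeCorrector: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        // TrueDepth face tracking requires Face ID hardware.
        guard ARFaceTrackingConfiguration.isSupported else { return }
        session.delegate = self
        session.run(ARFaceTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let face = frame.anchors.compactMap({ $0 as? ARFaceAnchor }).first else { return }

        // Step 1: track where the user is looking relative to the device.
        let gaze = face.lookAtPoint      // gaze target in face-anchor space
        let left = face.leftEyeTransform // per-eye poses
        let right = face.rightEyeTransform

        // Steps 2-3: mask out the eye regions and composite in replacements
        // that appear to look at the camera (stubbed out below).
        let corrected = correctedEyeRegion(in: frame.capturedImage,
                                           gaze: gaze, left: left, right: right)
        _ = corrected // would be handed to the FaceTime-style video pipeline
    }

    // Hypothetical: warp or re-render the eye pixels toward the camera.
    private func correctedEyeRegion(in buffer: CVPixelBuffer, gaze: simd_float3,
                                    left: simd_float4x4, right: simd_float4x4) -> CVPixelBuffer {
        return buffer // placeholder; the real model would redraw the eyes
    }
}

ARKit already exposes the tracking half of this (lookAtPoint and the per-eye transforms); the synthesis half is presumably what needs the A12's Neural Engine to keep up at video frame rates.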
 

DrMotownMac

Contributor
Jul 11, 2008
383
207
Michigan
I'd actually LOVE this feature, not just on iOS (iPhone and iPad), but on the Mac as well! It's so annoying when I'm talking to people and I WANT to look at the screen to be able to see them, but I'm compelled to look at the camera so they think I'm looking at them (even though I'm not). It would be pretty amazing if they could get this to work seamlessly and make it unnoticeable to the person on the other side.
 
  • Like
Reactions: ksec and DoctorTech

Khedron

Suspended
Sep 27, 2013
2,561
5,755
There would be a lot of complex AI work involved in doing this, I imagine. Basically, the phone needs to track where you’re looking relative to the screen, mask out your eyes, and then generate a new set of eyes looking elsewhere. All in real time, to become a processed FaceTime video feed.

I reckon only the Neural Engine in the A12 can do this well enough, but I’m surprised the XR isn’t supported, since it has all the same Face ID and SoC hardware.

Prior art:

 
  • Like
Reactions: DoctorTech

BWhaler

macrumors 68040
Jan 8, 2003
3,788
6,244
This is brilliant. Classic Apple magic.

I hope they extend this feature to Mac and iPad. Needed for enterprise conference calls.
 
  • Like
Reactions: DoctorTech

code-m

macrumors 68040
Apr 13, 2006
3,638
3,398
The software encourages social interaction by manipulating the image to add a smile and blur your stubble.

What if both parties are looking elsewhere? Here’s to fake interaction. When FaceTime was initially introduced, it was all about engaging the other party through video chat; now, it seems, we are so preoccupied with other things while video chatting that our phones have to fake it, instead of us being rude and saying with our eyes that we just don’t care enough :p
 

matrix07

macrumors G3
Jun 24, 2010
8,226
4,891
No, that's not what these two images are. The subject moved and changed expression, so it's hard to discern exactly what the algorithm changed vs. what changed in reality.

You just have to focus on his eyes. This feature didn’t do anything else. (No change in movement or expression, just the direction of eye contact.)
 

testcard

macrumors 68040
Apr 13, 2009
3,720
2,761
Northumbria, UK
You just have to focus on his eyes. This feature didn’t do anything else. (No change in movement or expression, just the direction of eye contact.)
We get all that; the problem is that what we seem to be seeing is a couple of random selfies. To make this work, the framing, lighting, and facial expression need to be the same in each image, the only difference being what the correction algorithm is doing to the second image.
 
  • Like
Reactions: matrix07

v0lume4

macrumors 68020
Jul 28, 2012
2,475
5,072
This is my biggest gripe when trying to get a selfie with other people, constantly having to say "LOOK AT THE CAMERA, NOT THE SCREEN!"
Oh boy. I know! I'd say a solid 90% minimum don't look at the camera. You can always see the eyes appearing to look slightly off into the distance in the finished photo.
 

symphony

macrumors 68020
Aug 25, 2016
2,191
2,547
Sorry, but this is a horrible comparison picture. I can't even tell what it's supposedly doing. My guess, based on the description, is that it "fixes your eyes," so to speak, but the two shots being entirely different images isn't very useful.

Are you blind? No it’s not: you can clearly see him looking down on the left, and looking straight ahead on the right. lol
 

chrono1081

macrumors G3
Jan 26, 2008
8,453
4,158
Isla Nublar
Are you blind? No it’s not: you can clearly see him looking down on the left, and looking straight ahead on the right. lol

Not blind at all; his comparison picture is terrible and gives no initial frame of reference. We don't know if he's looking down and to the left in the second image, because it's a completely different angle, his facial expression is completely different, and his distance to the camera is different too. This gives the appearance of him taking these pictures at wildly different times.

A good set of pictures would have him in the exact same spot, one picture of him looking down and left and the next picture turning the feature on (in the same position) to show the difference.
 

AngerDanger

Graphics
Staff member
Dec 9, 2008
5,452
29,003
Update: As demonstrated by Dave Schukin, the feature uses ARKit depth maps to adjust eye position to make it appear the user is looking at the camera.
That's INSANE! It seems to capture your face as a texture, use depth mapping to model it, tilt the model up, and projection-map the texture back on. In real time. ****, I love technology!
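
For anyone wanting to poke at the idea, here is a rough sketch of that capture-model-tilt-reproject loop in SceneKit terms, assuming ARSCNFaceGeometry. The tilt angle is a made-up constant, and a real implementation would remap the mesh's texture coordinates to the camera image (as in Apple's face-texture sample code) rather than naively assigning the frame as a diffuse map.

import ARKit
import SceneKit
import CoreImage

// Sketch: build a face mesh from TrueDepth data, project the captured
// frame onto it as a texture, then tilt the model toward the camera.
func tiltedFaceNode(device: MTLDevice, frame: ARFrame, anchor: ARFaceAnchor) -> SCNNode? {
    guard let geometry = ARSCNFaceGeometry(device: device) else { return nil }
    geometry.update(from: anchor.geometry) // fit the mesh to the user's face

    // Projection-map the captured camera image back onto the mesh.
    let ciImage = CIImage(cvPixelBuffer: frame.capturedImage)
    if let cgImage = CIContext().createCGImage(ciImage, from: ciImage.extent) {
        geometry.firstMaterial?.diffuse.contents = cgImage
    }

    // Tilt the textured model up so the eyes appear to meet the camera.
    let node = SCNNode(geometry: geometry)
    node.eulerAngles.x = -0.12 // hypothetical tilt of roughly 7 degrees
    return node
}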
 