It’s scary that you view this as normal.
It is normal, though. Never heard of a Snapchat filter?
It’s scary that you view this as normal.
Can you elaborate on your issue with this particular feature?
This is just the beginning.
In the near future, AI combined with quantum computers will be able to alter our physical reality in similar ways, bringing everything we see into question.
Not sure you really need to wait that long with deepfakes.
Not sure those images explain the feature very well at all.
Yes, but to illustrate the feature convincingly, the subject needs two screen captures without moving - one with the feature turned off, one with it turned on.
The software encourages social interaction by manipulating the image to add a smile and blur your stubble
...and giving the appearance that you haven’t slept in 48 hours.
If only the software could get rid of my beard. But it’s an interesting feature.
That’s exactly what it is. One - on the left - without the feature (using the Camera app, which doesn’t have it), and another - on the right - with the FaceTime app, which does. Both are looking at the same spot, the screen (captured with the front-facing camera).
There would be a lot of complex AI work involved in doing this, I imagine. Basically the phone needs to track where you’re looking relative to the screen, mask out your eyes, and then generate a new set of eyes looking elsewhere - all in real time, as a processed FaceTime video feed.
I reckon only the Neural Engine in the A12 can do this well enough, but I’m surprised the XR isn’t supported, since it has the same Face ID and SoC hardware.
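For what it's worth, the tracking half of that guess is already exposed by public API on TrueDepth devices: ARKit's face tracking reports a per-frame gaze estimate and per-eye transforms. A minimal Swift sketch of just that half (the GazeTracker name is made up for illustration; Apple hasn't published how FaceTime actually does this, and the eye-synthesis step has no public API):

```swift
import ARKit

// Sketch only: reads the gaze signal an attention-correction pipeline
// would need. The "generate new eyes" rendering stage is not public.
final class GazeTracker: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        // Face tracking requires the TrueDepth camera (X/XS/XR-class hardware).
        guard ARFaceTrackingConfiguration.isSupported else { return }
        session.delegate = self
        session.run(ARFaceTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let face as ARFaceAnchor in anchors {
            // lookAtPoint is the estimated gaze target in face-anchor space;
            // the per-eye transforms give the eyeball poses a renderer would
            // have to repose toward the camera.
            let gaze = face.lookAtPoint
            let leftEye = face.leftEyeTransform.columns.3
            let rightEye = face.rightEyeTransform.columns.3
            print("gaze:", gaze, "eyes:", leftEye, rightEye)
        }
    }
}
```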
"so maybe the software algorithms require the more advanced processing power of Apple's latest devices."
So in other words, drains your battery faster...
No, that's not what these two images are. The subject moved and changed expression, so it's hard to discern exactly what the algorithm changed vs. what changed in reality.
You just have to focus on his eyes. This feature didn’t do anything else - no movement or expression change, just the direction of eye contact.
We get all that; the problem is that what we seem to be seeing is a couple of random selfies. To make this work, the framing, lighting and facial expression need to be the same in each image - the only difference being what the correction algorithm is doing to the second image.
This is my biggest gripe when trying to get a selfie with other people, constantly having to say "LOOK AT THE CAMERA, NOT THE SCREEN!"
Oh boy. I know! I'd say a solid 90% minimum don't look at the camera. You can always see the eyes appearing to look slightly off into the distance in the finished photo.
Sorry, but this is a horrible comparison picture. I can't even tell what it's supposedly doing. My guess, based on the description, is that it "fixes your eyes," so to speak, but the two shots being entirely different images isn't very useful.
Are you blind? No it’s not - you can clearly see him looking down on the left, and looking straight ahead on the right. lol
Update: As demonstrated by Dave Schukin, the feature uses ARKit depth maps to adjust eye position to make it appear the user is looking at the camera.
That's INSANE! It seems to capture your face as a texture, use depth mapping to model it, tilt the model up, and projection-map the texture back on. In real time. ****, I love technology!
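For anyone wanting to poke at this: the two inputs that description implies are readable from public API today. With face tracking running, every ARFrame already pairs the color image with a TrueDepth depth map; the warp that actually moves the eyes is Apple's unpublished step. A hedged Swift sketch (the helper name is made up):

```swift
import ARKit

// Sketch only: gathers the color + depth inputs a Schukin-style eye warp
// would consume. The warp itself is not part of any public API.
func eyeCorrectionInputs(from frame: ARFrame) -> (color: CVPixelBuffer, depth: AVDepthData)? {
    // capturedDepthData is only populated on TrueDepth (face-tracking) frames.
    guard let depth = frame.capturedDepthData else { return nil }
    return (frame.capturedImage, depth)
}
```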