The obvious problem with gesture-based phone navigation is the difficulty of one-handed use.

It is being tested on handheld phones, but the true benefit is with AR/VR glasses. The sensor will be mounted on the device itself or on a wrist-based device like the Apple Watch; that is where AirGestures meets its purpose.

People have to be taught how to incorporate new forms of gestures and actions; that is how they transition into new technology.

Yep... maybe two years later. But if it works as described, then Google is implementing it better.

People don't like it when other companies do something better, and they take it personally for some reason. Google does search, maps, low-light photography, voice assistance, and now face authentication better than Apple. Does that mean Apple will not improve? Absolutely not; it will pressure them to improve and try harder.
 
If it does a complete map of your face when you set it up, then yes, it can scan at odd angles. I'm not sure why you think it's that difficult. If the setup involves me looking up, then it knows my face at that angle and can unlock without me looking at it. And this may be odd, but just because Face ID works well for you doesn't mean it works well for other people. Heck, it's one of the main reasons why I ditched my iPhone X for a Galaxy S10+.

Yeah, and you have to be at a pretty specific angle for it to do that. I'd take Touch ID over Face ID any day of the week.

The logic in this post is amazing.

  • The iPhone and Pixel both use dot projectors.
  • Dot projectors have a narrow range of operation in both angle and distance (which I already posted).

Please explain how the Pixel can scan faces at odd angles when it’s using the same system as the iPhone?
 
"Other phones require you to lift the device all the way up, pose in a certain way, wait for it to unlock, and then swipe to get to the homescreen,"

Google, you didn't even try. On my XS I have to:

- Reach for my phone
- Grab it
- Lift it
- Face my phone
- Wait
- Put a finger on the screen
- Swipe up
- Lift the finger

So many steps; the Pixel is better.
Hmmm, I lift my X and swipe, and it unlocks without even a pause.
 
Defenders of the Apple cause will try to denigrate anything not Apple, but this Pixel 4 seems superior in many ways to the awkward, slow, and unreliable Face ID.
I disagree about Face ID; for me it's fast and reliable, and in some ways better than Touch ID. As for the generic Apple defender and critic tirades, let's give them a rest.

Add to that no freaking horrid notch and a decision to forgo the Animoji cringefest, and the Pixel 4 is in many, many ways the iPhone I wish Apple made.
Notch, bezels, pop-up flash, etc.: pick your poison. But the Pixel 4 is the phone you wish Apple made? I'll buy the iPhone 11; you buy the Pixel 4.

Nothing in the 2019 iPhone rumor mill suggests anything but basically the same old stuff getting sold again out of Cupertino with a new coat of lipstick, so the Pixel 4 at the moment IS the interesting phone to watch.
What’s different about the Pixel 4 that makes it the phone to watch?
Yes, that face recognition. Slim chance someone who finds my lost phone would have a picture of me. Anyway, there is an option to require an eye wink, and a picture can't wink.

So is Apple going to copy Google and allow Face ID to work sideways?
Doesn't it work any which way on an iPad already?
 
"the phone will open as you pick it up, all in one motion the phone will open as you pick it up, all in one motion..."

The phone will open? Are they saying the phone will unlock before you even look at it? I'll need to see this in testing.
 
The logic in this post is amazing.

  • The iPhone and Pixel both use dot projectors.
  • Dot projectors have a narrow range of operation in both angle and distance (which I already posted).

Please explain how the Pixel can scan faces at odd angles when it’s using the same system as the iPhone?

OK, so the dot projector essentially is mapping your face by projecting thousands of dots on it, correct? I'll assume yes, because I've read about these and you claim you know how they work.

So, when you set up the facial recognition software, why would it not be possible to teach it everything it would see with the phone lying flat on a desk? Your neck, chin, cheeks, nose, etc., from that kind of orientation. Even the Face ID setup does that to some degree, because you have to move your head all around, right?

Then, as you're picking up your phone, it's recognizing all that information from that map and saying "yep, that looks right"; it gets a little closer, "yep, that looks right"; repeat that multiple times in one second, and by the time your phone is in a position to be usable, it's ready to go.
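
That incremental loop might look something like the minimal sketch below. To be clear, this is only an illustration of the idea; the names (enrolled_templates, capture_depth_frame, match_score) are hypothetical, not Google's or Apple's actual APIs.

```python
import time

UNLOCK_THRESHOLD = 0.95  # required match confidence (made-up value)
FRAME_BUDGET_S = 1.0     # keep scanning for ~1 second while the phone is lifted

def try_unlock(enrolled_templates, capture_depth_frame, match_score):
    """Repeatedly compare incoming depth frames against face maps
    enrolled at many angles; unlock as soon as one matches well."""
    deadline = time.monotonic() + FRAME_BUDGET_S
    while time.monotonic() < deadline:
        frame = capture_depth_frame()  # one dot-projector depth map
        best = max(match_score(frame, t) for t in enrolled_templates)
        if best >= UNLOCK_THRESHOLD:
            return True   # "yep, that looks right"
    return False          # give up and fall back to the passcode
```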

BTW, you've claimed multiple times that the only reason you see Google implementing this is because their system is "slow." If they are using the same tech as Apple, how can it be slower than Apple's?
 
Yes, there are many comments on here about how Apple is better because they wait until they get it right before releasing something, but FaceID isn't it. It is nowhere near as fluid as TouchID, nor is it as fast. I have to manually type my passcode in far more on FaceID than I ever had to on TouchID.

And before people talk about it being so much more secure: show me where and by whom TouchID protection has been cracked, and why it is all of a sudden so inferior.
You either have never used Face ID or you're doing it wrong. I just pick up my phone, and swipe it like I did with the old phones. It's so fast, it's seamless. And I use the passcode far less often than I did with Touch ID (sloppy food, exercise, showers, etc often defeated Touch ID).

As for hacking Touch ID...

https://www.tips-and-tricks.co/lifehacks/iphone-touch-id/

https://www.cnet.com/news/apples-touch-id-still-vulnerable-to-hack-security-researcher-finds/
 
P.S. Those gestures are cool, but doesn't that mean the dot projector and associated hardware are always on to track the gestures? Wouldn't that be bad for people with sensitivity to this light?
 
OK, so the dot projector essentially is mapping your face by projecting thousands of dots on it, correct? I'll assume yes, because I've read about these and you claim you know how they work.

So, when you set up the facial recognition software, why would it not be possible to teach it everything it would see with the phone lying flat on a desk? Your neck, chin, cheeks, nose, etc., from that kind of orientation. Even the Face ID setup does that to some degree, because you have to move your head all around, right?

Then, as you're picking up your phone, it's recognizing all that information from that map and saying "yep, that looks right"; it gets a little closer, "yep, that looks right"; repeat that multiple times in one second, and by the time your phone is in a position to be usable, it's ready to go.

BTW, you've claimed multiple times that the only reason you see Google implementing this is because their system is "slow." If they are using the same tech as Apple, how can it be slower than Apple's?
Because Apple has the best SoC.
 
The obvious problem with gesture-based phone navigation is the difficulty of one-handed use.

Yep, single-handed gestures do pose a challenge. No doubt.

I was thinking more of those scenarios (say, the kitchen) where your hands (or hand) may be a bit messy, or driving, where steering a 2,000 lb. vehicle at 60+ mph while trying to touch a little target on a touchscreen can, at times, not be the best idea.
 
OK, so the dot projector essentially is mapping your face by projecting thousands of dots on it, correct? I'll assume yes, because I've read about these and you claim you know how they work.

So, when you set up the facial recognition software, why would it not be possible to teach it everything it would see with the phone lying flat on a desk? Your neck, chin, cheeks, nose, etc., from that kind of orientation. Even the Face ID setup does that to some degree, because you have to move your head all around, right?

Then, as you're picking up your phone, it's recognizing all that information from that map and saying "yep, that looks right"; it gets a little closer, "yep, that looks right"; repeat that multiple times in one second, and by the time your phone is in a position to be usable, it's ready to go.

BTW, you've claimed multiple times that the only reason you see Google implementing this is because their system is "slow." If they are using the same tech as Apple, how can it be slower than Apple's?

Meanwhile FaceID gets all that information as soon as you pick up your iPhone. It doesn’t need to “scan things” or “start working” beforehand. And it’s still fast.

Now with iOS 13 FaceID is even faster (30% according to Apple). Think about that for a second. Apple is making changes to their software to improve FaceID performance. Apple has been working on FaceID for years while this is Google’s first (real) attempt. There’s no way Google will have their code as highly optimized (fast) as Apple on their first attempt.

Further, Apple processors are far ahead of anything Google can use. Google developed their own external processor for handling things like image processing or machine learning. This is because Exynos and Snapdragon processors lag in these areas. The problem with an external processor is it can’t move data nearly as fast as if it were integrated into your SoC. Google can’t even match the NPU in the A12, let alone the upcoming A13.

So Apple has a several year head start, better software and superior hardware to work with. And we don’t even know what other changes are in the next iPhone regarding FaceID (dot projector/camera or processor).
 
Defenders of the Apple cause will try to denigrate anything not Apple, but this Pixel 4 seems superior in many ways to the awkward, slow, and unreliable Face ID.

Add to that no freaking horrid notch and a decision to forgo the Animoji cringefest, and the Pixel 4 is in many, many ways the iPhone I wish Apple made.

Nothing in the 2019 iPhone rumor mill suggests anything but basically the same old stuff getting sold again out of Cupertino with a new coat of lipstick, so the Pixel 4 at the moment IS the interesting phone to watch.

LOL. The notch is so big that it is phone-wide. LOL. Otherwise, you hail something not yet in use (the bunny still in the bag) over something proven by two years of user usage, which will get a refresh even before the bunny is out of the bag.

That is wise.
 
I'm not sure about the X or XR phones, but my XS is crazy fast with Face ID. It only fails if I happen to yawn or look away after bringing up the phone. Glasses, hats: nothing seems to faze it. I don't have a twin, sibling, or a child that looks like me, but even if I did, I probably wouldn't care if they got on my phone anyway.

Also, did I read something about orientation? My iPad Pro works in landscape every day, but I don't think I've ever needed to unlock my iPhone in landscape or upside down. I might try that later if I think about it, but the swipe-up part of unlocking trains you to pick up the phone in portrait, right side up, every time.
 
Meanwhile FaceID gets all that information as soon as you pick up your iPhone. It doesn’t need to “scan things” or “start working” beforehand. And it’s still fast.

Now with iOS 13 FaceID is even faster (30% according to Apple). Think about that for a second. Apple is making changes to their software to improve FaceID performance. Apple has been working on FaceID for years while this is Google’s first (real) attempt. There’s no way Google will have their code as highly optimized (fast) as Apple on their first attempt.

Further, Apple processors are far ahead of anything Google can use. Google developed their own external processor for handling things like image processing or machine learning. This is because Exynos and Snapdragon processors lag in these areas. The problem with an external processor is it can’t move data nearly as fast as if it were integrated into your SoC. Google can’t even match the NPU in the A12, let alone the upcoming A13.

So Apple has a several year head start, better software and superior hardware to work with. And we don’t even know what other changes are in the next iPhone regarding FaceID (dot projector/camera or processor).

OMG, Apple is making changes to improve FaceID performance??? No Way!! I can't wait for Tim Cook to say this is "the most powerful iPhone ever".

I do enjoy how "it's not about doing it first, it's about doing it better" only applies on these threads when it's convenient for Apple products. I use both Apple and Android products, and each does certain things better than the other. Just because Apple has had a several-year head start doesn't mean this feature in the Pixel won't be better, or at least on par with Face ID.

The fact is neither of us has tried this on the next iPhone or on the Pixel, so any comments about speed are just assumptions that we can banter about all day, and they likely won't change either of our opinions.

You asked me a direct question on how this feature can work using the installed technology, and I answered it.
 
OMG, Apple is making changes to improve FaceID performance??? No Way!! I can't wait for Tim Cook to say this is "the most powerful iPhone ever".

I do enjoy how "it's not about doing it first, it's about doing it better" only applies on these threads when it's convenient for Apple products. I use both Apple and Android products, and each does certain things better than the other. Just because Apple has had a several-year head start doesn't mean this feature in the Pixel won't be better, or at least on par with Face ID.

The fact is neither of us has tried this on the next iPhone or on the Pixel, so any comments about speed are just assumptions that we can banter about all day, and they likely won't change either of our opinions.

You asked me a direct question on how this feature can work using the installed technology, and I answered it.

Where did I ever talk about “doing it first”? Oh right, nowhere.

Let’s see, where else has Apple had a head start that nobody else caught up on their first try (or ever)?

  • TouchID? It took almost 2 years before anyone had a sensor as fast and accurate.
  • 64-bit processors? 1.5 years before the first Android device had 64-bit, and Android has stayed years behind and is still playing catch-up.
  • Inline hardware encryption? Uh oh, sorry to bring this one up. It took Google/Pixel a whopping 7 years after Apple to implement this.
  • NVMe storage? I don't know; is there even an Android device with this yet?

You want me to go on?

You said:
if they are using the same tech as apple, how can it be slower than apples?
And I gave you several reasons. Mainly because they AREN'T using the same tech as Apple. Apple has superior software and vastly superior processors. You know, the parts that have to process the facial data from those sensors in real time. Apple can design silicon in their processor for this specific task; Google is stuck using an off-the-shelf processor.
 
And I gave you several reasons. Mainly because they AREN'T using the same tech as Apple. Apple has superior software and vastly superior processors. You know, the parts that have to process the facial data from those sensors in real time. Apple can design silicon in their processor for this specific task; Google is stuck using an off-the-shelf processor.

You'd need to explain, then, why the Pixel camera produces better-quality pictures than the iPhone camera. I believe it is common knowledge that Google has superior software design talent.
 
I really like the gesture control feature, though. I feel it could be useful in the car, where it would let me avoid having to focus on where I'm tapping on the phone. To be clear, I don't mean I use my phone while driving; I mean something like a simple gesture to skip a song, or a quick gesture to show me the route preview/options in Maps.
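
In spirit, that kind of gesture-to-action mapping could be as simple as the sketch below. This is purely illustrative; the gesture names and the media/navigation objects are hypothetical placeholders, not Soli's or any real API.

```python
# Hypothetical dispatch table from recognized in-air gestures to actions.
GESTURE_ACTIONS = {
    "swipe_left": "previous_track",
    "swipe_right": "next_track",
    "double_tap": "show_route_options",
}

def on_gesture(gesture: str, media_player, nav_app) -> None:
    """Handle one recognized gesture without the user touching the screen."""
    action = GESTURE_ACTIONS.get(gesture)
    if action == "next_track":
        media_player.next_track()
    elif action == "previous_track":
        media_player.previous_track()
    elif action == "show_route_options":
        nav_app.show_route_options()
    # Unrecognized gestures fall through silently, so sensor noise
    # near the phone doesn't trigger anything.
```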
 
You'd need to explain, then, why the Pixel camera produces better-quality pictures than the iPhone camera. I believe it is common knowledge that Google has superior software design talent.

First off, taking pictures with a camera is not the same as facial recognition. So why are you bringing that up as if it’s relevant to facial recognition technology?

As to Google and “software superiority”...

  • Why was Google’s first version of Face Unlock so bad you could unlock it with a picture printed on regular paper?
  • Why can’t Google get color management to work in Android?
  • Why can’t Google do a proper implementation of audio/MIDI in Android for the huge market of mobile musicians or match the latency of 5+ year old iOS devices?
  • Why can’t Google fix their ridiculous security/update problem, after years and years and numerous “initiatives”?
  • Why couldn’t Google develop their own programming language for Android (instead of stealing Java)?
  • Why can’t Google develop their own advanced file system to compete with APFS?

So much for your “common knowledge”.
 