:rolleyes::rolleyes:

Apple typically supports iPhones for four years or so from the time of release. Google only gives the Pixels what, 2-3 years of support? Seriously - matching Apple here shouldn't be that hard.
It's unfortunate that Apple doesn't backport camera features to older phones. There's really no good reason why the iPhone X can't do Smart HDR or Depth Control while even the Pixel 1 has Night Sight and Background Blur from the Pixel 3.
What does Maps have to do with any of this? Are you only capable of deflection when you lack a counter argument?

It's simple physics and high-school geometry. You previously said:



I'll tell you why. The dot projector in the iPhone projects 30,000 dots. To work while the phone is flat on a desk, the dot projector in the Pixel 4 would need at least 480,000 dots, or 16x as many.

What happens to the area of a square when you double the length of a side? The area goes up by 4x. The dot projector in the iPhone has a narrow field of view. This picture by Apple (while not exactly accurate) sums it up better than I could.

View attachment 850615


People seem to think "hey, just make a wide-angle dot projector". It's not that simple. Just as the area of a square goes up by 4x when you double the length of a side, so too does the area of a cone (which is what this image shows) grow when you increase the angle. In order to keep the number of dots that reach the user's face the same (so you have enough resolution to do an accurate 3D map), you need to increase the total number of dots so that each square inch still gets the same number of dots.

Imagine a "cone" the size of the red lines I drew on the picture (so you could capture facial data at an angle you'd see when your phone is on your desk). You can already see the area it has to cover would go up substantially. My numbers show a cone with an angle of 140 degrees would require 16x the number of dots, or the 480,000 figure I gave above. This presents several problems.

  • Power Draw: Unless Google has some magical IR cameras that suddenly became 16x as good at picking up IR light, then those dots will need to be projected at the same intensity as you see with the iPhone. 16x as many dots requires 16x as much power. That's going to be a lot of heat/energy to dissipate.
  • Projecting at Extreme Angles: I don't know of any dot projector that can project dots over such a wide angle. None of the ones currently on the market are anywhere near wide enough.
  • Flood Illuminator: As with the dot projector, the flood illuminator would also need to light up a very wide area. This is much simpler to do than with a dot projector, but you still need a flood illuminator with 16x the output. Again, much more power and heat generated.
  • IR Cameras: Illuminating the subject is only one problem. You also need a very wide angle IR camera in order to capture that data.
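For what it's worth, the 16x figure above can be reproduced with the solid angle of a cone, Ω = 2π(1 − cos(θ/2)). Apple doesn't publish the TrueDepth projector's field of view, so the 33° narrow cone below is an assumed value chosen purely to illustrate the scaling:

```python
import math

def cone_solid_angle(full_angle_deg):
    """Solid angle (in steradians) of a cone with the given full apex angle."""
    half_angle = math.radians(full_angle_deg) / 2
    return 2 * math.pi * (1 - math.cos(half_angle))

# Assumed field of view for the narrow (iPhone-style) cone -- not a
# published spec, just a value that illustrates the scaling.
narrow = cone_solid_angle(33)
wide = cone_solid_angle(140)    # the wide cone discussed above

area_ratio = wide / narrow
dots_needed = 30_000 * area_ratio    # keep dot density constant

print(f"{area_ratio:.1f}x the area")        # about 16x
print(f"{dots_needed:,.0f} dots needed")    # about 480,000
```

If the real narrow cone were wider than 33°, the ratio (and the dot count) would come out lower; the point is only that the count grows with solid angle, not linearly with the angle itself.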

So no, it's not as simple as "teaching it while laying flat on a desk". That's about as ridiculous as saying "why doesn't Intel simply double the performance of their processor every year?"

Why do you think the Pixel 4 has two IR cameras at opposite corners? Is that to just get the same performance as a single IR camera for Face ID? You seem to be pretty knowledgeable about all these numbers. Where can I learn more about them? Is the power consumed by the IR cameras linearly proportional to the number of dots?
 

Did you even read my post? IR cameras don’t consume more power - the dot projector and flood illuminator do. And yes, producing the same light output over a larger area requires more power.
 
My mistake; I misread. There's also efficiency to consider, similar to LED vs. incandescent. So a 16x increase in dots = a 16x increase in power? Where do you get these numbers?
 

A 16x increase in area, while maintaining the same light intensity, would require 16x the power.
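As a back-of-the-envelope sketch (the 0.1 W base figure is invented; only the linear scaling is the point):

```python
def power_for_constant_irradiance(base_power_w, area_scale):
    """Irradiance = power / area, so holding irradiance constant while
    scaling the illuminated area scales the required power linearly."""
    return base_power_w * area_scale

base_power = 0.1    # hypothetical wattage for the narrow cone
print(f"{power_for_constant_irradiance(base_power, 16):.1f} W")    # 1.6 W
```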
 
Comparing features is like splitting hairs - that really doesn’t make a difference for me. I’m more interested in how stable and secure the device runs over the long haul.
 
Good that Google's adding additional security and biometrics. Definitely a bit of a copy-paste on that front.

However, the article is misleading on gesture recognition, as Android has had several gestures for a long time. Gestures are more a copy from the old BBRY and PalmOS days, which even Apple copied on that front.

Gestures probably date back even earlier. Most of the gestures found in both iOS and Android have been mainstays in those OSes for a long time. If you want to see an OS that was built around these gestures, find an old BB10 device to play with for a while. It was, IMHO, the best OS. Unfortunately, having the "best OS" doesn't save you if you've run your company like garbage.
 
What does Maps have to do with any of this? Are you only capable of deflection when you lack a counter argument?

It doesn't. You brought up a bunch of stuff that Android/Google hasn't done or just did vs. Apple, and I was pointing out a bunch of stuff that took Apple a while to catch up on. Simple as that. Then I went on to counter the rest of your points, soooo swing and a miss on your dig :).

I've also made it clear that I'm not talking about a wide-angle dot projector, and if it hasn't been clear, I'll say it again: I'm not talking about a wide-angle dot projector. On the same page now?? Ok...

All I'm saying is that when the phone senses you reaching for it and activates the dot projector, it can start to scan for your face. Depending on the angle of your phone, it may be down at your neck or up at your forehead. The picture you provided showed an image with the cone aimed perfectly at the person's face. If the phone was tilted down slightly, that cone would hit their neck/chin area, right? And if it was tilted up it would hit more of the forehead/eyes, right? So when you pick up your phone, that cone is going to hit one of those areas before it hits your face straight on, right? It's not waiting until the angle is perfect, thus speeding up the process. And this is totally a guess, but you asked a question and I provided an answer. I literally have nothing else to explain, so if you don't like that answer, or don't think that it can happen, then cool, I don't care.
 
Did you even read my post? IR cameras don’t consume more power - the dot projector and flood illuminator do. And yes, producing the same light output over a larger area requires more power.
Does it? You're the one making the claim, but there are a lot of tricks to keep the power requirements from being 16x. It could run at lower power, or aim toward the target so it covers a smaller area. Or it could scan, so it's really putting out fewer dots at a time and using less power. Or even if it is 16x more power, the increase could be negligible - if it reduces battery life by less than 0.01%, it does not matter. There is easier, lower-hanging fruit for saving power.

Either way, it's a long list of things you know nothing about, and the entire argument is worthless.
How about you prove with specs that it's using 16x the power? Not just you saying so. Prove it. You need to back it up with the specs of the part. Right now you don't know that, so you have no proof. Under your argument, I can think of plenty of ways it could use less power over a bigger area.
 

VCSEL dot projectors aren't like OLED or LCD panels - the individual dots aren't addressable, so your idea of "scanning" isn't possible. Now, they could couple several VCSEL emitters together to form an array, but that would take up far too much space in a smartphone.

And you're telling me "it is a long list of things you know nothing about"?
 
Again, provide proof, with specs versus the part Apple is using, that it takes an insane amount more power.....

Like I said earlier, you're making yourself look like an Apple fanboy, just repeating headlines and sound bites.

You're the one making the claim of 16x more power.
Now back it up and prove it with the specs......
 

Logical fallacy. I can’t provide specs on an unreleased device. But I know enough about physics to know you can’t massively increase the area and maintain intensity without also increasing power correspondingly.

Apple fanboy? The standard response of someone who can’t argue facts and has to resort to name calling.
 

Not name calling, just stating a fact about how your posts look. That is not name calling.....
People have already listed multiple ways the power draw could be reduced, followed by the most important question: does the power increase even matter? How much does it affect battery life? If the answer is near zero, then it does not matter. At the end of the day, the people doing the design know a lot more about it than some random keyboard warrior who is going to bash anything not Apple.
 

How is explaining basic physics of light “bashing”?

“Keyboard warrior”? Again with the name calling. You’re on a roll.
 
A 16x increase in area, while maintaining the same light intensity, would require 16x the power.
I'm not familiar with the physics and electronics of optics. I am an electrical engineer, so this seems very interesting. Is there literature where you can calculate the power increase vs. the area increase for the same intensity? What other ways do you think Google could achieve a wider angle of detection? Or is it simply not possible?
 

I think there are several ways they could make it work over a wider angle. I just don’t think they can make the angle wide enough so a phone laying flat on your desk can still map your face. I wouldn’t say it’s impossible, just not feasible in a smartphone with current technology.

Since you’re an electrical engineer: do you think I could double the number of transistors in a CPU on the exact same process node and somehow do twice the work while consuming the same power as before? Or double the voltage to allow much higher clock speeds and keep the power consumption the same?
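That CPU comparison maps onto the standard CMOS dynamic-power relation P ≈ αCV²f. The unit values below are placeholders, since only the ratios matter:

```python
def dynamic_power(capacitance, voltage, frequency, activity=1.0):
    """Classic CMOS dynamic power estimate: P = alpha * C * V^2 * f."""
    return activity * capacitance * voltage ** 2 * frequency

baseline = dynamic_power(1.0, 1.0, 1.0)

# Doubling the transistor count roughly doubles the switched capacitance:
print(dynamic_power(2.0, 1.0, 1.0) / baseline)    # 2.0 -- twice the power

# Doubling the voltage at the same clock quadruples the power:
print(dynamic_power(1.0, 2.0, 1.0) / baseline)    # 4.0
```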

When the Pixel 4 comes out we’ll have to see exactly what conditions it works under. My guess is it’s nowhere near as wide as people expect and you’re still going to have to have the phone pointed towards your face somewhat.
 
From what I'm reading, it is used to unlock the phone without having to make physical contact with the phone. The iPhone requires you to make physical contact with your phone to initiate the unlock process. I can see that being useful in certain circumstances.
We won't know how fast it is until someone gets their hands on it for testing.
Not really, the iPhone unlocks as soon as you look at it (you have to activate the screen first by either touching it, pressing the side button or lifting up the phone but I assume that also applies to the Pixel 4).
 
I'm not familiar with the physics and electronics of optics. I am an electrical engineer, so this seems very interesting. Is there literature where you can calculate the power increase vs. the area increase for the same intensity? What other ways do you think Google could achieve a wider angle of detection? Or is it simply not possible?
Imagine a type of 'projection system' that illuminates exactly one square centimetre (and nothing outside that square centimetre). How many of those would you need to illuminate 16 square centimetres? And how much power would you need to operate that many of them?

In reality, your 'projection system' might illuminate that square centimetre but also have some light fall onto the area around it, so if you want to illuminate a larger area you wouldn't need a full 16x of those 'projection systems', as that stray light isn't lost but counts towards illuminating the larger area. On the other hand, it might be more difficult to design a 'projection system' that covers a larger area without also illuminating the central areas to a brighter level than strictly necessary.

Thus, whether you need 16x the power, or more, or less, depends on exactly how your 'projection system' works. A flood illuminator will almost certainly produce 'stray light', a dot projector much less so (the whole point of it is to have a very targeted 'projection' of light, those dots). Then you might also be able to sweep a (very fast) 'moving light' over your target area, which naturally requires less power than illuminating the whole area for a longer time.
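The tiling thought experiment above can be put into a toy model. Every number here is invented purely for illustration: a lone narrow projector is assumed to waste 20% of its light off-target, while in a 16-tile arrangement most of each unit's spill lands on a neighbouring tile and still counts:

```python
def emitted_power(tiles, useful_w_per_tile, useful_fraction):
    """Total emitted power needed when only `useful_fraction` of each
    unit's light lands somewhere that counts toward the target area."""
    return tiles * useful_w_per_tile / useful_fraction

# Hypothetical numbers: each square centimetre needs 10 mW of useful light.
narrow = emitted_power(1, 0.010, 0.80)    # lone unit, 20% spills off-target
wide = emitted_power(16, 0.010, 0.95)     # tiled, spill mostly hits neighbours

print(f"{wide / narrow:.1f}x")    # about 13.5x -- a bit under a naive 16x
```

With different assumed spill fractions the ratio lands above or below 16x, which is exactly the "depends on how your projection system works" point.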
Or... it's scanning your face and trying to recognize a match ahead of time, so it's already unlocked when you actually want to use it. Unlike Face ID.
You mean like FaceID that is scanning my face as soon as I pick up the phone (as indicated by the iPhone showing the unlock symbol without me needing to touch the screen)?
 
Not really, the iPhone unlocks as soon as you look at it (you have to activate the screen first by either touching it, pressing the side button or lifting up the phone but I assume that also applies to the Pixel 4).
That's pretty much what I said regarding the iPhone.
The iPhone requires some physical interaction to begin the process of unlocking. Just looking at an iPhone will do nothing. You gotta wake it up first.

From what I'm reading about the Pixel 4's system, simply reaching for the phone (not actually making contact) will initiate the unlock process.
 
I understood “simply reaching for the phone” to be essentially the same as the raise-to-wake feature of iPhones.
 
Yes, there are many comments on here about how Apple is better because they wait until they get it right before releasing something, but FaceID isn't it. It is nowhere near as fluid as TouchID, nor is it as fast. I have to manually type my passcode in far more with FaceID than I ever had to with TouchID.

You may want to try setting up an alternate face scan if there is a particular change (such as putting something specific on your head, or at certain times like lying in bed) that causes it to fail. After they added that option in iOS 12, I stopped having failures push me to enter my passcode. I haven't personally had a single one since the iOS 12 beta.

If it's a particular pair of glasses, you may want to change that option as well. Certain polarizations will block the IR mapping.

Failures are meant to be analyzed after entering your passcode as false negatives, but if they were bad reads or are substantially different they won't be used for updating the model.
 