Sure. But none of that makes it superior to TouchID as TouchID is completely sunglass and facial hair neutral.
FaceID is completely wet hands or gloved hands neutral. They both have advantages and disadvantages.
It's unfortunate that Apple doesn't backport camera features to older phones. There's really no good reason why the iPhone X can't do Smart HDR or Depth Control, while even the Pixel 1 has Night Sight and Background Blur from the Pixel 3.
Apple typically supports iPhones for 4 years or so from the time of release. Google only gives the Pixels what, 2-3 years of support? Seriously - matching Apple here shouldn't be that hard.
What does Maps have to do with any of this? Are you only capable of deflection when you lack a counter argument?
It's simple physics and high-school geometry. You previously said:
I'll tell you why. The dot projector in the iPhone projects 30,000 dots. To work while the phone is flat on a desk, the dot projector in the Pixel 4 would need at least 480,000 dots, or 16x as many.
What happens to the area of a square when you double the length of a side? It goes up by 4x. The dot projector in the iPhone has a narrow field of view. This picture from Apple (while not exactly accurate) sums it up better than I could.
[Attachment 850615: Apple's illustration of the dot projector's narrow field of view]
People seem to think "hey, just make a wide-angle dot projector". It's not that simple. Just as the area of a square goes up by 4x when you double the length of a side, so too does the area of the cone (which is what this image shows) grow when you increase the angle. In order to keep the number of dots that reach the user's face the same (so you have enough resolution to do an accurate 3D map), you need to increase the total number of dots so that each square inch still gets the same number of dots.
Imagine a "cone" the size of the red lines I drew on the picture (so you could capture facial data at an angle you'd see when your phone is on your desk). You can already see the area it has to cover would go up substantially. My numbers show a cone with an angle of 140 degrees would require 16x the number of dots, or the 480,000 figure I gave above. This presents several problems.
- Power Draw: Unless Google has some magical IR cameras that are suddenly 16x as good at picking up IR light, those dots will need to be projected at the same intensity as on the iPhone. 16x as many dots requires roughly 16x as much power, and that's a lot of heat/energy to dissipate (see the rough numbers sketched after this list).
- Projecting at Extreme Angles: I don't know of any dot projector that can project dots over such a wide angle. None of the ones currently on the market are anywhere near wide enough.
- Flood Illuminator: As with the dot projector, the flood illuminator would also need to light up a very wide area. This is much simpler to do than with a dot projector, but you still need a flood illuminator with 16x the output. Again, much more power and heat generated.
- IR Cameras: Illuminating the subject is only one problem. You also need a very wide angle IR camera in order to capture that data.
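To put a rough number on the power point in the list above, here's a minimal sketch. The baseline wattages are placeholders I made up (Apple doesn't publish the VCSEL or flood-illuminator output), so only the scaling matters: holding the irradiance on the subject constant while the illuminated area grows by some factor requires the emitted power to grow by the same factor.

```python
# Irradiance E = P / A (power per unit area on the subject). Holding E
# constant while the illuminated area grows means P must grow in step.

def power_for_constant_irradiance(baseline_power_w: float, area_ratio: float) -> float:
    return baseline_power_w * area_ratio

AREA_RATIO = 16  # the 16x area figure argued above

# Placeholder baseline outputs - not real specs, purely illustrative.
for name, baseline_w in [("dot projector", 0.2), ("flood illuminator", 0.3)]:
    needed = power_for_constant_irradiance(baseline_w, AREA_RATIO)
    print(f"{name}: {baseline_w:.1f} W -> ~{needed:.1f} W "
          f"to keep the same irradiance over 16x the area")
```

Whatever the true baseline is, that multiplier is what ends up as extra heat in a phone-sized package.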
So no, it's not as simple as "teaching it while laying flat on a desk". That's about as ridiculous as saying "why doesn't Intel simply double the performance of their processor every year?"
Why do you think the Pixel 4 has two IR cameras at opposite corners? Is that just to get the same performance as a single IR camera for Face ID? You seem to be pretty knowledgeable with all these numbers. Where can I learn more about them? Is the power consumed by the IR cameras linearly proportional to the number of dots?
Did you even read my post? IR cameras don't consume more power - the dot projector and flood illuminator do. And yes, producing the same light output over a larger area requires more power.
My mistake; I misread. There's also such a thing as efficiency, similar to LED vs. incandescent. So a 16x increase in dots means a 16x increase in power? Where do you get these numbers?
Does it? You're the one making the claim, but there are a lot of tricks to keep the power requirement from being 16x. It could run at lower power, or aim at the target so it covers a smaller area, or scan so that it's only putting out a fraction of the dots at any one time. Or the 16x increase could simply be negligible: if it reduces battery life by less than 0.01%, it doesn't matter. There are easier, lower-hanging fruit for saving power.
Either way it is a long list of things you know nothing about and the entire argument there is worthless.
How about you prove with specs that it is using 16x the power? Not just you saying so. Prove it. You need to back it up with the specs of the part. Right now you do not know that, so you have no proof. Under your argument I can think of plenty of ways it could use less power over a bigger area.
VCSELs aren't like OLED or LCD panels - the dots aren't individually addressable, so your idea of "scanning" isn't possible. They could couple several VCSEL emitters together to form an array, but that would take up far too much space for a smartphone.
And you're telling me "it is a long list of things you know nothing about"?
Again, provide proof with the specs, versus the part Apple is using, that it is taking an insane amount more power.....
Like I said earlier, you are making yourself look like an Apple fanboy, just repeating headlines and sound bites.
You are the one making the claim of 16x more power.
Now back it up and prove it with the specs......
Logical fallacy. I can’t provide specs on an unreleased device. But I know enough about physics to know you can’t massively increase the area and maintain intensity without also increasing power correspondingly.
Apple fanboy? The standard response of someone who can’t argue facts and has to resort to name calling.
Not name calling, but stating a fact about how your posts are looking. That is not name calling.....
People have already listed multiple ways the power could have been reduced, followed by the most important question: does the power increase even matter? How much does it affect battery life? If the answer is close to zero, then it does not matter. At the end of the day, the people doing the design know a lot more about it than some random keyboard warrior who is going to bash anything that isn't Apple.
A 16x increase in area, while maintaining the same light intensity, would require 16x the power.
I'm not familiar with the physics and electronics of optics. I am an electrical engineer, so this seems very interesting. Is there literature where I can calculate the power increase vs. the area increase for the same intensity? What other ways do you think Google could achieve a wider angle of detection? Or is it simply not possible?
From what I'm reading, it is used to unlock the phone without having to make physical contact with the phone. The iPhone requires you to make physical contact with your phone to initiate the unlock process. I can see that being useful in certain circumstances.
Not really, the iPhone unlocks as soon as you look at it (you have to activate the screen first by either touching it, pressing the side button or lifting up the phone, but I assume that also applies to the Pixel 4).
We won't know how fast it is until someone gets their hands on it for testing.
Imagine a type of 'projection system' that illuminates exactly one square centimetre (and nothing outside that square centimetre). How many of those would you need to illuminate 16 square centimetres? And how much power would you need to operate that many of them?
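The same thought experiment in a few lines of code, with a made-up per-unit wattage (the real per-emitter figures aren't public); it's nothing more than the tiling arithmetic:

```python
def total_power(target_area_cm2: float, unit_area_cm2: float = 1.0,
                unit_power_w: float = 0.05) -> float:
    """Power to light target_area_cm2 with non-overlapping 1 cm^2 'units'.

    unit_power_w is a placeholder, not a real emitter spec.
    """
    units_needed = target_area_cm2 / unit_area_cm2
    return units_needed * unit_power_w

print(total_power(1))   # 1 cm^2  -> 0.05 W
print(total_power(16))  # 16 cm^2 -> 0.8 W, i.e. 16x the power
```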
Or... it's scanning your face and trying to recognize a match ahead of time, so it's already unlocked when you actually want to use it. Unlike Face ID.
You mean like Face ID, which is scanning my face as soon as I pick up the phone (as indicated by the iPhone showing the unlock symbol without me needing to touch the screen)?
That's pretty much what I said regarding the iPhone.
I understood "simply reaching for the phone" to be essentially the same as the raise-to-wake feature of iPhones.
The iPhone requires some physical interaction to begin the process of unlocking. Just looking at an iPhone will do nothing. You gotta wake it up first.
From what I'm reading about the Pixel 4's system, simply reaching for the phone (not actually making contact) will initiate the unlock process.
Yes, there are many comments on here about how Apple is better because they wait until they get it right before releasing something, but FaceID isn't it. It is nowhere near as fluid as TouchID, nor is it as fast. I have to manually type in my passcode far more with FaceID than I ever had to with TouchID.