Point of clarification: 10 times the data points does not automatically mean 10 times the accuracy, at least from a usability standpoint. I can draw a straight line, for example, with two points or with 50 and be equally accurate. Faces are obviously more complicated, but my point is that more data points aren't necessarily more accurate. In fact, if the logic behind the data points isn't smart, such a setup could theoretically be less accurate, interpreting subtle changes in facial expression as a different face.
This reminds me so much of the megapixel camera war. More pixels does not always mean a better picture.
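The straight-line analogy can even be checked numerically: fitting a noiseless line through 2 points or through 50 recovers exactly the same slope and intercept. A hypothetical sketch, not anyone's actual face-matching code:

```python
import numpy as np

# A noiseless line: any two points determine it exactly,
# so 48 extra points add no accuracy at all.
slope_true, intercept_true = 2.0, 1.0

for n in (2, 50):
    x = np.linspace(0.0, 10.0, n)
    y = slope_true * x + intercept_true
    slope, intercept = np.polyfit(x, y, 1)  # least-squares line fit
    print(n, round(slope, 6), round(intercept, 6))
```

Both runs recover the same line; extra points only start to matter once the measurements themselves are noisy.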
 
I thought it was said that it would take years to equal, let alone exceed, Apple's technology in this area.

The Chinese company must have some amazing engineers. /s

It all depends on the facial recognition software.

It would be interesting to compare the facial recognition software on the Vivo to the facial recognition software on the iPhone.

This hasn’t even made it to market yet and we don’t know the price... So yes, the prediction is still possible.
 
I don't think Face ID or Touch ID is a failure, but I think the term "it just works" is an inaccurate, overused meme that no longer describes Apple products.

I do, however, think it comes with a caveat nowadays.

(When it works,) it just works. Having the full ecosystem is a glorious thing, and an amazing help when something isn't working as it should and you have to troubleshoot it.
 
Yes but does it have a notch?
I'm content with Touch ID and iOS 10 on my 7+
 
Time-of-flight LiDAR technology is old school. Very few professional-grade sensors still use that method by itself; most are phase-based, and some blend the two methods. Let's be honest, though: 30,000 points spread across your face is a ton of points. Why in the hell would you need more, other than to one-up someone for the sake of saying you can?
This is so familiar. Spec one-upmanship is an age-old tradition, typically waged by manufacturers who either couldn't match a competitor's overall user experience or whose products weren't fundamentally different, so they resorted to an irrelevant numbers game. Remember the hertz wars ("our gigahertz is bigger than your gigahertz, so our computer must be better") and the megapixel wars ("our digicam has more megapixels, so it must be a better digicam than theirs")? We also have the K wars ("our TV has more K, so it must be a better TV than theirs") and now the points wars ("our facial detection system has more points, so it must be better than theirs"). There are many others. The common denominator is that they all capitalize on consumer ignorance, certainly a reliable parameter.
 
Vivo nex
 

Attachment: IMG_20180627_204731.jpg
This is just a flat-out dishonest claim by Vivo (if the article is completely accurate). How can the number of data points be comparable if the underlying technology is completely different? According to this article, Vivo is using time-of-flight, which is very bad at tracking anything but motion (like Microsoft's 2nd-gen Kinect). Apple didn't use time-of-flight for face recognition; they only use it as the proximity sensor that triggers the rest of the main Face ID components. The main technology Apple uses for Face ID is structured light, which is far more accurate at 3D spatial recognition than time-of-flight can ever hope to be.

To explain why time-of-flight is bad for face recognition no matter how many data points they add: it comes down to physics, especially the speed of light. Time-of-flight relies on detecting extremely tiny differences in the time it takes light to bounce back from the target. Light is extremely fast, and at the short distance between your face and where you normally hold your phone, the difference between how far point A and point B on your face are from the phone is too small for any modern CPU to resolve. (The accuracy is limited by the clock speed: a CPU can only detect differences no smaller than a single clock cycle, and in one cycle of a 3 GHz CPU, light travels almost 10 cm. It could barely tell that you have a nose.) It might be fine for applications such as autonomous-driving sensors or AR/VR in large spaces, where a resolution of a few centimeters isn't much of an issue when combined with other sensors, but unless this article is wrong about which technology they use, I wouldn't trust it any more than the pattern lock on an Android phone.
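For what it's worth, the per-cycle arithmetic in the parenthetical is easy to check on its own; whether real sensors are actually bound by a CPU clock is a separate question:

```python
# Sanity check on the figure quoted above: how far light travels
# during one cycle of a 3 GHz clock (one-way, not round trip).
C = 299_792_458.0   # speed of light, m/s
CLOCK_HZ = 3e9      # 3 GHz

per_cycle_m = C / CLOCK_HZ
print(f"{per_cycle_m * 100:.1f} cm per clock cycle")  # ~10 cm
```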

As someone who helped build the Kinect v1 and v2 sensors: you couldn't be more wrong in your understanding of the physics of SLS or ToF sensors. First, measurements are not based on the CPU clock; in fact, most devices haven't used the CPU clock for sync in two decades. ToF sensors are pulsed and scanned across a scene from two separate apparatuses, and a sensor like the one in the Kinect v2 has sub-millimeter accuracy. Last time I checked, sub-millimeter is smaller than 10 centimeters. If you wanted to get into an intellectual discussion of the Kinect or other ToF or SLS sensors like those used by Apple or Vivo, I'd be happy to oblige. Frankly, though, you are so wrong in your assumptions about these devices that I question whether you've even seen one in person.
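As a rough sketch of what phase-based ToF measurement means in practice: depth is derived from the phase shift of an amplitude-modulated signal, not from counting clock cycles. The modulation frequency and phase resolution below are illustrative assumptions, not actual Kinect specifications:

```python
import math

C = 299_792_458.0   # speed of light, m/s
F_MOD = 80e6        # modulation frequency (assumed, typical order of magnitude)

def depth_from_phase(phase_rad):
    """Round-trip phase shift of the modulated signal -> one-way distance (m).

    d = c * phase / (4 * pi * f_mod); the factor of 4 pi accounts for the
    light traveling out and back (2x) over one modulation period (2 pi).
    """
    return C * phase_rad / (4 * math.pi * F_MOD)

# An (assumed) phase resolution of ~0.003 rad at 80 MHz corresponds
# to sub-millimeter depth resolution:
print(f"{depth_from_phase(0.003) * 1000:.2f} mm")
```

The point is that resolution is set by how finely phase can be measured, which is a matter of signal-to-noise and integration time, not of any CPU clock.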
 
The accuracy is limited by the clock speed: a CPU can only detect differences no smaller than a single clock cycle, and in one cycle of a 3 GHz CPU, light travels almost 10 cm.
Sure you didn’t drop a decimal place?
 
Shocker! Apple reveals a new idea and Vivo steals it then makes a newer better version?
They're using a totally different method to capture facial data, so no, they didn't "steal" anything.
The concept has been around long before the iPhone X.

Apple's method is not entirely unique either. They use the same point-scanning array tech used in the MS Kinect sensor.
Apple was the first to miniaturize the emitter and camera array for use in a cell phone. The software behind it is pretty slick as well.

It's all going to come down to implementation.
Apple's Face ID is far from perfect, but it's fast, when it works.
 
I don't think Face ID or Touch ID is a failure, but I think the term "it just works" is an inaccurate, overused meme that no longer describes Apple products.
It never was an accurate meme. But as a philosophy, it did have, and still has, truth to it.

Setting up a new Apple product today is even easier than it was when Apple actually used “it just works” as a slogan.

Yesterday I had to replace my iPhone X because I took mine on multiple trips around a theme-park lazy river. The camera was fogged and Face ID was disabled; otherwise the phone still worked. To set up the new one, all I had to do was place it near the old one for a few seconds. After an hour the new phone had re-downloaded all of my apps. I had to enter my Apple ID password for security and the CVV numbers from my credit cards for Apple Pay. An hour after bringing the new phone home, it was virtually indistinguishable from the one it replaced. And I spent that hour preparing dinner for my family.

It just worked.
 
Sure you didn’t drop a decimal place?
He profoundly misunderstands how sample rates work: he simply took the speed of light and divided it by 3 billion, assuming that's as good as things get, as if the CPU clock governed the entire system we're talking about and FFTs didn't exist.
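The sample-rate point can be illustrated with a toy simulation: even with coarse, noisy single-shot timing, averaging many pulses shrinks the error on the mean roughly as 1/sqrt(N). All numbers here are made up for illustration:

```python
import random
import statistics

random.seed(0)
true_delay = 1.0   # arbitrary units
jitter = 0.1       # single-shot timing noise (10% of the signal)

def estimate(n_pulses):
    """Average n_pulses noisy single-shot measurements of the delay."""
    samples = [true_delay + random.gauss(0, jitter) for _ in range(n_pulses)]
    return statistics.fmean(samples)

# Error on the averaged estimate falls as more pulses are combined.
for n in (1, 100, 10_000):
    err = abs(estimate(n) - true_delay)
    print(n, f"{err:.5f}")
```

Real sensors go further still (phase detection, correlation, hardware integration), but even this naive averaging beats single-shot resolution by orders of magnitude.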
 
He profoundly misunderstands how sample rates work: he simply took the speed of light and divided it by 3 billion, assuming that's as good as things get, as if the CPU clock governed the entire system we're talking about and FFTs didn't exist.
I’ve got almost zero knowledge of any of that stuff.
The only thing I recognised was FFT. That's Fast Fourier Transform, right?
Even being very simplistic here on my iPhone calculator, I thought the maths was slightly off?
 
10x the accuracy??

... I want to see this in action NOW. Seems like sales talk to me; hot air over a loaf of baloney.
You should probably take that up with the MR author. Vivo doesn't claim 10x the accuracy. Check the source article. Tim Hardwick simply made an unjustified leap in logic: "10x more data points must mean 10x the accuracy".

Pro tip: if you ever want to see anything in action, try YouTube... everything is on YouTube.
 
My Face ID on the iPhone X didn't just work
It just works for me... as long as I am facing the camera correctly and not outside in direct sunlight with my mirrored sunglasses on. (It works just fine with my sunglasses in the shade or in my car.) I'm kind of amazed how well it does work.
 
To explain why time-of-flight is bad for face recognition no matter how many data points they add: it comes down to physics, especially the speed of light. Time-of-flight relies on detecting extremely tiny differences in the time it takes light to bounce back from the target. Light is extremely fast, and at the short distance between your face and where you normally hold your phone, the difference between how far point A and point B on your face are from the phone is too small for any modern CPU to resolve. (The accuracy is limited by the clock speed: a CPU can only detect differences no smaller than a single clock cycle, and in one cycle of a 3 GHz CPU, light travels almost 10 cm. It could barely tell that you have a nose.) It might be fine for applications such as autonomous-driving sensors or AR/VR in large spaces, where a resolution of a few centimeters isn't much of an issue when combined with other sensors, but unless this article is wrong about which technology they use, I wouldn't trust it any more than the pattern lock on an Android phone.

Moreover, if you have 10 times as many data points, you'll need a lot more processing power, even if those data points are completely inaccurate!
 
Seems pretty silly to claim your tech is 10x better. They're simply assuming that the current 30k is all that Apple is capable of. The reality is that they may have the ability to go much further but chose to stick to the lower 30k for a reason. It's like claiming a 256GB iPhone is better than a 128GB iPhone, when the user has no reason to need the additional space so they didn't buy the bigger version.
 
Seems pretty silly to claim your tech is 10x better. They're simply assuming that the current 30k is all that Apple is capable of. The reality is that they may have the ability to go much further but chose to stick to the lower 30k for a reason. It's like claiming a 256GB iPhone is better than a 128GB iPhone, when the user has no reason to need the additional space so they didn't buy the bigger version.
They didn't claim it and they aren't assuming anything. MR made the claim of 10x more accurate.
I know it works for many people flawlessly, but that doesn't mean it works for everyone.
Going by the responses you've been getting, it doesn't work flawlessly for many at all. Most of them have included caveats to cover the occasions when it doesn't work. In terms of your response to the original poster, you were right. It doesn't "just work". It works most of the time. 'Cept when it doesn't. Hence all the caveats.:D
 