Detecting person from camera/image

Discussion in 'iOS Programming' started by coolnitman, Aug 23, 2016.

  1. coolnitman macrumors member

    Joined:
    Nov 29, 2012
    #1
Has anyone here worked with OpenCV to detect a person (full body) in an image or camera feed? Or is there any other way to detect a person?
     
  2. teagls macrumors regular

    Joined:
    May 16, 2013
    #2
Yes, I think the simplest approach is looking for a face. There are many algorithms out there for face detection, and I'm sure OpenCV has a few, like eigenfaces. Once you find the face, you can assume the general position of the body, because it will be relative to the face.
     
  3. xStep macrumors 68000

    Joined:
    Jan 28, 2003
    Location:
    Less lost in L.A.
    #3
As I recall, Apple has face detection built into a framework; Core Image's CIDetector can find faces.
     
  4. coolnitman thread starter macrumors member

    Joined:
    Nov 29, 2012
    #4
I'm using the Haar cascade full-body XML to detect people, but it's not working perfectly. Sometimes things other than the person also get detected.
    --- Post Merged, Aug 24, 2016 ---
Anyone using the HOG descriptor? I read somewhere that its results are more accurate. If anyone has code for it, or a link that would help, please share.
     
  5. teagls macrumors regular

    Joined:
    May 16, 2013
    #5
Therein lies the grand challenge of computer vision and machine learning. If you really want the best results, building a model with deep learning will give you that. Otherwise, computer vision techniques like SIFT, eigenfaces, HOG descriptors, etc. will work and provide good results, but they won't be perfect. You need to decide the trade-off yourself: how much is the extra accuracy worth to you, and how much effort are you willing to put in to achieve it?
     
  6. coolnitman thread starter macrumors member

    Joined:
    Nov 29, 2012
    #6
I'm planning to try the HOG descriptor to detect people. It's built into OpenCV, but I couldn't find a good tutorial or anything that explains how to implement it.
     
  7. AxoNeuron, Aug 24, 2016
    Last edited: Aug 24, 2016

    AxoNeuron macrumors 65816

    AxoNeuron

    Joined:
    Apr 22, 2012
    Location:
    The Left Coast
    #7
    About a year ago I started learning how to build neural networks. Recently I built neural networks to do general handwriting recognition.

This is NOT a simple task. Even the mediocre algorithms are staggeringly intricate and complex. Only in the last few years have the top-tier machine learning experts created algorithms that produce reliably accurate results.

I would suggest you start with simpler tasks, such as building a handwritten-digit recognition algorithm on the MNIST dataset. Then you can move on to more complicated tasks.

The problem isn't learning how to build a neural network; they're relatively simple from a mathematical perspective, and really anyone could build one. The problem is learning how to build the right neural network. Even with simple three-layer networks, the number of possible architectures is infinite. Should you have 1,000 neurons in the middle layer, or 2,000? Should you feed in 40x40 images, or will higher resolutions be needed?

These architectural decisions are called 'hyperparameters', and trying to find the right 'size' of neural network (along with other such decisions) is called hyperparameter optimization. THAT is the most challenging part of machine learning, and it takes years to get really good at it. The main problem is that if you build the network too big, it is essentially 'too powerful': it will find 'correlations' that are mere artifacts of the data and don't actually exist. That means it will do really well on the data you trained it with, but fall apart on real-world data (overfitting). If you build it too small, it won't have enough 'power' to model the problem, and it won't even do well on the training data, let alone real-world data (underfitting).
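To make the 'size' point concrete, here's a minimal NumPy sketch of a three-layer network where the hidden width is exactly the kind of hyperparameter being discussed (toy XOR data; all names and values are my own illustration, not anyone's production setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR, the classic problem a linear model can't solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

hidden = 8  # the hyperparameter: too small underfits, too large can overfit

W1 = rng.normal(0, 1, (2, hidden))
b1 = np.zeros(hidden)
W2 = rng.normal(0, 1, (hidden, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
lr = 0.5
for _ in range(2000):
    # Forward pass through the single hidden layer.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))

    # Backward pass (mean squared error, sigmoid derivatives).
    d_out = 2 * (out - y) / len(X) * out * (1 - out)
    d_h = d_out @ W2.T * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)
```

Re-running this with `hidden = 1` versus `hidden = 8` shows the underfitting side of the trade-off directly; overfitting needs a bigger dataset to demonstrate, but the mechanism is the same.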
     
  8. teagls macrumors regular

    Joined:
    May 16, 2013
    #8
You honestly make it sound harder and scarier than it really is. Just use Caffe or TensorFlow; there are so many examples that it's easy to get started. The real problem is having the right hardware for training: if you don't have an NVIDIA GPU, you are going to be waiting around for a long time.

Also, why are you building your own neural net for image classification, especially handwriting recognition? That's essentially a solved problem. If it's merely for learning purposes, okay, but there are countless seasoned researchers with PhDs in machine learning and entire labs dedicated to this stuff. There are several state-of-the-art convolutional neural networks built exactly for image classification: AlexNet, VGG, GoogLeNet. There is no reason to reinvent the wheel. The work they've done has been well reviewed and evaluated, and best of all it's publicly available.

If you have a Linux machine and an NVIDIA GPU, just install DIGITS 4. Or purchase some GPU time on Amazon Web Services. https://devblogs.nvidia.com/parallelforall/deep-learning-object-detection-digits/ It's very easy to use: you almost literally just drop in your data and train, and that's it. You don't have to write a single line of code.
     
  9. AxoNeuron macrumors 65816

    AxoNeuron

    Joined:
    Apr 22, 2012
    Location:
    The Left Coast
    #9
I built my own neural net purely as a learning exercise, to learn how to do it from the ground up. I think it really helps to start there, literally writing your own neural net code without Caffe etc.; you get a 'gut' feel for how it does what it does.

Definitely agreed on using Linux. After a year of intensive use, you'll save a lot of money running your own machine as opposed to AWS.
     
  10. teagls macrumors regular

    Joined:
    May 16, 2013
    #10
Big props for doing it from the ground up. I agree with you that the best way to learn it is to actually implement it yourself.
     