
1458279 (Original poster)
The camera on the iPod touch 4 (iPT 4) is 0.9 MP, vs 5 MP and 8 MP on the iPhone 4 and 4S, and I'm using the iPT for testing (I don't have an iPhone).

I've been playing with some augmented reality (AR) code and was wondering what effect the low-quality camera would have on the end result.

In other words, if it works one way on the 0.9 MP camera, will it work differently on the 5 MP or 8 MP camera?

I'm guessing the code would have to be adjusted to compensate: since the data comes from different digital video cameras, the frames would be different sizes.
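
One way around that, sketched below in Swift (the thread's era would have used Objective-C, but it's the same AVFoundation): ask the capture session for a fixed-size preset, so every device hands your detection code identically sized frames. The preset constant is real API; inputs, outputs, and error handling are omitted.

```swift
import AVFoundation

// Request a fixed frame size so the same detection code sees
// identically sized frames on the iPT 4 and on the iPhones.
let session = AVCaptureSession()
if session.canSetSessionPreset(.vga640x480) {
    session.sessionPreset = .vga640x480   // 640x480 on every device
}
```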

Anyone have any background or ever tested on these different cameras?
 

ArtOfWarfare (macrumors G3)
I've never done anything with OCR/AR before, but my understanding was that if they were handling images in real time, they had to downsample.

It seems to me that it's a bit pointless to deal with more pixels than can be presented to the user at once. In the case of the iPhone 5, that's 1136x640 = 727,040 pixels, or about 0.727 megapixels. So it seems to me that no matter how low the camera resolution is, it's still going to be higher than the screen resolution, and thus you'll always be downsampling.
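
A rough Swift sketch of that guess, assuming Core Image: shrink each frame to roughly the screen's pixel count before analyzing it. The 1136x640 figure is the one from this post; CILanczosScaleTransform is a real Core Image filter, but treat the whole thing as an illustration, not a tuned pipeline.

```swift
import CoreImage

// Downsample a camera frame to about screen resolution before
// analysis, on the theory that extra pixels are wasted work.
func downsampleToScreen(_ frame: CIImage) -> CIImage {
    let screenPixels: CGFloat = 1136 * 640              // ~0.727 MP
    let framePixels = frame.extent.width * frame.extent.height
    let scale = sqrt(screenPixels / framePixels)        // linear scale factor

    let filter = CIFilter(name: "CILanczosScaleTransform")!
    filter.setValue(frame, forKey: kCIInputImageKey)
    filter.setValue(scale, forKey: kCIInputScaleKey)
    filter.setValue(1.0, forKey: kCIInputAspectRatioKey)
    return filter.outputImage!
}
```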

That's just a guess.

If you're not handling images in real time (I suppose OCR might ordinarily not be done in real time?), it might make a difference.
 

1458279 (Original poster)
I'm no expert, but my understanding was that the screen doesn't really matter, because you're taking a live video feed, grabbing frames, and analyzing the data in each frame to determine what to do.

If you're looking for something to follow (face detection, barcodes, etc.), you then insert/overwrite the frame data and put it back into the stream of frames.
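
That grab-analyze-reinsert loop, sketched in Swift: AVCaptureVideoDataOutput and its delegate callback are the real AVFoundation API for pulling frames off the stream; process(_:) is a hypothetical stand-in for whatever face/barcode analysis you run.

```swift
import AVFoundation

// Receives each frame the camera's video data output delivers.
final class FrameGrabber: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // One frame of raw camera pixels, straight from the stream.
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        process(pixelBuffer)   // analyze, overlay, then hand off to the display
    }

    private func process(_ buffer: CVPixelBuffer) {
        // Hypothetical: face/barcode search and frame rewriting go here.
    }
}
```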

_IF_ this is correct, the higher-res camera would have larger frames, and that should be more accurate (truer to the source).

For example, the iPT's cameras are very grainy, while my Kodak Zi8 is VERY clear. If I were to grab the data (frames) from a grainy input, it could produce different outcomes when trying to find faces/barcodes/etc.

I don't know if the same code could be used on both devices, or even how you would adjust your code to account for a lower-res camera.
 

ArtOfWarfare (macrumors G3)
Can you pick up faces in pictures your iPhone takes? You managed to do that using no more pixels than the iPhone has to display images with. Therefore, it seems to me that your image-processing software shouldn't need so much as a single megapixel to detect a face or barcode; it should be able to look at only every 9th pixel and still pick up a face from that.
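
A Swift sketch of the every-9th-pixel idea, assuming a BGRA pixel buffer (a stride of 3 in both x and y touches one pixel in nine). The CVPixelBuffer calls are real API; what you feed each sampled pixel into is left open.

```swift
import CoreVideo

// Visit one pixel in nine (stride 3 in both axes) of a BGRA frame.
func subsample(_ buffer: CVPixelBuffer, step: Int = 3) {
    CVPixelBufferLockBaseAddress(buffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(buffer, .readOnly) }

    let width = CVPixelBufferGetWidth(buffer)
    let height = CVPixelBufferGetHeight(buffer)
    let bytesPerRow = CVPixelBufferGetBytesPerRow(buffer)
    guard let base = CVPixelBufferGetBaseAddress(buffer) else { return }
    let pixels = base.assumingMemoryBound(to: UInt8.self)

    for y in stride(from: 0, to: height, by: step) {
        for x in stride(from: 0, to: width, by: step) {
            let offset = y * bytesPerRow + x * 4   // 4 bytes per BGRA pixel
            _ = pixels[offset]                     // feed this into the detector
        }
    }
}
```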

This is all theoretical for me; I've never tried implementing anything that actually processed an image.
 

1458279 (Original poster)
Again, I'm no expert, to say the least, but I understand the display has nothing to do with this.

In other words, you have 3 things:
1. camera
2. output data/stream from the camera
3. display

You grab the data, search for patterns/colors or whatever, process the data (change colors/data/etc.), and feed the result to the display.

So the display really wouldn't be a part of the process of finding faces or barcodes.

The data stream from the 0.9 MP camera is less than 20% of the data from the 5 MP camera.

So the 5 MP picture/frame would either be over 5x the size or, for the same framing, pack in more info per inch.

In other words, if your face fills 80% of the frame on a 0.9 MP camera vs a 5.0 MP camera, the 5.0 MP frame should have far more data showing the nose/eyes/etc.

With that much more data you should be better able to tell what is what.

So if the face is 70% of the data, that's about 0.63 MP vs 3.5 MP of face data (megapixels, not megabytes).
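
A quick Swift check on those numbers. Note the units: megapixels count pixels, while the bytes per frame depend on the pixel format; BGRA at 4 bytes per pixel is assumed here, as is the 70% face coverage from the post.

```swift
// Face data and per-frame size for the two cameras discussed above,
// assuming a face covering 70% of the frame and BGRA (4 bytes/pixel).
let cameras: [(name: String, pixels: Double)] = [
    ("iPT 4", 0.9e6), ("iPhone 4", 5.0e6)
]
for camera in cameras {
    let faceMP = camera.pixels * 0.7 / 1e6          // 0.63 MP vs 3.5 MP
    let frameMB = camera.pixels * 4 / 1_048_576     // ~3.4 MB vs ~19.1 MB
    print("\(camera.name): \(faceMP) MP of face, \(frameMB) MB per frame")
}
```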

As far as the display goes, I would guess that the iPhone would just be less grainy. (I don't have an iPhone, so I have to guess.) In fact, I don't know for a fact that face recognition works on the iPT 4.
 