
ArtOfWarfare

macrumors G3
Original poster
As far as I know, there hasn't been a really great-feeling first-person game for iOS yet... everything is always crippled by the lack of physical joysticks and buttons. (Correct me if I'm wrong on this.)

I was thinking about how this could be overcome. You could require a physical controller for your game, but then you guarantee that nobody plays it.

I was thinking about how the Wii had only a single joystick but still managed to have amazing first-person games. I think a lot of people have thought of this and tried using motion controls to replicate it, but that never works well because of motion drift - you have to keep re-zeroing it.

The Wii didn't have that problem, though - you never needed to zero it - because it didn't use the accelerometer or gyroscope for aiming. It used the IR sensor + sensor bar for that, and that was always absolute - there was never any drift.

iOS has a similar sensor - the camera. Has anyone tried doing this? Is it possible to use the camera to track motion and use it for aiming in a game?

Can anyone think of a reason it wouldn't work? I figured that, so it has something specific to track, I could have the game ask you to build your own sensor bar - just a piece of paper with a bold red circle, taped to something in front of you.

Or am I all wrong on this? Does motion drift no longer exist? If that isn't an issue, why are there still no good first-person games on iOS? Or did I miss them?
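
Roughly what I'm picturing for the aiming part - just a sketch, and it assumes the tracking code already hands back the marker's position normalized to the camera frame (so (0.5, 0.5) means the phone is pointed straight at the "sensor bar"):

```swift
import CoreGraphics

/// Sketch: turn the tracked marker's normalized position in the camera frame into aim angles.
/// (0.5, 0.5) means the marker is dead center, i.e. the phone points straight at the "sensor bar".
struct AimMapper {
    /// How far (in degrees) the view can swing when the marker reaches the edge of the frame.
    var maxYawDegrees: CGFloat = 30
    var maxPitchDegrees: CGFloat = 20

    func aim(for marker: CGPoint) -> (yaw: CGFloat, pitch: CGFloat) {
        // Offset from the center of the frame, in the range -0.5...0.5.
        let dx = marker.x - 0.5
        let dy = marker.y - 0.5
        // Panning the phone left makes the marker appear to move right, so flip the sign
        // so the aim follows the phone, then scale to the chosen angular range.
        return (yaw: -dx * 2 * maxYawDegrees, pitch: dy * 2 * maxPitchDegrees)
    }
}
```

Since the marker's position is absolute in every frame, there's nothing to drift and nothing to re-zero - same idea as the Wii's IR pointer.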
 
Pokemon Go?

I don't think Pokemon Go uses the camera for aiming... Pretty sure it just uses the compass to decide where to place the Pokemon along the X axis, and then I think it might somehow use shadows to decide where flat ground is to determine the Y axis.

Anyways, initial tests using the webcam on my laptop seem promising. Compiling and installing from my iMac to my iPhone is ludicrously slow, so I haven't tried it there yet. I'll have to soon - there's no point in moving forward with the idea if the iPhone can't process the camera input fast enough. Given what I've seen other people do (e.g., Snapchat filters), I don't think that should be a problem.
 
IR is much easier to track in the Wii's case. To do it with the camera, you would need some reference point and then track that reference point as the camera moves. Feature descriptors like SIFT come to mind, but they might be computationally heavy. Snapchat filters aren't computationally intense because they use a pre-built model that's been hand-constructed for matching to a human face.
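
If rolling your own descriptors turns out to be too heavy, one option worth a look (not something I've benchmarked for this, just a sketch) is the Vision framework's built-in object tracker - you seed it with a bounding box around the marker from the first frame and let it follow that region frame to frame:

```swift
import Vision
import CoreVideo

/// Sketch of using Vision's object tracker as the reference-point tracker instead of
/// hand-rolled feature descriptors. The initial bounding box (where the marker sits in
/// the first frame) is assumed to come from some separate detection/selection step.
final class MarkerTracker {
    private let handler = VNSequenceRequestHandler()
    private var lastObservation: VNDetectedObjectObservation

    init(initialBoundingBox: CGRect) {
        lastObservation = VNDetectedObjectObservation(boundingBox: initialBoundingBox)
    }

    /// Feed each camera frame in; returns the marker's center in normalized coordinates.
    func track(_ pixelBuffer: CVPixelBuffer) -> CGPoint? {
        let request = VNTrackObjectRequest(detectedObjectObservation: lastObservation)
        request.trackingLevel = .fast
        do {
            try handler.perform([request], on: pixelBuffer)
        } catch {
            return nil
        }
        guard let result = request.results?.first as? VNDetectedObjectObservation else {
            return nil
        }
        lastObservation = result
        return CGPoint(x: result.boundingBox.midX, y: result.boundingBox.midY)
    }
}
```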
 
I wrote some code for tracking red objects. Draw a red circle with whatever you have - a crayon, a colored pencil, a marker, a red pen - on any piece of paper... or just find a bold red object. Put it wherever you want your sensor bar.

It loops over all the pixels from the camera, calculates how red each one is to determine how "heavy" that pixel is, and uses that to decide where the center of the red circle (or other object) is. It seemed pretty steady and consistent when I tested it on my laptop.
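
In essence it's just a redness-weighted average of the pixel coordinates - a simplified sketch of the idea (the real thing reads the bytes straight out of the camera buffer, but this is the gist):

```swift
/// Simplified sketch: weight every pixel by "how red" it is and take the weighted average
/// of the coordinates. Assumes the frame is available as a tightly packed BGRA byte buffer.
func redCentroid(bgra: [UInt8], width: Int, height: Int) -> (x: Double, y: Double)? {
    var totalWeight = 0.0
    var sumX = 0.0
    var sumY = 0.0

    for y in 0..<height {
        for x in 0..<width {
            let i = (y * width + x) * 4
            let b = Double(bgra[i])
            let g = Double(bgra[i + 1])
            let r = Double(bgra[i + 2])
            // "Redness" = how much red exceeds the other channels; clamp at zero so
            // white/gray pixels (high in every channel) don't pull the centroid around.
            let weight = max(0, r - (g + b) / 2)
            totalWeight += weight
            sumX += weight * Double(x)
            sumY += weight * Double(y)
        }
    }

    // If nothing red is in view, there's no sensible centroid to report.
    guard totalWeight > 0 else { return nil }
    return (sumX / totalWeight, sumY / totalWeight)
}
```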

Xcode currently has issues with my Apple ID, so it won't let me install to my iPhone right now. I have an open support ticket with Apple... hopefully they'll find what's wrong and get back to me.

iOS development is a total PITA... I'm surprised Apple has ended up with so many apps when there are so many dumb hurdles. I never have to deal with this nonsense when testing on an Android device or any computer. It's easier to push code changes to a remote server than to my own iPhone.
 
Ahh okay, that's a very straightforward approach. Several years ago, for my graduate computer vision / image processing class, I wrote an iPad racing game that tracked two colored objects you held in your hands, so rotating your hands was like turning a steering wheel. It also tracked a few other gestures.

Two things I did might help you out. The first was calibration: you held one of the colored objects up to the iPad camera, then selected the color you wanted to track from the camera input on screen, and it took those RGB values, plus some variance, to create a thresholding range - to account for lighting, etc.

The second was using OpenGL ES to do the thresholding and then computing the centroid of the remaining pixels. It worked very well and wasn't computationally intensive at all - and that was on an iPad 2; today's iPhones and iPads are much more powerful.
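
From memory (not the original code), the calibration and thresholding boiled down to something like this - sample the color you tapped, widen it per channel by a variance, then take the centroid of whatever survives the threshold:

```swift
/// Sketch from memory, not the original code: build an accept range around the sampled
/// color, then keep only in-range pixels and take the centroid of what's left.
struct ColorRange {
    let rRange: ClosedRange<Int>
    let gRange: ClosedRange<Int>
    let bRange: ClosedRange<Int>

    /// `variance` widens the range per channel so modest lighting changes don't break tracking.
    init(r: Int, g: Int, b: Int, variance: Int = 30) {
        rRange = max(0, r - variance)...min(255, r + variance)
        gRange = max(0, g - variance)...min(255, g + variance)
        bRange = max(0, b - variance)...min(255, b + variance)
    }

    func contains(r: Int, g: Int, b: Int) -> Bool {
        rRange.contains(r) && gRange.contains(g) && bRange.contains(b)
    }
}

/// CPU version of the threshold + centroid pass (the original did the thresholding in an
/// OpenGL ES shader, but the math is the same). Assumes a tightly packed BGRA buffer.
func centroid(bgra: [UInt8], width: Int, height: Int, range: ColorRange) -> (x: Double, y: Double)? {
    var count = 0
    var sumX = 0
    var sumY = 0
    for y in 0..<height {
        for x in 0..<width {
            let i = (y * width + x) * 4
            if range.contains(r: Int(bgra[i + 2]), g: Int(bgra[i + 1]), b: Int(bgra[i])) {
                count += 1
                sumX += x
                sumY += y
            }
        }
    }
    guard count > 0 else { return nil }
    return (Double(sumX) / Double(count), Double(sumY) / Double(count))
}
```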
 
The problem is that 'real world' users will not always be in well-lit environments. Sometimes they'll be in the passenger seat of a car, on a plane, etc.

The list of possible environments is endless.

An accelerometer still works in all of these situations.

PS: recent iPhone models use an accelerometer + gyroscope combination chip that stays mostly accurate over time. Constant re-calibration hasn't been necessary on iPhones for the past few years.

It's an interesting idea but it has so many problems and offers so few real advantages that it's really not worth it.
 