
xArtx

macrumors 6502a
Original poster
Mar 30, 2012
Hi Guys,
It looks like the image data inside a UIImage object is not directly retrievable.
I'm wondering if it is possible, instead, to copy a UIImage into a bitmap context
and access the image data from there?

It sounds like something that should be possible, but doesn't appear to be.
Cheers, Art.

----------

Somewhere in there, this rotation function has copied a UIImage to a bitmap context indirectly:
Code:
UIImage *objectImage;

// 57.29578 degrees per radian, so this converts degrees to radians
CGImageRef imageRef = [self CGImageRotatedByAngle:[objectImage CGImage]
                                            angle:(360.0 - rotation) / 57.29578];


// dodgy rotation function
// Rotates imgRef about the centre of a fixed 1024x1024 bitmap context.
// The caller owns the returned CGImageRef and must release it.
- (CGImageRef)CGImageRotatedByAngle:(CGImageRef)imgRef angle:(CGFloat)angle
{
    const CGFloat originx = 512.0; // assuming rotation about the centre of the context
    const CGFloat originy = 512.0;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef bmContext = CGBitmapContextCreate(NULL, 1024, 1024, 8, 0,
                                                   colorSpace,
                                                   kCGImageAlphaPremultipliedFirst);
    CGColorSpaceRelease(colorSpace);
    // Rotate about (originx, originy) rather than the lower-left corner
    CGContextTranslateCTM(bmContext, +originx, +originy);
    CGContextRotateCTM(bmContext, angle);
    CGContextTranslateCTM(bmContext, -originx, -originy);
    CGContextDrawImage(bmContext, CGRectMake(0, 0, 1024, 1024), imgRef);

    CGImageRef rotatedImage = CGBitmapContextCreateImage(bmContext);
    CGContextRelease(bmContext);

    return rotatedImage;
}
 

xStep

macrumors 68020
Jan 28, 2003
Less lost in L.A.
Short answer regarding 'somewhere in there':
Code:
CGContextDrawImage(bmContext, CGRectMake(0,0,1024,1024),imgRef);

Really though, you should describe what it is that you are after.
 

xArtx

macrumors 6502a
Original poster
Mar 30, 2012
I have only saved to the photo album, but I assume (maybe incorrectly)
that if you read a photo from the photo album, you are going to get a UIImage.
I want pixel access to the UIImage's image data, so you can tell the colour value
of every pixel in the image.

And yes, I think you just have to free the CGImageRef when you're done with it.
So there is a major hole for any app to send or receive data to or from any other app,
without any Apple-approved transportation of data, using the photo album as a cache.
 

Duncan C

macrumors 6502a
Jan 21, 2008
Northern Virginia
Using the photo album is entirely optional. You are free to save image files into a directory like "Documents" inside your app's sandbox.

The images in the photo album are usually JPEGs. You can also save TIFFs, PNGs, and various other formats.

The normal system calls do indeed load image files as UIImage objects.

If you ask a UIImage for its CGImage, you get a read-only CGImage. You should NOT CFRelease the CGImage you get from reading the CGImage property of a UIImage.
 

Duncan C

macrumors 6502a
Jan 21, 2008
Northern Virginia
As for fetching the bitmap data from a UIImage, yes it's possible, and yes you use a bitmap context to do the fetching.

Erica Sadun's excellent "The Advanced iOS 6 Developer's Cookbook" includes a sample application (a "Recipe" in her terms) that shows exactly how to do it. The book "The Core iOS 6 Developer's Cookbook" also includes similar code, in the recipe that describes how to hit-test a bitmap to see if you tapped on a non-opaque part.

I highly recommend both her "Core iOS 6 Developer's Cookbook" and her advanced cookbook. They are both chock-full of very useful nuggets of tested, working code that shows how to do all kinds of great stuff.

In the case of getting pixels from a bitmap, the trick is to create a CGContextRef using the call CGBitmapContextCreate(), which takes a bunch of parameters, including a pointer to a destination memory buffer into which the data from drawing is written.

That call is a little tricky to use, since you need to specify all the settings to create the context. If you buy the book you'll get working code to do what you need.

Once you've created the context, you render your image into it, close the context, and then keep the data buffer you've created, which now holds your pixel data. You then index into the memory buffer to fetch the pixel data as needed.
 

xArtx

macrumors 6502a
Original poster
Mar 30, 2012
Using the photo album is entirely optional. You are free to save image files into a directory like "Documents" inside your app's sandbox.
But then the data could not be retrieved (theoretically) by other apps.
Also, JPEG compression would break the data, whereas PNG compression doesn't,
since PNG is lossless.

I have got it all working, maybe not the best way, and not all together yet.
It's not particularly useful anyway, but I find it interesting.
 

xArtx

macrumors 6502a
Original poster
Mar 30, 2012
Hide 512K of data in an apparently black image (where the kick13 array is the data):
Code:
    unsigned char bone;
    unsigned char btwo;
    unsigned char bthr;
    unsigned char bfor;

    int kickindex = 0;
    unsigned char kickbyte = 0;
    // Walk backwards through the pixel data, one 4-byte pixel at a time,
    // packing each payload byte into the low 2 bits of 4 channel bytes.
    for (int ix = 2181173; ix > 2181173 - (524288 * 4); ix = ix - 4) {

        kickbyte = kick13[kickindex];
        bone = kickbyte        & 0x03; // payload bits 0-1
        btwo = (kickbyte >> 2) & 0x03; // payload bits 2-3
        bthr = (kickbyte >> 4) & 0x03; // payload bits 4-5
        bfor = (kickbyte >> 6) & 0x03; // payload bits 6-7

        bmpfile[ix]     = bone;
        bmpfile[ix - 1] = btwo;
        bmpfile[ix - 2] = bthr;
        bmpfile[ix - 3] = bfor;
        kickindex++;
    }

    // Fill the rest of the pixel data (after the 54-byte BMP header)
    // with a near-black value.
    for (int ix = 54; ix < 84021; ix++) {
        bmpfile[ix] = 0x01;
    }


Save to the photo album (BMP files can be saved to the photo album too, but PNG is fine).

The next app loads the image from the photo album and retrieves the data without Apple knowing about it:

Code:
    // imageNamed: loads the PNG from the app bundle; loading from the
    // photo album instead would go through UIImagePickerController.
    CGImageRef sourceImage = [[UIImage imageNamed:@"Codedimage.png"] CGImage];
    CFDataRef theData = CGDataProviderCopyData(CGImageGetDataProvider(sourceImage));
    const UInt8 *pixelData = CFDataGetBytePtr(theData);
    long dataLength = (long)CFDataGetLength(theData);
    NSLog(@"ROM length: %ld", dataLength);
    int outbyter;
    int outbyteg;
    int outbyteb;
    int outbytex;
    int index = 0;
    int skip = 0;
    // orientation still needs to be fixed here
    // image needs to be flipped vertically
    while (index < 1000) {
        outbyter = pixelData[index]; index++; skip++;
        if (skip > 2) { skip = 0; index++; }
        outbyteg = pixelData[index]; index++; skip++;
        if (skip > 2) { skip = 0; index++; }
        outbyteb = pixelData[index]; index++; skip++;
        if (skip > 2) { skip = 0; index++; }
        outbytex = pixelData[index]; index++; skip++;
        if (skip > 2) { skip = 0; index++; }
        NSLog(@"ROM file: %i:%i:%i:%i index %i", outbyter, outbyteg, outbyteb, outbytex, index);
    }
    CFRelease(theData); // CGDataProviderCopyData returns an owned copy
 

Duncan C

macrumors 6502a
Jan 21, 2008
Northern Virginia
So this is a hack to pass large blocks of binary data between apps, and isn't about reading pixels at all?

 

xArtx

macrumors 6502a
Original poster
Mar 30, 2012
Yes and no. Like I said, I find it interesting, but I think your account would get the boot for it.
I think Apple have blocked images being received in email for that reason.

I also have my own reasons... it would be a long image sequence to post here:
http://forums.overclockers.com.au/showthread.php?t=1092935
I also want to try to identify types of birds.
Many of our own (Australian) birds have more than one distinctive colour (on the bird itself).
I don't need a camera to shoot a common dove...
but I think this can actually be done with the right Canon camera rather than an iPhone.
 

Duncan C

macrumors 6502a
Jan 21, 2008
Northern Virginia
I didn't make much sense of that. Are you trying to load pictures from the camera roll and evaluate the colors in the picture?

What does shooting with a Canon camera have to do with any of this discussion?

::shakes head in confusion::
 

xArtx

macrumors 6502a
Original poster
Mar 30, 2012
i.e. leave a powered device somewhere looking at a blank scene,
or at a bowerbird's bower, and have it trigger itself.
If I had a camera looking for blue movement (blue is not common in nature),
I would capture the bower being built.
But there are other birds that would be better to capture.
 

xStep

macrumors 68020
Jan 28, 2003
Less lost in L.A.
xArtx, it sounds to me like you want to create a time lapse only when the bird is in the nest, and you want to do this by analyzing the color in a frame: if some of it matches the bird's color, you'll take that frame.

I believe there is a WWDC video that does something like this. They tracked a red ball, if I recall right. It was likely one of the AV Foundation sessions. Of course this doesn't help if you want to port the code to one of your Canon cameras.

BTW, checked out your page and thought Laser Invaders was a cool app. I take it that the way you tracked the laser is different than what you want to do here.

Hey, maybe instead of looking for color, you write a generic bird recognition routine. ;)
 

xArtx

macrumors 6502a
Original poster
Mar 30, 2012
Thanks. For Laser Invaders I used object tracking by Brad Larson on this
platform; on the other two platforms, I could see pixel values and make things simpler
by just assuming it's a laser in a dark room and looking for a few bright pixels.
Brad's demo code requires the user to tap the screen to tell the phone that's
the object to track. By looking at that code, I could, in the end, look at pixels,
and was able to write the auto laser detection for the game.

If I capture that screen programmatically, I only get the low-resolution video preview.

I should look into whether it's possible to use the camera API
to start the Camera app and focus, and take a photo by itself.
I suppose this is possible, because there are camera timer and time-lapse apps.

Then I still need to reload the photo and do what I'm talking about from the beginning,
because the thing that decides to take the photo doesn't have time to look at the image closely,
and that is the thing that needs to notify me.
I think it is really the data-swap thing that's driving me with this thread.
Not that I actually want to do it, but image data in UIImage objects
is apparently hidden, and I'm a do-it-yourselfer who is always going to want
it in my toolbox for one thing or another.
 