
Ides
macrumors member · Original poster · Mar 27, 2012
I'm having problems converting a UIImage to a pure black and white format. So far I've looked at these two sites:
http://paulsolt.com/2010/09/ios-converting-uiimage-to-rgba8-bitmaps-and-back/
http://stackoverflow.com/questions/4401567/getting-a-black-and-white-uiimage-not-grayscale

I followed the first tutorial on converting an image to a bitmap and back. And I'm using code similar to the code on stackoverflow:

Code:
+ (UIImage *)blackAndWhiteImageForImage:(UIImage *)image {
    unsigned char *bitmap = [self convertImageToBitmap:image];

    // Walk the RGBA buffer one pixel (4 bytes) at a time and push each
    // pixel to pure black or pure white, leaving the alpha byte alone.
    int byteCount = (int)(image.size.width * image.size.height * 4);
    for (int i = 0; i < byteCount; i += 4) {
        if ((bitmap[i] + bitmap[i + 1] + bitmap[i + 2]) < (255 * 3 / 2)) {
            bitmap[i]     = 0;
            bitmap[i + 1] = 0;
            bitmap[i + 2] = 0;
        } else {
            bitmap[i]     = 255;
            bitmap[i + 1] = 255;
            bitmap[i + 2] = 255;
        }
    }

    image = [self convertBitsToImage:bitmap withSize:image.size];

    return image;
}

I know the code can work, because when I supply an image such as my app's default image it gets converted to black and white quite nicely. But if I use an image taken on the phone (say, a photo of a piece of paper with large printed text on it), the result comes out as an ugly soup of black and white, no matter how good I make the lighting and how clear the image is. Can anyone suggest anything?

Also, I have tried messing around with the darkness threshold, to no avail.
 

You should use a Core Image filter for this. It uses better algorithms, has much finer control, and will do it many times faster than you can.

Failing that, you'll need to read up on converting a color image to 1-bit black and white. It's a very destructive process.

Your algorithm is extremely "naive". I'd have to research better algorithms myself, but I do know enough to know that this is not the way to do it. At the simplest, you should weight the red, green, and blue values differently. Green counts a great deal more in human perception than red or blue. There is a standard weighting used for converting a color image to grayscale. Using that would be a step in the right direction.
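For reference, the "standard weighting" mentioned above is most likely the Rec. 601 luma formula (0.299 R + 0.587 G + 0.114 B). Below is a minimal sketch of the Core Image route, assuming iOS 5 or later and that the CIColorMonochrome filter is available on your deployment target; the method name is just an illustration, not part of any existing API, and you would still threshold the result yourself if you truly need 1-bit output.

Code:
#import <CoreImage/CoreImage.h>

// Sketch only: produces a grayscale version of the input via CIColorMonochrome.
+ (UIImage *)grayscaleImageUsingCoreImage:(UIImage *)image {
    CIImage *input = [CIImage imageWithCGImage:image.CGImage];

    CIFilter *mono = [CIFilter filterWithName:@"CIColorMonochrome"];
    [mono setValue:input forKey:kCIInputImageKey];
    [mono setValue:[CIColor colorWithRed:1.0 green:1.0 blue:1.0]
            forKey:kCIInputColorKey];
    [mono setValue:[NSNumber numberWithFloat:1.0f]
            forKey:kCIInputIntensityKey];

    CIContext *context = [CIContext contextWithOptions:nil];
    CIImage *output = [mono valueForKey:kCIOutputImageKey];
    CGImageRef cgImage = [context createCGImage:output fromRect:[output extent]];
    UIImage *result = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return result;
}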
 
What do you think black & white means? No one uses a one-bit image for anything anymore. Most likely what you want is greyscale.

You can generate a greyscale image by creating a bitmap context using a greyscale color space and drawing your RGB image into the bitmap context. Then retrieve a new image from the bitmap context. This will most likely give a better result than your own pixel manipulation.

If you do your own pixel manipulation, a common approach is to take a simple average of the RGB values and set all three components of the resulting pixel to that average.

Your code assumes an alpha channel is present without checking for it.
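Here is a minimal sketch of the grayscale bitmap-context approach described above, with error handling kept to a minimum; the method name is hypothetical.

Code:
+ (UIImage *)grayscaleImageByRedrawing:(UIImage *)image {
    CGImageRef cgImage = image.CGImage;
    size_t width  = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    // 8 bits per component, no alpha: Core Graphics converts the RGB
    // pixels to gray when the image is drawn into this context.
    CGColorSpaceRef graySpace = CGColorSpaceCreateDeviceGray();
    CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, 0,
                                                 graySpace,
                                                 (CGBitmapInfo)kCGImageAlphaNone);
    CGColorSpaceRelease(graySpace);
    if (!context) return nil;

    CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);
    CGImageRef grayImage = CGBitmapContextCreateImage(context);
    CGContextRelease(context);

    UIImage *result = [UIImage imageWithCGImage:grayImage
                                          scale:image.scale
                                    orientation:image.imageOrientation];
    CGImageRelease(grayImage);
    return result;
}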
 
Thanks for your replies, everyone. I got it working using an average threshold. I would also like to do some basic recognition of straight lines in the photo. I've downloaded OpenCV, but I don't understand how to use it to find straight lines; I understand the technique is called a Hough transform. Can anyone recommend a tutorial?

Edit: I have spent approximately 3 hours attempting to include the OpenCV files in my project and it will not work. I keep getting an error saying "statement-expressions are allowed only inside function". I have read in lots of places that this is solved by simply including the OpenCV headers before any other header file, but that is not working for me. I also tried including the headers at the top of my prefix header, which does not work either. And yes, I have converted my source files from .m to .mm.
 
Why do you want 1-bit B&W and not grayscale?


Phoney has a good point. Why are you trying to create a 1-bit image? Those are hardly ever used any more. The results are just too crude. Grayscale images are much, much better looking.
 
FWIW binary images are often used in machine vision problems like object recognition.

A typical use would be running a feature detector (e.g. an edge or corner detector) and then thresholding the resulting image to find locations with strong responses.
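To tie that back to the Hough transform question above: a rough sketch of such a pipeline in a .mm file might look like the following, assuming OpenCV 2.x, that the photo has already been converted to a cv::Mat (the UIImage-to-cv::Mat conversion is not shown), and that the threshold values are placeholders you will need to tune.

Code:
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

// Sketch only: edge-detect, then run a probabilistic Hough transform.
static std::vector<cv::Vec4i> detectLines(const cv::Mat &bgr)
{
    cv::Mat gray, edges;
    cv::cvtColor(bgr, gray, CV_BGR2GRAY);   // color -> grayscale
    cv::Canny(gray, edges, 50, 150);        // binary edge map (the thresholded image)

    // Each result is (x1, y1, x2, y2) in pixel coordinates.
    // Parameters: rho = 1 px, theta = 1 degree, 80 votes,
    // minimum line length 30 px, maximum gap 10 px.
    std::vector<cv::Vec4i> lines;
    cv::HoughLinesP(edges, lines, 1, CV_PI / 180, 80, 30, 10);
    return lines;
}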
 