Convert a UIImage to Black and White

Discussion in 'iOS Programming' started by Ides, Aug 25, 2012.

  1. macrumors member

    Mar 27, 2012
    I'm having problems converting a UIImage to a pure black and white format. So far I've looked at these two sites:

    I followed the first tutorial on converting an image to a bitmap and back. And I'm using code similar to the code on stackoverflow:

    +(UIImage *)blackAndWhiteImageForImage:(UIImage *)image {
        unsigned char *bitmap = [self convertImageToBitmap:image];
        for (int i = 0; i < image.size.width * image.size.height * 4; i += 4) {
            if ((bitmap[i] + bitmap[i + 1] + bitmap[i + 2]) < (255 * 3 / 2)) {
                bitmap[i]     = 0;
                bitmap[i + 1] = 0;
                bitmap[i + 2] = 0;
            } else {
                bitmap[i]     = 255;
                bitmap[i + 1] = 255;
                bitmap[i + 2] = 255;
            }
        }
        image = [self convertBitsToImage:bitmap withSize:image.size];
        return image;
    }
    I know this code can work because when I supply an image such as my app's default image it gets converted to black and white quite nicely. But if I use an image taken on the phone (an image of, say, a paper with large printed text on it) the result comes out as an ugly soup of black and white, no matter how good I make the lighting and how clear the image is. Can anyone suggest anything?

    Also, I have tried messing around with the darkness threshold, to no avail.
  2. Duncan C, Aug 26, 2012
    Last edited by a moderator: Aug 26, 2012

    macrumors 6502a

    Duncan C

    Jan 21, 2008
    Northern Virginia
    You should use a Core Image filter for this. It uses better algorithms, has much finer control, and will do it many times faster than you can.

    Failing that, you'll need to read up on converting a color image to 1-bit black and white. It's a very destructive process.

    Your algorithm is extremely "naive". I'd have to research better algorithms myself, but I do know enough to know that this is not the way to do it. At the simplest, you should weight the red, green, and blue values differently. Green counts a great deal more in human perception than red or blue. There is a standard weighting used for converting a color image to grayscale. Using that would be a step in the right direction.
  3. macrumors 68030


    Sep 2, 2008
    What do you think black & white means? No one uses a one-bit image for anything anymore. Most likely what you want is greyscale.

    You can generate a greyscale image by creating a bitmap context using a greyscale color space and drawing your RGB image into the bitmap context. Then retrieve a new image from the bitmap context. This will most likely give a better result than your own pixel manipulation.

    If you do your own pixel manipulation a simple average of the rgb values and then setting the resulting pixel to that average for all three components would be a common way to do this.

    Your code assumes an alpha channel is present without checking for it.
  4. macrumors member


    Aug 6, 2012

    This. Images taken from the camera won't have an alpha channel.
  5. Ides, Aug 26, 2012
    Last edited: Aug 26, 2012

    thread starter macrumors member

    Mar 27, 2012
    Thanks for your replies everyone. I got it working using an average threshold. I would also like to do some basic recognition of straight lines in the photo. I've downloaded OpenCV but I don't understand how to use it to find straight lines. I understand it's called a Hough transform. Can anyone recommend a tutorial?

    Edit: I have spent approximately 3 hours attempting to include the OpenCV headers in my project and it will not compile. I keep getting an error saying "statement-expressions are allowed only inside function". I have read in lots of places that this is solved by simply including the OpenCV headers before any other header file, but that is not working for me. I also tried including the headers at the top of my prefix header, which also does not work. And yes, I have converted my source files from .m to .mm.
  6. macrumors 6502a

    Duncan C

    Jan 21, 2008
    Northern Virginia
    Why do you want 1-bit B&W and not grayscale?

    Phoney has a good point. Why are you trying to create a 1-bit image? Those are hardly ever used any more. The results are just too crude. Grayscale images are much, much better looking.
  7. macrumors member


    Aug 6, 2012
    FWIW binary images are often used in machine vision problems like object recognition.

    A typical use would be running a feature detector (e.g. an edge or corner detector) and then thresholding the resulting image to find locations with strong responses.
