Help with 16bit images and CGBitmapContext

Discussion in 'Mac Programming' started by Dr144, Oct 20, 2010.

  1. Dr144 macrumors newbie


    Oct 6, 2010
    I'd LOVE to know what I'm doing wrong here. I'm a bit of a newbie with CGImageRefs so any advice would help.

    I'm trying to create a bitmap image whose pixel values are a weighted sum of the pixels from another bitmap, where both bitmaps are 16 bits per channel. For some reason I had no trouble getting this to work with 8-bit images, but it fails miserably with 16-bit. My guess is that I'm just not setting things up correctly. I've tried using CGFloats, floats and UInt16s as the data types, but nothing has worked. The input image has no alpha channel. The output image I get looks like colored snow.

    relevant stuff from the header:

    UInt16 *inBaseAddress;
    UInt16 *outBaseAddress;
    CGFloat inAlpha[5];
    CGFloat inRed[5];
    CGFloat inGreen[5];
    CGFloat inBlue[5];
    CGFloat alphaSum, redSum, greenSum, blueSum;
    int shifts[5];
    CGFloat weight[5];
    CGFloat weightSum;
    I create the context for the input bitmap (a CGImageRef created with CGImageSourceCreateImageAtIndex(source, 0, NULL)) using:

    size_t width  = CGImageGetWidth(inBitmap);
    size_t height = CGImageGetHeight(inBitmap);
    size_t bitmapBitsPerComponent = CGImageGetBitsPerComponent(inBitmap);
    size_t bitmapBytesPerRow = (width * 4 * bitmapBitsPerComponent / 8);
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(inBitmap);
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaNoneSkipLast;
    CGContextRef inContext = CGBitmapContextCreate(NULL,
                                                   width,
                                                   height,
                                                   bitmapBitsPerComponent,
                                                   bitmapBytesPerRow,
                                                   colorSpace,
                                                   bitmapInfo);
    The context for the output bitmap is created in the same way. I draw the inBitmap into the inContext using:

    CGRect rect = {{0,0},{width,height}}; 
    CGContextDrawImage(inContext, rect, inBitmap);
    Then I initialize the inBaseAddress and outBaseAddress like so:

    inBaseAddress = CGBitmapContextGetData(inContext);
    outBaseAddress = CGBitmapContextGetData(outContext);
    Then I fill the outBaseAddress with values from the inBaseAddress:

    for (n = 0; n < 5; n++) {
        inRed[n]   = inBaseAddress[inSpot + 0 + shifts[n]];
        inGreen[n] = inBaseAddress[inSpot + 1 + shifts[n]];
        inBlue[n]  = inBaseAddress[inSpot + 2 + shifts[n]];
        inAlpha[n] = inBaseAddress[inSpot + 3 + shifts[n]];
    }
    redSum   = 0.0;
    greenSum = 0.0;
    blueSum  = 0.0;
    alphaSum = 0.0;
    for (n = 0; n < 5; n++) {
        redSum   += inRed[n] * weight[n];
        greenSum += inGreen[n] * weight[n];
        blueSum  += inBlue[n] * weight[n];
        alphaSum += inAlpha[n] * weight[n];
    }
    outBaseAddress[outSpot + 0] = (UInt16)roundf(redSum);
    outBaseAddress[outSpot + 1] = (UInt16)roundf(greenSum);
    outBaseAddress[outSpot + 2] = (UInt16)roundf(blueSum);
    outBaseAddress[outSpot + 3] = (UInt16)roundf(alphaSum);

    As a simple check I've tried:

    outBaseAddress[outSpot + 0] = inBaseAddress[inSpot + 0];
    outBaseAddress[outSpot + 1] = inBaseAddress[inSpot + 1];
    outBaseAddress[outSpot + 2] = inBaseAddress[inSpot + 2];
    outBaseAddress[outSpot + 3] = inBaseAddress[inSpot + 3];
    which works and at least means that the contexts and pointers to the bitmap data are working.

    Thanks for any input. This has been pretty frustrating since it worked just fine with 8bit images.
  2. jared_kipe macrumors 68030


    Dec 8, 2003
    If there is no alpha channel, why are you stepping around it? It seems that this code still uses it.

    Secondly, CGFloat is a REALLY bad type to try to use for this, since its size changes based on the architecture: CGFloat is 32 bits on i386 and 64 bits on x86_64.

    Thirdly, I'm no expert, but in my head I've always imagined an 8-bit bitmapped image as having 8 bits per channel, so an unsigned char (1 byte, 0-255).
    So I'd assume a 16-bit image has 16 bits per channel, so an unsigned short (2 bytes, 0-65535).
    In EITHER case, a CGFloat would be too big, so the offsets would be looking at random data (not to mention improperly formatted data).

    So what was the "type" you were using when it was working for 8 bit images?
  3. Dr144 thread starter macrumors newbie


    Oct 6, 2010
    I tried several different types. For the 8-bit images I was just using UInt8s. When that didn't work for 16-bit (obviously) I tried UInt16s. Those didn't work, so I tried floats and CGFloats.

    I just tried setting inBaseAddress, outBaseAddress and inRed, inGreen, inBlue and inAlpha to shorts (and changed the others to floats, casting inRed etc to floats before computing the weighted sums) and still get colored snow.

    The reason for still using the alpha channel is that I just want to keep it general so that it will work for images with alpha.
  4. chown33 macrumors 604

    Aug 9, 2009
    Sailing beyond the sunset
    I'd be looking at a few things, all of which involve manually verifying expected values.

    First, NSLog or printf the actual values of bitmapBitsPerComponent, bitmapBytesPerRow, width, height, etc.

    Second, dump the first 20 pixels or so of the inContext's data as hex, and confirm they are what's expected from a simple test image. As a test image, I'd use something like first pixel column pure black, 2nd col pure red, 3rd col pure green, and so on: pure blue, pure white, cyan, yellow, etc. Any reasonably recognizable pattern that varies column by column would work.

    Third, I'd look at the output display code and parameters, to make sure it matches what the data is. You don't say how you're displaying output, but if it's by writing a file and opening it in another program, make sure you're writing the file properly.

    Fourth, you didn't post any initializers for your shifts and weight arrays. And you haven't done anything with the floating-point values to clip them to the proper range. If a CGFloat value exceeds 65535, you will get wrap-around (i.e. modulo 65536) which frequently looks like snow.

    I suspect you're going to find something isn't what you expect it to be. For example, byte-ordering of the 16-bit values. If the order doesn't match the endianness of the machine it's on, then simple assignment will work (like your simple check), but any calculations on the values will not. If your simple check was to add 10 to each pixel's component, then that would be a better quick-check than simple assignment. Or you could divide each component by 4, then multiply by 4 (using shifts, if desired). Or mask the low byte to all-1's. Or countless other single-component transformations that have a predictable yet simple visual effect.
  5. Dr144 thread starter macrumors newbie


    Oct 6, 2010
    OK, I've got it figured out. I needed to set the bitmapInfo to kCGBitmapByteOrder16Little for the 16-bit images and to kCGBitmapByteOrder32Little for the 8-bit images. I'm a bit surprised by this actually, as I would have expected it to be the other way around (32Little for 16-bit and 16Little for 8-bit).

    I also needed to declare the pointers to the bitmap data as UInt8* and UInt16* respectively. It also appears that I have to include an alpha channel in the bitmapContext. I'm not sure why, but the context returned was always nil without it.

    Thanks, y'all!
