Hi!
Ran into an issue with transparent-background PNGs when drawing them into a pixel array through a CGBitmapContext. This was the "faulty" code:
Code:
-(id) initWithCGImage: (CGImageRef) image
{
    NSInteger bytes;
    CGColorSpaceRef colorSpace;

    self = [super init];

    width = CGImageGetWidth(image);
    height = CGImageGetHeight(image);
    bytesPerChannel = 4;

    colorSpace = CGColorSpaceCreateDeviceRGB();
    bytes = width * height * sizeof(BRUnifiedPixelRGBA);
    imagePixels = malloc(bytes);
    //[self resetToSolidColor: 0x00000000];

    CGContextRef context = CGBitmapContextCreate(imagePixels,
                                                 width,
                                                 height,
                                                 8,
                                                 width * bytesPerChannel,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextDrawImage(context,
                       CGRectMake(0.0f, 0.0f, width, height),
                       image);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    return self;
}
This ended up producing artifacts (in all probability from previously used, discarded memory regions) that basically looked like this:

(Please keep the laughter and comments about my graphical skills to yourselves. Consider it an honor to have seen a very, very early screenshot.)
Now, my question: short of resetting the entire pixel buffer after allocation (either with memset() or with a for loop, which -(void) resetToSolidColor: basically is), is there a parameter I can pass to CGBitmapContextCreate to tell it not to do alpha blending and just squash whatever is there?
Reading through CGImageAlphaInfo, I'm not quite sure I understand the premultiplied vs., well... not-premultiplied bit. Is that what I'm looking for, or is there really no way to do this short of resetting the memory myself?
(BTW, yes, I realise the bytesPerChannel variable is very poorly named. There is one byte per channel and four channels per pixel, so it's really bytes per pixel... whatever, call it laziness on my part. This code was written a very long time ago and I'm only now using it with transparent PNGs.)