drawing part of an image into a CGContext

Discussion in 'iOS Programming' started by stirfie, Mar 11, 2012.

  1. stirfie, Mar 11, 2012
    Last edited: Mar 11, 2012

    stirfie macrumors newbie


    Feb 11, 2012
    Western Australia
    Hi All

    I just need some advice. I am using the function below to determine if the area in a uiview that I touched contains part of the image or not. The idea is to create a context that is 6 x 6 , the point that was touched is in the middle of this. I am then using a loop to determine if any of the alpha values in the 36 pixels are greater than 0.00.

    I had it working with 1 pixel, but for a fine line it is hard to touch part of the line ( hence allowing a larger area to test ).

    If the function returns true, the UIView is allowed to be dragged. With the code below, even if the UIView is empty some of the alpha values are 0.00 and some are above 0.
    Can anybody see where I am going wrong?

        // point is the CGPoint to test
        //create a copy of the on screen image to check for alpha
        //quartzcore framework must be #indluded.
        [self.layer renderInContext:UIGraphicsGetCurrentContext()];
        UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
        // end of create copy of onscreen image
        // create the context to draw the single pixel into
        BOOL colorMatch = NO;
        unsigned char pixel[36];
        CGContextRef context = CGBitmapContextCreate(pixel, 
                                                     6, 6, 8, 6, NULL,
        [viewImage  drawAtPoint:CGPointMake(-touchPoint.x+3, -touchPoint.y+3)];
        CGFloat alpha ;
        for (int i =0; i<36; i++) {
            alpha = (pixel[i]/255.0);
            NSLog(@"alpha  %f",alpha);
            if (alpha > 0.00) {
                colorMatch = YES;
        NSLog(@"end of loop");
        return colorMatch; //return the result

Share This Page