
Dr144 · macrumors newbie · Original poster · Oct 6, 2010
Does anyone know what the limitations are on image size with custom CIFilters? I've created a filter that performs as expected when images are up to 2 megapixels, but produces very strange results when they are larger. I've tested this both in my Cocoa app and in Quartz Composer. The filter I've developed is a geometry-type distortion filter that (I think) requires an ROI and a DOD spanning the entire input image. I created this filter for remapping panoramic images, so I'd like it to work on very large (50-100 megapixel) images.

As a simple test, consider the following CIFilter (it can be used in Quartz Composer), which simply translates the image so that its lower-left corner ends up at the center (I know this could be done with an affine transform, but I need to perform such an operation inside a more complex filter). This filter works as expected when the image is 2000x1000 but produces odd results when the input image is 4000x2000 pixels: either the translation does not move the corner exactly to the center, or the output image disappears entirely. I've noticed other odd problems with more complicated filters on large images, but I think this simple filter illustrates my issue and can be replicated in Quartz Composer.

kernel vec4 equidistantProjection(sampler src, __color color)
{
    // Position of the current output pixel, in sampler space.
    vec2 coordinate = samplerCoord(src);
    vec2 result;
    vec4 outputImage;

    // Shift so the lower-left corner of the image lands at the center.
    result.x = (coordinate.x - samplerSize(src).x / 2.0);
    result.y = (coordinate.y - samplerSize(src).y / 2.0);

    outputImage = unpremultiply(sample(src, result));

    return premultiply(outputImage);
}

The same odd behavior appears when using the working-space coordinates instead of the sampler coordinates, but in this case the error occurs for images of size 2000x1000 while images of size 1000x500 work fine:

kernel vec4 equidistantProjection(sampler src, __color color, vec2 destinationDimensions)
{
    // Position of the current output pixel, in working space.
    vec2 coordinate = destCoord();
    vec2 result;
    vec4 outputImage;

    result.x = (coordinate.x - destinationDimensions.x / 2.0);
    result.y = (coordinate.y - destinationDimensions.y / 2.0);

    // Both variants fail the same way; the second (which converts the
    // working-space coordinate to sampler space) overwrites the first.
    outputImage = unpremultiply(sample(src, result));
    outputImage = unpremultiply(sample(src, samplerTransform(src, result)));

    return premultiply(outputImage);
}

For reference, I have added the following to the Objective-C portion of my filter's - (CIImage *)outputImage method to set the DOD to be the entire input image.

- (CIImage *)outputImage
{
    CISampler *src = [CISampler samplerWithImage:inputImage];

    // DOD: the full extent of the input image, as [x, y, width, height].
    NSArray *outputExtent = [NSArray arrayWithObjects:
        [NSNumber numberWithInt:0],
        [NSNumber numberWithInt:0],
        [NSNumber numberWithFloat:[inputImage extent].size.width],
        [NSNumber numberWithFloat:[inputImage extent].size.height], nil];

    return [self apply:filterKernel, src, inputColor, zoom, viewBounds, inputOrigin,
        kCIApplyOptionDefinition, [src definition],
        kCIApplyOptionExtent, outputExtent, nil];
}

Additionally, I added the following method to set the ROI, which I register in my - (id)init method with: [filterKernel setROISelector:@selector(regionOf:destRect:userInfo:)];

- (CGRect)regionOf:(int)samplerIndex destRect:(CGRect)r userInfo:(id)obj
{
    // ROI: just pass the destination rect through unchanged.
    return r;
}

Any help or advice on this issue would be greatly appreciated. I'm sure that CIFilters can work with larger images, as I've used CIBumpDistortion on greater-than-50-megapixel images, so I must be doing something wrong. Any ideas?
 
Did you get it working? I looked over it yesterday, but don't have any experience programming in Core Image.

I bet the code is correct, but one thing that jumped out at me was the odd naming convention (for Apple), since things with Size in the name usually have width and height members instead of x and y.
 
No luck getting it working yet. I tested on a Mac Pro too (I work mostly on a MBP) to make sure it wasn't just my machine, and it fails the same way. I'm beginning to wonder if I need to code this up in OpenCL and move away from CIFilters (they're just so darn convenient to use!).

The "size" thing is actually more of an openGL thing than an Apple thing. samplerSize returns a vec2 which has components .x and .y.

I did some experiments to try to figure out what's going on, and it seems that when an image reaches a certain size, the values returned from samplerSize become a multiple of 2. I managed to find a way to predict how this works, but then my method breaks once I add more to the CIKernel's algorithm. Frustrating. There should be some documentation about this somewhere, but I can't find any.

I wish I could output to the console from within a CIKernel as it would make debugging much easier.
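One possible workaround: render a small patch of the filter's output into a bitmap and NSLog the bytes from the Objective-C side. A minimal sketch, assuming outputImage is the CIImage coming out of the filter (the helper name and the 1x1 patch size are just illustrative):

Code:
#import <QuartzCore/QuartzCore.h>

static void LogOutputPixel(CIImage *outputImage, CGFloat x, CGFloat y)
{
    // Render the single pixel at (x, y) of the filter's output into a
    // 1x1 RGBA bitmap so its values can be logged from the host side.
    unsigned char rgba[4] = {0, 0, 0, 0};
    CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
    CGContextRef cg = CGBitmapContextCreate(rgba, 1, 1, 8, 4, cs,
                                            kCGImageAlphaPremultipliedLast);
    CIContext *ci = [CIContext contextWithCGContext:cg options:nil];

    [ci drawImage:outputImage
           inRect:CGRectMake(0.0, 0.0, 1.0, 1.0)
         fromRect:CGRectMake(x, y, 1.0, 1.0)];

    NSLog(@"pixel (%g, %g) = %u %u %u %u",
          x, y, rgba[0], rgba[1], rgba[2], rgba[3]);

    CGContextRelease(cg);
    CGColorSpaceRelease(cs);
}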

I'll wait a while before committing to reworking this in OpenCL and see if anyone else has some ideas.

Thanks!
 
You might want to start a new project and make some simple sample code that demonstrates the problem and upload it. I don't really know where to begin but I'm pretty good at tinkering.
 
Ask and you shall receive! I've uploaded a very simple app that illustrates this problem. Build and run the app and open an image via File > Open to see it in action. If the image is sufficiently small, you should see the image in the view translated so that its lower-left corner has been moved to the center of the view. For larger images the translation becomes weird. You can resize the window/view and see how the weirdness changes.

Thanks in advance for any input you might have!
 

Attachments

  • TestCIFilter.zip (35.7 KB)
Well, I added a section of code to save the CIImage to a JPEG, which has been semi-useful so far. When it f's up, it seems to save an all-white image.

I think it is related to this little highlight from the Core Image Kernel Language documentation:
Code:
samplerCoord
varying vec2 samplerCoord (uniform sampler src)
Returns the position, in sampler space, of the sampler src that is associated with the current output pixel (that is, after any transformation matrix associated with src is applied). Sampler space refers to the coordinate space of what you are texturing from.
Note that if your source data is tiled, the sample coordinate will have an offset (dx/dy). You can convert a destination location to the sampler location using the samplerTransform function.

That, and some pages I found that seem to indicate that after a certain size the input image cannot fit in texture memory and becomes tiled. I believe the problem is that this code is not compatible with the "tiled" mode.

I've tried quite a lot of different things, mostly changing the way it calculates translateCoord (using samplerExtent(), destCoord(), samplerTransform()), and each time I'm able to replicate the desired behavior on small files, say 1000x1000, but not on large files, say 5000x5000.
 
I too have had the same issues with using destCoord(). Do you know if there is a way to know how it tiles and what to do with the tiles?
 
Confirmed Tiling problem!!

If you make your kernel like this:
Code:
	vec4 result;
	vec2 translateCoord;
	vec2 dest = destCoord();

	// Shift the destination coordinate by 10 pixels in each direction...
	translateCoord.x = dest.x - 10.0;
	translateCoord.y = dest.y - 10.0;

	// ...then convert it from working space to sampler space before sampling.
	translateCoord = samplerTransform(src, translateCoord);
	result = unpremultiply(sample(src, translateCoord));

	return premultiply(result);

you can actually SEE the tiling in the saved image (I couldn't see it in the windowed image).

This image was resampled to 5000x5000 pixels and then fed through that filter.
This is a screen grab of opening the FILE that was saved; there were no lines in the image the Cocoa app displayed to me.

EDIT: It dawns on me that my paraphrasing might make it not work, so I put my actual code in there.
 

Attachments

  • screen.png (209.5 KB)
I too have had the same issues with using destCoord(). Do you know if there is a way to know how it tiles and what to do with the tiles?

Yeah, you're gonna have to use the ROI function.

However, from what I've read, this example might never work, because the region of interest would always have to be the whole image (or at least on the same order), which is bounded by hardware. :eek:

For simple filters that actually filter (meaning they only need a couple of pixels around the destination pixel), the ROI function can just outset the rect by a couple of pixels to make sure each tile is supplied with a sample region big enough for every destination pixel.
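For instance, for a kernel that never samples more than one pixel away from the destination pixel, the ROI method could just pad the incoming rect; a minimal sketch (the 1-pixel pad is illustrative, use your kernel's actual reach):

Code:
- (CGRect)regionOf:(int)samplerIndex destRect:(CGRect)r userInfo:(id)obj
{
    // Negative inset grows the rect by one pixel on every side, so each
    // tile gets all the source pixels its destination pixels can sample.
    return CGRectInset(r, -1.0, -1.0);
}

For a whole-image remap like yours, though, the ROI would have to be the full image extent, and that's exactly where the hardware limit bites.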
 
That's what I was afraid of. Looks like it might be time to bite the bullet and learn to code up the filters I need in OpenCL. I guess I shouldn't have expected to be able to take the easy road.
 
Even in OpenCL, I think you'll be constrained by the on-board memory of the GPU. So I still wouldn't hold out much hope that you'll be able to process a monolithic 50-100 megapixel image without tiling it in some fashion.

Or I could be completely mistaken. It's been some time since I last looked at OpenCL.
 
I 100% agree. Both systems will obviously do some kind of breaking up, both for efficiency and due to hardware constraints.

The catch is that you cannot use an algorithm like this for affine transforms if you cannot fit the whole image into one giant allocation.

Why not describe what you were trying to accomplish when you found this limitation in the first place? Then I can look at ways around it.
 
Well, I'm thinking now that I'll just scale the image down appropriately for display to the screen and only worry about using the large images when saving to disk. Is there a way to tell Core Image to process the CIImage through the CIKernels using the CPU instead of the GPU? I would think so, since it's designed to have the CPU as a fallback, but I can't find any info on how to do it.
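EDIT: It looks like there's a context option for this. I haven't verified it on huge images, but passing kCIContextUseSoftwareRenderer when creating the CIContext is supposed to force Core Image's CPU renderer. A minimal sketch (the wrapper function is just illustrative):

Code:
#import <QuartzCore/QuartzCore.h>

static CIContext *CreateSoftwareCIContext(CGContextRef cgContext)
{
    // Ask Core Image for its software (CPU) renderer instead of the GPU,
    // which should sidestep the GPU texture-size limit (at some speed cost).
    NSDictionary *options =
        [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES]
                                    forKey:kCIContextUseSoftwareRenderer];
    return [CIContext contextWithCGContext:cgContext options:options];
}

For the scaled-down on-screen preview, CILanczosScaleTransform with an inputScale below 1.0 should handle the downsampling before the filter runs.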
 