Core Image Filters: Is it meant to be so slow?

Discussion in 'Mac Programming' started by Soulstorm, Nov 6, 2007.

  1. Soulstorm macrumors 68000


    Feb 1, 2005
    Perhaps you are getting tired of me starting new threads. However, I thought I should start a new topic, since this has nothing to do with memory management; it's more about performance and optimization methods.

    I am still building Image Filterizer as an exercise. However, I am not using the same approach as before, so you may need to redownload the project.

    I am using a filter in Core Image called Disk Blur. I apply the filter to the image with a value above 20, and it is SLOW. That's OK if it's a heavy filter. But when I try to scroll or resize the window, it is really slow as well, as if the same filter is being applied over and over again. Can you tell me why this is happening?

    I thought that when I applied the filter to the image and made the result my main image to be drawn, I avoided forcing the processor to reapply the filters, thus saving memory and processor resources. However, I see that this isn't the case.

    Here is the project. Any recommendations?

    Attached Files:

  2. kainjow Moderator emeritus


    Jun 15, 2000
    From the docs:

    So it'd probably be better to create an NSImage from the CIImage and draw that instead.
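    For reference, one way to do this (a sketch, assuming `filteredImage` is the CIImage coming out of the filter chain) is to wrap the CIImage in an NSCIImageRep:

    ```objc
    #import <Cocoa/Cocoa.h>
    #import <QuartzCore/QuartzCore.h>

    // Sketch: wrap a CIImage in an NSImage via NSCIImageRep.
    // Note that the rep renders its CIImage on demand, so drawing this
    // NSImage can still re-run the filter chain on every draw.
    NSCIImageRep *rep = [NSCIImageRep imageRepWithCIImage:filteredImage];
    NSImage *nsImage = [[[NSImage alloc] initWithSize:[rep size]] autorelease];
    [nsImage addRepresentation:rep];
    ```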
  3. Soulstorm thread starter macrumors 68000


    Feb 1, 2005
    Hm... I used Core Image directly because there isn't any clear connection between CIImage and NSImage. It seems I must find a way to create an NSImage object from a CIImage, and to do this the other way around as well...
  4. Soulstorm thread starter macrumors 68000


    Feb 1, 2005

    I used NSImage and it didn't make any difference... Am I doing something wrong? I loaded the file as an NSImage; then, in each filter, I used Core Image. Then I converted the resulting CIImage back to an NSImage and displayed that in the NSImageView. However, I see no change in performance.

    Attached Files:

  5. kainjow Moderator emeritus


    Jun 15, 2000
  6. Soulstorm thread starter macrumors 68000


    Feb 1, 2005
    Hm... So let me get this straight.

    At first, I have an NSImage. I take that NSImage and convert it to a CIImage object in order to apply some filters. Then I need to make an NSBitmapImageRep from the CIImage object and add that representation to the NSImage object that will be displayed in the NSImageView? And I do that using NSGraphicsContext?
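    On Leopard there is a direct initializer for this step; a minimal sketch (assuming `filteredImage` is the CIImage output of the filters):

    ```objc
    // Sketch: render the CIImage once into a CPU-side bitmap.
    // -initWithCIImage: rasterizes the filter output here, so later
    // draws of `flattened` don't touch the filter chain again.
    NSBitmapImageRep *rep =
        [[[NSBitmapImageRep alloc] initWithCIImage:filteredImage] autorelease];
    NSImage *flattened = [[[NSImage alloc] initWithSize:[rep size]] autorelease];
    [flattened addRepresentation:rep];
    ```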
  7. cblackburn macrumors regular

    Jul 5, 2005
    London, UK
    The problem you have is that CIImages are calculated on the graphics card. Drawing them in an NSImageView then requires copying them from VRAM into an allocated chunk of RAM and drawing them back into VRAM to show them on screen. This is slow not only because there are multiple copy operations, but also because you are never supposed to do this, so it is not optimised: instead of doing the whole VRAM -> RAM transfer first and then RAM -> VRAM, it may move one bit at a time through each operation. Very slow indeed. Also, there is a bug in the Core Image framework where copying an image from VRAM to RAM leaks memory, a lot of memory, equivalent to the size of the image. I wrote a program that did this with video from the iSight and it leaked about 250MB per second. Bear this in mind if you do it.

    If you are only interested in showing it to the user, then keep it in VRAM and render it using an NSOpenGLView subclass (there is a good example here).

    If you want to do some other pixel-level alterations not using a CIFilter, then you are going to have to copy the data down into an NSBitmapImageRep, but beware of the bug mentioned above.
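    A rough sketch of the NSOpenGLView approach (the ivar names `ciContext` and `filteredImage` are assumptions; pixel-format setup and error handling are omitted):

    ```objc
    // Inside an NSOpenGLView subclass: draw the CIImage through a CIContext
    // bound to this view's OpenGL context, so the pixels stay in VRAM.
    - (void)drawRect:(NSRect)rect
    {
        [[self openGLContext] makeCurrentContext];
        if (ciContext == nil) {
            CGLContextObj cgl = CGLGetCurrentContext();
            ciContext = [[CIContext contextWithCGLContext:cgl
                                              pixelFormat:CGLGetPixelFormat(cgl)
                                                  options:nil] retain];
        }
        [ciContext drawImage:filteredImage
                     atPoint:CGPointZero
                    fromRect:[filteredImage extent]];
        [[self openGLContext] flushBuffer];
    }
    ```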


  8. Soulstorm thread starter macrumors 68000


    Feb 1, 2005
    Got it. Thanks a lot for the information, I will put it to good use. However, I have a question.

    Why would I make that move from VRAM into RAM? That copy will happen while applying the filter, but only once. After that, when resizing the window or moving the scroll view, only the NSImage is redrawn, and that is already in RAM. I am not calling anything that would require the graphics card to intervene.

    So why does resizing take so much processor time? Is it because of the bug you mentioned, that a memory leak has been created? And does that bug exist in Leopard?
  9. WeeBull macrumors newbie

    Aug 18, 2004
    OK, Core Image works by applying the filters to the image on the GPU (by the way, you haven't mentioned what GPU you have; that will make a major difference to the speed).

    Speed is retained by keeping the information on the GPU and in its VRAM. Copying data back is slow, as Chris says, especially if the image is large (and it sounds like it must be, since you're scrolling around it).

    Creating an NSBitmapImageRep, or drawing to an NSImageView, will create a host copy (i.e. one on the CPU side). This is because these are not GPU-based class families. NSImage (I think) has been extended to be able to contain CIImages, so creating an NSImage from a CIImage probably doesn't have a high cost in itself, but you only do it in order to do something like draw the image in an NSImageView, so the cost comes somewhere in that chain of events.

    By keeping the CIImage, and using an NSOpenGLView, everything stays on the GPU, so no speed cost, and no memory leak.

    Two other points:

    1) Large images will be slower (obvious, but bear it in mind)
    2) Changing inputs requires things to be recalculated, and defeats caching.

    The second one is important.

    Say you've got your image going through a blur. If you call [blurFilter setValue:x forKey:@"blurRadius"] in drawRect:, then every time the image is redisplayed the filter will re-blur the whole image. If you don't, it can reuse the image from the last time round.

    All you want in your drawRect: method is the draw call and no other messing with the filter chain. If you're doing video or animation, changing the inputs will cause slowdown, but it should still happen elsewhere in your code so it's only done when necessary.
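    As a sketch of that split (the ivar names `blurFilter` and `cachedOutput` are assumptions; for CIDiskBlur the radius key is @"inputRadius"):

    ```objc
    // Change the filter input only when the control moves, and cache
    // the resulting CIImage so redisplay never re-runs the filter.
    - (void)setBlurRadius:(float)radius
    {
        [blurFilter setValue:[NSNumber numberWithFloat:radius]
                      forKey:@"inputRadius"];
        [cachedOutput release];
        cachedOutput = [[blurFilter valueForKey:@"outputImage"] retain];
        [self setNeedsDisplay:YES];
    }

    // drawRect: only draws; it never touches the filter chain.
    - (void)drawRect:(NSRect)rect
    {
        [cachedOutput drawAtPoint:NSZeroPoint
                         fromRect:NSRectFromCGRect([cachedOutput extent])
                        operation:NSCompositeSourceOver
                         fraction:1.0];
    }
    ```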

    In my (limited) experience this is far bigger than host<->GPU transfers.
  10. Soulstorm thread starter macrumors 68000


    Feb 1, 2005
    The image I test it on is only 96 kbytes. And my system config is in my signature.

    I am using this drawing method. I only draw the image:

    	NSLog(@"redrawing now...");
    	[theImage drawInRect:[self bounds] fromRect:[self bounds] operation:NSCompositeSourceOver fraction:1.0];
    I only apply the filter once, and I create an NSImage from the result. I then draw that image in an NSImageView. No matter how long it took for the resulting NSImage to be created, those calculations should not have to be done again when displaying the image in the NSImageView. That's why I can't understand the resulting speed.

    I didn't have time to get involved with OpenGL in Cocoa in my project; when I do, I will convert my application to use an NSOpenGLView instead of an NSImageView, to see if it handles the allocated memory more properly.

    Btw, this is a very serious bug. How come Apple has not fixed this memory leak?
  11. Krevnik macrumors 68040


    Sep 8, 2003
    You haven't filed many bugs with Apple, have you? :)

    I have filed bugs for crashes caused by APIs Apple exposed in 10.5 that weren't properly guarded (a malformed search predicate hard-locked an app back in the WWDC seed)... and they are still open issues.
  12. WeeBull macrumors newbie

    Aug 18, 2004
    I missed the config; it's exactly the same as mine. But when I was talking about image size, I meant resolution rather than file size.

    OK, that looks fairly minimal. The only thing I'd try is setting the operation to NSCompositeCopy, unless you are actually blending one image over another.

    So theImage is an NSImage there right?

    Agreed, if I understand you correctly, and all that's happening is you're scrolling an NSImageView with an NSImage inside it then CoreImage isn't the problem.

    Might be time to get Shark out and profile your app. It's in /Developer/Applications/Performance Tools. Start it, run a debug build of your app, hit the start button in Shark, and then make your app do its slow thing. After 30 seconds Shark will stop recording, analyze for a bit, and hopefully tell you where you're spending your time.

    Shark's a really good tool, and worth learning how to use. Sometimes it's not the thing that you expect that's slowing you down.

    I personally hadn't noticed it. A lot of the time you don't need to create host copies of images, so there's no problem. It shouldn't be what's causing your problems, since you're only doing one conversion.
