Go Back   MacRumors Forums > Apple Systems and Services > Programming > Mac Programming

Old Nov 6, 2007, 11:52 AM   #1
Soulstorm
macrumors 68000
 
 
Join Date: Feb 2005
Core Image Filters: Is it meant to be so slow?

Perhaps you are getting tired of me starting new threads. However, I thought I should start a new topic, since this has nothing to do with memory management; it's more about performance and optimization.

I am still building Image Filterizer as an exercise. However, I am not using the same approach as before, so you may need to redownload the project.

I am using a filter in Core Image called Disk Blur. I apply the filter to the image with a value above 20, and it is SLOW. That's OK if it's a heavy filter. But when I try to scroll or resize the window, it is really slow as well, as if it applies the same filter over and over again. Can you tell me why this is happening?

I thought that by applying the filter to the image once and making the result my main image to be drawn, I would avoid forcing the processor to reapply the filter, thus saving memory and processor resources. However, I see that this isn't the case.

Here is the project. Any recommendations?
Attached Files
File Type: zip Image Filterizer.zip (53.1 KB, 107 views)
Old Nov 6, 2007, 12:13 PM   #2
kainjow
Moderator emeritus
 
 
Join Date: Jun 2000
From the docs:

Quote:
Although a CIImage object has image data associated with it, it is not an image. You can think of a CIImage object as an image “recipe.” A CIImage object has all the information necessary to produce an image, but Core Image doesn’t actually render an image until it is told to do so. This “lazy evaluation” method allows Core Image to operate as efficiently as possible.
So it'd probably be better to create an NSImage from the CIImage and draw that instead.
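As a minimal sketch of that suggestion (where `filter` stands in for your configured CIFilter), one route goes through NSCIImageRep. Note that an NSCIImageRep still defers rendering until the NSImage is actually drawn:

```objc
// Wrap the CIFilter's output in an NSCIImageRep and hand it to an NSImage.
// Rendering is still deferred until the NSImage is drawn.
CIImage *filteredImage = [filter valueForKey:@"outputImage"];
CGRect extent = [filteredImage extent];

NSCIImageRep *rep = [NSCIImageRep imageRepWithCIImage:filteredImage];
NSImage *image = [[[NSImage alloc] initWithSize:
                      NSMakeSize(extent.size.width, extent.size.height)] autorelease];
[image addRepresentation:rep];
```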
Old Nov 6, 2007, 12:52 PM   #3
Soulstorm
Thread Starter
macrumors 68000
 
 
Join Date: Feb 2005
Quote:
Originally Posted by kainjow View Post
So it'd probably be better to create an NSImage from the CIImage and draw that instead.
Hm... I used Core Image directly because there isn't any obvious bridge between CIImage and NSImage. It seems I must find a way to create an NSImage object from a CIImage, and the other way around...
Old Nov 8, 2007, 11:10 AM   #4
Soulstorm
Thread Starter
macrumors 68000
 
 
Join Date: Feb 2005
Unfortunately...

I used NSImage and it didn't make any difference... Am I doing something wrong? I loaded the file as an NSImage; then, in each filter, I used Core Image; then I converted the resulting CIImage back to an NSImage and displayed that in the NSImageView. However, I see no change in performance.
Attached Files
File Type: zip Image Filterizer.zip (55.1 KB, 45 views)
Old Nov 8, 2007, 11:18 AM   #5
kainjow
Moderator emeritus
 
 
Join Date: Jun 2000
You probably need to render it into an NSBitmapImageRep and add that to the NSImage. This can be done using NSGraphicsContext's graphicsContextWithBitmapImageRep: method. There is sample code for this here: http://developer.apple.com/samplecod...listing16.html
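Roughly along those lines (with `filteredImage` standing in for your filter's output CIImage), something like:

```objc
CGRect extent = [filteredImage extent];

// Allocate a bitmap big enough for the filtered image.
NSBitmapImageRep *bitmap = [[[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes:NULL
                  pixelsWide:extent.size.width
                  pixelsHigh:extent.size.height
               bitsPerSample:8
             samplesPerPixel:4
                    hasAlpha:YES
                    isPlanar:NO
              colorSpaceName:NSCalibratedRGBColorSpace
                 bytesPerRow:0
                bitsPerPixel:0] autorelease];

// Render the CIImage into the bitmap. This forces evaluation once,
// so later redraws are plain bitmap blits with no filter work.
NSGraphicsContext *ctx = [NSGraphicsContext graphicsContextWithBitmapImageRep:bitmap];
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:ctx];
[[ctx CIContext] drawImage:filteredImage
                   atPoint:CGPointZero
                  fromRect:extent];
[NSGraphicsContext restoreGraphicsState];

// Wrap the bitmap in an NSImage for display.
NSImage *result = [[[NSImage alloc] initWithSize:
                       NSMakeSize(extent.size.width, extent.size.height)] autorelease];
[result addRepresentation:bitmap];
```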
Old Nov 8, 2007, 12:23 PM   #6
Soulstorm
Thread Starter
macrumors 68000
 
 
Join Date: Feb 2005
Hm... So let me get this straight.

At first, I have an NSImage. I take that NSImage and convert it to a CIImage in order to apply some filters. Then I need to render the filtered CIImage into an NSBitmapImageRep, add that representation to the NSImage that will be displayed in the NSImageView, and do that rendering using an NSGraphicsContext?
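For what it's worth, the NSImage-to-CIImage direction can be sketched like this (`sourceImage` is a stand-in name, and going through TIFFRepresentation is just one possible route):

```objc
// Get bitmap data out of the NSImage and build a CIImage from it.
NSBitmapImageRep *rep =
    [NSBitmapImageRep imageRepWithData:[sourceImage TIFFRepresentation]];
CIImage *input = [[[CIImage alloc] initWithBitmapImageRep:rep] autorelease];

// Apply a filter as usual; CIGaussianBlur here is just an example.
CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
[blur setDefaults];
[blur setValue:input forKey:@"inputImage"];
CIImage *output = [blur valueForKey:@"outputImage"];
```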
Old Nov 9, 2007, 03:09 AM   #7
cblackburn
macrumors regular
 
Join Date: Jul 2005
Location: London, UK
The problem you have is that CIImages are computed on the graphics card. Drawing them in an NSImageView then requires copying them from VRAM into an allocated chunk of RAM, and then drawing them back into VRAM to show them on screen. This is slow not only because there are multiple copy operations, but because you are never supposed to do that, so the path is not optimised: instead of doing the whole VRAM -> RAM transfer first and then RAM -> VRAM, it may do one bit at a time through each operation. Very slow indeed.

Also, there is a bug in the Core Image framework where copying an image from VRAM to RAM leaks memory, a lot of memory, equivalent to the size of the image. I wrote a program that did this with video from the iSight and it leaked about 250MB per second. Bear this in mind if you do it.

If you are only interested in showing it to the user then keep it in the VRAM and render it using an NSOpenGLView subclass (there is a good example here, http://developer.apple.com/samplecod.../listing9.html).

If you want to do some other pixel level alterations not using a CIFilter then you are going to have to copy the data down into an NSBitmapImageRep but beware of the bug mentioned above.

HTH

Chris
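For reference, the shape of that NSOpenGLView approach, sketched from memory rather than taken from the linked sample (`ciContext` and `filteredImage` are assumed ivars of the NSOpenGLView subclass):

```objc
// In an NSOpenGLView subclass. The CIContext is created once from the
// view's CGL context and reused, so the image never leaves the GPU.
- (void)drawRect:(NSRect)rect
{
    if (ciContext == nil) {
        ciContext = [[CIContext contextWithCGLContext:CGLGetCurrentContext()
                                          pixelFormat:[[self pixelFormat] CGLPixelFormatObj]
                                              options:nil] retain];
    }
    // Draw the CIImage directly into the GL surface.
    [ciContext drawImage:filteredImage
                 atPoint:CGPointZero
                fromRect:[filteredImage extent]];
    [[self openGLContext] flushBuffer];
}
```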
Old Nov 9, 2007, 09:41 AM   #8
Soulstorm
Thread Starter
macrumors 68000
 
 
Join Date: Feb 2005
Got it. Thanks a lot for the information, I will put it to good use. However, I have a question.

Why do I make that move from VRAM to RAM? That copy happens while the filter is being applied, but it happens only once. After that, when resizing the window or moving the scroll view, only the NSImage is redrawn, and that is already in RAM. I am not calling anything that would require the graphics card to intervene.

So why does resizing take so much processor time? Is it because of the bug you mentioned, that a memory leak has been created? And does that bug still exist in Leopard?
Old Nov 14, 2007, 07:15 AM   #9
WeeBull
macrumors newbie
 
Join Date: Aug 2004
Quote:
Originally Posted by Soulstorm View Post
Why do I make that move from VRAM to RAM?
OK, Core Image works by applying the filters to the image on the GPU. (By the way, you haven't mentioned what GPU you have; that will make a major difference to the speed.)

Speed is retained by keeping the data on the GPU and in its VRAM. Copying data back is slow, as Chris says, especially if the image is large (which it sounds like it must be, since you're scrolling around it).

Creating an NSBitmapImageRep, or drawing to an NSImageView, will create a host copy (i.e. one on the CPU side), because these are not GPU-based class families. NSImage (I think) has been extended to be able to contain a CIImage, so creating an NSImage from a CIImage probably doesn't have a high cost by itself; but you only do that in order to do something like draw it in an NSImageView, so the cost comes somewhere in that chain of events.

By keeping the CIImage, and using an NSOpenGLView, everything stays on the GPU, so no speed cost, and no memory leak.

Two other points:

1) Large images will be slower (obvious, but bear it in mind)
2) Changing inputs requires things to be recalculated, and defeats caching.

The second one is important.

Say you've got your image going through a blur. If you call [blurFilter setValue:x forKey:@"blurRadius"] in drawRect:, then every time the image is redisplayed the filter will re-blur the whole image. If you don't, it can reuse the image from last time around.

All you want in your drawRect: method is the drawImage: call, with no other messing with the filter chain. If you're doing video or animation, that will still cause slowdown, but the filter changes should live elsewhere in your code so they're only made when necessary.

In my (limited) experience this is far bigger than host<->GPU transfers.
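To make that concrete, here's a sketch of the idea. The `setBlurRadius:` setter, the `cachedImage` ivar, and the use of CIGaussianBlur's inputRadius key are my own illustrative choices, not code from the attached project:

```objc
// Re-run the filter only when an input actually changes, and cache
// the resulting CIImage.
- (void)setBlurRadius:(float)radius
{
    [blurFilter setValue:[NSNumber numberWithFloat:radius]
                  forKey:@"inputRadius"];
    [cachedImage release];
    cachedImage = [[blurFilter valueForKey:@"outputImage"] retain];
    [self setNeedsDisplay:YES];
}

// drawRect: only draws the cached result; no filter work happens here,
// so scrolling and resizing just redisplay the already-rendered image.
- (void)drawRect:(NSRect)rect
{
    [cachedImage drawAtPoint:NSZeroPoint
                    fromRect:NSRectFromCGRect([cachedImage extent])
                   operation:NSCompositeSourceOver
                    fraction:1.0];
}
```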
Old Nov 14, 2007, 03:45 PM   #10
Soulstorm
Thread Starter
macrumors 68000
 
 
Join Date: Feb 2005
The image I test it on is only 96 kbytes. And my system config is in my signature.

Quote:
All you want in your drawRect method is drawImage: call and no other messing with the filter chain. If you're doing video or animation, that will cause slow down, but should still be elsewhere in your code so it's only done when necessary.
I am using this drawing method. I only draw the image:

Code:
-(void)drawRect:(NSRect)rect
{
	NSLog(@"redrawing now...");
	[theImage drawInRect:[self bounds] fromRect:[self bounds] operation:NSCompositeSourceOver fraction:1.0];
}
Quote:
Say you've got your image going through a blur. If you call [blurFilter setValue:x forKey:@"blurRadius"] in drawRect:, then every time the image is redisplayed the filter will re-blur the whole image. If you don't, it can reuse the image from last time around.
I only apply the filter once, and I create an NSImage from the result. I then draw that image in an NSImageView. No matter how long it took for the resulting NSImage to be created, those calculations should not have to be done again when displaying that image in the NSImageView. That's why I can't understand the resulting speed.

Quote:
By keeping the CIImage, and using an NSOpenGLView, everything stays on the GPU, so no speed cost, and no memory leak.
I didn't have time to get involved with OpenGL in Cocoa for this project. When I do, I will convert my application to use an NSOpenGLView instead of an NSImageView, to see whether it handles the allocated memory more properly.

Btw, this is a very serious bug. How come Apple has not fixed this memory leak?
Old Nov 14, 2007, 05:34 PM   #11
Krevnik
macrumors 68020
 
 
Join Date: Sep 2003
Quote:
Originally Posted by Soulstorm View Post
Btw, this is a very serious bug. How come apple has not fixed this memory leak?
You haven't filed many bugs with Apple, have you?

I have filed bugs for crashes caused by APIs that Apple exposed in 10.5 not being properly guarded (a malformed search predicate hard-locked an app back in the WWDC seed)... and they are still open issues.
Old Nov 16, 2007, 03:29 AM   #12
WeeBull
macrumors newbie
 
Join Date: Aug 2004
Quote:
Originally Posted by Soulstorm View Post
The image I test it on is only 96 kbytes. And my system config is in my signature.
I missed the config; it's exactly the same as mine. But when I was talking about image size, I meant resolution rather than file size.

Quote:
I am using this drawing method. I only draw the image:

Code:
-(void)drawRect:(NSRect)rect
{
	NSLog(@"redrawing now...");
	[theImage drawInRect:[self bounds] fromRect:[self bounds] operation:NSCompositeSourceOver fraction:1.0];
}
OK, that looks fairly minimal. The only thing I'd try is setting the operation to NSCompositeCopy, unless you are actually blending one image over another.

So theImage is an NSImage there right?

Quote:
I only apply the filter once, and I create an NSImage from that object. I then draw that image to an NSImageView object. No matter how much time it took for the resulting NSImage to be created, such calculations will not have to be done again, when displaying that image on the NSImageView. That's why I can't understand the resulting speed.
Agreed. If I understand you correctly, and all that's happening is that you're scrolling an NSImageView with an NSImage inside it, then Core Image isn't the problem.

It might be time to get Shark out and profile your app. It's in /Developer/Applications/Performance Tools. Start it, run a debug build of your app, hit the Start button in Shark, and then make your app do its slow thing. After 30 seconds Shark will stop recording, analyze for a bit, and hopefully tell you where you're spending your time.

Shark's a really good tool, and worth learning how to use. Sometimes it's not the thing you expect that's slowing you down.

Quote:
Btw, this is a very serious bug. How come Apple has not fixed this memory leak?
I personally hadn't noticed it. A lot of the time you don't need to create host copies of images, so it's not a problem. And it shouldn't be what's causing your problems here; you're only doing one conversion.
