Image Editing / Image Effects

Discussion in 'iOS Programming' started by Blakeasd, May 12, 2013.

  1. Blakeasd macrumors 6502a

    Joined:
    Dec 29, 2009
    #1
    Hello,

    Lately I have seen a plethora of image editing/effects apps on the App Store and I have been curious about how this is done. I'm not actually interested in making my own -- just curious about the process, as I am an OS X developer and not an iOS developer. In OS X I would go about making an editor/effects app using Core Image. As far as I know, iOS Core Image isn't as developed as the OS X version (forgive me if I'm mistaken). So are the photos in these apps being manipulated with OpenGL ES, or are they actually being manipulated with Core Image? Again, just curious.

    Thanks.
     
  2. Duncan C, May 12, 2013
    Last edited: May 12, 2013

    Duncan C macrumors 6502a

    Joined:
    Jan 21, 2008
    Location:
    Northern Virginia
    #2
    You are right that Core Image (CI) is not as advanced in iOS as it is in Mac OS, but it's catching up. Apple introduced CI with iOS 5, and added a bunch of filters in iOS 6. I don't think you can attach filters to a layer like you can in Mac OS, but the list of supported filters is getting respectable.
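    For anyone who hasn't used it, the basic pattern is the same on iOS as on the Mac: wrap the image in a CIImage, configure a CIFilter, and render through a CIContext. Here's a minimal sketch (Swift shown for readability; the filter name and parameter values are purely illustrative):

    [CODE]
    import UIKit
    import CoreImage

    // Minimal sketch: run a single built-in filter (sepia tone, as an example)
    // over a UIImage and get a UIImage back.
    func applySepia(to input: UIImage, intensity: Double = 0.8) -> UIImage? {
        guard let ciInput = CIImage(image: input),
              let filter = CIFilter(name: "CISepiaTone") else { return nil }

        filter.setValue(ciInput, forKey: kCIInputImageKey)
        filter.setValue(intensity, forKey: kCIInputIntensityKey)

        let context = CIContext()   // renders on the GPU where possible
        guard let output = filter.outputImage,
              let cgImage = context.createCGImage(output, from: output.extent) else { return nil }

        return UIImage(cgImage: cgImage)
    }
    [/CODE]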

    In addition to the documented filters, there are "image enhancement" filters that you can ask for, and the system builds a stack of them to apply to an image. As far as I know, that's the only way to get red-eye reduction, for example.
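    That enhancement stack comes from asking the CIImage itself for its auto-adjustment filters and chaining them yourself. Roughly like this (the red-eye option is just an illustration; by default all the enhancements are included):

    [CODE]
    import CoreImage

    // Sketch: ask the image for its suggested enhancement filters
    // (red-eye reduction, face-aware enhance, etc.) and chain them.
    func autoEnhance(_ image: CIImage) -> CIImage {
        var result = image
        let filters = image.autoAdjustmentFilters(options: [.redEye: true])
        for filter in filters {
            filter.setValue(result, forKey: kCIInputImageKey)
            if let output = filter.outputImage {
                result = output
            }
        }
        return result
    }
    [/CODE]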

    For those filters that are not supported in CI, developers would have to come up with another solution like OpenGL ES.

    It's possible to share the same block of memory between a Core Video pixel buffer, an OpenGL texture, and a CIImage. That lets you apply image adjustments very fast, without having to copy memory around.
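    The glue for that is a pixel buffer created as OpenGL ES-compatible, wrapped by a CIImage on one side and mapped into a texture via a CVOpenGLESTextureCache on the other. A rough sketch, with error handling trimmed and the names illustrative:

    [CODE]
    import CoreImage
    import CoreVideo
    import OpenGLES

    // Sketch of the zero-copy setup: one CVPixelBuffer backs both a CIImage
    // and an OpenGL ES texture, so nothing gets copied between stages.
    func makeSharedImageAndTexture(width: Int, height: Int,
                                   eaglContext: EAGLContext) -> (CIImage, CVOpenGLESTexture)? {
        // 1. Pixel buffer that OpenGL ES can share.
        let attrs: [String: Any] = [
            kCVPixelBufferOpenGLESCompatibilityKey as String: true,
            kCVPixelBufferIOSurfacePropertiesKey as String: [:] as [String: Any]
        ]
        var pixelBuffer: CVPixelBuffer?
        CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                            kCVPixelFormatType_32BGRA, attrs as CFDictionary, &pixelBuffer)
        guard let buffer = pixelBuffer else { return nil }

        // 2. Wrap the same memory in a CIImage (no copy).
        let ciImage = CIImage(cvPixelBuffer: buffer)

        // 3. Map the same memory to a GL texture through a texture cache.
        var cache: CVOpenGLESTextureCache?
        CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, nil, eaglContext, nil, &cache)
        guard let textureCache = cache else { return nil }

        var texture: CVOpenGLESTexture?
        CVOpenGLESTextureCacheCreateTextureFromImage(
            kCFAllocatorDefault, textureCache, buffer, nil,
            GLenum(GL_TEXTURE_2D), GL_RGBA, GLsizei(width), GLsizei(height),
            GLenum(GL_BGRA), GLenum(GL_UNSIGNED_BYTE), 0, &texture)

        guard let glTexture = texture else { return nil }
        return (ciImage, glTexture)
    }
    [/CODE]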

    Our company has an app in review right now called Face Dancer that does just that. We load frames of video into a CVPixelBuffer, map that to a CIImage (the same block of memory) and use it to do face recognition, then map the SAME block of memory to an OpenGL texture. Then we do mesh warping on the texture and output the result to a still image (or an output CVPixelBuffer if we're creating a video.) The result is fast enough to do full-frame video at 30 or 60 frames a second.
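    Roughly, the face-recognition step in a pipeline like that can be done with CIDetector against the CIImage that wraps the pixel buffer. This is a sketch, not our production code, and the mesh warping itself is OpenGL shader work that isn't shown:

    [CODE]
    import CoreImage
    import CoreVideo

    // Sketch: find face features in a frame without copying it, by wrapping
    // the pixel buffer in a CIImage and running CIDetector over it.
    func detectFaces(in pixelBuffer: CVPixelBuffer, context: CIContext) -> [CIFaceFeature] {
        let image = CIImage(cvPixelBuffer: pixelBuffer)   // same memory, no copy
        let detector = CIDetector(ofType: CIDetectorTypeFace,
                                  context: context,
                                  options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
        let features = detector?.features(in: image) ?? []
        return features.compactMap { $0 as? CIFaceFeature }
    }
    [/CODE]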

    We can also generate animated GIF images like this one:

    [URL]http://imageshack.us/a/img59/2892/widehead.gif[/URL]
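    For the curious, animated GIF output like that can be written with ImageIO's CGImageDestination. A rough sketch, with the frame delay and loop count purely illustrative:

    [CODE]
    import UIKit
    import ImageIO
    import MobileCoreServices   // for kUTTypeGIF

    // Sketch: write a sequence of frames out as an animated GIF.
    func writeGIF(frames: [UIImage], to url: URL, frameDelay: Double = 0.1) {
        guard let destination = CGImageDestinationCreateWithURL(url as CFURL, kUTTypeGIF,
                                                                frames.count, nil) else { return }

        // Loop count 0 means "loop forever".
        let fileProperties = [kCGImagePropertyGIFDictionary as String:
                                  [kCGImagePropertyGIFLoopCount as String: 0]]
        CGImageDestinationSetProperties(destination, fileProperties as CFDictionary)

        let frameProperties = [kCGImagePropertyGIFDictionary as String:
                                   [kCGImagePropertyGIFDelayTime as String: frameDelay]]
        for frame in frames {
            if let cgImage = frame.cgImage {
                CGImageDestinationAddImage(destination, cgImage, frameProperties as CFDictionary)
            }
        }
        CGImageDestinationFinalize(destination)
    }
    [/CODE]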

    (Our app doesn't use CI filters because it's focused on mesh warping, but it could.)
     
  3. PhoneyDeveloper macrumors 68030

    Joined:
    Sep 2, 2008
    #3
    I assure you that Photoshop doesn't use Core Image. It has its own algorithms. You can write your own algorithms as well. I think there were plenty of image manipulation apps on the App Store before Core Image was available on iOS.
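    The do-it-yourself route usually means drawing the image into a bitmap context and walking the raw bytes on the CPU. A sketch of that, with a trivial invert standing in for a real algorithm:

    [CODE]
    import UIKit
    import CoreGraphics

    // Sketch: draw a UIImage into an RGBA bitmap, modify the bytes directly,
    // and build a new image from the result.
    func invertPixels(of image: UIImage) -> UIImage? {
        guard let cgImage = image.cgImage else { return nil }
        let width = cgImage.width, height = cgImage.height
        let bytesPerRow = width * 4
        var pixels = [UInt8](repeating: 0, count: bytesPerRow * height)
        let colorSpace = CGColorSpaceCreateDeviceRGB()

        let outputCG: CGImage? = pixels.withUnsafeMutableBytes { (buffer: UnsafeMutableRawBufferPointer) -> CGImage? in
            guard let context = CGContext(data: buffer.baseAddress, width: width, height: height,
                                          bitsPerComponent: 8, bytesPerRow: bytesPerRow,
                                          space: colorSpace,
                                          bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
            else { return nil }
            context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))

            // The actual "algorithm": invert R, G, B and leave alpha alone.
            let bytes = buffer.bindMemory(to: UInt8.self)
            for i in stride(from: 0, to: bytes.count, by: 4) {
                bytes[i]     = 255 - bytes[i]
                bytes[i + 1] = 255 - bytes[i + 1]
                bytes[i + 2] = 255 - bytes[i + 2]
            }
            return context.makeImage()
        }
        return outputCG.map { UIImage(cgImage: $0) }
    }
    [/CODE]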
     
  4. Duncan C macrumors 6502a

    Joined:
    Jan 21, 2008
    Location:
    Northern Virginia
    #4
    You are likely right that "PS Touch" does not use Core Image, but that's not very relevant.

    Getting decent performance for image adjustments almost requires GPU acceleration on iOS. Core Image filters make that fairly easy. Otherwise you have to use OpenGL, OpenCV, or other technologies that require a much higher level of expertise and lots more developer hours.
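    Concretely, the "fairly easy" part is that a CIContext built on an EAGLContext keeps filter evaluation on the GPU and can draw straight into the bound GL framebuffer (for example inside a GLKView's draw method), so there's no CPU round trip. Something like this sketch, with the filter name and options illustrative and the context ideally created once rather than per frame:

    [CODE]
    import CoreImage
    import OpenGLES
    import UIKit

    // Sketch: filter an image and render it into whatever framebuffer is
    // currently bound on the given EAGLContext, entirely on the GPU.
    func drawFiltered(_ image: CIImage, filterName: String,
                      eaglContext: EAGLContext, in bounds: CGRect) {
        // Skipping color management trades accuracy for speed.
        let ciContext = CIContext(eaglContext: eaglContext,
                                  options: [.workingColorSpace: NSNull()])
        guard let filter = CIFilter(name: filterName) else { return }
        filter.setValue(image, forKey: kCIInputImageKey)
        guard let output = filter.outputImage else { return }
        ciContext.draw(output, in: bounds, from: output.extent)
    }
    [/CODE]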

    Yes, there were a fair number of image manipulation programs in the app store before CI was released, but most of them were very slow. Those that weren't were likely using techniques like the ones we're discussing.

    Adobe can afford to spend hundreds of thousands of dollars to develop an app like PS Touch, and if they do a good job on it the PS brand means they will likely make their money back and more. That's not the case for most other developers.

    Thus the OP's question is valid and deserves a thoughtful response.
     
