I added a new iOS 8 Photo Extension to my existing photo editing app. My app has quite a complex filter pipeline and needs to keep multiple textures in memory at a time. How can I keep this pipeline within the tight memory limits that iOS imposes on app extensions?
Here is how you apply two consecutive convolution kernels in Core Image, with the intermediate result cropped between them:
// -outputImage of a custom CIFilter subclass that declares inputImage and
// inputIntensity properties. Nothing is rendered here; the return value is
// only a recipe for the two convolution passes.
- (CIImage *)outputImage {
    const double g = self.inputIntensity.doubleValue;

    // First pass: a horizontal gradient kernel, scaled by the filter intensity.
    const CGFloat weights_v[] = { -1*g, 0*g, 1*g,
                                  -1*g, 0*g, 1*g,
                                  -1*g, 0*g, 1*g };
    CIImage *result = [CIFilter filterWithName:@"CIConvolution3X3" keysAndValues:
                       @"inputImage", self.inputImage,
                       @"inputWeights", [CIVector vectorWithValues:weights_v count:9],
                       @"inputBias", [NSNumber numberWithFloat:1.0],
                       nil].outputImage;

    // Convolution yields an image with infinite extent, so crop the
    // intermediate result back to the bounds of the original input.
    CGRect rect = [self.inputImage extent];
    rect.origin = CGPointZero;
    CGRect cropRectLeft = CGRectMake(0, 0, rect.size.width, rect.size.height);
    CIVector *cropRect = [CIVector vectorWithX:rect.origin.x Y:rect.origin.y
                                             Z:rect.size.width W:rect.size.height];
    result = [result imageByCroppingToRect:cropRectLeft];
    // CICrop does the same job as -imageByCroppingToRect:; either one alone would do.
    result = [CIFilter filterWithName:@"CICrop" keysAndValues:
              @"inputImage", result, @"inputRectangle", cropRect, nil].outputImage;

    // Second pass: a vertical gradient kernel applied to the cropped intermediate result.
    const CGFloat weights_h[] = { -1*g, -1*g, -1*g,
                                   0*g,  0*g,  0*g,
                                   1*g,  1*g,  1*g };
    result = [CIFilter filterWithName:@"CIConvolution3X3" keysAndValues:
              @"inputImage", result,
              @"inputWeights", [CIVector vectorWithValues:weights_h count:9],
              @"inputBias", [NSNumber numberWithFloat:1.0],
              nil].outputImage;

    // Crop again after the second convolution.
    result = [result imageByCroppingToRect:cropRectLeft];
    result = [CIFilter filterWithName:@"CICrop" keysAndValues:
              @"inputImage", result, @"inputRectangle", cropRect, nil].outputImage;

    // Finally, invert the colors.
    result = [CIFilter filterWithName:@"CIColorInvert"
                        keysAndValues:kCIInputImageKey, result, nil].outputImage;
    return result;
}
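For reference, here is roughly how such a filter would be driven; the class name MyEdgeFilter and the file path are placeholders, not part of the code above. The point is that no full-size bitmap exists until the createCGImage: call at the end.

#import <CoreImage/CoreImage.h>

// Hypothetical usage of the custom filter above (MyEdgeFilter is a placeholder name).
NSURL *imageURL = [NSURL fileURLWithPath:@"/path/to/photo.jpg"];   // placeholder path

MyEdgeFilter *filter = [[MyEdgeFilter alloc] init];
filter.inputImage = [CIImage imageWithContentsOfURL:imageURL];
filter.inputIntensity = @1.0;

CIImage *output = filter.outputImage;        // still only a recipe, nothing rendered yet

// Memory for the full-resolution bitmap is committed only here, at render time.
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgImage = [context createCGImage:output fromRect:output.extent];
// ... display or save cgImage, then CGImageRelease(cgImage);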
If you express your pipeline as a Core Image "recipe," you needn't worry about memory at all, just as Marco said. A CIImage that has filters applied to it is not rendered until you actually draw it into a view or render it through a CIContext.
That means you could apply a million filters to a highway billboard-sized photo and memory still would not be the issue. At render time the chain of filter specifications is compiled down into a small number of GPU kernels, so the intermediate results never need to be materialized as full-size bitmaps.
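To illustrate (these are standard Core Image filters and the loop count is arbitrary): each filter application below merely extends the recipe, and nothing is evaluated until the single render call at the end.

CIImage *image = [CIImage imageWithContentsOfURL:
                  [NSURL fileURLWithPath:@"/path/to/photo.jpg"]];   // placeholder path

// Chain a large number of filters; each step returns a new lightweight CIImage
// describing the work to be done, not a rendered bitmap.
for (int i = 0; i < 1000; i++) {
    image = [CIFilter filterWithName:@"CIColorControls" keysAndValues:
             kCIInputImageKey, image,
             kCIInputSaturationKey, @1.01,
             nil].outputImage;
}

// Only this call evaluates the chain; Core Image concatenates the steps into
// as few GPU programs as it can before executing them.
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef rendered = [context createCGImage:image fromRect:image.extent];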
Misunderstandings about memory management, overflow and the like are easily remedied by getting oriented in the core concepts of your chosen programming language, development environment and hardware platform.
Apple's documentation introducing Core Image filter programming is sufficient for this; if you'd like specific references to the portions of the documentation that I believe pertain to your concerns, just ask.
I am developing a Photo Editing extension for my company, and we are facing the same issue. Our internal image-processing engine needs more than 150 MB to apply certain effects to an image, and that is not even counting panorama images, which take around 100 MB of memory per copy.
We found only two workarounds, but not an actual solution.
According to Apple's App Extension Programming Guide (page 55, "Handling Memory Constraints"), the solution for memory pressure in an extension is to review your image-processing code. We are currently porting our image-processing engine to Core Image, and the results are far better than with our previous engine.
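As one example of the kind of change such a review can lead to (just a sketch; the 2048-pixel cap is an arbitrary number, not an Apple recommendation), letting Core Image downscale the source before the expensive filters run keeps the working set small:

// Downscale the source before heavy processing so the peak working set stays small.
NSURL *inputURL = [NSURL fileURLWithPath:@"/path/to/photo.jpg"];   // placeholder path
CIImage *source = [CIImage imageWithContentsOfURL:inputURL];

CGFloat maxDimension = MAX(source.extent.size.width, source.extent.size.height);
CGFloat scale = MIN((CGFloat)1.0, (CGFloat)2048.0 / maxDimension);   // 2048 is an arbitrary cap

CIImage *scaled = [CIFilter filterWithName:@"CILanczosScaleTransform" keysAndValues:
                   kCIInputImageKey, source,
                   kCIInputScaleKey, @(scale),
                   kCIInputAspectRatioKey, @1.0,
                   nil].outputImage;
// ... apply the rest of the Core Image pipeline to `scaled` ...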
I hope I could help a bit.

Marco Paiva