Question
I added a new iOS 8 Photo Editing extension to my existing photo editing app. My app has quite a complex filter pipeline and needs to keep multiple textures in memory at a time. Even so, on devices with 1 GB of RAM I can easily process 8 MP images.
In the extension, however, the memory constraints are much tighter. I had to scale the image down to under 2 MP to get it processed without crashing the extension. I also noticed that the memory problems only occur when no debugger is attached to the extension; with one attached, everything works fine.
I did some experiments. I modified a memory budget test app to work within an extension (a minimal version of the allocation loop is sketched after the observations below) and came up with the following results, showing the amount of RAM in MB that can be allocated before crashing:
╔═══════════════════════╦═════╦═══════════╦══════════════════╗
║ Device ║ App ║ Extension ║ Ext. (+Debugger) ║
╠═══════════════════════╬═════╬═══════════╬══════════════════╣
║ iPhone 6 Plus (8.0.2) ║ 646 ║ 115 ║ 645 ║
║ iPhone 5 (8.1 beta 2) ║ 647 ║ 97 ║ 646 ║
║ iPhone 4s (8.0.2) ║ 305 ║ 97 ║ 246 ║
╚═══════════════════════╩═════╩═══════════╩══════════════════╝
A few observations:
- With the debugger attached, the extension behaves like the "normal" app.
- Even though the 4s has only half the total memory (512 MB) of the other devices, it gets the same ~100 MB from the system for the extension.
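For reference, the probing loop is essentially the following. This is a minimal sketch of the allocate-and-touch approach (the chunk size and logging are my own choices); the last value logged before the system kills the process approximates the budget:

#import <Foundation/Foundation.h>

static void probeMemoryBudget(void) {
    const size_t chunkSize = 1024 * 1024; // 1 MB per chunk
    NSMutableArray *chunks = [NSMutableArray array];
    for (NSUInteger mb = 1; ; mb++) {
        // Allocate a chunk and touch every byte so the pages are actually resident.
        NSMutableData *chunk = [NSMutableData dataWithLength:chunkSize];
        memset(chunk.mutableBytes, 0xFF, chunkSize);
        [chunks addObject:chunk]; // keep it alive so nothing gets freed
        NSLog(@"Allocated %lu MB", (unsigned long)mb);
    }
}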
Now my question: how am I supposed to work with this small amount of memory in a Photo Editing extension? A single texture holding an 8 MP RGBA image (camera resolution, roughly 3264 × 2448 pixels × 4 bytes per pixel) already eats ~31 MB on its own. What is the point of this extension mechanism if I have to tell the user that full-size editing is only possible in the main app?
Has anyone else hit this barrier? Did you find a way around this constraint?
Answer 1:
I am developing a Photo Editing extension for my company, and we are facing the same issue. Our internal image-processing engine needs more than 150 MB to apply certain effects to an image, and that is not even counting panorama images, which take around ~100 MB of memory per copy.
We found only two workarounds, but not an actual solution.
- Scale the image down before applying the filter. This requires far less memory, but the resulting image quality is terrible; at least the extension does not crash. (A minimal downscaling sketch follows this list.)
or
- Use Core Image (or Metal) for the image processing. When we analyzed the sample Photo Editing extension from Apple, which uses Core Image, we found it handles very large images and even panoramas without quality or resolution loss; we were actually unable to crash the extension by loading very large images. The sample code processes panoramas with a memory peak of about 40 MB, which is pretty impressive.
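For the first workaround, a downscaling step could look roughly like this. This is a minimal sketch: the helper name and the target edge length are my own placeholders, and CILanczosScaleTransform is used so the resize stays part of the Core Image recipe:

#import <CoreImage/CoreImage.h>

// Downscale a CIImage so its longest edge is at most maxEdge (e.g. 2000 px).
static CIImage *downscaledImage(CIImage *input, CGFloat maxEdge) {
    CGRect extent = input.extent;
    CGFloat longEdge = MAX(extent.size.width, extent.size.height);
    if (longEdge <= maxEdge) {
        return input; // already small enough
    }
    CIFilter *resize = [CIFilter filterWithName:@"CILanczosScaleTransform"];
    [resize setValue:input forKey:kCIInputImageKey];
    [resize setValue:@(maxEdge / longEdge) forKey:kCIInputScaleKey];
    [resize setValue:@(1.0) forKey:kCIInputAspectRatioKey];
    return resize.outputImage;
}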
According to Apple's App Extension Programming Guide (page 55, chapter "Handling Memory Constraints"), the way to deal with memory pressure in an extension is to review your image-processing code. We are currently porting our image-processing engine to Core Image, and the results are far better than with our previous engine.
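For context, the point where a Photo Editing extension actually has to materialize pixels is when it writes the result back in -finishContentEditingWithCompletionHandler:. A rough sketch of that step follows; self.input is assumed to hold the PHContentEditingInput received in -startContentEditingWithInput:placeholderImage:, applyFilters: and the adjustment-data identifier are placeholders for your own pipeline, and error handling is omitted:

#import <UIKit/UIKit.h>
#import <Photos/Photos.h>
#import <PhotosUI/PhotosUI.h>
#import <CoreImage/CoreImage.h>

// The filter chain stays a recipe until createCGImage:fromRect: forces a single render pass.
- (void)finishContentEditingWithCompletionHandler:(void (^)(PHContentEditingOutput *))completionHandler {
    PHContentEditingOutput *output =
        [[PHContentEditingOutput alloc] initWithContentEditingInput:self.input];
    output.adjustmentData =
        [[PHAdjustmentData alloc] initWithFormatIdentifier:@"com.example.filters" // placeholder identifier
                                             formatVersion:@"1.0"
                                                      data:[NSData data]];        // placeholder payload

    CIImage *source = [CIImage imageWithContentsOfURL:self.input.fullSizeImageURL];
    CIImage *filtered = [self applyFilters:source]; // placeholder for your Core Image recipe

    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef rendered = [context createCGImage:filtered fromRect:filtered.extent];
    NSData *jpeg = UIImageJPEGRepresentation([UIImage imageWithCGImage:rendered], 0.9);
    CGImageRelease(rendered);

    [jpeg writeToURL:output.renderedContentURL atomically:YES];
    completionHandler(output);
}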
I hope this helps a bit. Marco Paiva
Answer 2:
If you're using a Core Image "recipe," you needn't worry about memory at all, just as Marco said. No image on which Core Image filters are applied is rendered until the image object is returned to the view.
That means you could apply a million filters to a highway-billboard-sized photo and memory would still not be the issue. The filter specifications are simply compiled into a convolution or kernel, which comes down to the same size no matter what.
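To illustrate the point, here is a minimal sketch (the filter choices are arbitrary, and inputURL is assumed to point at the full-size photo). Nothing below allocates an output buffer until the final createCGImage:fromRect: call, however many filters are chained:

#import <CoreImage/CoreImage.h>

CIImage *image = [CIImage imageWithContentsOfURL:inputURL];

// Each line only extends the recipe; no pixels are processed yet.
image = [image imageByApplyingFilter:@"CISepiaTone"
                 withInputParameters:@{kCIInputIntensityKey : @0.8}];
image = [image imageByApplyingFilter:@"CIGaussianBlur"
                 withInputParameters:@{kCIInputRadiusKey : @2.0}];
image = [image imageByApplyingFilter:@"CIColorInvert"
                 withInputParameters:nil];

// Only this call evaluates the whole chain and allocates the rendered image.
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef rendered = [context createCGImage:image fromRect:image.extent];
// ... use rendered, then CGImageRelease(rendered);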
Misunderstandings about memory management, overflow and the like are easily remedied by familiarizing yourself with the core concepts of your chosen programming language, development environment and hardware platform.
Apple's documentation introducing Core Image filter programming is sufficient for this; if you'd like specific references to portions of the documentation that I believe pertain specifically to your concerns, just ask.
Answer 3:
Here is how you apply two consecutive convolution kernels in Core Image, with the "intermediate result" between them:
- (CIImage *)outputImage {
    const double g = self.inputIntensity.doubleValue;

    // First 3x3 kernel, scaled by the intensity g.
    const CGFloat weights_v[] = { -1*g, 0*g, 1*g,
                                  -1*g, 0*g, 1*g,
                                  -1*g, 0*g, 1*g };

    // First convolution pass. Nothing is rendered here; this only builds the recipe.
    CIImage *result = [CIFilter filterWithName:@"CIConvolution3X3" keysAndValues:
                       @"inputImage", self.inputImage,
                       @"inputWeights", [CIVector vectorWithValues:weights_v count:9],
                       @"inputBias", [NSNumber numberWithFloat:1.0],
                       nil].outputImage;

    // The convolution expands the extent, so crop back to the original image size.
    CGRect rect = [self.inputImage extent];
    rect.origin = CGPointZero;
    CGRect cropRectLeft = CGRectMake(0, 0, rect.size.width, rect.size.height);
    CIVector *cropRect = [CIVector vectorWithX:rect.origin.x Y:rect.origin.y Z:rect.size.width W:rect.size.height];
    result = [result imageByCroppingToRect:cropRectLeft];
    result = [CIFilter filterWithName:@"CICrop" keysAndValues:@"inputImage", result, @"inputRectangle", cropRect, nil].outputImage;

    // Second 3x3 kernel (transposed orientation), applied to the intermediate result.
    const CGFloat weights_h[] = { -1*g, -1*g, -1*g,
                                   0*g,  0*g,  0*g,
                                   1*g,  1*g,  1*g };

    result = [CIFilter filterWithName:@"CIConvolution3X3" keysAndValues:
              @"inputImage", result,
              @"inputWeights", [CIVector vectorWithValues:weights_h count:9],
              @"inputBias", [NSNumber numberWithFloat:1.0],
              nil].outputImage;

    // Crop again after the second convolution.
    result = [result imageByCroppingToRect:cropRectLeft];
    result = [CIFilter filterWithName:@"CICrop" keysAndValues:@"inputImage", result, @"inputRectangle", cropRect, nil].outputImage;

    // Final color inversion; the whole chain is still an unrendered recipe at this point.
    result = [CIFilter filterWithName:@"CIColorInvert" keysAndValues:kCIInputImageKey, result, nil].outputImage;

    return result;
}
Source: https://stackoverflow.com/questions/26405876/how-to-handle-memory-constraints-in-ios-8-photo-extensions