How to handle memory constraints in iOS 8 Photo Extensions?

鱼传尺愫 2020-12-31 19:50

I added a new iOS 8 Photo Extension to my existing photo-editing app. My app has quite a complex filter pipeline and needs to keep multiple textures in memory at a time. How can I handle the memory constraints that iOS imposes on extensions?

3 Answers
  • 2020-12-31 20:17

    Here is how you apply two consecutive convolution kernels in Core Image, with the intermediate result passed between them:

    - (CIImage *)outputImage {
        const double g = self.inputIntensity.doubleValue;

        // Horizontal-gradient kernel (responds to vertical edges).
        const CGFloat weights_v[] = { -1*g, 0, 1*g,
                                      -1*g, 0, 1*g,
                                      -1*g, 0, 1*g };

        CIImage *result = [CIFilter filterWithName:@"CIConvolution3X3" keysAndValues:
                           kCIInputImageKey, self.inputImage,
                           @"inputWeights", [CIVector vectorWithValues:weights_v count:9],
                           @"inputBias", @1.0,
                           nil].outputImage;

        // Convolution grows the image's extent, so crop back to the
        // original size after each pass.
        CGRect cropRect = self.inputImage.extent;
        cropRect.origin = CGPointZero;
        result = [result imageByCroppingToRect:cropRect];

        // Vertical-gradient kernel (responds to horizontal edges).
        const CGFloat weights_h[] = { -1*g, -1*g, -1*g,
                                       0,    0,    0,
                                       1*g,  1*g,  1*g };

        result = [CIFilter filterWithName:@"CIConvolution3X3" keysAndValues:
                  kCIInputImageKey, result,
                  @"inputWeights", [CIVector vectorWithValues:weights_h count:9],
                  @"inputBias", @1.0,
                  nil].outputImage;

        result = [result imageByCroppingToRect:cropRect];

        result = [CIFilter filterWithName:@"CIColorInvert"
                            keysAndValues:kCIInputImageKey, result, nil].outputImage;

        return result;
    }
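
    A filter class like the one above only builds a recipe. As a hedged illustration (the instance name `myFilter` and the rendering setup are assumptions, not part of the original answer), nothing in that chain is actually evaluated until you render it through a `CIContext`:

    ```objc
    #import <CoreImage/CoreImage.h>
    #import <UIKit/UIKit.h>

    // Assumed: `myFilter` is an instance of the filter class whose
    // -outputImage is shown above. No pixels are processed until the
    // createCGImage:fromRect: call below.
    CIContext *context = [CIContext contextWithOptions:nil];
    CIImage *output = myFilter.outputImage;
    CGImageRef cgImage = [context createCGImage:output fromRect:output.extent];
    UIImage *rendered = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    ```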

  • 2020-12-31 20:35

    If you're building a Core Image "recipe," you needn't worry much about memory, just as Marco said. A CIImage with filters applied is only a description of the processing to be done; no pixels are rendered until the result is actually drawn or explicitly rendered through a context.

    That means you could chain a great many filters over a billboard-sized photo without the intermediate images ever being materialized. At render time, Core Image concatenates the filter chain into as few GPU kernels as possible, so the recipe itself stays small no matter how many filters it contains.

    Misunderstandings about memory management, overflow and the like are easily remedied by grounding yourself in the core concepts of your chosen programming language, development environment and hardware platform.

    Apple's documentation introducing Core Image filter programming is sufficient for this; if you'd like specific references to portions of the documentation that I believe pertain specifically to your concerns, just ask.
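
    As a sketch of that point (the input URL, filter choice and loop count are illustrative assumptions), chaining many filters only grows the lazy recipe; memory is committed at the single render call:

    ```objc
    #import <CoreImage/CoreImage.h>

    // `inputURL` is assumed to point at a large source photo.
    CIImage *image = [CIImage imageWithContentsOfURL:inputURL];
    for (int i = 0; i < 10; i++) {
        // Each pass merely extends the lazy filter graph; no rendering yet.
        image = [CIFilter filterWithName:@"CIGaussianBlur"
                           keysAndValues:kCIInputImageKey, image,
                                         @"inputRadius", @2.0, nil].outputImage;
    }
    // Only here does Core Image compile the chain and allocate render buffers.
    CIContext *ctx = [CIContext contextWithOptions:nil];
    CGImageRef result = [ctx createCGImage:image fromRect:image.extent];
    ```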

  • 2020-12-31 20:41

    I am developing a Photo Editing extension for my company, and we are facing the same issue. Our internal image-processing engine needs more than 150 MB to apply certain effects to an image, and that is not even counting panorama images, which take around 100 MB of memory per copy.

    We found only two workarounds, but no actual solution.

    1. Scale the image down, then apply the filter. This requires far less memory, but the resulting image looks terrible; at least the extension will not crash.

    2. Use Core Image or Metal for the processing. We analyzed Apple's Sample Photo Editing Extension, which uses Core Image; it handles very large images, and even panoramas, without quality or resolution loss. In fact, we were unable to crash it by loading very large images: the sample code processes panoramas with a memory peak of around 40 MB, which is pretty impressive.
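
    Workaround 1 could be sketched like this (the 0.5 scale factor and `inputURL` are assumptions; CILanczosScaleTransform is one standard Core Image way to downscale):

    ```objc
    #import <CoreImage/CoreImage.h>

    // `inputURL` is assumed. Halving each dimension roughly quarters the
    // memory needed per texture, at the cost of output resolution.
    CIImage *fullSize = [CIImage imageWithContentsOfURL:inputURL];
    CIImage *smaller = [CIFilter filterWithName:@"CILanczosScaleTransform"
                                  keysAndValues:kCIInputImageKey, fullSize,
                                                @"inputScale", @0.5,
                                                @"inputAspectRatio", @1.0,
                                  nil].outputImage;
    // ...apply the expensive effect to `smaller` instead of `fullSize`.
    ```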

    According to Apple's App Extension Programming Guide (page 55, chapter "Handling Memory Constraints"), the solution to memory pressure in an extension is to review your image-processing code. We are currently porting our engine to Core Image, and the results are far better than with our previous engine.

    I hope this helps a bit. Marco Paiva
