What is the most memory-efficient way of downscaling images on iOS?

Asked by 后悔当初 on 2021-01-31 19:18 · 6 answers · 2075 views

In a background thread, my application needs to read images from disk, downscale them to the screen size (1024x768 or 2048x1536), and save them back to disk. The original images are …

6 Answers
  • 2021-01-31 19:32

    I think if you want to save memory, you can read the source image tile by tile, downscale each tile, and write it into the destination; see the sketch at the end of this answer.

    Apple provides a sample project that implements exactly this approach:

    https://developer.apple.com/library/ios/samplecode/LargeImageDownsizing/Introduction/Intro.html

    You can download the project and run it. Note that it uses MRC (manual reference counting) rather than ARC.

    Hope it helps. :)
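
    For reference, here is a minimal C sketch of the tiled approach, loosely in the spirit of Apple's sample: draw one horizontal band of the source at a time into a destination-sized bitmap context. The function name, the tile height, and the same-aspect-ratio assumption are mine, and whether CGImageCreateWithImageInRect actually avoids decoding the whole source depends on the image format and the decoder.

    #include <CoreGraphics/CoreGraphics.h>

    // Downscale `source` into a dstWidth x dstHeight bitmap, one horizontal
    // band at a time, so that ideally only one band of the source is decoded
    // at once. Assumes source and destination have the same aspect ratio.
    CGImageRef CreateDownscaledImage (CGImageRef source, size_t dstWidth, size_t dstHeight)
    {
        const size_t kTileHeight = 256;   // source rows per band (hypothetical value)
        size_t srcWidth  = CGImageGetWidth (source);
        size_t srcHeight = CGImageGetHeight (source);
        CGFloat scale = (CGFloat) dstHeight / (CGFloat) srcHeight;

        CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB ();
        CGContextRef ctx = CGBitmapContextCreate (NULL, dstWidth, dstHeight, 8,
                                                  dstWidth * 4, space,
                                                  kCGImageAlphaPremultipliedLast);
        CGColorSpaceRelease (space);

        for (size_t y = 0; y < srcHeight; y += kTileHeight) {
            size_t rows = (y + kTileHeight <= srcHeight) ? kTileHeight : srcHeight - y;
            CGImageRef tile = CGImageCreateWithImageInRect (source,
                                  CGRectMake (0, y, srcWidth, rows));
            // Bitmap contexts have a flipped y-axis relative to image row
            // order, so place each band measuring from the bottom up.
            CGRect dstRect = CGRectMake (0, dstHeight - (y + rows) * scale,
                                         dstWidth, rows * scale);
            CGContextDrawImage (ctx, dstRect, tile);
            CGImageRelease (tile);
        }

        CGImageRef result = CGBitmapContextCreateImage (ctx);
        CGContextRelease (ctx);
        return result;   // caller releases
    }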

  • 2021-01-31 19:42

    With libjpeg-turbo you can use the scale_num and scale_denom fields of jpeg_decompress_struct, and it will decode only the needed blocks of the image. That gave me 250 ms of decoding+scaling time on a background thread on an iPhone 4S, going from a 3264x2448 original (from the camera, with the image data already in memory) down to the iPhone's display resolution. I guess that's OK for an image that large, but still not great.

    (And yes, that is memory efficient: you can decode and store the image almost line by line.)
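
    For illustration, here is a minimal C sketch of that scaled decode; the function name and the 1/4 ratio are my own choices, and jpeg_mem_src assumes libjpeg-turbo (or libjpeg 8+):

    #include <stdio.h>
    #include <stdlib.h>
    #include <jpeglib.h>

    // Decode a JPEG held in memory at a reduced scale. Setting
    // scale_num/scale_denom before jpeg_start_decompress() makes the
    // library skip work on DCT coefficients it will never need.
    unsigned char *decode_jpeg_scaled (const unsigned char *buf, unsigned long len,
                                       int *out_width, int *out_height)
    {
        struct jpeg_decompress_struct cinfo;
        struct jpeg_error_mgr jerr;

        cinfo.err = jpeg_std_error (&jerr);
        jpeg_create_decompress (&cinfo);
        jpeg_mem_src (&cinfo, (unsigned char *) buf, len);
        jpeg_read_header (&cinfo, TRUE);

        cinfo.scale_num = 1;     // decode at 1/4 of the original size;
        cinfo.scale_denom = 4;   // libjpeg rounds to a supported M/8 ratio

        jpeg_start_decompress (&cinfo);
        *out_width  = cinfo.output_width;
        *out_height = cinfo.output_height;

        size_t row_stride = cinfo.output_width * cinfo.output_components;
        unsigned char *pixels = malloc (row_stride * cinfo.output_height);

        // Read one scanline at a time, which is what keeps the peak
        // memory close to the size of the scaled output.
        while (cinfo.output_scanline < cinfo.output_height) {
            unsigned char *row = pixels + cinfo.output_scanline * row_stride;
            jpeg_read_scanlines (&cinfo, &row, 1);
        }

        jpeg_finish_decompress (&cinfo);
        jpeg_destroy_decompress (&cinfo);
        return pixels;   // caller frees
    }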

  • 2021-01-31 19:43

    While I can't definitely say it will help, I think it's worth trying to push the work to the GPU. You can either do that yourself by rendering a textured quad at the target size, or by using GPUImage and its resizing capabilities. While it has some texture-size limitations on older devices, it should have much better performance than a CPU-based solution.

  • 2021-01-31 19:45

    I would try using a C-based library like Leptonica. I'm not sure whether iOS backs Core Graphics with the relatively new Accelerate framework, but Core Graphics probably has more overhead involved just to resize an image. Finally, if you want to roll your own implementation, try using vImageScale_??format?? backed by some memory-mapped files; I can't see anything being faster. There is a sketch at the end of this answer.

    http://developer.apple.com/library/ios/#documentation/Performance/Conceptual/vImage/Introduction/Introduction.html

    PS. Also make sure to check the compiler optimization flags.
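
    For what it's worth, here is a minimal C sketch of the vImage route, using vImageScale_ARGB8888 as one concrete member of that family; the function name is mine, and it assumes the source is already decoded into 8-bit-per-channel ARGB pixels (the memory-mapping suggested above is left out):

    #include <Accelerate/Accelerate.h>
    #include <stdlib.h>

    unsigned char *scale_argb8888 (const unsigned char *srcPixels,
                                   size_t srcW, size_t srcH,
                                   size_t dstW, size_t dstH)
    {
        vImage_Buffer src = {
            .data     = (void *) srcPixels,
            .height   = srcH,
            .width    = srcW,
            .rowBytes = srcW * 4,
        };
        vImage_Buffer dst = {
            .data     = malloc (dstW * 4 * dstH),
            .height   = dstH,
            .width    = dstW,
            .rowBytes = dstW * 4,
        };

        // kvImageHighQualityResampling trades speed for quality; use
        // kvImageNoFlags for the fastest path.
        vImage_Error err = vImageScale_ARGB8888 (&src, &dst, NULL,
                                                 kvImageHighQualityResampling);
        if (err != kvImageNoError) {
            free (dst.data);
            return NULL;
        }
        return dst.data;   // caller frees
    }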

  • 2021-01-31 19:48

    What you said on Twitter does not match your question.

    If you are seeing memory spikes, use Instruments to figure out what is consuming the memory. The data alone for your high-resolution image is 10 megs, and your resulting images are going to be about 750k if they contain no alpha channel.

    The first issue is keeping memory usage low. For that, make sure that all of the images you load are disposed as soon as you are done using them; that will ensure that the underlying C/Objective-C API releases the memory immediately, instead of waiting for the GC to run. Something like:

     using (var img = UIImage.FromFile ("...")){
         using (var scaled = Scaler (img)){
              scaled.Save (...);
         }
     }
    

    As for the scaling itself, there are a variety of ways to do it. The simplest is to create a context, draw into it, and then get the image out of the context. This is how MonoTouch's UIImage.Scale method is implemented:

    public UIImage Scale (SizeF newSize)
    {
        UIGraphics.BeginImageContext (newSize);
    
        Draw (new RectangleF (0, 0, newSize.Width, newSize.Height));
    
        var scaledImage = UIGraphics.GetImageFromCurrentImageContext();
        UIGraphics.EndImageContext();
    
        return scaledImage;            
    }
    

    The performance will be governed by the context features that you enable. For example, a higher-quality scaling would require changing the interpolation quality:

     var context = UIGraphics.GetCurrentContext ();
     context.InterpolationQuality = CGInterpolationQuality.High;
    

    The other option is to run your scaling not on the CPU, but on the GPU. To do that, you would use the CoreImage API and its CIAffineTransform filter.

    As to which one is faster, that is something left for someone else to benchmark:

    CGImage Scale (string file)
    {
        var ciimage = CIImage.FromCGImage (UIImage.FromFile (file).CGImage);

        // Create an AffineTransform that scales the image to half its size
        var transform = CGAffineTransform.MakeScale (0.5f, 0.5f);

        var affineTransform = new CIAffineTransform () {
            Image = ciimage,
            Transform = transform
        };
        var output = affineTransform.OutputImage;
        var context = CIContext.FromOptions (null);
        return context.CreateCGImage (output, output.Extent);
    }
    
  • 2021-01-31 19:48

    If either of the two is more efficient, it'll be the former: asking ImageIO for a thumbnail rather than drawing the image into a CGBitmapContext yourself.

    When you create a CGImageSource you get just what the name says: some sort of opaque thing from which an image can be obtained. In your case it will be a reference to a file on disk. When you then ask ImageIO to create a thumbnail, you explicitly tell it to "do as much as you need to output this many pixels".

    Conversely if you draw to a CGBitmapContext then at some point you explicitly bring the whole image into memory.

    So the second approach definitely has the whole image in memory at some point, while the former needn't (in practice there will no doubt be some guesswork within ImageIO as to the best way to proceed). So across all possible implementations of the OS, either the former will be advantageous or there will be no difference between the two. A sketch of the CGImageSource approach follows below.
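
    A minimal C sketch of the former approach (the helper name and the option set are mine, but the functions and keys are the standard ImageIO ones):

    #include <ImageIO/ImageIO.h>

    // Ask ImageIO for a thumbnail no larger than maxPixelSize on its
    // longest side, straight from the file, without ever materialising
    // the full-resolution bitmap ourselves.
    CGImageRef CreateThumbnail (CFURLRef fileURL, int maxPixelSize)
    {
        CGImageSourceRef source = CGImageSourceCreateWithURL (fileURL, NULL);
        if (source == NULL)
            return NULL;

        CFNumberRef maxSize = CFNumberCreate (NULL, kCFNumberIntType, &maxPixelSize);
        const void *keys[] = {
            kCGImageSourceCreateThumbnailFromImageAlways, // don't settle for a small embedded EXIF thumbnail
            kCGImageSourceCreateThumbnailWithTransform,   // honour EXIF orientation
            kCGImageSourceThumbnailMaxPixelSize,
        };
        const void *values[] = { kCFBooleanTrue, kCFBooleanTrue, maxSize };
        CFDictionaryRef options = CFDictionaryCreate (NULL, keys, values, 3,
                                      &kCFTypeDictionaryKeyCallBacks,
                                      &kCFTypeDictionaryValueCallBacks);

        CGImageRef thumbnail = CGImageSourceCreateThumbnailAtIndex (source, 0, options);

        CFRelease (options);
        CFRelease (maxSize);
        CFRelease (source);
        return thumbnail;   // caller releases
    }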
