Correct crop of CIGaussianBlur

北海茫月 2020-11-29 22:09

As I noticed, when CIGaussianBlur is applied to an image, the image's corners get blurred so that it looks smaller than the original. So I figured out that I need to crop it. But how do I crop the result correctly?

9 Answers
  • 2020-11-29 22:12

    I saw some of the solutions and wanted to recommend a more modern one, based on some of the ideas shared here:

    private lazy var coreImageContext = CIContext() // Re-use this.
    
    func blurredImage(image: CIImage, radius: CGFloat) -> CGImage? {
        let blurredImage = image
            .clampedToExtent()
            .applyingFilter(
                "CIGaussianBlur",
                parameters: [
                    kCIInputRadiusKey: radius,
                ]
            )
            .cropped(to: image.extent)
    
        return coreImageContext.createCGImage(blurredImage, from: blurredImage.extent)
    }
    

    If you need a UIImage afterward, you can of course get it like so:

    let image = UIImage(cgImage: cgImage)
    

    ... For those wondering, the reason for returning a CGImage is (as noted in the Apple documentation):

    Due to Core Image's coordinate system mismatch with UIKit, this filtering approach may yield unexpected results when displayed in a UIImageView with contentMode. Be sure to back it with a CGImage so that it handles contentMode properly.

    If you need a CIImage you could return that instead, but in that case, if you're displaying the image, you'd probably want to be careful.
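
    For example, to display the result (sourceImage and imageView below are assumed names, not part of the snippet above):

    if let ciImage = CIImage(image: sourceImage),
       let cgImage = blurredImage(image: ciImage, radius: 12) {
        imageView.image = UIImage(cgImage: cgImage)
    }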

  • 2020-11-29 22:15

    This works for me :)

    CIContext *context = [CIContext contextWithOptions:nil];
    CIImage *inputImage = [[CIImage alloc] initWithImage:image];
    CIFilter *blurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
    [blurFilter setDefaults];
    [blurFilter setValue:inputImage forKey:@"inputImage"];
    CGFloat blurLevel = 20.0f;          // blur radius
    [blurFilter setValue:@(blurLevel) forKey:@"inputRadius"];
    CIImage *outputImage = [blurFilter valueForKey:@"outputImage"];
    CGRect rect = inputImage.extent;    // start from the original extent
    rect.origin.x += blurLevel;         // inset the rect by the blur radius
    rect.origin.y += blurLevel;         // on every side, cropping away
    rect.size.height -= blurLevel*2.0f; // the soft, semi-transparent
    rect.size.width -= blurLevel*2.0f;  // edges
    CGImageRef cgImage = [context createCGImage:outputImage fromRect:rect];    // render only the inset rect
    imageView.image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    
  • 2020-11-29 22:17

    To get a nicely blurred version of an image with hard edges, first apply a CIAffineClamp to the source image to extend its edges outward, then make sure you use the input image's extent when generating the output image.

    The code is as follows:

    CIContext *context = [CIContext contextWithOptions:nil];
    
    UIImage *image = [UIImage imageNamed:@"Flower"];
    CIImage *inputImage = [[CIImage alloc] initWithImage:image];
    
    CIFilter *clampFilter = [CIFilter filterWithName:@"CIAffineClamp"];
    [clampFilter setDefaults];
    [clampFilter setValue:inputImage forKey:kCIInputImageKey];
    
    CIFilter *blurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
    [blurFilter setValue:clampFilter.outputImage forKey:kCIInputImageKey];
    [blurFilter setValue:@10.0f forKey:@"inputRadius"];
    
    CIImage *result = [blurFilter valueForKey:kCIOutputImageKey];
    
    CGImageRef cgImage = [context createCGImage:result fromRect:[inputImage extent]];
    UIImage *blurredImage = [[UIImage alloc] initWithCGImage:cgImage scale:image.scale orientation:UIImageOrientationUp];
    
    CGImageRelease(cgImage);
    

    Note this code was tested on iOS. It should be similar on OS X (substituting NSImage for UIImage).
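
    For reference, a minimal (untested) sketch of that OS X variant in Swift, using an illustrative blurredNSImage function name:

    import AppKit
    import CoreImage

    func blurredNSImage(_ image: NSImage, radius: Double) -> NSImage? {
        guard let tiff = image.tiffRepresentation,
              let input = CIImage(data: tiff) else { return nil }
        let blurred = input
            .clampedToExtent()         // built-in equivalent of the identity CIAffineClamp
            .applyingFilter("CIGaussianBlur",
                            parameters: [kCIInputRadiusKey: radius])
            .cropped(to: input.extent) // crop back to the original extent
        let context = CIContext()
        guard let cgImage = context.createCGImage(blurred, from: blurred.extent) else {
            return nil
        }
        return NSImage(cgImage: cgImage, size: image.size)
    }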

  • 2020-11-29 22:19

    Here is the Swift 5 version of blurring the image. Set the clamp filter to its defaults so you do not need to supply a transform.

    extension UIImage {
        func applyBlurEffect() -> UIImage? {
            let context = CIContext(options: nil)
            guard let imageToBlur = CIImage(image: self) else { return nil }

            let clampFilter = CIFilter(name: "CIAffineClamp")!
            clampFilter.setDefaults() // identity transform, so no inputTransform needed
            clampFilter.setValue(imageToBlur, forKey: kCIInputImageKey)

            // CIAffineClamp makes the extent infinite, which then confounds the context.
            // Save the pre-clamp extent and supply it when rendering.
            let inputImageExtent = imageToBlur.extent

            guard let blurFilter = CIFilter(name: "CIGaussianBlur") else {
                return nil
            }
            blurFilter.setValue(clampFilter.outputImage, forKey: kCIInputImageKey)
            blurFilter.setValue(10, forKey: kCIInputRadiusKey)

            guard let output = blurFilter.outputImage,
                  let cgimg = context.createCGImage(output, from: inputImageExtent) else {
                return nil
            }
            return UIImage(cgImage: cgimg)
        }
    }
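
    Usage is then simply (here photo is an assumed UIImage instance):

    let blurred = photo.applyBlurEffect()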
    
  • 2020-11-29 22:21

    There are two issues. The first is that the blur filter samples pixels outside the edges of the input image, and those pixels are transparent; that is where the transparent fringe comes from. The trick is to extend the edges before you apply the blur filter. This can be done with a clamp filter, e.g. like this:

    CIFilter *affineClampFilter = [CIFilter filterWithName:@"CIAffineClamp"];
    [affineClampFilter setValue:inputImage forKey:kCIInputImageKey];
    
    CGAffineTransform xform = CGAffineTransformMakeScale(1.0, 1.0); // identity transform
    [affineClampFilter setValue:[NSValue valueWithBytes:&xform
                                               objCType:@encode(CGAffineTransform)]
                         forKey:@"inputTransform"];
    
    

    This filter extends the edges infinitely and eliminates the transparency. The next step would be to apply the blur filter.
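
    In Swift, that next step might look like this (a sketch; inputImage is assumed to be the source CIImage, and the radius of 10 is arbitrary):

    // clampedToExtent() is the built-in equivalent of the identity CIAffineClamp above.
    let blurred = inputImage
        .clampedToExtent()
        .applyingFilter("CIGaussianBlur", parameters: [kCIInputRadiusKey: 10.0])
        .cropped(to: inputImage.extent) // back to a finite extent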

    The second issue is a bit weird. Some renderers produce a bigger output image for the blur filter and you must adapt the origin of the resulting CIImage by some offset e.g. like this:

    CGImageRef cgImage = [context createCGImage:outputImage
                                       fromRect:CGRectOffset([inputImage extent],
                                                             offset, offset)];
    

    The software renderer on my iPhone needs three times the blur radius as the offset. The hardware renderer on the same iPhone does not need any offset at all. Maybe you could deduce the offset from the size difference of the input and output images, but I did not try...
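
    If you want to try that, an untested sketch (in Swift, assuming the blur was applied without clamping so outputImage.extent stays finite):

    // Untested: half the growth of the extent per axis is the candidate offset.
    let offset = (outputImage.extent.width - inputImage.extent.width) / 2.0
    let rect = inputImage.extent.offsetBy(dx: offset, dy: offset)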

  • 2020-11-29 22:22

    Here is a Swift version:

    func applyBlurEffect(image: UIImage) -> UIImage? {
        let context = CIContext(options: nil)
        guard let imageToBlur = CIImage(image: image),
              let blurFilter = CIFilter(name: "CIGaussianBlur") else { return nil }
        blurFilter.setValue(imageToBlur, forKey: kCIInputImageKey)
        blurFilter.setValue(5.0, forKey: kCIInputRadiusKey)
        // Note: this renders the blurred image's own (grown) extent, so the
        // soft edges are kept rather than cropped away.
        guard let resultImage = blurFilter.outputImage,
              let cgImage = context.createCGImage(resultImage, from: resultImage.extent) else { return nil }
        return UIImage(cgImage: cgImage)
    }
    