Core Image filter CISourceOverCompositing not appearing as expected with alpha overlay


FOR BLACK-AND-WHITE TEXT

If you use the .normal compositing operation you will definitely get a different result than with .hardLight. Your picture shows the result of a .hardLight operation.

The .normal operation is the classic OVER op, with the formula: (Image1 * A1) + (Image2 * (1 – A1)).

Here the text is premultiplied (RGB * A), so in this particular case the RGB pattern depends on the opacity of A. The RGB of the text image can contain any color, including black. If A = 0 (black alpha) and RGB = 0 (black color) and your image is premultiplied, the whole image is totally transparent; if A = 1 (white alpha) and RGB = 0 (black color), the image is opaque black.

If your text has no alpha when you use the .normal operation, you'll effectively get an ADD op: Image1 + Image2.
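
As a minimal sketch, the OVER formula applied to a single color channel might look like this (a hypothetical helper, channel values in 0...1):

import CoreGraphics

// Sketch of the classic OVER operator for one color channel (values in 0...1).
// `text` and `background` are the channel values of Image1 and Image2, `textAlpha` is A1.
func overChannel(text: CGFloat, background: CGFloat, textAlpha: CGFloat) -> CGFloat {
    // (Image1 * A1) + (Image2 * (1 - A1))
    return text * textAlpha + background * (1 - textAlpha)
}
// With textAlpha = 0 only the background shows; with textAlpha = 1 only the text remains.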


To get what you want, you need to set the compositing operation to .hardLight.

The .hardLight compositing operation works as .multiply

if the alpha of the text image is less than 50 percent (A < 0.5, the image is almost transparent).

Formula for .multiply: Image1 * Image2


The .hardLight compositing operation works as .screen

if the alpha of the text image is greater than or equal to 50 percent (A >= 0.5, the image is semi-opaque).

Formula 1 for .screen: (Image1 + Image2) – (Image1 * Image2)

Formula 2 for .screen: 1 – (1 – Image1) * (1 – Image2)

The .screen operation has a much softer result than .plus, and it keeps alpha from exceeding 1 (the plus operation adds the alphas of Image1 and Image2, so you might end up with alpha = 2 if both images have alpha). The .screen compositing operation is good for making reflections.

func editImage() {

    print("Drawing image with \(selectedOpacity) alpha")

    let text = "hello world"
    let backgroundCGImage = #imageLiteral(resourceName: "background").cgImage!
    let backgroundImage = CIImage(cgImage: backgroundCGImage)
    let imageRect = backgroundImage.extent

    //set up transparent context and draw text on top
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let alphaInfo = CGImageAlphaInfo.premultipliedLast.rawValue

    let bitmapContext = CGContext(data: nil, width: Int(imageRect.width), height: Int(imageRect.height), bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace, bitmapInfo: alphaInfo)!
    bitmapContext.draw(backgroundCGImage, in: imageRect)

    bitmapContext.setAlpha(CGFloat(selectedOpacity))
    bitmapContext.setTextDrawingMode(.fill)

    //TRY THREE COMPOSITING OPERATIONS HERE 
    bitmapContext.setBlendMode(.hardLight)
    //bitmapContext.setBlendMode(.multiply)
    //bitmapContext.setBlendMode(.screen)

    //white text
    bitmapContext.textPosition = CGPoint(x: 15 * UIScreen.main.scale, y: (20 + 60) * UIScreen.main.scale)
    let displayLineTextWhite = CTLineCreateWithAttributedString(NSAttributedString(string: text, attributes: [.foregroundColor: UIColor.white, .font: UIFont.systemFont(ofSize: 58 * UIScreen.main.scale)]))
    CTLineDraw(displayLineTextWhite, bitmapContext)

    //black text
    bitmapContext.textPosition = CGPoint(x: 15 * UIScreen.main.scale, y: 20 * UIScreen.main.scale)
    let displayLineTextBlack = CTLineCreateWithAttributedString(NSAttributedString(string: text, attributes: [.foregroundColor: UIColor.black, .font: UIFont.systemFont(ofSize: 58 * UIScreen.main.scale)]))
    CTLineDraw(displayLineTextBlack, bitmapContext)

    let outputImage = bitmapContext.makeImage()!

    topImageView.image = UIImage(cgImage: outputImage)
}

So, to recreate this compositing operation, you need the following logic:

//rgb1 – text image 
//rgb2 - background
//a1   - alpha of text image

if a1 >= 0.5 { 
    //use this formula for compositing: 1–(1–rgb1)*(1–rgb2) 
} else { 
    //use this formula for compositing: rgb1*rgb2 
}
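
In Swift, that branching might be sketched per color channel like this (a rough sketch, values in 0...1):

import CoreGraphics

// Sketch of the branching above, applied to one color channel.
// rgb1/a1 come from the text image, rgb2 from the background.
func hardLightChannel(rgb1: CGFloat, rgb2: CGFloat, a1: CGFloat) -> CGFloat {
    if a1 >= 0.5 {
        // .screen: 1 - (1 - rgb1) * (1 - rgb2)
        return 1 - (1 - rgb1) * (1 - rgb2)
    } else {
        // .multiply: rgb1 * rgb2
        return rgb1 * rgb2
    }
}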

I recreated the image using the compositing app The Foundry NUKE 11. Offset=0.5 here is the same as Add=0.5.

I used the property Offset=0.5 because transparency=0.5 is the pivot point of the .hardLight compositing operation.

FOR COLOR TEXT

You need to use the .sourceAtop compositing operation in case you have ORANGE (or any other color) text in addition to B&W text. By applying the .sourceAtop case of the .setBlendMode method you make Core Graphics use the alpha of the background image to determine what to show. Alternatively, you can employ the CISourceAtopCompositing Core Image filter instead of CISourceOverCompositing.

bitmapContext.setBlendMode(.sourceAtop)

or

let compositingFilter = CIFilter(name: "CISourceAtopCompositing")

The .sourceAtop operation has the following formula: (Image1 * A2) + (Image2 * (1 – A1)). As you can see, you need two alpha channels: A1 is the alpha of the text and A2 is the alpha of the background image.

bitmapContext.textPosition = CGPoint(x: 15 * UIScreen.main.scale, y: (20 + 60) * UIScreen.main.scale)
let displayLineTextOrange = CTLineCreateWithAttributedString(NSAttributedString(string: text, attributes: [.foregroundColor: UIColor.orange, .font: UIFont.systemFont(ofSize: 58 * UIScreen.main.scale)]))
CTLineDraw(displayLineTextOrange, bitmapContext)
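
If you go the Core Image route instead, wiring CISourceAtopCompositing up might look roughly like this (a sketch; textCIImage and backgroundCIImage are assumed to be the text and background as CIImage values):

import CoreImage

// Sketch: text composited atop the background, so the text only shows where
// the background has alpha (A2 in the formula above).
let atopFilter = CIFilter(name: "CISourceAtopCompositing")!
atopFilter.setValue(textCIImage, forKey: kCIInputImageKey)
atopFilter.setValue(backgroundCIImage, forKey: kCIInputBackgroundImageKey)
let atopOutput = atopFilter.outputImage!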

After a lot of back and forth trying different things (thanks @andy and @Juraj Antas for pushing me in the right direction), I finally have the answer. Drawing into a Core Graphics context produces the correct appearance, but it is more costly to draw images that way. The problem seemed to lie with CISourceOverCompositing, but it actually comes from the fact that, by default, Core Image filters work in linear space whereas Core Graphics works in perceptual space, which explains the different results. You can, however, create a Core Graphics image from the Core Image filter using a Core Image context that performs no color management, thus matching the output of the Core Graphics approach. So the original code was just fine; it only had to output the image a bit differently.

let ciContext = CIContext(options: [kCIContextWorkingColorSpace: NSNull()])
let outputImage = ciContext.createCGImage(outputCIImage, from: outputCIImage.extent) 
//this image appears as expected
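
For context, outputCIImage above would come from the original filter chain, roughly like this (a sketch; textCIImage is assumed to be the text rendered into a CIImage, and backgroundImage is the background CIImage from earlier):

import CoreImage

// Sketch: the CISourceOverCompositing chain that produces outputCIImage.
let overFilter = CIFilter(name: "CISourceOverCompositing")!
overFilter.setValue(textCIImage, forKey: kCIInputImageKey)               // text on top
overFilter.setValue(backgroundImage, forKey: kCIInputBackgroundImageKey) // background underneath
let outputCIImage = overFilter.outputImage!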

Final answer: the formula in CISourceOverCompositing is a good one. It is the right thing to do.

BUT

It is working in the wrong color space. In graphics programs you most likely have an sRGB color space. On iOS the Generic RGB color space is used. This is why the results don't match.

Using a custom CIFilter I recreated the CISourceOverCompositing filter.
s1 is the text image.
s2 is the background image.

The kernel for it is this:

kernel vec4 opacity(__sample s1, __sample s2) {
    vec3 text = s1.rgb;        // premultiplied text color
    float textAlpha = s1.a;
    vec3 background = s2.rgb;

    // premultiplied source-over: background * (1 - A1) + text
    vec3 res = background * (1.0 - textAlpha) + text;
    return vec4(res, 1.0);
}
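
A sketch of how that kernel might be wrapped and applied using the CIKernel-language string API (textCIImage and backgroundCIImage are assumed CIImage inputs; sourceOverKernel is just an illustrative name):

import CoreImage

// Sketch: wrap the kernel source in a CIColorKernel and apply it.
let kernelString = """
kernel vec4 opacity(__sample s1, __sample s2) {
    vec3 text = s1.rgb;
    float textAlpha = s1.a;
    vec3 background = s2.rgb;
    vec3 res = background * (1.0 - textAlpha) + text;
    return vec4(res, 1.0);
}
"""
let sourceOverKernel = CIColorKernel(source: kernelString)!
// s1 = text image, s2 = background image, in argument order.
let recreated = sourceOverKernel.apply(extent: backgroundCIImage.extent,
                                       arguments: [textCIImage, backgroundCIImage])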

So to fix this color 'issue' you must convert the text image from Generic RGB to sRGB. I guess your next question will be how to do that ;)
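
One way that conversion might be sketched is with CIImage's color-matching helpers (which direction you match depends on how your text image is actually tagged):

import CoreImage
import CoreGraphics

// Sketch: match the text image from the working space into sRGB before compositing.
// (matchedToWorkingSpace(from:) goes the other way, if the image is already tagged as sRGB.)
let sRGB = CGColorSpace(name: CGColorSpace.sRGB)!
let matchedText = textCIImage.matchedFromWorkingSpace(to: sRGB)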

Important: iOS does not support device-independent or generic color spaces. iOS applications must use device color spaces instead. (See the Apple doc about color spaces.)
